Sample records for simple theoretical analysis

  1. A theoretical analysis of steady-state photocurrents in simple silicon diodes

    NASA Technical Reports Server (NTRS)

    Edmonds, L.

    1995-01-01

    A theoretical analysis solves for the steady-state photocurrents produced by a given photo-generation rate function with negligible recombination in simple silicon diodes, consisting of a uniformly doped quasi-neutral region (called 'substrate' below) adjacent to a p-n junction depletion region (DR). Special attention is given to conditions that produce 'funneling' (a term used by the single-event-effects community) under steady-state conditions. Funneling occurs when carriers are generated so fast that the DR becomes flooded and partially or completely collapses. Some or nearly all of the applied voltage, plus the built-in potential normally across the DR, is then across the substrate. This substrate voltage drop affects substrate currents. The steady-state problem can provide some qualitative insights into the more difficult transient problem. First, it was found that funneling can be induced from a distance, i.e., from carriers generated at locations outside of the DR. Second, it was found that the substrate can divide into two subregions, with one controlling substrate resistance and the other characterized by ambipolar diffusion. Finally, funneling was found to be more difficult to induce in the p(sup +)/n diode than in the n(sup +)/p diode. The carrier density exceeding the doping density in the substrate and at the DR boundary is not a sufficient condition to collapse a DR.

  2. A simple theoretical model for ⁶³Ni betavoltaic battery.

    PubMed

    Zuo, Guoping; Zhou, Jianliang; Ke, Guotu

    2013-12-01

    A numerical simulation of the energy deposition distribution in semiconductors is performed for ⁶³Ni beta particles. Results show that the energy deposition distribution follows an approximately exponential decay law. A simple theoretical model for a ⁶³Ni betavoltaic battery is developed based on this distribution characteristic. The model is validated against two experiments from the literature: the theoretical short-circuit current agrees well with the experimental results, while the open-circuit voltage deviates from them owing to the influence of PN-junction defects and the simplified treatment of the source. The theoretical model can be applied to ⁶³Ni and ¹⁴⁷Pm betavoltaic batteries. Copyright © 2013 Elsevier Ltd. All rights reserved.
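The exponential decay law reported for the deposited energy can be illustrated with a short sketch: if energy deposition per unit depth falls off as exp(-x/λ), the fraction absorbed within the first depth d is 1 − exp(−d/λ). The decay length and depths below are illustrative assumptions, not values from the paper.

```python
import math

def absorbed_fraction(depth_um: float, decay_length_um: float) -> float:
    """Fraction of beta energy deposited within depth_um, assuming the
    deposition rate decays exponentially with depth (the approximate law
    the paper reports for 63Ni betas in a semiconductor)."""
    return 1.0 - math.exp(-depth_um / decay_length_um)

# Illustrative decay length (hypothetical value, not from the paper):
lam = 5.0  # micrometres
for d in (2.0, 5.0, 15.0):
    print(f"{d:5.1f} um -> {absorbed_fraction(d, lam):.3f}")
```

A useful consequence of this form is that a junction need only be a few decay lengths deep to collect nearly all of the generated carriers.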

  3. Theoretical Analysis of a Pulse Tube Regenerator

    NASA Technical Reports Server (NTRS)

    Roach, Pat R.; Kashani, Ali; Lee, J. M.; Cheng, Pearl L. (Technical Monitor)

    1995-01-01

    A theoretical analysis of the behavior of a typical pulse tube regenerator has been carried out. Assuming simple sinusoidal oscillations, the static and oscillatory pressures, velocities and temperatures have been determined for a model that includes a compressible gas and imperfect thermal contact between the gas and the regenerator matrix. For realistic material parameters, the analysis reveals that the pressure and velocity oscillations are largely independent of the details of the thermal contact between the gas and the solid matrix. Only the temperature oscillations depend on this contact. Suggestions for optimizing the design of a regenerator are given.

  4. Theoretical Analysis of Local Search and Simple Evolutionary Algorithms for the Generalized Travelling Salesperson Problem.

    PubMed

    Pourhassan, Mojgan; Neumann, Frank

    2018-06-22

    The generalized travelling salesperson problem is an important NP-hard combinatorial optimization problem for which meta-heuristics, such as local search and evolutionary algorithms, have been used very successfully. Two hierarchical approaches with different neighbourhood structures, namely a Cluster-Based approach and a Node-Based approach, have been proposed by Hu and Raidl (2008) for solving this problem. In this paper, local search algorithms and simple evolutionary algorithms based on these approaches are investigated from a theoretical perspective. For local search algorithms, we point out the complementary abilities of the two approaches by presenting instances where they mutually outperform each other. Afterwards, we introduce an instance which is hard for both approaches when initialized on a particular point of the search space, but where a variable neighbourhood search combining them finds the optimal solution in polynomial time. Then we turn our attention to analysing the behaviour of simple evolutionary algorithms that use these approaches. We show that the Node-Based approach solves the hard instance of the Cluster-Based approach presented in Corus et al. (2016) in polynomial time. Furthermore, we prove an exponential lower bound on the optimization time of the Node-Based approach for a class of Euclidean instances.

  5. Simple control-theoretic models of human steering activity in visually guided vehicle control

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.

    1991-01-01

    A simple control theoretic model of human steering or control activity in the lateral-directional control of vehicles such as automobiles and rotorcraft is discussed. The term 'control theoretic' is used to emphasize the fact that the model is derived from a consideration of well-known control system design principles as opposed to psychological theories regarding egomotion, etc. The model is employed to emphasize the 'closed-loop' nature of tasks involving the visually guided control of vehicles upon, or in close proximity to, the earth and to hypothesize how changes in vehicle dynamics can significantly alter the nature of the visual cues which a human might use in such tasks.

  6. Information-Theoretical Complexity Analysis of Selected Elementary Chemical Reactions

    NASA Astrophysics Data System (ADS)

    Molina-Espíritu, M.; Esquivel, R. O.; Dehesa, J. S.

    We investigate the complexity of selected elementary chemical reactions (namely, the hydrogenic-abstraction reaction and the identity SN2 exchange reaction) by means of the following single and composite information-theoretic measures: disequilibrium (D), exponential entropy (L), Fisher information (I), power entropy (J), I-D, D-L and I-J planes, and Fisher-Shannon (FS) and Lopez-Mancini-Calbet (LMC) shape complexities. These quantities, which are functionals of the one-particle density, are computed in both position (r) and momentum (p) spaces. The analysis revealed that the chemically significant regions of these reactions can be identified through most of the single information-theoretic measures and the two-component planes: not only the ones commonly revealed by the energy, such as the reactant/product (R/P) and the transition state (TS), but also those not present in the energy profile, such as the bond cleavage energy region (BCER), the bond breaking/forming regions (B-B/F) and the charge transfer process (CT). The analysis of the complexities shows that the energy profile of the abstraction reaction bears the same information-theoretical features as the LMC and FS measures, whereas the identity SN2 exchange reaction does not show a simple behavior with respect to the LMC and FS measures. Most of the chemical features of interest (BCER, B-B/F and CT) are only revealed when particular information-theoretic aspects of localizability (L or J), uniformity (D) and disorder (I) are considered.
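The composite measures mentioned combine a disorder-type quantity with disequilibrium. For a discrete probability distribution, the LMC-type shape complexity can be written C = e^S · D, with Shannon entropy S and disequilibrium D = Σ p². A minimal sketch, using small discrete distributions as stand-ins for the one-particle densities in the abstract:

```python
import math

def shannon_entropy(p):
    """Shannon entropy S = -sum p ln p of a discrete distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)

def disequilibrium(p):
    """D = sum p^2: distance from the uniform distribution."""
    return sum(x * x for x in p)

def lmc_shape_complexity(p):
    """C = e^S * D (exponential entropy times disequilibrium).
    Equals 1 for both the uniform and the fully concentrated case;
    larger values signal structure between those extremes."""
    return math.exp(shannon_entropy(p)) * disequilibrium(p)

uniform = [0.25] * 4
peaked = [0.85, 0.05, 0.05, 0.05]
print(round(lmc_shape_complexity(uniform), 3))  # uniform gives exactly 1.0
print(round(lmc_shape_complexity(peaked), 3))
```

The uniform case illustrates why C is called a shape complexity: maximal disorder (large e^S) and minimal disequilibrium cancel exactly.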

  7. A simple white noise analysis of neuronal light responses.

    PubMed

    Chichilnisky, E J

    2001-05-01

    A white noise technique is presented for estimating the response properties of spiking visual system neurons. The technique is simple, robust, efficient and well suited to simultaneous recordings from multiple neurons. It provides a complete and easily interpretable model of light responses even for neurons that display a common form of response nonlinearity that precludes classical linear systems analysis. A theoretical justification of the technique is presented that relies only on elementary linear algebra and statistics. Implementation is described with examples. The technique and the underlying model of neural responses are validated using recordings from retinal ganglion cells, and in principle are applicable to other neurons. Advantages and disadvantages of the technique relative to classical approaches are discussed.
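The white-noise technique described here centres on the spike-triggered average (STA): for a Gaussian white-noise stimulus, averaging the stimulus segments that preceded each spike recovers the neuron's linear filter up to a scale factor, even through a rectifying output nonlinearity. A minimal sketch with a simulated linear-nonlinear spiking model (the filter, nonlinearity, and all parameters are illustrative, not from the paper):

```python
import random
random.seed(0)

# Hypothetical linear filter (10 time steps) of a simulated neuron.
true_filter = [0.0, 0.1, 0.3, 0.6, 1.0, 0.6, 0.2, -0.1, -0.2, -0.1]
n = len(true_filter)

# Gaussian white-noise stimulus.
T = 100_000
stim = [random.gauss(0.0, 1.0) for _ in range(T)]

# Linear-nonlinear model: filter the stimulus, then spike with a
# probability given by a rectified (and clipped) generator signal.
spikes = []
for t in range(n, T):
    g = sum(true_filter[i] * stim[t - n + i] for i in range(n))
    p = min(1.0, 0.05 * max(0.0, g))
    if random.random() < p:
        spikes.append(t)

# Spike-triggered average: mean stimulus segment preceding each spike.
sta = [0.0] * n
for t in spikes:
    for i in range(n):
        sta[i] += stim[t - n + i]
sta = [s / len(spikes) for s in sta]

# The STA should be proportional to the true filter.
print("spikes:", len(spikes))
print("STA peak index:", max(range(n), key=lambda i: sta[i]))
```

The proportionality between the STA and the underlying filter for Gaussian stimuli is the "theoretical justification ... relying only on elementary linear algebra and statistics" that the abstract refers to.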

  8. Pure shear and simple shear calcite textures. Comparison of experimental, theoretical and natural data

    USGS Publications Warehouse

    Wenk, H.-R.; Takeshita, T.; Bechler, E.; Erskine, B.G.; Matthies, S.

    1987-01-01

    The pattern of lattice preferred orientation (texture) in deformed rocks is an expression of the strain path and the acting deformation mechanisms. A first indication about the strain path is given by the symmetry of pole figures: coaxial deformation produces orthorhombic pole figures, while non-coaxial deformation yields monoclinic or triclinic pole figures. More quantitative information about the strain history can be obtained by comparing natural textures with experimental ones and with theoretical models. For this comparison, a representation in the sensitive three-dimensional orientation distribution space is extremely important and efforts are made to explain this concept. We have been investigating differences between pure shear and simple shear deformation in carbonate rocks and have found considerable agreement between textures produced in plane strain experiments and predictions based on the Taylor model. We were able to simulate the observed changes with strain history (coaxial vs non-coaxial) and the profound texture transition which occurs with increasing temperature. Two natural calcite textures were then selected which we interpreted by comparing them with the experimental and theoretical results. A marble from the Santa Rosa mylonite zone in southern California displays orthorhombic pole figures with patterns consistent with low temperature deformation in pure shear. A limestone from the Tanque Verde detachment fault in Arizona has a monoclinic fabric from which we can interpret that 60% of the deformation occurred by simple shear. © 1987.

  9. Information theoretic analysis of canny edge detection in visual communication

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Rahman, Zia-ur

    2011-06-01

    In general edge detection evaluation, edge detectors are examined, analyzed, and compared either visually or with a metric for a specific application. This analysis is usually independent of the characteristics of the image-gathering, transmission and display processes that impact the quality of the acquired image and thus the resulting edge image. We propose a new information-theoretic analysis of edge detection that unites the different components of the visual communication channel and assesses edge detection algorithms in an integrated manner based on Shannon's information theory. An edge detection algorithm is considered to achieve high performance only if the information rate from the scene to the edge image approaches the maximum possible. Thus, by holding the initial conditions of the visual communication system constant, different edge detection algorithms can be evaluated. This analysis is normally limited to linear shift-invariant filters, so in order to examine the Canny edge operator in our proposed system, we need to estimate its "power spectral density" (PSD). Since the Canny operator is non-linear and shift-variant, we perform the estimation for a set of different system environment conditions using simulations. In this paper we first introduce the PSD of the Canny operator for a range of system parameters. Then, using the estimated PSD, we assess the Canny operator using information-theoretic analysis. The information-theoretic metric is also used to compare the performance of the Canny operator with other edge-detection operators. This also provides a simple tool for selecting appropriate edge-detection algorithms based on system parameters, and for adjusting their parameters to maximize information throughput.

  10. Simple Numerical Analysis of Longboard Speedometer Data

    ERIC Educational Resources Information Center

    Hare, Jonathan

    2013-01-01

    Simple numerical data analysis is described, using a standard spreadsheet program, to determine distance, velocity (speed) and acceleration from voltage data generated by a skateboard/longboard speedometer (Hare 2012 "Phys. Educ." 47 409-17). This simple analysis is an introduction to data processing including scaling data as well as…

  11. Theoretical and Experimental Investigation of Random Gust Loads, Part I: Aerodynamic Transfer Function of a Simple Wing Configuration in Incompressible Flow

    NASA Technical Reports Server (NTRS)

    Hakkinen, Raimo J.; Richardson, A. S., Jr.

    1957-01-01

    Sinusoidally oscillating downwash and lift produced on a simple rigid airfoil were measured and compared with calculated values. Statistically stationary random downwash and the corresponding lift on a simple rigid airfoil were also measured and the transfer functions between their power spectra determined. The random experimental values are compared with theoretically approximated values. Limitations of the experimental technique and the need for more extensive experimental data are discussed.

  12. Theoretical analysis of Lumry-Eyring models in differential scanning calorimetry

    PubMed Central

    Sanchez-Ruiz, Jose M.

    1992-01-01

    A theoretical analysis of several protein denaturation models (Lumry-Eyring models) that include a rate-limited step leading to an irreversibly denatured state of the protein (the final state) has been carried out. The differential scanning calorimetry transitions predicted for these models can be broadly classified into four groups: situations A, B, C, and C′. (A) The transition is calorimetrically irreversible but the rate-limited, irreversible step takes place with significant rate only at temperatures slightly above those corresponding to the transition. Equilibrium thermodynamics analysis is permissible. (B) The transition is distorted by the occurrence of the rate-limited step; nevertheless, it contains thermodynamic information about the reversible unfolding of the protein, which could be obtained upon the appropriate data treatment. (C) The heat absorption is entirely determined by the kinetics of formation of the final state and no thermodynamic information can be extracted from the calorimetric transition; the rate-determining step is the irreversible process itself. (C′) Same as C, but in this case the rate-determining step is a previous step in the unfolding pathway. It is shown that ligand and protein concentration effects on transitions corresponding to situation C (strongly rate-limited transitions) are similar to those predicted by equilibrium thermodynamics for simple reversible unfolding models. It has been widely held in recent literature that experimentally observed ligand and protein concentration effects support the applicability of equilibrium thermodynamics to irreversible protein denaturation. The theoretical analysis reported here disfavors this claim. PMID:19431826

  13. Isolation of exosomes by differential centrifugation: Theoretical analysis of a commonly used protocol

    NASA Astrophysics Data System (ADS)

    Livshts, Mikhail A.; Khomyakova, Elena; Evtushenko, Evgeniy G.; Lazarev, Vassili N.; Kulemin, Nikolay A.; Semina, Svetlana E.; Generozov, Edward V.; Govorun, Vadim M.

    2015-11-01

    Exosomes, small (40-100 nm) extracellular membranous vesicles, attract enormous research interest because they are carriers of disease markers and a prospective delivery system for therapeutic agents. Differential centrifugation, the prevalent method of exosome isolation, frequently produces dissimilar and improper results because of the faulty practice of using a common centrifugation protocol with different rotors. Moreover, as recommended by suppliers, adjusting the centrifugation duration according to rotor K-factors does not work for “fixed-angle” rotors. For both types of rotors - “swinging bucket” and “fixed-angle” - we express the theoretically expected proportion of pelleted vesicles of a given size and the “cut-off” size of completely sedimented vesicles as dependent on the centrifugation force and duration and the sedimentation path-lengths. The proper centrifugation conditions can be selected using relatively simple theoretical estimates of the “cut-off” sizes of vesicles. Experimental verification on exosomes isolated from HT29 cell culture supernatant confirmed the main theoretical statements. Measured by the nanoparticle tracking analysis (NTA) technique, the concentration and size distribution of the vesicles after centrifugation agree with those theoretically expected. To simplify this “cut-off”-size-based adjustment of centrifugation protocol for any rotor, we developed a web-calculator.
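The "cut-off" size the authors describe follows from Stokes sedimentation in a spinning rotor: a particle at radius r moves outward at dr/dt = s ω² r, with sedimentation coefficient s = d²(ρp − ρf)/(18η). Integrating over the sedimentation path from r_min to r_max gives the smallest diameter that fully pellets in run time t: d = sqrt(18 η ln(r_max/r_min) / ((ρp − ρf) ω² t)). A minimal sketch of that estimate (the rotor geometry, density difference, and viscosity defaults are illustrative assumptions, not the paper's web-calculator):

```python
import math

def cutoff_diameter_nm(rpm, t_hours, r_min_cm, r_max_cm,
                       d_rho_kg_m3=50.0, eta_pa_s=1.0e-3):
    """Smallest vesicle diameter fully pelleted in a fixed-angle rotor,
    from Stokes sedimentation dr/dt = s*w^2*r integrated over the
    sedimentation path r_min..r_max (only the radius ratio matters,
    so the cm units cancel)."""
    w = 2.0 * math.pi * rpm / 60.0            # angular velocity, rad/s
    t = t_hours * 3600.0                      # run time, s
    ln_ratio = math.log(r_max_cm / r_min_cm)  # path-length term
    d_m = math.sqrt(18.0 * eta_pa_s * ln_ratio / (d_rho_kg_m3 * w**2 * t))
    return d_m * 1e9

# Illustrative protocol with a hypothetical rotor geometry:
print(round(cutoff_diameter_nm(rpm=100_000, t_hours=2.0,
                               r_min_cm=4.0, r_max_cm=8.0), 1), "nm")
```

The logarithmic path term is why, as the abstract notes, simply matching K-factors between rotors with different geometries does not yield equivalent pelleting for fixed-angle rotors.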

  14. Analysis of Implicit Uncertain Systems. Part 1: Theoretical Framework

    DTIC Science & Technology

    1994-12-07

    Analysis of Implicit Uncertain Systems, Part I: Theoretical Framework. Fernando Paganini, John Doyle. December 7, 1994. Abstract: This paper ... model and a number of constraints relevant to the analysis problem under consideration. In Part I of this paper we propose a theoretical framework which ...

  15. Simple yet Hidden Counterexamples in Undergraduate Real Analysis

    ERIC Educational Resources Information Center

    Shipman, Barbara A.; Shipman, Patrick D.

    2013-01-01

    We study situations in introductory analysis in which students affirmed false statements as true, despite simple counterexamples that they easily recognized afterwards. The study draws attention to how simple counterexamples can become hidden in plain sight, even in an active learning atmosphere where students proposed simple (as well as more…

  16. A Simple Plant Growth Analysis.

    ERIC Educational Resources Information Center

    Oxlade, E.

    1985-01-01

    Describes the analysis of dandelion peduncle growth based on peduncle length, epidermal cell dimensions, and fresh/dry mass. Methods are simple and require no special apparatus or materials. Suggests that limited practical work in this area may contribute to students' lack of knowledge on plant growth. (Author/DH)

  17. Composition and Comprehension of Simple Texts. Final Report.

    ERIC Educational Resources Information Center

    Olson, Gary M.

    This report describes research that focused on the comprehension and composition of simple texts. The first section reviews the overall goals and theoretical perspectives of the project. The second section describes the following studies carried out during the project: analysis and extension of prior thinking-out-loud (TOL) data, TOL and reading…

  18. Physical Violence between Siblings: A Theoretical and Empirical Analysis

    ERIC Educational Resources Information Center

    Hoffman, Kristi L.; Kiecolt, K. Jill; Edwards, John N.

    2005-01-01

    This study develops and tests a theoretical model to explain sibling violence based on the feminist, conflict, and social learning theoretical perspectives and research in psychology and sociology. A multivariate analysis of data from 651 young adults generally supports hypotheses from all three theoretical perspectives. Males with brothers have…

  19. Fourier Spectroscopy: A Simple Analysis Technique

    ERIC Educational Resources Information Center

    Oelfke, William C.

    1975-01-01

    Presents a simple method of analysis in which the student can integrate, point by point, any interferogram to obtain its Fourier transform. The manual technique requires no special equipment and is based on relationships that most undergraduate physics students can derive from the Fourier integral equations. (Author/MLH)

  20. [The properties of simple medicines according to Avicenna (980-1037): analysis of some sections of the Canon].

    PubMed

    Ayari-Lassueur, Sylvie

    2012-01-01

    Avicenna spoke on pharmacology in several works, and this article considers his discussions in the Canon, a vast synthesis of the greco-arabian medicine of his time. More precisely, it focuses on book II, which treats simple medicines. This text makes evident that the Persian physician's central preoccupation was the efficacy of the treatment, since it concentrates on the properties of medicines. In this context, the article examines their different classifications and related topics, such as the notion of temperament, central to Avicenna's thought, and the concrete effects medicines have on the body. Yet, these theoretical notions only have sense in practical application. For Avicenna, medicine is both a theoretical and a practical science. For this reason, the second book of the Canon ends with an imposing pharmacopoeia, where the properties described theoretically at the beginning of the book appear in the list of simple medicines, so that the physician can select them according to the intended treatment's goals. The article analyzes a plant from this pharmacopoeia as an example of this practical application, making evident the logic Avicenna uses in detailing the different properties of each simple medicine.

  1. Game theoretic analysis of physical protection system design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canion, B.; Schneider, E.; Bickel, E.

    The physical protection system (PPS) of a fictional small modular reactor (SMR) facility has been modeled as a platform for a game-theoretic approach to security decision analysis. To demonstrate the game-theoretic approach, a rational adversary with complete knowledge of the facility has been modeled attempting a sabotage attack. The adversary adjusts his decisions in response to investments made by the defender to enhance the security measures. This can lead to a conservative physical protection system design. Since defender upgrades are limited by a budget, cost-benefit analysis may be conducted on security upgrades. One approach to cost-benefit analysis is the efficient frontier, which depicts the reduction in expected consequence per incremental increase in the security budget.
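The efficient frontier mentioned above can be computed by filtering candidate upgrade portfolios down to those that are not dominated, i.e., for which no other option is both cheaper and lower in expected consequence. A minimal sketch with hypothetical (cost, consequence) pairs (none of these numbers come from the study):

```python
def efficient_frontier(options):
    """Return the (cost, expected consequence) options on the efficient
    frontier: keep an option only if it strictly lowers the expected
    consequence relative to every cheaper option."""
    frontier = []
    best = float("inf")
    for cost, consequence in sorted(options):
        if consequence < best:
            frontier.append((cost, consequence))
            best = consequence
    return frontier

# Hypothetical security-upgrade portfolios: (budget in $k, expected consequence)
portfolios = [(0, 9.0), (100, 6.5), (150, 7.0), (200, 4.0),
              (250, 4.5), (300, 2.5)]
print(efficient_frontier(portfolios))
```

Dominated options such as the (150, 7.0) portfolio drop out: it costs more than the (100, 6.5) option yet yields a worse expected consequence.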

  2. Tube Bulge Process : Theoretical Analysis and Finite Element Simulations

    NASA Astrophysics Data System (ADS)

    Velasco, Raphael; Boudeau, Nathalie

    2007-05-01

    This paper is focused on the determination of the mechanical characteristics of tubular materials using the tube bulge process. A comparative study is made between two different models: a theoretical model and a finite element analysis. The theoretical model is fully developed, based first on a geometrical analysis of the tube profile during bulging, which is assumed to deform in arcs of circles. Strain and stress analyses complete the theoretical model, which allows the tube thickness and state of stress to be evaluated at any point of the free bulge region. Free bulging of a 304L stainless steel is simulated using Ls-Dyna 970. To validate the FE simulation approach, a comparison between the theoretical and finite element models is carried out on several parameters, such as: thickness variation at the pole of the free bulge region with bulge height, tube thickness variation with the z axial coordinate, and von Mises stress variation with plastic strain. Finally, the influence of deviations in the geometrical parameters on the flow stress curve is examined using the analytical model: deviations of the tube outer diameter, its initial thickness and the bulge height measurement are taken into account to obtain the resulting error in plastic strain and von Mises stress.

  3. The analysis of non-linear dynamic behavior (including snap-through) of postbuckled plates by simple analytical solution

    NASA Technical Reports Server (NTRS)

    Ng, C. F.

    1988-01-01

    Static postbuckling and nonlinear dynamic analysis of plates are usually accomplished by multimode analyses, although the methods are complicated and do not give straightforward understanding of the nonlinear behavior. Assuming single-mode transverse displacement, a simple formula is derived for the transverse load displacement relationship of a plate under in-plane compression. The formula is used to derive a simple analytical expression for the static postbuckling displacement and nonlinear dynamic responses of postbuckled plates under sinusoidal or random excitation. Regions with softening and hardening spring behavior are identified. Also, the highly nonlinear motion of snap-through and its effects on the overall dynamic response can be easily interpreted using the single-mode formula. Theoretical results are compared with experimental results obtained using a buckled aluminum panel, using discrete frequency and broadband point excitation. Some important effects of the snap-through motion on the dynamic response of the postbuckled plates are found.

  4. On the Correct Analysis of the Foundations of Theoretical Physics

    NASA Astrophysics Data System (ADS)

    Kalanov, Temur Z.

    2007-04-01

    The problem of truth in science -- the most urgent problem of our time -- is discussed. A correct theoretical analysis of the foundations of theoretical physics is proposed. The principle of the unity of formal logic and rational dialectics is the methodological basis of the analysis. The main result is as follows: the generally accepted foundations of theoretical physics (i.e. Newtonian mechanics, Maxwell electrodynamics, thermodynamics, statistical physics and physical kinetics, the theory of relativity, quantum mechanics) contain a set of logical errors. These errors are explained by the existence of a global cause: they are a collateral and inevitable result of the inductive way of cognizing Nature, i.e. of the movement from the formation of separate concepts to the formation of a system of concepts. Consequently, theoretical physics has entered a profound crisis. It means that physics as a science of phenomena is giving way to a science of essence (information). Acknowledgment: The books ``Surprises in Theoretical Physics'' (1979) and ``More Surprises in Theoretical Physics'' (1991) by Sir Rudolf Peierls stimulated my 25-year work.

  5. Monosodium glutamate for simple photometric iron analysis

    NASA Astrophysics Data System (ADS)

    Prasetyo, E.

    2018-01-01

    A simple photometric method for iron analysis using monosodium glutamate (MSG) was proposed. The method can serve as an alternative that is technically simple, economical, quantitative, readily available, scientifically sound and environmentally friendly. The rapid reaction of iron(III) with glutamate in a sodium chloride-hydrochloric acid buffer (pH 2) to form a red-brown complex serves as the basis of the photometric determination, which is valid over the iron(III) concentration range 1.6 - 80 µg/ml. The method was applied to determine the iron concentration in soil, with satisfactory accuracy and precision compared to other photometric and atomic absorption spectrometry results.
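A photometric determination of this kind rests on a linear calibration between absorbance and concentration over the stated range, fitted by least squares and then inverted to read sample concentrations. A minimal sketch; the standards and absorbance readings below are hypothetical, not data from the paper:

```python
# Hypothetical calibration data: iron(III) standards (ug/ml) vs absorbance.
conc = [1.6, 10.0, 20.0, 40.0, 80.0]
absb = [0.021, 0.130, 0.262, 0.518, 1.041]

# Ordinary least-squares fit of absorbance = slope * conc + intercept.
n = len(conc)
mx = sum(conc) / n
my = sum(absb) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(conc, absb))
         / sum((x - mx) ** 2 for x in conc))
intercept = my - slope * mx

def concentration(absorbance):
    """Invert the calibration line to read a sample concentration."""
    return (absorbance - intercept) / slope

print(f"slope={slope:.5f}, intercept={intercept:.5f}")
print(f"sample A=0.400 -> {concentration(0.400):.1f} ug/ml")
```

In practice the intercept should be close to zero for a well-blanked measurement; a large intercept would signal a reagent or matrix problem.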

  6. Simple theoretical models for composite rotor blades

    NASA Technical Reports Server (NTRS)

    Valisetty, R. R.; Rehfield, L. W.

    1984-01-01

    The development of theoretical rotor blade structural models for designs based upon composite construction is discussed. Care was exercised to include a number of nonclassical effects that previous experience indicated would be potentially important to account for. A model representative of the size of a main rotor blade is analyzed in order to assess the importance of various influences. The findings of this model study suggest that, for the slenderness and closed-cell construction considered, the refinements are of little importance and a classical-type theory is adequate. The potential of elastic tailoring is dramatically demonstrated, so the generality of arbitrary ply layup in the cell wall is needed to exploit this opportunity.

  7. Interactive 3D visualization for theoretical virtual observatories

    NASA Astrophysics Data System (ADS)

    Dykes, T.; Hassan, A.; Gheller, C.; Croton, D.; Krokos, M.

    2018-06-01

    Virtual observatories (VOs) are online hubs of scientific knowledge. They encompass a collection of platforms dedicated to the storage and dissemination of astronomical data, from simple data archives to e-research platforms offering advanced tools for data exploration and analysis. Whilst the more mature platforms within VOs primarily serve the observational community, there are also services fulfilling a similar role for theoretical data. Scientific visualization can be an effective tool for analysis and exploration of data sets made accessible through web platforms for theoretical data, which often contain spatial dimensions and properties inherently suitable for visualization via e.g. mock imaging in 2D or volume rendering in 3D. We analyse the current state of 3D visualization for big theoretical astronomical data sets through scientific web portals and virtual observatory services. We discuss some of the challenges for interactive 3D visualization and how it can augment the workflow of users in a virtual observatory context. Finally we showcase a lightweight client-server visualization tool for particle-based data sets, allowing quantitative visualization via data filtering, highlighting two example use cases within the Theoretical Astrophysical Observatory.

  8. Physical context for theoretical approaches to sediment transport magnitude-frequency analysis in alluvial channels

    NASA Astrophysics Data System (ADS)

    Sholtes, Joel; Werbylo, Kevin; Bledsoe, Brian

    2014-10-01

    Theoretical approaches to magnitude-frequency analysis (MFA) of sediment transport in channels couple continuous flow probability density functions (PDFs) with power law flow-sediment transport relations (rating curves) to produce closed-form equations relating MFA metrics such as the effective discharge, Qeff, and fraction of sediment transported by discharges greater than Qeff, f+, to statistical moments of the flow PDF and rating curve parameters. These approaches have proven useful in understanding the theoretical drivers behind the magnitude and frequency of sediment transport. However, some of their basic assumptions and findings may not apply to natural rivers and streams with more complex flow-sediment transport relationships or management and design scenarios, which have finite time horizons. We use simple numerical experiments to test the validity of theoretical MFA approaches in predicting the magnitude and frequency of sediment transport. Median values of Qeff and f+ generated from repeated, synthetic, finite flow series diverge from those produced with theoretical approaches using the same underlying flow PDF. The closed-form relation for f+ is a monotonically increasing function of flow variance. However, using finite flow series, we find that f+ increases with flow variance to a threshold that increases with flow record length. By introducing a sediment entrainment threshold, we present a physical mechanism for the observed diverging relationship between Qeff and flow variance in fine and coarse-bed channels. Our work shows that through complex and threshold-driven relationships sediment transport mode, channel morphology, flow variance, and flow record length all interact to influence estimates of what flow frequencies are most responsible for transporting sediment in alluvial channels.
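The closed-form approach the authors test couples a lognormal flow PDF f(Q) with a power-law rating curve Qs = aQ^b; the effective discharge maximizes the transport-effectiveness product f(Q)·Qs(Q), which for a lognormal PDF gives Qeff = exp(μ + (b − 1)σ²). A minimal sketch checking that closed form against a numerical grid search (the parameter values are illustrative, not from the paper):

```python
import math

def lognormal_pdf(q, mu, sigma):
    """PDF of a lognormal flow distribution with log-space mean mu, sd sigma."""
    return (math.exp(-(math.log(q) - mu) ** 2 / (2 * sigma ** 2))
            / (q * sigma * math.sqrt(2 * math.pi)))

def qeff_closed_form(mu, sigma, b):
    """argmax over Q of f(Q) * a*Q^b for a lognormal f (a cancels out)."""
    return math.exp(mu + (b - 1.0) * sigma ** 2)

# Illustrative parameters (hypothetical flow statistics and rating exponent):
mu, sigma, b = 3.0, 0.8, 1.5

# Numerical check: grid search over the transport-effectiveness curve,
# on a log-spaced grid spanning +/- 4 sigma around the log-space mean.
qs = [math.exp(mu - 4 * sigma + i * 8 * sigma / 20000) for i in range(20001)]
q_num = max(qs, key=lambda q: lognormal_pdf(q, mu, sigma) * q ** b)

print(f"closed form: {qeff_closed_form(mu, sigma, b):.2f}")
print(f"grid search: {q_num:.2f}")
```

The closed form makes the theoretical driver explicit: Qeff grows with flow variance whenever b > 1, which is the monotonic dependence that the paper's finite-record experiments show breaking down.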

  9. A simple theoretical framework for understanding heterogeneous differentiation of CD4+ T cells

    PubMed Central

    2012-01-01

    Background CD4+ T cells have several subsets of functional phenotypes, which play critical yet diverse roles in the immune system. Pathogen-driven differentiation of these subsets of cells is often heterogeneous in terms of the induced phenotypic diversity. In vitro recapitulation of heterogeneous differentiation under homogeneous experimental conditions indicates some highly regulated mechanisms by which multiple phenotypes of CD4+ T cells can be generated from a single population of naïve CD4+ T cells. Therefore, conceptual understanding of induced heterogeneous differentiation will shed light on the mechanisms controlling the response of populations of CD4+ T cells under physiological conditions. Results We present a simple theoretical framework to show how heterogeneous differentiation in a two-master-regulator paradigm can be governed by a signaling network motif common to all subsets of CD4+ T cells. With this motif, a population of naïve CD4+ T cells can integrate the signals from their environment to generate a functionally diverse population with robust commitment of individual cells. Notably, two positive feedback loops in this network motif govern three bistable switches, which in turn, give rise to three types of heterogeneous differentiated states, depending upon particular combinations of input signals. We provide three prototype models illustrating how to use this framework to explain experimental observations and make specific testable predictions. Conclusions The process in which several types of T helper cells are generated simultaneously to mount complex immune responses upon pathogenic challenges can be highly regulated, and a simple signaling network motif can be responsible for generating all possible types of heterogeneous populations with respect to a pair of master regulators controlling CD4+ T cell differentiation. The framework provides a mathematical basis for understanding the decision-making mechanisms of CD4+ T cells, and it can be

  10. Theoretical Analysis of Canadian Lifelong Education Development

    ERIC Educational Resources Information Center

    Mukan, Natalia; Barabash, Olena; Busko, Maria

    2014-01-01

    In the article, the problem of Canadian lifelong education development has been studied. The main objectives of the article are defined as theoretical analysis of scientific and pedagogical literature which highlights different aspects of the research problem; periods of lifelong education development; and determination of lifelong learning role…

  11. Simple gas chromatographic method for furfural analysis.

    PubMed

    Gaspar, Elvira M S M; Lopes, João F

    2009-04-03

A new, simple gas chromatographic method was developed for the direct analysis of 5-hydroxymethylfurfural (5-HMF), 2-furfural (2-F) and 5-methylfurfural (5-MF) in liquid and water-soluble foods, using direct-immersion SPME coupled to GC-FID and/or GC-TOF-MS. The fiber (DVB/CAR/PDMS) conditions were optimized: pH effect, temperature, adsorption and desorption times. The method is simple and accurate (RSD < 8%), with good recoveries (77-107%) and good limits of detection (GC-FID: 1.37 microgL(-1) for 2-F, 8.96 microgL(-1) for 5-MF, 6.52 microgL(-1) for 5-HMF; GC-TOF-MS: 0.3, 1.2 and 0.9 ngmL(-1) for 2-F, 5-MF and 5-HMF, respectively). It was applied to different commercial food matrices: honey; white, demerara, brown and yellow table sugars; and white and red balsamic vinegars. This one-step, sensitive and direct method for the analysis of furfurals will contribute to characterising and quantifying their presence in the human diet.

  12. Simple BiCMOS CCCTA design and resistorless analog function realization.

    PubMed

    Tangsrirat, Worapong

    2014-01-01

The simple realization of the current-controlled conveyor transconductance amplifier (CCCTA) in BiCMOS technology is introduced. The proposed BiCMOS CCCTA realization is based on a differential pair and a basic current mirror, which results in a simple structure. Its characteristics, namely the parasitic resistance (Rx) and the current transfer (io/iz), are electronically tunable through external bias currents. The realized circuit is suitable for fabrication in a standard 0.35 μm BiCMOS technology. Some simple and compact resistorless applications employing the proposed CCCTA as the active element are also suggested, demonstrating circuit characteristics with electronic controllability. PSPICE simulation results demonstrate the circuit behavior and confirm the theoretical analysis.

  13. Application of information-theoretic measures to quantitative analysis of immunofluorescent microscope imaging.

    PubMed

    Shutin, Dmitriy; Zlobinskaya, Olga

    2010-02-01

    The goal of this contribution is to apply model-based information-theoretic measures to the quantification of relative differences between immunofluorescent signals. Several models for approximating the empirical fluorescence intensity distributions are considered, namely Gaussian, Gamma, Beta, and kernel densities. As a distance measure the Hellinger distance and the Kullback-Leibler divergence are considered. For the Gaussian, Gamma, and Beta models the closed-form expressions for evaluating the distance as a function of the model parameters are obtained. The advantages of the proposed quantification framework as compared to simple mean-based approaches are analyzed with numerical simulations. Two biological experiments are also considered. The first is the functional analysis of the p8 subunit of the TFIIH complex responsible for a rare hereditary multi-system disorder--trichothiodystrophy group A (TTD-A). In the second experiment the proposed methods are applied to assess the UV-induced DNA lesion repair rate. A good agreement between our in vivo results and those obtained with an alternative in vitro measurement is established. We believe that the computational simplicity and the effectiveness of the proposed quantification procedure will make it very attractive for different analysis tasks in functional proteomics, as well as in high-content screening. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
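For the Gaussian model mentioned in the abstract, closed-form expressions for both distance measures are standard results; the sketch below is illustrative (not the authors' implementation, and the parameter names are my own):

```python
import math

def kl_gauss(mu1, s1, mu2, s2):
    # Kullback-Leibler divergence D(N(mu1, s1^2) || N(mu2, s2^2)).
    return math.log(s2 / s1) + (s1 ** 2 + (mu1 - mu2) ** 2) / (2 * s2 ** 2) - 0.5

def hellinger_gauss(mu1, s1, mu2, s2):
    # Hellinger distance between two univariate Gaussians.
    h2 = 1.0 - math.sqrt(2.0 * s1 * s2 / (s1 ** 2 + s2 ** 2)) * \
        math.exp(-((mu1 - mu2) ** 2) / (4.0 * (s1 ** 2 + s2 ** 2)))
    return math.sqrt(h2)

print(kl_gauss(0.0, 1.0, 0.0, 1.0))         # 0.0 for identical distributions
print(hellinger_gauss(0.0, 1.0, 0.0, 1.0))  # 0.0
```

Note that the Hellinger distance is symmetric in its arguments while the KL divergence is not, which is one practical reason to consider both.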

  14. Category Theoretic Analysis of Hierarchical Protein Materials and Social Networks

    PubMed Central

    Spivak, David I.; Giesa, Tristan; Wood, Elizabeth; Buehler, Markus J.

    2011-01-01

Materials in biology span all the scales from Angstroms to meters and typically consist of complex hierarchical assemblies of simple building blocks. Here we describe an application of category theory to describe structural and resulting functional properties of biological protein materials by developing so-called ologs. An olog is like a “concept web” or “semantic network” except that it follows a rigorous mathematical formulation based on category theory. This key difference ensures that an olog is unambiguous, highly adaptable to evolution and change, and suitable for sharing concepts with other ologs. We consider simple cases of beta-helical and amyloid-like protein filaments subjected to axial extension and develop an olog representation of their structural and resulting mechanical properties. We also construct a representation of a social network in which people send text-messages to their nearest neighbors and act as a team to perform a task. We show that the olog for the protein and the olog for the social network feature identical category-theoretic representations, and we proceed to precisely explicate the analogy or isomorphism between them. The examples presented here demonstrate that the intrinsic nature of a complex system, which in particular includes a precise relationship between structure and function at different hierarchical levels, can be effectively represented by an olog. This, in turn, allows for comparative studies between disparate materials or fields of application, and results in novel approaches to derive functionality in the design of de novo hierarchical systems. We discuss opportunities and challenges associated with the description of complex biological materials by using ologs as a powerful tool for analysis and design in the context of materiomics, and we present the potential impact of this approach for engineering, life sciences, and medicine. PMID:21931622

  15. A new simple local muscle recovery model and its theoretical and experimental validation.

    PubMed

    Ma, Liang; Zhang, Wei; Wu, Su; Zhang, Zhanwu

    2015-01-01

This study was conducted to provide theoretical and experimental validation of a local muscle recovery model. Muscle recovery has been modeled in different empirical and theoretical approaches to determine work-rest allowances for musculoskeletal disorder (MSD) prevention. However, time-related parameters and individual attributes have not been sufficiently considered in conventional approaches. A new muscle recovery model was proposed by integrating time-related task parameters and individual attributes. Theoretically, this muscle recovery model was compared to other theoretical models mathematically. Experimentally, a total of 20 subjects participated in the experimental validation. Hand grip force recovery and shoulder joint strength recovery were measured after a fatiguing operation. The recovery profile was fitted using the recovery model, and individual recovery rates were calculated after fitting. Good fitting values (r(2) > .8) were found for all subjects. Significant differences in recovery rates were found among different muscle groups (p < .05). The theoretical muscle recovery model was primarily validated by characterization of the recovery process after a fatiguing operation. The determined recovery rates may be useful for representing individual recovery attributes.

  16. Python for Information Theoretic Analysis of Neural Data

    PubMed Central

    Ince, Robin A. A.; Petersen, Rasmus S.; Swan, Daniel C.; Panzeri, Stefano

    2008-01-01

    Information theory, the mathematical theory of communication in the presence of noise, is playing an increasingly important role in modern quantitative neuroscience. It makes it possible to treat neural systems as stochastic communication channels and gain valuable, quantitative insights into their sensory coding function. These techniques provide results on how neurons encode stimuli in a way which is independent of any specific assumptions on which part of the neuronal response is signal and which is noise, and they can be usefully applied even to highly non-linear systems where traditional techniques fail. In this article, we describe our work and experiences using Python for information theoretic analysis. We outline some of the algorithmic, statistical and numerical challenges in the computation of information theoretic quantities from neural data. In particular, we consider the problems arising from limited sampling bias and from calculation of maximum entropy distributions in the presence of constraints representing the effects of different orders of interaction in the system. We explain how and why using Python has allowed us to significantly improve the speed and domain of applicability of the information theoretic algorithms, allowing analysis of data sets characterized by larger numbers of variables. We also discuss how our use of Python is facilitating integration with collaborative databases and centralised computational resources. PMID:19242557

  17. Simple Tidal Prism Models Revisited

    NASA Astrophysics Data System (ADS)

    Luketina, D.

    1998-01-01

Simple tidal prism models for well-mixed estuaries have been in use for some time and are discussed in most textbooks on estuaries. The appeal of this model is its simplicity. However, there are several flaws in the logic behind the model. These flaws are pointed out, and a more theoretically correct simple tidal prism model is derived. In doing so, it is made clear which effects can, in theory, be neglected and which cannot.

  18. Modal cost analysis for simple continua

    NASA Technical Reports Server (NTRS)

    Hu, A.; Skelton, R. E.; Yang, T. Y.

    1988-01-01

    The most popular finite element codes are based upon appealing theories of convergence of modal frequencies. For example, the popularity of cubic elements for beam-like structures is due to the rapid convergence of modal frequencies and stiffness properties. However, for those problems in which the primary consideration is the accuracy of response of the structure at specified locations, it is more important to obtain accuracy in the modal costs than in the modal frequencies. The modal cost represents the contribution of a mode in the norm of the response vector. This paper provides a complete modal cost analysis for simple continua such as beam-like structures. Upper bounds are developed for mode truncation errors in the model reduction process and modal cost analysis dictates which modes to retain in order to reduce the model for control design purposes.

  19. Simple Rules, Not So Simple: The Use of International Ovarian Tumor Analysis (IOTA) Terminology and Simple Rules in Inexperienced Hands in a Prospective Multicenter Cohort Study.

    PubMed

    Meys, Evelyne; Rutten, Iris; Kruitwagen, Roy; Slangen, Brigitte; Lambrechts, Sandrina; Mertens, Helen; Nolting, Ernst; Boskamp, Dieuwke; Van Gorp, Toon

    2017-12-01

To analyze how well untrained examiners - without experience in the use of International Ovarian Tumor Analysis (IOTA) terminology or simple ultrasound-based rules (simple rules) - are able to apply IOTA terminology and simple rules, and to assess the level of agreement between non-experts and an expert. This prospective multicenter cohort study enrolled women with ovarian masses. Ultrasound was performed by non-expert examiners and an expert. Ultrasound features were recorded using IOTA nomenclature and used for classifying the mass by simple rules. Interobserver agreement was evaluated with Fleiss' kappa and percentage agreement between observers. 50 consecutive women were included. We observed 46 discrepancies in the description of ovarian masses when non-experts utilized IOTA terminology. Tumor type was misclassified often (n = 22), resulting in poor interobserver agreement between the non-experts and the expert (kappa = 0.39, 95 %-CI 0.244 - 0.529, percentage of agreement = 52.0 %). Misinterpretation of simple rules by non-experts was observed 57 times, resulting in an erroneous diagnosis in 15 patients (30 %). The agreement for classifying the mass as benign, malignant or inconclusive by simple rules was only moderate between the non-experts and the expert (kappa = 0.50, 95 %-CI 0.300 - 0.704, percentage of agreement = 70.0 %). The level of agreement for all 10 simple rules features varied greatly (kappa index range: -0.08 - 0.74, percentage of agreement 66 - 94 %). Although simple rules are useful to distinguish benign from malignant adnexal masses, they are not that simple for untrained examiners. Training in both IOTA terminology and simple rules is necessary before simple rules can be introduced into guidelines and daily clinical practice. © Georg Thieme Verlag KG Stuttgart · New York.
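Interobserver agreement of the kind reported here can be illustrated with Cohen's kappa, a two-rater relative of the Fleiss' kappa used in the study. The labels below are hypothetical examples, not study data:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    # Cohen's kappa for two raters rating the same items:
    # (observed agreement - chance agreement) / (1 - chance agreement).
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    chance = sum(freq_a[label] * freq_b[label] for label in freq_a) / n ** 2
    return (observed - chance) / (1 - chance)

# Hypothetical classifications by a non-expert (a) and an expert (b).
a = ["benign", "malignant", "benign", "benign", "inconclusive", "benign"]
b = ["benign", "benign", "benign", "benign", "malignant", "benign"]
print(round(cohen_kappa(a, b), 2))  # 0.2
```

A kappa near 0 indicates agreement barely above chance, which is why the study's values of 0.39 and 0.50 are described as poor and moderate, respectively.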

  20. On the complex relationship between energy expenditure and longevity: Reconciling the contradictory empirical results with a simple theoretical model.

    PubMed

    Hou, Chen; Amunugama, Kaushalya

    2015-07-01

    The relationship between energy expenditure and longevity has been a central theme in aging studies. Empirical studies have yielded controversial results, which cannot be reconciled by existing theories. In this paper, we present a simple theoretical model based on first principles of energy conservation and allometric scaling laws. The model takes into considerations the energy tradeoffs between life history traits and the efficiency of the energy utilization, and offers quantitative and qualitative explanations for a set of seemingly contradictory empirical results. We show that oxidative metabolism can affect cellular damage and longevity in different ways in animals with different life histories and under different experimental conditions. Qualitative data and the linearity between energy expenditure, cellular damage, and lifespan assumed in previous studies are not sufficient to understand the complexity of the relationships. Our model provides a theoretical framework for quantitative analyses and predictions. The model is supported by a variety of empirical studies, including studies on the cellular damage profile during ontogeny; the intra- and inter-specific correlations between body mass, metabolic rate, and lifespan; and the effects on lifespan of (1) diet restriction and genetic modification of growth hormone, (2) the cold and exercise stresses, and (3) manipulations of antioxidant. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.

  1. A theoretical model of the relationship between the h-index and other simple citation indicators.

    PubMed

    Bertoli-Barsotti, Lucio; Lando, Tommaso

    2017-01-01

Of the existing theoretical formulas for the h-index, those recently suggested by Burrell (J Informetr 7:774-783, 2013b) and by Bertoli-Barsotti and Lando (J Informetr 9(4):762-776, 2015) have proved very effective in estimating the actual value of the h-index (Hirsch, Proc Natl Acad Sci USA 102:16569-16572, 2005), at least at the level of the individual scientist. These approaches lead (or may lead) to two slightly different formulas, being based, respectively, on a "standard" and a "shifted" version of the geometric distribution. In this paper, we review the genesis of these two formulas, which we shall call the "basic" and "improved" Lambert-W formula for the h-index, and compare their effectiveness with that of a number of instances taken from the well-known Glänzel-Schubert class of models for the h-index (based, instead, on a Paretian model) by means of an empirical study. All the formulas considered in the comparison are "ready-to-use", i.e., functions of simple citation indicators such as: the total number of publications; the total number of citations; the total number of cited papers; the number of citations of the most cited paper. The empirical study is based on citation data obtained from two different sets of journals belonging to two different scientific fields: more specifically, 231 journals from the area of "Statistics and Mathematical Methods" and 100 journals from the area of "Economics, Econometrics and Finance", totaling almost 100,000 and 20,000 publications, respectively. The citation data refer to different publication/citation time windows, different types of "citable" documents, and alternative approaches to the analysis of the citation process ("prospective" and "retrospective"). We conclude that, especially in its improved version, the Lambert-W formula for the h-index provides a quite robust and effective ready-to-use rule that should be preferred to other known formulas if one's goal is (simply) to derive a reliable estimate of
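As a reference point for the estimators compared here, the quantity being estimated is Hirsch's h-index itself, computed directly from citation counts. A minimal sketch of that definition (not the Lambert-W approximation from the paper):

```python
def h_index(citations):
    # Hirsch's h-index: the largest h such that h of the papers have
    # at least h citations each.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
```

The "ready-to-use" formulas in the paper aim to approximate this value from aggregate indicators alone, without access to the full citation list.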

  2. A Generalized Information Theoretical Model for Quantum Secret Sharing

    NASA Astrophysics Data System (ADS)

    Bai, Chen-Ming; Li, Zhi-Hui; Xu, Ting-Ting; Li, Yong-Ming

    2016-11-01

An information theoretical model for quantum secret sharing was introduced by H. Imai et al. (Quantum Inf. Comput. 5(1), 69-80, 2005) and analyzed with quantum information theory. In this paper, we analyze this information theoretical model using the properties of the quantum access structure. Based on this analysis, we propose a generalized model definition for quantum secret sharing schemes. In our model, more quantum access structures can be realized by the generalized quantum secret sharing schemes than by the previous one. In addition, we analyze two kinds of important quantum access structures to illustrate the existence and rationality of the generalized quantum secret sharing schemes, and consider the security of the scheme through simple examples.

  3. ViSimpl: Multi-View Visual Analysis of Brain Simulation Data

    PubMed Central

    Galindo, Sergio E.; Toharia, Pablo; Robles, Oscar D.; Pastor, Luis

    2016-01-01

    After decades of independent morphological and functional brain research, a key point in neuroscience nowadays is to understand the combined relationships between the structure of the brain and its components and their dynamics on multiple scales, ranging from circuits of neurons at micro or mesoscale to brain regions at macroscale. With such a goal in mind, there is a vast amount of research focusing on modeling and simulating activity within neuronal structures, and these simulations generate large and complex datasets which have to be analyzed in order to gain the desired insight. In such context, this paper presents ViSimpl, which integrates a set of visualization and interaction tools that provide a semantic view of brain data with the aim of improving its analysis procedures. ViSimpl provides 3D particle-based rendering that allows visualizing simulation data with their associated spatial and temporal information, enhancing the knowledge extraction process. It also provides abstract representations of the time-varying magnitudes supporting different data aggregation and disaggregation operations and giving also focus and context clues. In addition, ViSimpl tools provide synchronized playback control of the simulation being analyzed. Finally, ViSimpl allows performing selection and filtering operations relying on an application called NeuroScheme. All these views are loosely coupled and can be used independently, but they can also work together as linked views, both in centralized and distributed computing environments, enhancing the data exploration and analysis procedures. PMID:27774062

  5. Leakage and spillover effects of forest management on carbon storage: theoretical insights from a simple model

    NASA Astrophysics Data System (ADS)

    Magnani, Federico; Dewar, Roderick C.; Borghetti, Marco

    2009-04-01

    Leakage (spillover) refers to the unintended negative (positive) consequences of forest carbon (C) management in one area on C storage elsewhere. For example, the local C storage benefit of less intensive harvesting in one area may be offset, partly or completely, by intensified harvesting elsewhere in order to meet global timber demand. We present the results of a theoretical study aimed at identifying the key factors determining leakage and spillover, as a prerequisite for more realistic numerical studies. We use a simple model of C storage in managed forest ecosystems and their wood products to derive approximate analytical expressions for the leakage induced by decreasing the harvesting frequency of existing forest, and the spillover induced by establishing new plantations, assuming a fixed total wood production from local and remote (non-local) forests combined. We find that leakage and spillover depend crucially on the growth rates, wood product lifetimes and woody litter decomposition rates of local and remote forests. In particular, our results reveal critical thresholds for leakage and spillover, beyond which effects of forest management on remote C storage exceed local effects. Order of magnitude estimates of leakage indicate its potential importance at global scales.

  6. A Theoretical Analysis of the Influence of Electroosmosis on the Effective Ionic Mobility in Capillary Zone Electrophoresis

    ERIC Educational Resources Information Center

    Hijnen, Hens

    2009-01-01

    A theoretical description of the influence of electroosmosis on the effective mobility of simple ions in capillary zone electrophoresis is presented. The mathematical equations derived from the space-charge model contain the pK[subscript a] value and the density of the weak acid surface groups as parameters characterizing the capillary. It is…

  7. A Theoretical Analysis of Why Hybrid Ensembles Work.

    PubMed

    Hsu, Kuo-Wei

    2017-01-01

Inspired by the group decision making process, ensembles or combinations of classifiers have been found favorable in a wide variety of application domains. Some researchers propose to use a mixture of two different types of classification algorithms to create a hybrid ensemble. Why does such an ensemble work? The question remains. Following the concept of diversity, which is one of the fundamental elements of the success of ensembles, we conduct a theoretical analysis of why hybrid ensembles work, connecting the use of different algorithms to accuracy gain. We also conduct experiments on the classification performance of hybrid ensembles of classifiers created by the decision tree and naïve Bayes classification algorithms, each of which is a top data mining algorithm often used to create non-hybrid ensembles. Through this paper, we thus provide a complement to the theoretical foundation of creating and using hybrid ensembles.

  8. Theory of freezing in simple systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerjan, C.; Bagchi, B.

The transition parameters for the freezing of two one-component liquids into crystalline solids are evaluated by two theoretical approaches. The first system considered is liquid sodium, which crystallizes into a body-centered-cubic (bcc) lattice; the second system is the freezing of adhesive hard spheres into a face-centered-cubic (fcc) lattice. Two related theoretical techniques are used in this evaluation: one is based upon a recently developed bifurcation analysis; the other is based upon the theory of freezing developed by Ramakrishnan and Yussouff. For liquid sodium, where experimental information is available, the predictions of the two theories agree well with experiment and each other. The adhesive-hard-sphere system, which displays a triple point and can be used to fit some liquids accurately, shows a temperature dependence of the freezing parameters similar to that of Lennard-Jones systems. At very low temperature, the fractional density change on freezing shows a dramatic increase as a function of temperature, indicating the importance of the contributions due to the triplet direct correlation function. Also, we consider the freezing of a one-component liquid into a simple-cubic (sc) lattice by bifurcation analysis and show that this transition is highly unfavorable, independent of the choice of interatomic potential. The bifurcation diagrams for the three lattices considered are compared and found to be strikingly different. Finally, a new stability analysis of the bifurcation diagrams is presented.

  9. Frequency domain analysis of noise in simple gene circuits

    NASA Astrophysics Data System (ADS)

    Cox, Chris D.; McCollum, James M.; Austin, Derek W.; Allen, Michael S.; Dar, Roy D.; Simpson, Michael L.

    2006-06-01

    Recent advances in single cell methods have spurred progress in quantifying and analyzing stochastic fluctuations, or noise, in genetic networks. Many of these studies have focused on identifying the sources of noise and quantifying its magnitude, and at the same time, paying less attention to the frequency content of the noise. We have developed a frequency domain approach to extract the information contained in the frequency content of the noise. In this article we review our work in this area and extend it to explicitly consider sources of extrinsic and intrinsic noise. First we review applications of the frequency domain approach to several simple circuits, including a constitutively expressed gene, a gene regulated by transitions in its operator state, and a negatively autoregulated gene. We then review our recent experimental study, in which time-lapse microscopy was used to measure noise in the expression of green fluorescent protein in individual cells. The results demonstrate how changes in rate constants within the gene circuit are reflected in the spectral content of the noise in a manner consistent with the predictions derived through frequency domain analysis. The experimental results confirm our earlier theoretical prediction that negative autoregulation not only reduces the magnitude of the noise but shifts its content out to higher frequency. Finally, we develop a frequency domain model of gene expression that explicitly accounts for extrinsic noise at the transcriptional and translational levels. We apply the model to interpret a shift in the autocorrelation function of green fluorescent protein induced by perturbations of the translational process as a shift in the frequency spectrum of extrinsic noise and a decrease in its weighting relative to intrinsic noise.
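The spectral-shift result described above can be sketched with the standard linearized (Ornstein-Uhlenbeck) noise spectrum, which is Lorentzian with corner frequency equal to the effective decay rate. The rate values below are illustrative assumptions, not data from the article:

```python
def ou_spectrum(omega, D, gamma):
    # Lorentzian power spectrum of a linearized (Ornstein-Uhlenbeck)
    # birth-death model of gene expression noise: white production noise
    # of strength D filtered by first-order decay at rate gamma.
    return 2.0 * D / (gamma ** 2 + omega ** 2)

def total_noise(D, gamma):
    # Integrated noise power (variance), which evaluates to D / gamma.
    return D / gamma

D = 1.0
gamma_open = 1.0  # unregulated gene (assumed effective decay rate)
gamma_fb = 3.0    # negative autoregulation raises the effective decay rate

# Feedback reduces the total noise magnitude while moving the corner
# frequency (omega = gamma) out to higher frequency.
print(total_noise(D, gamma_open), total_noise(D, gamma_fb))
print(ou_spectrum(0.0, D, gamma_open) > ou_spectrum(0.0, D, gamma_fb))  # True
```

This captures, in the simplest possible form, the prediction that negative autoregulation both lowers noise magnitude and pushes its content to higher frequencies.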

  10. Simple-MSSM: a simple and efficient method for simultaneous multi-site saturation mutagenesis.

    PubMed

    Cheng, Feng; Xu, Jian-Miao; Xiang, Chao; Liu, Zhi-Qiang; Zhao, Li-Qing; Zheng, Yu-Guo

    2017-04-01

To develop a practically simple and robust multi-site saturation mutagenesis (MSSM) method that enables simultaneous recombination of amino acid positions for focused mutant library generation. A general restriction enzyme-free and ligase-free MSSM method (Simple-MSSM) is presented, based on prolonged overlap extension PCR (POE-PCR) and Simple Cloning techniques. As a proof of principle of Simple-MSSM, the gene of eGFP (enhanced green fluorescent protein) was used as a template for simultaneous mutagenesis of five codons. Forty-eight randomly selected clones were sequenced. Sequencing revealed that all 48 clones showed at least one mutant codon (mutation efficiency = 100%), and 46 out of the 48 clones had mutations at all five codons. The obtained diversities at these five codons are 27, 24, 26, 26 and 22, respectively, which correspond to 84, 75, 81, 81 and 69% of the theoretical diversity offered by NNK-degeneration (32 codons; NNK, K = T or G). The enzyme-free Simple-MSSM method can simultaneously and efficiently saturate five codons within one day, and therefore avoids missing interactions between residues in interacting amino acid networks.
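The NNK diversity figures quoted in the abstract follow from simple counting; a sketch (where `coverage_percent` is an illustrative helper, not from the paper):

```python
from itertools import product

# NNK degenerate codons: N = A/C/G/T at the first two positions,
# K = G/T at the third, giving 4 * 4 * 2 = 32 codons.
nnk_codons = ["".join(c) for c in product("ACGT", "ACGT", "GT")]

def coverage_percent(observed, theoretical=32):
    # Observed fraction of the theoretical NNK codon diversity at a site.
    return round(100.0 * observed / theoretical)

# Codon diversities reported at the five saturated sites (from the abstract).
observed_diversities = [27, 24, 26, 26, 22]
print(len(nnk_codons))                                      # 32
print([coverage_percent(n) for n in observed_diversities])  # [84, 75, 81, 81, 69]
```

The NNK scheme is commonly chosen because its 32 codons still cover all 20 amino acids while halving the number of stop codons relative to fully random NNN.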

  11. [Analysis on the accuracy of simple selection method of Fengshi (GB 31)].

    PubMed

    Li, Zhixing; Zhang, Haihua; Li, Suhe

    2015-12-01

    To explore the accuracy of the simple selection method of Fengshi (GB 31). Through the study of ancient and modern data, the analysis and integration of acupuncture books, the comparison of the locations of Fengshi (GB 31) given by doctors of all dynasties, and the integration of modern anatomy, the modern simple selection method of Fengshi (GB 31) is confirmed to be the same as the traditional one. It is believed that the simple selection method is in accord with the human-oriented thought of TCM. Treatment by acupoints should be based on the emerging nature and the individual differences of patients. Also, it is proposed that Fengshi (GB 31) should be located through the integration of the simple method and body surface anatomical marks.

  12. Sensitive sub-Doppler nonlinear spectroscopy for hyperfine-structure analysis using simple atomizers

    NASA Astrophysics Data System (ADS)

    Mickadeit, Fritz K.; Kemp, Helen; Schafer, Julia; Tong, William M.

    1998-05-01

    Laser wave-mixing spectroscopy is presented as a sub-Doppler method that offers not only high spectral resolution, but also excellent detection sensitivity. It offers spectral resolution suitable for hyperfine-structure analysis and isotope-ratio measurements. In a non-planar backward-scattering four-wave mixing optical configuration, two of the three input beams counter-propagate, so Doppler broadening is minimized and spectral resolution is enhanced. Since the signal is a coherent beam, optical collection is efficient and signal detection is convenient. This simple multi-photon nonlinear laser method offers unusually sensitive detection limits that are suitable for trace-concentration isotope analysis using a few different types of simple analytical atomizers. Reliable measurement of hyperfine structures allows effective determination of isotope ratios for chemical analysis.

  13. Au36(SePh)24 nanomolecules: synthesis, optical spectroscopy and theoretical analysis.

    PubMed

    Rambukwella, Milan; Chang, Le; Ravishanker, Anish; Fortunelli, Alessandro; Stener, Mauro; Dass, Amala

    2018-05-16

    Here, we report the synthesis of selenophenol (HSePh) protected Au36(SePh)24 nanomolecules via a ligand-exchange reaction of 4-tert-butylbenzenethiol (HSPh-tBu) protected Au36(SPh-tBu)24 with selenophenol, and its spectroscopic and theoretical analysis. Matrix assisted laser desorption ionization (MALDI) mass spectrometry, electrospray ionization (ESI) mass spectrometry and optical characterization confirm that the composition of the as synthesized product is predominantly Au36(SePh)24 nanomolecules. Size exclusion chromatography (SEC) was employed to isolate the Au36(SePh)24 and temperature dependent optical absorption studies and theoretical analysis were performed. Theoretically, an Independent Component Maps of Oscillator Strength (ICM-OS) analysis of simulated spectra shows that the enhancement in absorption intensity in Au36(SePh)24 with respect to Au36(SPh)24 can be ascribed to the absence of interference and/or increased long-range coupling between interband metal core and ligand excitations. This work demonstrates and helps to understand the effect of Au-Se bridging on the properties of gold nanomolecules.

  14. SimpleITK Image-Analysis Notebooks: a Collaborative Environment for Education and Reproducible Research.

    PubMed

    Yaniv, Ziv; Lowekamp, Bradley C; Johnson, Hans J; Beare, Richard

    2018-06-01

    Modern scientific endeavors increasingly require team collaborations to construct and interpret complex computational workflows. This work describes an image-analysis environment that supports the use of computational tools that facilitate reproducible research and support scientists with varying levels of software development skills. The Jupyter notebook web application is the basis of an environment that enables flexible, well-documented, and reproducible workflows via literate programming. Image-analysis software development is made accessible to scientists with varying levels of programming experience via the use of the SimpleITK toolkit, a simplified interface to the Insight Segmentation and Registration Toolkit. Additional features of the development environment include user-friendly data sharing using online data repositories and a testing framework that facilitates code maintenance. SimpleITK provides a large number of examples illustrating educational and research-oriented image analysis workflows for free download from GitHub under an Apache 2.0 license: github.com/InsightSoftwareConsortium/SimpleITK-Notebooks.

  15. A Theoretical Analysis of Why Hybrid Ensembles Work

    PubMed Central

    2017-01-01

    Inspired by the group decision making process, ensembles or combinations of classifiers have been found favorable in a wide variety of application domains. Some researchers propose to use a mixture of two different types of classification algorithms to create a hybrid ensemble. Why does such an ensemble work? The question remains. Following the concept of diversity, which is one of the fundamental elements of the success of ensembles, we conduct a theoretical analysis of why hybrid ensembles work, connecting the use of different algorithms to accuracy gain. We also conduct experiments on the classification performance of hybrid ensembles of classifiers created by the decision tree and naïve Bayes classification algorithms, each of which is a top data mining algorithm and often used to create non-hybrid ensembles. Therefore, through this paper, we provide a complement to the theoretical foundation of creating and using hybrid ensembles. PMID:28255296
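The diversity argument can be made concrete with the textbook majority-vote calculation: if ensemble members erred independently (an idealization of perfect diversity, which mixing two algorithm families aims to approach), combining them raises accuracy. A minimal sketch:

```python
from math import comb

def majority_accuracy(p, n):
    """Probability that a majority vote of n independent classifiers,
    each with individual accuracy p, is correct (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n // 2) + 1, n + 1))

# Three independent classifiers at 70% accuracy beat any single one:
print(round(majority_accuracy(0.7, 3), 3))   # 0.784
```

Real classifiers trained on the same data are never fully independent; hybrid ensembles are one way to decorrelate their errors and move toward this idealized gain.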

  16. Theoretical size distribution of fossil taxa: analysis of a null model.

    PubMed

    Reed, William J; Hughes, Barry D

    2007-03-22

    This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered along with a comparison of the probability of a monospecific genus with that of a monogeneric family.
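The null model described above can be sketched as a linear birth-death simulation. The parameter values below are hypothetical, and the sketch omits cataclysms and the founding of new genera; it only illustrates how a genus size distribution, and in particular the probability of a monospecific genus, arises from random speciation and background extinction:

```python
import random

def genus_size(lam=0.3, mu=0.25, t_obs=10.0, rng=None):
    """Number of species in a genus founded by one species after time
    t_obs, under linear birth (speciation, rate lam per species) and
    death (background extinction, rate mu per species)."""
    t, n = 0.0, 1
    while n > 0:
        rate = (lam + mu) * n
        dt = rng.expovariate(rate)
        if t + dt > t_obs:
            break
        t += dt
        if rng.random() < lam / (lam + mu):
            n += 1   # speciation
        else:
            n -= 1   # extinction
    return n

rng = random.Random(42)
sizes = [genus_size(rng=rng) for _ in range(5000)]
surviving = [s for s in sizes if s > 0]
p_mono = surviving.count(1) / len(surviving)  # P(monospecific | extant)
print(round(p_mono, 2))
```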

  17. High-accuracy self-mixing interferometer based on multiple reflections using a simple external reflecting mirror

    NASA Astrophysics Data System (ADS)

    Wang, Xiu-lin; Wei, Zheng; Wang, Rui; Huang, Wen-cai

    2018-05-01

    A self-mixing interferometer (SMI) with resolution twenty times higher than that of a conventional interferometer is developed by using multiple reflections. The multiple-pass optical configuration is constructed simply by employing an external reflecting mirror, which makes it easy to re-inject the light back into the laser cavity. Theoretical analysis shows that the resolution of the measurement is scalable by adjusting the number of reflections. The experiment shows that the proposed method achieves an optical resolution of approximately λ/40. The influence of the displacement sensitivity gain (G) is further analyzed and discussed in practical experiments.
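The scaling claimed above follows from the fact that folding the external cavity N times multiplies the optical path change per unit of target displacement, so one interference fringe corresponds to a displacement of λ/(2N) instead of the conventional λ/2. A minimal sketch (the 632.8 nm He-Ne wavelength is an assumption, not stated in the abstract):

```python
def smi_resolution(wavelength, n_reflections):
    """Fringe resolution of a self-mixing interferometer whose external
    cavity is folded n_reflections times: each fringe corresponds to a
    target displacement of wavelength / (2 * n_reflections)."""
    return wavelength / (2.0 * n_reflections)

# Assumed He-Ne laser; 20 reflections reproduce the ~lambda/40 figure:
print(smi_resolution(632.8e-9, 20))   # ~1.58e-08 m, i.e. ~15.8 nm
```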

  18. Theoretical Aspects of Speech Production.

    ERIC Educational Resources Information Center

    Stevens, Kenneth N.

    1992-01-01

    This paper on speech production in children and youth with hearing impairments summarizes theoretical aspects, including the speech production process, sound sources in the vocal tract, vowel production, and consonant production. Examples of spectra for several classes of vowel and consonant sounds in simple syllables are given. (DB)

  19. Theoretical Noise Analysis on a Position-sensitive Metallic Magnetic Calorimeter

    NASA Technical Reports Server (NTRS)

    Smith, Stephen J.

    2007-01-01

    We report on the theoretical noise analysis for a position-sensitive Metallic Magnetic Calorimeter (MMC), consisting of MMC read-out at both ends of a large X-ray absorber. Such devices are under consideration as alternatives to other cryogenic technologies for future X-ray astronomy missions. We use a finite-element model (FEM) to numerically calculate the signal and noise response at the detector outputs and investigate the correlations between the noise measured at each MMC coupled by the absorber. We then calculate, using the optimal filter concept, the theoretical energy and position resolution across the detector and discuss the trade-offs involved in optimizing the detector design for energy resolution, position resolution and count rate. The results show that, theoretically, the position-sensitive MMC concept offers impressive spectral and spatial resolving capabilities compared to pixel arrays and similar position-sensitive cryogenic technologies using Transition Edge Sensor (TES) read-out.

  20. Dream-reality confusion in borderline personality disorder: a theoretical analysis

    PubMed Central

    Skrzypińska, Dagna; Szmigielska, Barbara

    2015-01-01

    This paper presents an analysis of dream-reality confusion (DRC) in relation to the characteristics of borderline personality disorder (BPD), based on research findings and theoretical considerations. It is hypothesized that people with BPD are more likely to experience DRC compared to people in the non-clinical population. Several variables related to this hypothesis were identified through a theoretical analysis of the scientific literature. Sleep disturbances: Problems with sleep are found in 15–95.5% of people with BPD (Hafizi, 2013), and unstable sleep and wake cycles, which occur in BPD (Fleischer et al., 2012), are linked to DRC. Dissociation: Nearly two-thirds of people with BPD experience dissociative symptoms (Korzekwa and Pain, 2009), and dissociative symptoms are correlated with fantasy proneness; both dissociative symptoms and fantasy proneness are related to DRC (Giesbrecht and Merckelbach, 2006). Negative dream content: People with BPD have nightmares more often than other people (Semiz et al., 2008); dreams that are more likely to be confused with reality tend to be more realistic and unpleasant, and are reflected in waking behavior (Rassin et al., 2001). Cognitive disturbances: Many BPD patients experience various cognitive disturbances, including problems with reality testing (Fiqueierdo, 2006; Mosquera et al., 2011), which can foster DRC. Thin boundaries: People with thin boundaries are more prone to DRC than people with thick boundaries, and people with BPD tend to have thin boundaries (Hartmann, 2011). The theoretical analysis based on these findings suggests that people who suffer from BPD may be more susceptible to confusing dream content with actual waking events. PMID:26441768

  1. Predicting excitonic gaps of semiconducting single-walled carbon nanotubes from a field theoretic analysis

    DOE PAGES

    Konik, Robert M.; Sfeir, Matthew Y.; Misewich, James A.

    2015-02-17

    We demonstrate that a non-perturbative framework for the treatment of the excitations of single-walled carbon nanotubes based upon a field theoretic reduction is able to accurately describe experimental observations of the absolute values of excitonic energies. This theoretical framework yields a simple scaling function from which the excitonic energies can be read off. This scaling function is primarily determined by a single parameter, the charge Luttinger parameter of the tube, which is in turn a function of the tube chirality, dielectric environment, and the tube's dimensions, thus expressing disparate influences on the excitonic energies in a unified fashion. As a result, we test this theory explicitly on the data reported in [Nano Letters 5, 2314 (2005)] and [Phys. Rev. B 82, 195424 (2010)] and so demonstrate that the method works over a wide range of reported excitonic spectra.

  2. A simple, sensitive graphical method of treating thermogravimetric analysis data

    Treesearch

    Abraham Broido

    1969-01-01

    Thermogravimetric Analysis (TGA) is finding increasing utility in investigations of the pyrolysis and combustion behavior of materials. Although a theoretical treatment of the TGA behavior of an idealized reaction is relatively straightforward, major complications can be introduced when the reactions are complex, e.g., in the pyrolysis of cellulose, and when...
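For the idealized first-order case mentioned above, Broido's graphical treatment plots ln(ln(1/y)) against 1/T, which is linear with slope -Ea/R, where y is the fraction of volatilizable material remaining. A sketch on synthetic data (the activation energy and intercept below are hypothetical, chosen only to exercise the method):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def broido_activation_energy(temps_K, y):
    """Estimate the activation energy from TGA data via the Broido plot:
    ln(ln(1/y)) vs 1/T is linear with slope -Ea/R."""
    xs = [1.0 / T for T in temps_K]
    zs = [math.log(math.log(1.0 / yi)) for yi in y]
    n = len(xs)
    xbar, zbar = sum(xs) / n, sum(zs) / n
    slope = (sum((x - xbar) * (z - zbar) for x, z in zip(xs, zs))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope * R

# Synthetic first-order data generated from Ea = 120 kJ/mol:
Ea_true = 120e3
temps = [550.0, 575.0, 600.0, 625.0, 650.0]
y = [math.exp(-math.exp(-Ea_true / (R * T) + 20.0)) for T in temps]
print(round(broido_activation_energy(temps, y)))  # 120000
```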

  3. Analysis of pre-service physics teacher skills designing simple physics experiments based technology

    NASA Astrophysics Data System (ADS)

    Susilawati; Huda, C.; Kurniawan, W.; Masturi; Khoiri, N.

    2018-03-01

    Pre-service physics teachers' skill in designing simple experimental sets is very important for deepening students' conceptual understanding and for practicing scientific skills in the laboratory. This study describes the skills of physics students in designing simple technology-based experiments. The experimental design stages include simple tool design and sensor modification. The research method used is descriptive, with a sample of 25 students and 5 variations of simple physics experimental design. Based on the results of interviews and observations, the pre-service physics teachers' skill in designing simple technology-based physics experiments is good. Observations show that their skill in designing simple experiments is good, while sensor modification and application are still lacking. This suggests that pre-service physics teachers still need a lot of practice in designing physics experiments using sensor modifications. Based on the interview results, students are highly motivated to perform laboratory activities actively and show high curiosity about becoming skilled at making simple practicum tools for physics experiments.

  4. Experimental Control of Simple Pendulum Model

    ERIC Educational Resources Information Center

    Medina, C.

    2004-01-01

    This paper conveys information about a Physics laboratory experiment for students with some theoretical knowledge about oscillatory motion. Students construct a simple pendulum that behaves as an ideal one, and analyze the effect of the model assumptions on its period. The following aspects are quantitatively analyzed: vanishing friction, small amplitude,…
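The small-amplitude assumption mentioned above can be quantified with the standard series expansion of the pendulum period, T ≈ T0(1 + θ0²/16 + 11θ0⁴/3072). A sketch with hypothetical values (1 m pendulum, g = 9.81 m/s²):

```python
import math

def pendulum_period(length, theta0, g=9.81):
    """Period of a simple pendulum with finite amplitude theta0 (rad),
    using the standard series correction to the small-angle value."""
    t0 = 2.0 * math.pi * math.sqrt(length / g)
    correction = 1.0 + theta0**2 / 16.0 + 11.0 * theta0**4 / 3072.0
    return t0 * correction

t_small = pendulum_period(1.0, 0.05)  # near-ideal amplitude
t_large = pendulum_period(1.0, 1.0)   # ~57 degrees
print(round(t_large / t_small, 4))    # ~1.0659: a ~6.6% period shift
```

At 0.05 rad the correction is about 0.016%, which is why small-amplitude data match the ideal model so well.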

  5. Morphometric analysis of a fresh simple crater on the Moon.

    NASA Astrophysics Data System (ADS)

    Vivaldi, V.; Ninfo, A.; Massironi, M.; Martellato, E.; Cremonese, G.

    In this research we propose an innovative method to determine and quantify the morphology of a simple fresh impact crater. Linné is a well preserved impact crater, 2.2 km in diameter, located at 27.7°N, 11.8°E, near the western edge of Mare Serenitatis on the Moon. The crater was photographed by the Lunar Orbiter and the Apollo space missions. Its particular morphology may place Linné as the most striking example of a small fresh simple crater. Morphometric analysis, conducted on a recent high-resolution DTM from LROC (NASA), quantitatively confirmed the pristine morphology of the crater, revealing a clear inner layering which highlights a sequence of lava emplacement events.

  6. Path Analysis Tests of Theoretical Models of Children's Memory Performance

    ERIC Educational Resources Information Center

    DeMarie, Darlene; Miller, Patricia H.; Ferron, John; Cunningham, Walter R.

    2004-01-01

    Path analysis was used to test theoretical models of relations among variables known to predict differences in children's memory--strategies, capacity, and metamemory. Children in kindergarten to fourth grade (chronological ages 5 to 11) performed different memory tasks. Several strategies (i.e., sorting, clustering, rehearsal, and self-testing)…

  7. Theoretical analysis of intracortical microelectrode recordings

    NASA Astrophysics Data System (ADS)

    Lempka, Scott F.; Johnson, Matthew D.; Moffitt, Michael A.; Otto, Kevin J.; Kipke, Daryl R.; McIntyre, Cameron C.

    2011-08-01

    Advanced fabrication techniques have now made it possible to produce microelectrode arrays for recording the electrical activity of a large number of neurons in the intact brain for both clinical and basic science applications. However, the long-term recording performance desired for these applications is hindered by a number of factors that lead to device failure or a poor signal-to-noise ratio (SNR). The goal of this study was to identify factors that can affect recording quality using theoretical analysis of intracortical microelectrode recordings of single-unit activity. Extracellular microelectrode recordings were simulated with a detailed multi-compartment cable model of a pyramidal neuron coupled to a finite-element volume conductor head model containing an implanted recording microelectrode. Recording noise sources were also incorporated into the overall modeling infrastructure. The analyses of this study would be very difficult to perform experimentally; however, our model-based approach enabled a systematic investigation of the effects of a large number of variables on recording quality. Our results demonstrate that recording amplitude and noise are relatively independent of microelectrode size, but instead are primarily affected by the selected recording bandwidth, impedance of the electrode-tissue interface and the density and firing rates of neurons surrounding the recording electrode. This study provides the theoretical groundwork that allows for the design of the microelectrode and recording electronics such that the SNR is maximized. Such advances could help enable the long-term functionality required for chronic neural recording applications.
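One noise contribution the study ties to the electrode-tissue interface can be estimated from first principles: the Johnson (thermal) noise of the resistive part of the interface impedance over the recording bandwidth. A sketch with hypothetical values (1 MΩ interface resistance, a ~10 kHz spike band, body temperature), not figures from the paper:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_vrms(R_ohm, bandwidth_hz, T_kelvin=310.0):
    """RMS Johnson noise voltage of a resistance R over a bandwidth:
    v = sqrt(4 k_B T R B)."""
    return math.sqrt(4.0 * K_B * T_kelvin * R_ohm * bandwidth_hz)

# Hypothetical 1 MOhm interface, 0.3-10 kHz recording band:
v = thermal_noise_vrms(1e6, 9.7e3)
print(round(v * 1e6, 1))   # ~12.9 microvolts RMS
```

Since extracellular spikes are often tens to hundreds of microvolts, this single term already shows why interface impedance and the chosen bandwidth dominate the attainable SNR.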

  8. Theoretical analysis of intracortical microelectrode recordings

    PubMed Central

    Lempka, Scott F; Johnson, Matthew D; Moffitt, Michael A; Otto, Kevin J; Kipke, Daryl R; McIntyre, Cameron C

    2011-01-01

    Advanced fabrication techniques have now made it possible to produce microelectrode arrays for recording the electrical activity of a large number of neurons in the intact brain for both clinical and basic science applications. However, the long-term recording performance desired for these applications is hindered by a number of factors that lead to device failure or a poor signal-to-noise ratio (SNR). The goal of this study was to identify factors that can affect recording quality using theoretical analysis of intracortical microelectrode recordings of single-unit activity. Extracellular microelectrode recordings were simulated with a detailed multi-compartment cable model of a pyramidal neuron coupled to a finite element volume conductor head model containing an implanted recording microelectrode. Recording noise sources were also incorporated into the overall modeling infrastructure. The analyses of this study would be very difficult to perform experimentally; however, our model-based approach enabled a systematic investigation of the effects of a large number of variables on recording quality. Our results demonstrate that recording amplitude and noise are relatively independent of microelectrode size, but instead are primarily affected by the selected recording bandwidth, impedance of the electrode-tissue interface, and the density and firing rates of neurons surrounding the recording electrode. This study provides the theoretical groundwork that allows for the design of the microelectrode and recording electronics such that the SNR is maximized. Such advances could help enable the long-term functionality required for chronic neural recording applications. PMID:21775783

  9. Theoretical analysis of the rotational barrier of ethane.

    PubMed

    Mo, Yirong; Gao, Jiali

    2007-02-01

    The understanding of the ethane rotation barrier is fundamental for structural theory and the conformational analysis of organic molecules, and requires a consistent theoretical model to differentiate the steric and hyperconjugation effects. Due to recently renewed controversies over the barrier's origin, we developed a computational approach to probe the rotation barriers of ethane and its congeners in terms of steric repulsion, hyperconjugative interaction, and electronic and geometric relaxations. Our study reaffirms that conventional steric repulsion overwhelmingly dominates the barriers.

  10. Theoretical size distribution of fossil taxa: analysis of a null model

    PubMed Central

    Reed, William J; Hughes, Barry D

    2007-01-01

    Background This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. Model New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. Conclusion The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered along with a comparison of the probability of a monospecific genus with that of a monogeneric family. PMID:17376249

  11. Mode Deactivation Therapy (MDT) Family Therapy: A Theoretical Case Analysis

    ERIC Educational Resources Information Center

    Apsche, J. A.; Ward Bailey, S. R.

    2004-01-01

    This case study presents a theoretical analysis of implementing mode deactivation therapy (MDT) (Apsche & Ward Bailey, 2003) family therapy with a 13 year old Caucasian male. MDT is a form of cognitive behavioral therapy (CBT) that combines the balance of dialectical behavior therapy (DBT) (Linehan, 1993), the importance of perception from…

  12. Protein detection by Simple Western™ analysis.

    PubMed

    Harris, Valerie M

    2015-01-01

    ProteinSimple has taken a well-known protein detection method, the western blot, and revolutionized it. The Simple Western™ system uses capillary electrophoresis to identify and quantitate a protein of interest. ProteinSimple provides multiple detection apparatuses (Wes, Sally Sue, or Peggy Sue) that are suggested to save scientists valuable time by allowing the researcher to prepare the protein sample, load it along with the necessary antibodies and substrates, and walk away. Within 3-5 h the protein will be separated by size or charge, immunodetection of the target protein will be accurately quantitated, and results will be immediately made available. Using the Peggy Sue instrument, one study recently examined changes in MAPK signaling proteins in the sex-determining stage of gonadal development. Here the methodology is described.

  13. Theoretical analysis of stack gas emission velocity measurement by optical scintillation

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Dong, Feng-Zhong; Ni, Zhi-Bo; Pang, Tao; Zeng, Zong-Yong; Wu, Bian; Zhang, Zhi-Rong

    2014-04-01

    Theoretical analysis of an online measurement of the stack gas flow velocity, based on the optical scintillation method with a structure of two parallel optical paths, is performed. The causes of optical scintillation in a stack are first introduced. Then, the principle of flow velocity measurement and its mathematical expression based on cross-correlation of the optical scintillation are presented. Field test results show that the flow velocity measured by the proposed technique is consistent with the value measured by a Pitot tube, verifying the effectiveness of the method. Finally, by use of the structure function of logarithmic light intensity fluctuations, a theoretical explanation of the low-frequency spectral characteristics of optical scintillation is given. The analysis of the optical scintillation spectrum provides the basis for the simultaneous measurement of the stack gas flow velocity and particle concentration.
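The cross-correlation principle can be sketched numerically: the transit time between the two parallel optical paths is the lag that maximizes the correlation of the two scintillation signals, and velocity is the path separation divided by that delay. All numbers below (sampling rate, separation, delay) are hypothetical:

```python
import random

def xcorr_lag(a, b, max_lag):
    """Lag (in samples) at which b best matches a delayed copy of a."""
    best_lag, best_val = 0, float("-inf")
    n = len(a)
    for lag in range(max_lag + 1):
        val = sum(a[i] * b[i + lag] for i in range(n - max_lag))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

rng = random.Random(0)
fs = 1000.0        # sampling rate, Hz (assumed)
sep = 0.05         # separation of the two optical paths, m (assumed)
true_delay = 20    # transit time in samples (assumed)

# Downstream signal is a delayed copy of the upstream scintillation.
upstream = [rng.gauss(0.0, 1.0) for _ in range(1000)]
downstream = [0.0] * true_delay + upstream[:-true_delay]

lag = xcorr_lag(upstream, downstream, max_lag=50)
velocity = sep / (lag / fs)
print(lag, velocity)   # 20 2.5
```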

  14. Theoretical and computational analyses of LNG evaporator

    NASA Astrophysics Data System (ADS)

    Chidambaram, Palani Kumar; Jo, Yang Myung; Kim, Heuy Dong

    2017-04-01

    Theoretical and numerical analysis of the fluid flow and heat transfer inside an LNG evaporator is conducted in this work. Methane is used instead of LNG as the operating fluid, because methane constitutes over 80% of natural gas. The analytical calculations are performed using simple mass and energy balance equations, and are made to assess the pressure and temperature variations in the steam tube. Multiphase numerical simulations are performed by solving the governing equations (basic flow equations of continuity, momentum and energy) in a portion of the evaporator domain consisting of a single steam pipe. The flow equations are solved along with equations of species transport. Multiphase modeling is incorporated using the VOF method. Liquid methane is the primary phase; it vaporizes into the secondary phase, gaseous methane. Steam is another secondary phase, which flows through the heating coils. Turbulence is modeled by a two-equation turbulence model. Both the theoretical and numerical predictions are seen to match well with each other. Further parametric studies are planned based on the current research.
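For a single stream, the simple energy balance mentioned above reduces to Q = ṁ(h_fg + cp·ΔT): latent heat to vaporize the liquid plus sensible heat to superheat the gas. The property values below are rough, assumed figures for methane, not taken from the paper:

```python
def evaporator_duty(m_dot, h_fg, cp_gas, dT_superheat):
    """Heat duty (W) to vaporize and superheat a liquid stream:
    Q = m_dot * (h_fg + cp_gas * dT_superheat)."""
    return m_dot * (h_fg + cp_gas * dT_superheat)

# Assumed methane properties: latent heat ~511 kJ/kg, gas cp ~2.2 kJ/(kg K).
# 1 kg/s vaporized and superheated by 150 K:
Q = evaporator_duty(1.0, 511e3, 2.2e3, 150.0)
print(Q)  # 841000.0 W
```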

  15. Theoretical Notes on the Sociological Analysis of School Reform Networks

    ERIC Educational Resources Information Center

    Ladwig, James G.

    2014-01-01

    Nearly two decades ago, Ladwig outlined the theoretical and methodological implications of Bourdieu's concept of the social field for sociological analyses of educational policy and school reform. The current analysis extends this work to consider the sociological import of one of the most ubiquitous forms of educational reform found around…

  16. Sound propagation from a simple source in a wind tunnel

    NASA Technical Reports Server (NTRS)

    Cole, J. E., III

    1975-01-01

    The nature of the acoustic field of a simple source in a wind tunnel under flow conditions was examined theoretically and experimentally. The motivation of the study was to establish aspects of the theoretical framework for interpreting acoustic data taken in wind tunnels using in-wind microphones. Three distinct investigations were performed and are described in detail.

  17. Correcting the SIMPLE Model of Free Recall

    ERIC Educational Resources Information Center

    Lee, Michael D.; Pooley, James P.

    2013-01-01

    The scale-invariant memory, perception, and learning (SIMPLE) model developed by Brown, Neath, and Chater (2007) formalizes the theoretical idea that scale invariance is an important organizing principle across numerous cognitive domains and has made an influential contribution to the literature dealing with modeling human memory. In the context…

  18. Simple methods of exploiting the underlying structure of rule-based systems

    NASA Technical Reports Server (NTRS)

    Hendler, James

    1986-01-01

    Much recent work in the field of expert systems research has aimed at exploiting the underlying structure of the rule base for purposes of analysis. Such techniques as Petri nets and DAGs have been proposed as representational structures that allow complete analysis. Much has been made of proving isomorphisms between the rule bases and the mechanisms, and of examining the theoretical power of this analysis. In this paper we describe some early work on a new system which has much simpler (and thus, one hopes, more easily achieved) aims and less formality. The technique being examined is a very simple one: OPS5 programs are analyzed in a purely syntactic way and an FSA description is generated. In this paper we describe the technique and some user interface tools which exploit this structure.

  19. Theoretical NMR and conformational analysis of solvated oximes for organophosphates-inhibited acetylcholinesterase reactivation

    NASA Astrophysics Data System (ADS)

    da Silva, Jorge Alberto Valle; Modesto-Costa, Lucas; de Koning, Martijn C.; Borges, Itamar; França, Tanos Celmar Costa

    2018-01-01

    In this work, quaternary and non-quaternary oximes designed to bind at the peripheral site of acetylcholinesterase previously inhibited by organophosphates were investigated theoretically. Some of those oximes have a large number of degrees of freedom, thus requiring an accurate method to obtain molecular geometries. For this reason, density functional theory (DFT) was employed to refine their molecular geometries after conformational analysis and to compare their theoretical 1H and 13C nuclear magnetic resonance (NMR) signals in the gas phase and in solvent. A good agreement with experimental data was achieved, and the same theoretical approach was employed to obtain the geometries in a water environment for further studies.

  20. Contrast Analysis: A Tutorial

    ERIC Educational Resources Information Center

    Haans, Antal

    2018-01-01

    Contrast analysis is a relatively simple but effective statistical method for testing theoretical predictions about differences between group means against the empirical data. Despite its advantages, contrast analysis is hardly used to date, perhaps because it is not implemented in a convenient manner in many statistical software packages. This…
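A planned contrast tests a weighted combination of group means, with weights summing to zero: t = Σcᵢx̄ᵢ / √(MSE·Σcᵢ²/nᵢ). A minimal sketch on hypothetical data (three groups; the contrast asks whether group 2 differs from the average of groups 1 and 3):

```python
import math

def contrast_t(groups, weights):
    """t statistic for a planned contrast on k independent groups,
    using the pooled within-group variance (MSE) as the error term."""
    assert abs(sum(weights)) < 1e-12, "contrast weights must sum to zero"
    means = [sum(g) / len(g) for g in groups]
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    df = sum(len(g) for g in groups) - len(groups)
    mse = ss_within / df
    psi = sum(c * m for c, m in zip(weights, means))       # contrast value
    se = math.sqrt(mse * sum(c * c / len(g)
                             for c, g in zip(weights, groups)))
    return psi / se

# Hypothetical data: is group 2 above the average of groups 1 and 3?
t = contrast_t([[1, 2], [3, 5], [2, 4]], [-1, 2, -1])
print(round(t, 3))   # 1.65
```

Compared with an omnibus F test, the single degree of freedom contrast tests the theoretical prediction directly, which is the method's main appeal.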

  1. Regression Analysis by Example. 5th Edition

    ERIC Educational Resources Information Center

    Chatterjee, Samprit; Hadi, Ali S.

    2012-01-01

    Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…

  2. Simple Example of Backtest Overfitting (SEBO)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    In the field of mathematical finance, a "backtest" is the usage of historical market data to assess the performance of a proposed trading strategy. It is a relatively simple matter for a present-day computer system to explore thousands, millions or even billions of variations of a proposed strategy, and pick the best-performing variant as the "optimal" strategy "in sample" (i.e., on the input dataset). Unfortunately, such an "optimal" strategy often performs very poorly "out of sample" (i.e., on another dataset), because the parameters of the investment strategy have been overfit to the in-sample data, a situation known as "backtest overfitting". While the mathematics of backtest overfitting has been examined in several recent theoretical studies, here we pursue a more tangible analysis of this problem, in the form of an online simulator tool. Given an input random-walk time series, the tool develops an "optimal" variant of a simple strategy by exhaustively exploring all integer parameter values among a handful of parameters. That "optimal" strategy is overfit, since by definition a random walk is unpredictable. Then the tool tests the resulting "optimal" strategy on a second random-walk time series. In most runs using our online tool, the "optimal" strategy derived from the first time series performs poorly on the second time series, demonstrating how hard it is not to overfit a backtest. We offer this online tool, "Simple Example of Backtest Overfitting (SEBO)", to facilitate further research in this area.
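The mechanism described above is easy to reproduce in a few lines. The sketch below is a toy analogue of the SEBO idea, not the online tool itself: it tunes the lookback of a hypothetical momentum rule exhaustively on one random walk, then evaluates the chosen parameter on a second, independent walk, where any in-sample edge is spurious by construction:

```python
import random

def random_walk(n, rng):
    """Cumulative sum of +/-1 steps: an unpredictable price series."""
    prices, p = [], 0.0
    for _ in range(n):
        p += rng.choice((-1.0, 1.0))
        prices.append(p)
    return prices

def strategy_return(prices, lookback):
    """Toy momentum rule: hold long for one step whenever the price
    exceeds its value `lookback` steps ago, otherwise stay flat."""
    total = 0.0
    for t in range(lookback, len(prices) - 1):
        if prices[t] > prices[t - lookback]:
            total += prices[t + 1] - prices[t]
    return total

rng = random.Random(7)
in_sample = random_walk(2000, rng)
out_sample = random_walk(2000, rng)

# Exhaustive in-sample search over the integer parameter.
results = {lb: strategy_return(in_sample, lb) for lb in range(1, 21)}
best_lb = max(results, key=results.get)
print(results[best_lb], strategy_return(out_sample, best_lb))
```

The in-sample "optimal" return is biased upward by the selection step itself; the out-of-sample evaluation of the same parameter has no such bias, which is the whole point of the demonstration.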

  3. Cost analysis and outcomes of simple elbow dislocations

    PubMed Central

    Panteli, Michalis; Pountos, Ippokratis; Kanakaris, Nikolaos K; Tosounidis, Theodoros H; Giannoudis, Peter V

    2015-01-01

    AIM: To evaluate the management, clinical outcome and cost implications of three different treatment regimes for simple elbow dislocations. METHODS: Following institutional board approval, we performed a retrospective review of all consecutive patients treated for simple elbow dislocations in a Level I trauma centre between January 2008 and December 2010. Based on the length of elbow immobilisation (LOI), patients were divided into three groups (Group I, < 2 wk; Group II, 2-3 wk; and Group III, > 3 wk). Outcome was considered satisfactory when a patient could achieve a pain-free range of motion ≥ 100° (from 30° to 130°). The associated direct medical costs for the treatment of each patient were then calculated and analysed. RESULTS: We identified 80 patients who met the inclusion criteria. Due to loss to follow-up, 13 patients were excluded, leaving 67 patients for the final analysis. The mean LOI was 14 d (median 15 d; range 3-43 d), with a mean duration of hospital engagement of 67 d (median 57 d; range 10-351 d). Group III (prolonged immobilisation) had a statistically significantly worse outcome than Groups I and II (P = 0.04 and P = 0.01, respectively); however, there was no significant difference in outcome between Groups I and II (P = 0.30). No statistically significant difference in direct medical costs between the groups was identified. CONCLUSION: The length of elbow immobilisation does not influence the medical cost; however, immobilisation longer than three weeks is associated with persistent stiffness and a less satisfactory clinical outcome. PMID:26301180

  4. Meta-analysis of mismatch negativity to simple versus complex deviants in schizophrenia.

    PubMed

    Avissar, Michael; Xie, Shanghong; Vail, Blair; Lopez-Calderon, Javier; Wang, Yuanjia; Javitt, Daniel C

    2018-01-01

    Mismatch negativity (MMN) deficits in schizophrenia (SCZ) have been studied extensively since the early 1990s, with the vast majority of studies using simple auditory oddball task deviants that vary in a single acoustic dimension such as pitch or duration. There has been a growing interest in using more complex deviants that violate more abstract rules to probe higher order cognitive deficits. It is still unclear how sensory processing deficits compare to and contribute to higher order cognitive dysfunction, which can be investigated with later attention-dependent auditory event-related potential (ERP) components such as a subcomponent of P300, P3b. In this meta-analysis, we compared MMN deficits in SCZ using simple deviants to more complex deviants. We also pooled studies that measured MMN and P3b in the same study sample and examined the relationship between MMN and P3b deficits within study samples. Our analysis reveals that, to date, studies using simple deviants demonstrate larger deficits than those using complex deviants, with effect sizes in the range of moderate to large. The difference in effect sizes between deviant types was reduced significantly when accounting for magnitude of MMN measured in healthy controls. P3b deficits, while large, were only modestly greater than MMN deficits (d=0.21). Taken together, our findings suggest that MMN to simple deviants may still be optimal as a biomarker for SCZ and that sensory processing dysfunction contributes significantly to MMN deficit and disease pathophysiology. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Phenolic Analysis and Theoretic Design for Chinese Commercial Wines' Authentication.

    PubMed

    Li, Si-Yu; Zhu, Bao-Qing; Reeves, Malcolm J; Duan, Chang-Qing

    2018-01-01

    To develop a robust tool for varietal, regional, and vintage authentication of Chinese commercial wines, phenolic compounds in 121 Chinese commercial dry red wines were detected and quantified using high-performance liquid chromatography triple-quadrupole mass spectrometry (HPLC-QqQ-MS/MS), and the differentiation abilities of principal component analysis (PCA), partial least squares discriminant analysis (PLS-DA), and orthogonal partial least squares discriminant analysis (OPLS-DA) were compared. OPLS-DA models outperformed PCA and PLS-DA in differentiating wines according to variety (Cabernet Sauvignon or other varieties), region (east or west Cabernet Sauvignon wines), and vintage (young or old Cabernet Sauvignon wines). The S-plots provided by the OPLS-DA models identified the key phenolic compounds that were both statistically and biochemically significant in sample differentiation. Moreover, the potential of the OPLS-DA models for finer-grained differentiation of regional and vintage information proved promising. On the basis of these results, a theoretic design for wine authentication was proposed for the first time, which may be helpful in the practical authentication of more commercial wines. The phenolic data of the 121 Chinese commercial dry red wines were processed with different statistical tools for varietal, regional, and vintage differentiation, and a promising theoretical design was summarized that may assist wine authentication in practical situations. © 2017 Institute of Food Technologists®.
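    As a rough illustration of the chemometric pipeline, the sketch below runs PCA and a nearest-centroid rule as a crude stand-in for the (O)PLS-DA discriminant step; the "phenolic profiles", class structure, and dimensions are all synthetic inventions, not the paper's data.

```python
import numpy as np

# PCA + nearest-centroid stand-in for the (O)PLS-DA step, on synthetic
# "phenolic profiles" (all data and dimensions invented for illustration).
rng = np.random.default_rng(1)
n_per_class, n_phenolics = 30, 12

cabernet = rng.normal(loc=1.0, scale=0.5, size=(n_per_class, n_phenolics))
other = rng.normal(loc=0.0, scale=0.5, size=(n_per_class, n_phenolics))
X = np.vstack([cabernet, other])
y = np.array([1] * n_per_class + [0] * n_per_class)

# PCA via SVD of the mean-centred data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T              # first two principal-component scores

# Crude discriminant: assign each wine to the nearer class centroid in PC space.
c1 = scores[y == 1].mean(axis=0)
c0 = scores[y == 0].mean(axis=0)
pred = (np.linalg.norm(scores - c1, axis=1)
        < np.linalg.norm(scores - c0, axis=1)).astype(int)
accuracy = float((pred == y).mean())
print(accuracy)
```

    With well-separated synthetic classes the PC scores cluster cleanly; real phenolic data would of course need the supervised (O)PLS-DA machinery the abstract describes.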

  6. Simple radiative transfer model for relationships between canopy biomass and reflectance

    NASA Technical Reports Server (NTRS)

    Park, J. K.; Deering, D. W.

    1982-01-01

    A modified Kubelka-Munk model has been utilized to derive useful equations for the analysis of apparent canopy reflectance. Based on the solution to the model simple working equations were formulated by employing reflectance characteristic parameters. The relationships derived show the asymptotic nature of reflectance data that is typically observed in remote sensing studies of plant biomass. They also establish the range of expected apparent canopy reflectance values for specific plant canopy types. The usefulness of the simplified equations was demonstrated by the exceptionally close fit of the theoretical curves to two separately acquired data sets for alfalfa and shortgrass prairie canopies.
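    The asymptotic reflectance-versus-biomass behaviour described above can be illustrated numerically; the exponential-saturation form and all parameter values below are my own assumptions, since the paper's working equations are not reproduced in the abstract.

```python
import math

# Hypothetical exponential-saturation reflectance model (illustrative only):
# R(B) = R_inf + (R_soil - R_inf) * exp(-k * B), rising from bare-soil
# reflectance toward an asymptote as canopy biomass B increases.
R_inf, R_soil, k = 0.50, 0.10, 0.8   # invented parameter values

def canopy_reflectance(biomass):
    return R_inf + (R_soil - R_inf) * math.exp(-k * biomass)

curve = [round(canopy_reflectance(b), 3) for b in (0.0, 1.0, 2.0, 5.0, 10.0)]
print(curve)
```

    The curve rises steeply at low biomass and flattens toward R_inf, mirroring the saturation routinely observed in remote-sensing biomass studies.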

  7. Security Analysis of Selected AMI Failure Scenarios Using Agent Based Game Theoretic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Schlicher, Bob G; Sheldon, Frederick T

    Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified against the results from game theory analysis and further used to explore larger-scale, real-world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the Advanced Metering Infrastructure (AMI) functional domain, for which the National Electric Sector Cybersecurity Organization Resource (NESCOR) working group has documented 29 failure scenarios. The strategy for the game was developed by analyzing five electric sector representative failure scenarios contained in the AMI functional domain. These five selected scenarios were characterized into three specific threat categories affecting confidentiality, integrity and availability (CIA). The analysis using our ABGT simulation demonstrates how to model the AMI functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA.

  8. THE MATHEMATICAL ANALYSIS OF A SIMPLE DUEL

    DTIC Science & Technology

    The principles and techniques of simple Markov processes are used to analyze a simple duel to determine the limiting state probabilities (i.e., the probabilities of occurrence of the various possible outcomes of the duel). The duel is one in which A fires at B at a rate of r_A shots per minute
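    A minimal sketch of such a duel as a continuous-time Markov process (the rates, kill probabilities, and the symmetric formulation are assumptions for illustration; the report's exact setup is truncated in the snippet above):

```python
# Continuous-time Markov duel: A fires at rate r_a, B at rate r_b, each shot
# killing with a fixed probability (all parameter values hypothetical).
r_a, r_b = 3.0, 2.0   # firing rates, shots per minute
p_a, p_b = 0.4, 0.5   # single-shot kill probabilities

# Thinning a Poisson stream of shots by the kill probability leaves a Poisson
# stream of lethal shots; the duel ends at the first lethal shot of either stream.
lam_a = r_a * p_a
lam_b = r_b * p_b

p_A_wins = lam_a / (lam_a + lam_b)   # limiting probability that A survives
p_B_wins = lam_b / (lam_a + lam_b)
print(p_A_wins, p_B_wins)
```

    The two limiting state probabilities sum to one: with these rates the duel ends with probability one, in A's favour with probability 1.2/2.2.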

  9. Global Study of the Simple Pendulum by the Homotopy Analysis Method

    ERIC Educational Resources Information Center

    Bel, A.; Reartes, W.; Torresi, A.

    2012-01-01

    Techniques are developed to find all periodic solutions in the simple pendulum by means of the homotopy analysis method (HAM). This involves the solution of the equations of motion in two different coordinate representations. Expressions are obtained for the cycles and periods of oscillations with a high degree of accuracy in the whole range of…

  10. Solvent-Ion Interactions in Salt Water: A Simple Experiment.

    ERIC Educational Resources Information Center

    Willey, Joan D.

    1984-01-01

    Describes a procedurally quick, simple, and inexpensive experiment which illustrates the magnitude and some effects of solvent-ion interactions in aqueous solutions. Theoretical information, procedures, and examples of temperature, volume and hydration number calculations are provided. (JN)

  11. A simple model of hysteresis behavior using spreadsheet analysis

    NASA Astrophysics Data System (ADS)

    Ehrmann, A.; Blachowicz, T.

    2015-01-01

    Hysteresis loops occur in many scientific and technical problems: most prominently as the field-dependent magnetization of ferromagnetic materials, but also in stress-strain curves measured by tensile tests with thermal effects, in liquid-solid phase transitions, in cell biology, and in economics. While several mathematical models exist that aim to calculate hysteresis energies and other parameters, here we offer a simple model of a general hysteretic system that shows different hysteresis loops depending on the chosen parameters. The calculation, based on basic spreadsheet analysis plus a simple macro code, can be used by students to understand how such systems work and how the parameters influence the response of the system to an external field. Importantly, in the step-by-step mode, each change of the system state relative to the previous step becomes visible. The simple program can be extended in several ways, enabling the building of a tool capable of answering real physical questions in the broad field of magnetism as well as in other scientific areas in which similar hysteresis loops occur.
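    A step-by-step model in this spirit can be written directly. The mean-field update rule and all parameter values below are my own illustrative assumptions, not the authors' spreadsheet: a feedback term a*m makes the relaxed state history-dependent, which opens the loop.

```python
import math

# Step-by-step hysteresis: at each field step the state m relaxes partway
# toward tanh((H + a*m)/w); the feedback a*m creates bistability and a loop.
a, w, relax = 2.0, 0.5, 0.5   # feedback strength, width, relaxation rate (invented)

def sweep(fields, m0):
    """Sweep the external field H over 'fields', relaxing m at each step."""
    m, out = m0, []
    for h in fields:
        for _ in range(200):  # relax toward the (history-dependent) fixed point
            m += relax * (math.tanh((h + a * m) / w) - m)
        out.append(m)
    return out

n = 50
up = [-2 + 4 * i / n for i in range(n + 1)]   # field swept -2 -> +2
down = list(reversed(up))                     # field swept +2 -> -2

m_up = sweep(up, m0=-1.0)     # ascending branch, starting saturated negative
m_down = sweep(down, m0=1.0)  # descending branch, starting saturated positive

# The two branches disagree at H = 0 (remanence): that opening is the loop.
remanence_gap = m_down[down.index(0.0)] - m_up[up.index(0.0)]
print(remanence_gap)
```

    Stepping through the sweep one field value at a time reproduces the "each change becomes visible" pedagogy the abstract describes.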

  12. Simple Deterministically Constructed Recurrent Neural Networks

    NASA Astrophysics Data System (ADS)

    Rodan, Ali; Tiňo, Peter

    A large number of models for time series processing, forecasting or modeling follow a state-space formulation. Models in the specific class of state-space approaches referred to as Reservoir Computing fix their state-transition function. The state space with the associated state-transition structure forms a reservoir, which is supposed to be sufficiently complex to capture a large number of features of the input stream that can potentially be exploited by the reservoir-to-output readout mapping. The largely "black box" character of reservoirs prevents a deeper theoretical investigation of the dynamical properties of successful reservoirs. Reservoir construction is largely driven by a series of (more-or-less) ad hoc randomized model-building stages, with both researchers and practitioners having to rely on trial and error. We show that a very simple deterministically constructed reservoir with a simple cycle topology gives performance comparable to that of the Echo State Network (ESN) on a number of time series benchmarks. Moreover, we argue that the memory capacity of such a model can be made arbitrarily close to the proven theoretical limit.
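    A minimal sketch of such a cycle reservoir, under stated assumptions: one shared cycle weight r, one shared input-weight magnitude v with alternating signs (the actual construction fixes its sign sequence deterministically by other means), and a ridge-regression readout; the benchmark task here is an invented one-step-ahead prediction of a noisy sine.

```python
import numpy as np

# Simple cycle reservoir: the recurrent matrix is a single unidirectional ring
# with one shared weight; only the linear readout is trained.
rng = np.random.default_rng(2)
N, r, v = 50, 0.9, 0.5

W = np.zeros((N, N))
for i in range(N):
    W[(i + 1) % N, i] = r        # ring of weight r: spectral radius is r < 1

w_in = v * np.array([(-1.0) ** i for i in range(N)])  # fixed-sign input weights

T = 600
u = np.sin(0.2 * np.arange(T + 1)) + 0.01 * rng.standard_normal(T + 1)

X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])   # fixed state-transition function
    X[t] = x

washout = 100                          # discard the initial transient
A, y = X[washout:], u[washout + 1: T + 1]
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ y)  # ridge readout
nmse = float(np.mean((A @ w_out - y) ** 2) / np.var(y))
print(nmse)
```

    Despite the entirely deterministic, one-parameter reservoir topology, the normalized prediction error on this toy task is small, which is the paper's point in miniature.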

  13. Theoretical analysis of a ceramic plate thickness-shear mode piezoelectric transformer.

    PubMed

    Xu, Limei; Zhang, Ying; Fan, Hui; Hu, Junhui; Yang, Jiashi

    2009-03-01

    We perform a theoretical analysis on a ceramic plate piezoelectric transformer operating with thickness-shear modes. Mindlin's first-order theory of piezoelectric plates is employed, and a forced vibration solution is obtained. Transforming ratio, resonant frequencies, and vibration mode shapes are calculated, and the effects of plate thickness and electrode dimension are examined.

  14. Rendering the "Not-So-Simple" Pendulum Experimentally Accessible.

    ERIC Educational Resources Information Center

    Jackson, David P.

    1996-01-01

    Presents three methods for obtaining experimental data related to acceleration of a simple pendulum. Two of the methods involve angular position measurements and the subsequent calculation of the acceleration while the third method involves a direct measurement of the acceleration. Compares these results with theoretical calculations and…

  15. Rockfall travel distances theoretical distributions

    NASA Astrophysics Data System (ADS)

    Jaboyedoff, Michel; Derron, Marc-Henri; Pedrazzini, Andrea

    2017-04-01

    The probability of propagation of rockfalls is a key part of hazard assessment, because it permits extrapolation of the probability of rockfall propagation either from partial data or purely theoretically. The propagation can be assumed to be frictional, which permits describing the average propagation by a line of kinetic energy corresponding to the loss of energy along the path. But the loss of energy can also be modelled as a multiplicative process or a purely random process. The distributions of rockfall block stop points can be deduced from such simple models; they lead to Gaussian, inverse-Gaussian, log-normal or negative exponential distributions. The theoretical background is presented, and comparisons of some of these models with existing data indicate that the assumptions are relevant. The results are either based on theoretical considerations or obtained by fitting. They are potentially very useful for rockfall hazard zoning and risk assessment. This approach will need further investigation.
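    For example, one of the candidate distributions can be fitted to runout data and inverted for zoning. The sketch below uses synthetic "stop points" and picks the log-normal purely for illustration; it is only one of the options listed above.

```python
import math
import random

# Fit a log-normal to synthetic runout distances and query an exceedance
# probability for hazard zoning (data and parameters invented).
random.seed(3)
distances = [random.lognormvariate(mu=4.0, sigma=0.5) for _ in range(2000)]

logs = [math.log(d) for d in distances]
mu_hat = sum(logs) / len(logs)
sigma_hat = math.sqrt(sum((x - mu_hat) ** 2 for x in logs) / len(logs))

def prob_beyond(d):
    # P(stop point farther than d) for the fitted log-normal
    # (a normal tail probability in log space, via erfc).
    z = (math.log(d) - mu_hat) / sigma_hat
    return 0.5 * math.erfc(z / math.sqrt(2.0))

p150 = prob_beyond(150.0)   # e.g. chance a block passes the 150 m line
print(mu_hat, sigma_hat, p150)
```

    With real inventories, the same fit-and-invert step would be repeated for each candidate distribution and the best-supported one retained for zoning.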

  16. An Isotopic Dilution Experiment Using Liquid Scintillation: A Simple Two-System, Two-Phase Analysis.

    ERIC Educational Resources Information Center

    Moehs, Peter J.; Levine, Samuel

    1982-01-01

    A simple isotopic dilution analysis whose principles apply to methods of more complex radioanalyses is described. Suitable for clinical and instrumental analysis chemistry students, the experimental manipulations are kept to a minimum, involving only aqueous extraction before counting. Background information, procedures, and results are discussed.…

  17. Superimposed Code Theoretic Analysis of DNA Codes and DNA Computing

    DTIC Science & Technology

    2008-01-01

    AFRL-RI-RS-TR-2007-288, Final Technical Report, January 2008: Superimposed Code Theoretic Analysis of DNA Codes and DNA Computing. …complements of one another, and the DNA duplex formed is a Watson-Crick (WC) duplex. However, there are many instances when the formation of non-WC… that the user's requirements for probe selection are met based on the Watson-Crick probe locality within a target. The second type, called…

  18. A theoretical analysis of the current-voltage characteristics of solar cells

    NASA Technical Reports Server (NTRS)

    Fang, R. C. Y.; Hauser, J. R.

    1977-01-01

    The correlation of theoretical and experimental data is discussed along with the development of a complete solar cell analysis. The dark current-voltage characteristics and the parameters for solar cells are analyzed. Series resistance and impurity-gradient effects on solar cells were studied, and the effects of nonuniformities on solar cell performance were analyzed.
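    The I-V behaviour with series resistance can be sketched with the standard single-diode equation; all parameter values below are hypothetical, and the paper's full analysis (impurity gradients, nonuniformities) is not modelled.

```python
import math

# Single-diode cell: I = I_L - I_0*(exp((V + I*Rs)/(n*Vt)) - 1), solved for I
# at each terminal voltage V by damped fixed-point iteration.
I_L, I_0 = 0.035, 1e-9            # photocurrent, saturation current [A] (hypothetical)
n_id, Vt, Rs = 1.5, 0.02585, 0.5  # ideality factor, thermal voltage [V], series R [ohm]

def cell_current(V, iters=200):
    I = I_L
    for _ in range(iters):  # damping keeps the exponential term stable
        I = 0.5 * I + 0.5 * (I_L - I_0 * math.expm1((V + I * Rs) / (n_id * Vt)))
    return I

short_circuit = cell_current(0.0)

# Open-circuit voltage: bisect for the terminal voltage where I crosses zero.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if cell_current(mid) > 0 else (lo, mid)
V_oc = (lo + hi) / 2
print(short_circuit, V_oc)
```

    At open circuit no current flows through Rs, so V_oc reduces to the familiar n*Vt*ln(I_L/I_0 + 1); the series resistance instead shapes the curve near the maximum-power point.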

  19. Theoretical and methodological approaches in discourse analysis.

    PubMed

    Stevenson, Chris

    2004-01-01

    Discourse analysis (DA) embodies two main approaches: Foucauldian DA and radical social constructionist DA. Both are underpinned by social constructionism to a lesser or greater extent. Social constructionism has contested areas in relation to power, embodiment, and materialism, although Foucauldian DA does focus on the issue of power. Embodiment and materialism may be especially relevant for researchers of nursing, where the physical body is prominent. However, the contested nature of social constructionism allows a fusion of theoretical and methodological approaches tailored to a specific research interest. In this paper, Chris Stevenson suggests a framework for working out and declaring the DA approach to be taken in relation to a research area, as well as to aid in anticipating methodological critique. Method, validity, reliability and scholarship are discussed from within a discourse analytic frame of reference.

  20. Theoretical and methodological approaches in discourse analysis.

    PubMed

    Stevenson, Chris

    2004-10-01

    Discourse analysis (DA) embodies two main approaches: Foucauldian DA and radical social constructionist DA. Both are underpinned by social constructionism to a lesser or greater extent. Social constructionism has contested areas in relation to power, embodiment, and materialism, although Foucauldian DA does focus on the issue of power. Embodiment and materialism may be especially relevant for researchers of nursing, where the physical body is prominent. However, the contested nature of social constructionism allows a fusion of theoretical and methodological approaches tailored to a specific research interest. In this paper, Chris Stevenson suggests a framework for working out and declaring the DA approach to be taken in relation to a research area, as well as to aid in anticipating methodological critique. Method, validity, reliability and scholarship are discussed from within a discourse analytic frame of reference.

  1. Dependence of tropical cyclone development on coriolis parameter: A theoretical model

    NASA Astrophysics Data System (ADS)

    Deng, Liyuan; Li, Tim; Bi, Mingyu; Liu, Jia; Peng, Melinda

    2018-03-01

    A simple theoretical model was formulated to investigate how tropical cyclone (TC) intensification depends on the Coriolis parameter. The theoretical framework includes a two-layer free atmosphere and an Ekman boundary layer at the bottom. The linkage between the free atmosphere and the boundary layer is through the Ekman pumping vertical velocity in proportion to the vorticity at the top of the boundary layer. The closure of this linear system assumes a simple relationship between the free atmosphere diabatic heating and the boundary layer moisture convergence. Under a set of realistic atmospheric parameter values, the model suggests that the most preferred latitude for TC development is around 5° without considering other factors. The theoretical result is confirmed by high-resolution WRF model simulations in a zero-mean flow and a constant SST environment on an f -plane with different Coriolis parameters. Given an initially balanced weak vortex, the TC-like vortex intensifies most rapidly at the reference latitude of 5°. Thus, the WRF model simulations confirm the f-dependent characteristics of TC intensification rate as suggested by the theoretical model.

  2. Simple arithmetic: not so simple for highly math anxious individuals

    PubMed Central

    Sprute, Lisa; Maloney, Erin A; Beilock, Sian L; Berman, Marc G

    2017-01-01

    Abstract Fluency with simple arithmetic, typically achieved in early elementary school, is thought to be one of the building blocks of mathematical competence. Behavioral studies with adults indicate that math anxiety (feelings of tension or apprehension about math) is associated with poor performance on cognitively demanding math problems. However, it remains unclear whether there are fundamental differences in how high and low math anxious individuals approach overlearned simple arithmetic problems that are less reliant on cognitive control. The current study used functional magnetic resonance imaging to examine the neural correlates of simple arithmetic performance across high and low math anxious individuals. We implemented a partial least squares analysis, a data-driven, multivariate analysis method to measure distributed patterns of whole-brain activity associated with performance. Despite overall high simple arithmetic performance across high and low math anxious individuals, performance was differentially dependent on the fronto-parietal attentional network as a function of math anxiety. Specifically, low—compared to high—math anxious individuals perform better when they activate this network less—a potential indication of more automatic problem-solving. These findings suggest that low and high math anxious individuals approach even the most fundamental math problems differently. PMID:29140499

  3. Theoretical analysis of fused tapered side-pumping combiner for all-fiber lasers and amplifiers

    NASA Astrophysics Data System (ADS)

    Lei, Chengmin; Chen, Zilun; Leng, Jinyong; Gu, Yanran; Hou, Jing

    2017-05-01

    We report a detailed theoretical analysis of the influence of the fused depth, launch mode and taper ratio on the performance of a side-pumping combiner. The theoretical analysis indicates that the coupling efficiency and loss mechanism of the combiner are closely related to the fused depth, taper ratio and launch mode. Experimentally, we fabricated combiners consisting of two pump fibers (220/242 μm, NA = 0.22) and a signal fiber (20/400 μm, NA = 0.46). The combined pump coupling efficiency of the two pump ports is 97.2%, with a maximum power handling of 1.8 kW, and the insertion signal loss is less than 3%.

  4. Theoretical analysis of hot electron dynamics in nanorods

    PubMed Central

    Kumarasinghe, Chathurangi S.; Premaratne, Malin; Agrawal, Govind P.

    2015-01-01

    Localised surface plasmons create a non-equilibrium high-energy electron gas in nanostructures that can be injected into other media in energy harvesting applications. Here, we derive the rate of this localised-surface-plasmon mediated generation of hot electrons in nanorods and the rate of injecting them into other media by considering quantum mechanical motion of the electron gas. Specifically, we use the single-electron wave function of a particle in a cylindrical potential well and the electric field enhancement factor of an elongated ellipsoid to derive the energy distribution of electrons after plasmon excitation. We compare the performance of nanorods with equivolume nanoparticles of other shapes such as nanospheres and nanopallets and report that nanorods exhibit significantly better performance over a broad spectrum. We present a comprehensive theoretical analysis of how different parameters contribute to efficiency of hot-electron harvesting in nanorods and reveal that increasing the aspect ratio can increase the hot-electron generation and injection, but the volume shows an inverse dependency when efficiency per unit volume is considered. Further, the electron thermalisation time shows much less influence on the injection rate. Our derivations and results provide the much needed theoretical insight for optimization of hot-electron harvesting process in highly adaptable metallic nanorods. PMID:26202823

  5. A Unifying Framework for Causal Analysis in Set-Theoretic Multimethod Research

    ERIC Educational Resources Information Center

    Rohlfing, Ingo; Schneider, Carsten Q.

    2018-01-01

    The combination of Qualitative Comparative Analysis (QCA) with process tracing, which we call set-theoretic multimethod research (MMR), is steadily becoming more popular in empirical research. Despite the fact that both methods have an elective affinity based on set theory, it is not obvious how a within-case method operating in a single case and a…

  6. Information theoretic analysis of linear shift-invariant edge-detection operators

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Rahman, Zia-ur

    2012-06-01

    Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the influence of the image gathering process. However, experiments show that the image gathering process has a profound impact on the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive theoretical analysis of visual communication channels, in which the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. We perform an end-to-end information-theoretic system analysis to assess linear shift-invariant edge-detection algorithms. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and of the parameters, such as sampling and additive noise, that define the image gathering system. The edge-detection algorithm is regarded as having high performance only if the information rate from the scene to the edge image approaches the maximum possible. This goal can be achieved only by jointly optimizing all processes. Our information-theoretic assessment provides a new tool that allows us to compare different linear shift-invariant edge detectors in a common environment.

  7. Study of Simple MPPT Converter Topologies for Grid Integration of Photovoltaic Systems

    NASA Astrophysics Data System (ADS)

    Zakis, Janis; Vinnikov, Dmitri

    2011-01-01

    This paper presents a study of two simple MPPT converter topologies for grid integration of photovoltaic (PV) systems. A general description and a steady-state analysis of the discussed converters are presented. The main operating modes of the converters are explained, and calculations of the main circuit element parameters are provided. Experimental setups of the MPPT converters with a power rating of 800 W were developed and verified by means of the main operating waveforms. The experimental and theoretical boost properties of the studied topologies are also compared. Finally, the possibilities for integrating the presented MPPT converters with a grid-side inverter are discussed and verified by simulations.
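    Independent of converter topology, the MPPT function itself is often a perturb-and-observe loop. The sketch below runs such a loop against a toy PV curve; the curve shape and every number in it are invented for illustration, not taken from the studied converters.

```python
import math

# Generic perturb-and-observe MPPT against a toy PV source.
def pv_current(v, i_sc=8.0, v_oc=40.0, a=3.0):
    # Toy single-diode-shaped I-V curve (invented parameters).
    return i_sc * (1.0 - math.expm1(v / a) / math.expm1(v_oc / a))

def pv_power(v):
    return v * pv_current(v)

v, step, p_prev = 20.0, 0.1, 0.0
for _ in range(2000):
    p = pv_power(v)
    if p < p_prev:
        step = -step   # power fell: reverse the perturbation direction
    p_prev = p
    v += step          # perturb the operating voltage and observe again

v_mpp, p_mpp = v, pv_power(v)
print(v_mpp, p_mpp)
```

    The fixed-step loop settles into a small oscillation around the maximum-power point, which is the characteristic (and well-known) trade-off of plain perturb-and-observe tracking.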

  8. Simple lock-in detection technique utilizing multiple harmonics for digital PGC demodulators.

    PubMed

    Duan, Fajie; Huang, Tingting; Jiang, Jiajia; Fu, Xiao; Ma, Ling

    2017-06-01

    A simple lock-in detection technique especially suited for digital phase-generated carrier (PGC) demodulators is proposed in this paper. It mixes the interference signal with rectangular waves whose Fourier expansions contain multiple odd or multiple even harmonics of the carrier to recover the quadrature components needed for interference phase demodulation. In this way, the use of a multiplier is avoided and the efficiency of the algorithm is improved. Noise performance with regard to light intensity variation and circuit noise is analyzed theoretically for both the proposed technique and the traditional lock-in technique, and results show that the former provides a better signal-to-noise ratio than the latter with proper modulation depth and average interference phase. Detailed simulations were conducted and the theoretical analysis was verified. A fiber-optic Michelson interferometer was constructed and the feasibility of the proposed technique is demonstrated.
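    The multiplier-free mixing idea can be illustrated on a plain test tone: mixing with ±1 rectangular references, whose Fourier series contain the odd carrier harmonics, and then averaging recovers the quadrature components up to a known 2/π factor. This is a generic lock-in sketch with invented signal parameters, not the PGC demodulator itself.

```python
import math

# Rectangular-wave lock-in: the references' fundamentals are (4/pi)cos and
# (4/pi)sin, so averaging gives I ~ (2A/pi)cos(phi) and Q ~ -(2A/pi)sin(phi).
f_c, fs, n = 50.0, 10000.0, 10000   # carrier [Hz], sample rate [Hz], samples (1 s)
amp, phase = 0.7, 0.9               # test-tone amplitude and phase [rad]

t = [k / fs for k in range(n)]
sig = [amp * math.cos(2 * math.pi * f_c * tk + phase) for tk in t]

# Mixing with +/-1 references needs only sign flips, no multiplier.
ref_i = [1.0 if math.cos(2 * math.pi * f_c * tk) >= 0 else -1.0 for tk in t]
ref_q = [1.0 if math.sin(2 * math.pi * f_c * tk) >= 0 else -1.0 for tk in t]

I = sum(s * r for s, r in zip(sig, ref_i)) / n
Q = sum(s * r for s, r in zip(sig, ref_q)) / n

amp_est = (math.pi / 2.0) * math.hypot(I, Q)   # undo the 2/pi mixing gain
phase_est = math.atan2(-Q, I)
print(amp_est, phase_est)
```

    Averaging over an integer number of carrier periods suppresses the references' higher odd harmonics, so the estimates land close to the true amplitude and phase.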

  9. Social Network Analysis: A Simple but Powerful Tool for Identifying Teacher Leaders

    ERIC Educational Resources Information Center

    Smith, P. Sean; Trygstad, Peggy J.; Hayes, Meredith L.

    2018-01-01

    Instructional teacher leadership is central to a vision of distributed leadership. However, identifying instructional teacher leaders can be a daunting task, particularly for administrators who find themselves either newly appointed or faced with high staff turnover. This article describes the use of social network analysis (SNA), a simple but…

  10. A Simple Approach to Achieve Modified Projective Synchronization between Two Different Chaotic Systems

    PubMed Central

    2013-01-01

    A new approach, the projective system approach, is proposed to realize modified projective synchronization between two different chaotic systems. By simple analysis of trajectories in the phase space, a projective system of the original chaotic systems is obtained to replace the error system in judging the occurrence of modified projective synchronization. Theoretical analysis and numerical simulations show that, although the projective system may not be unique, modified projective synchronization can be achieved provided that the origin of any projective system is asymptotically stable. Furthermore, an example is presented to illustrate that even a necessary and sufficient condition for modified projective synchronization can be derived by using the projective system approach. PMID:24187522

  11. Is simple nephrectomy truly simple? Comparison with the radical alternative.

    PubMed

    Connolly, S S; O'Brien, M Frank; Kunni, I M; Phelan, E; Conroy, R; Thornhill, J A; Grainger, R

    2011-03-01

    The Oxford English Dictionary defines the term "simple" as "easily done" and "uncomplicated". We tested the validity of this terminology in relation to open nephrectomy surgery. Retrospective review of 215 patients undergoing open simple (n = 89) or radical (n = 126) nephrectomy in a single university-affiliated institution between 1998 and 2002. Operative time (OT), estimated blood loss (EBL), operative complications (OC) and length of stay in hospital (LOS) were analysed. Statistical analysis employed Fisher's exact test and Stata Release 8.2. Simple nephrectomy was associated with shorter OT (mean 126 vs. 144 min; p = 0.002), reduced EBL (mean 729 vs. 859 cc; p = 0.472), lower OC (9 vs. 17%; p = 0.087), and shorter LOS (mean 6 vs. 8 days; p < 0.001). All parameters suggest a favourable outcome for the simple nephrectomy group, supporting the use of this terminology. This implies that "simple" nephrectomies are truly easier to perform, with fewer complications, than their radical counterpart.

  12. Theoretical analysis of impact in composite plates

    NASA Technical Reports Server (NTRS)

    Moon, F. C.

    1973-01-01

    The calculated stresses and displacements induced in anisotropic plates by short-duration impact forces are presented. The theoretical model represents the response of fiber composite turbine fan blades to impact by foreign objects such as stones and hailstones. In this model the impact force is determined using Hertz impact theory. The plate response treats the laminated blade as an equivalent anisotropic material using a form of Mindlin's theory for crystal plates. The analysis makes use of a computational tool, the fast Fourier transform. Results are presented in the form of stress contour plots in the plane of the plate for various times after impact. Examination of the maximum stresses due to impact versus ply layup angle reveals that the ±15° layup angle gives lower flexural stresses than the 0°, ±30°, and ±45° cases.

  13. Theoretical Analysis of Rain Attenuation Probability

    NASA Astrophysics Data System (ADS)

    Roy, Surendra Kr.; Jha, Santosh Kr.; Jha, Lallan

    2007-07-01

    Satellite communication technologies are now highly developed, and high-quality, distance-independent services have expanded over a very wide area. For the system design of the Hokkaido integrated telecommunications (HIT) network, outages of satellite links due to rain attenuation in Ka frequency bands must first be overcome. In this paper a theoretical analysis of rain attenuation probability on a slant path is made. The proposed formula is based on the Weibull distribution and incorporates recent ITU-R recommendations concerning the necessary rain-rate and rain-height inputs. The error behaviour of the model was tested against the leading rain attenuation prediction model recommended by ITU-R for a large number of experiments at different probability levels. The novel slant-path rain attenuation prediction model exhibits a behaviour similar to the ITU-R model at low time percentages and a better root-mean-square error performance for probability levels above 0.02%. The presented models have the advantage of implementation with little complexity and are considered useful for educational and back-of-the-envelope computations.
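    The Weibull form lends itself to exactly the back-of-the-envelope use the abstract mentions. With invented scale and shape values (not the paper's fitted parameters), both directions of the model read:

```python
import math

# Weibull rain-attenuation statistics for a slant path (parameters invented).
scale_db, shape = 2.0, 0.7   # hypothetical Weibull scale [dB] and shape

def prob_exceeded(a_db):
    # Fraction of time the path attenuation exceeds a_db under the Weibull model.
    return math.exp(-((a_db / scale_db) ** shape))

def attenuation_at(p):
    # Attenuation depth exceeded for a fraction p of the time (inverse of above).
    return scale_db * (-math.log(p)) ** (1.0 / shape)

p_10db = prob_exceeded(10.0)      # how often a 10 dB fade margin is insufficient
a_001 = attenuation_at(0.0001)    # fade depth exceeded 0.01% of the time
print(p_10db, a_001)
```

    In practice the scale and shape would be fitted from the rain-rate and rain-height inputs prescribed by the ITU-R recommendations, then queried at the availability target (e.g. 0.01% of the time) to size the link margin.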

  14. Simple arithmetic: not so simple for highly math anxious individuals.

    PubMed

    Chang, Hyesang; Sprute, Lisa; Maloney, Erin A; Beilock, Sian L; Berman, Marc G

    2017-12-01

    Fluency with simple arithmetic, typically achieved in early elementary school, is thought to be one of the building blocks of mathematical competence. Behavioral studies with adults indicate that math anxiety (feelings of tension or apprehension about math) is associated with poor performance on cognitively demanding math problems. However, it remains unclear whether there are fundamental differences in how high and low math-anxious individuals approach overlearned simple arithmetic problems that are less reliant on cognitive control. The current study used functional magnetic resonance imaging to examine the neural correlates of simple arithmetic performance across high and low math-anxious individuals. We implemented a partial least squares analysis, a data-driven, multivariate analysis method, to measure distributed patterns of whole-brain activity associated with performance. Despite overall high simple arithmetic performance across high and low math-anxious individuals, performance was differentially dependent on the fronto-parietal attentional network as a function of math anxiety. Specifically, low- compared to high-math-anxious individuals perform better when they activate this network less, a potential indication of more automatic problem solving. These findings suggest that low and high math-anxious individuals approach even the most fundamental math problems differently. © The Author (2017). Published by Oxford University Press.

  15. A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading

    NASA Astrophysics Data System (ADS)

    Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo

    A simple exact error rate analysis is presented for random binary direct-sequence code division multiple access (DS-CDMA) with a general pulse shape and a flat Nakagami fading channel. First, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression for the characteristic function (CF) of the MAI is derived in a straightforward manner. Finally, an exact expression for the error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be evaluated much more easily than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).
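    The paper's exact CF expressions are not reproduced here, but a Monte Carlo sketch of the single-user BPSK error rate in flat Nakagami-m fading (no MAI) gives a feel for the quantities involved and can be sanity-checked against the closed-form Rayleigh (m = 1) result; all parameter values are illustrative.

```python
import math, random

def ber_bpsk_nakagami_mc(m, mean_snr, n=100_000, seed=1):
    """Monte Carlo estimate of BPSK bit error rate in flat Nakagami-m fading.

    The fading power gain g is Gamma(m, 1/m) with unit mean; the conditional
    BER for BPSK is Q(sqrt(2*g*mean_snr)) = 0.5*erfc(sqrt(g*mean_snr)).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        g = rng.gammavariate(m, 1.0 / m)   # unit-mean fading power
        total += 0.5 * math.erfc(math.sqrt(g * mean_snr))
    return total / n

def ber_bpsk_rayleigh(mean_snr):
    """Closed form for the Rayleigh special case (m = 1), used as a check."""
    return 0.5 * (1.0 - math.sqrt(mean_snr / (1.0 + mean_snr)))

print(ber_bpsk_nakagami_mc(1.0, 10.0), ber_bpsk_rayleigh(10.0))
```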

  16. Information theoretic analysis of edge detection in visual communication

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Rahman, Zia-ur

    2010-08-01

    Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the artifacts introduced by the image gathering process. However, experiments show that the image gathering process profoundly impacts the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive theoretical analysis of visual communication channels, in which the different stages, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. In this paper, we perform an end-to-end, information-theory-based system analysis to assess edge detection methods. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and of the parameters, such as sampling and additive noise, that define the image gathering system. An edge detection algorithm is regarded as having high performance only if the information rate from the scene to the edge map approaches the maximum possible. This goal can be achieved only by jointly optimizing all processes. People generally use subjective judgment to compare different edge detection methods; there has been no common tool for evaluating the performance of the different algorithms and for guiding the selection of the best algorithm for a given system or scene. Our information-theoretic assessment provides such a tool, allowing the different edge detection operators to be compared in a common environment.
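    The information-rate criterion can be illustrated with a minimal sketch that estimates the mutual information between a scene edge map and a detected edge map from their joint histogram; the toy maps below are invented for illustration and stand in for real image data.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information (bits) between two aligned discrete maps,
    estimated from the joint histogram."""
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in joint.items():
        # p(x,y) * log2( p(x,y) / (p(x) p(y)) ), with counts converted to probabilities
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi

# Toy example: a "scene" edge map and a noisy detected edge map (flattened).
scene    = [0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
detected = [0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0]
print(f"I(scene; detected) = {mutual_information(scene, detected):.3f} bits")
```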

  17. Experimental and theoretical oscillator strengths of Mg I for accurate abundance analysis

    NASA Astrophysics Data System (ADS)

    Pehlivan Rhodin, A.; Hartman, H.; Nilsson, H.; Jönsson, P.

    2017-02-01

    Context. With the aid of stellar abundance analysis, it is possible to study Galactic formation and evolution. Magnesium is an important element for tracing the α-element evolution in our Galaxy. For chemical abundance analysis, such as magnesium abundance, accurate and complete atomic data are essential. Inaccurate atomic data lead to uncertain abundances and prevent discrimination between different evolution models. Aims: We study the spectrum of neutral magnesium from laboratory measurements and theoretical calculations. Our aim is to improve the oscillator strengths (f-values) of Mg I lines and to create a complete set of accurate atomic data, particularly for the near-IR region. Methods: We derived oscillator strengths by combining the experimental branching fractions with radiative lifetimes reported in the literature and computed in this work. A hollow cathode discharge lamp was used to produce free atoms in the plasma, and a Fourier transform spectrometer recorded the intensity-calibrated high-resolution spectra. In addition, we performed theoretical calculations using the multiconfiguration Hartree-Fock program ATSP2K. Results: This project provides a set of experimental and theoretical oscillator strengths. We derived 34 experimental oscillator strengths. Except for the Mg I optical triplet lines (3p 3P°0,1,2-4s 3S1), these oscillator strengths are measured for the first time. The theoretical oscillator strengths are in very good agreement with the experimental data and complement the missing transitions of the experimental data up to n = 7 from even and odd parity terms. We present an evaluated set of oscillator strengths, gf, with uncertainties as small as 5%. The new oscillator strengths of the Mg I optical triplet lines (3p 3P°0,1,2-4s 3S1) are 0.08 dex larger than the previous measurements.
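    The step from branching fractions and lifetimes to oscillator strengths uses the standard relations A_ul = BF/τ and gf = 1.4992e-16 λ²[Å] g_u A_ul. A minimal sketch with invented numbers (not measured Mg I data):

```python
def gf_value(branching_fraction, lifetime_s, wavelength_angstrom, g_upper):
    """Weighted oscillator strength gf from an experimental branching
    fraction and the radiative lifetime of the upper level.

    A_ul = BF / tau;  gf = 1.4992e-16 * lambda^2[Angstrom] * g_u * A_ul
    (standard emission-to-absorption conversion).
    """
    a_ul = branching_fraction / lifetime_s          # transition rate, s^-1
    return 1.4992e-16 * wavelength_angstrom**2 * g_upper * a_ul

# Illustrative numbers only: a line at 5183 Angstrom carrying 60% of the
# decays of an upper level with g_u = 3 and a 10 ns lifetime.
print(f"gf = {gf_value(0.60, 10e-9, 5183.0, 3):.3f}")
```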

  18. A theoretical analysis of deformation behavior of auxetic plied yarn structure

    NASA Astrophysics Data System (ADS)

    Zeng, Jifang; Hu, Hong

    2018-07-01

    This paper presents a theoretical analysis of the auxetic plied yarn (APY) structure formed from two types of single yarns with different diameters and moduli. A model that predicts the deformation behavior of the structure under axial extension is developed based on the theoretical analysis. The developed model is first compared with the experimental data obtained in a previous study, and then used to predict the effects of different structural and material parameters on the auxetic behavior of the APY. The calculation results show that the developed model can correctly predict the variation trend of the auxetic behavior of the APY, which first increases and then decreases with increasing axial strain. The calculation results also indicate that the auxetic behavior of the APY depends simultaneously on the diameter ratio of the soft yarn to the stiff yarn and on the ratio between the pitch length and the stiff yarn diameter. The study provides a way to design and fabricate APYs with the same auxetic behavior using different soft and stiff yarns, as long as these two ratios are kept unchanged.

  19. A model-based analysis of a display for helicopter landing approach. [control theoretical model of human pilot

    NASA Technical Reports Server (NTRS)

    Hess, R. A.; Wheat, L. W.

    1975-01-01

    A control-theoretic model of the human pilot was used to analyze a baseline electronic cockpit display in a helicopter landing approach task. The head-down display was created on a stroke-written cathode ray tube, and the vehicle was a UH-1H helicopter. The landing approach task consisted of maintaining prescribed groundspeed and glideslope in the presence of random vertical and horizontal turbulence. The pilot model was also used to generate and evaluate display quickening laws designed to improve pilot-vehicle performance. A simple fixed-base simulation provided comparative tracking data.

  20. Analysis of NASA JP-4 fire tests data and development of a simple fire model

    NASA Technical Reports Server (NTRS)

    Raj, P.

    1980-01-01

    The temperature, velocity and species concentration data obtained during the NASA fire tests (3 m, 7.5 m and 15 m diameter JP-4 fires) were analyzed. Based on this data analysis, a simple theoretical model was formulated to predict the temperature and velocity profiles in JP-4 fires. The theoretical model, which does not take into account the detailed chemistry of combustion, is capable of predicting the extent of necking of the fire near its base.

  1. A theoretical analysis of vacuum arc thruster performance

    NASA Technical Reports Server (NTRS)

    Polk, James E.; Sekerak, Mike; Ziemer, John K.; Schein, Jochen; Qi, Niansheng; Binder, Robert; Anders, Andre

    2001-01-01

    In vacuum arc discharges the current is conducted through vapor evaporated from the cathode surface. In these devices very dense, highly ionized plasmas can be created from any metallic or conducting solid used as the cathode. This paper describes theoretical models of performance for several thruster configurations which use vacuum arc plasma sources. This analysis suggests that thrusters using vacuum arc sources can be operated efficiently with a range of propellant options that gives great flexibility in specific impulse. In addition, the efficiency of plasma production in these devices appears to be largely independent of scale because the metal vapor is ionized within a few microns of the cathode electron emission sites, so this approach is well-suited for micropropulsion.

  2. Theoretical study of reactive and nonreactive turbulent coaxial jets

    NASA Technical Reports Server (NTRS)

    Gupta, R. N.; Wakelyn, N. T.

    1976-01-01

    The hydrodynamic properties and the reaction kinetics of axisymmetric coaxial turbulent jets having steady mean quantities are investigated. From the analysis, limited to free turbulent boundary layer mixing of such jets, it is found that the two-equation model of turbulence is adequate for most nonreactive flows. For the reactive flows, where an allowance must be made for second order correlations of concentration fluctuations in the finite rate chemistry for initially inhomogeneous mixture, an equation similar to the concentration fluctuation equation of a related model is suggested. For diffusion limited reactions, the eddy breakup model based on concentration fluctuations is found satisfactory and simple to use. The theoretical results obtained from these various models are compared with some of the available experimental data.

  3. Electro-osmotic flow in coated nanocapillaries: a theoretical investigation.

    PubMed

    Marini Bettolo Marconi, Umberto; Monteferrante, Michele; Melchionna, Simone

    2014-12-14

    Motivated by recent experiments, we present a theoretical investigation of how the electro-osmotic flow occurring in a capillary is modified when its charged surfaces are coated with charged polymers. The theoretical treatment is based on a three-dimensional model consisting of a ternary fluid-mixture, representing the solvent and two species for the ions, confined between two parallel charged plates decorated with a fixed array of scatterers representing the polymer coating. The electro-osmotic flow, generated by a constant electric field applied in a direction parallel to the plates, is studied numerically by means of Lattice Boltzmann simulations. In order to gain further understanding we performed a simple theoretical analysis by extending the Stokes-Smoluchowski equation to take into account the porosity induced by the polymers in the region adjacent to the walls. We discuss the nature of the velocity profiles by focusing on the competing effects of the polymer charges and the frictional forces they exert. We show evidence of the flow reduction and of the flow inversion phenomenon when the polymer charge is opposite to the surface charge. By using the density of polymers and the surface charge as control variables, we propose a phase diagram that discriminates the direct and the reversed flow regimes and determines their dependence on the ionic concentration.

  4. Spectrum analysis of radar life signal in the three kinds of theoretical models

    NASA Astrophysics Data System (ADS)

    Yang, X. F.; Ma, J. F.; Wang, D.

    2017-02-01

    In a single-frequency continuous-wave radar life-detection system based on the Doppler effect, the theoretical model of the radar life signal is usually expressed as a real function, which leads to a prediction that cannot be confirmed by experiment: when the phase set by the distance between the measured object and the radar head is an integer multiple of π, the main spectral components of the life signal (respiration and heartbeat) vanish from the radar return, whereas when this phase is an odd multiple of π/2, the respiration and heartbeat lines are strongest. In this paper, taking the Doppler effect as the basic theory, we use three different mathematical expressions (a real function, a complex exponential function, and a Bessel-function expansion) to establish theoretical models of the radar life signal. Simulation analysis reveals that the Bessel-expansion model resolves the problem of the real-function form. Compared with the complex-exponential model, the Bessel-expansion model greatly reduces the number of derived spectral lines, which is more consistent with the actual situation.
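    The spectral nulling described above follows from the Jacobi-Anger (Bessel) expansion of the phase-modulated return: for cos(φ₀ + β sin ωt), the fundamental carries amplitude 2J₁(β)|sin φ₀|, so it vanishes when φ₀ is an integer multiple of π and peaks at odd multiples of π/2. A numeric sketch with an assumed modulation index:

```python
import math

def harmonic_amplitude(phi0, beta, harmonic, n=4096):
    """Amplitude of the k-th harmonic of cos(phi0 + beta*sin(t)),
    extracted with a single-bin discrete Fourier sum over one period."""
    re = im = 0.0
    for i in range(n):
        t = 2 * math.pi * i / n          # one period of the body motion
        x = math.cos(phi0 + beta * math.sin(t))
        re += x * math.cos(harmonic * t)
        im -= x * math.sin(harmonic * t)
    return 2 * math.hypot(re, im) / n

beta = 0.5   # modulation index ~ 4*pi*(chest displacement)/wavelength (illustrative)
# Fixed phase phi0 set by target range: an integer multiple of pi kills the
# fundamental (breathing line); an odd multiple of pi/2 maximizes it.
print("phi0 = 0   :", harmonic_amplitude(0.0,       beta, 1))
print("phi0 = pi/2:", harmonic_amplitude(math.pi/2, beta, 1))
```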

  5. Simple stochastic model for El Niño with westerly wind bursts

    PubMed Central

    Thual, Sulian; Majda, Andrew J.; Chen, Nan; Stechmann, Samuel N.

    2016-01-01

    Atmospheric wind bursts in the tropics play a key role in the dynamics of the El Niño Southern Oscillation (ENSO). A simple modeling framework is proposed that summarizes this relationship and captures major features of the observational record while remaining physically consistent and amenable to detailed analysis. Within this simple framework, wind burst activity evolves according to a stochastic two-state Markov switching–diffusion process that depends on the strength of the western Pacific warm pool, and is coupled to simple ocean–atmosphere processes that are otherwise deterministic, stable, and linear. A simple model with this parameterization and no additional nonlinearities reproduces a realistic ENSO cycle with intermittent El Niño and La Niña events of varying intensity and strength as well as realistic buildup and shutdown of wind burst activity in the western Pacific. The wind burst activity has a direct causal effect on the ENSO variability: in particular, it intermittently triggers regular El Niño or La Niña events, super El Niño events, or no events at all, which enables the model to capture observed ENSO statistics such as the probability density function and power spectrum of eastern Pacific sea surface temperatures. The present framework provides further theoretical and practical insight on the relationship between wind burst activity and the ENSO. PMID:27573821
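    A highly simplified sketch of such a two-state Markov switching-diffusion parameterization (not the paper's actual coupled ocean-atmosphere model; the switching rates, damping, noise levels, and warm-pool proxy are all invented for illustration):

```python
import math, random

def simulate_wind_bursts(steps=2000, dt=0.1, seed=42):
    """Wind burst amplitude follows a damped diffusion whose noise level
    depends on a two-state Markov chain (0 = quiescent, 1 = active);
    the activation rate grows with a prescribed warm-pool strength proxy."""
    rng = random.Random(seed)
    state, a, path = 0, 0.0, []
    for k in range(steps):
        warm_pool = 0.5 + 0.5 * math.sin(2 * math.pi * k / 500)  # proxy in [0, 1]
        p_on, p_off = 0.2 * warm_pool * dt, 0.5 * dt  # per-step switching probabilities
        if state == 0 and rng.random() < p_on:
            state = 1
        elif state == 1 and rng.random() < p_off:
            state = 0
        sigma = 1.0 if state == 1 else 0.1   # burst noise level per state
        a += -0.5 * a * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(a)
    return path

path = simulate_wind_bursts()
print(f"{len(path)} steps simulated; active-state intervals produce the large excursions")
```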

  6. A Spatial Analysis and Game Theoretical Approach Over the Disputed Islands in the Aegean Sea

    DTIC Science & Technology

    2016-06-01

    Master's thesis, Naval Postgraduate School, Monterey, California. Approved for public release; distribution is unlimited. The thesis combines spatial analysis and a game-theoretical approach to the disputed islands in the Aegean Sea, characterizing the islands by attributes including perimeter, area, population, distance to Greece, distance to Turkey, and territorial water area, and then applying spatial analysis to them.

  7. Response of Simple, Model Systems to Extreme Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ewing, Rodney C.; Lang, Maik

    2015-07-30

    The focus of the research was on the application of high-pressure/high-temperature techniques, together with intense energetic ion beams, to the study of the behavior of simple oxide systems (e.g., SiO 2, GeO 2, CeO 2, TiO 2, HfO 2, SnO 2, ZnO and ZrO 2) under extreme conditions. These simple stoichiometries provide unique model systems for the analysis of structural responses to pressure up to and above 1 Mbar, temperatures of up to several thousands of kelvin, and the extreme energy density generated by energetic heavy ions (tens of keV/atom). The investigations included systematic studies of radiation- and pressure-induced amorphization of high P-T polymorphs. By studying the response of simple stoichiometries that have multiple structural “outcomes”, we have established the basic knowledge required for the prediction of the response of more complex structures to extreme conditions. We especially focused on the amorphous state and characterized the different non-crystalline structure-types that result from the interplay of radiation and pressure. For such experiments, we made use of recent technological developments, such as the perforated diamond-anvil cell and in situ investigation using synchrotron x-ray sources. We have been particularly interested in using extreme pressures to alter the electronic structure of a solid prior to irradiation. We expected that the effects of modified band structure would be evident in the track structure and morphology, information which is much needed to describe theoretically the fundamental physics of track-formation. Finally, we investigated the behavior of different simple-oxide, composite nanomaterials (e.g., uncoated nanoparticles vs. core/shell systems) under coupled, extreme conditions. This provided insight into surface and boundary effects on phase stability under extreme conditions.

  8. Theoretical study and design of third-order random fiber laser

    NASA Astrophysics Data System (ADS)

    Xie, Zhaoxin; Shi, Wei; Fu, Shijie; Sheng, Quan; Yao, Jianquan

    2018-02-01

    We present results on a random fiber laser operating at 1178 nm while pumped at 1018 nm. The laser is realized with a 200 m long cavity that includes three high-reflectivity fiber Bragg gratings. This simple and efficient random fiber laser could provide a novel approach to realizing a low-threshold, high-efficiency 1178 nm long-wavelength laser. We theoretically analyzed the laser power in random fiber lasers at different pump powers by varying the three high-reflectivity fiber Bragg gratings, and we calculated the forward and backward powers of the first-, second-, and third-order Stokes waves. With this theoretical analysis, we optimized the cavity reflectivities to obtain higher laser power output. The forward random laser exhibits larger gain, while the backward random laser has lower gain. With the reflectivity of the angle-cleaved fiber end set to 3×10^-7, the laser power increases as the high reflectivity is increased from 0.01 to 0.99. Using the proposed configuration, the 1178 nm random laser can be generated easily and stably.

  9. A Simple Plasma Retinol Isotope Ratio Method for Estimating β-Carotene Relative Bioefficacy in Humans: Validation with the Use of Model-Based Compartmental Analysis.

    PubMed

    Ford, Jennifer Lynn; Green, Joanne Balmer; Lietz, Georg; Oxley, Anthony; Green, Michael H

    2017-09-01

    Background: Provitamin A carotenoids are an important source of dietary vitamin A for many populations. Thus, accurate and simple methods for estimating carotenoid bioefficacy are needed to evaluate the vitamin A value of test solutions and plant sources. β-Carotene bioefficacy is often estimated from the ratio of the areas under plasma isotope response curves after subjects ingest labeled β-carotene and a labeled retinyl acetate reference dose [isotope reference method (IRM)], but to our knowledge, the method has not yet been evaluated for accuracy. Objectives: Our objectives were to develop and test a physiologically based compartmental model that includes both absorptive and postabsorptive β-carotene bioconversion and to use the model to evaluate the accuracy of the IRM and a simple plasma retinol isotope ratio [(RIR), labeled β-carotene-derived retinol/labeled reference-dose-derived retinol in one plasma sample] for estimating relative bioefficacy. Methods: We used model-based compartmental analysis (Simulation, Analysis and Modeling software) to develop and apply a model that provided known values for β-carotene bioefficacy. Theoretical data for 10 subjects were generated by the model and used to determine bioefficacy by RIR and IRM; predictions were compared with known values. We also applied RIR and IRM to previously published data. Results: Plasma RIR accurately predicted β-carotene relative bioefficacy at 14 d or later. IRM also accurately predicted bioefficacy by 14 d, except that, when there was substantial postabsorptive bioconversion, IRM underestimated bioefficacy. Based on our model, 1-d predictions of relative bioefficacy include absorptive plus a portion of early postabsorptive conversion. Conclusion: The plasma RIR is a simple tracer method that accurately predicts β-carotene relative bioefficacy based on analysis of one blood sample obtained at ≥14 d after co-ingestion of labeled β-carotene and retinyl acetate. The method also provides

  10. Electronic structure and photochemistry of squaraine dyes: basic theoretical analysis and direct detection of the photoisomer of a symmetrical squarylium cyanine.

    PubMed

    Momicchioli, Fabio; Tatikolov, Aleksandr S; Vanossi, Davide; Ponterini, Glauco

    2004-04-01

    The photoisomerization kinetics of a squaraine dye has been the object both of experimental investigation and of interpretation in the framework of a qualitative theoretical model formulated with the aid of simple HMO calculations and orbital symmetry considerations. This model first confirmed that the electronic structure and spectroscopic properties of symmetrical squaraines are related to those of the parent cyanines, with ketocyanines as intermediate systems. Extension of the approach to structures twisted by 90° about a polymethine bond then provided insight into the electronic aspects and the mechanism of the photoisomerization of the squaraine under study. The reaction, previously investigated indirectly by fluorescence analysis, has been directly monitored by laser flash photolysis. These experiments indicate that, while photoisomerization is likely the main radiationless decay route from the spectroscopic minimum of the lowest excited singlet state (S(1)), the cis photoisomer is produced with only a 1% yield, likely because of an unfavourable cis/trans branching ratio from the perpendicular minimum of the S(1)-state potential energy surface. In contrast with what was found for symmetrical cyanines, an increase in solvent polarity was found to accelerate both the direct, excited-state reaction and, to a much larger extent, the ground-state back-isomerization. These observations are consistent with predictions of the theoretical model and provide a clue for the identification of the isomerization coordinate.

  11. Theoretical Studies of Microstrip Antennas : Volume II, Analysis and Synthesis of Multi-Frequency Elements

    DOT National Transportation Integrated Search

    1979-09-01

    Volume II of Theoretical Studies of Microstrip Antennas deals with the analysis and synthesis of several types of novel multi-resonant elements with emphasis on dual-frequency operation of rectangular microstrip patch antennas with or without externa...

  12. A simple analytical model for dynamics of time-varying target leverage ratios

    NASA Astrophysics Data System (ADS)

    Lo, C. F.; Hui, C. H.

    2012-03-01

    In this paper we have formulated a simple theoretical model for the dynamics of the time-varying target leverage ratio of a firm under some assumptions based upon empirical observations. In our theoretical model the time evolution of the target leverage ratio of a firm can be derived self-consistently from a set of coupled Ito's stochastic differential equations governing the leverage ratios of an ensemble of firms by the nonlinear Fokker-Planck equation approach. The theoretically derived time paths of the target leverage ratio bear great resemblance to those used in the time-dependent stationary-leverage (TDSL) model [Hui et al., Int. Rev. Financ. Analy. 15, 220 (2006)]. Thus, our simple model is able to provide a theoretical foundation for the selected time paths of the target leverage ratio in the TDSL model. We also examine how the pace of the adjustment of a firm's target ratio, the volatility of the leverage ratio and the current leverage ratio affect the dynamics of the time-varying target leverage ratio. Hence, with the proposed dynamics of the time-dependent target leverage ratio, the TDSL model can be readily applied to generate the default probabilities of individual firms and to assess the default risk of the firms.
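    A minimal Euler-Maruyama sketch of a mean-reverting leverage ratio with a time-dependent target, as a generic stand-in for the coupled Itô dynamics described above (the drift, volatility, and target path are illustrative, not the paper's calibrated values):

```python
import math, random

def simulate_leverage(l0=0.3, kappa=0.5, sigma=0.1, theta=lambda t: 0.4,
                      t_max=10.0, dt=0.01, seed=7):
    """Euler-Maruyama integration of dL = kappa*(theta(t) - L)dt + sigma*L*dW,
    a generic mean-reverting leverage process with time-varying target theta(t)."""
    rng = random.Random(seed)
    l, t, path = l0, 0.0, [l0]
    while t < t_max:
        dw = math.sqrt(dt) * rng.gauss(0.0, 1.0)
        l += kappa * (theta(t) - l) * dt + sigma * l * dw
        t += dt
        path.append(l)
    return path

# An illustrative target path that drifts upward and then flattens,
# qualitatively like a TDSL-style time path of the target ratio.
path = simulate_leverage(theta=lambda t: 0.3 + 0.1 * (1 - math.exp(-t)))
print(f"leverage ratio: start {path[0]:.2f} -> end {path[-1]:.2f}")
```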

  13. Human movement analysis using stereophotogrammetry. Part 1: theoretical background.

    PubMed

    Cappozzo, Aurelio; Della Croce, Ugo; Leardini, Alberto; Chiari, Lorenzo

    2005-02-01

    This paper sets the stage for a series of reviews dealing with the problems associated with the reconstruction and analysis of in vivo skeletal system kinematics using optoelectronic stereophotogrammetric data. Instantaneous bone position and orientation and joint kinematic variable estimations are addressed in the framework of rigid body mechanics. The conceptual background to these exercises is discussed. Focus is placed on the experimental and analytical problem of merging the information relative to movement and that relative to the morphology of the anatomical body parts of interest. The various global and local frames that may be used in this context are defined. Common anatomical and mathematical conventions that can be used to describe joint kinematics are illustrated in a comparative fashion. The authors believe that an effort to systematize the different theoretical and experimental approaches to the problems involved and related nomenclatures, as currently reported in the literature, is needed to facilitate data and knowledge sharing, and to provide renewed momentum for the advancement of human movement analysis.

  14. Economic Analysis in the Pacific Northwest Land Resources Project: Theoretical Considerations and Preliminary Results

    NASA Technical Reports Server (NTRS)

    Morse, D. R. A.; Sahlberg, J. T.

    1977-01-01

    The Pacific Northwest Land Resources Inventory Demonstration Project is an attempt to combine a whole spectrum of heterogeneous geographic, institutional and applications elements in a synergistic approach to the evaluation of remote sensing techniques. This diversity is the prime motivating factor behind a theoretical investigation of alternative economic analysis procedures. For a multitude of reasons, including simplicity, ease of understanding, financial constraints and credibility, cost-effectiveness emerges as the most practical tool for conducting such evaluation determinations in the Pacific Northwest. Preliminary findings in two water resource application areas suggest, in conformity with most published studies, that Landsat-aided data collection methods enjoy substantial cost advantages over alternative techniques. The potential for sensitivity analysis based on cost/accuracy tradeoffs is considered on a theoretical plane in the absence of current accuracy figures concerning the Landsat-aided approach.

  15. A simple randomisation procedure for validating discriminant analysis: a methodological note.

    PubMed

    Wastell, D G

    1987-04-01

    Because the goal of discriminant analysis (DA) is to optimise classification, it designedly exaggerates between-group differences. This bias complicates validation of DA. Jack-knifing has been used for validation but is inappropriate when stepwise selection (SWDA) is employed. A simple randomisation test is presented which is shown to give correct decisions for SWDA. The general superiority of randomisation tests over orthodox significance tests is discussed. Current work on non-parametric methods of estimating the error rates of prediction rules is briefly reviewed.
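    The key point, that the entire fitting procedure must be re-run on every shuffled labelling, can be sketched as follows; a nearest-centroid rule stands in for the full stepwise discriminant analysis, and the data are invented:

```python
import random

def accuracy(data, labels):
    """Resubstitution accuracy of a nearest-group-centroid rule,
    a minimal stand-in for the full stepwise discriminant procedure."""
    groups = set(labels)
    cents = {g: [sum(x[i] for x, l in zip(data, labels) if l == g) /
                 labels.count(g) for i in range(len(data[0]))] for g in groups}
    correct = 0
    for x, l in zip(data, labels):
        pred = min(groups, key=lambda g: sum((a - b) ** 2 for a, b in zip(x, cents[g])))
        correct += pred == l
    return correct / len(labels)

def randomisation_test(data, labels, n_perm=999, seed=3):
    """Proportion of label permutations whose accuracy >= the observed one.
    Re-fitting on each shuffled labelling is what keeps the test valid
    even when stepwise selection is part of the fitting procedure."""
    rng = random.Random(seed)
    observed = accuracy(data, labels)
    hits = 0
    for _ in range(n_perm):
        perm = labels[:]
        rng.shuffle(perm)
        hits += accuracy(data, perm) >= observed
    return (hits + 1) / (n_perm + 1)   # permutation p-value

data = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.3), (1.1, 1.0), (0.9, 1.2), (1.0, 1.1)]
labels = ['a', 'a', 'a', 'b', 'b', 'b']
print("permutation p-value:", randomisation_test(data, labels))
```

With only six observations the p-value cannot be small; the point of the toy run is the mechanics, not significance.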

  16. The min-conflicts heuristic: Experimental and theoretical results

    NASA Technical Reports Server (NTRS)

    Minton, Steven; Philips, Andrew B.; Johnston, Mark D.; Laird, Philip

    1991-01-01

    This paper describes a simple heuristic method for solving large-scale constraint satisfaction and scheduling problems. Given an initial assignment for the variables in a problem, the method operates by searching through the space of possible repairs. The search is guided by an ordering heuristic, the min-conflicts heuristic, that attempts to minimize the number of constraint violations after each step. We demonstrate empirically that the method performs orders of magnitude better than traditional backtracking techniques on certain standard problems. For example, the one million queens problem can be solved rapidly using our approach. We also describe practical scheduling applications where the method has been successfully applied. A theoretical analysis is presented to explain why the method works so well on certain types of problems and to predict when it is likely to be most effective.
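    The repair loop described above can be sketched for the n-queens problem, a standard illustration of min-conflicts (this is not the authors' scheduling code):

```python
import random

def min_conflicts_queens(n, max_steps=100_000, seed=0):
    """Solve n-queens by min-conflicts repair: start from a random assignment
    (one queen per column) and repeatedly move a conflicted queen to the row
    in its column that minimizes the number of conflicts."""
    rng = random.Random(seed)
    rows = [rng.randrange(n) for _ in range(n)]

    def conflicts(col, row):
        # queens attack along rows and diagonals
        return sum(1 for c in range(n) if c != col and
                   (rows[c] == row or abs(rows[c] - row) == abs(c - col)))

    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(c, rows[c]) > 0]
        if not conflicted:
            return rows                      # solution found
        col = rng.choice(conflicted)
        counts = [conflicts(col, r) for r in range(n)]
        best = min(counts)                   # min-conflicts value for this column
        rows[col] = rng.choice([r for r, k in enumerate(counts) if k == best])
    return None                              # step budget exhausted

print(min_conflicts_queens(20))
```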

  17. Simple Analysis of Historical Lime Mortars

    ERIC Educational Resources Information Center

    Pires, Joa~o

    2015-01-01

    A laboratory experiment is described in which a simple characterization of a historical lime mortar is made by the determination of its approximate composition by a gravimetric method. Fourier transform infrared (FTIR) spectroscopy and X-ray diffraction (XRD) are also used for the qualitative characterization of the lime mortar components. These…

  18. Theoretical analysis of heat flow in horizontal ribbon growth from a melt. [silicon metal

    NASA Technical Reports Server (NTRS)

    Zoutendyk, J. A.

    1978-01-01

    A theoretical heat flow analysis for horizontal ribbon growth is presented. Equations are derived relating pull speed, ribbon thickness, thermal gradient in the melt, and melt temperature for the limiting cases of heat removal by radiation only and of isothermal heat removal from the solid surface over the melt. Geometrical cross sections of the growth zone are shown to be triangular and nearly parabolic for the two respective cases. The theoretical pull speed for silicon ribbon 0.01 cm thick, where the latent heat of fusion is lost by radiation to ambient temperature (300 K) only, is shown to be 1 cm/sec for horizontal growth extending 2 cm over the melt with no heat conduction either to or from the melt. Further enhancement of the ribbon growth rate by placing cooling blocks adjacent to the top surface is shown to be theoretically possible.
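    The radiation-limited case reduces to a simple energy balance: the latent heat released at the interface per unit width, ρLvd, must equal the heat radiated from the free surface over the growth length, εσ(Tm⁴ − Ta⁴)ℓ. A sketch of this balance (the emissivity is an assumed value chosen so the balance reproduces the abstract's ~1 cm/s figure; single-surface radiation is assumed):

```python
# Radiation-limited pull speed for horizontal silicon ribbon growth.
SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
RHO_SI = 2330.0         # solid silicon density, kg/m^3
LATENT_SI = 1.8e6       # latent heat of fusion of silicon, J/kg
T_MELT, T_AMB = 1685.0, 300.0   # melting point and ambient, K
EPSILON = 0.46          # emissivity (assumed value)

def pull_speed(thickness_m, growth_length_m):
    """Pull speed (m/s) when radiation alone removes the latent heat:
    rho * L * v * d = eps * sigma * (Tm^4 - Ta^4) * l."""
    radiated = EPSILON * SIGMA * (T_MELT**4 - T_AMB**4) * growth_length_m
    return radiated / (RHO_SI * LATENT_SI * thickness_m)

# 0.01 cm thick ribbon growing over 2 cm of melt, as in the abstract.
v = pull_speed(1e-4, 2e-2)
print(f"pull speed ~ {v * 100:.1f} cm/s")
```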

  19. Security Analysis of Smart Grid Cyber Physical Infrastructures Using Modeling and Game Theoretic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Sheldon, Frederick T.

    Cyber-physical computing infrastructures typically consist of a number of interconnected sites. Their operation depends critically on both cyber and physical components. Both types of components are subject to attacks of different kinds and frequencies, which must be accounted for in the initial provisioning and subsequent operation of the infrastructure via information security analysis. Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified against the results from game theory analysis and further used to explore larger-scale, real-world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the electric sector failure scenarios and impact analyses of the NESCOR Working Group Study. From the Section 5 electric sector representative failure scenarios, we extracted the four generic failure scenarios and grouped them into three specific threat categories (confidentiality, integrity, and availability) to the system. These specific failure scenarios serve as a demonstration of our simulation. The analysis using our ABGT simulation demonstrates how to model the electric sector functional domain using a set of rationalized game-theoretic rules decomposed from the failure scenarios, in terms of how those scenarios might impact the cyber-physical infrastructure network with respect to confidentiality, integrity, and availability.

  20. Interaction of Simple Ions with Water: Theoretical Models for the Study of Ion Hydration

    ERIC Educational Resources Information Center

    Gancheff, Jorge S.; Kremer, Carlos; Ventura, Oscar N.

    2009-01-01

    A computational experiment aimed to create and systematically analyze models of simple cation hydrates is presented. The changes in the structure (bond distances and angles) and the electronic density distribution of the solvent and the thermodynamic parameters of the hydration process are calculated and compared with the experimental data. The…

  1. A simple protocol for NMR analysis of the enantiomeric purity of chiral hydroxylamines.

    PubMed

    Tickell, David A; Mahon, Mary F; Bull, Steven D; James, Tony D

    2013-02-15

    A practically simple three-component chiral derivatization protocol for determining the enantiopurity of chiral hydroxylamines by (1)H NMR spectroscopic analysis is described, involving their treatment with 2-formylphenylboronic acid and enantiopure BINOL to afford a mixture of diastereomeric nitrono-boronate esters whose ratio is an accurate reflection of the enantiopurity of the parent hydroxylamine.
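    The diastereomer ratio read off the ¹H NMR integrals converts directly to the enantiomeric excess of the parent hydroxylamine. A minimal sketch; the 95:5 integral values are invented for illustration, not data from the paper:

```python
# Enantiomeric excess from the ratio of the two diastereomeric
# nitrono-boronate ester signals in the 1H NMR spectrum.
def ee_percent(integral_major, integral_minor):
    """ee (%) = 100 * (major - minor) / (major + minor)."""
    return 100.0 * (integral_major - integral_minor) / (integral_major + integral_minor)

print(ee_percent(95.0, 5.0))  # a 95:5 diastereomer ratio corresponds to 90% ee
```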

  2. A Simple Model for Nonlinear Confocal Ultrasonic Beams

    NASA Astrophysics Data System (ADS)

    Zhang, Dong; Zhou, Lin; Si, Li-Sheng; Gong, Xiu-Fen

    2007-01-01

    A confocally and coaxially arranged pair of focused transmitter and receiver represents one of the best geometries for medical ultrasonic imaging and non-invasive detection. We develop a simple theoretical model describing the nonlinear propagation of a confocal ultrasonic beam in biological tissues. On the basis of the parabolic and quasi-linear approximations, the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation is solved using the angular spectrum approach. A Gaussian superposition technique is applied to simplify the solution, and an analytical solution for the second harmonic in the confocal ultrasonic beam is presented. Measurements are performed to examine the validity of the theoretical model, which provides a preliminary basis for acoustic nonlinear microscopy.

  3. Theoretical Analysis of an Iron Mineral-Based Magnetoreceptor Model in Birds

    PubMed Central

    Solov'yov, Ilia A.; Greiner, Walter

    2007-01-01

    Sensing the magnetic field has been established as an essential part of navigation and orientation of various animals for many years. Only recently has the first detailed receptor concept for magnetoreception been published based on histological and physical results. The considered mechanism involves two types of iron minerals (magnetite and maghemite) that were found in subcellular compartments within sensory dendrites of the upper beak of several bird species. But so far a quantitative evaluation of the proposed receptor is missing. In this article, we develop a theoretical model to quantitatively and qualitatively describe the magnetic field effects among particles containing iron minerals. The analysis of forces acting between these subcellular compartments shows a particular dependence on the orientation of the external magnetic field. The iron minerals in the beak are found in the form of crystalline maghemite platelets and assemblies of magnetite nanoparticles. We demonstrate that the pull or push to the magnetite assemblies, which are connected to the cell membrane, may reach a value of 0.2 pN—sufficient to excite specific mechanoreceptive membrane channels in the nerve cell. The theoretical analysis of the assumed magnetoreceptor system in the avian beak skin clearly shows that it might indeed be a sensitive biological magnetometer providing an essential part of the magnetic map for navigation. PMID:17496012

  4. Graph-theoretic strengths of contextuality

    NASA Astrophysics Data System (ADS)

    de Silva, Nadish

    2017-03-01

    Cabello-Severini-Winter and Abramsky-Hardy (building on the framework of Abramsky-Brandenburger) both provide classes of Bell and contextuality inequalities for very general experimental scenarios using vastly different mathematical techniques. We review both approaches, carefully detail the links between them, and give simple, graph-theoretic methods for finding inequality-free proofs of nonlocality and contextuality and for finding states exhibiting strong nonlocality and/or contextuality. Finally, we apply these methods to concrete examples in stabilizer quantum mechanics relevant to understanding contextuality as a resource in quantum computation.

  5. A framework for biodynamic feedthrough analysis--part I: theoretical foundations.

    PubMed

    Venrooij, Joost; van Paassen, Marinus M; Mulder, Mark; Abbink, David A; Mulder, Max; van der Helm, Frans C T; Bulthoff, Heinrich H

    2014-09-01

    Biodynamic feedthrough (BDFT) is a complex phenomenon, which has been studied for several decades. However, there is little consensus on how to approach the BDFT problem in terms of definitions, nomenclature, and mathematical descriptions. In this paper, a framework for biodynamic feedthrough analysis is presented. The goal of this framework is two-fold. First, it provides some common ground between the seemingly large range of different approaches existing in the BDFT literature. Second, the framework itself allows for gaining new insights into BDFT phenomena. It will be shown how relevant signals can be obtained from measurement, how different BDFT dynamics can be derived from them, and how these different dynamics are related. Using the framework, BDFT can be dissected into several dynamical relationships, each relevant in understanding BDFT phenomena in more detail. The presentation of the BDFT framework is divided into two parts. This paper, Part I, addresses the theoretical foundations of the framework. Part II, which is also published in this issue, addresses the validation of the framework. The work is presented in two separate papers to allow for a detailed discussion of both the framework's theoretical background and its validation.

  6. Jones matrix formulation of a Porro prism laser resonator with waveplates: theoretical and experimental analysis

    NASA Astrophysics Data System (ADS)

    Agrawal, L.; Bhardwaj, A.; Pal, S.; Kumar, A.

    2007-11-01

    This article presents the results of a detailed theoretical and experimental analysis carried out on a folded Z-shaped polarization coupled, electro-optically Q-switched laser resonator with Porro prisms and waveplates. The advantages of adding waveplates in a Porro prism resonator have been explored for creating a high-loss condition prior to Q-switching and obtaining variable reflectivity with fixed orientation of the Porro prism. Generalized expressions have been derived in terms of azimuth angles and phase shifts introduced by the polarizing elements. These expressions reduce to known reported results under appropriate substitutions. A specific case of a crossed Porro prism diode-pumped Nd:YAG laser has been theoretically and experimentally investigated. In the feedback arm, a 0.57λ waveplate oriented at 135° completely compensates the phase shift of a fused silica Porro prism and provides better tolerances than a BK-7 prism/0.60λ waveplate combination to stop prelasing. With the waveplate at 112°, the fused silica prism/0.57λ waveplate combination acts like a 100% mirror and was utilized for optimization of free-running performance. The effective reflectivity was determined for various orientations of the quarter waveplate in the gain arm to numerically estimate the Q-switched laser pulse parameters through rate equation analysis. Experimental results match well with the theoretical analysis.
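    The Jones formalism used in such an analysis can be sketched in a few lines: a retarder with retardance δ at azimuth θ is R(θ)·diag(e^{iδ/2}, e^{−iδ/2})·R(−θ), and resonator elements compose by matrix multiplication. The quarter-wave-plate check below is a generic textbook case, not the paper's specific prism/waveplate combination:

```python
import numpy as np

def rot(theta):
    """2x2 rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def waveplate(delta_waves, theta_rad):
    """Jones matrix of a retarder: retardance in waves, fast axis at theta."""
    d = 2 * np.pi * delta_waves
    core = np.diag([np.exp(1j * d / 2), np.exp(-1j * d / 2)])
    return rot(theta_rad) @ core @ rot(-theta_rad)

# Sanity check: a quarter-wave plate at 45 degrees converts horizontal
# linear polarization into circular (equal amplitude in both components).
E_in = np.array([1.0, 0.0])
E_out = waveplate(0.25, np.deg2rad(45)) @ E_in
print(np.abs(E_out))
```

A full resonator round trip is then a product of such matrices for each waveplate, prism phase shift, and polarizer, which is how the generalized azimuth-angle expressions above arise.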

  7. Analysis and Simple Circuit Design of Double Differential EMG Active Electrode.

    PubMed

    Guerrero, Federico Nicolás; Spinelli, Enrique Mario; Haberman, Marcelo Alejandro

    2016-06-01

    In this paper we present an analysis of the voltage amplifier needed for double differential (DD) sEMG measurements and a novel, very simple circuit for implementing DD active electrodes. The three-input amplifier that standalone DD active electrodes require is inherently different from a differential amplifier, and general knowledge about its design is scarce in the literature. First, the figures of merit of the amplifier are defined through a decomposition of its input signal into three orthogonal modes. This analysis reveals a mode containing EMG crosstalk components that the DD electrode should reject. Then, the effect of finite input impedance is analyzed. Because there are three terminals, minimum bounds for interference rejection ratios due to electrode and input impedance unbalances with two degrees of freedom are obtained. Finally, a novel circuit design is presented, including only a quadruple operational amplifier and a few passive components. This design is nearly as simple as the branched electrode and much simpler than the three instrumentation amplifier design, while providing robust EMG crosstalk rejection and better input impedance using unity gain buffers for each electrode input. The interference rejection limits of this input stage are analyzed. An easily replicable implementation of the proposed circuit is described, together with a parameter design guideline to adjust it to specific needs. The electrode is compared with the established alternatives, and sample sEMG signals are obtained, acquired on different body locations with dry contacts, successfully rejecting interference sources.
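    The three-input decomposition can be illustrated with a simple orthogonal basis: a common mode, a linear (gradient) mode carrying crosstalk-like components, and the double-differential mode itself, which is the second spatial difference of the three electrode voltages. The basis and example voltages below are a sketch of the idea; the paper's exact mode definitions may differ:

```python
import numpy as np

# Orthogonal mode decomposition of the three DD-electrode input voltages.
# Rows: common mode, linear (gradient) mode, DD (second-difference) mode.
B = np.array([
    [1.0, 1.0, 1.0],
    [-1.0, 0.0, 1.0],
    [1.0, -2.0, 1.0],
])

v = np.array([2.0, 5.0, 11.0])                 # example electrode voltages
modes = B @ v / np.array([3.0, 2.0, 6.0])      # normalized mode amplitudes

# The DD output v1 - 2*v2 + v3 cancels both the common mode and any
# linear spatial trend across the three contacts.
dd_out = v[0] - 2 * v[1] + v[2]
print(modes, dd_out)
```

Orthogonality of the rows is what makes the three figures of merit (rejection ratios per mode) independent of one another.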

  8. Theoretical analysis of the electrical aspects of the basic electro-impulse problem in aircraft de-icing applications

    NASA Technical Reports Server (NTRS)

    Henderson, R. A.; Schrag, R. L.

    1986-01-01

    A summary of modeling the electrical system aspects of a coil and metal target configuration resembling a practical electro-impulse deicing (EIDI) installation, and a simple circuit for providing energy to the coil, was presented. The model was developed in sufficient theoretical detail to allow the generation of computer algorithms for the current in the coil, the magnetic induction on both surfaces of the target, the force between the coil and target, and the impulse delivered to the target. These algorithms were applied to a specific prototype EIDI test system for which the current, magnetic fields near the target surfaces, and impulse were previously measured.

  9. Can Computer-Mediated Interventions Change Theoretical Mediators of Safer Sex? A Meta-Analysis

    ERIC Educational Resources Information Center

    Noar, Seth M.; Pierce, Larson B.; Black, Hulda G.

    2010-01-01

    The purpose of this study was to conduct a meta-analysis of computer-mediated interventions (CMIs) aimed at changing theoretical mediators of safer sex. Meta-analytic aggregation of effect sizes from k = 20 studies indicated that CMIs significantly improved HIV/AIDS knowledge, d = 0.276, p < 0.001, k = 15, N = 6,625; sexual/condom…
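    Fixed-effect aggregation of the kind reported here weights each study's standardized mean difference d by the inverse of its variance. A minimal sketch; the d and variance values are invented for illustration, not the study's data:

```python
# Fixed-effect meta-analytic pooling: inverse-variance weighted mean of
# per-study effect sizes d.
def pooled_d(effects, variances):
    weights = [1.0 / v for v in variances]
    return sum(w * d for w, d in zip(weights, effects)) / sum(weights)

d_pooled = pooled_d([0.31, 0.22, 0.40], [0.010, 0.020, 0.025])
print(round(d_pooled, 3))
```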

  10. An experimental and theoretical analysis of a foil-air bearing rotor system

    NASA Astrophysics Data System (ADS)

    Bonello, P.; Hassan, M. F. Bin

    2018-01-01

    Although there is considerable research on the experimental testing of foil-air bearing (FAB) rotor systems, only a small fraction has been correlated with simulations from a full nonlinear model that links the rotor, air film and foil domains, due to modelling complexity and computational burden. An approach for the simultaneous solution of the three domains as a coupled dynamical system, introduced by the first author and adopted by independent researchers, has recently demonstrated its capability to address this problem. This paper uses this approach, with further developments, in an experimental and theoretical study of a FAB-rotor test rig. The test rig is described in detail, including issues with its commissioning. The theoretical analysis uses a recently introduced modal-based bump foil model that accounts for interaction between the bumps and their inertia. The imposition of pressure constraints on the air film is found to delay the predicted onset of instability speed. The results lend experimental validation to a recent theoretically-based claim that the Gümbel condition may not be appropriate for a practical single-pad FAB. The satisfactory prediction of the salient features of the measured nonlinear behavior shows that the air film is indeed highly influential on the response, in contrast to an earlier finding.

  11. Theoretical Analysis of the Electron Spiral Toroid Concept

    NASA Technical Reports Server (NTRS)

    Cambier, Jean-Luc; Micheletti, David A.; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    This report describes the analysis of the Electron Spiral Toroid (EST) concept being promoted by Electron Power Systems Inc. (EPS). The EST is described as a toroidal plasma structure composed of ion and electron shells. It is claimed that the EST requires little or no external confinement, despite the extraordinarily large energy densities resulting from the self-generated magnetic fields. The present analysis is based upon documentation made available by EPS, a previous description of the model by the Massachusetts Institute of Technology (MIT), and direct discussions with EPS and MIT. It is found that claims of absolute stability and large energy storage capacities of the EST concept have not been substantiated. Notably, it can be demonstrated that the ion fluid is fundamentally unstable. Various scenarios for ion confinement were subsequently suggested by EPS and MIT, but none were found to be plausible. Although the experimental data do not prove the existence of EST configurations, there is undeniable experimental evidence that plasma structures of some type, whose characteristics remain to be determined, are observed. However, more realistic theoretical models must first be developed to explain their existence and properties before applications of interest to NASA can be assessed and developed.

  12. A Simple Card Trick: Teaching Qualitative Data Analysis Using a Deck of Playing Cards

    ERIC Educational Resources Information Center

    Waite, Duncan

    2011-01-01

    Yet today, despite recent welcome additions, relatively little is written about teaching qualitative research. Why is that? This article reports on a relatively simple, yet appealing, pedagogical move, a lesson the author uses to teach qualitative data analysis. Data sorting and categorization, the use of tacit and explicit theory in data…

  13. Theoretical and simulation analysis of piezoelectric liquid resistance captor filled with pipeline

    NASA Astrophysics Data System (ADS)

    Zheng, Li; Zhigang, Yang; Junwu, Kan; Lisheng; Bo, Yan; Dan, Lu

    2018-03-01

    This paper presents the design of a piezoelectric liquid-resistance energy-capture device. Using the superposition theory of plate deformation, a calculation model is established for the displacement curve of the circular piezoelectric vibrator and for its power generation capacity under a concentrated load. The results show that the radius ratio, thickness ratio, and Young's modulus of the circular piezoelectric vibrator strongly influence the power generation capacity. Once the piezoelectric vibrator material is fixed, there is an optimal radius ratio and thickness ratio that maximize the generating capacity; ratios that are too large or too small reduce the output, and can even drive it to zero. In addition, an electromechanical equivalent model is established and analyzed by varying the circuit impedance. The results are consistent with the theoretical simulation results, indicating that the established circuit model faithfully reflects the characteristics of the theoretical model.

  14. Improving the analysis of near-infrared spectroscopy data with multivariate classification of hemodynamic patterns: a theoretical formulation and validation.

    PubMed

    Gemignani, Jessica; Middell, Eike; Barbour, Randall L; Graber, Harry L; Blankertz, Benjamin

    2018-04-04

    The statistical analysis of functional near infrared spectroscopy (fNIRS) data based on the general linear model (GLM) is often made difficult by serial correlations, high inter-subject variability of the hemodynamic response, and the presence of motion artifacts. In this work we propose to extract information on the pattern of hemodynamic activations without using any a priori model for the data, by classifying the channels as 'active' or 'not active' with a multivariate classifier based on linear discriminant analysis (LDA). This work is developed in two steps. First we compared the performance of the two analyses, using a synthetic approach in which simulated hemodynamic activations were combined with either simulated or real resting-state fNIRS data. This procedure allowed for exact quantification of the classification accuracies of GLM and LDA. In the case of real resting-state data, the correlations between classification accuracy and demographic characteristics were investigated by means of a Linear Mixed Model. In the second step, to further characterize the reliability of the newly proposed analysis method, we conducted an experiment in which participants had to perform a simple motor task and data were analyzed with the LDA-based classifier as well as with the standard GLM analysis. The results of the simulation study show that the LDA-based method achieves higher classification accuracies than the GLM analysis, and that the LDA results are more uniform across different subjects and, in contrast to the accuracies achieved by the GLM analysis, have no significant correlations with any of the demographic characteristics. Findings from the real-data experiment are consistent with the results of the real-plus-simulation study, in that the GLM-analysis results show greater inter-subject variability than do the corresponding LDA results. 
The results obtained suggest that the outcome of GLM analysis is highly vulnerable to violations of theoretical assumptions.
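    The channel-classification idea can be sketched with a plain Fisher linear discriminant, the classical two-class form of LDA. The two-dimensional synthetic features below merely stand in for per-channel hemodynamic descriptors, which the abstract does not enumerate:

```python
import numpy as np

# Fisher linear discriminant: label channels 'active' vs 'not active' from
# feature vectors.  Synthetic Gaussian features with separated means.
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(200, 2))   # 'not active' channels
X1 = rng.normal(2.0, 1.0, size=(200, 2))   # 'active' channels

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0.T) + np.cov(X1.T)           # pooled within-class scatter
w = np.linalg.solve(Sw, mu1 - mu0)         # discriminant direction
thresh = w @ (mu0 + mu1) / 2.0             # midpoint decision threshold

def is_active(x):
    return w @ x > thresh

acc = (sum(not is_active(x) for x in X0) + sum(is_active(x) for x in X1)) / 400
print(f"training accuracy ~ {acc:.2f}")
```

Unlike a GLM fit, this decision rule assumes no hemodynamic response model for the time course, which is the abstract's motivation for preferring it when model assumptions are violated.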

  15. Theoretical considerations and a simple method for measuring alkalinity and acidity in low-pH waters by Gran titration

    USGS Publications Warehouse

    Barringer, J.L.; Johnsson, P.A.

    1996-01-01

    Titrations for alkalinity and acidity using the technique described by Gran (1952, Determination of the equivalence point in potentiometric titrations, Part II: The Analyst, v. 77, p. 661-671) have been employed in the analysis of low-pH natural waters. This report includes a synopsis of the theory and calculations associated with Gran's technique and presents a simple and inexpensive method for performing alkalinity and acidity determinations. However, potential sources of error introduced by the chemical character of some waters may limit the utility of Gran's technique. Therefore, the cost- and time-efficient method for performing alkalinity and acidity determinations described in this report is useful for exploring the suitability of Gran's technique in studies of water chemistry.
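    Gran's technique linearizes the titration curve: past the equivalence point of an alkalinity titration with strong acid, the Gran function F = (V0 + V)·10^(−pH) grows linearly with titrant volume V, and extrapolating F back to zero gives the equivalence volume. A sketch with synthetic pH readings generated from an assumed equivalence volume (all values illustrative):

```python
import numpy as np

V0 = 100.0                    # sample volume, mL
C_acid = 0.02                 # titrant concentration, mol/L
Ve_true = 2.5                 # assumed equivalence volume, mL

# Synthetic data: titrant added beyond equivalence, and the resulting pH
# from the excess strong acid.
V = np.linspace(3.0, 6.0, 7)
H = C_acid * (V - Ve_true) / (V0 + V)
pH = -np.log10(H)

F = (V0 + V) * 10.0 ** (-pH)            # Gran function
slope, intercept = np.polyfit(V, F, 1)  # F is linear in V past equivalence
Ve = -intercept / slope                 # x-intercept = equivalence volume
alkalinity = Ve * C_acid / V0 * 1e6     # microequivalents per liter
print(f"Ve = {Ve:.2f} mL, alkalinity ~ {alkalinity:.0f} ueq/L")
```

In practice the linearity of the Gran plot itself is the diagnostic the report recommends for judging whether the technique suits a given water chemistry.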

  16. Theoretical model for optical properties of porphyrin

    NASA Astrophysics Data System (ADS)

    Phan, Anh D.; Nga, Do T.; Phan, The-Long; Thanh, Le T. M.; Anh, Chu T.; Bernad, Sophie; Viet, N. A.

    2014-12-01

    We propose a simple model to interpret the optical absorption spectra of porphyrin in different solvents. Our model successfully explains the decrease in the intensity of optical absorption at maxima of increased wavelengths. We also prove the dependence of the intensity and peak positions in the absorption spectra on the environment. The nature of the Soret band is supposed to derive from π plasmon. Our theoretical calculations are consistent with previous experimental studies.

  17. A constraint on antigravity of antimatter from precision spectroscopy of simple atoms

    NASA Astrophysics Data System (ADS)

    Karshenboim, S. G.

    2009-10-01

    Consideration of antigravity for antiparticles is an attractive target for various experimental projects. There are a number of theoretical arguments against it, but it is not quite clear what kinds of experimental data and theoretical assumptions are involved. In this paper we present straightforward arguments against the possibility of antigravity, based on a few simple theoretical assumptions and some experimental data. The data are: astrophysical data on the rotation of the Solar System with respect to the center of our galaxy, and precision spectroscopy data on hydrogen and positronium. The theoretical assumptions, for the case of absence of the gravitational field, are the equality of electron and positron masses and the equality of proton and positron charges. We also assume that QED is correct at the level of accuracy at which it is clearly confirmed experimentally.

  18. Isolating the Effects of Training Using Simple Regression Analysis: An Example of the Procedure.

    ERIC Educational Resources Information Center

    Waugh, C. Keith

    This paper provides a case example of simple regression analysis, a forecasting procedure used to isolate the effects of training from an identified extraneous variable. This case example focuses on results of a three-day sales training program to improve bank loan officers' knowledge, skill-level, and attitude regarding solicitation and sale of…
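    The procedure the paper illustrates can be sketched in a few lines: regress pre-training performance on the extraneous variable (here, a time trend standing in for market growth), forecast what performance "would have been" without training, and attribute the residual gain to training. All numbers below are invented to show the mechanics, not the case study's data:

```python
import numpy as np

# Pre-training sales over six months, trending upward for reasons unrelated
# to training (the extraneous variable to be isolated).
months = np.arange(1, 7)
sales = np.array([50.0, 52.0, 55.0, 57.0, 60.0, 62.0])

slope, intercept = np.polyfit(months, sales, 1)  # simple linear regression

month_post = 7
forecast = slope * month_post + intercept        # expected without training
actual = 72.0                                    # observed after training
training_effect = actual - forecast              # gain attributed to training
print(f"forecast {forecast:.1f}, effect of training ~ {training_effect:.1f}")
```

The design choice is the same as the paper's: the regression line plays the role of a no-training counterfactual, so only the departure from trend is credited to the program.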

  19. Simple diffusion can support the pitchfork, the flip bifurcations, and the chaos

    NASA Astrophysics Data System (ADS)

    Meng, Lili; Li, Xinfu; Zhang, Guang

    2017-12-01

    In this paper, a discrete rational fraction population model with Dirichlet boundary conditions is considered. Using the discrete maximum principle and the sub- and super-solution method, necessary and sufficient conditions for the existence and uniqueness of positive steady-state solutions are obtained. In addition, the dynamical behavior of a special two-patch metapopulation model is investigated using the bifurcation method, center manifold theory, bifurcation diagrams, and the largest Lyapunov exponent. The results show that the pitchfork bifurcation, the flip bifurcation, and chaos all occur, and that these phenomena are caused by the simple diffusion. A theoretical analysis of the chaos is important but, unfortunately, no results in this direction are yet available; some open problems are therefore posed.
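    The largest-Lyapunov-exponent diagnostic used above can be sketched on a one-dimensional map. The logistic map at r = 4 stands in here for the paper's rational-fraction model because its exponent is known analytically to be ln 2, which makes the numerical estimate easy to check:

```python
import numpy as np

# Largest Lyapunov exponent of the logistic map x -> r*x*(1-x):
# average of log|f'(x)| along a typical orbit, after a transient.
def lyapunov_logistic(r, x0=0.2, n_transient=1000, n_iter=100_000):
    x = x0
    for _ in range(n_transient):
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        acc += np.log(abs(r * (1 - 2 * x)))  # |f'(x)| = |r(1-2x)|
    return acc / n_iter

lam = lyapunov_logistic(4.0)
print(f"lambda ~ {lam:.3f} (analytic value: ln 2 ~ 0.693)")
```

A positive exponent is the operational signature of chaos; for the two-patch model the same average is taken over the Jacobian products of the coupled map.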

  20. Waveform Similarity Analysis: A Simple Template Comparing Approach for Detecting and Quantifying Noisy Evoked Compound Action Potentials.

    PubMed

    Potas, Jason Robert; de Castro, Newton Gonçalves; Maddess, Ted; de Souza, Marcio Nogueira

    2015-01-01

    Experimental electrophysiological assessment of evoked responses from regenerating nerves is challenging due to the typical complex response of events dispersed over various latencies and poor signal-to-noise ratio. Our objective was to automate the detection of compound action potential events and derive their latencies and magnitudes using a simple cross-correlation template comparison approach. For this, we developed an algorithm called Waveform Similarity Analysis. To test the algorithm, challenging signals were generated in vivo by stimulating sural and sciatic nerves, whilst recording evoked potentials at the sciatic nerve and tibialis anterior muscle, respectively, in animals recovering from sciatic nerve transection. Our template for the algorithm was generated based on responses evoked from the intact side. We also simulated noisy signals and examined the output of the Waveform Similarity Analysis algorithm with imperfect templates. Signals were detected and quantified using Waveform Similarity Analysis, which was compared to event detection, latency and magnitude measurements of the same signals performed by a trained observer, a process we called Trained Eye Analysis. The Waveform Similarity Analysis algorithm could successfully detect and quantify simple or complex responses from nerve and muscle compound action potentials of intact or regenerated nerves. Incorrectly specifying the template outperformed Trained Eye Analysis for predicting signal amplitude, but produced consistent latency errors for the simulated signals examined. Compared to the trained eye, Waveform Similarity Analysis is automatic, objective, does not rely on the observer to identify and/or measure peaks, and can detect small clustered events even when signal-to-noise ratio is poor. 
Waveform Similarity Analysis provides a simple, reliable and convenient approach to quantify latencies and magnitudes of complex waveforms and therefore serves as a useful tool for studying evoked compound action potentials.
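    The template-comparison idea behind the algorithm can be sketched as follows: slide a template over the noisy sweep via cross-correlation, take the lag of the correlation peak as the event latency, and take the least-squares scale of the template at that lag as the event magnitude. Signal shape, noise level, and latency below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Unit-amplitude Gaussian bump as a stand-in template for an evoked
# compound action potential.
template = np.exp(-0.5 * ((np.arange(50) - 25) / 5.0) ** 2)

# Synthetic sweep: noise plus one scaled, delayed copy of the template.
true_latency, true_amp = 210, 3.0
sweep = rng.normal(0.0, 0.3, 500)
sweep[true_latency:true_latency + template.size] += true_amp * template

# Cross-correlation template comparison.
corr = np.correlate(sweep, template, mode="valid")
latency = int(np.argmax(corr))                  # detected onset sample
segment = sweep[latency:latency + template.size]
amplitude = (segment @ template) / (template @ template)  # LSQ scale

print(latency, round(amplitude, 2))
```

Because the correlation integrates over the whole template, small clustered events remain detectable at signal-to-noise ratios where peak-picking by eye fails, which is the advantage the abstract reports.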

  1. Waveform Similarity Analysis: A Simple Template Comparing Approach for Detecting and Quantifying Noisy Evoked Compound Action Potentials

    PubMed Central

    Potas, Jason Robert; de Castro, Newton Gonçalves; Maddess, Ted; de Souza, Marcio Nogueira

    2015-01-01

    Experimental electrophysiological assessment of evoked responses from regenerating nerves is challenging due to the typical complex response of events dispersed over various latencies and poor signal-to-noise ratio. Our objective was to automate the detection of compound action potential events and derive their latencies and magnitudes using a simple cross-correlation template comparison approach. For this, we developed an algorithm called Waveform Similarity Analysis. To test the algorithm, challenging signals were generated in vivo by stimulating sural and sciatic nerves, whilst recording evoked potentials at the sciatic nerve and tibialis anterior muscle, respectively, in animals recovering from sciatic nerve transection. Our template for the algorithm was generated based on responses evoked from the intact side. We also simulated noisy signals and examined the output of the Waveform Similarity Analysis algorithm with imperfect templates. Signals were detected and quantified using Waveform Similarity Analysis, which was compared to event detection, latency and magnitude measurements of the same signals performed by a trained observer, a process we called Trained Eye Analysis. The Waveform Similarity Analysis algorithm could successfully detect and quantify simple or complex responses from nerve and muscle compound action potentials of intact or regenerated nerves. Incorrectly specifying the template outperformed Trained Eye Analysis for predicting signal amplitude, but produced consistent latency errors for the simulated signals examined. Compared to the trained eye, Waveform Similarity Analysis is automatic, objective, does not rely on the observer to identify and/or measure peaks, and can detect small clustered events even when signal-to-noise ratio is poor. 
Waveform Similarity Analysis provides a simple, reliable and convenient approach to quantify latencies and magnitudes of complex waveforms and therefore serves as a useful tool for studying evoked compound action potentials.

  2. Theoretical analysis of tsunami generation by pyroclastic flows

    USGS Publications Warehouse

    Watts, P.; Waythomas, C.F.

    2003-01-01

    Pyroclastic flows are a common product of explosive volcanism and have the potential to initiate tsunamis whenever thick, dense flows encounter bodies of water. We evaluate the process of tsunami generation by pyroclastic flow by decomposing the pyroclastic flow into two components, the dense underflow portion, which we term the pyroclastic debris flow, and the plume, which includes the surge and coignimbrite ash cloud parts of the flow. We consider five possible wave generation mechanisms. These mechanisms consist of steam explosion, pyroclastic debris flow, plume pressure, plume shear, and pressure impulse wave generation. Our theoretical analysis of tsunami generation by these mechanisms provides an estimate of tsunami features such as a characteristic wave amplitude and wavelength. We find that in most situations, tsunami generation is dominated by the pyroclastic debris flow component of a pyroclastic flow. This work presents information sufficient to construct tsunami sources for an arbitrary pyroclastic flow interacting with most bodies of water. Copyright 2003 by the American Geophysical Union.

  3. Thermoelectric Generation Of Current - Theoretical And Experimental Analysis

    NASA Astrophysics Data System (ADS)

    Ruciński, Adam; Rusowicz, Artur

    2017-12-01

    This paper provides some information about thermoelectric technology. Some new materials with improved figures of merit are presented. These materials in Peltier modules make it possible to generate electric current thanks to a temperature difference. The paper indicates possible applications of thermoelectric modules as interesting tools for using various waste heat sources. Some zero-dimensional equations describing the conditions of electric power generation are given. Also, operating parameters of Peltier modules, such as voltage and electric current, are analyzed. The paper shows chosen characteristics of power generation parameters. Then, an experimental stand for ongoing research and experimental measurements are described. The authors consider the resistance of a receiver placed in the electric circuit with thermoelectric elements. Finally, both the analysis of experimental results and conclusions drawn from theoretical findings are presented. Voltage generation of about 1.5 to 2.5 V for the temperature difference from 65 to 85 K was observed when a bismuth telluride thermoelectric couple (traditionally used in cooling technology) was used.
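    The zero-dimensional relations mentioned above reduce to V_oc = N·α·ΔT for the open-circuit voltage and a simple voltage-divider expression for the power into a load. The Seebeck coefficient, couple count, and internal resistance below are typical Bi₂Te₃-module values assumed for illustration, not the paper's measured parameters:

```python
# Zero-dimensional model of a thermoelectric generator built from N couples.
ALPHA = 400e-6   # Seebeck coefficient per couple, V/K (assumed)
N_COUPLES = 71   # couples in the module (assumed)
R_INT = 2.0      # module internal resistance, ohm (assumed)

def open_circuit_voltage(dT):
    """V_oc = N * alpha * dT."""
    return N_COUPLES * ALPHA * dT

def load_power(dT, R_load):
    """Power delivered to an external load resistance."""
    V = open_circuit_voltage(dT)
    I = V / (R_INT + R_load)
    return I * I * R_load

for dT in (65, 85):
    print(f"dT = {dT} K: Voc = {open_circuit_voltage(dT):.2f} V")
print(f"power into matched load at dT = 85 K: {load_power(85, R_INT):.2f} W")
```

With these assumed parameters the model lands in the 1.5 to 2.5 V band reported for ΔT between 65 and 85 K; maximum power transfer occurs when the load resistance matches the internal resistance, which is why the authors study the receiver resistance explicitly.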

  4. Misleading Theoretical Assumptions in Hypertext/Hypermedia Research.

    ERIC Educational Resources Information Center

    Tergan, Sigmar-Olaf

    1997-01-01

    Reviews basic theoretical assumptions of research on learning with hypertext/hypermedia. Focuses on whether the results of research on hypertext/hypermedia-based learning support these assumptions. Results of empirical studies and theoretical analysis reveal that many research approaches have been misled by inappropriate theoretical assumptions on…

  5. A simple theoretical model of heat and moisture transport in multi-layer garments in cool ambient air.

    PubMed

    Wissler, Eugene H; Havenith, George

    2009-03-01

    Overall resistances for heat and vapor transport in a multilayer garment depend on the properties of individual layers and the thickness of any air space between layers. Under uncomplicated, steady-state conditions, thermal and mass fluxes are uniform within the garment, and the rate of transport is simply computed as the overall temperature or water concentration difference divided by the appropriate resistance. However, that simple computation is not valid under cool ambient conditions when the vapor permeability of the garment is low, and condensation occurs within the garment. Several recent studies have measured heat and vapor transport when condensation occurs within the garment (Richards et al. in Report on Project ThermProject, Contract No. G6RD-CT-2002-00846, 2002; Havenith et al. in J Appl Physiol 104:142-149, 2008). In addition to measuring cooling rates for ensembles when the skin was either wet or dry, both studies employed a flat-plate apparatus to measure resistances of individual layers. Those data provide information required to define the properties of an ensemble in terms of its individual layers. We have extended the work of previous investigators by developing a rather simple technique for analyzing heat and water vapor transport when condensation occurs within a garment. Computed results agree well with experimental results reported by Richards et al. (Report on Project ThermProject, Contract No. G6RD-CT-2002-00846, 2002) and Havenith et al. (J Appl Physiol 104:142-149, 2008). We discuss application of the method to human subjects for whom the rate of sweat secretion, instead of the partial pressure of water on the skin, is specified. Analysis of a more complicated five-layer system studied by Yoo and Kim (Text Res J 78:189-197, 2008) required an iterative computation based on principles defined in this paper.
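    The uncomplicated steady-state case described above is a series-resistance calculation, and the condensation condition is that the vapor partial pressure profile must stay below saturation at the local temperature. The sketch below walks outward through the layers and flags where that condition fails; layer resistances, boundary conditions, and the Magnus saturation formula constants are illustrative assumptions, not the paper's measured values:

```python
import math

def p_sat(T_c):
    """Saturation vapour pressure (Pa), Magnus approximation."""
    return 610.94 * math.exp(17.625 * T_c / (T_c + 243.04))

R_heat = [0.05, 0.12, 0.08]   # thermal resistances per layer, m^2 K / W
R_vap = [15.0, 60.0, 25.0]    # evaporative resistances, m^2 Pa / W

T_skin, T_amb = 35.0, 5.0     # cool ambient air
p_skin, p_amb = 0.9 * p_sat(T_skin), 400.0   # vapour pressures, Pa

q = (T_skin - T_amb) / sum(R_heat)   # dry heat flux, W/m^2
e = (p_skin - p_amb) / sum(R_vap)    # latent (vapour) flux, W/m^2

# Walk outward layer by layer; condensation occurs where p > p_sat(T).
T, p = T_skin, p_skin
for i, (rh, rv) in enumerate(zip(R_heat, R_vap), start=1):
    T -= q * rh
    p -= e * rv
    flag = "condensation" if p > p_sat(T) else "dry"
    print(f"after layer {i}: T = {T:.1f} C, p = {p:.0f} Pa ({flag})")
```

The point of the paper is precisely that once a "condensation" flag appears, this uniform-flux computation is no longer valid and the latent heat released at the condensation plane must be accounted for; the simple walk only locates where the complication begins.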

  6. The Simple Lamb Wave Analysis to Characterize Concrete Wide Beams by the Practical MASW Test

    PubMed Central

    Lee, Young Hak; Oh, Taekeun

    2016-01-01

    In recent years, Lamb wave analysis by the multi-channel analysis of surface waves (MASW) has become an effective nondestructive evaluation technique for concrete structures, supporting tasks such as condition assessment and dimension identification through elastic wave velocities and their reflections from boundaries. This study proposes an effective Lamb wave analysis based on the practical application of MASW to concrete wide beams, carried out in an easy and simple manner to identify the dimensions and the elastic wave velocity (R-wave) for condition assessment (e.g., the estimation of elastic properties). This is done by identifying the zero-order antisymmetric (A0) and first-order symmetric (S1) modes among multimodal Lamb waves. The MASW data were collected on eight concrete wide beams and compared to the actual depth and to the pressure (P-) wave velocities collected for the same specimens. Information is extracted from multimodal Lamb wave dispersion curves to obtain the elastic stiffness parameters and the thickness of the concrete structures. Owing to the simple and cost-effective procedure associated with the MASW processing technique, the characteristics of several fundamental modes in the experimental Lamb wave dispersion curves could be measured. Available reference data are in good agreement with the parameters determined by our analysis scheme. PMID:28773562

  7. Study on the Theoretical Foundation of Business English Curriculum Design Based on ESP and Needs Analysis

    ERIC Educational Resources Information Center

    Zhu, Wenzhong; Liu, Dan

    2014-01-01

    Based on a review of the literature on ESP and needs analysis, this paper is intended to offer some theoretical support and inspiration for BE instructors to develop BE curricula for business contexts. It discusses how the theory of needs analysis can be used in Business English curriculum design, and proposes some principles of BE curriculum…

  8. An Information Theoretical Analysis of Human Insulin-Glucose System Toward the Internet of Bio-Nano Things.

    PubMed

    Abbasi, Naveed A; Akan, Ozgur B

    2017-12-01

    Molecular communication is an important tool to understand biological communications with many promising applications in the Internet of Bio-Nano Things (IoBNT). The insulin-glucose system is of key significance among the major intra-body nanonetworks, since it fulfills metabolic requirements of the body. The study of biological networks from an information and communication theoretical (ICT) perspective is necessary for their introduction in the IoBNT framework. Therefore, the objective of this paper is to provide and analyze, for the first time in the literature, a simple molecular communication model of the human insulin-glucose system from an ICT perspective. The data rate, channel capacity, and the group propagation delay are analyzed for a two-cell network between a pancreatic beta cell and a muscle cell that are connected through a capillary. The results point out that an increase in insulin resistance is correlated with a decrease in the data rate and channel capacity, an increase in the insulin transmission rate, and an increase in the propagation delay. We also propose applications for the introduction of the system in the IoBNT framework. Multi-cell insulin-glucose system models may be based on this simple model to help in the investigation, diagnosis, and treatment of insulin resistance by means of novel IoBNT applications.
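
    As a sketch of the kind of ICT quantity analyzed in this entry, the snippet below computes the mutual information of a discrete memoryless channel; the two-state channel and its crossover probability are assumed for illustration and are not taken from the paper's insulin-glucose model.

```python
import math

def mutual_information(p_x, p_y_given_x):
    """I(X;Y) in bits for a discrete memoryless channel."""
    n_y = len(p_y_given_x[0])
    # output distribution p(y) = sum_x p(x) p(y|x)
    p_y = [sum(p_x[i] * p_y_given_x[i][j] for i in range(len(p_x)))
           for j in range(n_y)]
    mi = 0.0
    for i, px in enumerate(p_x):
        for j in range(n_y):
            pyx = p_y_given_x[i][j]
            if px > 0 and pyx > 0:
                mi += px * pyx * math.log2(pyx / p_y[j])
    return mi

# Hypothetical binary "hormone released / not released" channel with
# crossover probability 0.1 (a binary symmetric channel) and uniform input.
i_bsc = mutual_information([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]])
```

    For a binary symmetric channel with uniform input this equals the channel capacity, 1 - H(0.1), roughly 0.53 bits per use.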

  9. Detection of allosteric signal transmission by information-theoretic analysis of protein dynamics

    PubMed Central

    Pandini, Alessandro; Fornili, Arianna; Fraternali, Franca; Kleinjung, Jens

    2012-01-01

    Allostery offers a highly specific way to modulate protein function. Therefore, understanding this mechanism is of increasing interest for protein science and drug discovery. However, allosteric signal transmission is difficult to detect experimentally and to model because it is often mediated by local structural changes propagating along multiple pathways. To address this, we developed a method to identify communication pathways by an information-theoretical analysis of molecular dynamics simulations. Signal propagation was described as information exchange through a network of correlated local motions, modeled as transitions between canonical states of protein fragments. The method was used to describe allostery in two-component regulatory systems. In particular, the transmission from the allosteric site to the signaling surface of the receiver domain NtrC was shown to be mediated by a layer of hub residues. The location of hubs preferentially connected to the allosteric site was found in close agreement with key residues experimentally identified as involved in the signal transmission. The comparison with the networks of the homologues CheY and FixJ highlighted similarities in their dynamics. In particular, we showed that a preorganized network of fragment connections between the allosteric and functional sites exists already in the inactive state of all three proteins.—Pandini, A., Fornili, A., Fraternali, F., Kleinjung, J. Detection of allosteric signal transmission by information-theoretic analysis of protein dynamics. PMID:22071506

  10. Experimental and theoretical study of magnetohydrodynamic ship models.

    PubMed

    Cébron, David; Viroulet, Sylvain; Vidal, Jérémie; Masson, Jean-Paul; Viroulet, Philippe

    2017-01-01

    Magnetohydrodynamic (MHD) ships represent a clear demonstration of the Lorentz force in fluids, which explains the number of student practicals and exercises described on the web. However, the related literature is rather specific and no complete comparison between theory and typical small-scale experiments is currently available. This work provides, in a self-consistent framework, a detailed presentation of the relevant theoretical equations for small MHD ships and experimental measurements for future benchmarks. Theoretical results of the literature are adapted to these simple battery/magnet-powered ships moving on salt water. Comparisons between theory and experiment are performed to validate each theoretical step, such as the Tafel and Kohlrausch laws, or the predicted ship speed. Good agreement is obtained without any adjustable parameter. Finally, based on these results, an optimal design is deduced from the theory. This work therefore provides a solid theoretical and experimental ground for small-scale MHD ships, by presenting in detail several approximations and how they affect the boat efficiency. Moreover, the theory is general enough to be adapted to other contexts, such as large-scale ships or industrial flow measurement techniques.
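
    The force balance behind such a boat can be sketched as follows: the Lorentz thrust on the current-carrying salt water is F = I·L·B, and steady speed is reached when quadratic hull drag equals that thrust. All numbers (current, field, geometry, drag coefficient) are assumed for illustration and are not the paper's measured values.

```python
import math

# Hypothetical small MHD boat (all parameters assumed for illustration):
I = 1.0       # electrode current, A
L = 0.05      # electrode separation spanned by the current, m
B = 0.3       # magnet field in the duct, T
rho = 1025.0  # salt-water density, kg/m^3
Cd = 1.0      # hull drag coefficient
A = 1e-3      # wetted cross-section, m^2

F = I * L * B                          # Lorentz thrust, N
v = math.sqrt(2 * F / (rho * Cd * A))  # speed where drag equals thrust, m/s
```
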

  12. Children's Criteria for Representational Adequacy in the Perception of Simple Sonic Stimuli

    ERIC Educational Resources Information Center

    Verschaffel, Lieven; Reybrouck, Mark; Jans, Christine; Van Dooren, Wim

    2010-01-01

    This study investigates children's metarepresentational competence with regard to listening to and making sense of simple sonic stimuli. Using diSessa's (2003) work on metarepresentational competence in mathematics and sciences as theoretical and empirical background, it aims to assess children's criteria for representational adequacy of graphical…

  13. People adopt optimal policies in simple decision-making, after practice and guidance.

    PubMed

    Evans, Nathan J; Brown, Scott D

    2017-04-01

    Organisms making repeated simple decisions are faced with a tradeoff between urgent and cautious strategies. While animals can adopt a statistically optimal policy for this tradeoff, findings about human decision-makers have been mixed. Some studies have shown that people can optimize this "speed-accuracy tradeoff", while others have identified a systematic bias towards excessive caution. These issues have driven theoretical development and spurred debate about the nature of human decision-making. We investigated a potential resolution to the debate, based on two factors that routinely differ between human and animal studies of decision-making: the effects of practice, and of longer-term feedback. Our study replicated the finding that most people, by default, are overly cautious. When given both practice and detailed feedback, people moved rapidly towards the optimal policy, with many participants reaching optimality with less than 1 h of practice. Our findings have theoretical implications for cognitive and neural models of simple decision-making, as well as methodological implications.

  14. Theoretical analysis of sound transmission loss through graphene sheets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Natsuki, Toshiaki, E-mail: natsuki@shinshu-u.ac.jp; Institute of Carbon Science and Technology, Shinshu University, 4-17-1 Wakasato, Nagano 380-8553; Ni, Qing-Qing

    2014-11-17

    We examine the potential of using graphene sheets (GSs) as sound-insulating materials for nano-devices because of their small size and superior electronic and mechanical properties. In this study, a theoretical analysis is proposed to predict the sound transmission loss through multi-layered GSs, which are formed by stacks of GS bound together by van der Waals (vdW) forces between individual layers. The result shows that resonant frequencies of the sound transmission loss occur in the multi-layered GSs and their values are very high. Based on the present analytical solution, we predict the acoustic insulation property for various numbers of layers under both a normally incident wave and the acoustic field of a random-incidence source. The scheme could be useful in vibration absorption applications of nano-devices and materials.
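
    For orientation, the classical normal-incidence mass law below is the textbook single-panel estimate of sound transmission loss; it is not the paper's multi-layered vdW-coupled model, and the panel surface density and frequency are assumed.

```python
import math

def mass_law_tl(f, m, rho_c=415.0):
    """Normal-incidence mass-law transmission loss (dB) for a limp panel
    of surface density m (kg/m^2) at frequency f (Hz) in air
    (characteristic impedance rho*c ~ 415 rayl)."""
    x = 2 * math.pi * f * m / (2 * rho_c)
    return 10 * math.log10(1 + x * x)

tl = mass_law_tl(1000.0, 10.0)  # e.g. a 10 kg/m^2 panel at 1 kHz
```

    Doubling either frequency or surface density adds about 6 dB in the mass-law regime.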

  15. [Efficacy analysis and theoretical study on Chinese herbal properties of Açaí (Euterpe oleracea)].

    PubMed

    Zhang, Jian-jun; Chen, Shao-hong; Zhu, Ying-li; Wang, Chun; Wang, Jing-xia; Wang, Lin-yuan; Gao, Xue-min

    2015-06-01

    Açaí (Euterpe oleracea) has a long history of use as a medicinal plant in South America; it has been approved for use in China by the Ministry of Health and has been introduced for cultivation in Guangdong and Taiwan. This article summarizes the history of Açaí's use and its present status in China. A theoretical study of the Chinese herbal properties of Açaí was carried out on the basis of traditional Chinese philosophical culture to provide a preliminary analysis of its functions and indications, drawing on its recorded medical uses, chemical components, and biological activities. The aim is to establish a theoretical foundation for its application under the guidance of TCM theory.

  16. Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity

    Treesearch

    Harbin Li; Steven G. McNulty

    2007-01-01

    Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL...

  17. Enhancement of orientation gradients during simple shear deformation by application of simple compression

    NASA Astrophysics Data System (ADS)

    Jahedi, Mohammad; Ardeljan, Milan; Beyerlein, Irene J.; Paydar, Mohammad Hossein; Knezevic, Marko

    2015-06-01

    We use a multi-scale, polycrystal plasticity micromechanics model to study the development of orientation gradients within crystals deforming by slip. At the largest scale, the model is a full-field crystal plasticity finite element model with explicit 3D grain structures created by DREAM.3D, and at the finest scale, at each integration point, slip is governed by a dislocation density based hardening law. For deformed polycrystals, the model predicts intra-granular misorientation distributions that follow well the scaling law seen experimentally by Hughes et al., Acta Mater. 45(1), 105-112 (1997), independent of strain level and deformation mode. We reveal that the application of a simple compression step prior to simple shearing significantly enhances the development of intra-granular misorientations compared to simple shearing alone for the same amount of total strain. We rationalize that the changes in crystallographic orientation and shape evolution when going from simple compression to simple shearing increase the local heterogeneity in slip, leading to the boost in intra-granular misorientation development. In addition, the analysis finds that simple compression introduces additional crystal orientations that are prone to developing intra-granular misorientations, which also help to increase intra-granular misorientations. Many metal working techniques for refining grain sizes involve a preliminary or concurrent application of compression with severe simple shearing. Our finding reveals that a pre-compression deformation step can, in fact, serve as another processing variable for improving the rate of grain refinement during the simple shearing of polycrystalline metals.

  18. Prediction of Transport Properties of Permeants through Polymer Films. A Simple Gravimetric Experiment.

    ERIC Educational Resources Information Center

    Britton, L. N.; And Others

    1988-01-01

    Considers the applicability of the simple immersion/weight-gain method for predicting diffusion coefficients, solubilities, and permeation rates of chemicals in polymers that do not undergo physical and chemical deterioration. Presents the theoretical background, procedures, and typical results related to this activity. (CW)
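
    A minimal sketch of how such gravimetric data yield a diffusion coefficient, assuming short-time Fickian sorption in a film exposed on both faces, where Mt/M∞ = (4/l)·sqrt(D·t/π); the film thickness and D value below are illustrative, not from the article.

```python
import math

# Synthetic short-time sorption data generated from an assumed D_true;
# fitting Mt/M_inf against sqrt(t) recovers D = pi * (k*l/4)**2.
l = 1e-3        # film thickness, m (assumed)
D_true = 1e-12  # m^2/s (assumed)
times = [60.0 * n for n in range(1, 11)]  # s
uptake = [(4.0 / l) * math.sqrt(D_true * t / math.pi) for t in times]

# least-squares slope through the origin of Mt/M_inf vs sqrt(t)
sx = [math.sqrt(t) for t in times]
k = sum(x * y for x, y in zip(sx, uptake)) / sum(x * x for x in sx)
D_est = math.pi * (k * l / 4.0) ** 2
```
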

  19. Information theoretic quantification of diagnostic uncertainty.

    PubMed

    Westover, M Brandon; Eiseman, Nathaniel A; Cash, Sydney S; Bianchi, Matt T

    2012-01-01

    Diagnostic test interpretation remains a challenge in clinical practice. Most physicians receive training in the use of Bayes' rule, which specifies how the sensitivity and specificity of a test for a given disease combine with the pre-test probability to quantify the change in disease probability incurred by a new test result. However, multiple studies demonstrate physicians' deficiencies in probabilistic reasoning, especially with unexpected test results. Information theory, a branch of probability theory dealing explicitly with the quantification of uncertainty, has been proposed as an alternative framework for diagnostic test interpretation, but is even less familiar to physicians. We have previously addressed one key challenge in the practical application of Bayes' theorem: the handling of uncertainty in the critical first step of estimating the pre-test probability of disease. This essay aims to present the essential concepts of information theory to physicians in an accessible manner, and to extend previous work regarding uncertainty in pre-test probability estimation by placing this type of uncertainty within a principled information theoretic framework. We address several obstacles hindering physicians' application of information theoretic concepts to diagnostic test interpretation. These include issues of terminology (mathematical meanings of certain information theoretic terms differ from clinical or common parlance) as well as the underlying mathematical assumptions. Finally, we illustrate how, in information theoretic terms, one can understand the effect on diagnostic uncertainty of considering ranges instead of simple point estimates of pre-test probability.
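
    The essay's two building blocks, Bayes' rule for a binary test result and an entropy measure of diagnostic uncertainty, can be sketched as follows; the sensitivity, specificity, and pre-test probability are assumed for illustration.

```python
import math

def post_test_probability(pretest, sens, spec, positive=True):
    """Bayes' rule for a binary test result."""
    if positive:
        num = sens * pretest
        den = num + (1 - spec) * (1 - pretest)
    else:
        num = (1 - sens) * pretest
        den = num + spec * (1 - pretest)
    return num / den

def binary_entropy(p):
    """Diagnostic uncertainty in bits for a disease probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

pre = 0.5  # assumed pre-test probability (maximal uncertainty, 1 bit)
post = post_test_probability(pre, sens=0.90, spec=0.80, positive=True)
gain = binary_entropy(pre) - binary_entropy(post)  # bits of uncertainty removed
```

    Note that the entropy change can be negative: a result that moves the probability toward 0.5 increases diagnostic uncertainty, which is one of the points such a framework makes explicit.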

  20. The behavior of the adsorption of cytochrome C on lipid monolayers: A study by the Langmuir-Blodgett technique and theoretical analysis.

    PubMed

    Li, Junhua; Sun, Runguang; Hao, Changchun; He, Guangxiao; Zhang, Lei; Wang, Juan

    2015-10-01

    Cytochrome c (Cyt c) is an essential component of the inner mitochondrial respiratory chain because of its function of transferring electrons. The feature is closely related to the interaction between Cyt c and membrane lipids. We used Langmuir-Blodgett monolayer technique combined with AFM to study the interaction of Cyt c with lipid monolayers at air-buffer interface. In our work, by comparing the mixed Cyt c-anionic (DPPS) and Cyt c-zwitterionic (DPPC/DPPE) monolayers, the adsorption capacity of Cyt c on lipid monolayers is DPPS>DPPE>DPPC, which is attributed to their different headgroup structures. π-A isothermal data show that Cyt c (v=2.5 μL) molecules are at maximum adsorption quantity on lipid monolayer. Moreover, Cyt c molecules would form aggregations and drag some lipids with them into subphase if the protein exceeds the maximum adsorption quantity. π-T curve indicates that it takes more time for Cyt c molecular conformation to rearrange on DPPE monolayer than on DPPC. The compressibility study reveals that the adsorption or intermolecular aggregation of Cyt c molecules on lipid monolayer will change the membrane fluidization. In order to quantitatively estimate Cyt c molecular adsorption properties on lipid monolayers, we fit the experimental isotherm with a simple surface state equation. A theoretical model is also introduced to analyze the liquid expanded (LE) to liquid condensed (LC) phase transition of DPPC monolayer. The results of theoretical analysis are in good agreement with the experiment. Copyright © 2015 Elsevier B.V. All rights reserved.
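
    One standard quantity derived from π-A isotherms in such compressibility studies is the in-plane elastic modulus, Cs⁻¹ = −A·(dπ/dA); the sketch below evaluates it by central differences on synthetic isotherm points, not the measured DPPC/DPPS data.

```python
# Synthetic pi-A isotherm (illustrative values, not experimental data)
areas = [100.0, 90.0, 80.0, 70.0, 60.0]  # area per molecule, Angstrom^2
pi_mN = [1.0, 3.0, 7.0, 14.0, 26.0]      # surface pressure, mN/m

def modulus(a, p):
    """Central-difference Cs^-1 = -A * d(pi)/dA at interior points."""
    out = []
    for i in range(1, len(a) - 1):
        dpda = (p[i + 1] - p[i - 1]) / (a[i + 1] - a[i - 1])
        out.append(-a[i] * dpda)
    return out

cs_inv = modulus(areas, pi_mN)  # mN/m, rising as the film condenses
```
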

  1. Some Key Issues in Creating Inquiry-Based Instructional Practices that Aim at the Understanding of Simple Electric Circuits

    NASA Astrophysics Data System (ADS)

    Kock, Zeger-Jan; Taconis, Ruurd; Bolhuis, Sanneke; Gravemeijer, Koeno

    2013-04-01

    Many students in secondary schools consider the sciences difficult and unattractive. This applies to physics in particular, a subject in which students attempt to learn and understand numerous theoretical concepts, often without much success. A case in point is the understanding of the concepts current, voltage and resistance in simple electric circuits. In response to these problems, reform initiatives in education strive for a change of the classroom culture, putting emphasis on more authentic contexts and student activities containing elements of inquiry. The challenge then becomes choosing and combining these elements in such a manner that they foster an understanding of theoretical concepts. In this article we reflect on data collected and analyzed from a series of 12 grade 9 physics lessons on simple electric circuits. Drawing from a theoretical framework based on individual (conceptual change based) and socio-cultural views on learning, instruction was designed addressing known conceptual problems and attempting to create a physics (research) culture in the classroom. As the success of the lessons was limited, the focus of the study became to understand which inherent characteristics of inquiry based instruction complicate the process of constructing conceptual understanding. From the analysis of the data collected during the enactment of the lessons three tensions emerged: the tension between open inquiry and student guidance, the tension between students developing their own ideas and getting to know accepted scientific theories, and the tension between fostering scientific interest as part of a scientific research culture and the task oriented school culture. An outlook will be given on the implications for science lessons.

  2. HackAttack: Game-Theoretic Analysis of Realistic Cyber Conflicts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferragut, Erik M; Brady, Andrew C; Brady, Ethan J

    Game theory is appropriate for studying cyber conflict because it allows for an intelligent and goal-driven adversary. Applications of game theory have led to a number of results regarding optimal attack and defense strategies. However, the overwhelming majority of applications explore overly simplistic games, often ones in which each participant's actions are visible to every other participant. These simplifications strip away the fundamental properties of real cyber conflicts: probabilistic alerting, hidden actions, unknown opponent capabilities. In this paper, we demonstrate that it is possible to analyze a more realistic game, one in which different resources have different weaknesses, players have different exploits, and moves occur in secrecy, but they can be detected. Certainly, more advanced and complex games are possible, but the game presented here is more realistic than any other game we know of in the scientific literature. While optimal strategies can be found for simpler games using calculus, case-by-case analysis, or, for stochastic games, Q-learning, our more complex game is more naturally analyzed using the same methods used to study other complex games, such as checkers and chess. We define a simple evaluation function and employ multi-step searches to create strategies. We show that such scenarios can be analyzed, and find that in cases of extreme uncertainty, it is often better to ignore one's opponent's possible moves. Furthermore, we show that a simple evaluation function in a complex game can lead to interesting and nuanced strategies.
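
    The multi-step search with a simple evaluation function that the authors describe follows the generic minimax pattern sketched below; the toy game and evaluator are purely illustrative, since the actual HackAttack state space and evaluation function are not given in this abstract.

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Depth-limited minimax: returns (value, best_move)."""
    ms = moves(state, maximizing)
    if depth == 0 or not ms:
        return evaluate(state), None
    best_val, best_move = None, None
    for m in ms:
        val, _ = minimax(apply_move(state, m, maximizing), depth - 1,
                         not maximizing, moves, apply_move, evaluate)
        if best_val is None or (val > best_val if maximizing else val < best_val):
            best_val, best_move = val, m
    return best_val, best_move

# Toy "resource capture" game: the attacker adds value, the defender
# removes it; the evaluation function is just the running score.
score, move = minimax(0, 3, True,
                      moves=lambda s, mx: [1, 2, 3],
                      apply_move=lambda s, m, mx: s + (m if mx else -m),
                      evaluate=lambda s: s)
```
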

  3. Metabolic acidosis in neonatal calf diarrhea-clinical findings and theoretical assessment of a simple treatment protocol.

    PubMed

    Trefz, F M; Lorch, A; Feist, M; Sauter-Louis, C; Lorenz, I

    2012-01-01

    Background: Clinical assessment of metabolic acidosis in calves with neonatal diarrhea can be difficult because increased blood concentrations of d-lactate, and not acidemia per se, are responsible for most of the clinical signs exhibited by these animals. Objectives: To describe the correlation between clinical and laboratory findings and d-lactate concentrations, and to evaluate the theoretical outcome of a simplified treatment protocol based on posture/ability to stand and degree of dehydration. Animals: A total of 121 calves with a diagnosis of neonatal diarrhea admitted to a veterinary teaching hospital during an 8-month study period. Design: Prospective blinded cohort study. Physical examinations were carried out following a standardized protocol, and the theoretical outcome of treatment was calculated. Results: Type and degree of metabolic acidosis were age dependent. The clinical parameters posture, behavior, and palpebral reflex were closely correlated with base excess (r = 0.74, 0.78, 0.68; P < .001) and d-lactate concentrations (r = 0.59, 0.59, 0.71; P < .001), respectively. Thus, the degree of loss of the palpebral reflex was identified as the best clinical tool for diagnosing an increase in serum d-lactate concentrations. The theoretical outcome of treatment revealed that the tested dosages of sodium bicarbonate are more likely to overdose than to underdose calves with diarrhea and metabolic acidosis. Conclusions: The degree of metabolic acidosis in diarrheic calves can be predicted based on clinical findings. The assessed protocol provides a useful tool to determine bicarbonate requirements, but a revision is necessary for calves with the ability to stand and marked metabolic acidosis. Copyright © 2011 by the American College of Veterinary Internal Medicine.
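
    For context, bicarbonate requirements in calves are commonly estimated from the base deficit with a distribution factor; the sketch below uses that widely cited formula with assumed numbers, and is not the paper's revised protocol.

```python
# Commonly cited estimate (assumed here, not the paper's protocol):
#   HCO3- required (mmol) = base deficit (mmol/L) * body weight (kg) * f,
# with the distribution factor f often taken as ~0.6 L/kg in neonates.
def bicarbonate_mmol(base_deficit_mmol_per_l, body_weight_kg, f=0.6):
    return base_deficit_mmol_per_l * body_weight_kg * f

need = bicarbonate_mmol(20.0, 40.0)  # e.g. base excess of -20 mmol/L, 40 kg calf
grams_nahco3 = need * 0.084          # 1 mmol NaHCO3 = 84 mg
```
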

  4. Overview of RICOR's reliability theoretical analysis, accelerated life demonstration test results and verification by field data

    NASA Astrophysics Data System (ADS)

    Vainshtein, Igor; Baruch, Shlomi; Regev, Itai; Segal, Victor; Filis, Avishai; Riabzev, Sergey

    2018-05-01

    The growing demand for EO applications that operate around the clock, 24 hours a day and 7 days a week, such as border surveillance systems, emphasizes the need for a highly reliable cryocooler with increased operational availability and optimized Integrated Logistic Support (ILS) at the system level. To meet this need, RICOR developed linear and rotary cryocoolers that successfully achieved this goal. Cryocooler MTTF was analyzed by theoretical reliability evaluation methods, demonstrated by normal and accelerated life tests at the cryocooler level, and finally verified by field data analysis derived from cryocoolers operating at the system level. The following paper reviews theoretical reliability analysis methods together with reliability test results derived from standard and accelerated life demonstration tests performed at RICOR's advanced reliability laboratory. To summarize the work process, reliability verification data will be presented as feedback from fielded systems.
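
    A minimal sketch of the kind of reliability arithmetic such an analysis involves, assuming an exponential life model and Arrhenius temperature acceleration; the activation energy, temperatures, and test hours are assumed placeholders, not RICOR data.

```python
import math

def arrhenius_af(ea_ev, t_use_k, t_stress_k, k_b=8.617e-5):
    """Arrhenius acceleration factor between use and stress temperatures."""
    return math.exp((ea_ev / k_b) * (1.0 / t_use_k - 1.0 / t_stress_k))

def mttf_hours(total_unit_hours, failures):
    """Point estimate of MTTF under an exponential life model."""
    return total_unit_hours / max(failures, 1)

af = arrhenius_af(0.7, 323.0, 358.0)    # assumed Ea = 0.7 eV, 50 C use vs 85 C stress
mttf_use = mttf_hours(50000.0, 2) * af  # stress-test MTTF scaled to use conditions
```
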

  5. Simple and Multivariate Relationships Between Spiritual Intelligence with General Health and Happiness.

    PubMed

    Amirian, Mohammad-Elyas; Fazilat-Pour, Masoud

    2016-08-01

    The present study examined simple and multivariate relationships of spiritual intelligence with general health and happiness. The method employed was descriptive and correlational. King's Spiritual Quotient scale, the GHQ-28, and the Oxford Happiness Inventory were completed by a sample of 384 students selected by stratified random sampling from the students of Shahid Bahonar University of Kerman. Data were subjected to descriptive and inferential statistics, including correlations and multivariate regressions. Bivariate correlations supported a positive and significant predictive relationship of spiritual intelligence with general health and happiness. Further analysis showed that, among the spiritual intelligence subscales, existential critical thinking predicted general health and happiness inversely. In addition, happiness was positively predicted by generation of personal meaning and transcendental awareness. The findings are discussed in line with previous studies and the relevant theoretical background.

  6. Sibutramine characterization and solubility, a theoretical study

    NASA Astrophysics Data System (ADS)

    Aceves-Hernández, Juan M.; Nicolás Vázquez, Inés; Hinojosa-Torres, Jaime; Penieres Carrillo, Guillermo; Arroyo Razo, Gabriel; Miranda Ruvalcaba, René

    2013-04-01

    Solubility data from sibutramine (SBA) in a family of alcohols were obtained at different temperatures. Sibutramine was characterized by using thermal analysis and X-ray diffraction technique. Solubility data were obtained by the saturation method. The van't Hoff equation was used to obtain the theoretical solubility values and the ideal solvent activity coefficient. No polymorphic phenomena were found from the X-ray diffraction analysis, even though this compound is a racemic mixture of (+) and (-) enantiomers. Theoretical calculations showed that the polarisable continuum model was able to reproduce the solubility and stability of sibutramine molecule in gas phase, water and a family of alcohols at B3LYP/6-311++G (d,p) level of theory. Dielectric constant, dipolar moment and solubility in water values as physical parameters were used in those theoretical calculations for explaining that behavior. Experimental and theoretical results were compared and good agreement was obtained. Sibutramine solubility increased from methanol to 1-octanol in theoretical and experimental results.
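
    The van't Hoff ideal-solubility relation the abstract relies on, ln x = −(ΔH_fus/R)(1/T − 1/T_m), can be sketched as follows; the fusion enthalpy and melting temperature are assumed placeholders, not the measured sibutramine values.

```python
import math

def ideal_mole_fraction(dh_fus_j_mol, t_m_k, t_k, r=8.314):
    """Ideal (van't Hoff) solubility of a crystalline solute as a mole
    fraction: ln x = -(dH_fus / R) * (1/T - 1/T_m)."""
    return math.exp(-(dh_fus_j_mol / r) * (1.0 / t_k - 1.0 / t_m_k))

# Assumed fusion data: dH_fus = 25 kJ/mol, T_m = 465 K
x_298 = ideal_mole_fraction(25000.0, 465.0, 298.15)
x_318 = ideal_mole_fraction(25000.0, 465.0, 318.15)
```

    As expected from the equation, ideal solubility increases with temperature, matching the trend reported experimentally.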

  7. Theoretical analysis of a method for extracting the phase of a phase-amplitude modulated signal generated by a direct-modulated optical injection-locked semiconductor laser

    NASA Astrophysics Data System (ADS)

    Lee, Hwan; Cho, Jun-Hyung; Sung, Hyuk-Kee

    2017-05-01

    The phase modulation (PM) and amplitude modulation (AM) of optical signals can be achieved using a direct-modulated (DM) optical injection-locked (OIL) semiconductor laser. We propose and theoretically analyze a simple method to extract the phase component of a PM signal produced by a DM-OIL semiconductor laser. The pure AM component of the combined PM-AM signal can be isolated by square-law detection in a photodetector and can then be used to compensate for the PM-AM signal based on an optical homodyne method. Using the AM compensation technique, we successfully developed a simple and cost-effective phase extraction method applicable to the PM-AM optical signal of a DM-OIL semiconductor laser.

  8. Open source tools for the information theoretic analysis of neural data.

    PubMed

    Ince, Robin A A; Mazzoni, Alberto; Petersen, Rasmus S; Panzeri, Stefano

    2010-01-01

    The recent and rapid development of open source software tools for the analysis of neurophysiological datasets consisting of simultaneous multiple recordings of spikes, field potentials and other neural signals holds the promise for a significant advance in the standardization, transparency, quality, reproducibility and variety of techniques used to analyze neurophysiological data and for the integration of information obtained at different spatial and temporal scales. In this review we focus on recent advances in open source toolboxes for the information theoretic analysis of neural responses. We also present examples of their use to investigate the role of spike timing precision, correlations across neurons, and field potential fluctuations in the encoding of sensory information. These information toolboxes, available both in MATLAB and Python programming environments, hold the potential to enlarge the domain of application of information theory to neuroscience and to lead to new discoveries about how neurons encode and transmit information.
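
    The core quantity such toolboxes estimate, the mutual information between stimulus and neural response, can be sketched as a plug-in estimate from a joint count table; the counts below are toy values, not the output of any specific toolbox.

```python
import math

def mutual_information_bits(counts):
    """Plug-in estimate of I(stimulus; response) in bits from a joint
    count table counts[stimulus][response_bin]."""
    n = sum(sum(row) for row in counts)
    px = [sum(row) / n for row in counts]
    py = [sum(counts[i][j] for i in range(len(counts))) / n
          for j in range(len(counts[0]))]
    mi = 0.0
    for i, row in enumerate(counts):
        for j, c in enumerate(row):
            if c:
                pij = c / n
                mi += pij * math.log2(pij / (px[i] * py[j]))
    return mi

# Two stimuli x two response bins; a perfectly informative table -> 1 bit
mi = mutual_information_bits([[50, 0], [0, 50]])
```

    Real toolboxes add bias corrections for limited sampling, which this bare plug-in estimator omits.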

  9. Field-theoretic approach to fluctuation effects in neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buice, Michael A.; Cowan, Jack D.; Mathematics Department, University of Chicago, Chicago, Illinois 60637

    A well-defined stochastic theory for neural activity, which permits the calculation of arbitrary statistical moments and equations governing them, is a potentially valuable tool for theoretical neuroscience. We produce such a theory by analyzing the dynamics of neural activity using field theoretic methods for nonequilibrium statistical processes. Assuming that neural network activity is Markovian, we construct the effective spike model, which describes both neural fluctuations and response. This analysis leads to a systematic expansion of corrections to mean field theory, which for the effective spike model is a simple version of the Wilson-Cowan equation. We argue that neural activity governed by this model exhibits a dynamical phase transition which is in the universality class of directed percolation. More general models (which may incorporate refractoriness) can exhibit other universality classes, such as dynamic isotropic percolation. Because of the extremely high connectivity in typical networks, it is expected that higher-order terms in the systematic expansion are small for experimentally accessible measurements, and thus, consistent with measurements in neocortical slice preparations, we expect mean field exponents for the transition. We provide a quantitative criterion for the relative magnitude of each term in the systematic expansion, analogous to the Ginzburg criterion. Experimental identification of dynamic universality classes in vivo is an outstanding and important question for neuroscience.

  10. Mixed-venous oxygen tension by nitrogen rebreathing - A critical, theoretical analysis.

    NASA Technical Reports Server (NTRS)

    Kelman, G. R.

    1972-01-01

    There is dispute about the validity of the nitrogen rebreathing technique for determination of mixed-venous oxygen tension. This theoretical analysis examines the circumstances under which the technique is likely to be applicable. When the plateau method is used, the probable error in mixed-venous oxygen tension is plus or minus 2.5 mm Hg at rest, and of the order of plus or minus 1 mm Hg during exercise. Provided that the rebreathing bag size is reasonably chosen, Denison's (1967) extrapolation technique gives results at least as accurate as those obtained by the plateau method. At rest, however, extrapolation should be to 30 rather than to 20 sec.

  11. Engineering design and theoretical analysis of nanoporous carbon membranes for gas separation

    NASA Astrophysics Data System (ADS)

    Acharya, Madhav

    1999-11-01

    Gases are used in a direct or indirect manner in virtually every major industry, such as steel manufacturing, oil production, foodstuffs and electronics. Membranes are being investigated as an alternative to established methods of gas separation such as pressure swing adsorption and cryogenic distillation. Membranes can be used in continuous operation and work very well at ambient conditions, thus representing a tremendous energy and economic saving over the other technologies. In addition, the integration of reaction and separation into a single unit known as a membrane reactor has the potential to revolutionize the chemical industry by making selective reactions a reality. Nanoporous carbons are highly disordered materials obtained from organic polymers or natural sources. They have the ability to separate gas molecules by several different mechanisms, and hence there is a growing effort to form them into membranes. In this study, nanoporous carbon membranes were prepared on macroporous stainless steel supports of both tubular and disk geometries. The precursor used was poly(furfuryl alcohol) and different synthesis protocols were employed. A spray coating method also was developed which allowed reproducible synthesis of membranes with very few defects. High gas selectivities were obtained, such as O2/N2 = 6, H2/C2H4 = 70 and CO2/N2 = 20. Membranes also were characterized using SEM and AFM, which revealed thin layers of carbon that were quite uniform and homogeneous. The simulation of nanoporous carbon structures also was carried out using a simple algorithmic approach. 5-, 6- and 7-membered rings were introduced into the structure, thus resulting in considerable curvature. The densities of the structures were calculated and found to compare favorably with experimental findings. Finally, a theoretical analysis of size-selective transport was performed using transition state theory concepts. 
A definite correlation of gas permeance with molecular size was obtained after

  12. Theoretical and Empirical Analysis of a Spatial EA Parallel Boosting Algorithm.

    PubMed

    Kamath, Uday; Domeniconi, Carlotta; De Jong, Kenneth

    2018-01-01

    Many real-world problems involve massive amounts of data. Under these circumstances learning algorithms often become prohibitively expensive, making scalability a pressing issue to be addressed. A common approach is to perform sampling to reduce the size of the dataset and enable efficient learning. Alternatively, one customizes learning algorithms to achieve scalability. In either case, the key challenge is to obtain algorithmic efficiency without compromising the quality of the results. In this article we discuss a meta-learning algorithm (PSBML) that combines concepts from spatially structured evolutionary algorithms (SSEAs) with concepts from ensemble and boosting methodologies to achieve the desired scalability property. We present both theoretical and empirical analyses which show that PSBML preserves a critical property of boosting, specifically, convergence to a distribution centered around the margin. We then present additional empirical analyses showing that this meta-level algorithm provides a general and effective framework that can be used in combination with a variety of learning classifiers. We perform extensive experiments to investigate the trade-off achieved between scalability and accuracy, and robustness to noise, on both synthetic and real-world data. These empirical results corroborate our theoretical analysis, and demonstrate the potential of PSBML in achieving scalability without sacrificing accuracy.

  13. Theoretical ecology without species

    NASA Astrophysics Data System (ADS)

    Tikhonov, Mikhail

    The sequencing-driven revolution in microbial ecology demonstrated that discrete "species" are an inadequate description of the vast majority of life on our planet. Developing a novel theoretical language that, unlike classical ecology, would not require postulating the existence of species is a challenge of tremendous medical and environmental significance, and an exciting direction for theoretical physics. Here, it is proposed that community dynamics can be described in a naturally hierarchical way in terms of population fluctuation eigenmodes. The approach is applied to a simple model of division of labor in a multi-species community. In one regime, effective species with a core and accessory genome are shown to appear naturally as emergent concepts. However, the same model allows a transition into a regime where the species formalism becomes inadequate but the eigenmode description remains well-defined. Treating a community as a black box that expresses enzymes in response to resources reveals mathematically exact parallels between a community and a single coherent organism with its own fitness function. This coherence is a generic consequence of division of labor, requires no cooperative interactions, and can be expected to be widespread in microbial ecosystems. Harvard Center of Mathematical Sciences and Applications; John A. Paulson School of Engineering and Applied Sciences.

  14. Elastic Cherenkov effects in transversely isotropic soft materials-I: Theoretical analysis, simulations and inverse method

    NASA Astrophysics Data System (ADS)

    Li, Guo-Yang; Zheng, Yang; Liu, Yanlin; Destrade, Michel; Cao, Yanping

    2016-11-01

    A body force concentrated at a point and moving at a high speed can induce shear-wave Mach cones in dusty-plasma crystals or soft materials, as observed experimentally and named the elastic Cherenkov effect (ECE). The ECE in soft materials forms the basis of the supersonic shear imaging (SSI) technique, an ultrasound-based dynamic elastography method applied in clinics in recent years. Previous studies on the ECE in soft materials have focused on isotropic material models. In this paper, we investigate the existence and key features of the ECE in anisotropic soft media, by using both theoretical analysis and finite element (FE) simulations, and we apply the results to the non-invasive and non-destructive characterization of biological soft tissues. We also theoretically study the characteristics of the shear waves induced in a deformed hyperelastic anisotropic soft material by a source moving with high speed, considering that contact between the ultrasound probe and the soft tissue may lead to finite deformation. On the basis of our theoretical analysis and numerical simulations, we propose an inverse approach to infer both the anisotropic and hyperelastic parameters of incompressible transversely isotropic (TI) soft materials. Finally, we investigate the properties of the solutions to the inverse problem by deriving the condition numbers in analytical form and performing numerical experiments. In Part II of the paper, both ex vivo and in vivo experiments are conducted to demonstrate the applicability of the inverse method in practical use.

  15. Cosmic Star Formation: A Simple Model of the SFRD(z)

    NASA Astrophysics Data System (ADS)

    Chiosi, Cesare; Sciarratta, Mauro; D’Onofrio, Mauro; Chiosi, Emanuela; Brotto, Francesca; De Michele, Rosaria; Politino, Valeria

    2017-12-01

    We investigate the evolution of the cosmic star formation rate density (SFRD) from redshift z = 20 to z = 0 and compare it with the observational one by Madau and Dickinson derived from recent compilations of ultraviolet (UV) and infrared (IR) data. The theoretical SFRD(z) and its evolution are obtained using a simple model that folds together the star formation histories of prototype galaxies that are designed to represent real objects of different morphological type along the Hubble sequence and the hierarchical growing of structures under the action of gravity from small perturbations to large-scale objects in Λ-CDM cosmogony, i.e., the number density of dark matter halos N(M,z). Although the overall model is very simple and easy to set up, it provides results that mimic results obtained from highly complex large-scale N-body simulations well. The simplicity of our approach allows us to test different assumptions for the star formation law in galaxies, the effects of energy feedback from stars to interstellar gas, the efficiency of galactic winds, and also the effect of N(M,z). The result of our analysis is that in the framework of the hierarchical assembly of galaxies, the so-called time-delayed star formation under plain assumptions mainly for the energy feedback and galactic winds can reproduce the observational SFRD(z).
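    The observational benchmark referenced above is commonly summarized by the Madau and Dickinson fitting function for the SFRD. A minimal sketch evaluating that fit (the constants 0.015, 2.7, 2.9 and 5.6 are quoted from the Madau & Dickinson 2014 review and should be checked against the original; units are solar masses per year per cubic megaparsec):

```python
import math

def sfrd(z):
    """Madau & Dickinson (2014) fit to the cosmic star formation rate
    density, in Msun / yr / Mpc^3 (constants quoted from that review)."""
    return 0.015 * (1.0 + z) ** 2.7 / (1.0 + ((1.0 + z) / 2.9) ** 5.6)

# The fit rises from z = 0, peaks near "cosmic noon" (z ~ 2),
# and declines toward high redshift.
peak_z = max((z / 100.0 for z in range(1001)), key=sfrd)
```

    A theoretical SFRD(z) model of the kind described in the abstract would be compared against this curve over the full redshift range.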

  16. Accurate airway centerline extraction based on topological thinning using graph-theoretic analysis.

    PubMed

    Bian, Zijian; Tan, Wenjun; Yang, Jinzhu; Liu, Jiren; Zhao, Dazhe

    2014-01-01

    The quantitative analysis of the airway tree is of critical importance in the CT-based diagnosis and treatment of common pulmonary diseases. The extraction of the airway centerline is a precursor to identifying the airway's hierarchical structure, measuring geometrical parameters, and guiding visual detection. Traditional methods suffer from extra branches and circles due to incomplete segmentation results, which induce errors in subsequent analysis. This paper proposes an automatic and robust centerline extraction method for the airway tree. First, the centerline is located based on the topological thinning method: border voxels are deleted symmetrically and iteratively to preserve topological and geometrical properties. Second, the structural information is generated using graph-theoretic analysis. Then inaccurate circles are removed with a distance weighting strategy, and extra branches are pruned according to clinical anatomic knowledge. The centerline region without false appendices is eventually determined after the described phases. Experimental results show that the proposed method identifies more than 96% of branches, keeps consistency across different cases, and achieves a superior circle-free structure and centrality.

  17. Theoretical analysis of the correlation observed in fatigue crack growth rate parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chay, S.C.; Liaw, P.K.

    Fatigue crack growth rates have been found to follow the Paris-Erdogan rule, da/dN = C₀(ΔK)ⁿ, for many steels, aluminum, nickel and copper alloys. The fatigue crack growth rate behavior in the Paris regime can thus be characterized by the parameters C₀ and n, which have been obtained for various materials. When n is plotted against the logarithm of C₀ for various experimental results, a very definite linear relationship has been observed by many investigators, and questions have been raised as to the nature of this correlation. This paper presents a theoretical analysis that explains precisely why such a linear correlation should exist between the two parameters, how strong the relationship should be, and how it can be predicted by analysis. This analysis proves that the source of the correlation is mathematical rather than physical in nature.

  18. Theoretical foundations for finite-time transient stability and sensitivity analysis of power systems

    NASA Astrophysics Data System (ADS)

    Dasgupta, Sambarta

    Transient stability and sensitivity analysis of power systems are problems of enormous academic and practical interest. These classical problems have received renewed interest because of the advancement in sensor technology in the form of phasor measurement units (PMUs). The advancement in sensor technology has provided a unique opportunity for the development of real-time stability monitoring and sensitivity analysis tools. The transient stability problem in power systems is inherently one of stability analysis of non-equilibrium dynamics, because for a short time period following a fault or disturbance the system trajectory moves away from the equilibrium point. The real-time stability decision has to be made over this short time period. However, the existing stability definitions, and hence analysis tools, for transient stability are asymptotic in nature. In this thesis, we develop theoretical foundations for the short-term transient stability analysis of power systems, based on the theory of normally hyperbolic invariant manifolds and finite time Lyapunov exponents, adopted from the geometric theory of dynamical systems. The theory of normally hyperbolic surfaces allows us to characterize the rate of expansion and contraction of co-dimension one material surfaces in the phase space. The expansion and contraction rates of these material surfaces can be computed in finite time. We prove that the expansion and contraction rates can be used as finite time transient stability certificates. Furthermore, material surfaces with maximum expansion and contraction rate are identified with the stability boundaries. These stability boundaries are used for computation of the stability margin. We have used the theoretical framework for the development of model-based and model-free real-time stability monitoring methods. Both the model-based and model-free approaches rely on the availability of high resolution time series data from the PMUs for stability prediction. 
The problem of
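    The finite time Lyapunov exponents invoked above can be illustrated on a toy dynamical system. A minimal sketch in which the logistic map stands in for the power-system dynamics (purely illustrative; the thesis works with co-dimension one surfaces in a much higher-dimensional phase space):

```python
import math

def finite_time_lyapunov(r, x0, steps):
    """Finite-time Lyapunov exponent of the logistic map x -> r*x*(1-x):
    the orbit-averaged log stretching rate ln|f'(x)|, with f'(x) = r*(1-2x)."""
    x, total = x0, 0.0
    for _ in range(steps):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / steps

# Fully chaotic regime (r = 4): the exponent converges to ln 2 > 0.
# A stable regime (e.g. r = 2.5) yields a negative exponent instead.
lam = finite_time_lyapunov(4.0, 0.1234, 10000)
```

    A positive finite-time exponent certifies exponential divergence of nearby trajectories over the observation window, which is the kind of expansion-rate certificate the thesis develops.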

  19. A simple method for plasma total vitamin C analysis suitable for routine clinical laboratory use.

    PubMed

    Robitaille, Line; Hoffer, L John

    2016-04-21

    In-hospital hypovitaminosis C is highly prevalent but almost completely unrecognized. Medical awareness of this potentially important disorder is hindered by the inability of most hospital laboratories to determine plasma vitamin C concentrations. The availability of a simple, reliable method for analyzing plasma vitamin C could increase opportunities for routine plasma vitamin C analysis in clinical medicine. Plasma vitamin C can be analyzed by high performance liquid chromatography (HPLC) with electrochemical (EC) or ultraviolet (UV) light detection. We modified existing UV-HPLC methods for plasma total vitamin C analysis (the sum of ascorbic and dehydroascorbic acid) to develop a simple, constant-low-pH sample reduction procedure followed by isocratic reverse-phase HPLC separation using a purely aqueous low-pH non-buffered mobile phase. Although EC-HPLC is widely recommended over UV-HPLC for plasma total vitamin C analysis, the two methods have never been directly compared. We formally compared the simplified UV-HPLC method with EC-HPLC in 80 consecutive clinical samples. The simplified UV-HPLC method was less expensive, easier to set up, required fewer reagents and no pH adjustments, and demonstrated greater sample stability than many existing methods for plasma vitamin C analysis. When compared with the gold-standard EC-HPLC method in 80 consecutive clinical samples exhibiting a wide range of plasma vitamin C concentrations, it performed equivalently. The easy setup, simplicity and sensitivity of the plasma vitamin C analysis method described here could make it practical in a normally equipped hospital laboratory. Unlike any prior UV-HPLC method for plasma total vitamin C analysis, it was rigorously compared with the gold-standard EC-HPLC method and performed equivalently. Adoption of this method could increase the availability of plasma vitamin C analysis in clinical medicine.

  20. Simple effective rule to estimate the jamming packing fraction of polydisperse hard spheres.

    PubMed

    Santos, Andrés; Yuste, Santos B; López de Haro, Mariano; Odriozola, Gerardo; Ogarko, Vitaliy

    2014-04-01

    A recent proposal in which the equation of state of a polydisperse hard-sphere mixture is mapped onto that of the one-component fluid is extrapolated beyond the freezing point to estimate the jamming packing fraction ϕJ of the polydisperse system as a simple function of M1M3/M2², where Mk is the kth moment of the size distribution. An analysis of experimental and simulation data of ϕJ for a large number of different mixtures shows a remarkable general agreement with the theoretical estimate. To give extra support to the procedure, simulation data for seventeen mixtures in the high-density region are used to infer the equation of state of the pure hard-sphere system in the metastable region. An excellent collapse of the inferred curves up to the glass transition and a significant narrowing of the different out-of-equilibrium glass branches all the way to jamming are observed. Thus, the present approach provides an extremely simple criterion to unify in a common framework and to give coherence to data coming from very different polydisperse hard-sphere mixtures.
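    The moment combination controlling the estimate above is simple to compute for any discrete size distribution. A minimal sketch (helper names are ours; the paper's actual mapping from this ratio to ϕJ is not reproduced here):

```python
def moment(diameters, k):
    """k-th raw moment M_k of a particle-size sample (equal number weights)."""
    return sum(d ** k for d in diameters) / len(diameters)

def size_ratio(diameters):
    """M1*M3 / M2**2: equals 1 for a monodisperse system and, by the
    Cauchy-Schwarz inequality, is >= 1 for any polydisperse one."""
    return moment(diameters, 1) * moment(diameters, 3) / moment(diameters, 2) ** 2

mono = [1.0] * 100                # one-component system: ratio is exactly 1
binary = [1.0] * 50 + [2.0] * 50  # 50:50 binary mixture: ratio exceeds 1
```

    Because the ratio collapses the whole size distribution into one scalar, very different mixtures with the same value are predicted to jam at the same packing fraction.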

  1. Deep and Structured Robust Information Theoretic Learning for Image Analysis.

    PubMed

    Deng, Yue; Bao, Feng; Deng, Xuesong; Wang, Ruiping; Kong, Youyong; Dai, Qionghai

    2016-07-07

    This paper presents a robust information theoretic (RIT) model to reduce the uncertainties, i.e. missing and noisy labels, in general discriminative data representation tasks. The fundamental pursuit of our model is to simultaneously learn a transformation function and a discriminative classifier that maximize the mutual information of data and their labels in the latent space. In this general paradigm, we respectively discuss three types of the RIT implementations with linear subspace embedding, deep transformation and structured sparse learning. In practice, the RIT and deep RIT are exploited to solve the image categorization task whose performances will be verified on various benchmark datasets. The structured sparse RIT is further applied to a medical image analysis task for brain MRI segmentation that allows group-level feature selections on the brain tissues.

  2. Blade loss transient dynamics analysis, volume 1. Task 2: TETRA 2 theoretical development

    NASA Technical Reports Server (NTRS)

    Gallardo, Vincente C.; Black, Gerald

    1986-01-01

    The theoretical development of the forced steady state analysis of the structural dynamic response of a turbine engine having nonlinear connecting elements is discussed. Based on modal synthesis and the principle of harmonic balance, the governing relations are the compatibility of displacements at the nonlinear connecting elements. There are four displacement compatibility equations at each nonlinear connection, which are solved by iteration for the principal harmonic of the excitation frequency. The resulting computer program, TETRA 2, combines the original TETRA transient analysis (with flexible bladed disk) with the steady state capability. A more versatile nonlinear rub or bearing element, which contains a hardening (or softening) spring with or without deadband, is also incorporated.

  3. NDARC-NASA Design and Analysis of Rotorcraft Theoretical Basis and Architecture

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2010-01-01

    The theoretical basis and architecture of the conceptual design tool NDARC (NASA Design and Analysis of Rotorcraft) are described. The principal tasks of NDARC are to design (or size) a rotorcraft to satisfy specified design conditions and missions, and then analyze the performance of the aircraft for a set of off-design missions and point operating conditions. The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated. The aircraft attributes are obtained from the sum of the component attributes. NDARC provides a capability to model general rotorcraft configurations, and estimate the performance and attributes of advanced rotor concepts. The software has been implemented with low-fidelity models, typical of the conceptual design environment. Incorporation of higher-fidelity models will be possible, as the architecture of the code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis and optimization.

  4. [Theoretical analysis of recompression-based therapies of decompression illness].

    PubMed

    Nikolaev, V P; Sokolov, G M; Komarevtsev, V N

    2011-01-01

    Theoretical analysis is concerned with the benefits of oxygen, air and nitrogen-helium-oxygen recompression schedules used to treat decompression illness in divers. Mathematical modeling of tissue bubble dynamics during diving shows that one-hour oxygen recompression to 200 kPa does not essentially diminish the size of a bubble enclosed in a layer that reduces tenfold the intensity of gas diffusion from bubbles. However, these bubbles dissolve fully in all body tissues equally after 2-hr. air compression to 800 kPa and ensuing 2-day decompression by the Russian navy tables, and after 1.5-hr. N-He-O2 compression to this pressure followed by 5-day decompression. The overriding advantage of gas mixture recompression is that it obviates the narcotic action of nitrogen at the peak of chamber pressure and does not create dangerous tissue supersaturation or conditions for the emergence of large bubbles at the end of decompression.

  5. Game-theoretic equilibrium analysis applications to deregulated electricity markets

    NASA Astrophysics Data System (ADS)

    Joung, Manho

    This dissertation examines game-theoretic equilibrium analysis applications to deregulated electricity markets. In particular, three specific applications are discussed: analyzing the competitive effects of ownership of financial transmission rights, developing a dynamic game model considering the ramp rate constraints of generators, and analyzing strategic behavior in electricity capacity markets. In the financial transmission right application, an investigation is made of how generators' ownership of financial transmission rights may influence the effects of the transmission lines on competition. In the second application, the ramp rate constraints of generators are explicitly modeled using a dynamic game framework, and the equilibrium is characterized as the Markov perfect equilibrium. Finally, the strategic behavior of market participants in electricity capacity markets is analyzed and it is shown that the market participants may exaggerate their available capacity in a Nash equilibrium. It is also shown that the more conservative the independent system operator's capacity procurement, the higher the risk of exaggerated capacity offers.

  6. Theoretical and Experimental Spectroscopic Analysis of Cyano-Substituted Styrylpyridine Compounds

    PubMed Central

    Castro, Maria Eugenia; Percino, Maria Judith; Chapela, Victor M.; Ceron, Margarita; Soriano-Moro, Guillermo; Lopez-Cruz, Jorge; Melendez, Francisco J.

    2013-01-01

    A combined theoretical and experimental study on the structure, infrared, UV-Vis and ¹H NMR data of trans-2-(m-cyanostyryl)pyridine, trans-2-[3-methyl-(m-cyanostyryl)]pyridine and trans-4-(m-cyanostyryl)pyridine is presented. The synthesis was carried out with an efficient Knoevenagel condensation using green chemistry conditions. Theoretical geometry optimizations and their IR spectra were carried out using the Density Functional Theory (DFT) in both gas and solution phases. For theoretical UV-Vis and ¹H NMR spectra, the Time-Dependent DFT (TD-DFT) and the Gauge-Including Atomic Orbital (GIAO) methods were used, respectively. The theoretical characterization matched the experimental measurements, showing a good correlation. The effect of cyano- and methyl-substituents, as well as of the N-atom position in the pyridine ring, on the UV-Vis, IR and NMR spectra was evaluated. The UV-Vis results showed no significant effect due to electron-withdrawing cyano- and electron-donating methyl-substituents. The N-atom position, however, caused a slight change in the maximum absorption wavelengths. The IR normal modes were assigned for the cyano- and methyl-groups. ¹H NMR spectra showed the typical doublet signals due to protons in the trans position of a double bond. The theoretical characterization was visibly useful for accurately assigning the signals in the IR and ¹H NMR spectra, as well as for identifying the most probable conformation present in the formation of the styrylpyridine-like compounds. PMID:23429190

  7. Free radical scavenging and COX-2 inhibition by simple colon metabolites of polyphenols: A theoretical approach.

    PubMed

    Amić, Ana; Marković, Zoran; Marković, Jasmina M Dimitrić; Jeremić, Svetlana; Lučić, Bono; Amić, Dragan

    2016-12-01

    Free radical scavenging and inhibitory potency against cyclooxygenase-2 (COX-2) of two abundant colon metabolites of polyphenols, i.e., 3-hydroxyphenylacetic acid (3-HPAA) and 4-hydroxyphenylpropionic acid (4-HPPA), were theoretically studied. Different free radical scavenging mechanisms are investigated in water and pentyl ethanoate as solvents. By considering the electronic properties of the scavenged free radicals, the hydrogen atom transfer (HAT) and sequential proton loss electron transfer (SPLET) mechanisms are found to be thermodynamically probable and competitive processes in both media. The Gibbs free energy change for the inactivation of free radicals indicates that 3-HPAA and 4-HPPA are potent scavengers. Their reactivity toward free radicals was predicted to decrease as follows: hydroxyl > alkoxyls > phenoxyl ≈ peroxyls > superoxide. The demonstrated free radical scavenging potency of 3-HPAA and 4-HPPA, along with the high μM concentrations produced by microbial colon degradation of polyphenols, could enable at least in situ inactivation of free radicals. Docking analysis with the structural forms of 3-HPAA and 4-HPPA indicates the dianionic ligands as potent inhibitors of COX-2, an inducible enzyme involved in colon carcinogenesis. The obtained results suggest that suppressed levels of free radicals and COX-2 could be achieved by 3-HPAA and 4-HPPA, indicating that these compounds may contribute to a reduced risk of colon cancer development. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. A simple model for indentation creep

    NASA Astrophysics Data System (ADS)

    Ginder, Ryan S.; Nix, William D.; Pharr, George M.

    2018-03-01

    A simple model for indentation creep is developed that allows one to directly convert creep parameters measured in indentation tests to those observed in uniaxial tests through simple closed-form relationships. The model is based on the expansion of a spherical cavity in a power law creeping material modified to account for indentation loading in a manner similar to that developed by Johnson for elastic-plastic indentation (Johnson, 1970). Although only approximate in nature, the simple mathematical form of the new model makes it useful for general estimation purposes or in the development of other deformation models in which a simple closed-form expression for the indentation creep rate is desirable. Comparison to a more rigorous analysis which uses finite element simulation for numerical evaluation shows that the new model predicts uniaxial creep rates within a factor of 2.5, and usually much better than this, for materials creeping with stress exponents in the range 1 ≤ n ≤ 7. The predictive capabilities of the model are evaluated by comparing it to the more rigorous analysis and several sets of experimental data in which both the indentation and uniaxial creep behavior have been measured independently.

  9. Physical and optical properties of DCJTB dye for OLED display applications: Experimental and theoretical investigation

    NASA Astrophysics Data System (ADS)

    Kurban, Mustafa; Gündüz, Bayram

    2017-06-01

    In this study, the electronic, optical and spectroscopic properties of 4-(dicyanomethylene)-2-tert-butyl-6-(1,1,7,7-tetramethyljulolidin-4-yl-vinyl)-4H-pyran (DCJTB) were investigated, first experimentally using both solution and thin-film techniques and then by theoretical calculations. Theory predicted one intense electronic transition at 505.26 nm, in good agreement with the measured values of 505.00 nm (solution technique) and 503 nm (film technique). Experimental and simple models were also used to calculate the optical refractive index (n) of the DCJTB molecule. The structural and electronic properties were calculated using density functional theory (DFT) with the B3LYP/6-311G(d,p) basis set. UV and FT-IR spectral characteristics and electronic properties, such as the frontier orbitals and band gap energy (Eg) of DCJTB, were also obtained using the time-dependent (TD) DFT approach. The theoretical Eg value was found to be 2.269 eV, which is consistent with the experimental result obtained from the solution technique in THF (2.155 eV) and with the literature (2.16 eV). The results reveal that the solution technique is simple, cost-efficient and safe for optoelectronic applications when compared with the film technique.

  10. Chaos and simple determinism in reversed field pinch plasmas: Nonlinear analysis of numerical simulation and experimental data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watts, Christopher A.

    In this dissertation the possibility that chaos and simple determinism govern the dynamics of reversed field pinch (RFP) plasmas is investigated. To properly assess this possibility, data from both numerical simulations and experiment are analyzed. A large repertoire of nonlinear analysis techniques is used to identify low dimensional chaos in the data. These tools include phase portraits and Poincare sections, correlation dimension, the spectrum of Lyapunov exponents and short term predictability. In addition, nonlinear noise reduction techniques are applied to the experimental data in an attempt to extract any underlying deterministic dynamics. Two model systems are used to simulate the plasma dynamics. These are the DEBS code, which models global RFP dynamics, and the dissipative trapped electron mode (DTEM) model, which models drift wave turbulence. Data from both simulations show strong indications of low dimensional chaos and simple determinism. Experimental data were obtained from the Madison Symmetric Torus RFP and consist of a wide array of both global and local diagnostic signals. None of the signals shows any indication of low dimensional chaos or simple determinism. Moreover, most of the analysis tools indicate the experimental system is very high dimensional, with properties similar to noise. Nonlinear noise reduction is unsuccessful at extracting an underlying deterministic system.

  11. Theoretical Analysis of Penalized Maximum-Likelihood Patlak Parametric Image Reconstruction in Dynamic PET for Lesion Detection.

    PubMed

    Yang, Li; Wang, Guobao; Qi, Jinyi

    2016-04-01

    Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreements between the theoretical predictions and the Monte Carlo results are observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.

  12. Use of paired simple and complex models to reduce predictive bias and quantify uncertainty

    NASA Astrophysics Data System (ADS)

    Doherty, John; Christensen, Steen

    2011-12-01

    Modern environmental management and decision-making is based on the use of increasingly complex numerical models. Such models have the advantage of allowing representation of complex processes and heterogeneous system property distributions inasmuch as these are understood at any particular study site. The latter are often represented stochastically, this reflecting knowledge of the character of system heterogeneity at the same time as it reflects a lack of knowledge of its spatial details. Unfortunately, however, complex models are often difficult to calibrate because of their long run times and sometimes questionable numerical stability. Analysis of predictive uncertainty is also a difficult undertaking when using models such as these. Such analysis must reflect a lack of knowledge of spatial hydraulic property details. At the same time, it must be subject to constraints on the spatial variability of these details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often promulgates good calibration and ready implementation of sophisticated methods of calibration-constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology that attempts to overcome the problems associated with complex models on the one hand and simple models on the other hand, while allowing access to the benefits each of them offers. It provides a theoretical analysis of the simplification process from a subspace point of view, this yielding insights into the costs of model simplification, and into how some of these costs may be reduced. It then describes a methodology for paired model usage through which predictive

  13. Noninvasive Tests Do Not Accurately Differentiate Nonalcoholic Steatohepatitis From Simple Steatosis: A Systematic Review and Meta-analysis.

    PubMed

    Verhaegh, Pauline; Bavalia, Roisin; Winkens, Bjorn; Masclee, Ad; Jonkers, Daisy; Koek, Ger

    2018-06-01

    Nonalcoholic fatty liver disease is a rapidly increasing health problem. Liver biopsy analysis is the most sensitive test to differentiate between nonalcoholic steatohepatitis (NASH) and simple steatosis (SS), but noninvasive methods are needed. We performed a systematic review and meta-analysis of noninvasive tests for differentiating NASH from SS, focusing on blood markers. We performed a systematic search of the PubMed, Medline and Embase (1990-2016) databases using defined keywords, limited to full-text papers in English and human adults, and identified 2608 articles. Two independent reviewers screened the articles and identified 122 eligible articles that used liver biopsy as reference standard. If at least 2 studies were available, pooled sensitivity (sensₚ) and specificity (specₚ) values were determined using the Meta-Analysis Package for R (metafor). In the 122 studies analyzed, 219 different blood markers (107 single markers and 112 scoring systems) were identified to differentiate NASH from simple steatosis, and 22 other diagnostic tests were studied. Markers identified related to several pathophysiological mechanisms. The markers analyzed in the largest proportions of studies were alanine aminotransferase (sensₚ 63.5% and specₚ 74.4%) within routine biochemical tests, adiponectin (sensₚ 72.0% and specₚ 75.7%) within inflammatory markers, CK18-M30 (sensₚ 68.4% and specₚ 74.2%) within markers of cell death or proliferation, and homeostatic model assessment of insulin resistance (sensₚ 69.0% and specₚ 72.7%) within the metabolic markers. Two scoring systems could also be pooled: the NASH test (differentiated NASH from borderline NASH plus simple steatosis with 22.9% sensₚ and 95.3% specₚ) and the GlycoNASH test (67.1% sensₚ and 63.8% specₚ). In the meta-analysis, we found no test to differentiate NASH from SS with a high level of pooled sensitivity and specificity (≥80%).
However, some blood markers, when included in scoring

  14. Asymmetric simple exclusion process with position-dependent hopping rates: Phase diagram from boundary-layer analysis.

    PubMed

    Mukherji, Sutapa

    2018-03-01

    In this paper, we study a one-dimensional totally asymmetric simple exclusion process with position-dependent hopping rates. Under open boundary conditions, this system exhibits boundary-induced phase transitions in the steady state. Similarly to totally asymmetric simple exclusion processes with uniform hopping, the phase diagram consists of low-density, high-density, and maximal-current phases. In various phases, the shape of the average particle density profile across the lattice including its boundary-layer parts changes significantly. Using the tools of boundary-layer analysis, we obtain explicit solutions for the density profile in different phases. A detailed analysis of these solutions under different boundary conditions helps us obtain the equations for various phase boundaries. Next, we show how the shape of the entire density profile including the location of the boundary layers can be predicted from the fixed points of the differential equation describing the boundary layers. We discuss this in detail through several examples of density profiles in various phases. The maximal-current phase appears to be an especially interesting phase where the boundary layer flows to a bifurcation point on the fixed-point diagram.
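    The open-boundary dynamics described above are easy to mimic with a random-sequential Monte Carlo update. The sketch below is an illustration, not the paper's boundary-layer calculation: it uses uniform hopping rates and entry/exit rates chosen to sit in the low-density phase, where the bulk density tracks the entry rate α.

```python
import random

def tasep_density(rates, alpha, beta, steps=100000, seed=1):
    """Random-sequential Monte Carlo of an open-boundary TASEP.
    alpha: entry rate, beta: exit rate, rates[i]: hop rate from site i to i+1."""
    random.seed(seed)
    L = len(rates)
    occ = [0] * L
    density = [0.0] * L
    for _ in range(steps):
        i = random.randrange(-1, L)                  # pick a bond; -1 is the entry bond
        if i == -1:                                  # particle injection at site 0
            if occ[0] == 0 and random.random() < alpha:
                occ[0] = 1
        elif i == L - 1:                             # particle extraction at site L-1
            if occ[-1] == 1 and random.random() < beta:
                occ[-1] = 0
        elif occ[i] == 1 and occ[i + 1] == 0 and random.random() < rates[i]:
            occ[i], occ[i + 1] = 0, 1                # hop one site to the right
        for j in range(L):
            density[j] += occ[j]
    return [d / steps for d in density]

# Uniform rates with small alpha and large beta -> low-density phase (bulk density ~ alpha).
rho = tasep_density([1.0] * 20, alpha=0.2, beta=0.8)
print(rho[10])
```

    Passing a position-dependent `rates` list reproduces the setting of the paper, where the density profile and its boundary layers deform with the local hopping rate.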

  15. Asymmetric simple exclusion process with position-dependent hopping rates: Phase diagram from boundary-layer analysis

    NASA Astrophysics Data System (ADS)

    Mukherji, Sutapa

    2018-03-01

    In this paper, we study a one-dimensional totally asymmetric simple exclusion process with position-dependent hopping rates. Under open boundary conditions, this system exhibits boundary-induced phase transitions in the steady state. Similarly to totally asymmetric simple exclusion processes with uniform hopping, the phase diagram consists of low-density, high-density, and maximal-current phases. In various phases, the shape of the average particle density profile across the lattice including its boundary-layer parts changes significantly. Using the tools of boundary-layer analysis, we obtain explicit solutions for the density profile in different phases. A detailed analysis of these solutions under different boundary conditions helps us obtain the equations for various phase boundaries. Next, we show how the shape of the entire density profile including the location of the boundary layers can be predicted from the fixed points of the differential equation describing the boundary layers. We discuss this in detail through several examples of density profiles in various phases. The maximal-current phase appears to be an especially interesting phase where the boundary layer flows to a bifurcation point on the fixed-point diagram.

  16. Two-way ANOVA Problems with Simple Numbers.

    ERIC Educational Resources Information Center

    Read, K. L. Q.; Shihab, L. H.

    1998-01-01

    Describes how to construct simple numerical examples in two-way ANOVAs, specifically randomized blocks, balanced two-way layouts, and Latin squares. Indicates that working through simple numerical problems is helpful to students meeting a technique for the first time and should be followed by computer-based analysis of larger, real datasets when…
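    In the spirit of the article, here is one hand-checkable randomized-block example (the data are made-up "simple numbers", not taken from the paper): the sums of squares partition into whole numbers and the treatment F-ratio is an integer.

```python
import numpy as np

# A hand-checkable randomized-block layout with made-up "simple numbers"
# (rows = treatments, columns = blocks).
y = np.array([[ 9., 12., 15.],
              [10., 14., 15.],
              [14., 16., 21.]])
grand = y.mean()                                               # grand mean = 14
ss_total = ((y - grand) ** 2).sum()                            # 100
ss_treat = y.shape[1] * ((y.mean(axis=1) - grand) ** 2).sum()  # 42
ss_block = y.shape[0] * ((y.mean(axis=0) - grand) ** 2).sum()  # 54
ss_resid = ss_total - ss_treat - ss_block                      # 4
df_treat, df_block = y.shape[0] - 1, y.shape[1] - 1
df_resid = df_treat * df_block
F = (ss_treat / df_treat) / (ss_resid / df_resid)              # 21.0
print(ss_treat, ss_block, ss_resid, F)
```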

  17. Graph-Theoretic Analysis of Monomethyl Phosphate Clustering in Ionic Solutions.

    PubMed

    Han, Kyungreem; Venable, Richard M; Bryant, Anne-Marie; Legacy, Christopher J; Shen, Rong; Li, Hui; Roux, Benoît; Gericke, Arne; Pastor, Richard W

    2018-02-01

    All-atom molecular dynamics simulations combined with graph-theoretic analysis reveal that clustering of monomethyl phosphate dianion (MMP²⁻) is strongly influenced by the types and combinations of cations in the aqueous solution. Although Ca²⁺ promotes the formation of stable and large MMP²⁻ clusters, K⁺ alone does not. Nonetheless, clusters are larger and their link lifetimes are longer in mixtures of K⁺ and Ca²⁺. This "synergistic" effect depends sensitively on the Lennard-Jones interaction parameters between Ca²⁺ and the phosphorus oxygen and correlates with the hydration of the clusters. The pronounced MMP²⁻ clustering effect of Ca²⁺ in the presence of K⁺ is confirmed by Fourier transform infrared spectroscopy. The characterization of the cation-dependent clustering of MMP²⁻ provides a starting point for understanding cation-dependent clustering of phosphoinositides in cell membranes.
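    The graph-theoretic cluster statistics above rest on a standard construction: particles are nodes, an edge joins any pair within a distance cutoff, and clusters are the connected components of that graph. A minimal, self-contained sketch (toy coordinates and cutoff, not simulation data):

```python
import numpy as np

def clusters(points, cutoff):
    """Connected components of the contact graph: nodes are particles,
    edges link pairs closer than cutoff (the graph-theoretic cluster criterion)."""
    n = len(points)
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) < cutoff:
                adj[i].append(j)
                adj[j].append(i)
    seen, comps = set(), []
    for s in range(n):           # depth-first search from each unvisited node
        if s in seen:
            continue
        stack, comp = [s], []
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            comp.append(u)
            stack.extend(adj[u])
        comps.append(sorted(comp))
    return comps

pts = np.array([[0, 0, 0], [0.3, 0, 0], [5, 5, 5], [5.2, 5, 5], [9, 9, 9]])
print(clusters(pts, cutoff=1.0))  # three components: [0, 1], [2, 3], [4]
```

    Tracking how these components merge and split between trajectory frames gives the link lifetimes discussed in the abstract.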

  18. Introduction, comparison, and validation of Meta‐Essentials: A free and simple tool for meta‐analysis

    PubMed Central

    van Rhee, Henk; Hak, Tony

    2017-01-01

    We present a new tool for meta‐analysis, Meta‐Essentials, which is free of charge and easy to use. In this paper, we introduce the tool and compare its features to other tools for meta‐analysis. We also provide detailed information on the validation of the tool. Although free of charge and simple, Meta‐Essentials automatically calculates effect sizes from a wide range of statistics and can be used for a wide range of meta‐analysis applications, including subgroup analysis, moderator analysis, and publication bias analyses. The confidence interval of the overall effect is automatically based on the Knapp‐Hartung adjustment of the DerSimonian‐Laird estimator. However, more advanced meta‐analysis methods such as meta‐analytical structural equation modelling and meta‐regression with multiple covariates are not available. In summary, Meta‐Essentials may prove a valuable resource for meta‐analysts, including researchers, teachers, and students. PMID:28801932

  19. Theoretical geology

    NASA Astrophysics Data System (ADS)

    Mikeš, Daniel

    2010-05-01

    erroneous assumptions and do not solve the very fundamental issue that lies at the base of the problem. This problem is straightforward and obvious: a sedimentary system is inherently four-dimensional (3 spatial dimensions + 1 temporal dimension). Any method using an inferior number of dimensions is bound to fail to describe the evolution of a sedimentary system. It is indicative of the present-day geological world that such fundamental issues are overlooked. The only reason one can point to is the so-called "rationality" of today's society. Simple "common sense" leads us to the conclusion that in this case the empirical method is bound to fail and the only method that can solve the problem is the theoretical approach. This reasoning is completely trivial for the traditional exact sciences like physics and mathematics and applied sciences like engineering. However, it is not for geology, a science that was traditionally descriptive and jumped to empirical science, skipping the stage of theoretical science. I argue that the gap of theoretical geology is left open and needs to be filled. Every discipline in geology lacks a theoretical base. This base can only be filled by the theoretical/inductive approach and cannot possibly be filled by the empirical/deductive approach. Once a critical mass of geologists realises this flaw in today's geology, we can start solving the fundamental problems in geology.

  20. Predominant information quality scheme for the essential amino acids: an information-theoretical analysis.

    PubMed

    Esquivel, Rodolfo O; Molina-Espíritu, Moyocoyani; López-Rosa, Sheila; Soriano-Correa, Catalina; Barrientos-Salcedo, Carolina; Kohout, Miroslav; Dehesa, Jesús S

    2015-08-24

    In this work we undertake a pioneering information-theoretical analysis of 18 selected amino acids extracted from a natural protein, bacteriorhodopsin (1C3W). The conformational structures of each amino acid are analyzed by use of various quantum chemistry methodologies at high levels of theory: HF, M062X and CISD(Full). The Shannon entropy, Fisher information and disequilibrium are determined to grasp the spatial spreading features of delocalizability, order and uniformity of the optimized structures. These three entropic measures uniquely characterize all amino acids through a predominant information-theoretic quality scheme (PIQS), which gathers all chemical families by means of three major spreading features: delocalization, narrowness and uniformity. This scheme recognizes four major chemical families: aliphatic (delocalized), aromatic (delocalized), electro-attractive (narrowed) and tiny (uniform). All chemical families recognized by the existing energy-based classifications are embraced by this entropic scheme. Finally, novel chemical patterns are shown in the information planes associated with the PIQS entropic measures. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. PLANS: A finite element program for nonlinear analysis of structures. Volume 1: Theoretical manual

    NASA Technical Reports Server (NTRS)

    Pifko, A.; Levine, H. S.; Armen, H., Jr.

    1975-01-01

    The PLANS system is described which is a finite element program for nonlinear analysis. The system represents a collection of special purpose computer programs each associated with a distinct physical problem class. Modules of PLANS specifically referenced and described in detail include: (1) REVBY, for the plastic analysis of bodies of revolution; (2) OUT-OF-PLANE, for the plastic analysis of 3-D built-up structures where membrane effects are predominant; (3) BEND, for the plastic analysis of built-up structures where bending and membrane effects are significant; (4) HEX, for the 3-D elastic-plastic analysis of general solids; and (5) OUT-OF-PLANE-MG, for material and geometrically nonlinear analysis of built-up structures. The SATELLITE program for data debugging and plotting of input geometries is also described. The theoretical foundations upon which the analysis is based are presented. Discussed are the form of the governing equations, the methods of solution, plasticity theories available, a general system description and flow of the programs, and the elements available for use.

  2. Issues with the SIMPLE Model: Comment on Brown, Neath, and Chater (2007)

    ERIC Educational Resources Information Center

    Murdock, Bennet

    2008-01-01

    Comments on the article A temporal ratio model of memory by Brown, Neath, and Chater. SIMPLE (G. D. A. Brown, I. Neath, & N. Chater, 2007) attempts to explain data from serial recall and free recall in the same theoretical framework. While it can fit the free-recall serial-position curves that are the cornerstone of the 2-store buffer model, it…

  3. A simple method of fabricating mask-free microfluidic devices for biological analysis

    PubMed Central

    Yi, Xin; Kodzius, Rimantas; Gong, Xiuqing; Xiao, Kang; Wen, Weijia

    2010-01-01

    We report a simple, low-cost, rapid, and mask-free method to fabricate two-dimensional (2D) and three-dimensional (3D) microfluidic chips for biological analysis research. In this fabrication process, a laser system is used to cut through paper to form intricate patterns and differently configured channels for specific purposes. Bonded with cyanoacrylate-based resin, the prepared paper sheet is sandwiched between glass slides (hydrophilic) or polymer-based plates (hydrophobic) to obtain a multilayer structure. In order to examine the chip’s biocompatibility and applicability, protein concentration was measured while DNA capillary electrophoresis was carried out, and both show positive results. With the utilization of direct laser cutting and one-step gas-sacrificing techniques, the whole fabrication process for complicated 2D and 3D microfluidic devices is shortened to several minutes, which makes these devices a good alternative to poly(dimethylsiloxane) microfluidic chips used in biological analysis research. PMID:20890452

  4. Simple Levelized Cost of Energy (LCOE) Calculator Documentation | Energy

    Science.gov Websites

    Analysis | NREL. Simple Levelized Cost of Energy (LCOE) Calculator Documentation. This is a simple calculator: 1) Cost and Performance: adjust the sliders to suitable values for each of the cost and performance parameters.
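    The calculator documented above combines a handful of slider inputs into a single figure. A minimal re-implementation of the generic simple-LCOE formula (the input values below are illustrative placeholders, not NREL defaults):

```python
def simple_lcoe(capex, fixed_om, var_om, capacity_factor, fcr):
    """Simple levelized cost of energy in $/MWh.
    capex and fixed_om are in $/kW and $/kW-yr, var_om in $/MWh;
    fcr is the fixed charge rate that annualizes the capital cost."""
    annual_mwh_per_kw = capacity_factor * 8760 / 1000.0  # MWh/yr per kW of capacity
    return (fcr * capex + fixed_om) / annual_mwh_per_kw + var_om

# Illustrative inputs (not NREL defaults): $1500/kW capex, $20/kW-yr fixed O&M,
# $2/MWh variable O&M, 35% capacity factor, 8% fixed charge rate.
lcoe = simple_lcoe(1500.0, 20.0, 2.0, 0.35, 0.08)
print(round(lcoe, 2))  # ~47.66 $/MWh
```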

  5. Theoretical analysis, design and development of a 27-MHz folded loop antenna as a potential applicator in hyperthermia treatment.

    PubMed

    Kouloulias, Vassilis; Karanasiou, Irene; Giamalaki, Melina; Matsopoulos, George; Kouvaris, John; Kelekis, Nikolaos; Uzunoglu, Nikolaos

    2015-02-01

    A hyperthermia system using a folded loop antenna applicator at 27 MHz for soft tissue treatment was investigated both theoretically and experimentally to evaluate its clinical value. The electromagnetic analysis of a 27-MHz folded loop antenna for use in human tissue was based on a customised software tool and led to the design and development of the proposed hyperthermia system. The system was experimentally validated using specific absorption rate (SAR) distribution estimations through temperature distribution measurements of a muscle tissue phantom after electromagnetic exposure. Various scenarios for optimal antenna positioning were also performed. Comparison of the theoretical and experimental analysis results shows satisfactory agreement. The SAR level of 50% reaches 8 cm depth in the tissue phantom. Thus, based on the maximum observed SAR values that were of the order of 100 W/kg, the antenna specified is suitable for deep tumour heating. Theoretical and experimental SAR distribution results as derived from this study are in agreement. The proposed folded loop antenna seems appropriate for use in hyperthermia treatment, achieving proper planning and local treatment of deeply seated affected areas and lesions.

  6. Simple method to measure the refractive index of liquid with graduated cylinder and beaker.

    PubMed

    An, Yu-Kuan

    2017-12-01

    A simple method is introduced to measure the refractive index (RI) of a liquid with an experimental device composed of a graduated cylinder and a beaker which are coaxial. A magnified image of the graduated cylinder is formed as the liquid is poured into the beaker. Optical path analysis indicates that the RI of the liquid is equal to the product of the image's diameter magnification and the RI of air, irrelevant to the beaker. Theoretically, the RI measurement range is unlimited and the liquid dosage could be small as well. The device is used to carry out experiments by means of both the photographic method and telescope method to measure RIs of three kinds of liquids. The results show that the measured RIs all fit their published values well.
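    The measurement principle above reduces to a single multiplication. A trivial sketch with a hypothetical reading (the diameters are invented for illustration):

```python
def liquid_refractive_index(image_diameter, true_diameter, n_air=1.000293):
    """Per the method above: liquid RI = image diameter magnification x RI of air."""
    return (image_diameter / true_diameter) * n_air

# Hypothetical reading: a 20.0 mm wide graduation appears 26.6 mm wide through the liquid.
n_liquid = liquid_refractive_index(26.6, 20.0)
print(round(n_liquid, 3))  # ~1.330, close to water
```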

  7. Simple method to measure the refractive index of liquid with graduated cylinder and beaker

    NASA Astrophysics Data System (ADS)

    An, Yu-Kuan

    2017-12-01

    A simple method is introduced to measure the refractive index (RI) of a liquid with an experimental device composed of a graduated cylinder and a beaker which are coaxial. A magnified image of the graduated cylinder is formed as the liquid is poured into the beaker. Optical path analysis indicates that the RI of the liquid is equal to the product of the image's diameter magnification and the RI of air, irrelevant to the beaker. Theoretically, the RI measurement range is unlimited and the liquid dosage could be small as well. The device is used to carry out experiments by means of both the photographic method and telescope method to measure RIs of three kinds of liquids. The results show that the measured RIs all fit their published values well.

  8. Optimal design of an activated sludge plant: theoretical analysis

    NASA Astrophysics Data System (ADS)

    Islam, M. A.; Amin, M. S. A.; Hoinkis, J.

    2013-06-01

    The design procedure of an activated sludge plant consisting of an activated sludge reactor and settling tank has been theoretically analyzed assuming that (1) the Monod equation completely describes the growth kinetics of microorganisms causing the degradation of biodegradable pollutants and (2) the settling characteristics are fully described by a power law. For a given reactor height, the design parameter of the reactor (reactor volume) is reduced to the reactor area. Then the sum total area of the reactor and the settling tank is expressed as a function of activated sludge concentration X and the recycled ratio α. A procedure has been developed to calculate X_opt, for which the total required area of the plant is minimum for a given microbiological system and recycled ratio. Mathematical relations have been derived to calculate the α-range in which X_opt meets the requirements of the F/M ratio. Results of the analysis have been illustrated for varying X and α. Mathematical formulae have been proposed to recalculate the recycled ratio in the event that the influent parameters differ from those assumed in the design.
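    The trade-off behind an optimal sludge concentration can be sketched numerically: reactor area falls roughly as 1/X, while under a power-law settling velocity the settler area grows with X, so the total area has an interior minimum. The model below is a schematic stand-in for the paper's equations, and every parameter value is an assumed placeholder:

```python
import numpy as np

# Assumed placeholder parameters (not from the paper):
Q, S0, H = 1000.0, 0.3, 4.0      # flow (m^3/d), substrate load (kg/m^3), reactor depth (m)
k, v0, n = 2.0, 150.0, 2.0       # removal rate (1/d), settling-law coefficient and exponent

X = np.linspace(1.0, 10.0, 500)          # activated sludge concentration (kg/m^3)
reactor_area = Q * S0 / (k * H * X)      # reactor volume Q*S0/(k*X) spread over depth H
settler_area = Q * X ** n / v0           # area for a power-law settling velocity v0 * X**-n
total = reactor_area + settler_area
x_opt = X[np.argmin(total)]              # concentration minimizing total plant area
print(round(x_opt, 2))
```

    The paper's procedure additionally constrains this minimum to the α-range where X_opt satisfies the F/M ratio requirement.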

  9. Introduction, comparison, and validation of Meta-Essentials: A free and simple tool for meta-analysis.

    PubMed

    Suurmond, Robert; van Rhee, Henk; Hak, Tony

    2017-12-01

    We present a new tool for meta-analysis, Meta-Essentials, which is free of charge and easy to use. In this paper, we introduce the tool and compare its features to other tools for meta-analysis. We also provide detailed information on the validation of the tool. Although free of charge and simple, Meta-Essentials automatically calculates effect sizes from a wide range of statistics and can be used for a wide range of meta-analysis applications, including subgroup analysis, moderator analysis, and publication bias analyses. The confidence interval of the overall effect is automatically based on the Knapp-Hartung adjustment of the DerSimonian-Laird estimator. However, more advanced meta-analysis methods such as meta-analytical structural equation modelling and meta-regression with multiple covariates are not available. In summary, Meta-Essentials may prove a valuable resource for meta-analysts, including researchers, teachers, and students. © 2017 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.
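    The DerSimonian-Laird estimator at the core of the tool's pooling step (before the Knapp-Hartung adjustment of the confidence interval) is short enough to sketch directly; the three effect sizes and within-study variances below are invented for illustration:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate using the DerSimonian-Laird tau^2."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v                                       # fixed-effect weights
    fixed = np.sum(w * y) / w.sum()
    q = np.sum(w * (y - fixed) ** 2)                  # Cochran's Q
    c = w.sum() - np.sum(w ** 2) / w.sum()
    tau2 = max(0.0, (q - (len(y) - 1)) / c)           # between-study variance
    w_star = 1.0 / (v + tau2)                         # random-effects weights
    return np.sum(w_star * y) / w_star.sum(), tau2

# Three invented study results (effect sizes and within-study variances).
pooled, tau2 = dersimonian_laird([0.2, 0.5, 0.8], [0.04, 0.05, 0.04])
print(round(pooled, 3), round(tau2, 3))
```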

  10. Analysis system for characterisation of simple, low-cost microfluidic components

    NASA Astrophysics Data System (ADS)

    Smith, Suzanne; Naidoo, Thegaran; Nxumalo, Zandile; Land, Kevin; Davies, Emlyn; Fourie, Louis; Marais, Philip; Roux, Pieter

    2014-06-01

    There is an inherent trade-off between cost and operational integrity of microfluidic components, especially when intended for use in point-of-care devices. We present an analysis system developed to characterise microfluidic components for performing blood cell counting, enabling the balance between function and cost to be established quantitatively. Microfluidic components for sample and reagent introduction, mixing and dispensing of fluids were investigated. A simple inlet port plugging mechanism is used to introduce and dispense a sample of blood, while a reagent is released into the microfluidic system through compression and bursting of a blister pack. Mixing and dispensing of the sample and reagent are facilitated via air actuation. For these microfluidic components to be implemented successfully, a number of aspects need to be characterised for development of an integrated point-of-care device design. The functional components were measured using a microfluidic component analysis system established in-house. Experiments were carried out to determine: 1. the force and speed requirements for sample inlet port plugging and blister pack compression and release using two linear actuators and load cells for plugging the inlet port, compressing the blister pack, and subsequently measuring the resulting forces exerted, 2. the accuracy and repeatability of total volumes of sample and reagent dispensed, and 3. the degree of mixing and dispensing uniformity of the sample and reagent for cell counting analysis. A programmable syringe pump was used for air actuation to facilitate mixing and dispensing of the sample and reagent. Two high speed cameras formed part of the analysis system and allowed for visualisation of the fluidic operations within the microfluidic device. 
Additional quantitative measures such as microscopy were also used to assess mixing and dilution accuracy, as well as uniformity of fluid dispensing - all of which are important requirements towards the

  11. Binocular disparities, motion parallax, and geometric perspective in Patrick Hughes's 'reverspectives': theoretical analysis and empirical findings.

    PubMed

    Rogers, Brian; Gyani, Alex

    2010-01-01

    Patrick Hughes's 'reverspective' artworks provide a novel way of investigating the effectiveness of different sources of 3-D information for the human visual system. Our empirical findings show that the converging lines of simple linear perspective can be as effective as the rich array of 3-D cues present in natural scenes in determining what we see, even when these cues are in conflict with binocular disparities. Theoretical considerations reveal that, once the information provided by motion parallax transformations is correctly understood, there is no need to invoke higher-level processes or an interpretation based on familiarity or past experience in order to explain either the 'reversed' depth or the apparent, concomitant rotation of a reverspective artwork as the observer moves from side to side. What we see in reverspectives is the most likely real-world scenario (distal stimulus) that could have created the perspective and parallax transformations (proximal stimulus) that stimulate our visual systems.

  12. Theoretical and experimental analysis of a multiphase screw pump, handling gas-liquid mixtures with very high gas volume fractions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raebiger, K.; Faculty of Advanced Technology, University of Glamorgan, Pontypridd, Wales; Maksoud, T.M.A.

    In the investigation of the pumping behaviour of multiphase screw pumps, handling gas-liquid mixtures with very high gas volume fractions, theoretical and experimental analyses were performed. A new theoretical screw pump model was developed, which calculates the time-dependent conditions inside the several chambers of a screw pump as well as the exchange of mass and energy between these chambers. By means of the performed experimental analysis, the screw pump model was verified, especially at very high gas volume fractions from 90% to 99%. The experiments, which were conducted with the reference fluids water and air, can be divided mainly into the determination of the steady state pumping behaviour on the one hand and the analysis of selected transient operating conditions on the other hand, while visualisation of the leakage flows through the circumferential gaps rounded off the experimental analysis. (author)

  13. A simple apparatus for quick qualitative analysis of CR39 nuclear track detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gautier, D. C.; Kline, J. L.; Flippo, K. A.

    2008-10-15

    Quantifying the ion pits in Columbia Resin 39 (CR39) nuclear track detector from Thomson parabolas is a time consuming and tedious process using conventional microscope-based techniques. A simple, inventive apparatus for fast screening and qualitative analysis of CR39 detectors has been developed, enabling efficient selection of data for a more detailed analysis. The system consists simply of a green He-Ne laser and a high-resolution digital single-lens reflex camera. The laser illuminates the edge of the CR39 at grazing incidence and couples into the plastic, acting as a light pipe. Subsequently, the laser illuminates all ion tracks on the surface. A high-resolution digital camera is used to photograph the scattered light from the ion tracks, enabling one to quickly determine charge states and energies measured by the Thomson parabola.

  14. Mode-coupling theoretical analysis of transport and relaxation properties of liquid dimethylimidazolium chloride

    NASA Astrophysics Data System (ADS)

    Yamaguchi, T.; Koda, S.

    2010-03-01

    The mode-coupling theory for molecular liquids based on the interaction-site model is applied to a representative molecular ionic liquid, dimethylimidazolium chloride, and dynamic properties such as shear viscosity, self-diffusion coefficients, reorientational relaxation time, electric conductivity, and dielectric relaxation spectrum are analyzed. Molecular dynamics (MD) simulation is also performed on the same system for comparison. The theory captures the characteristics of the dynamics of the ionic liquid qualitatively, although theoretical relaxation times are several times larger than those from the MD simulation. Large relaxations are found in the 100 MHz region in the dispersion of the shear viscosity and the dielectric relaxation, in harmony with various experiments. The relaxations of the self-diffusion coefficients are also found in the same frequency region. The dielectric relaxation spectrum is divided into the contributions of the translational and reorientational modes, and it is demonstrated that the relaxation in the 100 MHz region mainly stems from the translational modes. The zero-frequency electric conductivity is close to the value predicted by the Nernst-Einstein equation in both MD simulation and theoretical calculation. However, the frequency dependence of the electric conductivity is different from those of self-diffusion coefficients in that the former is smaller than the latter in the gigahertz-terahertz region, which is compensated by the smaller dispersion of the former in the 100 MHz region. The analysis of the theoretical calculation shows that the difference in their frequency dependence is due to the different contribution of the short- and long-range liquid structures.

  15. A Detection-Theoretic Model of Echo Inhibition

    ERIC Educational Resources Information Center

    Saberi, Kourosh; Petrosyan, Agavni

    2004-01-01

    A detection-theoretic analysis of the auditory localization of dual-impulse stimuli is described, and a model for the processing of spatial cues in the echo pulse is developed. Although for over 50 years "echo suppression" has been the topic of intense theoretical and empirical study within the hearing sciences, only a rudimentary understanding of…

  16. X-ray peak profile analysis of zinc oxide nanoparticles formed by simple precipitation method

    NASA Astrophysics Data System (ADS)

    Pelicano, Christian Mark; Rapadas, Nick Joaquin; Magdaluyo, Eduardo

    2017-12-01

    Zinc oxide (ZnO) nanoparticles were successfully synthesized by a simple precipitation method using zinc acetate and tetramethylammonium hydroxide. The synthesized ZnO nanoparticles were characterized by X-ray Diffraction analysis (XRD) and Transmission Electron Microscopy (TEM). The XRD result revealed a hexagonal wurtzite structure for the ZnO nanoparticles. The TEM image showed spherical nanoparticles with an average crystallite size of 6.70 nm. For x-ray peak analysis, Williamson-Hall (W-H) and Size-Strain Plot (SSP) methods were applied to examine the effects of crystallite size and lattice strain on the peak broadening of the ZnO nanoparticles. Based on the calculations, the estimated crystallite sizes and lattice strains obtained are in good agreement with each other.
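    The Williamson-Hall step mentioned above is a straight-line fit of β·cosθ against 4·sinθ: the intercept gives the crystallite size through the Scherrer constant, and the slope gives the microstrain. A sketch on synthetic peak widths generated from assumed values (D = 6.7 nm, ε = 0.003, Cu Kα wavelength), not the paper's measured data:

```python
import numpy as np

def williamson_hall(two_theta_deg, beta_rad, wavelength=0.15406, K=0.9):
    """W-H plot: beta*cos(theta) = K*lambda/D + 4*eps*sin(theta).
    The intercept gives the crystallite size D (nm for a wavelength in nm),
    the slope gives the microstrain eps."""
    theta = np.radians(np.asarray(two_theta_deg) / 2.0)
    y = np.asarray(beta_rad) * np.cos(theta)
    x = 4.0 * np.sin(theta)
    eps, intercept = np.polyfit(x, y, 1)
    return K * wavelength / intercept, eps

# Synthetic peak widths generated from assumed D = 6.7 nm and eps = 0.003.
tt = np.array([31.8, 34.4, 36.3, 47.5, 56.6])   # typical ZnO wurtzite 2-theta values
theta = np.radians(tt / 2.0)
beta = (0.9 * 0.15406 / 6.7 + 0.003 * 4.0 * np.sin(theta)) / np.cos(theta)
D, eps = williamson_hall(tt, beta)
print(round(D, 1), round(eps, 4))  # recovers the assumed 6.7 nm and 0.003
```

    The Size-Strain Plot method differs only in the quantities plotted, weighting the low-angle reflections more heavily.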

  17. Theoretical results for starved elliptical contacts

    NASA Technical Reports Server (NTRS)

    Hamrock, B. J.; Dowson, D.

    1983-01-01

    Eighteen cases were used in the theoretical study of the influence of lubricant starvation on film thickness and pressure in elliptical elastohydrodynamic conjunctions. From the results, a simple and important critical dimensionless inlet boundary distance at which lubricant starvation becomes significant was identified. This inlet boundary distance defines whether a fully flooded or a starved condition exists in the contact. Furthermore, it was found that the film thickness for a starved condition can be written in dimensionless terms as a function of the inlet distance parameter and the film thickness for a fully flooded condition. Contour plots of pressure and film thickness in and around the contact are shown for fully flooded and starved conditions.

  18. Representing general theoretical concepts in structural equation models: The role of composite variables

    USGS Publications Warehouse

    Grace, J.B.; Bollen, K.A.

    2008-01-01

    Structural equation modeling (SEM) holds the promise of providing natural scientists the capacity to evaluate complex multivariate hypotheses about ecological systems. Building on its predecessors, path analysis and factor analysis, SEM allows for the incorporation of both observed and unobserved (latent) variables into theoretically based probabilistic models. In this paper we discuss the interface between theory and data in SEM and the use of an additional variable type, the composite. In simple terms, composite variables specify the influences of collections of other variables and can be helpful in modeling heterogeneous concepts of the sort commonly of interest to ecologists. While long recognized as a potentially important element of SEM, composite variables have received very limited use, in part because of a lack of theoretical consideration, but also because of difficulties that arise in parameter estimation when using conventional solution procedures. In this paper we present a framework for discussing composites and demonstrate how the use of partially-reduced-form models can help to overcome some of the parameter estimation and evaluation problems associated with models containing composites. Diagnostic procedures for evaluating the most appropriate and effective use of composites are illustrated with an example from the ecological literature. It is argued that an ability to incorporate composite variables into structural equation models may be particularly valuable in the study of natural systems, where concepts are frequently multifaceted and the influence of suites of variables is often of interest. © Springer Science+Business Media, LLC 2007.

  19. Determining the best treatment for simple bone cyst: a decision analysis.

    PubMed

    Lee, Seung Yeol; Chung, Chin Youb; Lee, Kyoung Min; Sung, Ki Hyuk; Won, Sung Hun; Choi, In Ho; Cho, Tae-Joon; Yoo, Won Joon; Yeo, Ji Hyun; Park, Moon Seok

    2014-03-01

    The treatment of simple bone cysts (SBC) in children varies significantly among physicians. This study examined which procedure is better for the treatment of SBC, using a decision analysis based on current published evidence. A decision tree focused on five treatment modalities of SBC (observation, steroid injection, autologous bone marrow injection, decompression, and curettage with bone graft) was created. Each treatment modality was further branched according to the presence and severity of complications. The probabilities of all cases were obtained by literature review. A roll back tool was utilized to determine the most preferred treatment modality. One-way sensitivity analysis was performed to determine the threshold value of the treatment modalities. Two-way sensitivity analysis was utilized to examine the joint impact of changes in the probabilities of two parameters. The decision model favored autologous bone marrow injection. The expected value of autologous bone marrow injection was 0.9445, while those of observation, steroid injection, decompression, and curettage with bone graft were 0.9318, 0.9400, 0.9395, and 0.9342, respectively. One-way sensitivity analysis showed that autologous bone marrow injection had a higher expected value than decompression when the rate of pathologic fracture or positive symptoms of SBC after autologous bone marrow injection was lower than 20.4%. In our study, autologous bone marrow injection was found to be the best treatment choice for SBC. However, the results were sensitive to the rate of pathologic fracture after treatment of SBC. Physicians should consider the possibility of pathologic fracture when they determine a treatment method for SBC.
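
    The "roll back" step in such a decision analysis is mechanical: each treatment is a chance node whose expected value is the probability-weighted sum of its outcome utilities, and the option with the highest expected value is preferred. A minimal sketch, with illustrative placeholder probabilities and utilities rather than the values from the SBC study:

```python
# Hedged sketch of rolling back a one-level decision tree. The branch
# probabilities and utilities below are illustrative only.

def expected_value(outcomes):
    """outcomes: list of (probability, utility) pairs; probabilities sum to 1."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * u for p, u in outcomes)

treatments = {
    # (probability, utility) per outcome branch, e.g. (no fracture, fracture)
    "observation":      [(0.70, 1.00), (0.30, 0.78)],
    "marrow_injection": [(0.90, 1.00), (0.10, 0.50)],
}
best = max(treatments, key=lambda t: expected_value(treatments[t]))
```

    One-way sensitivity analysis then amounts to sweeping one probability over a range and recording where `best` changes.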

  20. Aerodynamic design and analysis system for supersonic aircraft. Part 1: General description and theoretical development

    NASA Technical Reports Server (NTRS)

    Middleton, W. D.; Lundry, J. L.

    1975-01-01

    An integrated system of computer programs has been developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. This part presents a general description of the system and describes the theoretical methods used.

  1. Designing novel Sn-Bi, Si-C and Ge-C nanostructures, using simple theoretical chemical similarities

    NASA Astrophysics Data System (ADS)

    Zdetsis, Aristides D.

    2011-04-01

    A framework of simple, transparent and powerful concepts is presented which is based on isoelectronic (or isovalent) principles, analogies, regularities and similarities. These analogies could be considered as conceptual extensions of the periodic table of the elements, assuming that two atoms or molecules having the same number of valence electrons would be expected to have similar or homologous properties. In addition, such similar moieties should be able, in principle, to replace each other in more complex structures and nanocomposites. This is only partly true and only occurs under certain conditions which are investigated and reviewed here. When successful, these concepts are very powerful and transparent, leading to a large variety of nanomaterials based on Si and other group 14 elements, similar to well known and well studied analogous materials based on boron and carbon. Such nanomaterials designed in silico include, among many others, Sn-Bi, Si-C and Ge-C clusters, rings, nanowheels, nanorods, nanocages and multidecker sandwiches, as well as silicon planar rings and fullerenes similar to the analogous sp2-bonded carbon structures. It is shown that this pedagogically simple and transparent framework can lead to an endless variety of novel and functional nanomaterials with important potential applications in nanotechnology, nanomedicine and nanobiology. Some of the predicted structures have already been synthesized, though not necessarily with the same rationale and motivation. Finally, it is anticipated that such powerful and transparent rules and analogies, in addition to their predictive power, could also lead to far-reaching interpretations and a deeper understanding of already known results and information.

  2. Designing novel Sn-Bi, Si-C and Ge-C nanostructures, using simple theoretical chemical similarities.

    PubMed

    Zdetsis, Aristides D

    2011-04-27

    A framework of simple, transparent and powerful concepts is presented which is based on isoelectronic (or isovalent) principles, analogies, regularities and similarities. These analogies could be considered as conceptual extensions of the periodic table of the elements, assuming that two atoms or molecules having the same number of valence electrons would be expected to have similar or homologous properties. In addition, such similar moieties should be able, in principle, to replace each other in more complex structures and nanocomposites. This is only partly true and only occurs under certain conditions which are investigated and reviewed here. When successful, these concepts are very powerful and transparent, leading to a large variety of nanomaterials based on Si and other group 14 elements, similar to well known and well studied analogous materials based on boron and carbon. Such nanomaterials designed in silico include, among many others, Sn-Bi, Si-C and Ge-C clusters, rings, nanowheels, nanorods, nanocages and multidecker sandwiches, as well as silicon planar rings and fullerenes similar to the analogous sp2-bonded carbon structures. It is shown that this pedagogically simple and transparent framework can lead to an endless variety of novel and functional nanomaterials with important potential applications in nanotechnology, nanomedicine and nanobiology. Some of the predicted structures have already been synthesized, though not necessarily with the same rationale and motivation. Finally, it is anticipated that such powerful and transparent rules and analogies, in addition to their predictive power, could also lead to far-reaching interpretations and a deeper understanding of already known results and information.

  3. The theoretical limit to plant productivity.

    PubMed

    DeLucia, Evan H; Gomez-Casanovas, Nuria; Greenberg, Jonathan A; Hudiburg, Tara W; Kantola, Ilsa B; Long, Stephen P; Miller, Adam D; Ort, Donald R; Parton, William J

    2014-08-19

    Human population and economic growth are accelerating the demand for plant biomass to provide food, fuel, and fiber. The annual increment of biomass to meet these needs is quantified as net primary production (NPP). Here we show that an underlying assumption in some current models may lead to underestimates of the potential production from managed landscapes, particularly of bioenergy crops that have low nitrogen requirements. Using a simple light-use efficiency model and the theoretical maximum efficiency with which plant canopies convert solar radiation to biomass, we provide an upper-envelope NPP unconstrained by resource limitations. This theoretical maximum NPP approached 200 tC ha⁻¹ yr⁻¹ at point locations, roughly 2 orders of magnitude higher than most current managed or natural ecosystems. Recalculating the upper envelope estimate of NPP limited by available water reduced it by half or more in 91% of the land area globally. While the high conversion efficiencies observed in some extant plants indicate great potential to increase crop yields without changes to the basic mechanism of photosynthesis, particularly for crops with low nitrogen requirements, realizing such high yields will require improvements in water use efficiency.
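
    The light-use efficiency model invoked above has the simple form NPP = ε · fAPAR · PAR. A minimal sketch of how an upper-envelope point estimate of this kind is assembled; every input value below is an illustrative assumption, not one of the paper's inputs.

```python
# Hedged sketch of a light-use efficiency (LUE) estimate of NPP.
# Inputs: annual photosynthetically active radiation (PAR, MJ/m^2/yr), the
# fraction of PAR absorbed by the canopy (fAPAR), and a light-use
# efficiency (gC per MJ of absorbed PAR). All values are illustrative.

def npp_lue(par_mj_m2_yr, fapar, epsilon_gc_per_mj):
    """Return NPP in tC/ha/yr from the LUE model NPP = eps * fAPAR * PAR."""
    npp_gc_per_m2 = epsilon_gc_per_mj * fapar * par_mj_m2_yr
    return npp_gc_per_m2 * 1e4 / 1e6  # gC/m^2 -> tC/ha

# High-irradiance site, near-full canopy absorption, near-maximal efficiency
# (assumed numbers): the result lands on the order of the ~200 tC/ha/yr envelope.
npp = npp_lue(3000.0, 0.95, 6.5)
```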

  4. Theoretical analysis and experimental investigation on performance of the thermal shield of accelerator cryomodules by thermo-siphon cooling of liquid nitrogen

    NASA Astrophysics Data System (ADS)

    Datta, T. S.; Kar, S.; Kumar, M.; Choudhury, A.; Chacko, J.; Antony, J.; Babu, S.; Sahu, S. K.

    2015-12-01

    Five beam line cryomodules with a total of 27 superconducting radio frequency (RF) cavities are installed and commissioned at IUAC to enhance the energy of heavy ions from the 15 UD Pelletron. To reduce the heat load at 4.2 K, a liquid nitrogen (LN2) cooled intermediate thermal shield is used for all these cryomodules. For the three linac cryomodules, forced-flow LN2 cooling is used; for the superbuncher and rebuncher, thermo-siphon cooling is incorporated. It is observed that the shield temperature of the superbuncher varies from 90 K to 110 K with the liquid nitrogen level. This temperature difference cannot be explained by the basic concept of a thermo-siphon with the heat load applied only to the up-flow line. A simple thermo-siphon experimental setup was therefore developed to simulate the thermal shield temperature profile. The mass flow rate of liquid nitrogen was measured for different heat loads on the up-flow line at different liquid levels. It was found that even a small heat load on the down-flow line has a significant effect on the mass flow rate. The present paper examines the data generated from the thermo-siphon experimental setup and presents a theoretical analysis to validate the measured temperature profile of the cryomodule shield.

  5. Anticipatory Cognitive Systems: a Theoretical Model

    NASA Astrophysics Data System (ADS)

    Terenzi, Graziano

    This paper deals with the problem of understanding anticipation in biological and cognitive systems. It is argued that a physical theory can be considered as biologically plausible only if it incorporates the ability to describe systems which exhibit anticipatory behaviors. The paper introduces a cognitive level description of anticipation and provides a simple theoretical characterization of anticipatory systems on this level. Specifically, a simple model of a formal anticipatory neuron and a model (i.e. the τ-mirror architecture) of an anticipatory neural network which is based on the former are introduced and discussed. The basic feature of this architecture is that a part of the network learns to represent the behavior of the other part over time, thus constructing an implicit model of its own functioning. As a consequence, the network is capable of self-representation; anticipation, on a macroscopic level, is nothing but a consequence of anticipation on a microscopic level. Some learning algorithms are also discussed together with related experimental tasks and possible integrations. The outcome of the paper is a formal characterization of anticipation in cognitive systems which aims to be incorporated into a comprehensive and more general physical theory.

  6. Analysis of simple 2-D and 3-D metal structures subjected to fragment impact

    NASA Technical Reports Server (NTRS)

    Witmer, E. A.; Stagliano, T. R.; Spilker, R. L.; Rodal, J. J. A.

    1977-01-01

    Theoretical methods were developed for predicting the large-deflection elastic-plastic transient structural responses of metal containment or deflector (C/D) structures designed to withstand rotor-burst fragment impact. For two-dimensional C/D structures, both finite element and finite difference analysis methods were employed to analyze the structural response produced by either prescribed transient loads or fragment impact. For the latter category, two time-wise step-by-step analysis procedures were devised to predict the structural responses resulting from a succession of fragment impacts: the collision force method (CFM), which utilizes an approximate prediction of the force applied to the attacked structure during fragment impact, and the collision imparted velocity method (CIVM), in which the impact-induced velocity increment acquired by a region of the impacted structure near the impact point is computed. The merits and limitations of these approaches are discussed. For the analysis of 3-D responses of C/D structures, only the CIVM approach was investigated.

  7. Theoretical Sum Frequency Generation Spectroscopy of Peptides

    PubMed Central

    2015-01-01

    Vibrational sum frequency generation (SFG) has become a very promising technique for the study of proteins at interfaces, and it has been applied to important systems such as anti-microbial peptides, ion channel proteins, and human islet amyloid polypeptide. Moreover, so-called “chiral” SFG techniques, which rely on polarization combinations that generate strong signals primarily for chiral molecules, have proven to be particularly discriminatory of protein secondary structure. In this work, we present a theoretical strategy for calculating protein amide I SFG spectra by combining line-shape theory with molecular dynamics simulations. We then apply this method to three model peptides, demonstrating the existence of a significant chiral SFG signal for peptides with chiral centers, and providing a framework for interpreting the results on the basis of the dependence of the SFG signal on the peptide orientation. We also examine the importance of dynamical and coupling effects. Finally, we suggest a simple method for determining a chromophore’s orientation relative to the surface using ratios of experimental heterodyne-detected signals with different polarizations, and test this method using theoretical spectra. PMID:25203677

  8. From double-slit interference to structural information in simple hydrocarbons

    PubMed Central

    Kushawaha, Rajesh Kumar; Patanen, Minna; Guillemin, Renaud; Journel, Loic; Miron, Catalin; Simon, Marc; Piancastelli, Maria Novella; Skates, C.; Decleva, Piero

    2013-01-01

    Interferences in coherent emission of photoelectrons from two equivalent atomic centers in a molecule are microscopic analogues of the celebrated Young’s double-slit experiment. By considering inner-valence shell ionization in the series of simple hydrocarbons C2H2, C2H4, and C2H6, we show that double-slit interference is widespread and has built-in quantitative information on geometry, orbital composition, and many-body effects. A theoretical and experimental study is presented over the photon energy range of 70–700 eV. A strong dependence of the oscillation period on the C–C distance is observed, which can be used to determine bond lengths between selected pairs of equivalent atoms with an accuracy of at least 0.01 Å. Furthermore, we show that the observed oscillations are directly informative of the nature and atomic composition of the inner-valence molecular orbitals and that the observed ratios are quantitative measures of elusive many-body effects. The technique and analysis can be immediately extended to a large class of compounds. PMID:24003155

  9. Theoretical analysis of sheet metal formability

    NASA Astrophysics Data System (ADS)

    Zhu, Xinhai

    Sheet metal forming processes are among the most important metal-working operations. These processes account for a sizable proportion of manufactured goods made in industrialized countries each year. Furthermore, to reduce the cost and increase the performance of manufactured products, in addition to the environmental concern, more and more light weight and high strength materials have been used as a substitute to the conventional steel. These materials usually have limited formability, thus, a thorough understanding of the deformation processes and the factors limiting the forming of sound parts is important, not only from a scientific or engineering viewpoint, but also from an economic point of view. An extensive review of previous studies pertaining to theoretical analyses of Forming Limit Diagrams (FLDs) is contained in Chapter I. A numerical model to analyze the neck evolution process is outlined in Chapter II. With the use of strain gradient theory, the effect of initial defect profile on the necking process is analyzed. In the third chapter, the method proposed by Storen and Rice is adopted to analyze the initiation of localized neck and predict the corresponding FLDs. In view of the fact that the width of the localized neck is narrow, the deformation inside the neck region is constrained by the material in the neighboring homogeneous region. The relative rotation effect may then be assumed to be small and is thus neglected. In Chapter IV, Hill's 1948 yield criterion and strain gradient theory are employed to obtain FLDs, for planar anisotropic sheet materials by using bifurcation analysis. The effects of the strain gradient coefficient c and the material anisotropic parameters R's on the orientation of the neck and FLDs are analyzed in a systematic manner and compared with experiments. In Chapter V, Hill's 79 non-quadratic yield criterion with a deformation theory of plasticity is used along with bifurcation analyses to derive a general analytical

  10. Estimation of interhemispheric dynamics from simple unimanual reaction time to extrafoveal stimuli.

    PubMed

    Braun, C M

    1992-12-01

    This essay reviews research on interhemispheric transfer time derived from simple unimanual reaction time to hemitachistoscopically presented visual stimuli. Part 1 reviews major theoretical themes including (a) the significance of the eccentricity effect on interhemispheric transfer time in the context of proposed underlying neurohistological constraints; (b) the significance of gender differences in interhemispheric transfer time and findings in dyslexics and left-handers in the context of a fetal brain testosterone model; and (c) the significance of complexity effects on interhemispheric transfer time in the context of "dynamic" vs. "hard-wired" concepts of the underlying interhemispheric communication systems. Part 2 consists of a meta-analysis of 49 published behavioral experiments, with the aim of identifying the experimental conditions best able to produce salient, reliable, and statistically significant measures of interhemispheric transfer time, namely (a) index rather than thumb response, (b) low rather than high target luminance, (c) short rather than prolonged target display, and (d) very eccentric rather than near-foveal stimulus location. Part 3 proposes a theoretical model of interhemispheric transfer time, postulating the measurable existence of fast and slow interhemispheric channels. The proposed mechanism's evolutionary adaptive value, the neurophysiological evidence in its support, and favorable functional evidence from studies of callosotomized patients are then presented, followed by proposals for critical experimental tests of the model.

  11. Simple preparation of plant epidermal tissue for laser microdissection and downstream quantitative proteome and carbohydrate analysis.

    PubMed

    Falter, Christian; Ellinger, Dorothea; von Hülsen, Behrend; Heim, René; Voigt, Christian A

    2015-01-01

    The outwardly directed cell wall and associated plasma membrane of epidermal cells represent the first layers of plant defense against intruding pathogens. Cell wall modifications and the formation of defense structures at sites of attempted pathogen penetration are decisive for plant defense. A precise isolation of these stress-induced structures would allow a specific analysis of regulatory mechanisms and cell wall adaptation. However, methods for large-scale epidermal tissue preparation from the model plant Arabidopsis thaliana, which would allow proteome and cell wall analysis of complete, laser-microdissected epidermal defense structures, have not been provided. We developed the adhesive tape - liquid cover glass technique (ACT) for simple leaf epidermis preparation from A. thaliana, which is also applicable to grass leaves. This method is compatible with subsequent staining techniques to visualize stress-related cell wall structures, which were precisely isolated from the epidermal tissue layer by laser microdissection (LM) coupled to laser pressure catapulting. We successfully demonstrated that these specific epidermal tissue samples could be used for quantitative downstream proteome and cell wall analysis. The development of the ACT for simple leaf epidermis preparation and its compatibility with LM and downstream quantitative analysis open new possibilities in the precise examination of stress- and pathogen-related cell wall structures in epidermal cells. Because the developed tissue processing is also applicable to A. thaliana, well-established model pathosystems that include the interaction with powdery mildews can be studied to determine principal regulatory mechanisms in plant-microbe interaction with their potential outreach into crop breeding.

  12. Simple preparation of plant epidermal tissue for laser microdissection and downstream quantitative proteome and carbohydrate analysis

    PubMed Central

    Falter, Christian; Ellinger, Dorothea; von Hülsen, Behrend; Heim, René; Voigt, Christian A.

    2015-01-01

    The outwardly directed cell wall and associated plasma membrane of epidermal cells represent the first layers of plant defense against intruding pathogens. Cell wall modifications and the formation of defense structures at sites of attempted pathogen penetration are decisive for plant defense. A precise isolation of these stress-induced structures would allow a specific analysis of regulatory mechanisms and cell wall adaptation. However, methods for large-scale epidermal tissue preparation from the model plant Arabidopsis thaliana, which would allow proteome and cell wall analysis of complete, laser-microdissected epidermal defense structures, have not been provided. We developed the adhesive tape – liquid cover glass technique (ACT) for simple leaf epidermis preparation from A. thaliana, which is also applicable to grass leaves. This method is compatible with subsequent staining techniques to visualize stress-related cell wall structures, which were precisely isolated from the epidermal tissue layer by laser microdissection (LM) coupled to laser pressure catapulting. We successfully demonstrated that these specific epidermal tissue samples could be used for quantitative downstream proteome and cell wall analysis. The development of the ACT for simple leaf epidermis preparation and its compatibility with LM and downstream quantitative analysis open new possibilities in the precise examination of stress- and pathogen-related cell wall structures in epidermal cells. Because the developed tissue processing is also applicable to A. thaliana, well-established model pathosystems that include the interaction with powdery mildews can be studied to determine principal regulatory mechanisms in plant–microbe interaction with their potential outreach into crop breeding. PMID:25870605

  13. Wave cybernetics: A simple model of wave-controlled nonlinear and nonlocal cooperative phenomena

    NASA Astrophysics Data System (ADS)

    Yasue, Kunio

    1988-09-01

    A simple theoretical description of nonlinear and nonlocal cooperative phenomena is presented in which the global control mechanism of the whole system is given by the tuned-wave propagation. It provides us with an interesting universal scheme of systematization in physical and biological systems called wave cybernetics, and may be understood as a model realizing Bohm's idea of implicate order in natural philosophy.

  14. SPARTA: Simple Program for Automated reference-based bacterial RNA-seq Transcriptome Analysis.

    PubMed

    Johnson, Benjamin K; Scholz, Matthew B; Teal, Tracy K; Abramovitch, Robert B

    2016-02-04

    Many tools exist in the analysis of bacterial RNA sequencing (RNA-seq) transcriptional profiling experiments to identify differentially expressed genes between experimental conditions. Generally, the workflow includes quality control of reads, mapping to a reference, counting transcript abundance, and statistical tests for differentially expressed genes. In spite of the numerous tools developed for each component of an RNA-seq analysis workflow, easy-to-use bacterially oriented workflow applications to combine multiple tools and automate the process are lacking. With many tools to choose from for each step, the task of identifying a specific tool, adapting the input/output options to the specific use-case, and integrating the tools into a coherent analysis pipeline is not a trivial endeavor, particularly for microbiologists with limited bioinformatics experience. To make bacterial RNA-seq data analysis more accessible, we developed a Simple Program for Automated reference-based bacterial RNA-seq Transcriptome Analysis (SPARTA). SPARTA is a reference-based bacterial RNA-seq analysis workflow application for single-end Illumina reads. SPARTA is turnkey software that simplifies the process of analyzing RNA-seq data sets, making bacterial RNA-seq analysis a routine process that can be undertaken on a personal computer or in the classroom. The easy-to-install, complete workflow processes whole transcriptome shotgun sequencing data files by trimming reads and removing adapters, mapping reads to a reference, counting gene features, calculating differential gene expression, and, importantly, checking for potential batch effects within the data set. SPARTA outputs quality analysis reports, gene feature counts and differential gene expression tables and scatterplots. SPARTA provides an easy-to-use bacterial RNA-seq transcriptional profiling workflow to identify differentially expressed genes between experimental conditions. This software will enable microbiologists with

  15. Synthesis, spectroscopic analysis and theoretical study of new pyrrole-isoxazoline derivatives

    NASA Astrophysics Data System (ADS)

    Rawat, Poonam; Singh, R. N.; Baboo, Vikas; Niranjan, Priydarshni; Rani, Himanshu; Saxena, Rajat; Ahmad, Sartaj

    2017-02-01

    In the present work, we have efficiently synthesized the pyrrole-isoxazoline derivatives (4a-d) by cyclization of substituted 4-chalconylpyrroles (3a-d) with hydroxylamine hydrochloride. The reactivity of the substituted 4-chalconylpyrroles (3a-d) towards the nucleophile hydroxylamine hydrochloride was evaluated on the basis of electrophilic reactivity descriptors (f_k^+, s_k^+, ω_k^+), which were found to be high at the unsaturated β-carbon of the chalconylpyrrole, indicating that it is more prone to nucleophilic attack and thereby favoring the formation of the reported new pyrrole-isoxazoline compounds (4a-d). The structures of the newly synthesized pyrrole-isoxazoline derivatives were derived from IR, 1H NMR, mass, UV-Vis and elemental analysis. All experimental spectral data corroborate well with the calculated spectral data. The FT-IR analysis shows red shifts in the ν(N-H) and ν(C=O) stretching bands due to dimer formation through intermolecular hydrogen bonding. After basis set superposition error correction, the intermolecular interaction energies for (4a-d) are found to be 10.10, 9.99, 10.18, 11.01 and 11.19 kcal/mol, respectively. The calculated first hyperpolarizability (β0) values of the (4a-d) molecules are in the range of 7.40-9.05 × 10^-30 esu, indicating their suitability for non-linear optical (NLO) applications. The experimental spectral results, theoretical data, and analysis of the chalcone intermediates and pyrrole-isoxazolines are useful for the advancement of pyrrole-azole chemistry.

  16. Understanding Confounding Effects in Linguistic Coordination: An Information-Theoretic Approach

    PubMed Central

    Gao, Shuyang; Ver Steeg, Greg; Galstyan, Aram

    2015-01-01

    We suggest an information-theoretic approach for measuring stylistic coordination in dialogues. The proposed measure has a simple predictive interpretation and can account for various confounding factors through proper conditioning. We revisit some of the previous studies that reported strong signatures of stylistic accommodation, and find that a significant part of the observed coordination can be attributed to a simple confounding effect—length coordination. Specifically, longer utterances tend to be followed by longer responses, which gives rise to spurious correlations in the other stylistic features. We propose a test to distinguish correlations in length due to contextual factors (topic of conversation, user verbosity, etc.) and turn-by-turn coordination. We also suggest a test to identify whether stylistic coordination persists even after accounting for length coordination and contextual factors. PMID:26115446
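
    The length-coordination confound described above is easy to reproduce in a toy simulation (not the paper's method): give each speaker a stylistic marker whose count depends only on utterance length, let the lengths coordinate turn by turn, and the marker counts become correlated even though neither speaker ever copies the other's style. All parameters below are arbitrary illustrative choices.

```python
import random

# Hedged toy simulation of spurious stylistic coordination induced purely by
# length coordination. Each word independently carries the marker with
# probability 0.2; speaker B matches only the *length* of A's utterance.
random.seed(0)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

feat_a, feat_b = [], []
for _ in range(5000):
    len_a = random.randint(5, 50)                   # speaker A's utterance length
    len_b = max(1, len_a + random.randint(-3, 3))   # B coordinates on length only
    feat_a.append(sum(random.random() < 0.2 for _ in range(len_a)))
    feat_b.append(sum(random.random() < 0.2 for _ in range(len_b)))

r = pearson(feat_a, feat_b)  # clearly positive despite no direct style coupling
```

    Conditioning on utterance length, as the proposed measure does, is what removes this spurious component.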

  17. Theoretical development and first-principles analysis of strongly correlated systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Chen

    A variety of quantum many-body methods have been developed for studying strongly correlated electron systems. We have proposed a computationally efficient and accurate approach, named the correlation matrix renormalization (CMR) method, to address these challenges. The initial implementation of the CMR method is designed for molecules, which offer theoretical advantages, including small system size, transparent mechanisms and strong correlation effects such as the bond-breaking process. The theoretical development and benchmark tests of the CMR method are included in this thesis. Meanwhile, the ground-state total energy is the most important property in electronic structure calculations. We also investigated an alternative approach to calculating the total energy, and extended this method to the magnetic anisotropy energy (MAE) of ferromagnetic materials. In addition, another theoretical tool, dynamical mean-field theory (DMFT) on top of DFT, has also been used in electronic structure calculations for an iridium oxide to study the phase transition, which results from an interplay of the d electrons' internal degrees of freedom.

  18. Comprehensive analysis of Arabidopsis expression level polymorphisms with simple inheritance

    PubMed Central

    Plantegenet, Stephanie; Weber, Johann; Goldstein, Darlene R; Zeller, Georg; Nussbaumer, Cindy; Thomas, Jérôme; Weigel, Detlef; Harshman, Keith; Hardtke, Christian S

    2009-01-01

    In Arabidopsis thaliana, gene expression level polymorphisms (ELPs) between natural accessions that exhibit simple, single locus inheritance are promising quantitative trait locus (QTL) candidates to explain phenotypic variability. It is assumed that such ELPs overwhelmingly represent regulatory element polymorphisms. However, comprehensive genome-wide analyses linking expression level, regulatory sequence and gene structure variation are missing, preventing definite verification of this assumption. Here, we analyzed ELPs observed between the Eil-0 and Lc-0 accessions. Compared with non-variable controls, 5′ regulatory sequence variation in the corresponding genes is indeed increased. However, ∼42% of all the ELP genes also carry major transcription unit deletions in one parent as revealed by genome tiling arrays, representing a >4-fold enrichment over controls. Within the subset of ELPs with simple inheritance, this proportion is even higher and deletions are generally more severe. Similar results were obtained from analyses of the Bay-0 and Sha accessions, using alternative technical approaches. Collectively, our results suggest that drastic structural changes are a major cause for ELPs with simple inheritance, corroborating experimentally observed indel preponderance in cloned Arabidopsis QTL. PMID:19225455

  19. Random sequential adsorption of straight rigid rods on a simple cubic lattice

    NASA Astrophysics Data System (ADS)

    García, G. D.; Sanchez-Varretti, F. O.; Centres, P. M.; Ramirez-Pastor, A. J.

    2015-10-01

    Random sequential adsorption of straight rigid rods of length k (k-mers) on a simple cubic lattice has been studied by numerical simulations and finite-size scaling analysis. The k-mers were irreversibly and isotropically deposited into the lattice. The calculations were performed by using a new theoretical scheme, whose accuracy was verified by comparison with rigorous analytical data. The results, obtained for k ranging from 2 to 64, revealed that (i) the jamming coverage for dimers (k = 2) is θj = 0.918388(16); our result corrects the previously reported value of θj = 0.799(2) (Tarasevich and Cherkasova, 2007); (ii) θj is a decreasing function of the k-mer size, with θj(∞) = 0.4045(19) being the limiting coverage for large k; and (iii) the ratio between percolation threshold and jamming coverage shows a non-universal behavior, monotonically decreasing to zero with increasing k.
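
    A naive version of such a deposition simulation can be sketched as follows (a toy illustration under assumed parameters, limited to dimers on a small periodic lattice; it only approaches the reported θj = 0.918388(16) on large lattices):

```python
# Hedged sketch: random sequential adsorption of dimers (k = 2) on a small
# simple cubic lattice with periodic boundaries. A toy version of the kind
# of simulation described in the abstract, not the authors' scheme.
import random

def rsa_dimers(L, attempts_per_site=200, seed=0):
    rng = random.Random(seed)
    occ = set()                              # occupied sites
    axes = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
    for _ in range(attempts_per_site * L**3):
        x, y, z = (rng.randrange(L) for _ in range(3))
        dx, dy, dz = rng.choice(axes)        # random isotropic orientation
        a = (x, y, z)
        b = ((x + dx) % L, (y + dy) % L, (z + dz) % L)
        if a not in occ and b not in occ:    # deposit only if both sites free
            occ.add(a)
            occ.add(b)
    return len(occ) / L**3                   # coverage at (near-)jamming

theta = rsa_dimers(L=10)
```

    With enough attempts per site the lattice jams and the coverage settles near the dimer jamming value, up to finite-size fluctuations.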

  20. Theoretical Analysis and Bench Tests of a Control-Surface Booster Employing a Variable Displacement Hydraulic Pump

    NASA Technical Reports Server (NTRS)

    Mathews, Charles W.; Kleckner, Harold F.

    1947-01-01

    The NACA is conducting a general investigation of servo-mechanisms for use in powering aircraft control surfaces. This paper presents a theoretical analysis and the results of bench tests of a control-booster system which employs a variable displacement hydraulic pump. The booster is intended for use in a flight investigation to determine the effects of various booster parameters on the handling qualities of airplanes. Such a flight investigation would aid in formulating specific requirements concerning the design of control boosters in general. Results of the theoretical analysis and the bench tests indicate that the subject booster is representative of types which show promise of satisfactory performance. The bench tests showed that the following desirable features were inherent in this booster system: (1) no lost motion or play in any part of the system; (2) no detectable lag between motion of the control stick and control surface; and (3) good agreement between control displacements and stick-force variations with no hysteresis in the stick-force characteristics. The final design configuration of this booster system showed no tendency to oscillate, overshoot, or have other undesirable transient characteristics common to boosters.

  1. Critical Analysis of the Mathematical Formalism of Theoretical Physics. I. Foundations of Differential and Integral Calculus

    NASA Astrophysics Data System (ADS)

    Kalanov, Temur Z.

    2013-04-01

    Critical analysis of the standard foundations of differential and integral calculus -- as mathematical formalism of theoretical physics -- is proposed. Methodological basis of the analysis is the unity of formal logic and rational dialectics. It is shown that: (a) the foundations (i.e. dy/dx = lim_{δx→0} δy/δx = lim_{δx→0} [f(x + δx) − f(x)]/δx, dx = δx, dy = δy, where y = f(x) is a continuous function of one argument x; δx and δy are increments; dx and dy are differentials) do not satisfy the formal-logic law of identity; (b) the infinitesimal quantities dx, dy are fictitious quantities: they have neither algebraic meaning nor geometrical meaning because these quantities do not take numerical values and, therefore, have no quantitative measure; (c) expressions of the kind x + dx are erroneous because x (i.e. a finite quantity) and dx (i.e. an infinitely diminished quantity) have different sense, different qualitative determinacy; since x = const under δx → 0, a derivative does not contain the variable quantity x and depends only on the constant c. Consequently, the standard concepts ``infinitesimal quantity (uninterruptedly diminishing quantity)'', ``derivative'', ``derivative as function of variable quantity'' represent an incorrect basis of mathematics and theoretical physics.

  2. A Model and Simple Iterative Algorithm for Redundancy Analysis.

    ERIC Educational Resources Information Center

    Fornell, Claes; And Others

    1988-01-01

    This paper shows that redundancy maximization with J. K. Johansson's extension can be accomplished via a simple iterative algorithm based on H. Wold's Partial Least Squares. The model and the iterative algorithm for the least squares approach to redundancy maximization are presented. (TJH)

  3. Procedures for experimental measurement and theoretical analysis of large plastic deformations

    NASA Technical Reports Server (NTRS)

    Morris, R. E.

    1974-01-01

    Theoretical equations are derived and analytical procedures are presented for the interpretation of experimental measurements of large plastic strains in the surface of a plate. Orthogonal gage lengths established on the metal surface are measured before and after deformation. The change in orthogonality after deformation is also measured. Equations yield the principal strains, deviatoric stresses in the absence of surface friction forces, true stresses if the stress normal to the surface is known, and the orientation angle between the deformed gage line and the principal stress-strain axes. Errors in the measurement of nominal strains greater than 3 percent are within engineering accuracy. Applications suggested for this strain measurement system include the large-strain-stress analysis of impact test models, burst tests of spherical or cylindrical pressure vessels, and to augment small-strain instrumentation tests where large strains are anticipated.
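
    One ingredient of such a reduction, converting measured gage lengths into true (logarithmic) strains, can be sketched as follows; the paper's full equations (principal axes, orthogonality change, stress recovery) are not reproduced here:

```python
# Hedged sketch of one step in a large-strain data reduction: the true
# (logarithmic) strain of a gage line from its lengths before and after
# deformation. Illustrative only; not the report's complete procedure.
import math

def true_strain(l0, l):
    """Logarithmic strain ln(l/l0) from initial and deformed gage lengths."""
    return math.log(l / l0)

# Two orthogonal gage lines, lengths in mm before and after deformation
# (toy numbers): one line stretches, the other contracts.
e1 = true_strain(10.0, 12.0)
e2 = true_strain(10.0, 9.0)
```

    For large deformations the logarithmic measure is preferred over the nominal strain (l − l0)/l0, since successive stretches then add rather than compound.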

  4. Theoretical analysis of low-power fast optogenetic control of firing of Chronos-expressing neurons.

    PubMed

    Saran, Sant; Gupta, Neha; Roy, Sukhdev

    2018-04-01

    A detailed theoretical analysis of low-power, fast optogenetic control of firing of Chronos-expressing neurons has been presented. A three-state model for the Chronos photocycle has been formulated and incorporated in a fast-spiking interneuron circuit model. The effect of excitation wavelength, pulse irradiance, pulse width, and pulse frequency has been studied in detail and compared with ChR2. Theoretical simulations are in excellent agreement with recently reported experimental results and bring out additional interesting features. At very low irradiances ([Formula: see text]), the plateau current in Chronos exhibits a maximum. At [Formula: see text], the plateau current is 2 orders of magnitude smaller and saturates at longer pulse widths ([Formula: see text]) compared to ChR2 ([Formula: see text]). [Formula: see text] in Chronos saturates at much shorter pulse widths (1775 pA at 1.5 ms and [Formula: see text]) than in ChR2. Spiking fidelity is also higher at lower irradiances and longer pulse widths compared to ChR2. Chronos exhibits an average maximal driven rate of over [Formula: see text] in response to [Formula: see text] stimuli, each of 1-ms pulse-width, in the intensity range 0 to [Formula: see text]. The analysis is important to not only understand the photodynamics of Chronos and Chronos-expressing neurons but also to design opsins with optimized properties and perform precision experiments with required spatiotemporal resolution.
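
    A generic three-state photocycle of this kind (closed C, open O, desensitized D) can be sketched with a simple Euler integration; all rate constants below are illustrative assumptions, not the fitted Chronos parameters from the paper:

```python
# Hedged sketch: generic three-state opsin photocycle C -> O -> D -> C
# integrated with forward Euler. Rates are illustrative assumptions only.

def photocycle(k_on=50.0, k_off=100.0, k_rec=5.0, dt=1e-5, t_end=0.1):
    """Light-driven opening C->O (k_on), closure O->D (k_off), and
    recovery D->C (k_rec); returns the final state populations."""
    c, o, d = 1.0, 0.0, 0.0          # start with all channels closed
    for _ in range(int(t_end / dt)):
        dc = k_rec * d - k_on * c    # recovery in, photoexcitation out
        do = k_on * c - k_off * o    # opening in, closure out
        dd = k_off * o - k_rec * d   # closure in, recovery out
        c, o, d = c + dt * dc, o + dt * do, d + dt * dd
    return c, o, d

c, o, d = photocycle()
```

    The open-state population o sets the photocurrent in such models; the plateau behavior in the abstract corresponds to the steady state this system relaxes to under continuous illumination.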

  5. Graph Theoretical Analysis, In Silico Modeling, Synthesis, Anti-Microbial and Anti-TB Evaluation of Novel Quinoxaline Derivatives.

    PubMed

    Saravanan, Govindaraj; Selvam, Theivendren Panneer; Alagarsamy, Veerachamy; Kunjiappan, Selvaraj; Joshi, Shrinivas D; Indhumathy, Murugan; Kumar, Pandurangan Dinesh

    2018-05-01

    We designed and synthesized a series of 2-(2-(substituted benzylidene) hydrazinyl)-N-(4-((3-(phenyl imino)-3,4-dihydro quinoxalin-2(1H)-ylidene)amino) phenyl) acetamides S1-S13 with the hope of obtaining more active and less toxic anti-microbial and anti-TB agents. The novel quinoxaline Schiff bases S1-S13 were synthesized from o-phenylenediamine and oxalic acid by a multistep synthesis. In the present work, we introduce graph theoretical analysis to identify the drug target. For the graph theoretical analysis, we utilised the KEGG database and Cytoscape software. All the title compounds were evaluated for their in-vitro anti-microbial activity by using the agar well diffusion method at three different concentration levels (50, 100 and 150 µg/ml). The MIC of the compounds was also determined by the agar streak dilution method. The graph theoretical analysis highlighted that a key virulence factor for pathogenic mycobacteria is a eukaryotic-like serine/threonine protein kinase, termed PknG. All compounds were found to display significant activity against all tested bacteria and fungi. In addition, the synthesized scaffolds were screened for their in vitro antituberculosis (anti-TB) activity against the Mycobacterium tuberculosis (Mtb) strain H37Ra using the standard drug Rifampicin. A number of analogs showed markedly potent anti-microbial and anti-TB activity. The relationship between the functional group variation and the biological activity of the evaluated compounds is discussed. The results showed that compound S6 (4-nitro substitution) exhibited the most potent anti-microbial and anti-TB activity of the tested compounds. © Georg Thieme Verlag KG Stuttgart · New York.

  6. Theoretical and experimental examination of SFG polarization analysis at acetonitrile-water solution surfaces.

    PubMed

    Saito, Kengo; Peng, Qiling; Qiao, Lin; Wang, Lin; Joutsuka, Tatsuya; Ishiyama, Tatsuya; Ye, Shen; Morita, Akihiro

    2017-03-29

    Sum frequency generation (SFG) spectroscopy is widely used to observe molecular orientation at interfaces through a combination of various types of polarization. The present work thoroughly examines the relation between the polarization dependence of SFG signals and the molecular orientation, by comparing SFG measurements and molecular dynamics (MD) simulations of acetonitrile/water solutions. The present SFG experiment and MD simulations yield quite consistent results on the ratios of χ (2) elements, supporting the reliability of both means. However, the subsequent polarization analysis tends to derive more upright tilt angles of acetonitrile than the direct MD calculations. The reasons for discrepancy are examined in terms of three issues; (i) anisotropy of the Raman tensor, (ii) cross-correlation, and (iii) orientational distribution. The analysis revealed that the issues (i) and (iii) are the main causes of errors in the conventional polarization analysis of SFG spectra. In methyl CH stretching, the anisotropy of Raman tensor cannot be estimated from the simple bond polarizability model. The neglect of the orientational distribution is shown to systematically underestimate the tilt angle of acetonitrile. Further refined use of polarization analysis in collaboration with MD simulations should be proposed.

  7. Experimental and Theoretical Modal Analysis of Full-Sized Wood Composite Panels Supported on Four Nodes

    PubMed Central

    Guan, Cheng; Zhang, Houjiang; Wang, Xiping; Miao, Hu; Zhou, Lujing; Liu, Fenglu

    2017-01-01

    Key elastic properties of full-sized wood composite panels (WCPs) must be accurately determined not only for safety, but also serviceability demands. In this study, the modal parameters of full-sized WCPs supported on four nodes were analyzed for determining the modulus of elasticity (E) in both major and minor axes, as well as the in-plane shear modulus of panels by using a vibration testing method. The experimental modal analysis was conducted on three full-sized medium-density fiberboard (MDF) and three full-sized particleboard (PB) panels of three different thicknesses (12, 15, and 18 mm). The natural frequencies and mode shapes of the first nine modes of vibration were determined. Results from experimental modal testing were compared with the results of a theoretical modal analysis. A sensitivity analysis was performed to identify the sensitive modes for calculating E (major axis: Ex and minor axis: Ey) and the in-plane shear modulus (Gxy) of the panels. Mode shapes of the MDF and PB panels obtained from modal testing are in a good agreement with those from theoretical modal analyses. A strong linear relationship exists between the measured natural frequencies and the calculated frequencies. The frequencies of modes (2, 0), (0, 2), and (2, 1) under the four-node support condition were determined as the characteristic frequencies for calculation of Ex, Ey, and Gxy of full-sized WCPs. The results of this study indicate that the four-node support can be used in free vibration test to determine the elastic properties of full-sized WCPs. PMID:28773043

  8. Combined Theoretical and Experimental Study of Refractive Indices of Water-Acetonitrile-Salt Systems.

    PubMed

    An, Ni; Zhuang, Bilin; Li, Minglun; Lu, Yuyuan; Wang, Zhen-Gang

    2015-08-20

    We propose a simple theoretical formula for describing the refractive indices in binary liquid mixtures containing salt ions. Our theory is based on the Clausius-Mossotti equation; it gives the refractive index of the mixture in terms of the refractive indices of the pure liquids and the polarizability of the ionic species, by properly accounting for the volume change upon mixing. The theoretical predictions are tested by extensive experimental measurements of the refractive indices for water-acetonitrile-salt systems for several liquid compositions, different salt species, and a range of salt concentrations. Excellent agreement is obtained in all cases, especially at low salt concentrations, with no fitting parameters. A simplified expression of the refractive index for low salt concentration is also given, which can be the theoretical basis for determination of salt concentration using refractive index measurements.
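
    The Clausius-Mossotti (Lorentz-Lorenz) mixing idea behind the formula can be sketched as below, assuming ideal volume-additive mixing and no salt; the paper's full expression additionally accounts for ionic polarizability and the real volume change on mixing:

```python
# Hedged sketch: Lorentz-Lorenz (Clausius-Mossotti) mixing rule for a binary
# liquid, assuming ideal (volume-additive) mixing and no dissolved salt.

def lorentz_lorenz(n):
    """The Lorentz-Lorenz function f(n) = (n^2 - 1)/(n^2 + 2)."""
    return (n**2 - 1.0) / (n**2 + 2.0)

def mixture_index(n1, n2, phi1):
    """Refractive index of a mixture from volume fraction phi1 of liquid 1,
    treating f(n) as additive in volume fractions."""
    f = phi1 * lorentz_lorenz(n1) + (1.0 - phi1) * lorentz_lorenz(n2)
    return ((1.0 + 2.0 * f) / (1.0 - f)) ** 0.5

# Water (n ≈ 1.333) and acetonitrile (n ≈ 1.344), equal volumes.
n_mix = mixture_index(1.333, 1.344, 0.5)
```

    The mixture index interpolates between the pure-liquid values; the salt correction in the paper enters through an added ionic polarizability term in f.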

  9. Personality is of central concern to understand health: towards a theoretical model for health psychology

    PubMed Central

    Ferguson, Eamonn

    2013-01-01

    This paper sets out the case that personality traits are central to health psychology. To achieve this, three aims need to be addressed. First, it is necessary to show that personality influences a broad range of health outcomes and mechanisms. Second, the simple descriptive account of Aim 1 is not sufficient, and a theoretical specification needs to be developed to explain the personality-health link and allow for future hypothesis generation. Third, once Aims 1 and 2 are met, it is necessary to demonstrate the clinical utility of personality. In this review I make the case that all three Aims are met. I develop a theoretical framework to understand the links between personality and health drawing on current theorising in the biology, evolution, and neuroscience of personality. I identify traits (i.e., alexithymia, Type D, hypochondriasis, and empathy) that are of particular concern to health psychology and set these within evolutionary cost-benefit analysis. The literature is reviewed within a three-level hierarchical model (individual, group, and organisational) and it is argued that health psychology needs to move from its traditional focus on the individual level to engage group and organisational levels. PMID:23772230

  10. Simple prediction method of lumbar lordosis for planning of lumbar corrective surgery: radiological analysis in a Korean population.

    PubMed

    Lee, Chong Suh; Chung, Sung Soo; Park, Se Jun; Kim, Dong Min; Shin, Seong Kee

    2014-01-01

    This study aimed to derive a lordosis predictive equation using the pelvic incidence and to establish a simple prediction method of lumbar lordosis for planning lumbar corrective surgery in Asians. Eighty-six asymptomatic volunteers were enrolled in the study. The maximal lumbar lordosis (MLL), lower lumbar lordosis (LLL), pelvic incidence (PI), and sacral slope (SS) were measured. The correlations between the parameters were analyzed using Pearson correlation analysis. Predictive equations of lumbar lordosis through simple regression analysis of the parameters and simple predictive values of lumbar lordosis using PI were derived. The PI strongly correlated with the SS (r = 0.78), and a strong correlation was found between the SS and LLL (r = 0.89), and between the SS and MLL (r = 0.83). Based on these correlations, the predictive equations of lumbar lordosis were found: SS = 0.80 + 0.74 PI (r = 0.78, R² = 0.61), LLL = 5.20 + 0.87 SS (r = 0.89, R² = 0.80), and MLL = 17.41 + 0.96 SS (r = 0.83, R² = 0.68). When PI was between 30° to 35°, 40° to 50° and 55° to 60°, the equations predicted that MLL would be PI + 10°, PI + 5° and PI, and LLL would be PI - 5°, PI - 10° and PI - 15°, respectively. This simple calculation method can provide a more appropriate and simpler prediction of lumbar lordosis for Asian populations. The prediction of lumbar lordosis should be used as a reference for surgeons planning to restore the lumbar lordosis in lumbar corrective surgery.
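
    The reported regression equations transcribe directly into code (a sketch for illustration, with angles in degrees; not clinical software):

```python
# Direct transcription of the regression equations reported in the abstract
# (all angles in degrees). Illustrative sketch only, not clinical software.

def predict_lordosis(pi):
    """Predict sacral slope, lower and maximal lumbar lordosis from the
    pelvic incidence, using the abstract's fitted equations."""
    ss = 0.80 + 0.74 * pi     # SS  = 0.80  + 0.74 PI
    lll = 5.20 + 0.87 * ss    # LLL = 5.20  + 0.87 SS
    mll = 17.41 + 0.96 * ss   # MLL = 17.41 + 0.96 SS
    return ss, lll, mll

ss, lll, mll = predict_lordosis(50.0)
```

    For PI = 50° this gives MLL ≈ 53.7° and LLL ≈ 38.1°, close to the abstract's rule-of-thumb MLL ≈ PI + 5° and LLL ≈ PI − 10° for PI in the 40°-50° range.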

  11. On the (In)Validity of Tests of Simple Mediation: Threats and Solutions

    PubMed Central

    Pek, Jolynn; Hoyle, Rick H.

    2015-01-01

    Mediation analysis is a popular framework for identifying underlying mechanisms in social psychology. In the context of simple mediation, we review and discuss the implications of three facets of mediation analysis: (a) conceptualization of the relations between the variables, (b) statistical approaches, and (c) relevant elements of design. We also highlight the issue of equivalent models that are inherent in simple mediation. The extent to which results are meaningful stem directly from choices regarding these three facets of mediation analysis. We conclude by discussing how mediation analysis can be better applied to examine causal processes, highlight the limits of simple mediation, and make recommendations for better practice. PMID:26985234
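
    In the simple mediation setting reviewed here (X → M → Y), the common product-of-coefficients estimate can be sketched on toy data (a minimal illustration of one statistical approach the review discusses, not the authors' recommended practice):

```python
# Minimal sketch of the product-of-coefficients approach to simple mediation
# X -> M -> Y on simulated data: estimate a (X->M), b (M->Y given X), and
# the indirect effect a*b. Toy illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)             # true a  = 0.5
y = 0.4 * m + 0.2 * x + rng.normal(size=n)   # true b  = 0.4, direct c' = 0.2

# a: regress M on X (with intercept)
a = np.linalg.lstsq(np.column_stack([np.ones(n), x]), m, rcond=None)[0][1]
# b: regress Y on M and X; take the coefficient on M
b = np.linalg.lstsq(np.column_stack([np.ones(n), m, x]), y, rcond=None)[0][1]
indirect = a * b                              # true indirect effect = 0.2
```

    The review's point stands even in this toy setting: an equivalent model with M and Y swapped fits the same data, so the causal reading of a·b rests on design, not on the regression arithmetic.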

  12. A simple approach to quantitative analysis using three-dimensional spectra based on selected Zernike moments.

    PubMed

    Zhai, Hong Lin; Zhai, Yue Yuan; Li, Pei Zhen; Tian, Yue Li

    2013-01-21

    A very simple approach to quantitative analysis is proposed based on the technology of digital image processing using three-dimensional (3D) spectra obtained by high-performance liquid chromatography coupled with a diode array detector (HPLC-DAD). As the region-based shape features of a grayscale image, Zernike moments with inherently invariance property were employed to establish the linear quantitative models. This approach was applied to the quantitative analysis of three compounds in mixed samples using 3D HPLC-DAD spectra, and three linear models were obtained, respectively. The correlation coefficients (R²) for training and test sets were more than 0.999, and the statistical parameters and strict validation supported the reliability of established models. The analytical results suggest that the Zernike moment selected by stepwise regression can be used in the quantitative analysis of target compounds. Our study provides a new idea for quantitative analysis using 3D spectra, which can be extended to the analysis of other 3D spectra obtained by different methods or instruments.

  13. Analysis of poetic literature using B. F. Skinner's theoretical framework from verbal behavior

    PubMed Central

    Luke, Nicole M.

    2003-01-01

    This paper examines Skinner's work on verbal behavior in the context of literature as a particular class of written verbal behavior. It looks at contemporary literary theory and analysis and the contributions that Skinner's theoretical framework can make. Two diverse examples of poetic literature are chosen and analyzed following Skinner's framework, examining the dynamic interplay between the writer and reader that take place within the bounds of the work presented. It is concluded that Skinner's hypotheses about verbal behavior and the functional approach to understanding it have much to offer literary theorists in their efforts to understand literary works and should be more carefully examined.

  14. Theoretical analysis of the distribution of isolated particles in totally asymmetric exclusion processes: Application to mRNA translation rate estimation

    NASA Astrophysics Data System (ADS)

    Dao Duc, Khanh; Saleem, Zain H.; Song, Yun S.

    2018-01-01

    The Totally Asymmetric Exclusion Process (TASEP) is a classical stochastic model for describing the transport of interacting particles, such as ribosomes moving along the messenger ribonucleic acid (mRNA) during translation. Although this model has been widely studied in the past, the extent of collision between particles and the average distance between a particle to its nearest neighbor have not been quantified explicitly. We provide here a theoretical analysis of such quantities via the distribution of isolated particles. In the classical form of the model in which each particle occupies only a single site, we obtain an exact analytic solution using the matrix ansatz. We then employ a refined mean-field approach to extend the analysis to a generalized TASEP with particles of an arbitrary size. Our theoretical study has direct applications in mRNA translation and the interpretation of experimental ribosome profiling data. In particular, our analysis of data from Saccharomyces cerevisiae suggests a potential bias against the detection of nearby ribosomes with a gap distance of less than approximately three codons, which leads to some ambiguity in estimating the initiation rate and protein production flux for a substantial fraction of genes. Despite such ambiguity, however, we demonstrate theoretically that the interference rate associated with collisions can be robustly estimated and show that approximately 1% of the translating ribosomes get obstructed.
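
    A toy simulation of the classical single-site TASEP with open boundaries makes the "isolated particle" statistic concrete (entry/exit rates and lattice size below are illustrative assumptions; the paper's exact result uses the matrix ansatz rather than simulation):

```python
# Toy sketch: classical TASEP with open boundaries and single-site particles,
# measuring the stationary fraction of isolated particles (both neighbors
# empty). Rates and sizes are illustrative assumptions only.
import random

def tasep_isolated_fraction(L=100, alpha=0.3, beta=0.7, sweeps=5000, seed=1):
    rng = random.Random(seed)
    lat = [0] * L
    iso_frac_sum = samples = 0
    for sweep in range(sweeps):
        for _ in range(L + 1):
            i = rng.randrange(-1, L)          # -1 selects the entry move
            if i == -1:
                if lat[0] == 0 and rng.random() < alpha:
                    lat[0] = 1                # injection at the left boundary
            elif i == L - 1:
                if lat[i] == 1 and rng.random() < beta:
                    lat[i] = 0                # extraction at the right boundary
            elif lat[i] == 1 and lat[i + 1] == 0:
                lat[i], lat[i + 1] = 0, 1     # bulk hop to the right
        if sweep > sweeps // 2:               # measure after burn-in
            n = sum(lat)
            if n:
                iso = sum(1 for j in range(L) if lat[j]
                          and (j == 0 or lat[j - 1] == 0)
                          and (j == L - 1 or lat[j + 1] == 0))
                iso_frac_sum += iso / n
                samples += 1
    return iso_frac_sum / samples

frac = tasep_isolated_fraction()
```

    In the low-density phase most particles are isolated; it is exactly this neighbor-gap statistic that the ribosome-profiling analysis needs, since nearby ribosomes with small gaps may be under-detected.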

  15. Evolution Analysis of Simple Sequence Repeats in Plant Genome.

    PubMed

    Qin, Zhen; Wang, Yanping; Wang, Qingmei; Li, Aixian; Hou, Fuyun; Zhang, Liming

    2015-01-01

    Simple sequence repeats (SSRs) are widespread units on genome sequences, and play many important roles in plants. In order to reveal the evolution of plant genomes, we investigated the evolutionary regularities of SSRs during the evolution of plant species and the plant kingdom by analysis of twelve sequenced plant genome sequences. First, in the twelve studied plant genomes, the main SSRs were those which contain repeats of 1-3 nucleotides combination. Second, in mononucleotide SSRs, the A/T percentage gradually increased along with the evolution of plants (except for P. patens). With the increase of SSRs repeat number the percentage of A/T in C. reinhardtii had no significant change, while the percentage of A/T in terrestrial plants species gradually declined. Third, in dinucleotide SSRs, the percentage of AT/TA increased along with the evolution of plant kingdom and the repeat number increased in terrestrial plants species. This trend was more obvious in dicotyledon than monocotyledon. The percentage of CG/GC showed the opposite pattern to the AT/TA. Forth, in trinucleotide SSRs, the percentages of combinations including two or three A/T were in a rising trend along with the evolution of plant kingdom; meanwhile with the increase of SSRs repeat number in plants species, different species chose different combinations as dominant SSRs. SSRs in C. reinhardtii, P. patens, Z. mays and A. thaliana showed their specific patterns related to evolutionary position or specific changes of genome sequences. The results showed that, SSRs not only had the general pattern in the evolution of plant kingdom, but also were associated with the evolution of the specific genome sequence. The study of the evolutionary regularities of SSRs provided new insights for the analysis of the plant genome evolution.
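
    Scanning a sequence for perfect mono- and dinucleotide SSRs of the kind tallied above can be sketched with regular expressions (the minimum repeat counts are illustrative thresholds, not the criteria used in the study):

```python
# Hedged sketch: find perfect mono- and dinucleotide simple sequence repeats
# (SSRs) in a DNA string with regular expressions. The minimum repeat counts
# are arbitrary illustrative thresholds, not the study's criteria.
import re

def find_ssrs(seq, min_mono=6, min_di=4):
    """Return (motif, repeat_count, start) tuples for mono- and dinucleotide
    SSRs with at least min_mono / min_di tandem copies."""
    hits = []
    # mononucleotide runs: one base repeated >= min_mono times
    for m in re.finditer(r"([ACGT])\1{%d,}" % (min_mono - 1), seq):
        hits.append((m.group(1), len(m.group(0)), m.start()))
    # dinucleotide runs: a 2-base motif repeated >= min_di times
    for m in re.finditer(r"([ACGT]{2})\1{%d,}" % (min_di - 1), seq):
        motif = m.group(1)
        if motif[0] != motif[1]:              # skip AA, TT, ... (mononucleotide)
            hits.append((motif, len(m.group(0)) // 2, m.start()))
    return hits

seq = "GGCAAAAAAAGGATATATATATCC"
hits = find_ssrs(seq)
```

    On the toy sequence this reports the A-run of length 7 and the (AT)5 repeat; genome-scale surveys like the one above then bin such hits by motif class and repeat number.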

  16. An effectiveness analysis of healthcare systems using a systems theoretic approach.

    PubMed

    Chuang, Sheuwen; Inder, Kerry

    2009-10-24

    The use of accreditation and quality measurement and reporting to improve healthcare quality and patient safety has been widespread across many countries. A review of the literature reveals no association between the accreditation system and the quality measurement and reporting systems, even when hospital compliance with these systems is satisfactory. Improvement of health care outcomes needs to be based on an appreciation of the whole system that contributes to those outcomes. The research literature currently lacks an appropriate analysis and is fragmented among activities. This paper aims to propose an integrated research model of these two systems and to demonstrate the usefulness of the resulting model for strategic research planning. To achieve these aims, a systematic integration of the healthcare accreditation and quality measurement/reporting systems is structured hierarchically. A holistic systems relationship model of the administration segment is developed to act as an investigation framework. A literature-based empirical study is used to validate the proposed relationships derived from the model. Australian experiences are used as evidence for the system effectiveness analysis and design base for an adaptive-control study proposal to show the usefulness of the system model for guiding strategic research. Three basic relationships were revealed and validated from the research literature. The systemic weaknesses of the accreditation system and quality measurement/reporting system from a system flow perspective were examined. The approach provides a system thinking structure to assist the design of quality improvement strategies. The proposed model discovers a fourth implicit relationship, a feedback between quality performance reporting components and choice of accreditation components that is likely to play an important role in health care outcomes. An example involving accreditation surveyors is developed that provides a systematic search for

  17. Teachers' Stances on Cell Phones in the ESL Classroom: Toward a "Theoretical" Framework

    ERIC Educational Resources Information Center

    Brown, Jeff

    2014-01-01

    In the ongoing and constantly expanding discussion surrounding cell phones in the classroom, a theoretical complement to the practical side of the issue is generally lacking. This is perhaps understandable. Many teachers are still trying to deal with the simple presence of cell phones in the class, and managing a classroom in which the presence…

  18. Quantitative theoretical analysis of lifetimes and decay rates relevant in laser cooling BaH

    NASA Astrophysics Data System (ADS)

    Moore, Keith; Lane, Ian C.

    2018-05-01

    Tiny radiative losses below the 0.1% level can prove ruinous to the effective laser cooling of a molecule. In this paper the laser cooling of a hydride is studied with rovibronic detail using ab initio quantum chemistry in order to document the decays to all possible electronic states (not just the vibrational branching within a single electronic transition) and to identify the most populated final quantum states. The effect of spin-orbit and associated couplings on the properties of the lowest excited states of BaH is analysed in detail. The lifetimes of the A2Π1/2, H2Δ3/2 and E2Π1/2 states are calculated (136 ns, 5.8 μs and 46 ns respectively) for the first time, while the theoretical value for B2Σ1/2+ is in good agreement with experiments. Using a simple rate model the numbers of absorption-emission cycles possible for both one- and two-colour cooling on the competing electronic transitions are determined, and it is clearly demonstrated that the A2Π - X2Σ+ transition is superior to B2Σ+ - X2Σ+, where multiple tiny decay channels degrade its efficiency. Further possible improvements to the cooling method are proposed.
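
    The sensitivity to sub-0.1% losses follows from simple cycle accounting, sketched below (the 0.1% figure echoes the abstract; the specific leak values are illustrative):

```python
# Back-of-envelope sketch of why tiny leaks matter in molecular laser
# cooling: with per-cycle probability p_loss of decaying to an unaddressed
# (dark) state, the expected number of absorption-emission cycles is
# 1/p_loss. Specific numbers below are illustrative.

def expected_cycles(p_loss):
    """Mean number of photon scatters before falling into a dark state."""
    return 1.0 / p_loss

# A 0.1% leak caps the photon budget near a thousand scatters; closing it
# by two orders of magnitude buys a hundred-fold longer cooling sequence.
cycles_leaky = expected_cycles(1e-3)
cycles_closed = expected_cycles(1e-5)
```

    Since slowing and cooling a molecule typically requires many thousands of scatters, transitions whose summed leak channels exceed roughly 10⁻³ per cycle are effectively unusable without repump lasers.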

  19. The biomechanics of simple steatosis and steatohepatitis

    NASA Astrophysics Data System (ADS)

    Parker, K. J.; Ormachea, J.; Drage, M. G.; Kim, H.; Hah, Z.

    2018-05-01

    Magnetic resonance and ultrasound elastography techniques are now important tools for staging high-grade fibrosis in patients with chronic liver disease. However, uncertainty remains about the effects of simple accumulation of fat (steatosis) and inflammation (steatohepatitis) on the parameters that can be measured using different elastographic techniques. To address this, we examine the rheological models that are capable of capturing the dominant viscoelastic behaviors associated with fat and inflammation in the liver, and quantify the resulting changes in shear wave speed and viscoelastic parameters. Theoretical results are shown to match measurements in phantoms and animal studies reported in the literature. These results are useful for better design of elastographic studies of fatty liver disease and steatohepatitis, potentially leading to improved diagnosis of these conditions.

  20. Simple Electrolyzer Model Development for High-Temperature Electrolysis System Analysis Using Solid Oxide Electrolysis Cell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    JaeHwa Koh; DuckJoo Yoon; Chang H. Oh

    2010-07-01

    An electrolyzer model for the analysis of a hydrogen-production system using a solid oxide electrolysis cell (SOEC) has been developed, and the effects for principal parameters have been estimated by sensitivity studies based on the developed model. The main parameters considered are current density, area specific resistance, temperature, pressure, and molar fraction and flow rates in the inlet and outlet. Finally, a simple model for a high-temperature hydrogen-production system using the solid oxide electrolysis cell integrated with very high temperature reactors is estimated.
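
    A zero-dimensional relation of the kind the abstract describes, built from the listed parameters (current density, area specific resistance, flows), can be sketched as follows; the open-circuit voltage value is an assumed typical high-temperature number, not taken from the report:

```python
# Hedged sketch of a zero-dimensional SOEC electrolyzer relation: operating
# voltage = open-circuit (Nernst) voltage plus the ohmic overpotential
# i * ASR, and hydrogen production from Faraday's law. The open-circuit
# voltage below is an assumed typical value, not from the report.

F = 96485.0  # C/mol, Faraday constant

def cell_voltage(i, asr, v_oc=0.95):
    """Operating voltage [V]; i in A/cm^2, asr in ohm*cm^2 (area specific
    resistance), v_oc an assumed high-temperature open-circuit voltage."""
    return v_oc + i * asr

def h2_production(i, area_cm2):
    """Molar H2 rate [mol/s] from Faraday's law: 2 electrons per H2."""
    return i * area_cm2 / (2.0 * F)

v = cell_voltage(i=0.5, asr=0.5)           # ohmic overpotential of 0.25 V
rate = h2_production(i=0.5, area_cm2=100.0)
```

    The sensitivity trends in the abstract follow directly: raising current density increases hydrogen output linearly but raises the voltage (and hence electrical power demand) through the ASR term.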

  1. Nanoscale simple-fluid behavior under steady shear.

    PubMed

    Yong, Xin; Zhang, Lucy T

    2012-05-01

    In this study, we use two nonequilibrium molecular dynamics algorithms, boundary-driven shear and homogeneous shear, to explore the rheology and flow properties of a simple fluid undergoing steady simple shear. The two distinct algorithms are designed to elucidate the influences of nanoscale confinement. The results of rheological material functions, i.e., viscosity and normal pressure differences, show consistent Newtonian behaviors at low shear rates from both systems. The comparison validates that confinements of the order of 10 nm are not strong enough to drive the simple-fluid behavior away from continuum hydrodynamics. The non-Newtonian phenomena of the simple fluid are further investigated by the homogeneous shear simulations at much higher shear rates. We observe the "string phase" at high shear rates by applying both profile-biased and profile-unbiased thermostats. Contrary to other findings where the string phase is found to be an artifact of the thermostats, we perform a thorough analysis of the fluid microstructures formed due to shear, which shows that it is possible to have a string phase and second shear thinning for dense simple fluids.

  2. A gauge-theoretic approach to gravity.

    PubMed

    Krasnov, Kirill

    2012-08-08

    Einstein's general relativity (GR) is a dynamical theory of the space-time metric. We describe an approach in which GR becomes an SU(2) gauge theory. We start at the linearized level and show how a gauge-theoretic Lagrangian for non-interacting massless spin two particles (gravitons) takes a much more simple and compact form than in the standard metric description. Moreover, in contrast to the GR situation, the gauge theory Lagrangian is convex. We then proceed with a formulation of the full nonlinear theory. The equivalence to the metric-based GR holds only at the level of solutions of the field equations, that is, on-shell. The gauge-theoretic approach also makes it clear that GR is not the only interacting theory of massless spin two particles, in spite of the GR uniqueness theorems available in the metric description. Thus, there is an infinite-parameter class of gravity theories all describing just two propagating polarizations of the graviton. We describe how matter can be coupled to gravity in this formulation and, in particular, how both the gravity and Yang-Mills arise as sectors of a general diffeomorphism-invariant gauge theory. We finish by outlining a possible scenario of the ultraviolet completion of quantum gravity within this approach.

  3. A gauge-theoretic approach to gravity

    PubMed Central

    Krasnov, Kirill

    2012-01-01

    Einstein's general relativity (GR) is a dynamical theory of the space–time metric. We describe an approach in which GR becomes an SU(2) gauge theory. We start at the linearized level and show how a gauge-theoretic Lagrangian for non-interacting massless spin two particles (gravitons) takes a much more simple and compact form than in the standard metric description. Moreover, in contrast to the GR situation, the gauge theory Lagrangian is convex. We then proceed with a formulation of the full nonlinear theory. The equivalence to the metric-based GR holds only at the level of solutions of the field equations, that is, on-shell. The gauge-theoretic approach also makes it clear that GR is not the only interacting theory of massless spin two particles, in spite of the GR uniqueness theorems available in the metric description. Thus, there is an infinite-parameter class of gravity theories all describing just two propagating polarizations of the graviton. We describe how matter can be coupled to gravity in this formulation and, in particular, how both the gravity and Yang–Mills arise as sectors of a general diffeomorphism-invariant gauge theory. We finish by outlining a possible scenario of the ultraviolet completion of quantum gravity within this approach. PMID:22792040

  4. On the Hilbert-Huang Transform Theoretical Foundation

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Blank, Karin; Huang, Norden E.

    2004-01-01

    The Hilbert-Huang Transform [HHT] is a novel empirical method for spectrum analysis of non-linear and non-stationary signals. The HHT is a recent development and much remains to be done to establish the theoretical foundation of the HHT algorithms. This paper develops the theoretical foundation for the convergence of the HHT sifting algorithm and proves that the finest spectrum scale will always be the first generated by the HHT Empirical Mode Decomposition (EMD) algorithm. The theoretical foundation for partitioning a set of extrema data points into two parts is also developed. This then allows parallel signal processing for the computationally complex HHT sifting algorithm and its optimization in hardware.
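
    The sifting step whose convergence the paper analyzes can be sketched compactly; the version below is a deliberately simplified illustration (linear-interpolation envelopes instead of the cubic splines used in real EMD):

```python
import math

# One EMD sifting iteration (simplified sketch): subtract the mean of the
# upper and lower extrema envelopes from the signal.
def local_extrema(x):
    """Indices of interior local maxima and minima."""
    maxima, minima = [], []
    for i in range(1, len(x) - 1):
        if x[i] > x[i - 1] and x[i] > x[i + 1]:
            maxima.append(i)
        elif x[i] < x[i - 1] and x[i] < x[i + 1]:
            minima.append(i)
    return maxima, minima

def envelope(x, idx):
    """Piecewise-linear curve through (idx, x[idx]), pinned at the ends."""
    idx = [0] + idx + [len(x) - 1]
    env = [0.0] * len(x)
    for a, b in zip(idx, idx[1:]):
        for i in range(a, b + 1):
            t = (i - a) / (b - a) if b > a else 0.0
            env[i] = (1 - t) * x[a] + t * x[b]
    return env

def sift_once(x):
    maxima, minima = local_extrema(x)
    upper, lower = envelope(x, maxima), envelope(x, minima)
    return [xi - 0.5 * (u + l) for xi, u, l in zip(x, upper, lower)]

# Toy signal: a fast oscillation (finest scale) riding on a slow trend.
x = [math.sin(0.3 * i) + 0.05 * i for i in range(200)]
h = sift_once(x)  # one sifting pass isolates the oscillatory component
```

    Iterating this step until a stopping criterion is met yields the first IMF — the finest spectral scale, consistent with the convergence result described above.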

  5. Klein's Plan B in the Early Teaching of Analysis: Two Theoretical Cases of Exploring Mathematical Links

    ERIC Educational Resources Information Center

    Kondratieva, Margo; Winsløw, Carl

    2018-01-01

    We present a theoretical approach to the problem of the transition from Calculus to Analysis within the undergraduate mathematics curriculum. First, we formulate this problem using the anthropological theory of the didactic, in particular the notion of praxeology, along with a possible solution related to Klein's "Plan B": here,…

  6. Extended physics as a theoretical framework for systems biology?

    PubMed

    Miquel, Paul-Antoine

    2011-08-01

    In this essay we examine whether a theoretical and conceptual framework for systems biology could be built from the Bailly and Longo (2008, 2009) proposal. These authors aim to understand life as a coherent critical structure, and propose to develop an extended physical approach of evolution, as a diffusion of biomass in a space of complexity. Their attempt leads to a simple mathematical reconstruction of Gould's assumption (1989) concerning the bacterial world as a "left wall of least complexity" that we will examine. Extended physical systems are characterized by their constructive properties. Time is operative, and new properties emerge through the systems' history, which can extend the list of their initial properties. This conceptual and theoretical framework is nothing more than a philosophical assumption, but as such it provides a new and exciting approach concerning the evolution of life, and the transition between physics and biology. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Role of information theoretic uncertainty relations in quantum theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jizba, Petr, E-mail: p.jizba@fjfi.cvut.cz; ITP, Freie Universität Berlin, Arnimallee 14, D-14195 Berlin; Dunningham, Jacob A., E-mail: J.Dunningham@sussex.ac.uk

    2015-04-15

    Uncertainty relations based on information theory for both discrete and continuous distribution functions are briefly reviewed. We extend these results to account for (differential) Rényi entropy and its related entropy power. This allows us to find a new class of information-theoretic uncertainty relations (ITURs). The potency of such uncertainty relations in quantum mechanics is illustrated with a simple two-energy-level model where they outperform both the usual Robertson–Schrödinger uncertainty relation and Shannon entropy based uncertainty relation. In the continuous case the ensuing entropy power uncertainty relations are discussed in the context of heavy tailed wave functions and Schrödinger cat states. Again, improvement over both the Robertson–Schrödinger uncertainty principle and Shannon ITUR is demonstrated in these cases. Further salient issues such as the proof of a generalized entropy power inequality and a geometric picture of information-theoretic uncertainty relations are also discussed.

  8. Item Selection, Evaluation, and Simple Structure in Personality Data

    PubMed Central

    Pettersson, Erik; Turkheimer, Eric

    2010-01-01

    We report an investigation of the genesis and interpretation of simple structure in personality data using two very different self-reported data sets. The first consists of a set of relatively unselected lexical descriptors, whereas the second is based on responses to a carefully constructed instrument. In both data sets, we explore the degree of simple structure by comparing factor solutions to solutions from simulated data constructed to have either strong or weak simple structure. The analysis demonstrates that there is little evidence of simple structure in the unselected items, and a moderate degree among the selected items. In both instruments, however, much of the simple structure that could be observed originated in a strong dimension of positive vs. negative evaluation. PMID:20694168

  9. Net motion of acoustically levitating nano-particles: A theoretical analysis

    NASA Astrophysics Data System (ADS)

    Lippera, Kevin; Dauchot, Olivier; Benzaquen, Michael; Gulliver-LadHyX Collaboration

    2017-11-01

    A particle 2D-trapped in the nodal plane of a standing acoustic wave is prone to acoustic-phoretic motion as soon as its shape breaks polar or chiral symmetry. Such a setup constitutes an ideal system for studying boundaryless 2D collective behavior with purely hydrodynamic long-range interactions. Recent studies have indeed shown that quasi-spherical particles may undergo net propulsion, a feature partially understood theoretically in the particular case of infinite viscous boundary layers. We here extend these theoretical results to any boundary layer thickness, thereby meeting typical experimental conditions. In addition, we propose an explanation for the net spinning of the trapped particles, as observed in experiments.

  10. Brain network of semantic integration in sentence reading: insights from independent component analysis and graph theoretical analysis.

    PubMed

    Ye, Zheng; Doñamayor, Nuria; Münte, Thomas F

    2014-02-01

    A set of cortical and sub-cortical brain structures has been linked with sentence-level semantic processes. However, it remains unclear how these brain regions are organized to support the semantic integration of a word into sentential context. To look into this issue, we conducted a functional magnetic resonance imaging (fMRI) study that required participants to silently read sentences with semantically congruent or incongruent endings and analyzed the network properties of the brain with two approaches, independent component analysis (ICA) and graph theoretical analysis (GTA). The GTA suggested that the whole-brain network is topologically stable across conditions. The ICA revealed a network comprising the supplementary motor area (SMA), left inferior frontal gyrus, left middle temporal gyrus, left caudate nucleus, and left angular gyrus, which was modulated by the incongruity of sentence ending. Furthermore, the GTA specified that the connections between the left SMA and left caudate nucleus as well as that between the left caudate nucleus and right thalamus were stronger in response to incongruent vs. congruent endings. Copyright © 2012 Wiley Periodicals, Inc.

  11. Synthesis, NMR characterization, and a simple application of lithium borotritide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Than, Chit; Morimoto, Hiromi; Andres, H.

    1996-12-13

    LiBH{sub 4} is a powerful and selective reagent for regiospecific reduction reactions. A simple synthesis of LiB{sup 3}H{sub 4} at near theoretical specific radioactivity is reported. The authors have treated Li{sup 3}H synthesized from tritium gas ({sup 3}H{sub 2}, {approximately}98%) with BBr{sub 3} to produce LiB{sup 3}H{sub 4} (specific activity = 4120 GBq/mmol = 110 Ci/mmol. The maximum theoretical specific activity of LiB{sup 3}H{sub 4} is 4252 GBq/mmol = 115.04 Ci/mmol; 1 matom of {sup 3}H = 1063 GBq = 28.76 Ci.) The tritium labeling performance of the reagent was tested by an exemplary reduction of 2-naphthaldehyde to 2-naphthalenemethanol. LiB{sup 3}H{sub 4} and the reduction products were characterized by a combination of {sup 1}H, {sup 3}H, and {sup 11}B NMR techniques, as appropriate. 35 refs., 1 fig.
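
    The quoted activities are internally consistent, as a line of arithmetic shows (four tritium atoms per LiB{sup 3}H{sub 4} molecule):

```python
# Consistency check of the specific-activity figures quoted in the abstract.
per_matom_gbq = 1063.0             # 1 matom of 3H = 1063 GBq = 28.76 Ci
max_specific = 4 * per_matom_gbq   # GBq/mmol: four 3H atoms per molecule
measured = 4120.0                  # reported specific activity, GBq/mmol
fraction_of_theoretical = measured / max_specific  # ~0.97
```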

  12. Theoretical Analysis of the Longitudinal Behavior of an Automatically Controlled Supersonic Interceptor During the Attack Phase

    NASA Technical Reports Server (NTRS)

    Gates, Ordway B., Jr.; Woodling, C. H.

    1959-01-01

    Theoretical analysis of the longitudinal behavior of an automatically controlled supersonic interceptor during the attack phase against a nonmaneuvering target is presented. Control of the interceptor's flight path is obtained by use of a pitch rate command system. Topics discussed include lift and pitching moment characteristics, effects of initial tracking errors, normal-acceleration limiting, limitations on control-surface rate and deflection, and effects of neglecting forward-velocity changes of the interceptor during the attack phase.

  13. Direct power comparisons between simple LOD scores and NPL scores for linkage analysis in complex diseases.

    PubMed

    Abreu, P C; Greenberg, D A; Hodge, S E

    1999-09-01

    Several methods have been proposed for linkage analysis of complex traits with unknown mode of inheritance. These methods include the LOD score maximized over disease models (MMLS) and the "nonparametric" linkage (NPL) statistic. In previous work, we evaluated the increase of type I error when maximizing over two or more genetic models, and we compared the power of MMLS to detect linkage, under a number of complex modes of inheritance, with analysis assuming the true model. In the present study, we compare MMLS and NPL directly. We simulated 100 data sets with 20 families each, using 26 generating models: (1) 4 intermediate models (penetrance of heterozygote between that of the two homozygotes); (2) 6 two-locus additive models; and (3) 16 two-locus heterogeneity models (admixture alpha = 1.0, 0.7, 0.5, and 0.3; alpha = 1.0 replicates simple Mendelian models). For LOD scores, we assumed dominant and recessive inheritance with 50% penetrance. We took the higher of the two maximum LOD scores and subtracted 0.3 to correct for multiple tests (MMLS-C). We compared expected maximum LOD scores and power, using MMLS-C and NPL as well as the true model. Since NPL uses only the affected family members, we also performed an affecteds-only analysis using MMLS-C. The MMLS-C was uniformly more powerful than NPL in most cases we examined, except when linkage information was low, and was close to the results for the true model under locus heterogeneity. We still found better power for the MMLS-C compared with NPL in affecteds-only analysis. The results show that use of two simple modes of inheritance at a fixed penetrance can have more power than NPL when the trait mode of inheritance is complex and when there is heterogeneity in the data set.
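
    The MMLS-C statistic as described reduces to a one-line computation (a sketch of the procedure stated in the abstract; the example LOD scores are made up):

```python
# MMLS-C as described: maximize the LOD score over the dominant and recessive
# models (each at 50% penetrance) and subtract 0.3 to correct for two tests.
def mmls_c(lod_dominant, lod_recessive, correction=0.3):
    return max(lod_dominant, lod_recessive) - correction

score = mmls_c(2.1, 1.4)  # hypothetical LOD scores under the two models
```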

  14. A simple pendulum-based measurement of g with a smartphone light sensor

    NASA Astrophysics Data System (ADS)

    Pili, Unofre; Violanda, Renante

    2018-07-01

    A quick and very accessible method for the measurement of the acceleration due to gravity is presented. The experimental set-up employs a smartphone ambient light sensor as the motion timer for measuring the period of a simple pendulum. This allowed us to obtain an experimental value, 9.72 ± 0.05 m s‑2, for the gravitational acceleration, which is in good agreement with the local theoretical value of 9.78 m s‑2.
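
    The underlying computation is the textbook small-angle pendulum relation; a sketch (the length and period below are made-up illustrative values, not the paper's data):

```python
import math

# Simple pendulum: T = 2*pi*sqrt(L/g)  =>  g = 4*pi^2 * L / T^2
def g_from_pendulum(length_m, period_s):
    return 4.0 * math.pi ** 2 * length_m / period_s ** 2

g = g_from_pendulum(0.50, 1.42)  # e.g. a 0.50 m pendulum timed at 1.42 s
```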

  15. A simple polyol-free synthesis route to Gd2O3 nanoparticles for MRI applications: an experimental and theoretical study

    NASA Astrophysics Data System (ADS)

    Ahrén, Maria; Selegård, Linnéa; Söderlind, Fredrik; Linares, Mathieu; Kauczor, Joanna; Norman, Patrick; Käll, Per-Olov; Uvdal, Kajsa

    2012-08-01

    Chelated gadolinium ions, e.g., Gd-DTPA, are today used clinically as contrast agents for magnetic resonance imaging (MRI). An attractive alternative contrast agent is composed of gadolinium oxide nanoparticles, as they have been shown to provide enhanced contrast and, in principle, more straightforward molecular capping possibilities. In this study, we report a new, simple, and polyol-free way of synthesizing 4-5-nm-sized Gd2O3 nanoparticles at room temperature, with high stability and water solubility. The nanoparticles induce high proton relaxivity compared to Gd-DTPA, showing r1 and r2 values almost as high as those for free Gd3+ ions in water. The Gd2O3 nanoparticles are capped with acetate and carbonate groups, as shown with infrared spectroscopy, near-edge X-ray absorption spectroscopy, X-ray photoelectron spectroscopy and combined thermogravimetric and mass spectroscopy analysis. Interpretation of the infrared spectroscopy data is corroborated by extensive quantum chemical calculations. This nanomaterial is easily prepared and has promising properties to function as a core in a future contrast agent for MRI.

  16. Information-Theoretical Analysis of EEG Microstate Sequences in Python.

    PubMed

    von Wegner, Frederic; Laufs, Helmut

    2018-01-01

    We present an open-source Python package to compute information-theoretical quantities for electroencephalographic data. Electroencephalography (EEG) measures the electrical potential generated by the cerebral cortex and the set of spatial patterns projected by the brain's electrical potential on the scalp surface can be clustered into a set of representative maps called EEG microstates. Microstate time series are obtained by competitively fitting the microstate maps back into the EEG data set, i.e., by substituting the EEG data at a given time with the label of the microstate that has the highest similarity with the actual EEG topography. As microstate sequences consist of non-metric random variables, e.g., the letters A-D, we recently introduced information-theoretical measures to quantify these time series. In wakeful resting state EEG recordings, we found new characteristics of microstate sequences such as periodicities related to EEG frequency bands. The algorithms used are here provided as an open-source package and their use is explained in a tutorial style. The package is self-contained and the programming style is procedural, focusing on code intelligibility and easy portability. Using a sample EEG file, we demonstrate how to perform EEG microstate segmentation using the modified K-means approach, and how to compute and visualize the recently introduced information-theoretical tests and quantities. The time-lagged mutual information function is derived as a discrete symbolic alternative to the autocorrelation function for metric time series and confidence intervals are computed from Markov chain surrogate data. The software package provides an open-source extension to the existing implementations of the microstate transform and is specifically designed to analyze resting state EEG recordings.
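
    The time-lagged mutual information for symbolic sequences can be sketched in a few lines (an illustration of the concept, not the package's own API; the alternating toy sequence is assumed):

```python
import math
from collections import Counter

# Time-lagged mutual information of a symbolic sequence (labels 'A'-'D'):
# MI(lag) = sum over (x, y) of p(x, y) * log[p(x, y) / (p(x) p(y))].
def lagged_mutual_information(seq, lag):
    pairs = list(zip(seq, seq[lag:]))  # (x_t, x_{t+lag}) pairs
    n = len(pairs)
    joint = Counter(pairs)
    left = Counter(x for x, _ in pairs)
    right = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in joint.items():
        # counts cancel: p(x,y)/(p(x)p(y)) = c*n / (c_x * c_y)
        mi += (c / n) * math.log(c * n / (left[x] * right[y]))
    return mi  # in nats

seq = "AB" * 100                        # perfectly alternating toy sequence
mi = lagged_mutual_information(seq, 1)  # ~log(2): lag 1 is fully predictive
```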

  17. Combined theoretical and experimental analysis of processes determining cathode performance in solid oxide fuel cells.

    PubMed

    Kuklja, M M; Kotomin, E A; Merkle, R; Mastrikov, Yu A; Maier, J

    2013-04-21

    Solid oxide fuel cells (SOFC) have been under intensive investigation since the 1980s as these devices open the way for ecologically clean direct conversion of chemical energy into electricity, avoiding the efficiency limitation by Carnot's cycle for thermochemical conversion. However, the practical development of SOFC faces a number of unresolved fundamental problems, in particular concerning the kinetics of the electrode reactions, especially the oxygen reduction reaction. We review recent experimental and theoretical achievements in the current understanding of the cathode performance by exploring and comparing mostly three materials: (La,Sr)MnO3 (LSM), (La,Sr)(Co,Fe)O3 (LSCF) and (Ba,Sr)(Co,Fe)O3 (BSCF). Special attention is paid to a critical evaluation of advantages and disadvantages of BSCF, which shows the best cathode kinetics known so far for oxides. We demonstrate that it is the combined experimental and theoretical analysis of all major elementary steps of the oxygen reduction reaction which allows us to predict the rate determining steps for a given material under specific operational conditions and thus control and improve SOFC performance.

  18. Teenage Pregnancy: A Theoretical Analysis of a Social Problem.

    ERIC Educational Resources Information Center

    Davis, Richard A.

    1989-01-01

    Broadly outlines scope of the problem of teenage pregnancy, some of its more obvious causes, and some of the long-term implications of not truly understanding the nature of the problem. Concludes with theoretical critique of social disorganizational, social definitional, and social organizational approaches to the problem of teenage pregnancy.…

  19. The Challenge of Self-Directed and Self-Regulated Learning in Vocational Education: A Theoretical Analysis and Synthesis of Requirements

    ERIC Educational Resources Information Center

    Jossberger, Helen; Brand-Gruwel, Saskia; Boshuizen, Henny; van de Wiel, Margje

    2010-01-01

    Workplace simulations (WPS), authentic learning environments at school, are increasingly used in vocational education. This article provides a theoretical analysis and synthesis of requirements considering learner skills, characteristics of the learning environment and the role of the teacher that influence good functioning in WPS and foster…

  20. Simple biophysical model of tumor evasion from immune system control

    NASA Astrophysics Data System (ADS)

    D'Onofrio, Alberto; Ciancio, Armando

    2011-09-01

    The competitive nonlinear interplay between a tumor and the host's immune system is not only very complex but is also time-changing. A fundamental aspect of this issue is the ability of the tumor to slowly carry out processes that gradually allow it to become less harmed and less susceptible to recognition by the immune system effectors. Here we propose a simple epigenetic escape mechanism that adaptively depends on the interactions per time unit between cells of the two systems. From a biological point of view, our model is based on the concept that a tumor cell that has survived an encounter with a cytotoxic T-lymphocyte (CTL) has an information gain that it transmits to the other cells of the neoplasm. The consequence of this information increase is a decrease in both the probabilities of being killed and of being recognized by a CTL. We show that the mathematical model of this mechanism is formally equal to an evolutionary imitation game dynamics. Numerical simulations of transitory phases complement the theoretical analysis. Implications of the interplay between the above mechanisms and the delivery of immunotherapies are also illustrated.

  1. Flying high: a theoretical analysis of the factors limiting exercise performance in birds at altitude.

    PubMed

    Scott, Graham R; Milsom, William K

    2006-11-01

    The ability of some bird species to fly at extreme altitude has fascinated comparative respiratory physiologists for decades, yet there is still no consensus about what adaptations enable high altitude flight. Using a theoretical model of O(2) transport, we performed a sensitivity analysis of the factors that might limit exercise performance in birds. We found that the influence of individual physiological traits on oxygen consumption (Vo2) during exercise differed between sea level, moderate altitude, and extreme altitude. At extreme altitude, haemoglobin (Hb) O(2) affinity, total ventilation, and tissue diffusion capacity for O(2) (D(To2)) had the greatest influences on Vo2; increasing these variables should therefore have the greatest adaptive benefit for high altitude flight. There was a beneficial interaction between D(To2) and the P(50) of Hb, such that increasing D(To2) had a greater influence on Vo2 when P(50) was low. Increases in the temperature effect on P(50) could also be beneficial for high flying birds, provided that cold inspired air at extreme altitude causes a substantial difference in temperature between blood in the lungs and in the tissues. Changes in lung diffusion capacity for O(2), cardiac output, blood Hb concentration, the Bohr coefficient, or the Hill coefficient likely have less adaptive significance at high altitude. Our sensitivity analysis provides theoretical suggestions of the adaptations most likely to promote high altitude flight in birds and provides direction for future in vivo studies.

  2. Fast and Simple Discriminative Analysis of Anthocyanins-Containing Berries Using LC/MS Spectral Data.

    PubMed

    Yang, Heejung; Kim, Hyun Woo; Kwon, Yong Soo; Kim, Ho Kyong; Sung, Sang Hyun

    2017-09-01

    Anthocyanins are potent antioxidant agents that protect against many degenerative diseases; however, they are unstable because they are vulnerable to external stimuli including temperature, pH and light. This vulnerability hinders the quality control of anthocyanin-containing berries using classical high-performance liquid chromatography (HPLC) analytical methodologies based on UV or MS chromatograms. To develop an alternative approach for the quality assessment and discrimination of anthocyanin-containing berries, we used MS spectral data acquired in a short analytical time rather than UV or MS chromatograms. Mixtures of anthocyanins were separated from other components in a short gradient time (5 min) due to their higher polarity, and the representative MS spectrum was acquired from the MS chromatogram corresponding to the mixture of anthocyanins. The chemometric data from the representative MS spectra contained reliable information for the identification and relative quantification of anthocyanins in berries with good precision and accuracy. This fast and simple methodology, which consists of a simple sample preparation method and short gradient analysis, could be applied to reliably discriminate the species and geographical origins of different anthocyanin-containing berries. These features make the technique useful for the food industry. Copyright © 2017 John Wiley & Sons, Ltd.

  3. A simple, objective analysis scheme for scatterometer data. [Seasat A satellite observation of wind over ocean

    NASA Technical Reports Server (NTRS)

    Levy, G.; Brown, R. A.

    1986-01-01

    A simple economical objective analysis scheme is devised and tested on real scatterometer data. It is designed to treat dense data such as those of the Seasat A Satellite Scatterometer (SASS) for individual or multiple passes, and preserves subsynoptic scale features. Errors are evaluated with the aid of sampling ('bootstrap') statistical methods. In addition, sensitivity tests have been performed which establish qualitative confidence in calculated fields of divergence and vorticity. The SASS wind algorithm could be improved; however, the data at this point are limited by instrument errors rather than analysis errors. The analysis error is typically negligible in comparison with the instrument error, but amounts to 30 percent of the instrument error in areas of strong wind shear. The scheme is very economical, and thus suitable for large volumes of dense data such as SASS data.
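
    The 'bootstrap' error evaluation mentioned above can be illustrated in a few lines (a generic sketch of the resampling idea, not the authors' implementation):

```python
import random
import statistics

# Bootstrap standard error: resample the data with replacement many times and
# take the spread of the recomputed statistic as its sampling error.
def bootstrap_se(data, stat=statistics.mean, n_boot=2000, seed=0):
    rng = random.Random(seed)
    reps = [stat([rng.choice(data) for _ in data]) for _ in range(n_boot)]
    return statistics.stdev(reps)

se = bootstrap_se(list(range(10)))  # toy data; analytic SE of the mean ~0.91
```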

  4. Nonlocal approach to the analysis of the stress distribution in granular systems. I. Theoretical framework

    NASA Astrophysics Data System (ADS)

    Kenkre, V. M.; Scott, J. E.; Pease, E. A.; Hurd, A. J.

    1998-05-01

    A theoretical framework for the analysis of the stress distribution in granular materials is presented. It makes use of a transformation of the vertical spatial coordinate into a formal time variable and the subsequent study of a generally non-Markoffian, i.e., memory-possessing (nonlocal) propagation equation. Previous treatments are obtained as particular cases corresponding to, respectively, wavelike and diffusive limits of the general evolution. Calculations are presented for stress propagation in bounded and unbounded media. They can be used to obtain desired features such as a prescribed stress distribution within the compact.

  5. Theoretical analysis of rotating two phase detonation in a rocket motor

    NASA Technical Reports Server (NTRS)

    Shen, I.; Adamson, T. C., Jr.

    1973-01-01

    Tangential mode, non-linear wave motion in a liquid propellant rocket engine is studied, using a two phase detonation wave as the reaction model. Because the detonation wave is followed immediately by expansion waves, due to the side relief in the axial direction, it is a Chapman-Jouguet wave. The strength of this wave, which may be characterized by the pressure ratio across the wave, as well as the wave speed and the local wave Mach number, are related to design parameters such as the contraction ratio, chamber speed of sound, chamber diameter, propellant injection density and velocity, and the specific heat ratio of the burned gases. In addition, the distribution of flow properties along the injector face can be computed. Numerical calculations show favorable comparison with experimental findings. Finally, the effects of drop size are discussed and a simple criterion is found to set the lower limit of validity of this strong wave analysis.

  6. Determinants of outcomes in patients with simple gastroschisis.

    PubMed

    Youssef, Fouad; Laberge, Jean-Martin; Puligandla, Pramod; Emil, Sherif

    2017-05-01

    We analyzed the determinants of outcomes in simple gastroschisis (GS) not complicated by intestinal atresia, perforation, or necrosis. All simple GS patients enrolled in a national prospective registry from 2005 to 2013 were studied. Patients below the median for total parenteral nutrition (TPN) duration (26 days) and hospital stay (34 days) were compared to those above. Univariate and multivariate logistic and linear regression analyses were employed using maternal, patient, postnatal, and treatment variables. Of 700 patients with simple GS, representing 76.8% of all GS patients, 690 (98.6%) survived. TPN was used in 352 (51.6%) and 330 (48.4%) patients for ≤26 and >26 days, respectively. Hospital stay for 356 (51.9%) and 330 (48.1%) infants was ≤34 and >34 days, respectively. Univariate analysis revealed significant differences in several patient, treatment, and postnatal factors. On multivariate analysis, prenatal sonographic bowel dilation, older age at closure, necrotizing enterocolitis, longer mechanical ventilation, and central-line associated blood stream infection (CLABSI) were independently associated with longer TPN duration and hospital stay, with CLABSI being the strongest predictor. Prenatal bowel dilation is associated with increased morbidity in simple GS. CLABSI is the strongest predictor of outcomes. Bowel matting is not an independent risk factor. 2c. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Non-suicidal self-injury and life stress: A systematic meta-analysis and theoretical elaboration

    PubMed Central

    Liu, Richard T.; Cheek, Shayna M.; Nestor, Bridget A.

    2016-01-01

    Recent years have seen a considerable growth of interest in the study of life stress and non-suicidal self-injury (NSSI). The current article presents a systematic review of the empirical literature on this association. In addition to providing a comprehensive meta-analysis, the current article includes a qualitative review of the findings for which there were too few cases (i.e., < 3) for reliable approximations of effect sizes. Across the studies included in the meta-analysis, a significant but modest relation between life stress and NSSI was found (pooled OR = 1.81 [95% CI = 1.49–2.21]). After an adjustment was made for publication bias, the estimated effect size was smaller but still significant (pooled OR = 1.33 [95% CI = 1.08–1.63]). This relation was moderated by sample type, NSSI measure type, and length of period covered by the NSSI measure. The empirical literature is characterized by several methodological limitations, particularly the frequent use of cross-sectional analyses involving temporal overlap between assessments of life stress and NSSI, leaving unclear the precise nature of the relation between these two phenomena (e.g., whether life stress may be a cause, concomitant, or consequence of NSSI). Theoretically informed research utilizing multi-wave designs, assessing life stress and NSSI over relatively brief intervals, and featuring interview-based assessments of these constructs holds promise for advancing our understanding of their relation. The current review concludes with a theoretical elaboration of the association between NSSI and life stress, with the aim of providing a conceptual framework to guide future study in this area. PMID:27267345
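
    A pooled OR of the sort reported above is typically obtained by inverse-variance weighting of log odds ratios; below is a fixed-effect sketch (the method and the single-study example values are assumptions for illustration, not the review's actual procedure or data):

```python
import math

# Fixed-effect inverse-variance pooling of odds ratios. Each study supplies
# (OR, lower 95% CI, upper 95% CI); the SE of log(OR) is recovered from the
# CI width, and studies are weighted by 1/SE^2.
def pooled_or(studies):
    num = den = 0.0
    for or_, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se ** 2
        num += w * math.log(or_)
        den += w
    log_pooled = num / den
    se_pooled = math.sqrt(1.0 / den)
    return (math.exp(log_pooled),
            math.exp(log_pooled - 1.96 * se_pooled),
            math.exp(log_pooled + 1.96 * se_pooled))

# Sanity check: pooling a single study returns that study (log-symmetric CI).
p, lo, hi = pooled_or([(2.0, 1.0, 4.0)])
```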

  8. Infantilism: Theoretical Construct and Operationalization

    ERIC Educational Resources Information Center

    Sabelnikova, Y. V.; Khmeleva, N. L.

    2018-01-01

    The aim of this article is to define and operationalize the construct of infantilism. The methods of theoretical research involve analysis and synthesis. Age and content criteria are analyzed for childhood and adulthood. Infantile traits in an adult are described. Results: The characteristics of adult infantilism in the modern world are defined,…

  9. Macroscopic Spatial Complexity of the Game of Life Cellular Automaton: A Simple Data Analysis

    NASA Astrophysics Data System (ADS)

    Hernández-Montoya, A. R.; Coronel-Brizio, H. F.; Rodríguez-Achach, M. E.

    In this chapter we present a simple data analysis of an ensemble of 20 time series, generated by averaging the spatial positions of the living cells for each state of the Game of Life Cellular Automaton (GoL). We show that at the macroscopic level described by these time series, the complexity properties of GoL are also present, and the following emergent properties, typical of data extracted from complex systems such as financial or economic ones, arise: variations of the generated time series follow an asymptotic power-law distribution; large fluctuations tend to be followed by large fluctuations, and small fluctuations by small ones; and linear correlations decay fast, whereas the correlations of the absolute variations exhibit long-range memory. Finally, a Detrended Fluctuation Analysis (DFA) of the generated time series indicates that the GoL spatial macro-states described by the time series are neither completely ordered nor random, in a measurable and very interesting way.
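
    DFA itself is straightforward to reproduce. A minimal sketch (not the authors' code): integrate the series, detrend it linearly in windows of increasing size, and read the scaling exponent off the log-log slope of the fluctuation function; uncorrelated noise gives α ≈ 0.5, while long-range memory pushes α higher.

```python
import numpy as np

def dfa_exponent(x, scales):
    """Detrended Fluctuation Analysis: returns the exponent alpha from
    F(s) ~ s^alpha, where F(s) is the RMS of the linearly detrended
    integrated profile over non-overlapping windows of size s."""
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    F = []
    for s in scales:
        n = len(y) // s
        windows = y[: n * s].reshape(n, s)
        t = np.arange(s)
        res = []
        for w in windows:
            trend = np.polyval(np.polyfit(t, w, 1), t)  # linear fit per window
            res.append(np.mean((w - trend) ** 2))
        F.append(np.sqrt(np.mean(res)))
    # slope of log F(s) versus log s
    return np.polyfit(np.log(scales), np.log(F), 1)[0]
```

    Applied to white noise this recovers α near 0.5; the interesting regime reported for the GoL series is an exponent between the ordered and random extremes.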

  10. Validation of the Simple Shoulder Test in a Portuguese-Brazilian population. Is the latent variable structure and validation of the Simple Shoulder Test Stable across cultures?

    PubMed

    Neto, Jose Osni Bruggemann; Gesser, Rafael Lehmkuhl; Steglich, Valdir; Bonilauri Ferreira, Ana Paula; Gandhi, Mihir; Vissoci, João Ricardo Nickenig; Pietrobon, Ricardo

    2013-01-01

    The validation of widely used scales facilitates the comparison across international patient samples. The objective of this study was to translate, culturally adapt and validate the Simple Shoulder Test into Brazilian Portuguese, and to test the stability of its factor structure across cultures. The Simple Shoulder Test was translated from English into Brazilian Portuguese, translated back into English, and evaluated for accuracy by an expert committee. It was then administered to 100 patients with shoulder conditions. Psychometric properties were analyzed including factor analysis, internal reliability, test-retest reliability at seven days, and construct validity in relation to the Short Form 36 health survey (SF-36). Factor analysis demonstrated a three-factor solution. Cronbach's alpha was 0.82. Test-retest reliability as measured by the intra-class correlation coefficient (ICC) was 0.84. Associations were observed in the hypothesized direction with all subscales of the SF-36 questionnaire. The Simple Shoulder Test translation and cultural adaptation to Brazilian Portuguese demonstrated adequate factor structure, internal reliability, and validity, ultimately allowing for its use in the comparison with international patient samples.

  11. Validation of the Simple Shoulder Test in a Portuguese-Brazilian Population. Is the Latent Variable Structure and Validation of the Simple Shoulder Test Stable across Cultures?

    PubMed Central

    Neto, Jose Osni Bruggemann; Gesser, Rafael Lehmkuhl; Steglich, Valdir; Bonilauri Ferreira, Ana Paula; Gandhi, Mihir; Vissoci, João Ricardo Nickenig; Pietrobon, Ricardo

    2013-01-01

    Background The validation of widely used scales facilitates the comparison across international patient samples. Objective The objective of this study was to translate, culturally adapt and validate the Simple Shoulder Test into Brazilian Portuguese, and to test the stability of its factor structure across cultures. Methods The Simple Shoulder Test was translated from English into Brazilian Portuguese, translated back into English, and evaluated for accuracy by an expert committee. It was then administered to 100 patients with shoulder conditions. Psychometric properties were analyzed including factor analysis, internal reliability, test-retest reliability at seven days, and construct validity in relation to the Short Form 36 health survey (SF-36). Results Factor analysis demonstrated a three-factor solution. Cronbach's alpha was 0.82. Test-retest reliability as measured by the intra-class correlation coefficient (ICC) was 0.84. Associations were observed in the hypothesized direction with all subscales of the SF-36 questionnaire. Conclusion The Simple Shoulder Test translation and cultural adaptation to Brazilian Portuguese demonstrated adequate factor structure, internal reliability, and validity, ultimately allowing for its use in the comparison with international patient samples. PMID:23675436
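
    The internal-reliability figure reported here (Cronbach's α = 0.82) is simple to compute from an item-response matrix. A hedged sketch of the standard formula, on made-up data rather than the study's responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, k_items) response matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)
```

    Perfectly consistent items with equal variances give α = 1; values around 0.8, as here, indicate good internal consistency.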

  12. Self-Assembled Magnetic Surface Swimmers: Theoretical Model

    NASA Astrophysics Data System (ADS)

    Aranson, Igor; Belkin, Maxim; Snezhko, Alexey

    2009-03-01

    The mechanisms of self-propulsion of living microorganisms are a fascinating phenomenon attracting enormous attention in the physics community. A new type of self-assembled micro-swimmer, the magnetic snake, is an excellent tool to model locomotion in a simple table-top experiment. The snakes self-assemble from a dispersion of magnetic microparticles suspended at the liquid-air interface and subjected to an alternating magnetic field. The formation and dynamics of these swimmers are captured in the framework of a theoretical model coupling a paradigm equation for the amplitude of the surface waves, a conservation law for the density of particles, and the Navier-Stokes equation for the hydrodynamic flows. The results of the continuum modeling are supported by hybrid molecular dynamics simulations of magnetic particles floating on the surface of the fluid.

  13. Cognitive Constraints on the Simple View of Reading: A Longitudinal Study in Children with Intellectual Disabilities

    ERIC Educational Resources Information Center

    van Wingerden, Evelien; Segers, Eliane; van Balkom, Hans; Verhoeven, Ludo

    2018-01-01

    The present article aimed to explore how the development of reading comprehension is affected when its cognitive basis is compromised. The simple view of reading was adopted as the theoretical framework. The study followed 76 children with mild intellectual disabilities (average IQ = 60.38, age 121 months) across a period of 3 years. The children…

  14. Direct Simple Shear Test Data Analysis using Jupyter Notebooks on DesignSafe-CI

    NASA Astrophysics Data System (ADS)

    Eslami, M.; Esteva, M.; Brandenberg, S. J.

    2017-12-01

    Due to the large number of files and their complex structure, managing data generated during natural hazards experiments requires scalable and specialized tools. DesignSafe-CI (https://www.designsafe-ci.org/) is a web-based research platform that provides computational tools to analyze, curate, and publish critical data for natural hazards research, making it understandable and reusable. We present a use case from a series of Direct Simple Shear (DSS) experiments in which we used DS-CI to post-process, visualize, publish, and enable further analysis of the data. Current practice in geotechnical design against earthquakes relies on the soil's plasticity index (PI) to assess liquefaction susceptibility and cyclic softening triggering procedures, although quite divergent recommendations on the required level of plasticity can be found in the literature for these purposes. A series of cyclic and monotonic direct simple shear experiments was conducted on three low-plasticity fine-grained mixtures at the same plasticity index to examine the effectiveness of the PI in characterizing these types of materials. Results revealed that the plasticity index is an insufficient indicator of the cyclic behavior of low-plasticity fine-grained soils, and corrections for pore fluid chemistry and clay mineralogy may be necessary for future liquefaction susceptibility and cyclic softening assessment procedures. Each monotonic or cyclic experiment contains two stages, consolidation and shear, which include time series of load, displacement, and corresponding stresses and strains, as well as equivalent excess pore-water pressure. Using the DS-CI curation pipeline we categorized the data to display and describe the experiment's structure and the files corresponding to each stage of the experiments. Two separate notebooks in Python 3 were created using the Jupyter application available in DS-CI.
A data plotter aids visualizing the experimental data in relation to the sensor from which it was

  15. Theoretical Analysis of the Mechanism of Fracture Network Propagation with Stimulated Reservoir Volume (SRV) Fracturing in Tight Oil Reservoirs.

    PubMed

    Su, Yuliang; Ren, Long; Meng, Fankun; Xu, Chen; Wang, Wendong

    2015-01-01

    Stimulated reservoir volume (SRV) fracturing in tight oil reservoirs often induces complex fracture-network growth, which has a fundamentally different formation mechanism from traditional planar bi-winged fracturing. To reveal the mechanism of fracture network propagation, this paper employs a modified displacement discontinuity method (DDM), mechanical mechanism analysis, and initiation and propagation criteria to derive a theoretical model of fracture network propagation. A reasonable solution of the theoretical model for a tight oil reservoir is obtained and verified by a numerical discrete method. Through theoretical calculation and computer programming, the variation rules of the formation stress fields, the hydraulic fracture propagation patterns (FPP), and the branch fracture propagation angles and pressures are analyzed. The results show that during fracture propagation the initial orientation of the principal stress deflects, and the stress fields at the fracture tips change dramatically in the region surrounding the fracture. Whether the ideal fracture network can be produced depends on the geological conditions and on the engineering treatments. This study has both theoretical significance and practical application value, contributing to a better understanding of fracture network propagation mechanisms in unconventional oil/gas reservoirs and to improving the science and design efficiency of reservoir fracturing.

  16. Theoretical Analysis of the Mechanism of Fracture Network Propagation with Stimulated Reservoir Volume (SRV) Fracturing in Tight Oil Reservoirs

    PubMed Central

    Su, Yuliang; Ren, Long; Meng, Fankun; Xu, Chen; Wang, Wendong

    2015-01-01

    Stimulated reservoir volume (SRV) fracturing in tight oil reservoirs often induces complex fracture-network growth, which has a fundamentally different formation mechanism from traditional planar bi-winged fracturing. To reveal the mechanism of fracture network propagation, this paper employs a modified displacement discontinuity method (DDM), mechanical mechanism analysis, and initiation and propagation criteria to derive a theoretical model of fracture network propagation. A reasonable solution of the theoretical model for a tight oil reservoir is obtained and verified by a numerical discrete method. Through theoretical calculation and computer programming, the variation rules of the formation stress fields, the hydraulic fracture propagation patterns (FPP), and the branch fracture propagation angles and pressures are analyzed. The results show that during fracture propagation the initial orientation of the principal stress deflects, and the stress fields at the fracture tips change dramatically in the region surrounding the fracture. Whether the ideal fracture network can be produced depends on the geological conditions and on the engineering treatments. This study has both theoretical significance and practical application value, contributing to a better understanding of fracture network propagation mechanisms in unconventional oil/gas reservoirs and to improving the science and design efficiency of reservoir fracturing. PMID:25966285

  17. Layered Composite Analysis Capability

    NASA Technical Reports Server (NTRS)

    Narayanaswami, R.; Cole, J. G.

    1985-01-01

    Laminated composite material construction is gaining popularity within industry as an attractive alternative to metallic designs where high strength at reduced weights is of prime consideration. This has necessitated the development of an effective analysis capability for the static, dynamic and buckling analyses of structural components constructed of layered composites. Theoretical and user aspects of layered composite analysis and its incorporation into CSA/NASTRAN are discussed. The availability of stress and strain based failure criteria is described which aids the user in reviewing the voluminous output normally produced in such analyses. Simple strategies to obtain minimum weight designs of composite structures are discussed. Several example problems are presented to demonstrate the accuracy and user convenient features of the capability.

  18. Theoretical analysis for the specific heat and thermal parameters of solid C60

    NASA Astrophysics Data System (ADS)

    Soto, J. R.; Calles, A.; Castro, J. J.

    1997-08-01

    We present the results of a theoretical analysis for the thermal parameters and the phonon contribution to the specific heat in solid C60. The phonon contribution to the specific heat is calculated through the solution of the corresponding dynamical matrix for different points in the Brillouin zone and the construction of the partial and generalized phonon densities of states. The force constants are obtained from a first-principles calculation, using an SCF Hartree-Fock wave function from the Gaussian 92 program. The thermal parameters reported are the effective temperatures and vibrational amplitudes as a function of temperature. Using this model we present a parametrization scheme in order to reproduce the general behavior of the experimental specific heat for these materials.
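
    In the harmonic approximation, the phonon contribution to the specific heat follows from the mode frequencies alone. A minimal sketch of the standard Bose-Einstein expression evaluated over a discretized phonon density of states (illustrative frequencies, not the paper's force-constant results):

```python
import numpy as np

KB = 1.380649e-23     # Boltzmann constant, J/K
HBAR = 1.0545718e-34  # reduced Planck constant, J*s

def heat_capacity(freqs_rad, T):
    """Harmonic phonon specific heat from a set of mode frequencies
    (rad/s): C = kB * sum over modes of x^2 e^x / (e^x - 1)^2,
    with x = hbar*w / (kB*T)."""
    x = HBAR * np.asarray(freqs_rad) / (KB * T)
    ex = np.exp(x)
    return KB * np.sum(x ** 2 * ex / (ex - 1.0) ** 2)
```

    At high temperature each mode contributes kB (the Dulong-Petit limit); at low temperature the high-frequency intramolecular modes of C60 freeze out and only the soft lattice modes contribute.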

  19. Five ways of being "theoretical": applications to provider-patient communication research.

    PubMed

    Hall, Judith A; Schmid Mast, Marianne

    2009-03-01

    Analyzes the term "theoretical" as it applies to the area of provider-patient communication research, in order to understand better at a conceptual level what the term may mean for authors and critics. Based on literature on provider-patient communication. Offers, and discusses, five definitions of the term "theoretical" as it applies to empirical research and its exposition: (1) grounding, (2) referencing, (3) design and analysis, (4) interpretation, and (5) impact. Each of these definitions embodies a different standard for evaluating the theoretical aspects of research. Although it is often said that research on provider-patient communication is not "theoretical" enough, the term is ambiguous and often applied vaguely. A multidimensional analysis reveals that there are several distinct ways in which empirical research can be strong or weak theoretically. Researchers, educators, editors, and reviewers could use the "Five Ways" framework to appraise the theory-relevant strengths and weaknesses of empirical research and its exposition.

  20. Theoretical predictor for candidate structure assignment from IMS data of biomolecule-related conformational space.

    PubMed

    Schenk, Emily R; Nau, Frederic; Fernandez-Lima, Francisco

    2015-06-01

    The ability to correlate experimental ion mobility data with candidate structures from theoretical modeling provides a powerful analytical and structural tool for the characterization of biomolecules. In the present paper, a theoretical workflow is described to generate and assign candidate structures for experimental trapped ion mobility and H/D exchange (HDX-TIMS-MS) data following molecular dynamics simulations and statistical filtering. The applicability of the theoretical predictor is illustrated for a peptide and protein example with multiple conformations and kinetic intermediates. The described methodology yields a low computational cost and a simple workflow by incorporating statistical filtering and molecular dynamics simulations. The workflow can be adapted to different IMS scenarios and CCS calculators for a more accurate description of the IMS experimental conditions. For the case of the HDX-TIMS-MS experiments, molecular dynamics in the "TIMS box" accounts for a better sampling of the molecular intermediates and local energy minima.

  1. Simple Kidney Cysts

    MedlinePlus

    What are simple kidney cysts? Simple kidney cysts are abnormal, fluid-filled sacs that form in the kidneys. What are the kidneys and what do they do? The kidneys are …

  2. Experimental temperature analysis of simple & hybrid earth air tunnel heat exchanger in series connection at Bikaner Rajasthan India

    NASA Astrophysics Data System (ADS)

    Jakhar, O. P.; Sharma, Chandra Shekhar; Kukana, Rajendra

    2018-05-01

    The Earth Air Tunnel Heat Exchanger System is a passive air-conditioning system which has no side effect on the earth's climate and produces cooling and heating comfortable to the human body. It produces a heating effect in winter and a cooling effect in summer with minimum energy consumption as compared to other air-conditioning devices. In this research paper a temperature analysis was carried out experimentally on two Earth Air Tunnel Heat Exchanger systems for summer cooling. Both systems were installed at the Mechanical Engineering Department, Government Engineering College Bikaner, Rajasthan, India. The experimental results conclude that the average air temperature difference was 11.00 °C and 16.27 °C for the Simple and the Hybrid Earth Air Tunnel Heat Exchanger in Series Connection System, respectively. The maximum air temperature difference was 18.10 °C and 23.70 °C, and the minimum air temperature difference was 5.20 °C and 11.70 °C, for the Simple and the Hybrid systems, respectively.

  3. Experimental determination of gap flow-conditioned forces at turbine stages and their effect on the running stability of simple rotors. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Wohlrab, R.

    1983-01-01

    Instabilities in turbine operation can be caused by forces which are produced in connection with motions involving the oil film in the bearings. An experimental investigation of the characteristics of such forces in three typical steam turbine stages is conducted, taking into account the effect of various parameters. Supplementary kinetic tests are carried out to obtain an estimate of the flow forces which are proportional to the velocity. The measurements are based on the theoretical study of the damping characteristics of a vibrational model. A computational analysis of the effect of the measured fluid forces on the stability characteristics of a simple rotor model is also conducted.

  4. A simple model for cell type recognition using 2D-correlation analysis of FTIR images from breast cancer tissue

    NASA Astrophysics Data System (ADS)

    Ali, Mohamed H.; Rakib, Fazle; Al-Saad, Khalid; Al-Saady, Rafif; Lyng, Fiona M.; Goormaghtigh, Erik

    2018-07-01

    Breast cancer is the second most common cancer after lung cancer. So far, in clinical practice, most cancer parameters originating from histopathology rely on the visualization by a pathologist of microscopic structures observed in stained tissue sections, including immunohistochemistry markers. Fourier transform infrared spectroscopy (FTIR) spectroscopy provides a biochemical fingerprint of a biopsy sample and, together with advanced data analysis techniques, can accurately classify cell types. Yet, one of the challenges when dealing with FTIR imaging is the slow recording of the data. One cm2 tissue section requires several hours of image recording. We show in the present paper that 2D covariance analysis singles out only a few wavenumbers where both variance and covariance are large. Simple models could be built using 4 wavenumbers to identify the 4 main cell types present in breast cancer tissue sections. Decision trees provide particularly simple models to reach discrimination between the 4 cell types. The robustness of these simple decision-tree models were challenged with FTIR spectral data obtained using different recording conditions. One test set was recorded by transflection on tissue sections in the presence of paraffin while the training set was obtained on dewaxed tissue sections by transmission. Furthermore, the test set was collected with a different brand of FTIR microscope and a different pixel size. Despite the different recording conditions, separating extracellular matrix (ECM) from carcinoma spectra was 100% successful, underlying the robustness of this univariate model and the utility of covariance analysis for revealing efficient wavenumbers. We suggest that 2D covariance maps using the full spectral range could be most useful to select the interesting wavenumbers and achieve very fast data acquisition on quantum cascade laser infrared imaging microscopes.
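
    A few-wavenumber decision tree of the kind described reduces to a handful of threshold rules. A hand-rolled two-rule sketch in that spirit; the band positions (1654 and 1240 cm⁻¹, amide-I and phosphate regions) and the thresholds are illustrative assumptions, not values from the study:

```python
# Classify a spectrum from its absorbance at two discriminating bands.
# Hypothetical thresholds -- a decision-stump sketch, not the paper's model.
def classify_spectrum(a1654, a1240):
    if a1654 > 0.6:        # strong amide-I-region signal -> ECM-like
        return "ECM"
    if a1240 > 0.3:        # stronger phosphate-region signal -> carcinoma-like
        return "carcinoma"
    return "lymphocyte"
```

    The appeal of such models is exactly what the abstract argues: once covariance analysis has picked the informative wavenumbers, classification needs only a few absorbance readings per pixel.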

  5. Simple equations to simulate closed-loop recycling liquid-liquid chromatography: Ideal and non-ideal recycling models.

    PubMed

    Kostanyan, Artak E

    2015-12-04

    The ideal (the column outlet is directly connected to the column inlet) and non-ideal (includes the effects of extra-column dispersion) recycling equilibrium-cell models are used to simulate closed-loop recycling counter-current chromatography (CLR CCC). Simple chromatogram equations for the individual cycles and equations describing the transport and broadening of single peaks and complex chromatograms inside the recycling closed-loop column for ideal and non-ideal recycling models are presented. The extra-column dispersion is included in the theoretical analysis, by replacing the recycling system (connecting lines, pump and valving) by a cascade of Nec perfectly mixed cells. To evaluate extra-column contribution to band broadening, two limiting regimes of recycling are analyzed: plug-flow, Nec→∞, and maximum extra-column dispersion, Nec=1. Comparative analysis of ideal and non-ideal models has shown that when the volume of the recycling system is less than one percent of the column volume, the influence of the extra-column processes on the CLR CCC separation may be neglected. Copyright © 2015 Elsevier B.V. All rights reserved.
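
    The equilibrium-cell picture is easy to simulate directly. A minimal sketch (assuming unit residence time per cell, not Kostanyan's closed-form chromatogram equations): the column is a chain of perfectly mixed cells closed into a ring through a few extra-column cells, and a pulse injected into the first cell broadens as it recirculates.

```python
import numpy as np

def closed_loop_profile(n_col=30, n_ec=3, t_end=5.0, dt=0.001):
    """Closed-loop cascade of perfectly mixed cells: n_col column cells plus
    n_ec extra-column cells joined into a ring. Each cell obeys
    dc_i/dt = (c_{i-1} - c_i) / tau with unit residence time tau = 1."""
    n = n_col + n_ec
    c = np.zeros(n)
    c[0] = 1.0                                # unit pulse into the first cell
    for _ in range(int(t_end / dt)):
        c = c + dt * (np.roll(c, 1) - c)      # explicit Euler step on the ring
    return c
```

    Mass is conserved exactly by the ring coupling, and the peak broadens on every pass; shrinking the extra-column segment relative to the column reproduces the paper's conclusion that a recycling volume below about one percent of the column volume contributes negligible extra dispersion.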

  6. Theoretical and experimental characterization of the DUal-BAse transistor (DUBAT)

    NASA Astrophysics Data System (ADS)

    Wu, Chung-Yu; Wu, Ching-Yuan

    1980-11-01

    A new A-type integrated voltage-controlled differential negative-resistance device, which uses an extra effective base region to form a lateral pnp (npn) bipolar transistor beside the original base region of a vertical npn (pnp) bipolar junction transistor and is therefore called the DUal-BAse Transistor (DUBAT), is studied both experimentally and theoretically. The DUBAT has three terminals and is fully compatible with existing bipolar integrated circuit technologies. Based upon the equivalent circuit of the DUBAT, a simple first-order analytical theory is developed, and important device parameters, such as the I-V characteristic, the differential negative resistance, and the peak and valley points, are characterized. One of the proposed integrated structures of the DUBAT, which is similar in structure to I²L but with similarly high density and a normally operated vertical npn transistor, has been successfully fabricated and studied. Comparisons between the experimental data and the theoretical analyses show satisfactory agreement.

  7. A theoretical analysis of the effect of thrust-related turbulence distortion on helicopter rotor low-frequency broadband noise

    NASA Technical Reports Server (NTRS)

    Williams, M.; Harris, W. L.

    1984-01-01

    The purpose of the analysis is to determine if inflow turbulence distortion may be a cause of experimentally observed changes in sound pressure levels when the rotor mean loading is varied. The effect of helicopter rotor mean aerodynamics on inflow turbulence is studied within the framework of the turbulence rapid distortion theory developed by Pearson (1959) and Deissler (1961). The distorted inflow turbulence is related to the resultant noise by conventional broadband noise theory. A comparison of the distortion model with experimental data shows that the theoretical model is unable to totally explain observed increases in model rotor sound pressures with increased rotor mean thrust. Comparison of full scale rotor data with the theoretical model shows that a shear-type distortion may explain decreasing sound pressure levels with increasing thrust.

  8. relaxGUI: a new software for fast and simple NMR relaxation data analysis and calculation of ps-ns and μs motion of proteins.

    PubMed

    Bieri, Michael; d'Auvergne, Edward J; Gooley, Paul R

    2011-06-01

    Investigation of protein dynamics on the ps-ns and μs-ms timeframes provides detailed insight into the mechanisms of enzymes and the binding properties of proteins. Nuclear magnetic resonance (NMR) is an excellent tool for studying protein dynamics at atomic resolution. Analysis of relaxation data using model-free analysis can be a tedious and time consuming process, which requires good knowledge of scripting procedures. The software relaxGUI was developed for fast and simple model-free analysis and is fully integrated into the software package relax. It is written in Python and uses wxPython to build the graphical user interface (GUI) for maximum performance and multi-platform use. This software allows the analysis of NMR relaxation data with ease and the generation of publication quality graphs as well as color coded images of molecular structures. The interface is designed for simple data analysis and management. The software was tested and validated against the command line version of relax.

  9. Genetic diversity and population structure analysis in Perilla frutescens from Northern areas of China based on simple sequence repeats.

    PubMed

    Ma, S J; Sa, K J; Hong, T K; Lee, J K

    2017-09-21

    In this study, 21 simple sequence repeat (SSR) markers were used to evaluate the genetic diversity and population structure among 77 Perilla accessions from high-latitude and middle-latitude areas of China. Ninety-five alleles were identified, with an average of 4.52 alleles per locus. The average polymorphic information content (PIC) and genetic diversity values were 0.346 and 0.372, respectively. The level of genetic diversity and the PIC value for cultivated accessions of Perilla frutescens var. frutescens from middle-latitude areas were higher than for accessions from high-latitude areas. Based on the dendrogram of the unweighted pair group method with arithmetic mean (UPGMA), all accessions were classified into four major groups with a genetic similarity of 46%. All accessions of the cultivated var. frutescens were discriminated from the cultivated P. frutescens var. crispa. Furthermore, most accessions of the cultivated var. frutescens collected in high-latitude and middle-latitude areas were distinguished depending on their geographical location. However, the geographical locations of several accessions of the cultivated var. frutescens have no relation to their positions in the UPGMA dendrogram and population structure. This result implies that the diffusion of accessions of the cultivated Perilla crop in the northern areas of China might have occurred through multiple routes. In the population structure analysis, the 77 Perilla accessions were divided into Group I, Group II, and an admixed group based on a membership probability threshold of 0.8. Finally, the findings in this study provide useful theoretical knowledge for further study of the population structure and genetic diversity of Perilla and will benefit Perilla crop breeding and germplasm conservation.
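
    The per-locus PIC values reported here follow from allele frequencies via the standard Botstein formulation. A minimal sketch (with illustrative frequencies, not the study's allele counts):

```python
from itertools import combinations

def pic(freqs):
    """Polymorphic information content of a locus from its allele
    frequencies: PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2."""
    heterozygosity = 1.0 - sum(p ** 2 for p in freqs)
    correction = sum(2 * pi ** 2 * pj ** 2
                     for pi, pj in combinations(freqs, 2))
    return heterozygosity - correction
```

    A biallelic locus with equal frequencies gives PIC = 0.375, and a monomorphic locus gives 0; averages around 0.35, as reported, indicate moderately informative markers.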

  10. Optical activity and electronic absorption spectra of some simple nucleosides related to cytidine and uridine: all-valence-shell molecular orbital calculations.

    PubMed Central

    Miles, D W; Redington, P K; Miles, D L; Eyring, H

    1981-01-01

    The circular dichroism and electronic absorption of three simple model systems for cytidine and uridine have been measured to 190 nm. The molecular spectral properties (excitation wavelengths, oscillator strengths, rotational strengths, and polarization directions) and electronic transitional patterns were investigated by using wave functions of the entire nucleoside with the goal of establishing the reliability of the theoretical method. The computed electronic absorption quantities were shown to be in satisfactory agreement with experimental data. It was found that the computed optical rotatory strengths of the B2u and E1u electronic transitions and lowest observed n-pi transition are in good agreement with experimental values. Electronic transitions were characterized by their electronic transitional patterns derived from population analysis of the transition density matrix. The theoretical rotational strengths associated with the B2u and E1u transitions stabilize after the use of just a few singly excited configurations in the configuration interaction basis and, hypothetically, are more reliable as indicators of conformation in pyrimidine nucleosides related to cytidine. PMID:6950393

  11. Simple projects guidebook : federal-aid procedure for simple projects

    DOT National Transportation Integrated Search

    2002-06-01

    Experience has shown that a simple project generally 1) does not have any right-of-way involvement and 2) has a Programmatic Categorical Exclusion or Categorical Exclusion environmental determination. Page 7 outlines the definition of simple projects...

  12. A Thematic Analysis of Theoretical Models for Translational Science in Nursing: Mapping the Field

    PubMed Central

    Mitchell, Sandra A.; Fisher, Cheryl A.; Hastings, Clare E.; Silverman, Leanne B.; Wallen, Gwenyth R.

    2010-01-01

    Background The quantity and diversity of conceptual models in translational science may complicate rather than advance the use of theory. Purpose This paper offers a comparative thematic analysis of the models available to inform knowledge development, transfer, and utilization. Method Literature searches identified 47 models for knowledge translation. Four thematic areas emerged: (1) evidence-based practice and knowledge transformation processes; (2) strategic change to promote adoption of new knowledge; (3) knowledge exchange and synthesis for application and inquiry; (4) designing and interpreting dissemination research. Discussion This analysis distinguishes the contributions made by leaders and researchers at each phase in the process of discovery, development, and service delivery. It also informs the selection of models to guide activities in knowledge translation. Conclusions A flexible theoretical stance is essential to simultaneously develop new knowledge and accelerate the translation of that knowledge into practice behaviors and programs of care that support optimal patient outcomes. PMID:21074646

  13. A Simple Mechanism for Cooperation in the Well-Mixed Prisoner's Dilemma Game

    NASA Astrophysics Data System (ADS)

    Perc, Matjaž

    2008-11-01

    I show that the addition of Gaussian noise to the payoffs is able to stabilize cooperation in well-mixed populations, where individuals play the prisoner's dilemma game. The impact of stochasticity on the evolutionary dynamics can be expressed deterministically via a simple small-noise expansion of multiplicative noisy terms. In particular, cooperation emerges as a stable noise-induced steady state in the replicator dynamics. Due to the generality of the employed theoretical framework, presented results should prove valuable in various scientific disciplines, ranging from economy to ecology.
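    The mechanism summarized above, Gaussian noise added to the payoffs of a well-mixed prisoner's dilemma evolving under replicator dynamics, can be sketched numerically. The following is a minimal illustrative sketch, not Perc's exact formulation; the payoff values, noise strength and step size are hypothetical choices.

```python
import random

# Illustrative sketch (not the paper's exact model): replicator dynamics
# for the prisoner's dilemma with Gaussian noise added to the payoffs.
# Payoff entries (temptation T, reward R, punishment P, sucker S) are
# hypothetical values satisfying T > R > P > S.
T, R, P, S = 1.5, 1.0, 0.1, 0.0

def replicator_step(x, dt=0.01, sigma=0.5):
    """Advance the cooperator fraction x by one Euler step.

    Gaussian noise of strength sigma perturbs each strategy's payoff,
    mimicking stochastic payoff variations; sigma = 0 recovers the
    deterministic dynamics, in which defection dominates.
    """
    noise = lambda: random.gauss(0.0, sigma)
    fc = R * x + S * (1 - x) + noise()      # cooperator fitness
    fd = T * x + P * (1 - x) + noise()      # defector fitness
    fbar = x * fc + (1 - x) * fd            # population mean fitness
    x += dt * x * (fc - fbar)               # replicator update
    return min(max(x, 0.0), 1.0)            # clamp to [0, 1]

random.seed(1)
x = 0.5
for _ in range(10000):
    x = replicator_step(x)
print(round(x, 3))
```

    The clamped update keeps the cooperator fraction a valid frequency; the pure states x = 0 and x = 1 remain fixed points regardless of the noise draw.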

  14. Theoretical Analysis and Design of Ultrathin Broadband Optically Transparent Microwave Metamaterial Absorbers

    PubMed Central

    Deng, Ruixiang; Li, Meiling; Muneer, Badar; Zhu, Qi; Shi, Zaiying; Song, Lixin; Zhang, Tao

    2018-01-01

    The Optically Transparent Microwave Metamaterial Absorber (OTMMA) is of significant use in both civil and military fields. In this paper, an equivalent circuit model is adopted as a springboard to guide the design of OTMMA. The physical model and absorption mechanisms of an ideal lightweight ultrathin OTMMA are comprehensively researched. Both the theoretical value of the equivalent resistance and the quantitative relation between the equivalent inductance and equivalent capacitance are derived for design. Frequency-dependent characteristics of the theoretical equivalent resistance are also investigated. Based on this theoretical work, an effective and controllable design approach is proposed. To validate the approach, a wideband OTMMA is designed, fabricated, analyzed and tested. The results reveal that absorption above 90% can be achieved across the whole 6-18 GHz band. The fabricated OTMMA also has an optical transparency of up to 78% at 600 nm and is much thinner and lighter than its counterparts. PMID:29324686

  15. Theoretical Analysis and Design of Ultrathin Broadband Optically Transparent Microwave Metamaterial Absorbers.

    PubMed

    Deng, Ruixiang; Li, Meiling; Muneer, Badar; Zhu, Qi; Shi, Zaiying; Song, Lixin; Zhang, Tao

    2018-01-11

    The Optically Transparent Microwave Metamaterial Absorber (OTMMA) is of significant use in both civil and military fields. In this paper, an equivalent circuit model is adopted as a springboard to guide the design of OTMMA. The physical model and absorption mechanisms of an ideal lightweight ultrathin OTMMA are comprehensively researched. Both the theoretical value of the equivalent resistance and the quantitative relation between the equivalent inductance and equivalent capacitance are derived for design. Frequency-dependent characteristics of the theoretical equivalent resistance are also investigated. Based on this theoretical work, an effective and controllable design approach is proposed. To validate the approach, a wideband OTMMA is designed, fabricated, analyzed and tested. The results reveal that absorption above 90% can be achieved across the whole 6-18 GHz band. The fabricated OTMMA also has an optical transparency of up to 78% at 600 nm and is much thinner and lighter than its counterparts.

  16. One-dimensional barcode reading: an information theoretic approach

    NASA Astrophysics Data System (ADS)

    Houni, Karim; Sawaya, Wadih; Delignon, Yves

    2008-03-01

    In the context of converging identification and data-transmission technologies, the barcode has found its place as the simplest and most pervasive solution for new uses, especially within mobile commerce, bringing new life to this long-lived technology. From a communication-theory point of view, a barcode is a singular coding based on a graphical representation of the information to be transmitted. We present an information theoretic approach for 1D image-based barcode reading analysis. With a barcode facing the camera, distortions and acquisition are modeled as a communication channel. The performance of the system is evaluated by means of the average mutual information quantity. On the basis of this theoretical criterion for a reliable transmission, we introduce two new measures: the theoretical depth of field and the theoretical resolution. Simulations illustrate the gain of this approach.

  17. One-dimensional barcode reading: an information theoretic approach.

    PubMed

    Houni, Karim; Sawaya, Wadih; Delignon, Yves

    2008-03-10

    In the context of converging identification and data-transmission technologies, the barcode has found its place as the simplest and most pervasive solution for new uses, especially within mobile commerce, bringing new life to this long-lived technology. From a communication-theory point of view, a barcode is a singular coding based on a graphical representation of the information to be transmitted. We present an information theoretic approach for 1D image-based barcode reading analysis. With a barcode facing the camera, distortions and acquisition are modeled as a communication channel. The performance of the system is evaluated by means of the average mutual information quantity. On the basis of this theoretical criterion for a reliable transmission, we introduce two new measures: the theoretical depth of field and the theoretical resolution. Simulations illustrate the gain of this approach.
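    The criterion described in this record scores the acquisition system by average mutual information. As a hedged stand-in for the authors' full optical channel model, the sketch below computes I(X;Y) for a binary symmetric channel, treating each barcode module as a bit misread with probability p; the channel choice and the value p = 0.11 are illustrative assumptions, not taken from the paper.

```python
from math import log2

def h2(p):
    """Binary entropy H2(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_mutual_information(p):
    """I(X;Y) = 1 - H2(p) bits per module for a uniform binary input
    over a binary symmetric channel with crossover probability p."""
    return 1.0 - h2(p)

print(bsc_mutual_information(0.0))            # noiseless channel: 1.0 bit per module
print(round(bsc_mutual_information(0.11), 3)) # p = 0.11 roughly halves the information
```

    Sweeping p against, say, defocus blur would reproduce the paper's idea of a theoretical depth of field: the range of camera distances over which the mutual information stays above a reliability threshold.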

  18. Some Key Issues in Creating Inquiry-Based Instructional Practices that Aim at the Understanding of Simple Electric Circuits

    ERIC Educational Resources Information Center

    Kock, Zeger-Jan; Taconis, Ruurd; Bolhuis, Sanneke; Gravemeijer, Koeno

    2013-01-01

    Many students in secondary schools consider the sciences difficult and unattractive. This applies to physics in particular, a subject in which students attempt to learn and understand numerous theoretical concepts, often without much success. A case in point is the understanding of the concepts current, voltage and resistance in simple electric…

  19. Staying theoretically sensitive when conducting grounded theory research.

    PubMed

    Reay, Gudrun; Bouchal, Shelley Raffin; A Rankin, James

    2016-09-01

    Background Grounded theory (GT) is founded on the premise that underlying social patterns can be discovered and conceptualised into theories. The method and need for theoretical sensitivity are best understood in the historical context in which GT was developed. Theoretical sensitivity entails entering the field with no preconceptions, so as to remain open to the data and the emerging theory. Investigators also read literature from other fields to understand various ways to construct theories. Aim To explore the concept of theoretical sensitivity from a classical GT perspective, and discuss the ontological and epistemological foundations of GT. Discussion Difficulties in remaining theoretically sensitive throughout research are discussed and illustrated with examples. Emergence - the idea that theory and substance will emerge from the process of comparing data - and staying open to the data are emphasised. Conclusion Understanding theoretical sensitivity as an underlying guiding principle of GT helps the researcher make sense of important concepts, such as delaying the literature review, emergence and the constant comparative method (simultaneous collection, coding and analysis of data). Implications for practice Theoretical sensitivity and adherence to the GT research method allow researchers to discover theories that can bridge the gap between theory and practice.

  20. Theoretical and experimental studies of reentry plasmas

    NASA Technical Reports Server (NTRS)

    Dunn, M. G.; Kang, S.

    1973-01-01

    A viscous shock-layer analysis was developed and used to calculate nonequilibrium-flow species distributions in the plasma layer of the RAM vehicle. The theoretical electron-density results obtained are in good agreement with those measured in flight. A circular-aperture flush-mounted antenna was used to obtain a comparison between theoretical and experimental antenna admittance in the presence of ionized boundary layers of low collision frequency. The electron-temperature and electron-density distributions in the boundary layer were independently measured. The antenna admittance was measured using a four-probe microwave reflectometer and these measured values were found to be in good agreement with those predicted. Measurements were also performed with another type of circular-aperture antenna and good agreement was obtained between the calculations and the experimental results. A theoretical analysis has been completed which permits calculation of the nonequilibrium, viscous shock-layer flow field for a sphere-cone body. Results are presented for two different bodies at several different altitudes illustrating the influences of bluntness and chemical nonequilibrium on several gas dynamic parameters of interest. Plane-wave transmission coefficients were calculated for an approximate space-shuttle body using a typical trajectory.

  1. Simple versus complex degenerative mitral valve disease.

    PubMed

    Javadikasgari, Hoda; Mihaljevic, Tomislav; Suri, Rakesh M; Svensson, Lars G; Navia, Jose L; Wang, Robert Z; Tappuni, Bassman; Lowry, Ashley M; McCurry, Kenneth R; Blackstone, Eugene H; Desai, Milind Y; Mick, Stephanie L; Gillinov, A Marc

    2018-07-01

    At a center where surgeons favor mitral valve (MV) repair for all subsets of leaflet prolapse, we compared results of patients undergoing repair for simple versus complex degenerative MV disease. From January 1985 to January 2016, 6153 patients underwent primary isolated MV repair for degenerative disease, 3101 patients underwent primary isolated MV repair for simple disease (posterior prolapse), and 3052 patients underwent primary isolated MV repair for complex disease (anterior or bileaflet prolapse), based on preoperative echocardiographic images. Logistic regression analysis was used to generate propensity scores for risk-adjusted comparisons (n = 2065 matched pairs). Durability was assessed by longitudinal recurrence of mitral regurgitation and reoperation. Compared with patients with simple disease, those undergoing repair of complex pathology were more likely to be younger and female (both P values < .0001) but with similar symptoms (P = .3). The most common repair technique was ring/band annuloplasty (3055/99% simple vs 3000/98% complex; P = .5), followed by leaflet resection (2802/90% simple vs 2249/74% complex; P < .0001). Among propensity-matched patients, recurrence of severe mitral regurgitation 10 years after repair was 6.2% for simple pathology versus 11% for complex pathology (P = .007), reoperation at 18 years was 6.3% for simple pathology versus 11% for complex pathology, and 20-year survival was 62% for simple pathology versus 61% for complex pathology (P = .6). Early surgical intervention has become more common in patients with degenerative MV disease, regardless of valve prolapse complexity or symptom status. Valve repair was associated with similarly low operative risk and time-related survival but less durability in complex disease. Lifelong annual echocardiographic surveillance after MV repair is recommended, particularly in patients with complex disease. Copyright © 2018 The American Association for Thoracic Surgery

  2. Simple inflationary quintessential model. II. Power law potentials

    NASA Astrophysics Data System (ADS)

    de Haro, Jaume; Amorós, Jaume; Pan, Supriya

    2016-09-01

    The present work is a sequel to our previous work [Phys. Rev. D 93, 084018 (2016)], which depicted a simple version of an inflationary quintessential model whose inflationary stage was described by a Higgs-type potential and whose quintessential phase was driven by an exponential potential. Additionally, the model predicted a universe nonsingular in the past, though geodesically past incomplete. Further, it was also found that the model is in agreement with the Planck 2013 data when running is allowed. However, this model provides a theoretical value of the running far smaller than the central value of the best fit in the (ns, r, αs ≡ dns/d ln k) parameter space, where ns, r and αs respectively denote the spectral index, tensor-to-scalar ratio and running of the spectral index associated with any inflationary model; consequently, to analyze the viability of the model one has to focus on the two-dimensional marginalized confidence level in the allowed domain of the (ns, r) plane without taking the running into account. Unfortunately, such analysis shows that this model does not pass this test. However, in this sequel we propose a family of models governed by a single parameter α ∈ [0, 1], which realizes another "inflationary quintessential model" in which the inflation and quintessence regimes are respectively described by a power-law potential and a cosmological constant. The model is also nonsingular, although geodesically past incomplete as in the cited model. Moreover, the present one is simpler than the previous model and is in excellent agreement with the observational data. In fact, we note that, unlike the previous model, a large number of the models of this family with α ∈ [0, 1/2) match both Planck 2013 and Planck 2015 data without allowing the running. Thus, the properties in the current family of models compared to its past companion justify its need for a better cosmological model with the successive

  3. Simple, specific analysis of organophosphorus and carbamate pesticides in sediments using column extraction and gas chromatography

    USGS Publications Warehouse

    Belisle, A.A.; Swineford, D.M.

    1988-01-01

    A simple, specific procedure was developed for the analysis of organophosphorus and carbamate pesticides in sediment. The wet soil was mixed with anhydrous sodium sulfate to bind water and the residues were column extracted in acetone:methylene chloride (1:1, v/v). Coextracted water was removed by additional sodium sulfate packed below the sample mixture. The eluate was concentrated and analyzed directly by capillary gas chromatography using phosphorus- and nitrogen-specific detectors. Recoveries averaged 93% for sediments extracted shortly after spiking, but decreased significantly as the samples aged.

  4. Integrated Education in Conflicted Societies: Is There a Need for New Theoretical Language?

    ERIC Educational Resources Information Center

    Zembylas, Michalinos; Bekerman, Zvi

    2013-01-01

    This article takes on the issue of "integrated education" in conflicted societies and engages in a deeper analysis of its dominant theoretical concepts, approaches, and implications. This analysis suggests that the theoretical language that drives current approaches of integrated education may unintentionally be complicit to the project…

  5. Biology is more theoretical than physics.

    PubMed

    Gunawardena, Jeremy

    2013-06-01

    The word "theory" is used in at least two senses--to denote a body of widely accepted laws or principles, as in "Darwinian theory" or "quantum theory," and to suggest a speculative hypothesis, often relying on mathematical analysis, that has not been experimentally confirmed. It is often said that there is no place for the second kind of theory in biology and that biology is not theoretical but based on interpretation of data. Here, ideas from a previous essay are expanded upon to suggest, to the contrary, that the second kind of theory has always played a critical role and that biology, therefore, is a good deal more theoretical than physics.

  6. Simple Benchmark Specifications for Space Radiation Protection

    NASA Technical Reports Server (NTRS)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. The specification starts with simple, monoenergetic, mono-directional particles on slabs and progresses to human models in spacecraft. The report specifies the models and sources needed, and what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  7. The Dynamics of Germinal Centre Selection as Measured by Graph-Theoretical Analysis of Mutational Lineage Trees

    PubMed Central

    Dunn-Walters, Deborah K.; Belelovsky, Alex; Edelman, Hanna; Banerjee, Monica; Mehr, Ramit

    2002-01-01

    We have developed a rigorous graph-theoretical algorithm for quantifying the shape properties of mutational lineage trees. We show that information about the dynamics of hypermutation and antigen-driven clonal selection during the humoral immune response is contained in the shape of mutational lineage trees deduced from the responding clones. Age and tissue related differences in the selection process can be studied using this method. Thus, tree shape analysis can be used as a means of elucidating humoral immune response dynamics in various situations. PMID:15144020
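    The record above quantifies lineage trees by their graph-theoretical shape properties, but the abstract does not list the specific measures used. As generic examples of such shape statistics, the sketch below computes tree depth and leaf count on a toy lineage tree; the tree itself and the choice of measures are illustrative assumptions.

```python
# Toy mutational lineage tree as a parent -> children mapping; nodes with
# no children are leaves. The tree contents are made up for illustration.
tree = {"root": ["a", "b"], "a": ["c", "d"], "b": [], "c": [], "d": ["e"], "e": []}

def depth(node):
    """Length of the longest root-to-leaf path below node (0 for a leaf)."""
    kids = tree[node]
    return 0 if not kids else 1 + max(depth(k) for k in kids)

def leaves(node):
    """Number of leaves in the subtree rooted at node."""
    kids = tree[node]
    return 1 if not kids else sum(leaves(k) for k in kids)

print(depth("root"), leaves("root"))   # 3 3  (longest path root-a-d-e; leaves b, c, e)
```

    In the paper's setting, deeper trees reflect more rounds of hypermutation, and the balance between depth and branching carries the signature of antigen-driven selection.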

  8. Structural modeling and analysis of an effluent treatment process for electroplating--a graph theoretic approach.

    PubMed

    Kumar, Abhishek; Clement, Shibu; Agrawal, V P

    2010-07-15

    An attempt is made to address a few ecological and environment issues by developing different structural models for effluent treatment system for electroplating. The effluent treatment system is defined with the help of different subsystems contributing to waste minimization. Hierarchical tree and block diagram showing all possible interactions among subsystems are proposed. These non-mathematical diagrams are converted into mathematical models for design improvement, analysis, comparison, storage retrieval and commercially off-the-shelf purchases of different subsystems. This is achieved by developing graph theoretic model, matrix models and variable permanent function model. Analysis is carried out by permanent function, hierarchical tree and block diagram methods. Storage and retrieval is done using matrix models. The methodology is illustrated with the help of an example. Benefits to the electroplaters/end user are identified. 2010 Elsevier B.V. All rights reserved.

  9. Au133(SPh-tBu)52 nanomolecules: X-ray crystallography, optical, electrochemical, and theoretical analysis.

    PubMed

    Dass, Amala; Theivendran, Shevanuja; Nimmala, Praneeth Reddy; Kumara, Chanaka; Jupally, Vijay Reddy; Fortunelli, Alessandro; Sementa, Luca; Barcaro, Giovanni; Zuo, Xiaobing; Noll, Bruce C

    2015-04-15

    Crystal structure determination has revolutionized modern science in biology, chemistry, and physics. However, the difficulty in obtaining periodic crystal lattices which are needed for X-ray crystal analysis has hindered the determination of atomic structure in nanomaterials, known as the "nanostructure problem". Here, by using rigid and bulky ligands, we have overcome this limitation and successfully solved the X-ray crystallographic structure of the largest reported thiolated gold nanomolecule, Au133S52. The total composition, Au133(SPh-tBu)52, was verified using high resolution electrospray ionization mass spectrometry (ESI-MS). The experimental and simulated optical spectra show an emergent surface plasmon resonance that is more pronounced than in the slightly larger Au144(SCH2CH2Ph)60. Theoretical analysis indicates that the presence of rigid and bulky ligands is the key to the successful crystal formation.

  10. Stability analysis of BWR nuclear-coupled thermal-hydraulics using a simple model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karve, A.A.; Rizwan-uddin; Dorning, J.J.

    1995-09-01

    A simple mathematical model is developed to describe the dynamics of the nuclear-coupled thermal-hydraulics in a boiling water reactor (BWR) core. The model, which incorporates the essential features of neutron kinetics, and single-phase and two-phase thermal-hydraulics, leads to a simple dynamical system comprising a set of nonlinear ordinary differential equations (ODEs). The stability boundary is determined and plotted in the inlet-subcooling-number (enthalpy)/external-reactivity operating parameter plane. The eigenvalues of the Jacobian matrix of the dynamical system are also calculated at various steady-states (fixed points); the results are consistent with those of the direct stability analysis and indicate that a Hopf bifurcation occurs as the stability boundary in the operating parameter plane is crossed. Numerical simulations of the time-dependent, nonlinear ODEs are carried out for selected points in the operating parameter plane to obtain the actual damped and growing oscillations in the neutron number density, the channel inlet flow velocity, and the other phase variables. These indicate that the Hopf bifurcation is subcritical; hence, density wave oscillations with growing amplitude could result from a finite perturbation of the system even where the steady-state is stable. The power-flow map, frequently used by reactor operators during start-up and shut-down operation of a BWR, is mapped to the inlet-subcooling-number/neutron-density (operating-parameter/phase-variable) plane, and then related to the stability boundaries for different fixed inlet velocities corresponding to selected points on the flow-control line. The stability boundaries for different fixed inlet subcooling numbers corresponding to those selected points are plotted in the neutron-density/inlet-velocity phase variable plane, and then the points on the flow-control line are related to their respective stability boundaries in this plane.
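    The stability test used above, computing the eigenvalues of the Jacobian at a fixed point and locating a Hopf bifurcation where a complex pair crosses the imaginary axis, can be illustrated on a much smaller system. The sketch below uses the two-variable Hopf normal form, not the paper's BWR model:

```python
import cmath

# Hopf normal form (illustrative stand-in for the BWR ODE system):
#   dx/dt = mu*x - y - x*(x^2 + y^2)
#   dy/dt = x + mu*y - y*(x^2 + y^2)
# The origin is a fixed point with Jacobian J = [[mu, -1], [1, mu]],
# whose eigenvalues are mu +/- i; stability flips exactly at mu = 0.
def jacobian_eigs(mu):
    """Eigenvalues of the 2x2 Jacobian via the trace/determinant formula."""
    tr, det = 2.0 * mu, mu * mu + 1.0
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    return (tr + disc) / 2.0, (tr - disc) / 2.0

for mu in (-0.5, 0.0, 0.5):
    l1, l2 = jacobian_eigs(mu)
    stable = max(l1.real, l2.real) < 0.0
    print(mu, "stable" if stable else "not stable")
```

    The same recipe applies to the BWR model: linearize the full ODE set at each steady-state, and the parameter value where the leading eigenvalue pair reaches zero real part traces out the stability boundary in the operating parameter plane.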

  11. Study on the influence of X-ray tube spectral distribution on the analysis of bulk samples and thin films: Fundamental parameters method and theoretical coefficient algorithms

    NASA Astrophysics Data System (ADS)

    Sitko, Rafał

    2008-11-01

    Knowledge of X-ray tube spectral distribution is necessary in theoretical methods of matrix correction, i.e. in both fundamental parameter (FP) methods and theoretical influence coefficient algorithms. Thus, the influence of X-ray tube distribution on the accuracy of the analysis of thin films and bulk samples is presented. The calculations are performed using experimental X-ray tube spectra taken from the literature and theoretical X-ray tube spectra evaluated by three different algorithms proposed by Pella et al. (X-Ray Spectrom. 14 (1985) 125-135), Ebel (X-Ray Spectrom. 28 (1999) 255-266), and Finkelshtein and Pavlova (X-Ray Spectrom. 28 (1999) 27-32). In this study, Fe-Cr-Ni system is selected as an example and the calculations are performed for X-ray tubes commonly applied in X-ray fluorescence analysis (XRF), i.e., Cr, Mo, Rh and W. The influence of X-ray tube spectra on FP analysis is evaluated when quantification is performed using various types of calibration samples. FP analysis of bulk samples is performed using pure-element bulk standards and multielement bulk standards similar to the analyzed material, whereas for FP analysis of thin films, the bulk and thin pure-element standards are used. For the evaluation of the influence of X-ray tube spectra on XRF analysis performed by theoretical influence coefficient methods, two algorithms for bulk samples are selected, i.e. Claisse-Quintin (Can. Spectrosc. 12 (1967) 129-134) and COLA algorithms (G.R. Lachance, Paper Presented at the International Conference on Industrial Inorganic Elemental Analysis, Metz, France, June 3, 1981) and two algorithms (constant and linear coefficients) for thin films recently proposed by Sitko (X-Ray Spectrom. 37 (2008) 265-272).

  12. SIMPLE: An Introduction.

    ERIC Educational Resources Information Center

    Endres, Frank L.

    Symbolic Interactive Matrix Processing Language (SIMPLE) is a conversational matrix-oriented source language suited to a batch or a time-sharing environment. The two modes of operation of SIMPLE are conversational mode and programing mode. This program uses a TAURUS time-sharing system and cathode ray terminals or teletypes. SIMPLE performs all…

  13. Theoretical analysis for the design of the French watt balance experiment force comparator

    NASA Astrophysics Data System (ADS)

    Pinot, Patrick; Genevès, Gerard; Haddad, Darine; David, Jean; Juncar, Patrick; Lecollinet, Michel; Macé, Stéphane; Villar, François

    2007-09-01

    This paper presents a preliminary analysis for designing a force comparator to be used in the French watt balance experiment. The first stage of this experiment consists in a static equilibrium, by means of a mechanical beam balance, between a gravitational force (a weight of an artefact having a known mass submitted to the acceleration due to the gravity) and a vertical electromagnetic force acting on a coil driven by a current subject to the magnetic induction field provided by a permanent magnet. The principle of the force comparison in the French experiment is explained. The general design configuration of the force balance using flexure strips as pivots is discussed and theoretical calculation results based on realistic assumptions of the static and dynamic behaviors of the balance are presented.

  14. Theoretical analysis for the design of the French watt balance experiment force comparator.

    PubMed

    Pinot, Patrick; Genevès, Gerard; Haddad, Darine; David, Jean; Juncar, Patrick; Lecollinet, Michel; Macé, Stéphane; Villar, François

    2007-09-01

    This paper presents a preliminary analysis for designing a force comparator to be used in the French watt balance experiment. The first stage of this experiment consists in a static equilibrium, by means of a mechanical beam balance, between a gravitational force (a weight of an artefact having a known mass submitted to the acceleration due to the gravity) and a vertical electromagnetic force acting on a coil driven by a current subject to the magnetic induction field provided by a permanent magnet. The principle of the force comparison in the French experiment is explained. The general design configuration of the force balance using flexure strips as pivots is discussed and theoretical calculation results based on realistic assumptions of the static and dynamic behaviors of the balance are presented.

  15. Game Theoretic Resolution of Water Conflicts

    NASA Astrophysics Data System (ADS)

    Tyagi, H.; Gosain, A. K.; Khosa, R.

    2017-12-01

    Water disputes are of a multi-disciplinary nature and involve an array of natural, hydrological, social, political and economic issues. Operations Research based decision-making methods have been found to facilitate mathematical analysis of such multifaceted problems that consist of multiple stakeholders and their conflicting objectives. Game Theoretic techniques like Metagame and Hypergame Analysis can provide a framework for conceptualizing water conflicts and envisaging their potential solutions. In the present research, firstly a Metagame model has been developed to identify the range of plausible equilibrium outcomes for resolving conflicts pertaining to water apportionments in a transboundary watercourse. Further, it has been observed that the contenders often hide their strategies from other players to get favorable water allocations. Consequently, there are widespread misinterpretations about the tactics of the competitors, and contenders have to formulate their strategies entirely based on their perception about others. Accordingly, a Hypergame study has also been conducted to model the probable misperceptions that may exist amongst the river riparians. Thus, the current study assesses the efficacy of Game Theoretic techniques as a possible redressal mechanism for water conflicts.

  16. Theoretical modeling and experimental analysis of solar still integrated with evacuated tubes

    NASA Astrophysics Data System (ADS)

    Panchal, Hitesh; Awasthi, Anuradha

    2017-06-01

    In this present research work, theoretical modeling of a single-slope, single-basin solar still integrated with evacuated tubes has been performed based on energy balance equations. Major variables like water temperature, inner glass cover temperature and distillate output have been computed from the theoretical model. The experimental setup was built from locally available materials and installed at Gujarat Power Engineering and Research Institute, Mehsana, Gujarat, India (23.5880°N, 72.3693°E) with 0.04 m water depth over a 6-month interval. From the series of experiments, a considerable increase in average distillate output was found when the solar still was integrated with evacuated tubes, not only during the daytime but also at night. In all experimental cases, the correlation coefficient (r) and root-mean-square percentage deviation (e) between the theoretical model and the experimental study showed good agreement, with 0.97 < r < 0.98 and 10.22% < e < 38.4%, respectively.
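    The two agreement metrics quoted in this record, the correlation coefficient r and the root-mean-square percentage deviation e, can be computed as below. The formulas are standard definitions (the abstract does not spell them out) and the sample numbers are invented for illustration.

```python
from math import sqrt

def pearson_r(t, x):
    """Pearson correlation coefficient between theoretical (t) and
    experimental (x) series of equal length."""
    n = len(t)
    mt, mx = sum(t) / n, sum(x) / n
    cov = sum((a - mt) * (b - mx) for a, b in zip(t, x))
    st = sqrt(sum((a - mt) ** 2 for a in t))
    sx = sqrt(sum((b - mx) ** 2 for b in x))
    return cov / (st * sx)

def rms_percent_deviation(t, x):
    """Root-mean-square percentage deviation of t from x."""
    n = len(t)
    return sqrt(sum(((a - b) / b * 100.0) ** 2 for a, b in zip(t, x)) / n)

theory = [1.1, 2.0, 2.9, 4.2]   # hypothetical hourly distillate predictions
exper  = [1.0, 2.1, 3.0, 4.0]   # hypothetical measurements
print(round(pearson_r(theory, exper), 3))
print(round(rms_percent_deviation(theory, exper), 2))
```

    Values of r near 1 with a modest e, as reported in the abstract, indicate that the model tracks the measured trend well even when individual points deviate by several percent.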

  17. Expanding Panjabi's stability model to express movement: a theoretical model.

    PubMed

    Hoffman, J; Gabel, P

    2013-06-01

    on human movement. The use of this model may provide a universal system for body movement analysis and understanding musculoskeletal disorders. In turn, this may lead to a simple categorisation system alluding to the functional face-value of a wide range of commonly used passive, active or combined musculoskeletal interventions. Further research is required to investigate the mechanisms that enable or interfere with harmonious body movements. Such work may then potentially lead to new and evolved evidence based interventions. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Spacecraft-charging mitigation of a high-power electron beam emitted by a magnetospheric spacecraft: Simple theoretical model for the transient of the spacecraft potential

    DOE PAGES

    Castello, Federico Lucco; Delzanno, Gian Luca; Borovsky, Joseph E.; ...

    2018-05-29

    A spacecraft-charging mitigation scheme necessary for the operation of a high-power electron beam in the low-density magnetosphere is analyzed. The scheme is based on a plasma contactor, i.e. a high-density charge-neutral plasma emitted prior to and during beam emission, and its ability to emit high ion currents without strong space-charge limitations. A simple theoretical model for the transient of the spacecraft potential and contactor expansion during beam emission is presented. The model focuses on the contactor ion dynamics and is valid in the limit when the ion contactor current is equal to the beam current. The model is found in very good agreement with Particle-In-Cell simulations over a large parametric study that varies the initial expansion time of the contactor, the contactor current and the ion mass. The model highlights the physics of the spacecraft-charging mitigation scheme, indicating that the most important part of the dynamics is the evolution of the outermost ion front, which is pushed away by the charge accumulated in the system by the beam. The model can also be used to estimate the long-time evolution of the spacecraft potential. For a short contactor expansion (0.3 or 0.6 ms Helium plasma or 0.8 ms Argon plasma, all with a 1 mA current) it yields a peak spacecraft potential of the order of 1-3 kV. This implies that a 1-mA relativistic electron beam would be easily emitted by the spacecraft.

  19. Spacecraft-charging mitigation of a high-power electron beam emitted by a magnetospheric spacecraft: Simple theoretical model for the transient of the spacecraft potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castello, Federico Lucco; Delzanno, Gian Luca; Borovsky, Joseph E.

    A spacecraft-charging mitigation scheme necessary for the operation of a high-power electron beam in the low-density magnetosphere is analyzed. The scheme is based on a plasma contactor, i.e. a high-density charge-neutral plasma emitted prior to and during beam emission, and its ability to emit high ion currents without strong space-charge limitations. A simple theoretical model for the transient of the spacecraft potential and contactor expansion during beam emission is presented. The model focuses on the contactor ion dynamics and is valid in the limit when the ion contactor current is equal to the beam current. The model is found in very good agreement with Particle-In-Cell simulations over a large parametric study that varies the initial expansion time of the contactor, the contactor current and the ion mass. The model highlights the physics of the spacecraft-charging mitigation scheme, indicating that the most important part of the dynamics is the evolution of the outermost ion front, which is pushed away by the charge accumulated in the system by the beam. The model can also be used to estimate the long-time evolution of the spacecraft potential. For a short contactor expansion (0.3 or 0.6 ms Helium plasma or 0.8 ms Argon plasma, all with a 1 mA current) it yields a peak spacecraft potential of the order of 1-3 kV. This implies that a 1-mA relativistic electron beam would be easily emitted by the spacecraft.

  20. Theoretical and experimental evidence of Fano-like resonances in simple monomode photonic circuits

    NASA Astrophysics Data System (ADS)

    Mouadili, A.; El Boudouti, E. H.; Soltani, A.; Talbi, A.; Akjouj, A.; Djafari-Rouhani, B.

    2013-04-01

    A simple photonic device consisting of two dangling side resonators grafted at two sites on a waveguide is designed in order to obtain sharp resonant states inside the transmission gaps without introducing any defects in the structure. These states result from an internal resonance of the structure when such a resonance lies in the vicinity of a zero of transmission, or is placed between two zeros of transmission; these are the so-called Fano resonances. A general analytical expression for the transmission coefficient is given for various systems of this kind. The amplitude of the transmission follows the Fano form. The full width at half maximum of the resonances as well as the asymmetric Fano parameter are discussed explicitly as functions of the geometrical parameters of the system. In addition to the usual asymmetric Fano resonance, we show that this system may exhibit an electromagnetically induced transparency resonance, as well as a particular case where such resonances collapse in the transmission coefficient. We also compare the phase of the determinant of the scattering matrix, the so-called Friedel phase, with the phase of the transmission amplitude. The analytical results are obtained by means of the Green's function method, whereas the experiments are carried out using coaxial cables in the radio-frequency regime. These results should have important consequences for designing integrated devices such as narrow-frequency optical or microwave filters and high-speed switches. This system is proposed as a simpler alternative to coupled microresonators.
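    The asymmetric profile underlying these resonances follows the standard Fano lineshape. A minimal numerical sketch of the textbook normalized form (not the paper's specific Green's-function expression; the reduced detuning eps and asymmetry parameter q are generic symbols) is:

```python
def fano_transmission(eps, q):
    """Normalized Fano lineshape: T = (q + eps)^2 / ((1 + q^2) * (1 + eps^2)).

    eps : reduced detuning from the resonance center
    q   : Fano asymmetry parameter
    """
    return (q + eps) ** 2 / ((1 + q ** 2) * (1 + eps ** 2))

# q = 0 gives a symmetric dip with a transmission zero at resonance;
# the zero at eps = -q marks the Fano antiresonance, and the maximum
# T = 1 sits at eps = 1/q.
print(fano_transmission(0.0, 0.0))   # transmission zero at resonance for q = 0
print(fano_transmission(-1.0, 1.0))  # zero at eps = -q
print(fano_transmission(1.0, 1.0))   # unit maximum at eps = 1/q
```

As |q| grows the profile tends to a symmetric Lorentzian peak, which is why q is read off experimentally as a measure of line asymmetry.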

  1. Effect of Profilin on Actin Critical Concentration: A Theoretical Analysis

    PubMed Central

    Yarmola, Elena G.; Dranishnikov, Dmitri A.; Bubb, Michael R.

    2008-01-01

    To explain the effect of profilin on actin critical concentration in a manner consistent with thermodynamic constraints and available experimental data, we built a thermodynamically rigorous model of actin steady-state dynamics in the presence of profilin. We analyzed previously published mechanisms theoretically and experimentally and, based on our analysis, suggest a new explanation for the effect of profilin. It is based on a general principle of indirect energy coupling. The fluctuation-based process of exchange diffusion indirectly couples the energy of ATP hydrolysis to actin polymerization. Profilin modulates this coupling, producing two basic effects. The first is based on the acceleration of exchange diffusion by profilin, which indicates, paradoxically, that a faster rate of actin depolymerization promotes net polymerization. The second is an affinity-based mechanism similar to the one suggested in 1993 by Pantaloni and Carlier although based on indirect rather than direct energy coupling. In the model by Pantaloni and Carlier, transformation of chemical energy of ATP hydrolysis into polymerization energy is regulated by direct association of each step in the hydrolysis reaction with a corresponding step in polymerization. Thus, hydrolysis becomes a time-limiting step in actin polymerization. In contrast, indirect coupling allows ATP hydrolysis to lag behind actin polymerization, consistent with experimental results. PMID:18835900

  2. A simple linear regression method for quantitative trait loci linkage analysis with censored observations.

    PubMed

    Anderson, Carl A; McRae, Allan F; Visscher, Peter M

    2006-07-01

    Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.

  3. Morphologic Analysis of Lunar Craters in the Simple-to-Complex Transition

    NASA Astrophysics Data System (ADS)

    Chandnani, M.; Herrick, R. R.; Kramer, G. Y.

    2015-12-01

    The diameter range of 15 km to 20 km on the Moon falls within the transition from simple to complex impact craters. We examined 207 well-preserved craters in this diameter range, distributed across the Moon, using high-resolution Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) mosaic and Narrow Angle Camera (NAC) data. A map of the distribution of the 207 craters on the Moon, based on the global LROC WAC mosaic, is attached with the abstract. Because the craters examined have similar diameters, impact energy is nearly constant, so differences in shape and morphology must be due to either target (e.g., porosity, density, coherence, layering) or impactor (e.g., velocity, density) properties. On the basis of crater morphology, topographic profiles and depth-diameter ratio, the craters were classified into simple craters, craters with slumped walls, craters with both slumping and terracing, those containing a central uplift only, those with a central uplift and slumping, and craters with a central uplift accompanied by both slumping and terracing, as shown in the image. We observed that simple craters and craters with slumped walls occur predominantly on the lunar highlands. The majority of the craters with terraced walls and all classes of central uplifts were observed predominantly on the mare. In short, in this size range craters in the highlands were generally simple craters, occasionally with some slumped material in the center, and the more developed features (terracing, central peaks) were associated with mare craters. This is somewhat counterintuitive, as we expect the highlands to be generally weaker and less consolidated than the mare. We hypothesize that the presence of rheologic layering in the mare may be the cause of the more complex features that we observe. Relatively weak layers in the mare could develop through regolith formation between individual flows, or perhaps by variations within or between the flows themselves.

  4. Non-planar vibrations of slightly curved pipes conveying fluid in simple and combination parametric resonances

    NASA Astrophysics Data System (ADS)

    Czerwiński, Andrzej; Łuczko, Jan

    2018-01-01

    The paper summarises the experimental investigations and numerical simulations of non-planar parametric vibrations of a statically deformed pipe. Underpinning the theoretical analysis is a 3D dynamic model of a curved pipe. The pipe motion is governed by four non-linear partial differential equations with periodically varying coefficients. The Galerkin method was applied, the shape functions being those governing the beam's natural vibrations. Experiments were conducted in the range of simple and combination parametric resonances, evidencing the possibility of in-plane and out-of-plane vibrations as well as fully non-planar vibrations in the combination resonance range. It is demonstrated that sub-harmonic and quasi-periodic vibrations are likely to be excited. The method suggested allows the spatial modes to be determined based on results recorded at selected points on the pipe. Results are summarised in the form of time histories, phase trajectory plots and spectral diagrams. Dedicated video materials give a better insight into the investigated phenomena.

  5. Design and analysis of simple choice surveys for natural resource management

    USGS Publications Warehouse

    Fieberg, John; Cornicelli, Louis; Fulton, David C.; Grund, Marrett D.

    2010-01-01

    We used a simple yet powerful method for judging public support for management actions from randomized surveys. We asked respondents to rank choices (representing management regulations under consideration) according to their preference, and we then used discrete choice models to estimate probability of choosing among options (conditional on the set of options presented to respondents). Because choices may share similar unmodeled characteristics, the multinomial logit model, commonly applied to discrete choice data, may not be appropriate. We introduced the nested logit model, which offers a simple approach for incorporating correlation among choices. This forced choice survey approach provides a useful method of gathering public input; it is relatively easy to apply in practice, and the data are likely to be more informative than asking constituents to rate attractiveness of each option separately.
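    A hedged sketch of the two-level nested logit the abstract refers to, using the standard inclusive-value formulation (the option names, utilities, nest structure and dissimilarity parameters below are illustrative, not from the survey):

```python
import math

def nested_logit_probs(nests, lambdas):
    """Two-level nested logit choice probabilities.

    nests   : dict mapping nest name -> {alternative: utility V_j}
    lambdas : dict mapping nest name -> dissimilarity parameter (0 < lambda <= 1);
              lambda = 1 for every nest reduces the model to multinomial logit.
    Returns {alternative: unconditional choice probability}.
    """
    # Inclusive value of each nest: IV_m = log(sum_j exp(V_j / lambda_m))
    iv = {m: math.log(sum(math.exp(v / lambdas[m]) for v in alts.values()))
          for m, alts in nests.items()}
    # Marginal probability of choosing nest m
    denom = sum(math.exp(lambdas[m] * iv[m]) for m in nests)
    p_nest = {m: math.exp(lambdas[m] * iv[m]) / denom for m in nests}
    # Conditional probability within each nest, then combine
    probs = {}
    for m, alts in nests.items():
        z = sum(math.exp(v / lambdas[m]) for v in alts.values())
        for j, v in alts.items():
            probs[j] = p_nest[m] * math.exp(v / lambdas[m]) / z
    return probs

# Hypothetical regulation options grouped into two nests of similar choices
nests = {"restrictive": {"optA": 1.0, "optB": 0.8}, "liberal": {"optC": 0.5}}
p = nested_logit_probs(nests, {"restrictive": 0.6, "liberal": 1.0})
assert abs(sum(p.values()) - 1.0) < 1e-12  # probabilities sum to one
```

The lambda parameter is what lets correlated ("similar unmodeled") choices share a nest: the closer lambda is to 0, the more substitutable the options within that nest are treated as being.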

  6. Simple Process-Based Simulators for Generating Spatial Patterns of Habitat Loss and Fragmentation: A Review and Introduction to the G-RaFFe Model

    PubMed Central

    Pe'er, Guy; Zurita, Gustavo A.; Schober, Lucia; Bellocq, Maria I.; Strer, Maximilian; Müller, Michael; Pütz, Sandro

    2013-01-01

    Landscape simulators are widely applied in landscape ecology for generating landscape patterns. These models can be divided into two categories: pattern-based models that generate spatial patterns irrespective of the processes that shape them, and process-based models that attempt to generate patterns based on the processes that shape them. The latter often tend toward complexity in an attempt to obtain high predictive precision, but are rarely used for generic or theoretical purposes. Here we show that a simple process-based simulator can generate a variety of spatial patterns including realistic ones, typifying landscapes fragmented by anthropogenic activities. The model “G-RaFFe” generates roads and fields to reproduce the processes in which forests are converted into arable lands. For a selected level of habitat cover, three factors dominate its outcomes: the number of roads (accessibility), maximum field size (accounting for land ownership patterns), and maximum field disconnection (which enables field to be detached from roads). We compared the performance of G-RaFFe to three other models: Simmap (neutral model), Qrule (fractal-based) and Dinamica EGO (with 4 model versions differing in complexity). A PCA-based analysis indicated G-RaFFe and Dinamica version 4 (most complex) to perform best in matching realistic spatial patterns, but an alternative analysis which considers model variability identified G-RaFFe and Qrule as performing best. We also found model performance to be affected by habitat cover and the actual land-uses, the latter reflecting on land ownership patterns. We suggest that simple process-based generators such as G-RaFFe can be used to generate spatial patterns as templates for theoretical analyses, as well as for gaining better understanding of the relation between spatial processes and patterns. We suggest caution in applying neutral or fractal-based approaches, since spatial patterns that typify anthropogenic landscapes are often non

  7. Simple process-based simulators for generating spatial patterns of habitat loss and fragmentation: a review and introduction to the G-RaFFe model.

    PubMed

    Pe'er, Guy; Zurita, Gustavo A; Schober, Lucia; Bellocq, Maria I; Strer, Maximilian; Müller, Michael; Pütz, Sandro

    2013-01-01

    Landscape simulators are widely applied in landscape ecology for generating landscape patterns. These models can be divided into two categories: pattern-based models that generate spatial patterns irrespective of the processes that shape them, and process-based models that attempt to generate patterns based on the processes that shape them. The latter often tend toward complexity in an attempt to obtain high predictive precision, but are rarely used for generic or theoretical purposes. Here we show that a simple process-based simulator can generate a variety of spatial patterns including realistic ones, typifying landscapes fragmented by anthropogenic activities. The model "G-RaFFe" generates roads and fields to reproduce the processes in which forests are converted into arable lands. For a selected level of habitat cover, three factors dominate its outcomes: the number of roads (accessibility), maximum field size (accounting for land ownership patterns), and maximum field disconnection (which enables field to be detached from roads). We compared the performance of G-RaFFe to three other models: Simmap (neutral model), Qrule (fractal-based) and Dinamica EGO (with 4 model versions differing in complexity). A PCA-based analysis indicated G-RaFFe and Dinamica version 4 (most complex) to perform best in matching realistic spatial patterns, but an alternative analysis which considers model variability identified G-RaFFe and Qrule as performing best. We also found model performance to be affected by habitat cover and the actual land-uses, the latter reflecting on land ownership patterns. We suggest that simple process-based generators such as G-RaFFe can be used to generate spatial patterns as templates for theoretical analyses, as well as for gaining better understanding of the relation between spatial processes and patterns. We suggest caution in applying neutral or fractal-based approaches, since spatial patterns that typify anthropogenic landscapes are often non

  8. A Simple and Accurate Analysis of Conductivity Loss in Millimeter-Wave Helical Slow-Wave Structures

    NASA Astrophysics Data System (ADS)

    Datta, S. K.; Kumar, Lalit; Basu, B. N.

    2009-04-01

    An electromagnetic field analysis of a helix slow-wave structure was carried out and a closed-form expression was derived for the inductance per unit length of the transmission-line equivalent circuit of the structure, taking into account the actual helix tape dimensions and the surface current on the helix over the actual metallic area of the tape. The expression for the inductance per unit length was then used to estimate the increment in inductance caused by the penetration of magnetic flux into the conducting surfaces, following Wheeler's incremental inductance rule, which was subsequently interpreted in terms of the attenuation constant of the propagating structure. The analysis is computationally simple and accurate, and attains the accuracy of 3D electromagnetic analysis by allowing the use of dispersion characteristics obtainable from any standard electromagnetic model. The approach was benchmarked against measurements for two practical structures, and excellent agreement was observed. The analysis was subsequently applied to demonstrate the effects of conductivity on the attenuation constant of a typical broadband millimeter-wave helical slow-wave structure with respect to helix materials and copper plating on the helix, surface finish of the helix, dielectric loading and high-temperature operation; a comparative study of these aspects is presented.

  9. Experimental and Theoretical Analysis of Sound Absorption Properties of Finely Perforated Wooden Panels

    PubMed Central

    Song, Boqi; Peng, Limin; Fu, Feng; Liu, Meihong; Zhang, Houjiang

    2016-01-01

    Perforated wooden panels are typically utilized as a resonant sound-absorbing material in indoor noise control. In this paper, the absorption properties of wooden panels perforated with tiny holes of 1–3 mm diameter were studied both experimentally and theoretically. The Maa-MPP (micro-perforated panels) model and the Maa-Flex model were applied to predict the absorption behavior of finely perforated wooden panels. A relative impedance comparison and full-factorial experiments were carried out to verify the feasibility of the theoretical models. The results showed that the Maa-Flex model was in good agreement with measured results. Control experiments and measurements of dynamic mechanical properties were carried out to investigate the influence of wood characteristics. In this study, absorption properties were enhanced by sound-induced vibration. The relationship between the dynamic mechanical properties and the panel's mass-spring vibration absorption was revealed. Although the absorption effects of the wood's porous structure were not observed experimentally, they were demonstrated theoretically by modeling acoustic wave propagation in a simplified circular pipe with a suddenly changed cross-section. This work provides experimental and theoretical guidance for perforation parameter design. PMID:28774063

  10. Experimental and Theoretical Analysis of Sound Absorption Properties of Finely Perforated Wooden Panels.

    PubMed

    Song, Boqi; Peng, Limin; Fu, Feng; Liu, Meihong; Zhang, Houjiang

    2016-11-22

    Perforated wooden panels are typically utilized as a resonant sound-absorbing material in indoor noise control. In this paper, the absorption properties of wooden panels perforated with tiny holes of 1-3 mm diameter were studied both experimentally and theoretically. The Maa-MPP (micro-perforated panels) model and the Maa-Flex model were applied to predict the absorption behavior of finely perforated wooden panels. A relative impedance comparison and full-factorial experiments were carried out to verify the feasibility of the theoretical models. The results showed that the Maa-Flex model was in good agreement with measured results. Control experiments and measurements of dynamic mechanical properties were carried out to investigate the influence of wood characteristics. In this study, absorption properties were enhanced by sound-induced vibration. The relationship between the dynamic mechanical properties and the panel's mass-spring vibration absorption was revealed. Although the absorption effects of the wood's porous structure were not observed experimentally, they were demonstrated theoretically by modeling acoustic wave propagation in a simplified circular pipe with a suddenly changed cross-section. This work provides experimental and theoretical guidance for perforation parameter design.

  11. Au133(SPh-tBu)52 Nanomolecules: X-ray Crystallography, Optical, Electrochemical, and Theoretical Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dass, Amala; Theivendran, Shevanuja; Nimmala, Praneeth Reddy

    2015-04-15

    Crystal structure determination has revolutionized modern science in biology, chemistry, and physics. However, the difficulty in obtaining the periodic crystal lattices needed for X-ray crystal analysis has hindered the determination of atomic structure in nanomaterials, known as the “nanostructure problem”. Here, by using rigid and bulky ligands, we have overcome this limitation and successfully solved the X-ray crystallographic structure of the largest reported thiolated gold nanomolecule, Au133S52. The total composition, Au133(SPh-tBu)52, was verified using high-resolution electrospray ionization mass spectrometry (ESI-MS). The experimental and simulated optical spectra show an emergent surface plasmon resonance that is more pronounced than in the slightly larger Au144(SCH2CH2Ph)60. Theoretical analysis indicates that the presence of rigid and bulky ligands is the key to the successful crystal formation.

  12. Au133(SPh-tBu)52 Nanomolecules: X-ray Crystallography, Optical, Electrochemical, and Theoretical Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dass, Amala; Theivendran, Shevanuja; Nimmala, Praneeth Reddy

    2015-04-15

    Crystal structure determination has revolutionized modern science in biology, chemistry, and physics. However, the difficulty in obtaining the periodic crystal lattices needed for X-ray crystal analysis has hindered the determination of atomic structure in nanomaterials, known as the "nanostructure problem". Here, by using rigid and bulky ligands, we have overcome this limitation and successfully solved the X-ray crystallographic structure of the largest reported thiolated gold nanomolecule, Au133S52. The total composition, Au133(SPh-tBu)52, was verified using high-resolution electrospray ionization mass spectrometry (ESI-MS). The experimental and simulated optical spectra show an emergent surface plasmon resonance that is more pronounced than in the slightly larger Au144(SCH2CH2Ph)60. Theoretical analysis indicates that the presence of rigid and bulky ligands is the key to the successful crystal formation.

  13. Structure and thermodynamics of a simple fluid

    NASA Astrophysics Data System (ADS)

    Stell, G.; Weis, J. J.

    1980-02-01

    Monte Carlo results are found for a simple fluid with a pair potential consisting of a hard-sphere core and a Lennard-Jones attractive tail. They are compared with several of the most promising recent theoretical treatments of simple fluids, all of which involve the decomposition of the pair potential into a hard-sphere-core term and an attractive-tail term. This direct comparison avoids the use of a second perturbation scheme associated with softening the core, which would introduce an ambiguity in the significance of the differences found between the theoretical and Monte Carlo results. The study includes the optimized random-phase approximation (ORPA) and exponential (EXP) approximations of Andersen and Chandler, an extension of the latter approximation to nodal order three (the N3 approximation), the linear-plus-square (LIN + SQ) approximation of Høye and Stell, the renormalized hypernetted chain (RHNC) approximation of Lado, and the quadratic (QUAD) approximation suggested by second-order self-consistent Γ ordering, the lowest order of which is identical to the ORPA. As anticipated on the basis of earlier studies, it is found that the EXP approximation yields radial distribution functions and structure factors of excellent overall accuracy in the liquid state, where the RHNC results are also excellent and the EXP, QUAD, and LIN + SQ results prove to be virtually indistinguishable from one another. For all the approximations, however, the thermodynamics from the compressibility relation are poor and the virial-theorem results are not uniformly reliable. Somewhat more surprisingly, it is found that the EXP results yield a negative structure factor S(k) for very small k in the liquid state and poor radial distribution functions at low densities. The RHNC results are nowhere worse than the EXP results and in some states (e.g., at low densities) much better. In contrast, the N3 results are better in some respects than the EXP results but worse in others. The

  14. Quantitative analysis of fungicide azoxystrobin in agricultural samples with rapid, simple and reliable monoclonal immunoassay.

    PubMed

    Watanabe, Eiki; Miyake, Shiro

    2013-01-15

    This work presents the analytical performance of a kit-based direct competitive enzyme-linked immunosorbent assay (dc-ELISA) for azoxystrobin detection in agricultural products. The dc-ELISA was sufficiently sensitive for the analysis of residue levels close to the maximum residue limits and showed no cross-reactivity to other strobilurin analogues. Absorbance decreased as the methanol concentration in the sample solution increased from 2% to 40%, while the standard curve was most linear when the sample solution contained 10% methanol. Agricultural samples were therefore extracted with methanol, and the extracts were diluted with water to a 10% methanol concentration. No significant matrix interference was observed. Satisfactory recovery was found for all spiked samples, and the results agreed well with liquid chromatography analysis. These results clearly indicate that the kit-based dc-ELISA is suitable for rapid, simple, quantitative and reliable determination of the fungicide. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. A common base method for analysis of qPCR data and the application of simple blocking in qPCR experiments.

    PubMed

    Ganger, Michael T; Dietz, Geoffrey D; Ewing, Sarah J

    2017-12-01

    qPCR has established itself as the technique of choice for the quantification of gene expression. Procedures for conducting qPCR have received significant attention; however, more rigorous approaches to the statistical analysis of qPCR data are needed. Here we develop a mathematical model, termed the Common Base Method, for the analysis of qPCR data based on threshold cycle values (Cq) and reaction efficiencies (E). The Common Base Method keeps all calculations in the log scale as long as possible by working with log10(E)·Cq, which we call the efficiency-weighted Cq value; subsequent statistical analyses are then applied in the log scale. We show how efficiency-weighted Cq values may be analyzed using a simple paired or unpaired experimental design and develop blocking methods to help reduce unexplained variation. The Common Base Method has several advantages. It allows for the incorporation of well-specific efficiencies and multiple reference genes. The method does not necessitate the pairing of samples that must be performed under traditional analysis methods in order to calculate relative expression ratios. Our method is also simple enough to be implemented in any spreadsheet or statistical software without additional scripts or proprietary components.
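    The efficiency-weighted quantity at the heart of the method is simple to compute. The sketch below uses hypothetical (E, Cq) well data and an unpaired comparison against a reference gene; it assumes E is expressed as the per-cycle fold change (E = 2 for a perfect reaction), and the sign convention for the back-transform is our reading, not a quote from the paper:

```python
import math
from statistics import mean

def efficiency_weighted_cq(e, cq):
    """Efficiency-weighted Cq: log10(E) * Cq, keeping the value in log10 scale.
    e  : amplification efficiency as per-cycle fold change (perfect reaction: 2)
    cq : threshold cycle for the well
    """
    return math.log10(e) * cq

# Hypothetical per-well (E, Cq) data for a target gene and a reference gene
target    = [(1.95, 24.1), (1.98, 23.8), (1.96, 24.4)]
reference = [(1.99, 18.2), (1.97, 18.5), (1.98, 18.3)]

# Statistics (t-tests, blocking) are applied to these log-scale values;
# exponentiation happens only at the reporting stage.
w_target = [efficiency_weighted_cq(e, c) for e, c in target]
w_ref = [efficiency_weighted_cq(e, c) for e, c in reference]
delta = mean(w_target) - mean(w_ref)
fold_change = 10 ** (-delta)  # back-transform for reporting only
print(round(delta, 3), round(fold_change, 4))
```

Because each well carries its own efficiency, wells with slightly different E values can be pooled without first pairing samples, which is the practical advantage the abstract highlights.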

  16. MALDI-MS analysis and theoretical evaluation of olanzapine as a UV laser desorption ionization (LDI) matrix.

    PubMed

    Musharraf, Syed Ghulam; Ameer, Mariam; Ali, Arslan

    2017-01-05

    Matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS), being a soft ionization technique, has become a method of choice for high-throughput analysis of proteins and peptides. In this study, we have explored the potential of the atypical anti-psychotic drug olanzapine (OLZ) as a matrix for MALDI-MS analysis of peptides, aided by theoretical studies. Seven small peptides were employed as target analytes to check the performance of olanzapine, which was compared with the conventional MALDI matrix α-cyano-4-hydroxycinnamic acid (HCCA). All peptides were successfully detected when olanzapine was used as a matrix. Moreover, the peptides angiotensin I and angiotensin II were detected with better S/N ratio and resolution with this method than with HCCA. Computational studies were performed to determine the thermochemical properties of olanzapine in order to further evaluate its similarity to MALDI matrices; these properties were found in good agreement with the data for existing MALDI matrices. Copyright © 2016. Published by Elsevier B.V.

  17. A simple landslide susceptibility analysis for hazard and risk assessment in developing countries

    NASA Astrophysics Data System (ADS)

    Guinau, M.; Vilaplana, J. M.

    2003-04-01

    In recent years, a number of techniques and methodologies have been developed for mitigating natural disasters. The complexity of these methodologies and the scarcity of material and data series justify the need for simple methodologies to obtain the information necessary for minimising the effects of catastrophic natural phenomena. Working with polygonal maps in a GIS allowed us to develop a simple methodology, applied in an area of 473 km2 in the Departamento de Chinandega (NW Nicaragua). This area was severely affected by a large number of landslides (mainly debris flows) triggered by the Hurricane Mitch rainfalls in October 1998. With the aid of aerial photography interpretation at 1:40,000 scale, enlarged to 1:20,000, and detailed field work, a landslide map at 1:10,000 scale was constructed. The failure zones of landslides were digitized in order to obtain a failure-zone digital map. A terrain-unit digital map, in which a series of physical-environmental terrain factors are represented, was also used. Dividing the studied area into two zones (A and B) with homogeneous physical and environmental characteristics allows us to develop the proposed methodology and to validate it. In zone A, the failure-zone digital map is superimposed onto the terrain-unit digital map to establish the relationship between the different terrain factors and the failure zones. The numerical expression of this relationship enables us to classify the terrain by its landslide susceptibility. In zone B, this numerical relationship was employed to obtain a landslide susceptibility map, obviating the need for a failure-zone map. The validity of the methodology can be tested in this area by using the degree of superposition of the susceptibility map and the failure-zone map. The implementation of the methodology in tropical countries with physical and environmental characteristics similar to those of the study area allows us to carry out a landslide susceptibility
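    The abstract does not spell out the numerical relationship it derives between terrain-factor classes and failure zones; a common simple stand-in is the frequency-ratio weight, sketched here with invented terrain units (the class names, areas and failure areas are illustrative only):

```python
def frequency_ratios(units):
    """Frequency-ratio susceptibility weights for terrain-factor classes.

    units : list of dicts, one per terrain unit, with keys
            'class'        - terrain-factor class label,
            'area'         - unit area,
            'failure_area' - area of mapped failure zones inside the unit.
    The ratio compares each class's failure density with the map-wide
    density; values above 1 flag classes over-represented in failures.
    """
    total_area = sum(u["area"] for u in units)
    total_fail = sum(u["failure_area"] for u in units)
    overall_density = total_fail / total_area
    ratios = {}
    for c in {u["class"] for u in units}:
        area = sum(u["area"] for u in units if u["class"] == c)
        fail = sum(u["failure_area"] for u in units if u["class"] == c)
        ratios[c] = (fail / area) / overall_density
    return ratios

# Hypothetical zone-A units: steep slopes vs. gentle plains (areas in km2)
units = [
    {"class": "steep", "area": 10.0, "failure_area": 2.0},
    {"class": "gentle", "area": 30.0, "failure_area": 1.0},
]
print(frequency_ratios(units))
```

Summing such class weights per terrain unit in zone B then yields a susceptibility score without needing a failure-zone map there, which mirrors the calibrate-in-A, predict-in-B design described above.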

  18. Holo-analysis.

    PubMed

    Rosen, G D

    2006-06-01

    Meta-analysis is a vague descriptor used to encompass very diverse methods of data collection analysis, ranging from simple averages to more complex statistical methods. Holo-analysis is a fully comprehensive statistical analysis of all available data and all available variables in a specified topic, with results expressed in a holistic factual empirical model. The objectives and applications of holo-analysis include software production for prediction of responses with confidence limits, translation of research conditions to praxis (field) circumstances, exposure of key missing variables, discovery of theoretically unpredictable variables and interactions, and planning future research. Holo-analyses are cited as examples of the effects on broiler feed intake and live weight gain of exogenous phytases, which account for 70% of variation in responses in terms of 20 highly significant chronological, dietary, environmental, genetic, managemental, and nutrient variables. Even better future accountancy of variation will be facilitated if and when authors of papers routinely provide key data for currently neglected variables, such as temperatures, complete feed formulations, and mortalities.

  19. Comparative forensic soil analysis of New Jersey state parks using a combination of simple techniques with multivariate statistics.

    PubMed

    Bonetti, Jennifer; Quarino, Lawrence

    2014-05-01

    This study has shown that the combination of simple techniques with the use of multivariate statistics offers the potential for the comparative analysis of soil samples. Five samples were obtained from each of twelve state parks across New Jersey in both the summer and fall seasons. Each sample was examined using particle-size distribution, pH analysis in both water and 1 M CaCl2 , and a loss on ignition technique. Data from each of the techniques were combined, and principal component analysis (PCA) and canonical discriminant analysis (CDA) were used for multivariate data transformation. Samples from different locations could be visually differentiated from one another using these multivariate plots. Hold-one-out cross-validation analysis showed error rates as low as 3.33%. Ten blind study samples were analyzed resulting in no misclassifications using Mahalanobis distance calculations and visual examinations of multivariate plots. Seasonal variation was minimal between corresponding samples, suggesting potential success in forensic applications. © 2014 American Academy of Forensic Sciences.
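    A minimal sketch of the PCA step applied to combined soil features (the feature values below are simulated stand-ins, not the New Jersey data; the CDA and Mahalanobis-distance classification steps are omitted):

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project feature rows onto their leading principal components.

    X : (n_samples, n_features) array of combined soil measurements,
        e.g. particle-size fractions, pH in water and in CaCl2, and
        loss on ignition, one row per soil sample.
    """
    Xc = X - X.mean(axis=0)            # center each feature
    cov = np.cov(Xc, rowvar=False)     # feature covariance matrix
    vals, vecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
    order = np.argsort(vals)[::-1]     # rank components by explained variance
    return Xc @ vecs[:, order[:n_components]]

# Simulated measurements from two parks, three samples each
# (columns: % sand, pH in water, pH in CaCl2, % loss on ignition)
rng = np.random.default_rng(0)
park_a = rng.normal([60.0, 5.5, 4.8, 8.0], 0.3, size=(3, 4))
park_b = rng.normal([40.0, 7.0, 6.2, 3.0], 0.3, size=(3, 4))
scores = pca_scores(np.vstack([park_a, park_b]))
print(scores.shape)  # one 2-D score per sample, ready for a multivariate plot
```

Plotting these scores is what lets samples from different locations be "visually differentiated" as the abstract describes; the supervised CDA step would then sharpen the separation along between-park axes.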

  20. Theoretical modeling of the catch-slip bond transition in biological adhesion

    NASA Astrophysics Data System (ADS)

    Gunnerson, Kim; Pereverzev, Yuriy; Prezhdo, Oleg

    2006-05-01

    The mechanism by which leukocytes leave the blood stream and enter inflamed tissue is called extravasation. This process is facilitated by the ability of selectin proteins, produced by the endothelial cells of blood vessels, to form transient bonds with the leukocytes. In the case of P-selectin, the protein bonds with P-selectin glycoprotein ligands (PSGL-1) produced by the leukocyte. Recent atomic force microscopy and flow chamber analyses of the binding of P-selectin to PSGL-1 provide evidence for an unusual biphasic catch-bond/slip-bond behavior in response to the strength of exerted force. This biphasic process is not well-understood. There are several theoretical models for describing this phenomenon. These models use different profiles for potential energy landscapes and how they change under forces. We are exploring these changes using molecular dynamics. We will present a simple theoretical model as well as share some of our early MD results for describing this phenomenon.

  1. Analysis in temporal regime of dispersive invisible structures designed from transformation optics

    NASA Astrophysics Data System (ADS)

    Gralak, B.; Arismendi, G.; Avril, B.; Diatta, A.; Guenneau, S.

    2016-03-01

    A simple invisible structure made of two anisotropic homogeneous layers is analyzed theoretically in the temporal regime. Frequency dispersion is introduced, and an analytic expression for the transient part of the field is derived for large times when the structure is illuminated by a causal excitation. This expression shows that the limiting amplitude principle applies, with transient fields decaying as time to the power -3/4. The quality of the cloak is thus reduced at short times but preserved at large times. The one-dimensional theoretical analysis is supplemented with full-wave numerical simulations in two-dimensional situations, which confirm the effect of dispersion.

  2. A theoretical basis for the analysis of multiversion software subject to coincident errors

    NASA Technical Reports Server (NTRS)

    Eckhardt, D. E., Jr.; Lee, L. D.

    1985-01-01

    Fundamental to the development of redundant software techniques (known as fault-tolerant software) is an understanding of the impact of multiple joint occurrences of errors, referred to here as coincident errors. A theoretical basis for the study of redundant software is developed which: (1) provides a probabilistic framework for empirically evaluating the effectiveness of a general multiversion strategy when component versions are subject to coincident errors, and (2) permits an analytical study of the effects of these errors. An intensity function, called the intensity of coincident errors, has a central role in this analysis. This function describes the propensity of programmers to introduce design faults in such a way that software components fail together when executing in the application environment. A condition under which a multiversion system is a better strategy than relying on a single version is given.
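
The role of an intensity function for coincident errors can be illustrated with a small Monte Carlo sketch. The mixture of "easy" and "hard" inputs below is an assumed toy distribution, not the paper's model; it only shows why correlated failures erode the benefit of voting:

```python
import random

def simulate(n_trials=100_000, versions=3, seed=1):
    """Toy N-version simulation with input-dependent failure intensity.

    Each input draws an intensity theta (the chance that any one version
    fails on it); versions then fail conditionally independently given
    theta. Hard inputs make versions fail together -- coincident errors.
    """
    rng = random.Random(seed)
    single_fail = majority_fail = 0
    for _ in range(n_trials):
        # Assumed mixture: most inputs easy, a few coincident-error prone.
        theta = 0.001 if rng.random() < 0.95 else 0.4
        fails = sum(rng.random() < theta for _ in range(versions))
        single_fail += rng.random() < theta
        majority_fail += fails > versions // 2
    return single_fail / n_trials, majority_fail / n_trials

p1, p3 = simulate()
# Majority voting helps, but far less than a naive independence
# calculation from p1 alone would suggest.
```

Under full independence the three-version failure probability would be orders of magnitude below p1; the hard-input mass keeps it the same order, which is the qualitative point of the intensity-function analysis.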

  3. Simple spatial scaling rules behind complex cities.

    PubMed

    Li, Ruiqi; Dong, Lei; Zhang, Jiang; Wang, Xinran; Wang, Wen-Xu; Di, Zengru; Stanley, H Eugene

    2017-11-28

    Although most of wealth and innovation have been the result of human interaction and cooperation, we are not yet able to quantitatively predict the spatial distributions of three main elements of cities: population, roads, and socioeconomic interactions. By a simple model mainly based on spatial attraction and matching growth mechanisms, we reveal that the spatial scaling rules of these three elements are in a consistent framework, which allows us to use any single observation to infer the others. All numerical and theoretical results are consistent with empirical data from ten representative cities. In addition, our model can also provide a general explanation of the origins of the universal super- and sub-linear aggregate scaling laws and accurately predict kilometre-level socioeconomic activity. Our work opens a new avenue for uncovering the evolution of cities in terms of the interplay among urban elements, and it has a broad range of applications.
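
The super- and sub-linear aggregate scaling laws mentioned above take the form Y = c·N^β against population N. A minimal log-log least-squares fit, using synthetic data with an illustrative exponent of 1.15 (not the paper's empirical values), looks like this:

```python
import math

def fit_power_law(n, y):
    """Least-squares fit of y = c * n**beta on log-log axes."""
    lx = [math.log(v) for v in n]
    ly = [math.log(v) for v in y]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    beta = num / den                  # scaling exponent
    c = math.exp(my - beta * mx)     # prefactor
    return c, beta

# Synthetic cities obeying a super-linear law Y ~ N**1.15 (illustrative)
pops = [1e5, 3e5, 1e6, 3e6, 1e7]
gdp = [2.0 * p ** 1.15 for p in pops]
c, beta = fit_power_law(pops, gdp)   # recovers beta = 1.15
```

β > 1 corresponds to super-linear quantities (e.g., socioeconomic output) and β < 1 to sub-linear ones (e.g., road length).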

  4. Interatomic potential at small internuclear distances. A simple formula for the screening constant

    NASA Astrophysics Data System (ADS)

    Zinoviev, A. N.

    2017-09-01

    A simple formula for estimating the screening constant is proposed. The formula fits experimental data on interaction potentials well. A quantitative description of the experimentally observed effect of electronic screening on the nuclear fusion reaction cross-section for the D+-D system has been obtained. It is concluded that the discrepancies between measured cross-sections and theoretically predicted values, which occur in more complicated fusion reactions, are not caused by uncertainties in the knowledge of the potentials.

  5. Wettability of graphitic-carbon and silicon surfaces: MD modeling and theoretical analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramos-Alvarado, Bladimir; Kumar, Satish; Peterson, G. P.

    2015-07-28

    The wettability of graphitic carbon and silicon surfaces was numerically and theoretically investigated. A multi-response method has been developed for the analysis of conventional molecular dynamics (MD) simulations of droplet wettability. The contact angle and indicators of the quality of the computations are tracked as a function of the data sets analyzed over time. This method of analysis allows accurate calculation of the contact angle from the MD simulations. Analytical models were also developed for the calculation of the work of adhesion using mean-field theory, accounting for interfacial entropy changes. A calibration method is proposed to provide better predictions of the respective contact angles under different solid-liquid interaction potentials. Estimates of the binding energy between a water monomer and graphite match those previously reported. In addition, a breakdown in the relationship between the binding energy and the contact angle was observed. The macroscopic contact angles obtained from the MD simulations were found to match those predicted by the mean-field model for graphite under different wettability conditions, as well as the contact angles of Si(100) and Si(111) surfaces. Finally, an assessment of the effect of the Lennard-Jones cutoff radius was conducted to provide guidelines for future comparisons between numerical simulations and analytical models of wettability.

  6. Isospectral drums and simple groups

    NASA Astrophysics Data System (ADS)

    Thas, Koen

    Nearly every known pair of isospectral but nonisometric manifolds — the most famous being isospectral bounded ℝ-planar domains, which make one “not hear the shape of a drum” [M. Kac, Can one hear the shape of a drum? Amer. Math. Monthly 73(4 part 2) (1966) 1-23] — arises from the (group-theoretical) Gassmann-Sunada method. Moreover, all the known ℝ-planar examples (i.e. counterexamples to Kac’s question) are constructed through a famous specialization of this method, called transplantation. We first describe a number of very general classes of length-equivalent manifolds, with isospectral manifolds as particular cases, each construction starting from a given example that itself arises from the Gassmann-Sunada method. The constructions include the examples arising from the transplantation technique (and thus in particular the known planar examples). To that end, we introduce four properties — called FF, MAX, PAIR and INV — inspired by natural physical properties (which rule out trivial constructions) and satisfied by each of the known planar examples. Conversely, we show that length-equivalent manifolds with FF, MAX, PAIR and INV which arise from the Gassmann-Sunada method must fall under one of our prior constructions, thus giving a precise classification of these objects. Due to the nature of our constructions and properties, a deep connection with finite simple groups occurs, which may seem rather surprising in the context of this paper. On the other hand, our properties define in some sense physically irreducible pairs of length-equivalent manifolds — “atoms” of general pairs of length-equivalent manifolds, in that a general pair of manifolds is patched together out of irreducible pairs — which is precisely what simple groups are to general groups.

  7. Simultaneous pre-concentration and separation on simple paper-based analytical device for protein analysis.

    PubMed

    Niu, Ji-Cheng; Zhou, Ting; Niu, Li-Li; Xie, Zhen-Sheng; Fang, Fang; Yang, Fu-Quan; Wu, Zhi-Yong

    2018-02-01

    In this work, fast isoelectric focusing (IEF) was successfully implemented on an open paper fluidic channel for simultaneous concentration and separation of proteins from complex matrix. With this simple device, IEF can be finished in 10 min with a resolution of 0.03 pH units and concentration factor of 10, as estimated by color model proteins by smartphone-based colorimetric detection. Fast detection of albumin from human serum and glycated hemoglobin (HBA1c) from blood cell was demonstrated. In addition, off-line identification of the model proteins from the IEF fractions with matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF-MS) was also shown. This PAD IEF is potentially useful either for point of care test (POCT) or biomarker analysis as a cost-effective sample pretreatment method.

  8. Theoretical analysis of AlGaN/GaN resonant tunnelling diodes with step heterojunctions spacer and sub-quantum well

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Gao, B.; Gong, M.

    2017-06-01

    In this paper, we propose the use of a step heterojunctions emitter spacer (SHES) and an InGaN sub-quantum well in AlGaN/GaN/AlGaN double-barrier resonant tunnelling diodes (RTDs). A theoretical analysis of the RTD with SHES and InGaN sub-quantum well is presented, indicating that the negative differential resistance (NDR) characteristic is improved. The simulation results (peak current density JP = 82.67 mA/μm², peak-to-valley current ratio PVCR = 3.38, and intrinsic negative differential resistance RN = -0.147 Ω at room temperature) verify the improvement in the NDR characteristic brought about by the SHES and InGaN sub-quantum well. Both the theoretical analysis and the simulation results show that device performance improves greatly; in particular, the average oscillator output power reaches 2.77 mW/μm². The resistive cut-off frequency also benefits from the relatively small RN. Our work provides an important alternative to current approaches in designing new-structure GaN-based RTDs for practical high-frequency and high-power applications.

  9. Theoretical and experimental analyses to determine the effects of crystal orientation and grain size on the thermoelectric properties of oblique deposited bismuth telluride thin films

    NASA Astrophysics Data System (ADS)

    Morikawa, Satoshi; Satake, Yuji; Takashiri, Masayuki

    2018-06-01

    The effects of crystal orientation and grain size on the thermoelectric properties of Bi2Te3 thin films were investigated by conducting experimental and theoretical analyses. To vary the crystal orientation and grain size, we performed oblique deposition, followed by thermal annealing treatment. The crystal orientation decreased as the oblique angle was increased, while the grain size was not changed significantly. The thermoelectric properties were measured at room temperature. A theoretical analysis was performed using a first-principles method based on density functional theory. The semi-classical Boltzmann transport equation was then used in the relaxation time approximation, with the effect of grain size included. Furthermore, the effect of crystal orientation was included in the calculation based on a simple semi-empirical model. A maximum power factor of 11.6 µW/(cm·K²) was obtained at an oblique angle of 40°. The calculated thermoelectric properties were in very good agreement with the experimentally measured values.
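
The power factor quoted above is PF = S²σ, reported in µW/(cm·K²). A minimal unit-conversion sketch, with a Seebeck coefficient and electrical conductivity chosen purely for illustration (they are not the measured film properties):

```python
def power_factor_uw_per_cm_k2(seebeck_v_per_k, sigma_s_per_m):
    """Thermoelectric power factor PF = S**2 * sigma.

    Inputs in SI units (V/K and S/m); the result is converted to the
    uW/(cm*K^2) units used in the abstract.
    """
    pf_w_per_m_k2 = seebeck_v_per_k ** 2 * sigma_s_per_m  # W/(m*K^2)
    return pf_w_per_m_k2 * 1e6 / 100.0                    # -> uW/(cm*K^2)

# Illustrative values: S = 200 uV/K, sigma = 2.9e4 S/m
pf = power_factor_uw_per_cm_k2(200e-6, 2.9e4)  # = 11.6 uW/(cm*K^2)
```

The conversion factor is 1 W/(m·K²) = 10⁴ µW/(cm·K²), since 1 W = 10⁶ µW and 1 m = 100 cm.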

  10. Quantitative analysis of tympanic membrane perforation: a simple and reliable method.

    PubMed

    Ibekwe, T S; Adeosun, A A; Nwaorgu, O G

    2009-01-01

    Accurate assessment of the features of tympanic membrane perforation, especially size, site, duration and aetiology, is important, as it enables optimum management. To describe a simple, cheap and effective method of quantitatively analysing tympanic membrane perforations. The system described comprises a video-otoscope (capable of generating still and video images of the tympanic membrane), adapted via a universal serial bus box to a computer screen, with images analysed using the Image J geometrical analysis software package. The reproducibility of results and their correlation with conventional otoscopic methods of estimation were tested statistically with the paired t-test and correlational tests, using the Statistical Package for the Social Sciences version 11 software. The following equation was generated: P/T × 100 per cent = percentage perforation, where P is the area (in pixels²) of the tympanic membrane perforation and T is the total area (in pixels²) for the entire tympanic membrane (including the perforation). Illustrations are shown. Comparison of blinded data on tympanic membrane perforation area obtained independently from assessments by two trained otologists, of comparative years of experience, using the video-otoscopy system described, showed similar findings, with strong correlations devoid of inter-observer error (p = 0.000, r = 1). Comparison with conventional otoscopic assessment also indicated significant correlation, comparing results for two trained otologists, but some inter-observer variation was present (p = 0.000, r = 0.896). Correlation between the two methods for each of the otologists was also highly significant (p = 0.000). A computer-adapted video-otoscope, with images analysed by Image J software, represents a cheap, reliable, technology-driven, clinical method of quantitative analysis of tympanic membrane perforations and injuries.
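
The perforation equation above reduces to a one-line calculation once the two pixel areas have been measured in Image J. A sketch with hypothetical pixel counts:

```python
def perforation_percent(perforation_px, total_px):
    """Percentage perforation = P / T * 100, per the equation above.

    P is the perforated area and T the total tympanic membrane area
    (including the perforation), both in pixels^2.
    """
    if not 0 < perforation_px <= total_px:
        raise ValueError("perforation area must be positive and <= total")
    return 100.0 * perforation_px / total_px

# e.g., a 12,500 px^2 perforation in an 80,000 px^2 membrane
pct = perforation_percent(12_500, 80_000)   # 15.625 %
```

Because both areas are measured in the same image, the pixel-to-millimetre scale cancels and no calibration is needed for the percentage.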

  11. Variable stoichiometry in active ion transport: theoretical analysis of physiological consequences.

    PubMed Central

    Johnson, E A; Tanford, C; Reynolds, J A

    1985-01-01

    Active ion transport systems with fixed stoichiometry are subject to a thermodynamic limit on the ion concentration gradients that they can generate and maintain, and their net rates of transport must inevitably decrease as this limit is approached. The capability to vary stoichiometry might thus be physiologically advantageous: a shift to lower stoichiometry (fewer ions pumped per reaction cycle) at increasing thermodynamic load could increase the limit on the supportable concentration gradient and could accelerate the rate of transport under high-load conditions. Here we present a theoretical and numerical analysis of this possibility, using the sarcoplasmic reticulum ATP-driven Ca pump as the example. It is easy to introduce alternate pathways into the reaction cycle for this system to shift the stoichiometry (Ca2+/ATP) from the normal value of 2:1 to 1:1, but it cannot be done without simultaneous generation of a pathway for uncoupled leak of Ca2+ across the membrane. This counteracts the advantageous effect of the change in transport stoichiometry and a physiologically useful rate acceleration cannot be obtained. This result is likely to be generally applicable to most active transport systems. PMID:3860866

  12. Variable stoichiometry in active ion transport: theoretical analysis of physiological consequences.

    PubMed

    Johnson, E A; Tanford, C; Reynolds, J A

    1985-08-01

    Active ion transport systems with fixed stoichiometry are subject to a thermodynamic limit on the ion concentration gradients that they can generate and maintain, and their net rates of transport must inevitably decrease as this limit is approached. The capability to vary stoichiometry might thus be physiologically advantageous: a shift to lower stoichiometry (fewer ions pumped per reaction cycle) at increasing thermodynamic load could increase the limit on the supportable concentration gradient and could accelerate the rate of transport under high-load conditions. Here we present a theoretical and numerical analysis of this possibility, using the sarcoplasmic reticulum ATP-driven Ca pump as the example. It is easy to introduce alternate pathways into the reaction cycle for this system to shift the stoichiometry (Ca2+/ATP) from the normal value of 2:1 to 1:1, but it cannot be done without simultaneous generation of a pathway for uncoupled leak of Ca2+ across the membrane. This counteracts the advantageous effect of the change in transport stoichiometry and a physiologically useful rate acceleration cannot be obtained. This result is likely to be generally applicable to most active transport systems.
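
The thermodynamic limit invoked in this abstract follows from equating the pump's free-energy input to the work of moving n ions up their gradient. A sketch ignoring membrane potential, with an assumed in-vivo free energy of ATP hydrolysis (the numbers are illustrative, not from the paper):

```python
import math

def max_gradient(delta_g_atp_j_per_mol, n_ions, temp_k=310.0):
    """Thermodynamic ceiling on an ion gradient for an ATP-driven pump.

    Ignoring membrane potential, net transport stops when
    n * R * T * ln(C_high / C_low) = |dG_ATP|, so the maximal
    concentration ratio is exp(|dG_ATP| / (n * R * T)).
    """
    R = 8.314  # gas constant, J/(mol*K)
    return math.exp(delta_g_atp_j_per_mol / (n_ions * R * temp_k))

dg = 50_000.0  # assumed |dG| of ATP hydrolysis in vivo, J/mol
ratio_2to1 = max_gradient(dg, 2)   # normal 2 Ca2+ per ATP
ratio_1to1 = max_gradient(dg, 1)   # shifted 1 Ca2+ per ATP
# Halving the stoichiometry squares the supportable gradient,
# which is why a variable-stoichiometry pump looks attractive on paper.
```

This is the advantage the paper tests; its conclusion is that the obligatory Ca2+ leak pathway accompanying the stoichiometry shift cancels the benefit in practice.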

  13. Theoretical Framework of Leadership in Higher Education of England and Wales

    ERIC Educational Resources Information Center

    Mukan, Nataliya; Havrylyuk, Marianna; Stolyarchuk, Lesia

    2015-01-01

    In the article the theoretical framework of leadership in higher education of England and Wales has been studied. The main objectives of the article are defined as analysis of scientific and pedagogical literature, which highlights different aspects of the problem under research; characteristic of the theoretical fundamentals of educational…

  14. Theoretical and observational constraints on Tachyon Inflation

    NASA Astrophysics Data System (ADS)

    Barbosa-Cendejas, Nandinii; De-Santiago, Josue; German, Gabriel; Hidalgo, Juan Carlos; Rigel Mora-Luna, Refugio

    2018-03-01

    We constrain several models in Tachyonic Inflation derived from the large-N formalism by considering theoretical aspects as well as the latest observational data. On the theoretical side, we assess the field range of our models by means of the excursion of the equivalent canonical field. On the observational side, we employ BK14+PLANCK+BAO data to perform a parameter estimation analysis as well as a Bayesian model selection to distinguish the most favoured models among the four classes presented here. We observe that the original potential V ∝ sech(T) is strongly disfavoured by observations with respect to a reference model with flat priors on inflationary observables. This realisation of Tachyon inflation also presents a large field range which may demand further quantum corrections. We also provide examples of potentials derived from the polynomial and the perturbative classes which are both statistically favoured and theoretically acceptable.

  15. Noise studies of communication systems using the SYSTID computer aided analysis program

    NASA Technical Reports Server (NTRS)

    Tranter, W. H.; Dawson, C. T.

    1973-01-01

    SYSTID is a simple computer-aided design program for simulating data systems and communication links. The efficiency of the method was tested by simulating a linear analog communication system to determine its noise performance and comparing the SYSTID result with a theoretical calculation. It is shown that the SYSTID program is readily applicable to the analysis of such systems.

  16. Simple Spreadsheet Models For Interpretation Of Fractured Media Tracer Tests

    EPA Science Inventory

    An analysis of a gas-phase partitioning tracer test conducted through fractured media is discussed within this paper. The analysis employed matching eight simple mathematical models to the experimental data to determine transport parameters. All of the models tested: two porous...

  17. Simple proteomics data analysis in the object-oriented PowerShell.

    PubMed

    Mohammed, Yassene; Palmblad, Magnus

    2013-01-01

    Scripting languages such as Perl and Python are appreciated for solving simple, everyday tasks in bioinformatics. A more recent, object-oriented command shell and scripting language, Windows PowerShell, has many attractive features: an object-oriented interactive command line, fluent navigation and manipulation of XML files, ability to consume Web services from the command line, consistent syntax and grammar, rich regular expressions, and advanced output formatting. The key difference between classical command shells and scripting languages, such as bash, and object-oriented ones, such as PowerShell, is that in the latter the result of a command is a structured object with inherited properties and methods rather than a simple stream of characters. Conveniently, PowerShell is included in all new releases of Microsoft Windows and therefore already installed on most computers in classrooms and teaching labs. In this chapter we demonstrate how PowerShell in particular allows easy interaction with mass spectrometry data in XML formats, connection to Web services for tools such as BLAST, and presentation of results as formatted text or graphics. These features make PowerShell much more than "yet another scripting language."

  18. Theoretical vibrational spectra of diformates: Diformate anion

    NASA Astrophysics Data System (ADS)

    Dobrowolski, Jan Cz.; Jamróz, Michał H.; Kazimirski, Jan K.; Bajdor, Krzysztof; Borowiak, Marek A.; Larsson, Ragnar

    1999-05-01

    The IR spectrum of the most stable diformate anion was calculated at the MP2/6-311++G(3df,3pd), RHF/6-311++G**, and B3PW91/6-311++G** levels. Internal coordinates were defined for the diformate anion and used in potential energy distribution (PED) analysis. The PED analysis of the theoretical spectra forms the basis for elucidation of future matrix-isolation IR spectra.

  19. Implicit assumptions underlying simple harvest models of marine bird populations can mislead environmental management decisions.

    PubMed

    O'Brien, Susan H; Cook, Aonghais S C P; Robinson, Robert A

    2017-10-01

    Assessing the potential impact of additional mortality from anthropogenic causes on animal populations requires detailed demographic information. However, these data are frequently lacking, making simple algorithms, which require little data, appealing. Because of their simplicity, these algorithms often rely on implicit assumptions, some of which may be quite restrictive. Potential Biological Removal (PBR) is a simple harvest model that estimates the number of additional mortalities that a population can theoretically sustain without causing population extinction. However, PBR relies on a number of implicit assumptions, particularly around density dependence and population trajectory that limit its applicability in many situations. Among several uses, it has been widely employed in Europe in Environmental Impact Assessments (EIA), to examine the acceptability of potential effects of offshore wind farms on marine bird populations. As a case study, we use PBR to estimate the number of additional mortalities that a population with characteristics typical of a seabird population can theoretically sustain. We incorporated this level of additional mortality within Leslie matrix models to test assumptions within the PBR algorithm about density dependence and current population trajectory. Our analyses suggest that the PBR algorithm identifies levels of mortality which cause population declines for most population trajectories and forms of population regulation. Consequently, we recommend that practitioners do not use PBR in an EIA context for offshore wind energy developments. Rather than using simple algorithms that rely on potentially invalid implicit assumptions, we recommend use of Leslie matrix models for assessing the impact of additional mortality on a population, enabling the user to explicitly define assumptions and test their importance. Copyright © 2017 Elsevier Ltd. All rights reserved.
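
The PBR algorithm criticised above is commonly written as PBR = 0.5 · Rmax · Fr · Nmin, and the recommended Leslie-matrix check can be sketched alongside it. The vital rates, recovery factor, and population numbers below are invented seabird-like values for illustration, not the paper's case study:

```python
import numpy as np

def pbr(n_min, r_max, recovery_factor=0.5):
    """Potential Biological Removal: PBR = 0.5 * Rmax * Fr * Nmin."""
    return 0.5 * r_max * recovery_factor * n_min

def project(years, harvest):
    """Two-stage Leslie projection with a fixed annual removal of adults."""
    L = np.array([[0.00, 0.10],   # fecundity: female chicks per adult (assumed)
                  [0.85, 0.92]])  # juvenile and adult survival (assumed)
    n = np.array([500.0, 1500.0])  # juveniles, adults
    for _ in range(years):
        n = L @ n
        n[1] = max(n[1] - harvest, 0.0)  # extra mortality applied to adults
    return n.sum()

removals = pbr(n_min=2000, r_max=0.10)  # 50 birds per year
p_baseline = project(50, 0.0)
p_with_pbr = project(50, removals)
# For this near-stationary population (growth rate ~1.005), removing the
# PBR level of mortality drives the projection steeply downward.
```

The contrast between the two projections illustrates the paper's point: PBR implicitly assumes a growing, density-regulated population, and a matrix model makes the assumed trajectory and regulation explicit instead.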

  20. A Model of Resource Allocation in Public School Districts: A Theoretical and Empirical Analysis.

    ERIC Educational Resources Information Center

    Chambers, Jay G.

    This paper formulates a comprehensive model of resource allocation in a local public school district. The theoretical framework specified could be applied equally well to any number of local public social service agencies. Section 1 develops the theoretical model describing the process of resource allocation. This involves the determination of the…

  1. Sound transmission through lightweight double-leaf partitions: theoretical modelling

    NASA Astrophysics Data System (ADS)

    Wang, J.; Lu, T. J.; Woodhouse, J.; Langley, R. S.; Evans, J.

    2005-09-01

    This paper presents theoretical modelling of the sound transmission loss through double-leaf lightweight partitions stiffened with periodically placed studs. First, by assuming that the effect of the studs can be replaced with elastic springs uniformly distributed between the sheathing panels, a simple smeared model is established. Second, periodic structure theory is used to develop a more accurate model taking account of the discrete placing of the studs. Both models treat incident sound waves in the horizontal plane only, for simplicity. The predictions of the two models are compared, to reveal the physical mechanisms determining sound transmission. The smeared model predicts relatively simple behaviour, in which the only conspicuous features are associated with coincidence effects with the two types of structural wave allowed by the partition model, and internal resonances of the air between the panels. In the periodic model, many more features are evident, associated with the structure of pass- and stop-bands for structural waves in the partition. The models are used to explain the effects of incidence angle and of the various system parameters. The predictions are compared with existing test data for steel plates with wooden stiffeners, and good agreement is obtained.
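
One feature that smeared double-leaf models of this kind capture is the mass-air-mass resonance, where the two panels bounce on the air spring of the cavity. A sketch using the standard normal-incidence formula from double-leaf theory, with illustrative panel masses and cavity depth rather than the paper's test configuration:

```python
import math

def mass_air_mass_resonance(m1, m2, gap, rho0=1.21, c0=343.0):
    """Mass-air-mass resonance of a double-leaf partition, normal incidence.

    f0 = (1 / 2*pi) * sqrt(rho0 * c0**2 * (m1 + m2) / (gap * m1 * m2)),
    with m1, m2 the panel masses per unit area (kg/m^2) and gap the
    cavity depth (m); rho0 and c0 are the density and sound speed of air.
    """
    stiffness = rho0 * c0 ** 2 / gap   # air-spring stiffness per unit area
    return math.sqrt(stiffness * (m1 + m2) / (m1 * m2)) / (2 * math.pi)

# Two leaves of ~10 kg/m^2 each (roughly 12.5 mm plasterboard),
# separated by a 100 mm cavity: f0 is about 85 Hz.
f0 = mass_air_mass_resonance(10.0, 10.0, 0.10)
```

Below this frequency the partition transmits sound almost as a single combined mass; the smeared and periodic models in the paper differ mainly above it, where structural pass- and stop-bands appear.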

  2. A Simple Diagnostic Model of the Circulation Beneath an Ice Shelf

    NASA Astrophysics Data System (ADS)

    Jenkins, Adrian; Nøst, Ole Anders

    2017-04-01

    The ocean circulation beneath ice shelves supplies the heat required to melt ice and exports the resulting freshwater. It therefore plays a key role in determining the mass balance and geometry of the ice shelves and hence the restraint they impose on the outflow of grounded ice from the interior of the ice sheet. Despite this critical role in regulating the ice sheet's contribution to eustatic sea level, an understanding of some of the most basic features of the circulation is lacking. The conventional paradigm is one of a buoyancy-forced overturning circulation, with inflow of warm, salty water along the seabed and outflow of cooled and freshened waters along the ice base. However, most sub-ice-shelf cavities are broad relative to the internal Rossby radius, so a horizontal circulation accompanies the overturning. Primitive equation ocean models applied to idealised geometries produce cyclonic gyres of comparable magnitude, but in the absence of a theoretical understanding of what controls the gyre strength, those solutions can only be validated against each other. Furthermore, we have no understanding of how the gyre circulation should change given more complex geometries. To begin to address this gap in our theoretical understanding, we present a simple, linear, steady-state model for the circulation beneath an ice shelf. Our approach is analogous to that of Stommel's classic analysis of the wind-driven gyres, but is complicated by the fact that his most basic assumption of homogeneity is inappropriate. The only forcing on the flow beneath an ice shelf arises because of the horizontal density gradients set up by melting. We thus arrive at a diagnostic model which gives us the depth-dependent horizontal circulation that results from an imposed geometry and density distribution. We describe the development of the model and present some preliminary solutions for the simplest cavity geometries.

  3. Simple cost analysis of a rural dental training facility in Australia.

    PubMed

    Lalloo, Ratilal; Massey, Ward

    2013-06-01

    Student clinical placements away from the university dental school clinics are an integral component of dental training programs. In 2009, the School of Dentistry and Oral Health, Griffith University, commenced a clinical placement in a remote rural community in Australia. This paper presents a simple cost analysis of the project from mid-2008 to mid-2011. All expenditures of the project are audited by the Financial and Planning Services unit of the university. The budget was divided into capital and operational costs, and the latter were further subdivided into salary and non-salary costs, and these were further analysed for the various types of expenditures incurred. The value of the treatments provided and income generated is also presented. Remote rural placements have additional (to the usual university dental clinic) costs in terms of salary incentives, travel, accommodation and subsistence support. However, the benefits of the placement to both the students and the local community might outweigh the additional costs of the placement. Because of high costs of rural student clinical placements, the financial support of partners, including the local Shire Council, state/territory and Commonwealth governments, is crucial in the establishment and ongoing sustainability of rural dental student clinical placements. © 2013 The Authors. Australian Journal of Rural Health © National Rural Health Alliance Inc.

  4. A Simple PCR Method for Rapid Genotype Analysis of Mycobacterium ulcerans

    PubMed Central

    Stinear, Timothy; Davies, John K.; Jenkin, Grant A.; Portaels, Françoise; Ross, Bruce C.; Oppedisano, Frances; Purcell, Maria; Hayman, John A.; Johnson, Paul D. R.

    2000-01-01

    Two high-copy-number insertion sequences, IS2404 and IS2606, were recently identified in Mycobacterium ulcerans and were shown by Southern hybridization to possess restriction fragment length polymorphism between strains from different geographic origins. We have designed a simple genotyping method that captures these differences by PCR amplification of the region between adjacent copies of IS2404 and IS2606. We have called this system 2426 PCR. The method is rapid, reproducible, sensitive, and specific for M. ulcerans, and it has confirmed previous studies suggesting a clonal population structure of M. ulcerans within a geographic region. M. ulcerans isolates from Australia, Papua New Guinea, Malaysia, Surinam, Mexico, Japan, China, and several countries in Africa were easily differentiated based on an array of 4 to 14 PCR products ranging in size from 200 to 900 bp. Numerical analysis of the banding patterns suggested a close evolutionary link between M. ulcerans isolates from Africa and southeast Asia. The application of 2426 PCR to total DNA, extracted directly from M. ulcerans-infected tissue specimens without culture, demonstrated the sensitivity and specificity of this method and confirmed for the first time that both animal and human isolates from areas of endemicity in southeast Australia have the same genotype. PMID:10747130

  5. Simple estimation procedures for regression analysis of interval-censored failure time data under the proportional hazards model.

    PubMed

    Sun, Jianguo; Feng, Yanqin; Zhao, Hui

    2015-01-01

    Interval-censored failure time data occur in many fields including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both regression parameters and baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well for practical situations.

  6. Theoretical and experimental study of a fiber optic microphone

    NASA Technical Reports Server (NTRS)

    Hu, Andong; Cuomo, Frank W.; Zuckerwar, Allan J.

    1992-01-01

    Modifications to condenser microphone theory yield new expressions for the membrane deflections at its center, which provide the basic theory for the fiber optic microphone. The theoretical analysis for the membrane amplitude and the phase response of the fiber optic microphone is given in detail in terms of its basic geometrical quantities. A relevant extension to the original concepts of the optical microphone includes the addition of a backplate with holes similar in design to present condenser microphone technology. This approach generates improved damping characteristics and extended frequency response that were not previously considered. The construction and testing of the improved optical fiber microphone provide experimental data that are in good agreement with the theoretical analysis.

  7. The mechanistic basis of internal conductance: a theoretical analysis of mesophyll cell photosynthesis and CO2 diffusion.

    PubMed

    Tholen, Danny; Zhu, Xin-Guang

    2011-05-01

    Photosynthesis is limited by the conductance of carbon dioxide (CO(2)) from intercellular spaces to the sites of carboxylation. Although the concept of internal conductance (g(i)) has been known for over 50 years, shortcomings in the theoretical description of this process may have resulted in a limited understanding of the underlying mechanisms. To tackle this issue, we developed a three-dimensional reaction-diffusion model of photosynthesis in a typical C(3) mesophyll cell that includes all major components of the CO(2) diffusion pathway and associated reactions. Using this novel systems model, we systematically and quantitatively examined the mechanisms underlying g(i). Our results identify the resistances of the cell wall and chloroplast envelope as the most significant limitations to photosynthesis. In addition, the concentration of carbonic anhydrase in the stroma may also be limiting for the photosynthetic rate. Our analysis demonstrated that higher levels of photorespiration increase the apparent resistance to CO(2) diffusion, an effect that has thus far been ignored when determining g(i). Finally, we show that outward bicarbonate leakage through the chloroplast envelope could contribute to the observed decrease in g(i) under elevated CO(2). Our analysis suggests that physiological and anatomical features associated with g(i) have been evolutionarily fine-tuned to benefit CO(2) diffusion and photosynthesis. The model presented here provides a novel theoretical framework to further analyze the mechanisms underlying diffusion processes in the mesophyll.

  8. Teaching Strategic Thinking on Oligopoly: Classroom Activity and Theoretic Analysis

    ERIC Educational Resources Information Center

    Han, Yongseung; Ryan, Michael

    2017-01-01

    This paper examines the use of a simple classroom activity, in which students are asked to take action representing either collusion or competition for extra credit to teach strategic thinking required in an oligopolistic market. We suggest that the classroom activity is first initiated prior to the teaching of oligopoly and then the instructor…

  9. Further theoretical studies of modified cyclone separator as a diesel soot particulate emission arrester.

    PubMed

    Mukhopadhyay, N; Bose, P K

    2009-10-01

    Soot particulate emission from diesel engines is one of the most pressing problems associated with exhaust pollution. Diesel particulate filters (DPF) hold out the prospect of substantially reducing regulated particulate emissions, but reliable regeneration of the filters remains a difficult hurdle to overcome. Many of the solutions proposed to date suffer from design complexity, cost, regeneration problems, and energy demands. This study presents a computer-aided theoretical analysis for controlling diesel soot particulate emission with a cyclone separator, a non-contact particulate removal system, considering the outer vortex flow, the inner vortex flow, and a packed ceramic fiber filter at the end of the vortex finder tube. A cyclone separator has low initial cost and simple construction, produces low back pressure, and achieves reasonably high collection efficiencies with reduced regeneration problems. The separator is modified by placing a continuous packed ceramic fiber filter at the end of the vortex finder tube. In this work, a grade efficiency model for diesel soot particulate emission is proposed that accounts for the outer vortex, the inner vortex, and the continuous packed ceramic fiber filter. A pressure drop model is also proposed that includes the effect of the ceramic fiber filter. The proposed model gives reasonably good collection efficiency within the permissible pressure drop limit of diesel engine operation. A theoretical approach is presented for calculating the cut size diameter, accounting for the Cunningham molecular slip correction factor. The results show good agreement with existing cyclone and DPF flow characteristics.

  10. Large deviation analysis of a simple information engine

    NASA Astrophysics Data System (ADS)

    Maitland, Michael; Grosskinsky, Stefan; Harris, Rosemary J.

    2015-11-01

    Information thermodynamics provides a framework for studying the effect of feedback loops on entropy production. It has enabled the understanding of novel thermodynamic systems such as the information engine, which can be seen as a modern version of "Maxwell's Dæmon," whereby a feedback controller processes information gained by measurements in order to extract work. Here, we analyze a simple model of such an engine that uses feedback control based on measurements to obtain negative entropy production. We focus on the distribution and fluctuations of the information obtained by the feedback controller. Significantly, our model allows an analytic treatment for a two-state system with exact calculation of the large deviation rate function. These results suggest an approximate technique for larger systems, which is corroborated by simulation data.
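    The rate-function machinery behind this kind of analysis can be illustrated on an even simpler system than the paper's feedback engine. For i.i.d. binary measurement outcomes, Cramér's theorem gives the large deviation rate function of the time-averaged outcome in closed form. The sketch below is a generic illustration of a large deviation rate function, not the authors' two-state feedback model:

```python
import math

def cramer_rate(x, p):
    """Cramer large-deviation rate function for the time average of i.i.d.
    Bernoulli(p) outcomes: P(mean ~ x) ~ exp(-n * I(x)) for large n."""
    if x == 0.0:
        return -math.log(1 - p)
    if x == 1.0:
        return -math.log(p)
    return x * math.log(x / p) + (1 - x) * math.log((1 - x) / (1 - p))

# The rate function vanishes at the typical value x = p and is positive elsewhere,
# so fluctuations away from the mean are exponentially suppressed in n.
typical = cramer_rate(0.3, 0.3)
rare = cramer_rate(0.7, 0.3)
```

    In the paper's setting the outcomes are correlated through the feedback loop, which is why an exact two-state calculation (rather than the i.i.d. formula above) is needed.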

  11. Eradication of Helicobacter pylori for prevention of ulcer recurrence after simple closure of perforated peptic ulcer: a meta-analysis of randomized controlled trials.

    PubMed

    Wong, Chung-Shun; Chia, Chee-Fah; Lee, Hung-Chia; Wei, Po-Li; Ma, Hon-Ping; Tsai, Shin-Han; Wu, Chih-Hsiung; Tam, Ka-Wai

    2013-06-15

    Eradication of Helicobacter pylori has become part of the standard therapy for peptic ulcer. However, the role of H pylori eradication in perforation of peptic ulcers remains controversial. It is unclear whether eradication of the bacterium confers prolonged ulcer remission after simple repair of perforated peptic ulcer. A systematic review and meta-analysis of randomized controlled trials was performed to evaluate the effects of H pylori eradication on prevention of ulcer recurrence after simple closure of perforated peptic ulcers. The primary outcome to evaluate these effects was the incidence of postoperative ulcers; the secondary outcome was the rate of H pylori elimination. The meta-analysis included five randomized controlled trials and 401 patients. A high prevalence of H pylori infection occurred in patients with perforated peptic ulcers. Eradication of H pylori significantly reduced the incidence of ulcer recurrence at 8 wk (risk ratio 2.97; 95% confidence interval: 1.06-8.29) and 1 y (risk ratio 1.49; 95% confidence interval: 1.10-2.03) postoperation. The rate of H pylori eradication was significantly higher in the treatment group than in the nontreatment group. Eradication therapy should be provided to patients with H pylori infection after simple closure of perforated gastroduodenal ulcers. Copyright © 2013 Elsevier Inc. All rights reserved.
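    The pooled risk ratios quoted above come from standard fixed-effect meta-analysis machinery; a minimal sketch of Mantel-Haenszel pooling over per-trial 2×2 tables is below. The counts are hypothetical placeholders, not the data of the five included trials:

```python
def mh_pooled_rr(tables):
    """Mantel-Haenszel fixed-effect pooled risk ratio.

    Each table is (a, n1, c, n2): events and arm sizes for the two groups
    being compared (here, recurrence without vs. with eradication therapy).
    """
    num = sum(a * n2 / (n1 + n2) for a, n1, c, n2 in tables)
    den = sum(c * n1 / (n1 + n2) for a, n1, c, n2 in tables)
    return num / den

# Hypothetical per-trial counts: (recurrences, n) control arm, then treatment arm
trials = [(8, 50, 2, 50), (6, 40, 1, 42), (5, 45, 2, 44)]
rr = mh_pooled_rr(trials)  # > 1 means more recurrence without eradication
```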

  12. A Simple Force-Motion Relation for Migrating Cells Revealed by Multipole Analysis of Traction Stress

    PubMed Central

    Tanimoto, Hirokazu; Sano, Masaki

    2014-01-01

    For biophysical understanding of cell motility, the relationship between mechanical force and cell migration must be uncovered, but it remains elusive. Since cells migrate at small scale in dissipative circumstances, the inertia force is negligible and all forces should cancel out. This implies that one must quantify the spatial pattern of the force instead of just the summation to elucidate the force-motion relation. Here, we introduced multipole analysis to quantify the traction stress dynamics of migrating cells. We measured the traction stress of Dictyostelium discoideum cells and investigated the lowest two moments, the force dipole and quadrupole moments, which reflect rotational and front-rear asymmetries of the stress field. We derived a simple force-motion relation in which cells migrate along the force dipole axis with a direction determined by the force quadrupole. Furthermore, as a complementary approach, we also investigated fine structures in the stress field that show front-rear asymmetric kinetics consistent with the multipole analysis. The tight force-motion relation enables us to predict cell migration only from the traction stress patterns. PMID:24411233
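    The force dipole moment at the heart of this multipole analysis is just the position-weighted sum of the traction forces. A minimal two-dimensional sketch on toy data (not the Dictyostelium measurements) is:

```python
import numpy as np

def force_dipole(positions, tractions):
    """Force dipole moment M_ij = sum_k x_i^(k) T_j^(k).

    positions, tractions: (N, 2) arrays of measurement positions and traction
    forces. Positions are taken relative to the force-weighted centroid.
    """
    w = np.linalg.norm(tractions, axis=1)
    centroid = (positions * w[:, None]).sum(0) / w.sum()
    x = positions - centroid
    return x.T @ tractions  # 2x2 dipole matrix

# Toy contractile cell: two adhesion spots pulling toward each other along x
pos = np.array([[-1.0, 0.0], [1.0, 0.0]])
trac = np.array([[0.5, 0.0], [-0.5, 0.0]])
M = force_dipole(pos, trac)
# A contractile stress field gives trace(M) < 0; the eigenvector of the
# symmetrized M with the largest-magnitude eigenvalue is the force dipole axis.
```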

  13. Graph theoretical analysis of functional network for comprehension of sign language.

    PubMed

    Liu, Lanfang; Yan, Xin; Liu, Jin; Xia, Mingrui; Lu, Chunming; Emmorey, Karen; Chu, Mingyuan; Ding, Guosheng

    2017-09-15

    Signed languages are natural human languages that use the visual-motor modality. Previous neuroimaging studies based on univariate activation analysis show that a widely overlapping cortical network is recruited regardless of whether the sign language is comprehended (for signers) or not (for non-signers). Here we move beyond previous studies by examining whether the functional connectivity profiles and the underlying organizational structure of the overlapping neural network differ between signers and non-signers when watching sign language. Using graph theoretical analysis (GTA) and fMRI, we compared the large-scale functional network organization of hearing signers and non-signers during the observation of sentences in Chinese Sign Language. We found that signed sentences elicited highly similar cortical activations in the two groups of participants, with slightly larger responses within the left frontal and left temporal gyrus in signers than in non-signers. Crucially, further GTA revealed substantial group differences in the topologies of this activation network. Globally, the network engaged by signers showed higher local efficiency (t(24) = 2.379, p = 0.026), small-worldness (t(24) = 2.604, p = 0.016), and modularity (t(24) = 3.513, p = 0.002), and exhibited different modular structures, compared to the network engaged by non-signers. Locally, the left ventral pars opercularis served as a network hub in the signer group but not in the non-signer group. These findings suggest that, despite overlap in cortical activation, the neural substrates underlying sign language comprehension are distinguishable at the network level from those for the processing of gestural action. Copyright © 2017 Elsevier B.V. All rights reserved.
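    The global metrics compared here (local efficiency, small-worldness, modularity) build on elementary graph quantities such as the clustering coefficient and the characteristic path length, which can be computed directly. A self-contained sketch on a toy undirected network (not the fMRI-derived networks):

```python
from collections import deque

def clustering(adj):
    """Average local clustering coefficient of an undirected graph,
    given as a dict mapping node -> set of neighbors."""
    cs = []
    for nbrs in adj.values():
        k = len(nbrs)
        if k < 2:
            cs.append(0.0)
            continue
        links = sum(1 for i in nbrs for j in nbrs if i < j and j in adj[i])
        cs.append(2 * links / (k * (k - 1)))
    return sum(cs) / len(cs)

def char_path_length(adj):
    """Mean shortest-path length over all node pairs (BFS; assumes connected)."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for n, d in dist.items() if n != s)
        pairs += len(adj) - 1
    return total / pairs

# Toy network: a triangle (nodes 0-2) plus a pendant node 3
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
C = clustering(adj)
L = char_path_length(adj)
```

    Small-worldness is then typically the ratio of C and L, each normalized by its value in degree-matched random graphs.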

  14. Theoretical and observational analysis of spacecraft fields

    NASA Technical Reports Server (NTRS)

    Neubauer, F. M.; Schatten, K. H.

    1972-01-01

    In order to investigate the nondipolar contributions of spacecraft magnetic fields a simple magnetic field model is proposed. This model consists of randomly oriented dipoles in a given volume. Two sets of formulas are presented which give the rms-multipole field components, for isotropic orientations of the dipoles at given positions and for isotropic orientations of the dipoles distributed uniformly throughout a cube or sphere. The statistical results for an 8 cu m cube together with individual examples computed numerically show the following features: Beyond about 2 to 3 m distance from the center of the cube, the field is dominated by an equivalent dipole. The magnitude of the magnetic moment of the dipolar part is approximated by an expression for equal magnetic moments or generally by the Pythagorean sum of the dipole moments. The radial component is generally greater than either of the transverse components for the dipole portion as well as for the nondipolar field contributions.
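    The dipole-dominance result can be illustrated numerically by summing the fields of randomly oriented point dipoles in a cube and checking how the total field decays a few metres away. A hedged sketch in arbitrary units with a toy configuration (not the paper's statistical formulas):

```python
import numpy as np

rng = np.random.default_rng(0)

def dipole_field(m, r):
    """Field of a point dipole with moment m at displacement r (prefactors
    and units omitted): B ~ (3 r_hat (m . r_hat) - m) / |r|^3."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return (3 * rhat * (m @ rhat) - m) / rn**3

# N randomly oriented unit dipoles placed inside a 2 m cube
N = 8
pos = rng.uniform(-1, 1, (N, 3))
m = rng.normal(size=(N, 3))
m /= np.linalg.norm(m, axis=1, keepdims=True)

def total_field(point):
    return sum(dipole_field(mi, point - pi) for mi, pi in zip(m, pos))

# Far from the cube the total field is dominated by an equivalent dipole,
# so doubling the distance should reduce |B| by roughly 2**3 = 8.
B10 = np.linalg.norm(total_field(np.array([10.0, 0.0, 0.0])))
B20 = np.linalg.norm(total_field(np.array([20.0, 0.0, 0.0])))
```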

  15. Disentangling Gratitude: A Theoretical and Psychometric Examination of the Gratitude Resentment and Appreciation Test-Revised Short (GRAT-RS).

    PubMed

    Hammer, Joseph H; Brenner, Rachel E

    2017-07-14

    This study extended our theoretical and applied understanding of gratitude through a psychometric examination of the most popular multidimensional measure of gratitude, the Gratitude, Resentment, and Appreciation Test-Revised Short form (GRAT-RS). Namely, the dimensionality of the GRAT-RS, the model-based reliability of the GRAT-RS total score and 3 subscale scores, and the incremental evidence of validity for its latent factors were assessed. Dimensionality measures (e.g., explained common variance) and confirmatory factor analysis results with 426 community adults indicated that the GRAT-RS conformed to a multidimensional (bifactor) structure. Model-based reliability measures (e.g., omega hierarchical) provided support for the future use of the Lack of a Sense of Deprivation raw subscale score, but not for the raw GRAT-RS total score, Simple Appreciation subscale score, or Appreciation of Others subscale score. Structural equation modeling results indicated that only the general gratitude factor and the lack of a sense of deprivation specific factor accounted for significant variance in life satisfaction, positive affect, and distress. These findings support the 3 pillars of gratitude conceptualization of gratitude over competing conceptualizations, the position that the specific forms of gratitude are theoretically distinct, and the argument that appreciation is distinct from the superordinate construct of gratitude.

  16. New Theoretical Model of Nerve Conduction in Unmyelinated Nerves

    PubMed Central

    Akaishi, Tetsuya

    2017-01-01

    Nerve conduction in unmyelinated fibers has long been described by the equivalent circuit model and cable theory. However, without the change in ionic concentration gradient across the membrane, there would be no generation or propagation of the action potential. Based on this concept, we employ a new conduction model focusing on the distribution of voltage-gated sodium ion channels and the Coulomb force between electrolytes. In this new model, propagation of the nerve impulse is suggested to take place well before the generation of the action potential at each channel. We show theoretically that the propagation of the action potential from one sodium channel to the next, which is enabled by the increasing Coulomb force produced by inflowing sodium ions, is inversely proportional to the density of sodium channels on the axon membrane. Because the longitudinal number of sodium channels is proportional to the square root of the channel density, the conduction velocity of unmyelinated nerves is theoretically shown to be proportional to the square root of the channel density. Also, from the viewpoint of an equilibrium between channel importation and degeneration, the channel density is suggested to be proportional to the axonal diameter. On this simple basis, the conduction velocity in unmyelinated nerves is theoretically shown to be proportional to the square root of the axonal diameter. This new model may also give a more accurate and intuitive picture of the phenomena in unmyelinated nerves, complementing the conventional electric circuit model and cable theory. PMID:29081751

  17. Experimental and theoretical analysis for improved microscope design of optical projection tomographic microscopy.

    PubMed

    Coe, Ryan L; Seibel, Eric J

    2013-09-01

    We present theoretical and experimental results of axial displacement of objects relative to a fixed condenser focal plane (FP) in optical projection tomographic microscopy (OPTM). OPTM produces three-dimensional, reconstructed images of single cells from two-dimensional projections. The cell rotates in a microcapillary to acquire projections from different perspectives where the objective FP is scanned through the cell while the condenser FP remains fixed at the center of the microcapillary. This work uses a combination of experimental and theoretical methods to improve the OPTM instrument design.

  18. Investigating student understanding of simple harmonic motion

    NASA Astrophysics Data System (ADS)

    Somroob, S.; Wattanakasiwich, P.

    2017-09-01

    This study aimed to investigate students' understanding of simple harmonic motion and to develop instructional material on the topic. Participants were 60 students taking a course on vibrations and waves, 46 students taking Physics 2, and 28 students taking Fundamental Physics 2 in the second semester of the 2016 academic year. A 16-question conceptual test and tutorial activities were developed from previous research findings and evaluated by three physics experts in teaching mechanics before use in a real classroom. Data collection included both qualitative and quantitative methods. Item analysis and whole-test analysis were determined from student responses on the conceptual test. As a result, most students held misconceptions about the restoring force, and they had problems connecting mathematical solutions to real motions, especially the phase angle. Moreover, they had problems interpreting mechanical energy from graphs and diagrams of the motion. These results were used to develop effective instructional materials to enhance student understanding of simple harmonic motion in terms of multiple representations.

  19. Theoretical and Experimental Analysis of the Physics of Water Rockets

    ERIC Educational Resources Information Center

    Barrio-Perotti, R.; Blanco-Marigorta, E.; Fernandez-Francos, J.; Galdo-Vega, M.

    2010-01-01

    A simple rocket can be made using a plastic bottle filled with a volume of water and pressurized air. When opened, the air pressure pushes the water out of the bottle. This causes an increase in the bottle momentum so that it can be propelled to fairly long distances or heights. Water rockets are widely used as an educational activity, and several…

  20. Theoretical studies of the physics of the solar atmosphere

    NASA Technical Reports Server (NTRS)

    Hollweg, Joseph V.

    1992-01-01

    Significant advances in our theoretical basis for understanding several physical processes related to dynamical phenomena on the sun were achieved. We have advanced a new model for spicules and fibrils. We have provided a simple physical view of resonance absorption of MHD surface waves; this allowed an approximate mathematical procedure for obtaining a wealth of new analytical results which we applied to coronal heating and p-mode absorption at magnetic regions. We provided the first comprehensive models for the heating and acceleration of the transition region, corona, and solar wind. We provided a new view of viscosity under coronal conditions. We provided new insights into Alfven wave propagation in the solar atmosphere. And recently we have begun work in a new direction: parametric instabilities of Alfven waves.

  1. A Simple Engineering Analysis of Solar Particle Event High Energy Tails and Their Impact on Vehicle Design

    NASA Technical Reports Server (NTRS)

    Singleterry, Robert C., Jr.; Walker, Steven A.; Clowdsley, Martha S.

    2016-01-01

    The mathematical models for Solar Particle Event (SPE) high energy tails are constructed with several different algorithms. Since limited measured data exist above energies around 400 MeV, this paper arbitrarily defines the high energy tail as any proton with an energy above 400 MeV. In order to better understand the importance of accurately modeling the high energy tail for SPE spectra, the contribution to astronaut whole body effective dose equivalent of the high energy portions of three different SPE models has been evaluated. To ensure completeness of this analysis, simple and complex geometries were used. This analysis showed that the high energy tail of certain SPEs can be relevant to astronaut exposure and hence safety. Therefore, models of high energy tails for SPEs should be well analyzed and based on data if possible.
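    As a generic illustration of why the tail model matters, the fraction of proton fluence above a 400 MeV cutoff can be estimated for an exponential-in-rigidity spectrum, a common SPE parameterization (this is an assumed form for illustration, not one of the three specific models evaluated in the paper):

```python
import math

M_P = 938.272  # proton rest mass-energy [MeV]

def rigidity(E):
    """Proton rigidity [MV] at kinetic energy E [MeV]: R = sqrt(E(E + 2*m_p))."""
    return math.sqrt(E * (E + 2 * M_P))

def tail_fraction(E_cut, R0):
    """Fraction of an exponential-in-rigidity event fluence,
    F(>R) = F0 * exp(-R / R0), carried by protons above E_cut [MeV]."""
    return math.exp(-rigidity(E_cut) / R0)

# A hard event (large characteristic rigidity R0, in MV) puts orders of
# magnitude more fluence above 400 MeV than a soft one.
hard = tail_fraction(400.0, 200.0)
soft = tail_fraction(400.0, 50.0)
```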

  2. Perspectives on Cybersecurity Information Sharing among Multiple Stakeholders Using a Decision-Theoretic Approach.

    PubMed

    He, Meilin; Devine, Laura; Zhuang, Jun

    2018-02-01

    The government, private sectors, and other users of the Internet are increasingly faced with the risk of cyber incidents. Damage to computer systems and theft of sensitive data caused by cyber attacks have the potential to result in lasting harm to entities under attack, or to society as a whole. The effects of cyber attacks are not always obvious, and detecting them is not a simple proposition. As the U.S. federal government believes that information sharing on cybersecurity issues among organizations is essential to safety, security, and resilience, the importance of trusted information exchange has been emphasized to support public and private decision making by encouraging the creation of the Information Sharing and Analysis Center (ISAC). Through a decision-theoretic approach, this article provides new perspectives on ISACs and on the advent of the new Information Sharing and Analysis Organizations (ISAOs), which are intended to provide similar benefits to organizations that cannot fit easily into the ISAC structure. To help explain the processes of information sharing against cyber threats, this article illustrates 15 representative information sharing structures between ISACs, government, and other participating entities, and discusses the strategic interactions between the different stakeholders. This article also identifies the costs of information sharing and information security borne by the different parties in this public-private partnership both before and after cyber attacks, as well as the two main benefits, and provides perspectives on the mechanism of information sharing together with some detailed cost-benefit analysis. © 2017 Society for Risk Analysis.

  3. Network meta-analysis, electrical networks and graph theory.

    PubMed

    Rücker, Gerta

    2012-12-01

    Network meta-analysis is an active field of research in clinical biostatistics. It aims to combine information from all randomized comparisons among a set of treatments for a given medical condition. We show how graph-theoretical methods can be applied to network meta-analysis. A meta-analytic graph consists of vertices (treatments) and edges (randomized comparisons). We illustrate the correspondence between meta-analytic networks and electrical networks, where variance corresponds to resistance, treatment effects to voltage, and weighted treatment effects to current flows. Based thereon, we then show that graph-theoretical methods that have been routinely applied to electrical networks also work well in network meta-analysis. In more detail, the resulting consistent treatment effects induced in the edges can be estimated via the Moore-Penrose pseudoinverse of the Laplacian matrix. Moreover, the variances of the treatment effects are estimated in analogy to electrical effective resistances. It is shown that this method, being computationally simple, leads to the usual fixed effect model estimate when applied to pairwise meta-analysis and is consistent with published results when applied to network meta-analysis examples from the literature. Moreover, problems of heterogeneity and inconsistency, random effects modeling and including multi-armed trials are addressed. Copyright © 2012 John Wiley & Sons, Ltd.
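    The Laplacian pseudoinverse estimate described here is short to implement. A sketch on a toy three-treatment network with slightly inconsistent direct comparisons (illustrative numbers only, not an example from the paper):

```python
import numpy as np

def nma_effects(n, edges, effects, variances):
    """Consistent treatment effects via the Moore-Penrose pseudoinverse of the
    weighted graph Laplacian (the electrical-network analogy: variances act
    as edge resistances, treatment effects as node potentials).

    edges: list of (i, j) comparisons; effects: observed effect of j vs i;
    variances: their variances. Returns effects relative to treatment 0 and
    the consistent effects induced on the edges.
    """
    B = np.zeros((len(edges), n))          # edge incidence matrix
    for k, (i, j) in enumerate(edges):
        B[k, i], B[k, j] = -1.0, 1.0
    W = np.diag(1.0 / np.asarray(variances))
    L = B.T @ W @ B                        # weighted Laplacian (singular)
    mu = np.linalg.pinv(L) @ B.T @ W @ np.asarray(effects)
    fitted = B @ mu                        # consistent edge effects
    return mu - mu[0], fitted

# Toy triangle network A-B, B-C, A-C with an inconsistent direct A-C estimate
mu, fitted = nma_effects(3, [(0, 1), (1, 2), (0, 2)],
                         [1.0, 1.0, 2.2], [1.0, 1.0, 1.0])
```

    The fitted edge effects are exactly consistent around the loop: the A-B and B-C effects sum to the A-C effect.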

  4. Simple heuristics and rules of thumb: where psychologists and behavioural biologists might meet.

    PubMed

    Hutchinson, John M C; Gigerenzer, Gerd

    2005-05-31

    The Centre for Adaptive Behaviour and Cognition (ABC) has hypothesised that much human decision-making can be described by simple algorithmic process models (heuristics). This paper explains this approach and relates it to research in biology on rules of thumb, which we also review. As an example of a simple heuristic, consider the lexicographic strategy of Take The Best for choosing between two alternatives: cues are searched in turn until one discriminates, then search stops and all other cues are ignored. Heuristics consist of building blocks, and building blocks exploit evolved or learned abilities such as recognition memory; it is the complexity of these abilities that allows the heuristics to be simple. Simple heuristics have an advantage in making decisions fast and with little information, and in avoiding overfitting. Furthermore, humans are observed to use simple heuristics. Simulations show that the statistical structures of different environments affect which heuristics perform better, a relationship referred to as ecological rationality. We contrast ecological rationality with the stronger claim of adaptation. Rules of thumb from biology provide clearer examples of adaptation because animals can be studied in the environments in which they evolved. The range of examples is also much more diverse. To investigate them, biologists have sometimes used similar simulation techniques to ABC, but many examples depend on empirically driven approaches. ABC's theoretical framework can be useful in connecting some of these examples, particularly the scattered literature on how information from different cues is integrated. Optimality modelling is usually used to explain less detailed aspects of behaviour but might more often be redirected to investigate rules of thumb.

  5. Analysis of the Underlying Cognitive Activity in the Resolution of a Task on Derivability of the Absolute-Value Function: Two Theoretical Perspectives

    ERIC Educational Resources Information Center

    Pino-Fan, Luis R.; Guzmán, Ismenia; Font, Vicenç; Duval, Raymond

    2017-01-01

    This paper presents a study of networking of theories between the theory of registers of semiotic representation (TRSR) and the onto-semiotic approach of mathematical cognition and instruction (OSA). The results obtained show complementarities between these two theoretical perspectives, which might allow more detailed analysis of the students'…

  6. Alfven Simple Waves

    NASA Astrophysics Data System (ADS)

    Webb, G. M.; Zank, G. P.; Burrows, R.

    2009-12-01

    Multi-dimensional Alfvén simple waves in magnetohydrodynamics (MHD) are investigated using Boillat's formalism. For simple wave solutions, all physical variables (the gas density, pressure, fluid velocity, entropy, and magnetic field induction in the MHD case) depend on a single phase function φ, which is a function of the space and time variables. The simple wave ansatz requires that the wave normal and the normal speed of the wave front depend only on the phase function φ. This leads to an implicit equation for the phase function and a generalisation of the concept of a plane wave. We obtain examples of Alfvén simple waves based on the right eigenvector solutions for the Alfvén mode. The Alfvén mode solutions have six integrals, namely that the entropy, density, magnetic pressure, and the group velocity (the sum of the Alfvén and fluid velocity) are constant throughout the wave. The eigen-equations require that the rate of change of the magnetic induction B with φ throughout the wave is perpendicular to both the wave normal n and B. Methods to construct simple wave solutions based on specifying either a solution ansatz for n(φ) or B(φ) are developed.

  7. Analysis and Modeling of the Arctic Oscillation Using a Simple Barotropic Model with Baroclinic Eddy Forcing.

    NASA Astrophysics Data System (ADS)

    Tanaka, H. L.

    2003-06-01

    In this study, a numerical simulation of the Arctic Oscillation (AO) is conducted using a simple barotropic model that considers the barotropic-baroclinic interactions as the external forcing. The model is referred to as a barotropic S model since the external forcing is obtained statistically from long-term historical data by solving an inverse problem. The barotropic S model has been integrated for 51 years under a perpetual January condition, and the dominant empirical orthogonal function (EOF) modes in the model have been analyzed. The results are compared with the EOF analysis of the barotropic component of the real atmosphere based on the daily NCEP-NCAR reanalysis for the 50 years from 1950 to 1999. According to the results, the first EOF of the model atmosphere appears to be the AO, similar to the observation. The annular structure of the AO and the two centers of action over the Pacific and Atlantic are simulated nicely by the barotropic S model. Therefore, the atmospheric low-frequency variabilities are captured satisfactorily even by this simple barotropic model. The EOF analysis is further applied to the external forcing of the barotropic S model. The structure of the dominant forcing shows the characteristics of synoptic-scale disturbances of zonal wavenumber 6 along the Pacific storm track. The forcing is induced by the barotropic-baroclinic interactions associated with baroclinic instability. The result suggests that the AO can be understood as the natural variability of the barotropic component of the atmosphere induced by the inherent barotropic dynamics, forced by the barotropic-baroclinic interactions. The fluctuating upscale energy cascade from planetary waves and synoptic disturbances to the zonal motion plays the key role in the excitation of the AO.
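    The EOF analysis used here is principal component analysis of a space-time anomaly field, conveniently computed with a singular value decomposition. A minimal sketch on synthetic data (not the NCEP-NCAR reanalysis):

```python
import numpy as np

def eof_analysis(X):
    """EOF (principal component) analysis of a space-time data matrix.

    X: (time, space) field. Returns the EOF spatial patterns (rows of Vt),
    the principal component time series, and the fraction of variance
    explained by each mode.
    """
    X = X - X.mean(axis=0)           # remove the time mean at each grid point
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var_frac = s**2 / (s**2).sum()
    pcs = U * s                      # amplitude time series of each mode
    return Vt, pcs, var_frac

# Toy field: one dominant standing oscillation plus weak noise
rng = np.random.default_rng(1)
t = np.linspace(0, 20, 200)
pattern = np.array([1.0, -1.0, 0.5, -0.5])
X = np.outer(np.sin(t), pattern) + 0.05 * rng.normal(size=(200, 4))
eofs, pcs, var_frac = eof_analysis(X)  # first mode recovers the pattern
```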

  8. Advanced statistics: linear regression, part I: simple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
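    The method of least squares mentioned above reduces, for a single predictor, to two closed-form expressions for the slope and intercept; a minimal sketch:

```python
def least_squares(x, y):
    """Ordinary least-squares fit of y = a + b*x for a single predictor:
    b = S_xy / S_xx (centered cross- and auto-sums), a = mean(y) - b*mean(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx            # slope
    a = my - b * mx          # intercept
    return a, b

# Noiseless points on y = 1 + 2x are recovered exactly
a, b = least_squares([1, 2, 3, 4], [3, 5, 7, 9])
```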

  9. A simple finite element method for the Stokes equations

    DOE PAGES

    Mu, Lin; Ye, Xiu

    2017-03-21

    The goal of this paper is to introduce a simple finite element method for solving the Stokes equations. The method is in the primal velocity-pressure formulation and is so simple that both velocity and pressure are approximated by piecewise constant functions. Implementation issues as well as error analysis are investigated. A basis for a divergence-free subspace of the velocity field is constructed so that the original saddle point problem can be reduced to a symmetric and positive definite system with far fewer unknowns. The numerical experiments indicate that the method is accurate.

  10. A simple finite element method for the Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mu, Lin; Ye, Xiu

    The goal of this paper is to introduce a simple finite element method to solve the Stokes equations. This method is in primal velocity-pressure formulation and is so simple that both velocity and pressure are approximated by piecewise constant functions. Implementation issues as well as error analysis are investigated. A basis for a divergence-free subspace of the velocity field is constructed so that the original saddle point problem can be reduced to a symmetric and positive definite system with far fewer unknowns. The numerical experiments indicate that the method is accurate.

  11. Local Laplacian Coding From Theoretical Analysis of Local Coding Schemes for Locally Linear Classification.

    PubMed

    Pang, Junbiao; Qin, Lei; Zhang, Chunjie; Zhang, Weigang; Huang, Qingming; Yin, Baocai

    2015-12-01

    Local coordinate coding (LCC) is a framework for approximating a Lipschitz smooth function by combining linear functions into a nonlinear one. For locally linear classification, LCC requires a coding scheme that heavily determines the nonlinear approximation ability, posing two main challenges: 1) locality, ensuring that faraway anchors have smaller influence on the current datum, and 2) flexibility, balancing the reconstruction of the current datum against locality. In this paper, we address the problem through a theoretical analysis of the simplest local coding schemes, i.e., local Gaussian coding and local Student coding, and propose local Laplacian coding (LPC) to achieve both locality and flexibility. We apply LPC in locally linear classifiers to solve diverse classification tasks. Performance comparable to or exceeding that of state-of-the-art methods demonstrates the effectiveness of the proposed approach.

  12. High-throughput method for macrolides and lincosamides antibiotics residues analysis in milk and muscle using a simple liquid-liquid extraction technique and liquid chromatography-electrospray-tandem mass spectrometry analysis (LC-MS/MS).

    PubMed

    Jank, Louise; Martins, Magda Targa; Arsand, Juliana Bazzan; Campos Motta, Tanara Magalhães; Hoff, Rodrigo Barcellos; Barreto, Fabiano; Pizzolato, Tânia Mara

    2015-11-01

    A fast and simple method for residue analysis of the antibiotic classes of macrolides (erythromycin, azithromycin, tylosin, tilmicosin and spiramycin) and lincosamides (lincomycin and clindamycin) was developed and validated for cattle, swine and chicken muscle and for bovine milk. Sample preparation consists of a liquid-liquid extraction (LLE) with acetonitrile, followed by liquid chromatography-electrospray-tandem mass spectrometry analysis (LC-ESI-MS/MS), without the need for any additional clean-up steps. Chromatographic separation was achieved using a C18 column and a mobile phase composed of acidified acetonitrile and water. The method was fully validated according to the criteria of Commission Decision 2002/657/EC. Validation parameters such as limit of detection, limit of quantification, linearity, accuracy, repeatability, specificity, reproducibility, decision limit (CCα) and detection capability (CCβ) were evaluated. All calculated values met the established criteria. Reproducibility values, expressed as coefficients of variation, were all lower than 19.1%. Recoveries ranged from 60% to 107%. Limits of detection were from 5 to 25 µg kg(-1). The present method can be applied in routine analysis, with an adequate time of analysis, low cost and a simple sample preparation protocol. Copyright © 2015. Published by Elsevier B.V.

  13. Experimental and theoretical investigation of the stability of stepwise pH gradients in continuous flow electrophoresis

    NASA Technical Reports Server (NTRS)

    Kuhn, Reinhard; Wagner, Horst; Mosher, Richard A.; Thormann, Wolfgang

    1987-01-01

    Isoelectric focusing in the continuous flow mode can be more quickly and economically performed by admitting a stepwise pH gradient composed of simple buffers instead of uniform mixtures of synthetic carrier ampholytes. The time-consuming formation of the pH gradient by the electric field is thereby omitted. The stability of a three-step system with arginine - morpholinoethanesulfonic acid/glycylglycine - aspartic acid is analyzed theoretically by one-dimensional computer simulation as well as experimentally at various flow rates in a continuous flow apparatus. Excellent agreement between experimental and theoretical data was obtained. This metastable configuration was found to be suitable for focusing of proteins under continuous flow conditions. The influence of various combinations of electrolytes and membranes between electrophoresis chamber and electrode compartments is also discussed.

  14. IOTA simple rules in differentiating between benign and malignant ovarian tumors.

    PubMed

    Tantipalakorn, Charuwan; Wanapirak, Chanane; Khunamornpong, Surapan; Sukpan, Kornkanok; Tongsong, Theera

    2014-01-01

    To evaluate the diagnostic performance of the IOTA simple rules in differentiating between benign and malignant ovarian tumors. A study of diagnostic performance was conducted on women scheduled for elective surgery due to ovarian masses between March 2007 and March 2012. All patients underwent ultrasound examination for the IOTA simple rules within 24 hours of surgery. All examinations were performed by the authors, who had no clinical information about the patients, to differentiate between benign and malignant adnexal masses using the IOTA simple rules. The gold standard diagnosis was based on pathological or operative findings. A total of 398 adnexal masses, in 376 women, were available for analysis. Of these, the IOTA simple rules could be applied to 319 (80.1%), including 212 (66.5%) benign tumors and 107 (33.6%) malignant tumors. The simple rules yielded inconclusive results in 79 (19.9%) masses. In the 319 masses to which the IOTA simple rules could be applied, sensitivity was 82.9% and specificity 95.3%. The IOTA simple rules have high diagnostic performance in differentiating between benign and malignant adnexal masses. Nevertheless, inconclusive results are relatively common.

  15. Computational and theoretical analysis of free surface flow in a thin liquid film under zero and normal gravity

    NASA Technical Reports Server (NTRS)

    Faghri, Amir; Swanson, Theodore D.

    1988-01-01

    The results of a numerical computation and theoretical analysis are presented for the flow of a thin liquid film in the presence and absence of a gravitational body force. Five different flow systems were used. Also presented are the governing equations and boundary conditions for a thin liquid film emanating from a pressure vessel; traveling along a horizontal plate with a constant initial height and uniform initial velocity; and traveling radially along a horizontal disk with a constant initial height and uniform initial velocity.

  16. A simple parameterization for the height of maximum ozone heating rate

    NASA Astrophysics Data System (ADS)

    Zhang, Feng; Hou, Can; Li, Jiangnan; Liu, Renqiang; Liu, Cuiping

    2017-12-01

    It is well known that the height of the maximum ozone heating rate is much higher than the height of the maximum ozone concentration in the stratosphere; however, an analytical expression explaining this offset has been lacking. A simple theoretical model has been proposed to calculate the height of the maximum ozone heating rate and further understand this phenomenon. Strong absorption by ozone causes the incoming solar flux to be largely attenuated before reaching the location of the maximum ozone concentration. Compared with exact radiative transfer calculations, the heights of the maximum ozone heating rate produced by the theoretical model are generally very close to the true values. When the cosine of the solar zenith angle μ0 = 1.0, in the US Standard atmosphere, the heights of the maximum ozone heating rate given by the theoretical model are 41.4 km in the band 0.204-0.233 μm, 47.9 km in the band 0.233-0.270 μm, 44.5 km in the band 0.270-0.286 μm, 37.1 km in the band 0.286-0.303 μm, and 30.2 km in the band 0.303-0.323 μm. The location of the maximum ozone heating rate is sensitive to the solar spectral range. In band 1, the heights of the maximum ozone heating rate given by the theoretical model are 52.3 km for μ0 = 0.1, 47.1 km for μ0 = 0.3, 44.6 km for μ0 = 0.5, 43.1 km for μ0 = 0.7, 41.9 km for μ0 = 0.9, and 41.4 km for μ0 = 1.0 in the US Standard atmosphere. The model also illustrates that the location of the maximum ozone heating rate is sensitive to the solar zenith angle.
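
    The attenuation mechanism described in this record can be illustrated with a minimal Chapman-layer sketch (a standard textbook idealization, not the authors' parameterization): for an absorber whose density falls off exponentially with scale height H, the heating rate is proportional to density times transmitted flux, and it peaks where the slant optical depth equals one, so the peak rises as μ0 decreases. All parameter values below (H, tau0) are illustrative.

```python
import math

def heating_profile(z, H=7.0, tau0=50.0, mu0=1.0):
    """Relative heating rate at height z (km): absorber density times
    the flux transmitted through the overlying slant optical depth."""
    n = math.exp(-z / H)             # normalized absorber density
    tau = tau0 * math.exp(-z / H)    # vertical optical depth above z
    return n * math.exp(-tau / mu0)  # density * transmitted flux

def peak_height(H=7.0, tau0=50.0, mu0=1.0):
    """Analytic peak of the profile above: heating maximizes where tau = mu0."""
    return H * math.log(tau0 / mu0)

# Numerical scan of the profile agrees with the analytic expression
zs = [0.01 * i for i in range(10000)]
z_num = max(zs, key=lambda z: heating_profile(z, mu0=0.5))
z_ana = peak_height(mu0=0.5)
```

    The same sketch reproduces the qualitative behavior reported above: lowering μ0 (a more slanted sun) raises the height of the heating maximum.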

  17. Theoretical analysis of oxygen diffusion at startup in an alkali metal heat pipe with gettered alloy walls

    NASA Technical Reports Server (NTRS)

    Tower, L. K.

    1973-01-01

    The diffusion of oxygen into, or out of, a gettered alloy exposed to oxygenated alkali liquid metal coolant, a situation arising in some high temperature heat transfer systems, was analyzed. The relation between the diffusion process and the thermochemistry of oxygen in the alloy and in the alkali metal was developed by making several simplifying assumptions. The treatment is therefore theoretical in nature. However, a practical example pertaining to the startup of a heat pipe with walls of T-111, a tantalum alloy, and lithium working fluid illustrates the use of the figures contained in the analysis.

  18. Dense simple plasmas as high-temperature liquid simple metals

    NASA Technical Reports Server (NTRS)

    Perrot, F.

    1990-01-01

    The thermodynamic properties of dense plasmas considered as high-temperature liquid metals are studied. An attempt is made to show that the neutral pseudoatom picture of liquid simple metals may be extended for describing plasmas in ranges of densities and temperatures where their electronic structure remains 'simple'. The primary features of the model when applied to plasmas include the temperature-dependent self-consistent calculation of the electron charge density and the determination of a density and temperature-dependent ionization state.

  19. Error analysis of stochastic gradient descent ranking.

    PubMed

    Chen, Hong; Tang, Yi; Li, Luoqing; Yuan, Yuan; Li, Xuelong; Tang, Yuanyan

    2013-06-01

    Ranking is always an important task in machine learning and information retrieval, e.g., collaborative filtering, recommender systems, and drug discovery. A kernel-based stochastic gradient descent algorithm with the least squares loss is proposed for ranking in this paper. The implementation of this algorithm is simple, and an expression of the solution is derived via a sampling operator and an integral operator. An explicit convergence rate for learning a ranking function is given in terms of suitable choices of the step size and the regularization parameter. The analysis technique used here is capacity independent and is novel in the error analysis of ranking learning. Experimental results on real-world data have shown the effectiveness of the proposed algorithm in ranking tasks, which verifies the theoretical analysis of the ranking error.
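
    A toy version of such an algorithm can be sketched as follows. This is a hedged illustration of kernel SGD with a pairwise least-squares loss, not the authors' exact formulation; the Gaussian kernel, step size, regularization constant, and data are all invented for the example.

```python
import math, random

def gauss_k(x, y, s=1.0):
    """Gaussian kernel on scalars (illustrative choice of kernel)."""
    return math.exp(-(x - y) ** 2 / (2 * s * s))

def sgd_rank(data, steps=800, eta=0.1, lam=0.01, seed=0):
    """Online kernel SGD for ranking: take a random pair (x,y), (x',y')
    each step and descend the pairwise least-squares loss
    (f(x) - f(x') - (y - y'))^2 / 2 plus a ridge penalty, keeping f
    as a kernel expansion over the sampled points."""
    rng = random.Random(seed)
    pts, coef = [], []

    def f(x):
        return sum(c * gauss_k(p, x) for p, c in zip(pts, coef))

    for _ in range(steps):
        (x1, y1), (x2, y2) = rng.sample(data, 2)
        g = f(x1) - f(x2) - (y1 - y2)               # pairwise residual
        coef = [c * (1 - eta * lam) for c in coef]  # regularization shrink
        pts += [x1, x2]
        coef += [-eta * g, eta * g]                 # gradient step on the pair
    return f

# Items whose relevance scores increase with x
data = [(x / 2.0, float(x)) for x in range(5)]
f = sgd_rank(data)
```

    After training, the learned function orders the items by relevance even though it only ever saw pairwise score differences.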

  20. An acoustic experimental and theoretical investigation of single disc propellers

    NASA Technical Reports Server (NTRS)

    Bumann, Elizabeth A.; Korkan, Kenneth D.

    1989-01-01

    An experimental study of the acoustic field associated with two, three, and four blade propeller configurations with a blade root angle of 50 deg was performed in the Texas A&M University 5 ft. x 6 ft. acoustically-insulated subsonic wind tunnel. A waveform analysis package was utilized to obtain experimental acoustic time histories, frequency spectra, and overall sound pressure level (OASPL) and served as a basis for comparison to the theoretical acoustic compact source theory of Succi (1979). Valid for subsonic tip speeds, the acoustic analysis replaced each blade by an array of spiraling point sources which exhibited a unique force vector and volume. The computer analysis of Succi was modified to include a propeller performance strip analysis which used a NACA 4-digit series airfoil data bank to calculate lift and drag for each blade segment given the geometry and motion of the propeller. Theoretical OASPL predictions were found to moderately overpredict experimental values for all operating conditions and propeller configurations studied.

  1. Theoretical Coalescence: A Method to Develop Qualitative Theory

    PubMed Central

    Morse, Janice M.

    2018-01-01

    Background Qualitative research is frequently context bound, lacks generalizability, and is limited in scope. Objectives The purpose of this article was to describe a method, theoretical coalescence, that provides a strategy for analyzing complex, high-level concepts and for developing generalizable theory. Theoretical coalescence is a method of theoretical expansion and inductive inquiry for theory development that uses data (rather than themes, categories, and published extracts of data) as the primary source for analysis. Here, using the development of the lay concept of enduring as an example, I explore the scientific development of the concept in multiple settings over many projects and link it within the Praxis Theory of Suffering. Methods As comprehension emerges when conducting theoretical coalescence, it is essential that raw data from various different situations be available for reinterpretation/reanalysis and comparison to identify the essential features of the concept. The concept is then reconstructed through additional inquiry that builds description and evidence, and is conceptualized to create a more expansive concept and theory. Results By utilizing apparently diverse data sets from different contexts that are linked by certain characteristics, the essential features of the concept emerge. Such inquiry is divergent and less bound by context, yet purposeful, logical, and with significant pragmatic implications for practice in nursing and beyond our discipline. Conclusion Theoretical coalescence is a means by which qualitative inquiry is broadened to make an impact, to accommodate new theoretical shifts and concepts, and to make qualitative research applied and accessible in new ways. PMID:29360688

  2. A simple protocol for protein extraction of recalcitrant fruit tissues suitable for 2-DE and MS analysis.

    PubMed

    Song, Jun; Braun, Gordon; Bevis, Eric; Doncaster, Kristen

    2006-08-01

    Fruit tissues are considered recalcitrant plant tissue for proteomic analysis. Three phenol-free protein extraction procedures for 2-DE were compared and evaluated on apple fruit proteins. Extraction incorporating a hot SDS buffer followed by TCA/acetone precipitation was found to be the most effective protocol. The results from SDS-PAGE and 2-DE analysis showed high-quality proteins. More than 500 apple polypeptides were separated on a small-scale 2-DE gel. The successful protocol was further tested on banana fruit, in which 504 and 386 proteins were detected in peel and flesh tissues, respectively. To demonstrate the quality of the extracted proteins, several protein spots from apple and banana peels were cut from 2-DE gels, analyzed by MS, and tentatively identified. The protocol described in this study is a simple procedure that could be routinely used in proteomic studies of many types of recalcitrant fruit tissues.

  3. Predicting the risk of malignancy in adnexal masses based on the Simple Rules from the International Ovarian Tumor Analysis group.

    PubMed

    Timmerman, Dirk; Van Calster, Ben; Testa, Antonia; Savelli, Luca; Fischerova, Daniela; Froyman, Wouter; Wynants, Laure; Van Holsbeke, Caroline; Epstein, Elisabeth; Franchi, Dorella; Kaijser, Jeroen; Czekierdowski, Artur; Guerriero, Stefano; Fruscio, Robert; Leone, Francesco P G; Rossi, Alberto; Landolfo, Chiara; Vergote, Ignace; Bourne, Tom; Valentin, Lil

    2016-04-01

    Accurate methods to preoperatively characterize adnexal tumors are pivotal for optimal patient management. A recent meta-analysis concluded that the International Ovarian Tumor Analysis algorithms such as the Simple Rules are the best approaches to preoperatively classify adnexal masses as benign or malignant. We sought to develop and validate a model to predict the risk of malignancy in adnexal masses using the ultrasound features in the Simple Rules. This was an international cross-sectional cohort study involving 22 oncology centers, referral centers for ultrasonography, and general hospitals. We included consecutive patients with an adnexal tumor who underwent a standardized transvaginal ultrasound examination and were selected for surgery. Data on 5020 patients were recorded in 3 phases from 2002 through 2012. The 5 Simple Rules features indicative of a benign tumor (B-features) and the 5 features indicative of malignancy (M-features) are based on the presence of ascites, tumor morphology, and degree of vascularity at ultrasonography. The gold standard was the histopathologic diagnosis of the adnexal mass (pathologist blinded to ultrasound findings). Logistic regression analysis was used to estimate the risk of malignancy based on the 10 ultrasound features and type of center. The diagnostic performance was evaluated by area under the receiver operating characteristic curve, sensitivity, specificity, positive likelihood ratio (LR+), negative likelihood ratio (LR-), positive predictive value (PPV), negative predictive value (NPV), and calibration curves. Data on 4848 patients were analyzed. The malignancy rate was 43% (1402/3263) in oncology centers and 17% (263/1585) in other centers. The area under the receiver operating characteristic curve on validation data was very similar in oncology centers (0.917; 95% confidence interval, 0.901-0.931) and other centers (0.916; 95% confidence interval, 0.873-0.945). Risk estimates showed good calibration. In all, 23% of

  4. Improved Analysis of Earth System Models and Observations using Simple Climate Models

    NASA Astrophysics Data System (ADS)

    Nadiga, B. T.; Urban, N. M.

    2016-12-01

    Earth system models (ESMs) are the most comprehensive tools we have to study climate change and develop climate projections. However, the computational infrastructure required and the cost incurred in running such ESMs preclude direct use of such models in conjunction with a wide variety of tools that can further our understanding of climate. Here we are referring to tools that range from dynamical systems tools, which give insight into underlying flow structure and topology, to tools from various applied mathematical and statistical techniques that are central to quantifying stability, sensitivity, uncertainty and predictability, to machine learning tools that are now being rapidly developed or improved. Our approach to facilitate the use of such models is to analyze output of ESM experiments (cf. CMIP) using a range of simpler models that consider integral balances of important quantities such as mass and/or energy in a Bayesian framework. We highlight the use of this approach in the context of the uptake of heat by the world oceans in the ongoing global warming. Indeed, since in excess of 90% of the anomalous radiative forcing due to greenhouse gas emissions is sequestered in the world oceans, the nature of ocean heat uptake crucially determines the surface warming that is realized (cf. climate sensitivity). Nevertheless, ESMs themselves are never run long enough to directly assess climate sensitivity. So, we consider a range of models based on integral balances--balances that have to be realized in all first-principles based models of the climate system, including the most detailed state-of-the-art climate simulations. The models range from simple models of energy balance to those that consider dynamically important ocean processes such as the conveyor-belt circulation (Meridional Overturning Circulation, MOC), North Atlantic Deep Water (NADW) formation, the Antarctic Circumpolar Current (ACC) and eddy mixing. Results from Bayesian analysis of such models using

  5. Visual conspicuity: a new simple standard, its reliability, validity and applicability.

    PubMed

    Wertheim, A H

    2010-03-01

    A general standard for quantifying conspicuity is described. It derives from a simple and easy method to quantitatively measure the visual conspicuity of an object. The method stems from the theoretical view that the conspicuity of an object is not a property of that object, but describes the degree to which the object is perceptually embedded in, i.e. laterally masked by, its visual environment. First, three variations of a simple method to measure the strength of such lateral masking are described, and empirical evidence for its reliability and validity is presented, as are several tests of predictions concerning the effects of viewing distance and ambient light. It is then shown how this method yields a conspicuity standard, expressed as a number, which can be made part of a rule of law, and which can be used to test whether or not, and to what extent, the conspicuity of a particular object, e.g. a traffic sign, meets a predetermined criterion. An additional feature is that, when used under different ambient light conditions, the method may also yield an index of the amount of visual clutter in the environment. Taken together, the evidence illustrates the method's applicability in both the laboratory and real-life situations. STATEMENT OF RELEVANCE: This paper concerns a proposal for a new method to measure visual conspicuity, yielding a numerical index that can be used in a rule of law. It is of importance to ergonomists and human factors specialists who are asked to measure the conspicuity of an object, such as a traffic or railroad sign, or any other object. The new method is simple and circumvents the need to perform elaborate (search) experiments and thus has great relevance as a simple tool for applied research.

  6. Evolutionary synthesis of simple stellar populations. Colours and indices

    NASA Astrophysics Data System (ADS)

    Kurth, O. M.; Fritze-v. Alvensleben, U.; Fricke, K. J.

    1999-07-01

    We construct evolutionary synthesis models for simple stellar populations using the evolutionary tracks from the Padova group (1993, 1994), theoretical colour calibrations from Lejeune et al. (1997, 1998) and fit functions for stellar atmospheric indices from Worthey et al. (1994). A Monte-Carlo technique allows us to obtain a smooth time evolution of both broad band colours in UBVRIK and a series of stellar absorption features for Single Burst Stellar Populations (SSPs). We present colours and indices for SSPs with ages from 1 x 10^9 yr to 1.6 x 10^10 yr and metallicities [M/H] = -2.3, -1.7, -0.7, -0.4, 0.0 and 0.4. Model colours and indices at an age of about a Hubble time are in good agreement with observed colours and indices of the Galactic and M 31 GCs.

  7. PCANet: A Simple Deep Learning Baseline for Image Classification?

    PubMed

    Chan, Tsung-Han; Jia, Kui; Gao, Shenghua; Lu, Jiwen; Zeng, Zinan; Ma, Yi

    2015-12-01

    In this paper, we propose a very simple deep learning network for image classification that is based on very basic data processing components: 1) cascaded principal component analysis (PCA); 2) binary hashing; and 3) blockwise histograms. In the proposed architecture, the PCA is employed to learn multistage filter banks. This is followed by simple binary hashing and block histograms for indexing and pooling. This architecture is thus called the PCA network (PCANet) and can be extremely easily and efficiently designed and learned. For comparison and to provide a better understanding, we also introduce and study two simple variations of PCANet: 1) RandNet and 2) LDANet. They share the same topology as PCANet, but their cascaded filters are either randomly selected or learned from linear discriminant analysis. We have extensively tested these basic networks on many benchmark visual data sets for different tasks, including Labeled Faces in the Wild (LFW) for face verification; the MultiPIE, Extended Yale B, AR, Facial Recognition Technology (FERET) data sets for face recognition; and MNIST for hand-written digit recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with the state-of-the-art features either prefixed, highly hand-crafted, or carefully learned [by deep neural networks (DNNs)]. Even more surprisingly, the model sets new records for many classification tasks on the Extended Yale B, AR, and FERET data sets and on MNIST variations. Additional experiments on other public data sets also demonstrate the potential of PCANet to serve as a simple but highly competitive baseline for texture classification and object recognition.
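
    The first processing stage can be sketched in a few lines of NumPy. This is a hedged reconstruction of the patch-PCA filter-learning step only (no binary hashing or histograms), with illustrative patch and filter counts and random images standing in for a real data set.

```python
import numpy as np

def pca_filters(images, k=3, num_filters=4):
    """Stage-1 PCANet-style filter bank: gather all k x k patches,
    subtract each patch's mean, and use the leading eigenvectors of
    the patch covariance matrix as convolution filters."""
    patches = []
    for img in images:
        h, w = img.shape
        for i in range(h - k + 1):
            for j in range(w - k + 1):
                p = img[i:i + k, j:j + k].ravel()
                patches.append(p - p.mean())       # patch-mean removal
    X = np.array(patches)                          # (num_patches, k*k)
    cov = X.T @ X / len(X)
    _, vecs = np.linalg.eigh(cov)                  # ascending eigenvalues
    top = vecs[:, ::-1][:, :num_filters].T         # top eigenvectors as rows
    return top.reshape(num_filters, k, k)

rng = np.random.default_rng(0)
imgs = [rng.random((8, 8)) for _ in range(5)]      # stand-in images
bank = pca_filters(imgs)                           # (num_filters, k, k)
```

    In the full architecture, a second stage repeats this filter learning on the stage-1 filter responses, and the outputs are then binarized and pooled into blockwise histograms.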

  8. Mass media and environmental issues: a theoretical analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parlour, J.W.

    1980-01-01

    A critique of the weak empirical and theoretical foundations of commentaries on the mass media in the environmental literature argues that they stem from an incidental rather than fundamental concern for the social dimensions of environmental problems. The contributions of information theory, cybernetics, sociology, and political science to micro and macro theories of mass communications are reviewed. Information from empirical analyses of the mass media's portrayal of social issues, including the environment, is related to Hall's dominant-ideology thesis of the mass media and the elitist-conflict model of society. It is argued that the media's portrayal of environmental issues is structured by dominant power-holding groups in society, with the result that the media effectively function to maintain and reinforce the status quo to the advantage of these dominant groups. 78 references.

  9. A simple force-motion relation for migrating cells revealed by multipole analysis of traction stress.

    PubMed

    Tanimoto, Hirokazu; Sano, Masaki

    2014-01-07

    For biophysical understanding of cell motility, the relationship between mechanical force and cell migration must be uncovered, but it remains elusive. Since cells migrate at small scale in dissipative circumstances, the inertia force is negligible and all forces should cancel out. This implies that one must quantify the spatial pattern of the force instead of just the summation to elucidate the force-motion relation. Here, we introduced multipole analysis to quantify the traction stress dynamics of migrating cells. We measured the traction stress of Dictyostelium discoideum cells and investigated the lowest two moments, the force dipole and quadrupole moments, which reflect rotational and front-rear asymmetries of the stress field. We derived a simple force-motion relation in which cells migrate along the force dipole axis with a direction determined by the force quadrupole. Furthermore, as a complementary approach, we also investigated fine structures in the stress field that show front-rear asymmetric kinetics consistent with the multipole analysis. The tight force-motion relation enables us to predict cell migration only from the traction stress patterns. Copyright © 2014 Biophysical Society. Published by Elsevier Inc. All rights reserved.
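
    The lowest moment can be computed directly from measured tractions. The sketch below is illustrative (a synthetic inward-pulling traction field stands in for real data): it evaluates the force dipole matrix M_ij = Σ x_i T_j, whose negative trace signals a contractile cell and whose symmetric part's eigenvectors give the dipole axis along which, per the abstract, the cell migrates.

```python
import numpy as np

def force_dipole(xy, T, dA=1.0):
    """Force dipole moment M_ij = sum over sites of x_i * T_j * dA,
    with positions taken relative to their centroid; xy and T are
    (N, 2) arrays of positions and traction vectors."""
    r = xy - xy.mean(axis=0)
    return (r[:, :, None] * T[:, None, :]).sum(axis=0) * dA

# Synthetic contractile "cell": tractions point inward on a unit circle
ang = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
xy = np.c_[np.cos(ang), np.sin(ang)]
T = -xy                        # inward-pointing tractions
M = force_dipole(xy, T)        # isotropic contraction gives M = -(N/2) I
```

    For an anisotropic (front-rear polarized) field, M would have unequal eigenvalues, and the quadrupole (the next moment, with one more factor of position) breaks the remaining front-rear symmetry.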

  10. Systemic Analysis Approaches for Air Transportation

    NASA Technical Reports Server (NTRS)

    Conway, Sheila

    2005-01-01

    Air transportation system designers have had only limited success using traditional operations research and parametric modeling approaches in their analyses of innovations. They need a systemic methodology for modeling safety-critical infrastructure that is comprehensive, objective, and sufficiently concrete, yet simple enough to be used with reasonable investment. The methodology must also be amenable to quantitative analysis so that issues of system safety and stability can be rigorously addressed. However, air transportation has proven itself an extensive, complex system whose behavior is difficult to describe, let alone predict. There is a wide range of system analysis techniques available, but some are more appropriate for certain applications than others. Specifically in the area of complex system analysis, the literature suggests that both agent-based models and network analysis techniques may be useful. This paper discusses the theoretical basis for each approach in these applications, and explores their historic and potential further use for air transportation analysis.

  11. Exploratory power of the harmony search algorithm: analysis and improvements for global numerical optimization.

    PubMed

    Das, Swagatam; Mukhopadhyay, Arpan; Roy, Anwit; Abraham, Ajith; Panigrahi, Bijaya K

    2011-02-01

    The theoretical analysis of evolutionary algorithms is believed to be very important for understanding their internal search mechanism and thus to develop more efficient algorithms. This paper presents a simple mathematical analysis of the explorative search behavior of a recently developed metaheuristic algorithm called harmony search (HS). HS is a derivative-free real parameter optimization algorithm, and it draws inspiration from the musical improvisation process of searching for a perfect state of harmony. This paper analyzes the evolution of the population-variance over successive generations in HS and thereby draws some important conclusions regarding the explorative power of HS. A simple but very useful modification to the classical HS has been proposed in light of the mathematical analysis undertaken here. A comparison with the most recently published variants of HS and four other state-of-the-art optimization algorithms over 15 unconstrained and five constrained benchmark functions reflects the efficiency of the modified HS in terms of final accuracy, convergence speed, and robustness.
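
    For readers unfamiliar with HS, a minimal sketch of the classical algorithm follows (the textbook form, not the modified variant proposed in the paper); the harmony-memory size, HMCR, PAR, and bandwidth values are illustrative defaults.

```python
import random

def harmony_search(f, dim, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.05,
                   iters=5000, seed=1):
    """Classical harmony search minimizing f over [lo, hi]^dim."""
    rng = random.Random(seed)
    lo, hi = bounds
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    cost = [f(h) for h in hm]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                # memory consideration
                x = hm[rng.randrange(hms)][d]
                if rng.random() < par:             # pitch adjustment
                    x += bw * rng.uniform(-1.0, 1.0)
            else:                                  # random re-selection
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        c = f(new)
        worst = max(range(hms), key=cost.__getitem__)
        if c < cost[worst]:                        # replace worst harmony
            hm[worst], cost[worst] = new, c
    best = min(range(hms), key=cost.__getitem__)
    return hm[best], cost[best]

sphere = lambda v: sum(x * x for x in v)           # simple test function
best, val = harmony_search(sphere, dim=5, bounds=(-5.0, 5.0))
```

    The population-variance analysis in the paper concerns exactly how the hmcr/par/bw choices above control the spread of the harmony memory over successive improvisations.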

  12. User manual for two simple postscript output FORTRAN plotting routines

    NASA Technical Reports Server (NTRS)

    Nguyen, T. X.

    1991-01-01

    Graphics are an important tool in engineering analysis and design. However, plotting routines that generate output on high-quality laser printers normally come in graphics packages, which tend to be expensive and system dependent. These factors become important for small computer systems or desktop computers, especially when only some form of a simple plotting routine is sufficient. With the Postscript language becoming popular, more and more Postscript laser printers are now available. Simple, versatile, low-cost plotting routines that can generate output on high-quality laser printers are needed, and standard FORTRAN plotting routines that produce output in the Postscript language are a logical choice. The purpose here is to explain two simple FORTRAN plotting routines that generate output in the Postscript language.

  13. A Simple Method for Automated Equilibration Detection in Molecular Simulations.

    PubMed

    Chodera, John D

    2016-04-12

    Molecular simulations intended to compute equilibrium properties are often initiated from configurations that are highly atypical of equilibrium samples, a practice which can generate a distinct initial transient in mechanical observables computed from the simulation trajectory. Traditional practice in simulation data analysis recommends this initial portion be discarded to equilibration, but no simple, general, and automated procedure for this process exists. Here, we suggest a conceptually simple automated procedure that does not make strict assumptions about the distribution of the observable of interest in which the equilibration time is chosen to maximize the number of effectively uncorrelated samples in the production timespan used to compute equilibrium averages. We present a simple Python reference implementation of this procedure and demonstrate its utility on typical molecular simulation data.
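
    A rough sketch of the idea (a simplified reimplementation, not the authors' reference code) follows: scan candidate equilibration times t0 and keep the one maximizing the effective number of uncorrelated samples N_eff = (n - t0) / g, where g is the statistical inefficiency of the production region. The transient-plus-noise trajectory at the end is synthetic.

```python
import math, random

def statistical_inefficiency(a):
    """g = 1 + 2 * sum of normalized autocovariances, truncated at the
    first non-positive term (a common simple estimator)."""
    n = len(a)
    mean = sum(a) / n
    d = [x - mean for x in a]
    var = sum(x * x for x in d) / n
    if var == 0.0:
        return 1.0
    g = 1.0
    for t in range(1, n):
        c = sum(d[i] * d[i + t] for i in range(n - t)) / ((n - t) * var)
        if c <= 0.0:
            break
        g += 2.0 * c * (1.0 - t / n)
    return max(1.0, g)

def detect_equilibration(a, step=10):
    """Choose t0 maximizing N_eff = (n - t0) / g(a[t0:])."""
    n = len(a)
    best_t0, best_neff = 0, 0.0
    for t0 in range(0, n - 10, step):
        neff = (n - t0) / statistical_inefficiency(a[t0:])
        if neff > best_neff:
            best_t0, best_neff = t0, neff
    return best_t0, best_neff

# Synthetic observable: decaying initial transient plus uncorrelated noise
rng = random.Random(0)
traj = [5.0 * math.exp(-t / 30.0) + rng.gauss(0.0, 0.5) for t in range(600)]
t0, neff = detect_equilibration(traj)    # t0 should land after the transient
```

    Including the transient inflates g and collapses N_eff, so the maximizer automatically discards the unequilibrated initial portion without any hand-tuned threshold.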

  14. Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity.

    PubMed

    Li, Harbin; McNulty, Steven G

    2007-10-01

    Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL estimates to the national scale could be developed. Specifically, we wanted to quantify CAL uncertainty under natural variability in 17 model parameters, and determine their relative contributions in predicting CAL. Results indicated that uncertainty in CAL came primarily from components of base cation weathering (BC(w); 49%) and acid neutralizing capacity (46%), whereas the most critical parameters were BC(w) base rate (62%), soil depth (20%), and soil temperature (11%). Thus, improvements in estimates of these factors are crucial to reducing uncertainty and successfully scaling up SMBE for national assessments of CAL.
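
    The kind of uncertainty attribution reported above can be reproduced in miniature with Monte Carlo propagation through a mass-balance-style sum. All distributions and values below are invented for illustration; the paper's model has 17 parameters and a different functional form:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# hypothetical input distributions (illustrative only)
bc_w = rng.normal(50.0, 10.0, n)   # base cation weathering term
anc  = rng.normal(30.0, 8.0, n)    # acid-neutralizing-capacity term
dep  = rng.normal(5.0, 1.0, n)     # uptake/deposition term
cal  = bc_w + anc - dep            # toy stand-in for the critical-load sum

# first-order variance share of each independent input
total_var = cal.var()
shares = {name: v.var() / total_var
          for name, v in [("BC_w", bc_w), ("ANC", anc), ("dep", dep)]}
```

    For independent inputs the shares sum to one, and the ranking identifies which parameter's variability dominates the output uncertainty, mirroring the paper's BC(w)-first ranking.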

  15. Research Techniques Made Simple: Analysis of Collective Cell Migration Using the Wound Healing Assay.

    PubMed

    Grada, Ayman; Otero-Vinas, Marta; Prieto-Castrillo, Francisco; Obagi, Zaidal; Falanga, Vincent

    2017-02-01

    Collective cell migration is a hallmark of wound repair, cancer invasion and metastasis, immune responses, angiogenesis, and embryonic morphogenesis. Wound healing is a complex cellular and biochemical process necessary to restore structurally damaged tissue. It involves dynamic interactions and crosstalk between various cell types, interaction with extracellular matrix molecules, and regulated production of soluble mediators and cytokines. In cutaneous wound healing, skin cells migrate from the wound edges into the wound to restore skin integrity. Analysis of cell migration in vitro is a useful assay to quantify alterations in cell migratory capacity in response to experimental manipulations. Although several methods exist to study cell migration (such as Boyden chamber assay, barrier assays, and microfluidics-based assays), in this short report we will explain the wound healing assay, also known as the "in vitro scratch assay" as a simple, versatile, and cost-effective method to study collective cell migration and wound healing. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  16. Methods for Maximizing the Learning Process: A Theoretical and Experimental Analysis.

    ERIC Educational Resources Information Center

    Atkinson, Richard C.

    This research deals with optimizing the instructional process. The approach adopted was to limit consideration to simple learning tasks for which adequate mathematical models could be developed. Optimal or suitable suboptimal instructional strategies were developed for the models. The basic idea was to solve for strategies that either maximize the…

  17. Nastran level 16 theoretical manual updates for aeroelastic analysis of bladed discs

    NASA Technical Reports Server (NTRS)

    Elchuri, V.; Smith, G. C. C.

    1980-01-01

    A computer program based on state-of-the-art compressor and structural technologies applied to bladed shrouded discs was developed and made operational in NASTRAN Level 16. Aeroelastic analyses cover modes and flutter. Theoretical manual updates are included.

  18. A simplified analysis of the multigrid V-cycle as a fast elliptic solver

    NASA Technical Reports Server (NTRS)

    Decker, Naomi H.; Taasan, Shlomo

    1988-01-01

    For special model problems, Fourier analysis gives exact convergence rates for the two-grid multigrid cycle and, for more general problems, provides estimates of the two-grid convergence rates via local mode analysis. A method is presented for obtaining multigrid convergence rate estimates for cycles involving more than two grids (using essentially the same analysis as for the two-grid cycle). For the simple case of the V-cycle used as a fast Laplace solver on the unit square, the k-grid convergence rate bounds obtained by this method are sharper than the bounds predicted by the variational theory. Both theoretical justification and experimental evidence are presented.
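
    The local mode analysis invoked above reduces, in its simplest form, to evaluating the smoother's Fourier symbol over the high-frequency modes. A minimal sketch for weighted Jacobi applied to the 1D Laplacian (a standard textbook case, not this paper's k-grid analysis):

```python
import numpy as np

# symbol of weighted Jacobi on the 1D Laplacian: S(theta) = 1 - omega*(1 - cos(theta))
omega = 2.0 / 3.0
thetas = np.linspace(np.pi / 2, np.pi, 2001)   # high-frequency (oscillatory) modes
mu = np.abs(1.0 - omega * (1.0 - np.cos(thetas))).max()
# classical result: the smoothing factor is 1/3 for omega = 2/3
```

    The smoothing factor mu bounds how fast the smoother damps the oscillatory error components that the coarse grid cannot represent, which is the building block of the two-grid and k-grid convergence estimates.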

  19. Falling Chains as Variable-Mass Systems: Theoretical Model and Experimental Analysis

    ERIC Educational Resources Information Center

    de Sousa, Celia A.; Gordo, Paulo M.; Costa, Pedro

    2012-01-01

    In this paper, we revisit, theoretically and experimentally, the fall of a folded U-chain and of a pile-chain. The model calculation implies the division of the whole system into two subsystems of variable mass, allowing us to explore the role of tensional contact forces at the boundary of the subsystems. This justifies, for instance, that the…

  20. Theory and simulations of covariance mapping in multiple dimensions for data analysis in high-event-rate experiments

    NASA Astrophysics Data System (ADS)

    Zhaunerchyk, V.; Frasinski, L. J.; Eland, J. H. D.; Feifel, R.

    2014-05-01

    Multidimensional covariance analysis and its validity for correlation of processes leading to multiple products are investigated from a theoretical point of view. The need to correct for false correlations induced by experimental parameters which fluctuate from shot to shot, such as the intensity of self-amplified spontaneous emission x-ray free-electron laser pulses, is emphasized. Threefold covariance analysis based on simple extension of the two-variable formulation is shown to be valid for variables exhibiting Poisson statistics. In this case, false correlations arising from fluctuations in an unstable experimental parameter that scale linearly with signals can be eliminated by threefold partial covariance analysis, as defined here. Fourfold covariance based on the same simple extension is found to be invalid in general. Where fluctuations in an unstable parameter induce nonlinear signal variations, a technique of contingent covariance analysis is proposed here to suppress false correlations. In this paper we also show a method to eliminate false correlations associated with fluctuations of several unstable experimental parameters.
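
    The correction the authors build on has a simple two-variable core: subtract from cov(X, Y) the part explained by the fluctuating parameter I, namely cov(X, I)·cov(I, Y)/var(I). The following sketch uses synthetic Poisson signals whose rates scale linearly with a fluctuating shot intensity (illustrative data, not the experiment's):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000
I = rng.gamma(5.0, 1.0, n)       # fluctuating shot intensity
x = rng.poisson(2.0 * I)         # two signals that both scale linearly with I
y = rng.poisson(3.0 * I)         # but are otherwise independent

def partial_cov(a, b, i):
    c = lambda u, v: np.cov(u, v)[0, 1]
    return c(a, b) - c(a, i) * c(i, b) / np.var(i, ddof=1)

raw = np.cov(x, y)[0, 1]         # large false correlation induced by I
corrected = partial_cov(x, y, I) # ~ 0 after the partial-covariance correction
```

    The residual near zero shows the false correlation removed, which is the linear-scaling case the abstract says partial covariance handles; nonlinear scaling is where their contingent covariance analysis comes in.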

  1. Risk analysis of sterile production plants: a new and simple, workable approach.

    PubMed

    Gapp, Guenther; Holzknecht, Peter

    2011-01-01

    A sterile active ingredient plant and a sterile finished dosage filling plant both comprise very complex production processes and systems. The sterility of the final product cannot be assured solely by sterility testing, in-process controls, environmental monitoring of cleanrooms, and media fill validations. Based on more than 15 years' experience, 4 years ago the authors created a new but very simple approach to the risk analysis of sterile plants. This approach is not a failure mode and effects analysis and therefore differs from the PDA Technical Report 44 Quality Risk Management for Aseptic Processes of 2008. The principle involves specific questions, defined in advance in the risk analysis questionnaire, to be answered by an expert team. If the questionnaire item is dealt with appropriately, the answer is assigned a low-risk number (1), and if very weak or deficient it gets a high-risk number (5). In addition to the numbers, colors from green (not problematic) through orange to red (very problematic) are attributed to make the results more striking. Because the individual units of each production plant have a defined and different impact on the overall sterility of the final product, different risk emphasis factors have to be taken into account (impact factor 1, 3, or 5). In a well-run cleanroom, the cleanroom operators have a lower impact than other units with regard to the contamination risk. The resulting number of the analyzed production plant and the diagram of the assessment subsequently offer very important and valuable information about a) the risk for microbiological contamination (sterility/endotoxins) of the product, and b) the compliance status of the production plant and the risk of failing lots, as well as probable observations of upcoming regulatory agency audits. Both items above are highly important for the safety of the patient. It is also an ideal tool to identify deficient or weak systems requiring improvement and upgrade.
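
    The scoring arithmetic described above — a risk number from 1 to 5 per questionnaire item, weighted by an impact factor of 1, 3, or 5 — is a weighted sum. A sketch with entirely hypothetical units and scores:

```python
# hypothetical questionnaire results: (risk number 1..5, impact factor 1/3/5)
units = {
    "water/steam systems":      (4, 5),
    "sterile filtration":       (2, 5),
    "cleanroom operators":      (3, 1),   # lower impact in a well-run cleanroom
    "environmental monitoring": (2, 3),
}
weighted = {name: risk * impact for name, (risk, impact) in units.items()}
total_score = sum(weighted.values())
weakest_unit = max(weighted, key=weighted.get)
```

    The total locates the plant on the green-to-red scale, while the per-unit numbers single out the systems most in need of upgrade.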

  2. EnviroLand: A Simple Computer Program for Quantitative Stream Assessment.

    ERIC Educational Resources Information Center

    Dunnivant, Frank; Danowski, Dan; Timmens-Haroldson, Alice; Newman, Meredith

    2002-01-01

    Introduces the EnviroLand computer program, which features lab simulations of theoretical calculations for quantitative analysis and environmental chemistry, and fate and transport models. Uses the program to demonstrate the nature of linear and nonlinear equations. (Author/YDS)

  3. On-the-Fly Cross Flow Laser Guided Separation of Aerosol Particles Based on Size, Refractive Index and Density-Theoretical Analysis

    DTIC Science & Technology

    2010-12-20

    “Optical chromatography: size determination by eluting particles,” Talanta 48(3), 551–557 (1999). 15. A. Ashkin and J. M. Dziedzic, “Optical levitation …the use of optical force in the gas phase, for example, levitation of airborne particles [15,16], and more recent studies on aerosol optical guiding… On-the-fly cross flow laser guided separation of aerosol particles based on size, refractive index and density: theoretical analysis. A. A. Lall

  4. Main rotor free wake geometry effects on blade air loads and response for helicopters in steady maneuvers. Volume 1: Theoretical formulation and analysis of results

    NASA Technical Reports Server (NTRS)

    Sadler, S. G.

    1972-01-01

    A mathematical model and computer program were implemented to study the main rotor free wake geometry effects on helicopter rotor blade air loads and response in steady maneuvers. The theoretical formulation and analysis of results are presented.

  5. Theoretical Characterization of Visual Signatures

    NASA Astrophysics Data System (ADS)

    Kashinski, D. O.; Chase, G. M.; di Nallo, O. E.; Scales, A. N.; Vanderley, D. L.; Byrd, E. F. C.

    2015-05-01

    We are investigating the accuracy of theoretical models used to predict the visible, ultraviolet, and infrared spectra, as well as other properties, of product materials ejected from the muzzle of currently fielded systems. Recent advances in solid propellants have made the management of muzzle signature (flash) a principal issue in weapons development across the calibers. A priori prediction of the electromagnetic spectra of formulations will allow researchers to tailor blends that yield desired signatures and determine spectrographic detection ranges. Quantum chemistry methods at various levels of sophistication have been employed to optimize molecular geometries, compute unscaled vibrational frequencies, and determine the optical spectra of specific gas-phase species. Electronic excitations are being computed using Time Dependent Density Functional Theory (TD-DFT). A full statistical analysis and reliability assessment of computational results is currently underway. A comparison of theoretical results to experimental values found in the literature is used to assess any effects of functional choice and basis set on calculation accuracy. The status of this work will be presented at the conference. Work supported by the ARL, DoD HPCMP, and USMA.

  6. The theoretical ultimate magnetoelectric coefficients of magnetoelectric composites by optimization design

    NASA Astrophysics Data System (ADS)

    Wang, H.-L.; Liu, B.

    2014-03-01

    This paper investigates what is the largest magnetoelectric (ME) coefficient of ME composites, and how to realize it. From the standpoint of energy conservation, a theoretical analysis is carried out on an imaginary lever structure consisting of a magnetostrictive phase, a piezoelectric phase, and a rigid lever. This structure is a generalization of various composite layouts for optimization on ME effect. The predicted theoretical ultimate ME coefficient plays a similar role as the efficiency of ideal heat engine in thermodynamics, and is used to evaluate the existing typical ME layouts, such as the parallel sandwiched layout and the serial layout. These two typical layouts exhibit ME coefficient much lower than the theoretical largest values, because in the general analysis the stress amplification ratio and the volume ratio can be optimized independently and freely, but in typical layouts they are dependent or fixed. To overcome this shortcoming and achieve the theoretical largest ME coefficient, a new design is presented. In addition, it is found that the most commonly used electric field ME coefficient can be designed to be infinitely large. We doubt the validity of this coefficient as a reasonable ME effect index and consider three more ME coefficients, namely the electric charge ME coefficient, the voltage ME coefficient, and the static electric energy ME coefficient. We note that the theoretical ultimate value of the static electric energy ME coefficient is finite and might be a more proper measure of ME effect.

  7. The theoretical ultimate magnetoelectric coefficients of magnetoelectric composites by optimization design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, H.-L.; Liu, B., E-mail: liubin@tsinghua.edu.cn

    2014-03-21

    This paper investigates what is the largest magnetoelectric (ME) coefficient of ME composites, and how to realize it. From the standpoint of energy conservation, a theoretical analysis is carried out on an imaginary lever structure consisting of a magnetostrictive phase, a piezoelectric phase, and a rigid lever. This structure is a generalization of various composite layouts for optimization on ME effect. The predicted theoretical ultimate ME coefficient plays a similar role as the efficiency of ideal heat engine in thermodynamics, and is used to evaluate the existing typical ME layouts, such as the parallel sandwiched layout and the serial layout. These two typical layouts exhibit ME coefficient much lower than the theoretical largest values, because in the general analysis the stress amplification ratio and the volume ratio can be optimized independently and freely, but in typical layouts they are dependent or fixed. To overcome this shortcoming and achieve the theoretical largest ME coefficient, a new design is presented. In addition, it is found that the most commonly used electric field ME coefficient can be designed to be infinitely large. We doubt the validity of this coefficient as a reasonable ME effect index and consider three more ME coefficients, namely the electric charge ME coefficient, the voltage ME coefficient, and the static electric energy ME coefficient. We note that the theoretical ultimate value of the static electric energy ME coefficient is finite and might be a more proper measure of ME effect.

  8. Split in phase singularities of an optical vortex by off-axis diffraction through a simple circular aperture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taira, Yoshitaka; Zhang, Shukui

    Here, diffraction patterns of an optical vortex through several shaped apertures reveal its topological charge. In this letter, we theoretically and experimentally show that diffraction of a Laguerre Gaussian beam through a circular aperture at an off-axis position can be used to determine the magnitude and sign of the topological charge. To our knowledge, this is the first time that a simple circular aperture has been used to detect orbital angular momentum of an incident optical vortex.

  9. Split in phase singularities of an optical vortex by off-axis diffraction through a simple circular aperture.

    PubMed

    Taira, Yoshitaka; Zhang, Shukui

    2017-04-01

    Diffraction patterns of an optical vortex through several shaped apertures reveal its topological charge. In this Letter, we theoretically and experimentally show that diffraction of a Laguerre Gaussian beam through a circular aperture at an off-axis position can be used to determine the magnitude and sign of the topological charge. To our knowledge, this is the first time that a simple circular aperture has been used to detect orbital angular momentum of an incident optical vortex.

  10. Split in phase singularities of an optical vortex by off-axis diffraction through a simple circular aperture

    DOE PAGES

    Taira, Yoshitaka; Zhang, Shukui

    2017-03-29

    Here, diffraction patterns of an optical vortex through several shaped apertures reveal its topological charge. In this letter, we theoretically and experimentally show that diffraction of a Laguerre Gaussian beam through a circular aperture at an off-axis position can be used to determine the magnitude and sign of the topological charge. To our knowledge, this is the first time that a simple circular aperture has been used to detect orbital angular momentum of an incident optical vortex.

  11. Theoretical investigations of two adamantane derivatives: A combined X-ray, DFT, QTAIM analysis and molecular docking

    NASA Astrophysics Data System (ADS)

    Al-Wahaibi, Lamya H.; Sujay, Subramaniam; Muthu, Gangadharan Ganesh; El-Emam, Ali A.; Venkataramanan, Natarajan S.; Al-Omary, Fatmah A. M.; Ghabbour, Hazem A.; Percino, Judith; Thamotharan, Subbiah

    2018-05-01

    A detailed structural analysis of two adamantane derivatives namely, ethyl 2-[(Z)-1-(adamantan-1-yl)-3-(phenyl)isothioureido]acetate I and ethyl 2-[(Z)-1-(adamantan-1-yl)-3-(4-fluorophenyl)isothioureido]acetate II is carried out to understand the effect of fluorine substitution. The introduction of the fluorine atom alters the crystal packing and is completely different from its parent compound. The fluorine substitution drastically reduced the intermolecular H⋯H contacts and this reduction is compensated by intermolecular F⋯H and F⋯F contacts. The relative contributions of various intermolecular contacts present in these structures were quantified using Hirshfeld surface analysis. Energetically significant molecular pairs were identified from the crystal structures of these compounds using the PIXEL method. The structures of I and II are optimized in gas and solvent phases using the B3LYP-D3/6-311++G(d,p) level of theory. The quantum theory of atoms-in-molecules (QTAIM) analysis was carried out to estimate the strengths of various intermolecular contacts present in these molecular dimers. The results suggest that the H⋯H bonding takes part in the stabilization of crystal structures. The experimental and theoretical UV-Vis results show the variations in HOMO and LUMO energy levels. In silico docking analysis indicates that both compounds I and II may exhibit inhibitory activity against 11-β-hydroxysteroid dehydrogenase 1 (11-β-HSD1).

  12. Strategy as simple rules.

    PubMed

    Eisenhardt, K M; Sull, D N

    2001-01-01

    The success of Yahoo!, eBay, Enron, and other companies that have become adept at morphing to meet the demands of changing markets can't be explained using traditional thinking about competitive strategy. These companies have succeeded by pursuing constantly evolving strategies in market spaces that were considered unattractive according to traditional measures. In this article--the third in an HBR series by Kathleen Eisenhardt and Donald Sull on strategy in the new economy--the authors ask, what are the sources of competitive advantage in high-velocity markets? The secret, they say, is strategy as simple rules. The companies know that the greatest opportunities for competitive advantage lie in market confusion, but they recognize the need for a few crucial strategic processes and a few simple rules. In traditional strategy, advantage comes from exploiting resources or stable market positions. In strategy as simple rules, advantage comes from successfully seizing fleeting opportunities. Key strategic processes, such as product innovation, partnering, or spinout creation, place the company where the flow of opportunities is greatest. Simple rules then provide the guidelines within which managers can pursue such opportunities. Simple rules, which grow out of experience, fall into five broad categories: how-to rules, boundary conditions, priority rules, timing rules, and exit rules. Companies with simple-rules strategies must follow the rules religiously and avoid the temptation to change them too frequently. A consistent strategy helps managers sort through opportunities and gain short-term advantage by exploiting the attractive ones. In stable markets, managers rely on complicated strategies built on detailed predictions of the future. But when business is complicated, strategy should be simple.

  13. Learning to Read: Should We Keep Things Simple?

    ERIC Educational Resources Information Center

    Reading Research Quarterly, 2015

    2015-01-01

    The simple view of reading describes reading comprehension as the product of decoding and listening comprehension and the relative contribution of each to reading comprehension across development. We present a cross-sectional analysis of first, second, and third graders (N = 123-125 in each grade) to assess the adequacy of the basic model.…

  14. Theoretical Model of Development of Information Competence among Students Enrolled in Elective Courses

    ERIC Educational Resources Information Center

    Zhumasheva, Anara; Zhumabaeva, Zaida; Sakenov, Janat; Vedilina, Yelena; Zhaxylykova, Nuriya; Sekenova, Balkumis

    2016-01-01

    The current study focuses on the research topic of creating a theoretical model of development of information competence among students enrolled in elective courses. In order to examine specific features of the theoretical model of development of information competence among students enrolled in elective courses, we performed an analysis of…

  15. Using a Theoretical Framework of Institutional Culture to Analyse an Institutional Strategy Document

    ERIC Educational Resources Information Center

    Jacobs, Anthea Hydi Maxine

    2016-01-01

    This paper builds on a conceptual analysis of institutional culture in higher education. A theoretical framework was proposed to analyse institutional documents of two higher education institutions in the Western Cape, for the period 2002 to 2012 (Jacobs 2012). The elements of this theoretical framework are "shared values and beliefs",…

  16. Adjectives That Aren't: An ERP-Theoretical Analysis of Adjectives in Spanish

    ERIC Educational Resources Information Center

    Bartlett, Laura B.

    2013-01-01

    This thesis investigates the syntactic status of adjectives in Spanish through a cross-disciplinary perspective, incorporating methodologies from both theoretical linguistics and neurolinguistics, specifically, event-related potentials (ERPs). It presents conflicting theories about the syntax of adjectives and explores the ways that the processing…

  17. A simple model of bipartite cooperation for ecological and organizational networks.

    PubMed

    Saavedra, Serguei; Reed-Tsochas, Felix; Uzzi, Brian

    2009-01-22

    In theoretical ecology, simple stochastic models that satisfy two basic conditions about the distribution of niche values and feeding ranges have proved successful in reproducing the overall structural properties of real food webs, using species richness and connectance as the only input parameters. Recently, more detailed models have incorporated higher levels of constraint in order to reproduce the actual links observed in real food webs. Here, building on previous stochastic models of consumer-resource interactions between species, we propose a highly parsimonious model that can reproduce the overall bipartite structure of cooperative partner-partner interactions, as exemplified by plant-animal mutualistic networks. Our stochastic model of bipartite cooperation uses simple specialization and interaction rules, and only requires three empirical input parameters. We test the bipartite cooperation model on ten large pollination data sets that have been compiled in the literature, and find that it successfully replicates the degree distribution, nestedness and modularity of the empirical networks. These properties are regarded as key to understanding cooperation in mutualistic networks. We also apply our model to an extensive data set of two classes of company engaged in joint production in the garment industry. Using the same metrics, we find that the network of manufacturer-contractor interactions exhibits similar structural patterns to plant-animal pollination networks. This surprising correspondence between ecological and organizational networks suggests that the simple rules of cooperation that generate bipartite networks may be generic, and could prove relevant in many different domains, ranging from biological systems to human society.
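
    A toy generator in the spirit of such niche-based stochastic models: plants receive niche values, animals receive feeding centres and ranges, and a link forms when a plant falls inside an animal's range. The rules and parameters here are illustrative, not the authors' actual three-parameter model:

```python
import numpy as np

def bipartite_niche(n_plants, n_animals, mean_range, seed=42):
    rng = np.random.default_rng(seed)
    plants = rng.random(n_plants)                    # plant niche values in [0, 1]
    centers = rng.random(n_animals)                  # animal feeding centres
    ranges = rng.exponential(mean_range, n_animals)  # animal feeding ranges
    # adjacency: animal i visits plant j if j's niche lies within i's range
    return np.abs(plants[None, :] - centers[:, None]) <= ranges[:, None] / 2

adj = bipartite_niche(40, 25, 0.3)
connectance = adj.mean()
degrees = adj.sum(axis=1)   # links per animal; heterogeneous by construction
```

    Metrics such as degree distribution, nestedness, and modularity would then be computed on `adj` and compared against empirical pollination or manufacturer-contractor networks, as the paper does.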

  18. A simple method for automated equilibration detection in molecular simulations

    PubMed Central

    Chodera, John D.

    2016-01-01

    Molecular simulations intended to compute equilibrium properties are often initiated from configurations that are highly atypical of equilibrium samples, a practice which can generate a distinct initial transient in mechanical observables computed from the simulation trajectory. Traditional practice in simulation data analysis recommends this initial portion be discarded to equilibration, but no simple, general, and automated procedure for this process exists. Here, we suggest a conceptually simple automated procedure that does not make strict assumptions about the distribution of the observable of interest, in which the equilibration time is chosen to maximize the number of effectively uncorrelated samples in the production timespan used to compute equilibrium averages. We present a simple Python reference implementation of this procedure, and demonstrate its utility on typical molecular simulation data. PMID:26771390

  19. Correlation and simple linear regression.

    PubMed

    Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G

    2003-06-01

    In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.
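
    The three quantities compared in the tutorial — the Pearson coefficient, the Spearman rho, and the least-squares regression line — can be computed from first principles in a few lines (a sketch that assumes no tied values in the rank step):

```python
import numpy as np

def pearson(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

def spearman(x, y):
    # Pearson correlation of the ranks (double argsort; valid without ties)
    rank = lambda v: np.argsort(np.argsort(v))
    return pearson(rank(x), rank(y))

def linreg(x, y):
    # least-squares slope and intercept for y = slope * x + intercept
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope = pearson(x, y) * y.std() / x.std()
    intercept = y.mean() - slope * x.mean()
    return slope, intercept
```

    A monotonic but nonlinear relationship (e.g. y = x³) gives a Spearman rho of exactly 1 while the Pearson coefficient falls below 1, which is the distinction the tutorial draws between the two coefficients.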

  20. Validation of the replica trick for simple models

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2018-04-01

    We discuss the replica analytic continuation using several simple models in order to prove mathematically the validity of the replica analysis, which is used in a wide range of fields related to large-scale complex systems. While replica analysis consists of two analytical techniques—the replica trick (or replica analytic continuation) and the thermodynamical limit (and/or order parameter expansion)—we focus our study on replica analytic continuation, which is the mathematical basis of the replica trick. We apply replica analysis to solve a variety of analytical models, and examine the properties of replica analytic continuation. Based on the positive results for these models we propose that replica analytic continuation is a robust procedure in replica analysis.
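
    The analytic continuation at issue rests on the identity E[ln Z] = lim_{n→0} (E[Z^n] − 1)/n. For a toy lognormal "partition function" the two sides can be checked numerically (an illustrative check, not one of the paper's models):

```python
import numpy as np

rng = np.random.default_rng(1)
m, s = 0.7, 0.5
z = np.exp(rng.normal(m, s, 400_000))  # ln Z ~ N(m, s^2), so E[ln Z] = m

direct = np.log(z).mean()              # quenched average computed directly
n = 1e-3                               # "replica number" pushed toward 0
replica = (np.mean(z ** n) - 1.0) / n  # replica-trick estimate of E[ln Z]
```

    For this model the exact moments E[Z^n] = exp(mn + s²n²/2) are analytic in n, so the continuation from integer n to n → 0 is unambiguous; the subtleties the paper examines arise when only integer-n moments are available.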

  1. Is BAMM Flawed? Theoretical and Practical Concerns in the Analysis of Multi-Rate Diversification Models.

    PubMed

    Rabosky, Daniel L; Mitchell, Jonathan S; Chang, Jonathan

    2017-07-01

    Bayesian analysis of macroevolutionary mixtures (BAMM) is a statistical framework that uses reversible jump Markov chain Monte Carlo to infer complex macroevolutionary dynamics of diversification and phenotypic evolution on phylogenetic trees. A recent article by Moore et al. (MEA) reported a number of theoretical and practical concerns with BAMM. Major claims from MEA are that (i) BAMM's likelihood function is incorrect, because it does not account for unobserved rate shifts; (ii) the posterior distribution on the number of rate shifts is overly sensitive to the prior; and (iii) diversification rate estimates from BAMM are unreliable. Here, we show that these and other conclusions from MEA are generally incorrect or unjustified. We first demonstrate that MEA's numerical assessment of the BAMM likelihood is compromised by their use of an invalid likelihood function. We then show that "unobserved rate shifts" appear to be irrelevant for biologically plausible parameterizations of the diversification process. We find that the purportedly extreme prior sensitivity reported by MEA cannot be replicated with standard usage of BAMM v2.5, or with any other version when conventional Bayesian model selection is performed. Finally, we demonstrate that BAMM performs very well at estimating diversification rate variation across the ~20% of simulated trees in MEA's data set for which it is theoretically possible to infer rate shifts with confidence. Due to ascertainment bias, the remaining 80% of their purportedly variable-rate phylogenies are statistically indistinguishable from those produced by a constant-rate birth-death process and were thus poorly suited for the summary statistics used in their performance assessment. We demonstrate that inferences about diversification rates have been accurate and consistent across all major previous releases of the BAMM software. We recognize an acute need to address the theoretical foundations of rate-shift models for

  2. Analysis of simple sequence repeat (SSR) structure and sequence within Epichloë endophyte genomes reveals impacts on gene structure and insights into ancestral hybridization events.

    PubMed

    Clayton, William; Eaton, Carla Jane; Dupont, Pierre-Yves; Gillanders, Tim; Cameron, Nick; Saikia, Sanjay; Scott, Barry

    2017-01-01

    Epichloë grass endophytes comprise a group of filamentous fungi of both sexual and asexual species. Known for the beneficial characteristics they endow upon their grass hosts, the identification of these endophyte species has been of great interest agronomically and scientifically. The use of simple sequence repeat loci and the variation in repeat elements has been used to rapidly identify endophyte species and strains, however, little is known of how the structure of repeat elements changes between species and strains, and where these repeat elements are located in the fungal genome. We report on an in-depth analysis of the structure and genomic location of the simple sequence repeat locus B10, commonly used for Epichloë endophyte species identification. The B10 repeat was found to be located within an exon of a putative bZIP transcription factor, suggesting possible impacts on polypeptide sequence and thus protein function. Analysis of this repeat in the asexual endophyte hybrid Epichloë uncinata revealed that the structure of B10 alleles reflects the ancestral species that hybridized to give rise to this species. Understanding the structure and sequence of these simple sequence repeats provides a useful set of tools for readily distinguishing strains and for gaining insights into the ancestral species that have undergone hybridization events.
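
    Loci like B10 are perfect tandem repeats of short motifs, which a small scanner can locate with a back-referencing regular expression. This is a toy finder; real SSR genotyping also has to handle imperfect and overlapping repeats:

```python
import re

def find_ssrs(seq, min_unit=2, max_unit=6, min_repeats=3):
    """Return (start, motif, repeat_count) for perfect tandem repeats."""
    hits = []
    for k in range(min_unit, max_unit + 1):
        # a k-base motif followed by at least (min_repeats - 1) copies of itself
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (k, min_repeats - 1))
        for m in pattern.finditer(seq):
            hits.append((m.start(), m.group(1), len(m.group(0)) // k))
    return hits

hits = find_ssrs("AATTACACACACGGCTTCTTCTTAA")
```

    Variation in the repeat count between strains is what makes such loci useful for rapid species and strain identification, and shifts in repeat structure within a coding region can alter the encoded polypeptide, as the abstract notes for the bZIP transcription factor.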

  3. [Prevalence of postmenopausal simple ovarian cyst diagnosed by ultrasound].

    PubMed

    Luján Irastorza, Jesús E; Hernández Marín, Imelda; Figueroa Preciado, Gudelia; Ayala, Aquiles R

    2006-10-01

    High-resolution ultrasound has led to the discovery of small ovarian cysts in asymptomatic postmenopausal women that would otherwise have gone undetected; these cysts frequently disappear spontaneously and rarely develop into cancer, yet they are often treated aggressively. Our aim was to determine the prevalence, evolution and treatment of simple ovarian cysts in postmenopausal women in our department, since no studies in our country had analyzed these data. We conducted a retrospective, descriptive study in the Service of Biology of Human Reproduction of the Hospital Juárez de México over a four-year period (2000-2003) that included 1,010 postmenopausal women. Statistical analysis was performed with SPSS, from which we obtained descriptive measures of location and dispersion together with graphic analysis. We found a simple-cyst prevalence of 8.2% (n = 83); mean age at diagnosis was 50.76 years with a standard deviation of 5.55; cyst diameter ranged from 0.614 to 12.883 cm, with a mean of 2.542 cm and a standard deviation of 1.91 cm. In 27.71% of cases (n = 23) the cysts disappeared spontaneously during follow-up of 3 to 36 months (mean 14.1). Surgery was indicated in 16.46% (n = 13): for an increase in cyst size in 9 patients (11.64%) and for a change in morphology from simple to complex in 4 (4.82%). Tumor markers were measured in only 37 patients (44.57%), all within normal ranges; no carcinoma was found in this group. The prevalence of simple ovarian cysts was similar to that reported in the literature. The risk of cancer in these cysts is extremely low when a suitable evaluation is performed, so conservative management is suggested for simple cysts smaller than 5 cm with CA-125 levels within normal ranges. We recommend follow-up every 3-6 months with color Doppler ultrasound and tumor markers for five years.
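
    The headline prevalence can be reproduced directly from the abstract's counts; a normal-approximation confidence interval (not reported in the abstract, added here purely for illustration) is a one-liner:

```python
import math

def prevalence_ci(cases, n, z=1.96):
    """Point prevalence with a normal-approximation 95% CI."""
    p = cases / n
    se = math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - z * se), p + z * se

# Figures from the abstract: 83 simple cysts among 1,010 women.
p, lo, hi = prevalence_ci(83, 1010)
print(f"{100 * p:.1f}% (95% CI {100 * lo:.1f}-{100 * hi:.1f}%)")
```

    For small counts a Wilson or exact interval would be preferable, but at n = 1,010 the normal approximation is adequate.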

  4. Theoretical Astrophysics at Fermilab

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The Theoretical Astrophysics Group works on a broad range of topics ranging from string theory to data analysis in the Sloan Digital Sky Survey. The group is motivated by the belief that a deep understanding of fundamental physics is necessary to explain a wide variety of phenomena in the universe. During the three years 2001-2003 of our previous NASA grant, over 120 papers were written; ten of our postdocs went on to faculty positions; and we hosted or organized many workshops and conferences. Kolb and collaborators focused on the early universe, in particular models and ramifications of the theory of inflation. They also studied models with extra dimensions, new types of dark matter, and the second order effects of super-horizon perturbations. Stebbins, Frieman, Hui, and Dodelson worked on phenomenological cosmology, extracting cosmological constraints from surveys such as the Sloan Digital Sky Survey. They also worked on theoretical topics such as weak lensing, reionization, and dark energy. This work has proved important to a number of experimental groups [including those at Fermilab] planning future observations. In general, the work of the Theoretical Astrophysics Group has served as a catalyst for experimental projects at Fermilab. An example of this is the Joint Dark Energy Mission. Fermilab is now a member of SNAP, and much of the work done here is by people formerly working on the accelerator. We have created an environment where many of these people made the transition from physics to astronomy. We also worked on many other topics related to NASA's focus: cosmic rays, dark matter, the Sunyaev-Zel'dovich effect, the galaxy distribution in the universe, and the Lyman alpha forest. The group organized and hosted a number of conferences and workshops over the years covered by the grant. Among them were:

  5. Life cycle assessment of pyrolysis, gasification and incineration waste-to-energy technologies: Theoretical analysis and case study of commercial plants.

    PubMed

    Dong, Jun; Tang, Yuanjun; Nzihou, Ange; Chi, Yong; Weiss-Hortala, Elsa; Ni, Mingjiang

    2018-06-01

    Municipal solid waste (MSW) pyrolysis and gasification are in development, stimulated by the search for more sustainable waste-to-energy (WtE) options. Since comprehensive comparisons of existing WtE technologies are fairly rare, this study conducts a life cycle assessment (LCA) using two sets of data: theoretical analysis, and case studies of large-scale commercial plants. Seven systems involving thermal conversion (pyrolysis, gasification, incineration) and energy utilization (steam cycle, gas turbine/combined cycle, internal combustion engine) are modeled. Theoretical analysis results show that pyrolysis and gasification, in particular coupled with a gas turbine/combined cycle, have the potential to lessen the environmental loadings. The benefits derive from an improved energy efficiency leading to less fossil-based energy consumption, and from reduced process emissions through syngas combustion. Comparison among the four operating plants (incineration, pyrolysis, gasification, gasification-melting) confirms a preferable performance of the gasification plant, attributed to syngas cleaning. Modern incineration is currently superior to pyrolysis and gasification-melting, owing to the effectiveness of modern flue gas cleaning, use of a combined heat and power (CHP) cycle, and ash recycling. The sensitivity analysis highlights the crucial roles of plant efficiency and land utilization of pyrolysis char. The study indicates that the heterogeneity of MSW and syngas purification technologies are the most relevant impediments for current pyrolysis/gasification-based WtE. Future development should address all process aspects in order to boost energy efficiency, improve incoming waste quality, and achieve efficient residue management. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. REML/BLUP and sequential path analysis in estimating genotypic values and interrelationships among simple maize grain yield-related traits.

    PubMed

    Olivoto, T; Nardino, M; Carvalho, I R; Follmann, D N; Ferrari, M; Szareski, V J; de Pelegrin, A J; de Souza, V Q

    2017-03-22

    Methodologies using restricted maximum likelihood/best linear unbiased prediction (REML/BLUP) in combination with sequential path analysis in maize are still limited in the literature. Therefore, the aims of this study were: i) to use REML/BLUP-based procedures in order to estimate variance components, genetic parameters, and genotypic values of simple maize hybrids, and ii) to fit stepwise regressions considering genotypic values to form a path diagram with multi-order predictors and minimum multicollinearity that explains the relationships of cause and effect among grain yield-related traits. Fifteen commercial simple maize hybrids were evaluated in multi-environment trials in a randomized complete block design with four replications. The environmental variance (78.80%) and genotype-by-environment variance (20.83%) accounted for more than 99% of the phenotypic variance of grain yield, which makes direct selection for this trait difficult. The sequential path analysis model allowed the selection of traits with high explanatory power and minimum multicollinearity, resulting in models with elevated fit (R 2 > 0.9 and ε < 0.3). The number of kernels per ear (NKE) and thousand-kernel weight (TKW) are the traits with the largest direct effects on grain yield (r = 0.66 and 0.73, respectively). The high accuracy of selection (0.86 and 0.89) associated with the high heritability of the mean (0.732 and 0.794) for NKE and TKW, respectively, indicated good reliability and prospects of success in the indirect selection of hybrids with high-yield potential through these traits. The negative direct effect of NKE on TKW (r = -0.856), however, must be considered. The joint use of mixed models and sequential path analysis is effective in the evaluation of maize-breeding trials.
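
    The "direct effects" in a sequential path analysis are standardized partial regression coefficients. For two predictors they have a closed form in terms of the pairwise correlations; the correlations below are hypothetical stand-ins, not the paper's estimates:

```python
def direct_effects_2pred(r12, r1y, r2y):
    """Standardized direct effects (path coefficients) of two
    predictors on a response, solving the 2x2 normal equations
    R_xx @ b = r_xy by hand."""
    det = 1.0 - r12 * r12
    b1 = (r1y - r12 * r2y) / det
    b2 = (r2y - r12 * r1y) / det
    return b1, b2

# Hypothetical correlations loosely inspired by the abstract: NKE and
# TKW each correlate positively with yield but negatively with each
# other, so their direct effects exceed their raw correlations.
b_nke, b_tkw = direct_effects_2pred(r12=-0.5, r1y=0.4, r2y=0.5)
print(round(b_nke, 2), round(b_tkw, 2))
```

    The negative predictor-predictor correlation suppresses the raw correlations with yield, which is exactly why path analysis separates direct from indirect effects.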

  7. Simple Elasticity Modeling and Failure Prediction for Composite Flexbeams

    NASA Technical Reports Server (NTRS)

    Makeev, Andrew; Armanios, Erian; OBrien, T. Kevin (Technical Monitor)

    2001-01-01

    A simple 2D boundary element analysis, suitable for developing cost effective models for tapered composite laminates, is presented. Constant stress and displacement elements are used. Closed-form fundamental solutions are derived. Numerical results are provided for several configurations to illustrate the accuracy of the model.

  8. Theoretical analysis of nBn infrared photodetectors

    NASA Astrophysics Data System (ADS)

    Ting, David Z.; Soibel, Alexander; Khoshakhlagh, Arezou; Gunapala, Sarath D.

    2017-09-01

    The depletion and surface leakage dark current suppression properties of unipolar barrier device architectures such as the nBn have been highly beneficial for III-V semiconductor-based infrared detectors. Using a one-dimensional drift-diffusion model, we theoretically examine the effects of contact doping, minority carrier lifetime, and absorber doping on the dark current characteristics of nBn detectors to explore some basic aspects of their operation. We find that in a properly designed nBn detector with highly doped excluding contacts the minority carriers are extracted to nonequilibrium levels under reverse bias in the same manner as in the high operating temperature (HOT) detector structure. Longer absorber Shockley-Read-Hall (SRH) lifetimes result in lower diffusion and depletion dark currents. Higher absorber doping can also lead to lower diffusion and depletion dark currents, but the benefit should be weighed against the possibility of reduced diffusion length due to shortened SRH lifetime. We also briefly examined nBn structures with unintended minority carrier blocking barriers due to excessive n-doping in the unipolar electron barrier, or due to a positive valence band offset between the barrier and the absorber. Both types of hole blocking structures lead to higher turn-on bias, although barrier n-doping could help suppress depletion dark current.
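
    The scaling behind the doping result can be seen in the textbook diffusion-limited dark current for an n-type absorber, J ≈ q·n_i²/N_d·√(D/τ): higher doping N_d lowers the diffusion current proportionally, while a shorter SRH lifetime τ raises it. A sketch with hypothetical parameter values (the abstract's analysis uses a full one-dimensional drift-diffusion model, not this closed form):

```python
import math

Q = 1.602e-19  # elementary charge, C

def diffusion_dark_current(n_i, N_d, D, tau):
    """Textbook diffusion-limited dark current density (A/cm^2) for a
    thick n-type absorber: J = q * n_i**2 / N_d * sqrt(D / tau).
    Illustrative scaling only."""
    return Q * n_i**2 / N_d * math.sqrt(D / tau)

# Hypothetical mid-wave-infrared-ish numbers:
J1 = diffusion_dark_current(n_i=1e13, N_d=1e15, D=25.0, tau=1e-6)
J2 = diffusion_dark_current(n_i=1e13, N_d=1e16, D=25.0, tau=1e-6)
print(J2 / J1)  # 10x higher doping -> proportionally lower current
```

    The trade-off noted in the abstract appears when raising N_d also shortens τ, partially undoing the 1/N_d gain.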

  9. Electrochemical and theoretical analysis of the reactivity of shikonin derivatives: dissociative electron transfer in esterified compounds.

    PubMed

    Armendáriz-Vidales, Georgina; Frontana, Carlos

    2014-09-07

    An electrochemical and theoretical analysis of a series of shikonin derivatives in aprotic media is presented. Results showed that the first electrochemical reduction signal is a reversible monoelectronic transfer, generating a stable semiquinone intermediate; the corresponding E(I)⁰ values were correlated with calculated values of electroaccepting power (ω(+)) and adiabatic electron affinities (A(Ad)), obtained with BHandHLYP/6-311++G(2d,2p) and considering the solvent effect, revealing the influence of intramolecular hydrogen bonding and the substituting group at position C-2 on the experimental reduction potential. For the second reduction step, the esterified compounds isobutyryl- and isovalerylshikonin presented a coupled chemical reaction following dianion formation. Analysis of the variation of the dimensionless cathodic peak potential values (ξ(p)) as a function of scan rate (v), together with complementary experiments in benzonitrile, suggested that this process follows a dissociative electron transfer, in which the rate of heterogeneous electron transfer is slow (~0.2 cm s(-1)) and the rate constant of the chemical process is at least 10(5) times larger.

  10. Theoretical analysis of the axial growth of nanowires starting with a binary eutectic droplet via vapor-liquid-solid mechanism

    NASA Astrophysics Data System (ADS)

    Liu, Qing; Li, Hejun; Zhang, Yulei; Zhao, Zhigang

    2018-06-01

    A series of theoretical analyses is carried out for the axial vapor-liquid-solid (VLS) growth of nanowires starting from a binary eutectic droplet. The growth model, which considers the entire axial VLS growth process, develops the approaches established in previous studies. In this model, both steady- and unsteady-state growth are considered. The amount of solute species in a variable liquid droplet, the nanowire length, radius, growth rate, and all other parameters during the entire axial growth process are treated as functions of growth time. The model provides theoretical predictions for the formation of nanowire shape and for the length-radius and growth rate-radius dependences. The model also suggests that the initial growth of a single nanowire is significantly affected by the Gibbs-Thomson effect due to the shape change. The model was applied to available experimental data for Si and Ge nanowires grown from Au-Si and Au-Ge systems, respectively, reported in other works. The calculations with the proposed model are in satisfactory agreement with those experimental results.
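
    The Gibbs-Thomson effect enters such growth models through a radius-dependent supersaturation, commonly yielding an axial growth rate proportional to (1 - r_c/r) that vanishes at a critical radius r_c. A minimal dimensionless sketch (v_inf and r_c below are assumed illustrative values, not parameters of the paper's model):

```python
def axial_growth_rate(r, v_inf, r_c):
    """Minimal Gibbs-Thomson sketch: the droplet's curvature reduces
    the chemical-potential gain, so the axial growth rate falls off as
    (1 - r_c / r) for a nanowire of radius r and vanishes at the
    critical radius r_c."""
    if r <= r_c:
        return 0.0
    return v_inf * (1.0 - r_c / r)

# Thinner wires grow more slowly; wires at or below r_c do not grow.
rates = [axial_growth_rate(r, v_inf=10.0, r_c=5.0) for r in (4, 5, 10, 50)]
print([round(v, 2) for v in rates])
```

    This is the qualitative growth rate-radius dependence the model predicts; the full treatment additionally tracks the droplet's solute content over time.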

  11. Airtightness the simple(CS) way

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, S.

    Builders who might buck against such time-consuming air-sealing methods as polyethylene wrap and the airtight drywall approach (ADA) may respond better to current strategies. One such method, called SimpleCS, has proven especially effective. SimpleCS, pronounced simplex, stands for simple caulk and seal. A modification of the ADA, SimpleCS is an air-sealing management tool, a simplified systems approach to building tight homes. The system addresses the crucial question of when and by whom various air-sealing steps should be done. It avoids the problems that often occur when later contractors cut open polyethylene wrap to drill holes in the drywall. The author describes how SimpleCS works, and the cost and training involved.

  12. Adaptation of a Simple Microfluidic Platform for High-Dimensional Quantitative Morphological Analysis of Human Mesenchymal Stromal Cells on Polystyrene-Based Substrates.

    PubMed

    Lam, Johnny; Marklein, Ross A; Jimenez-Torres, Jose A; Beebe, David J; Bauer, Steven R; Sung, Kyung E

    2017-12-01

    Multipotent stromal cells (MSCs, often called mesenchymal stem cells) have garnered significant attention within the field of regenerative medicine because of their purported ability to differentiate down musculoskeletal lineages. Given the inherent heterogeneity of MSC populations, recent studies have suggested that cell morphology may be indicative of MSC differentiation potential. Toward improving current methods and developing simple yet effective approaches for the morphological evaluation of MSCs, we combined passive pumping microfluidic technology with high-dimensional morphological characterization to produce robust tools for standardized high-throughput analysis. Using ultraviolet (UV) light as a modality for reproducible polystyrene substrate modification, we show that MSCs seeded on microfluidic straight channel devices incorporating UV-exposed substrates exhibited morphological changes that responded accordingly to the degree of substrate modification. Substrate modification also effected greater morphological changes in MSCs seeded at a lower rather than higher density within microfluidic channels. Despite largely comparable trends in morphology, MSCs seeded in microscale as opposed to traditional macroscale platforms displayed much higher sensitivity to changes in substrate properties. In summary, we adapted and qualified microfluidic cell culture platforms comprising simple straight channel arrays as a viable and robust tool for high-throughput quantitative morphological analysis to study cell-material interactions.

  13. Cumulative culture in the laboratory: methodological and theoretical challenges.

    PubMed

    Miton, Helena; Charbonneau, Mathieu

    2018-05-30

    In the last decade, cultural transmission experiments (transmission chains, replacement, closed groups and seeded groups) have become important experimental tools in investigating cultural evolution. However, these methods face important challenges, especially regarding the operationalization of theoretical claims. In this review, we focus on the study of cumulative cultural evolution, the process by which traditions are gradually modified and, for technological traditions in particular, improved upon over time. We identify several mismatches between theoretical definitions of cumulative culture and their implementation in cultural transmission experiments. We argue that observed performance increase can be the result of participants learning faster in a group context rather than effectively leading to a cumulative effect. We also show that in laboratory experiments, participants are asked to complete quite simple tasks, which can undermine the evidential value of the diagnostic criterion traditionally used for cumulative culture (i.e. that cumulative culture is a process that produces solutions that no single individual could have invented on their own). We show that the use of unidimensional metrics of cumulativeness drastically curtails the variation that may be observed, which raises specific issues in the interpretation of the experimental evidence. We suggest several solutions to these mismatches (learning times, task complexity and variation) and develop the use of design spaces in experimentally investigating old and new questions about cumulative culture. © 2018 The Author(s).

  14. The credibility of exposure therapy: Does the theoretical rationale matter?

    PubMed

    Arch, Joanna J; Twohig, Michael P; Deacon, Brett J; Landy, Lauren N; Bluett, Ellen J

    2015-09-01

    Little is understood about how the public perceives exposure-based therapy (ET) for treating anxiety and trauma-related disorders or how ET rationales affect treatment credibility. Distinct approaches to framing ET are practiced, including those emphasized in traditional cognitive behavioral therapy, acceptance and commitment therapy, and the more recent inhibitory learning model. However, their relative effect on ET's credibility remains unknown. A final sample of 964 U.S. adults provided baseline views of ET. Participants rated ET treatment credibility following a simple ET definition (pre-rationale) and following randomization to rationale modules addressing ET goals, fear, and cognitive strategies from distinct theoretical perspectives (post-rationale). Baseline ET views, symptoms, and sociodemographic characteristics were examined as putative moderators and predictors. At baseline, the majority had never heard of ET. From pre- to post-rationale, ET treatment credibility significantly increased but the rationales' theoretical perspective had little impact. More negative baseline ET views, specific ethnic/racial minority group status, and lower education moderated or predicted greater increases in treatment credibility following the rationale. ET remains relatively unknown as a treatment for anxiety or trauma, supporting the need for direct-to-consumer marketing. Diverse theory-driven rationales similarly increased ET credibility, particularly among those less likely to use ET. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Game Theoretical Analysis on Cooperation Stability and Incentive Effectiveness in Community Networks.

    PubMed

    Song, Kaida; Wang, Rui; Liu, Yi; Qian, Depei; Zhang, Han; Cai, Jihong

    2015-01-01

    Community networks, the distinguishing feature of which is membership admittance, appear on P2P networks, social networks, and conventional Web networks. Joining the network costs money, time or network bandwidth, but the individuals get access to special resources owned by the community in return. The prosperity and stability of the community are determined by both the policy of admittance and the attraction of the privileges gained by joining. However, some misbehaving users can get the dedicated resources through illicit, low-cost approaches, which introduces instability into the community, a phenomenon that will destroy the membership policy. In this paper, we analyze the stability of such communities using game theory. We propose a game-theoretical model of stability analysis in community networks and provide conditions for a stable community. We then extend the model to analyze the effectiveness of different incentive policies, which could be used when the community cannot retain its members in certain situations. We then verify those models through simulation. Finally, we discuss several ways to promote a community network's stability by adjusting the network's properties and offer some proposals on the design of these types of networks from the standpoints of game theory and stability.
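
    The stability condition described above can be caricatured as a payoff comparison: the honest "join" strategy is an equilibrium when it pays at least as much, in expectation, as illicit access. All numbers below are hypothetical, chosen only to show how stronger enforcement restores stability:

```python
def joining_is_stable(benefit, join_cost, illicit_benefit,
                      detect_prob, penalty):
    """Toy stability check in the spirit of the abstract: honest
    membership is stable when its payoff beats the expected payoff of
    obtaining the community's resources illicitly."""
    honest = benefit - join_cost
    cheat = (1 - detect_prob) * illicit_benefit - detect_prob * penalty
    return honest >= cheat

# Raising the detection probability (an incentive/enforcement policy)
# flips the community from unstable to stable:
print(joining_is_stable(10, 4, 9, 0.1, 5))   # weak enforcement
print(joining_is_stable(10, 4, 9, 0.5, 5))   # stronger enforcement
```

    The paper's model generalizes this one-shot comparison to equilibrium conditions over the whole member population.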

  16. Simple, Internally Adjustable Valve

    NASA Technical Reports Server (NTRS)

    Burley, Richard K.

    1990-01-01

    Valve containing simple in-line, adjustable, flow-control orifice made from ordinary plumbing fitting and two allen setscrews. Construction of valve requires only simple drilling, tapping, and grinding. Orifice installed in existing fitting, avoiding changes in rest of plumbing.

  17. Historical and Theoretical Perspectives on Appalachia's Economic Dependency.

    ERIC Educational Resources Information Center

    Salstrom, Paul

    The roots of Appalachia's economic dependency go back to the region's first settlers in the 1730s. Historical and theoretical analysis of this phenomenon is useful in understanding the current status of the area, including the status of education. The early settler sought a "competency"--enough productive property to support a family.…

  18. A game-theoretical approach to multimedia social networks security.

    PubMed

    Liu, Enqiang; Liu, Zengliang; Shao, Fei; Zhang, Zhiyong

    2014-01-01

    The contents access and sharing in multimedia social networks (MSNs) mainly rely on access control models and mechanisms. Simple adoption of security policies in the traditional access control model cannot effectively establish a trust relationship among parties. This paper proposes a novel two-party trust architecture (TPTA) for a generic MSN scenario. Under this architecture, security policies are adopted through game-theoretic analyses and decisions. Based on formalized utilities of security policies and security rules, the choice of security policies in content access is modeled as a game between the content provider and the content requester. By analyzing how combinations of security policies affect each party's utility, a Nash equilibrium is reached, that is, an optimal and stable combination of security policies that establishes and enhances trust among stakeholders.

  19. A Game-Theoretical Approach to Multimedia Social Networks Security

    PubMed Central

    Liu, Enqiang; Liu, Zengliang; Shao, Fei; Zhang, Zhiyong

    2014-01-01

    The contents access and sharing in multimedia social networks (MSNs) mainly rely on access control models and mechanisms. Simple adoption of security policies in the traditional access control model cannot effectively establish a trust relationship among parties. This paper proposes a novel two-party trust architecture (TPTA) for a generic MSN scenario. Under this architecture, security policies are adopted through game-theoretic analyses and decisions. Based on formalized utilities of security policies and security rules, the choice of security policies in content access is modeled as a game between the content provider and the content requester. By analyzing how combinations of security policies affect each party's utility, a Nash equilibrium is reached, that is, an optimal and stable combination of security policies that establishes and enhances trust among stakeholders. PMID:24977226

  20. Experimental and theoretical characterization of an AC electroosmotic micromixer.

    PubMed

    Sasaki, Naoki; Kitamori, Takehiko; Kim, Haeng-Boo

    2010-01-01

    We have reported on a novel microfluidic mixer based on AC electroosmosis. To elucidate the mixer characteristics, we performed detailed measurements of mixing under various experimental conditions including applied voltage, frequency and solution viscosity. The results are discussed through comparison with results obtained from a theoretical model of AC electroosmosis. As predicted from the theoretical model, we found that a larger voltage (approximately 20 V(p-p)) led to more rapid mixing, while the dependence of the mixing on frequency (1-5 kHz) was insignificant under the present experimental conditions. Furthermore, the dependence of the mixing on viscosity was successfully explained by the theoretical model, and the applicability of the mixer in viscous solution (2.83 mPa s) was confirmed experimentally. By using these results, it is possible to estimate the mixing performance under given conditions. These estimations can provide guidelines for using the mixer in microfluidic chemical analysis.
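
    The voltage and viscosity dependences reported above are consistent with the classic AC-electroosmotic slip-velocity scaling, used here only as an illustrative stand-in for the paper's model: u ∝ εV²/(μr) · Ω²/(1+Ω²)², which grows with the square of the voltage, falls inversely with viscosity, and varies weakly with frequency near its peak at nondimensional frequency Ω = 1. A sketch with hypothetical parameter values:

```python
def aceo_slip_velocity(eps, V, mu, r, omega_nd):
    """Classic AC-electroosmotic slip-velocity scaling (Ramos-type
    coplanar-electrode result, illustrative only):
    u ~ eps*V**2/(8*mu*r) * W**2/(1+W**2)**2, W = nondimensional
    frequency. Peaks at W = 1; scales as V**2 and 1/mu."""
    w2 = omega_nd ** 2
    return eps * V**2 / (8 * mu * r) * w2 / (1 + w2) ** 2

# Hypothetical values: water-like permittivity, 20 V(p-p), 10 um scale.
base = aceo_slip_velocity(7e-10, 20.0, 1e-3, 1e-5, 1.0)
viscous = aceo_slip_velocity(7e-10, 20.0, 2.83e-3, 1e-5, 1.0)
print(round(base / viscous, 2))  # 2.83x viscosity -> 2.83x slower
```

    The 1/μ factor is what lets a measured mixing curve in a 2.83 mPa s solution be predicted from the aqueous case, as the abstract describes.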