Science.gov

Sample records for simple theoretical analysis

  1. Two 27 MHz Simple Inductive Loops, as Hyperthermia Treatment Applicators: Theoretical Analysis and Development

    PubMed Central

    Kouloulias, Vassilis; Karanasiou, Irene; Koutsoupidou, Maria; Matsopoulos, George; Kouvaris, John; Uzunoglu, Nikolaos

    2015-01-01

Background. Deep heating is still the main subject of research in hyperthermia treatment. Aim. The purpose of this study was to develop and analyze a simple loop as a heating applicator. Methods. The performance of two 27 MHz inductive loop antennas as potential applicators in hyperthermia treatment was studied theoretically as well as experimentally in phantoms. Two inductive loop antennas with radii of 7 cm and 9 cm were designed, simulated, and constructed. The theoretical analysis was performed using a Green's function and Bessel function technique. Experiments were performed with phantoms irradiated by the aforementioned loop antennas. Results. The specific absorption rate (SAR) distributions were estimated from the respective local phantom temperature measurements. Comparisons of the theoretical, simulation, and experimental studies showed satisfactory agreement. The penetration depth was determined theoretically and experimentally to be in the range of 2–3.5 cm. Conclusion. The theoretical and experimental analysis showed that such current loops are efficient where peripheral heating of a spherical tumor formation located at 2–3.5 cm depth is required. PMID:26649070

  2. A theoretical analysis of steady-state photocurrents in simple silicon diodes

    NASA Technical Reports Server (NTRS)

    Edmonds, L.

    1995-01-01

A theoretical analysis solves for the steady-state photocurrents produced by a given photo-generation rate function with negligible recombination in simple silicon diodes, consisting of a uniformly doped quasi-neutral region (called the 'substrate' below) adjacent to a p-n junction depletion region (DR). Special attention is given to conditions that produce 'funneling' (a term used by the single-event-effects community) under steady-state conditions. Funneling occurs when carriers are generated so fast that the DR becomes flooded and partially or completely collapses. Some or nearly all of the applied voltage plus built-in potential, normally across the DR, then appears across the substrate. This substrate voltage drop affects substrate currents. The steady-state problem can provide some qualitative insights into the more difficult transient problem. First, it was found that funneling can be induced from a distance, i.e., from carriers generated at locations outside of the DR. Second, it was found that the substrate can divide into two subregions, with one controlling substrate resistance and the other characterized by ambipolar diffusion. Finally, funneling was found to be more difficult to induce in the p+/n diode than in the n+/p diode. The carrier density exceeding the doping density in the substrate and at the DR boundary is not a sufficient condition to collapse a DR.

  3. Theoretical and computational analysis of the quantum radar cross section for simple geometrical targets

    NASA Astrophysics Data System (ADS)

    Brandsema, Matthew J.; Narayanan, Ram M.; Lanzagorta, Marco

    2017-01-01

The concept of the quantum radar cross section (QRCS) has generated interest due to its promising feature of enhanced sidelobe target visibility in comparison to the classical radar cross section. Researchers have simulated the QRCS for very limited geometries and even developed approximations to reduce the computational complexity of the simulations. This paper develops an alternate theoretical framework for calculating the QRCS. This new framework yields an alternative form of the QRCS expression in terms of Fourier transforms. This formulation is much easier to work with mathematically and allows one to derive analytical solutions for various geometries, which provides an explanation for the aforementioned sidelobe advantage. We also verify the resulting equations by comparing with numerical simulations, as well as provide an error analysis of these simulations to ensure the accuracy of the results. Comparison of our simulation results with the analytical solutions reveals that they agree with one another extremely well.

  4. Simple theoretical models for composite rotor blades

    NASA Technical Reports Server (NTRS)

    Valisetty, R. R.; Rehfield, L. W.

    1984-01-01

The development of theoretical rotor blade structural models for designs based upon composite construction is discussed. Care was exercised to include a number of nonclassical effects that previous experience indicated would be potentially important to account for. A model representative of the size of a main rotor blade is analyzed in order to assess the importance of various influences. The findings of this model study suggest that for the slenderness and closed-cell construction considered, the refinements are of little importance and a classical type of theory is adequate. The potential of elastic tailoring is dramatically demonstrated, so the generality of arbitrary ply layup in the cell wall is needed to exploit this opportunity.

  5. Theoretical and natural strain patterns in ductile simple shear zones

    NASA Astrophysics Data System (ADS)

    Ingles, Jacques

    1985-06-01

A simple empirical model representing the variation of shear strain throughout a simple shear zone allows us to determine the evolution of finite strain as well as the progressive shape changes of passive markers. Theoretical strain patterns (intensity and orientation of finite strain trajectories, deformed shapes of initially planar, equidimensional and non-equidimensional passive markers) compare remarkably well with patterns observed in natural and experimental zones of ductile simple shear (intensity and orientation of schistosity, shape changes of markers, foliation developed by deformation of markers). The deformed shapes of initially equidimensional and non-equidimensional passive markers are controlled by a coefficient P, the product of (1) the ratio between marker size and shear zone thickness and (2) the shear gradient across the zone. For small values of P (approximately P < 2), the original markers change nearly into ellipses, while large values of P lead to "retort"-shaped markers. This theoretical study also allows us to predict, throughout a simple shear zone, various relationships between the principal finite strain trajectory, planar passive markers and foliations developed by deformation of initially equidimensional passive markers.

  6. The relationship between local liquid density and force applied on a tip of atomic force microscope: A theoretical analysis for simple liquids

    NASA Astrophysics Data System (ADS)

    Amano, Ken-ichi; Suzuki, Kazuhiro; Fukuma, Takeshi; Takahashi, Ohgi; Onishi, Hiroshi

    2013-12-01

The density of a liquid is not uniform when placed on a solid. The structured liquid pushes or pulls a probe employed in atomic force microscopy, as demonstrated in a number of experimental studies. In the present study, the relation between the force on a probe and the local density of a liquid is derived based on the statistical mechanics of simple liquids. When the probe is identical to a solvent molecule, the strength of the force is shown to be proportional to the vertical gradient of ln(ρ_DS), with the local liquid density on the solid surface being ρ_DS. The intrinsic liquid density on a solid is numerically calculated and compared with the density reconstructed from the force on a probe that is identical or not identical to the solvent molecule.
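
    The abstract states the result only in words; written compactly (a hedged sketch, with the prefactor and sign convention assumed rather than taken from the paper), the stated proportionality reads:

```latex
% Hedged sketch of the stated relation; the k_B T prefactor and sign are assumptions.
% f(z): mean vertical force on a probe identical to a solvent molecule at height z
% rho_DS(z): local solvent density on the solid surface
f(z) \;\propto\; k_{\mathrm{B}}T\,\frac{\partial}{\partial z}\ln\rho_{\mathrm{DS}}(z)
```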

  7. The relationship between local liquid density and force applied on a tip of atomic force microscope: A theoretical analysis for simple liquids

    SciTech Connect

Amano, Ken-ichi; Takahashi, Ohgi; Suzuki, Kazuhiro; Fukuma, Takeshi; Onishi, Hiroshi

    2013-12-14

The density of a liquid is not uniform when placed on a solid. The structured liquid pushes or pulls a probe employed in atomic force microscopy, as demonstrated in a number of experimental studies. In the present study, the relation between the force on a probe and the local density of a liquid is derived based on the statistical mechanics of simple liquids. When the probe is identical to a solvent molecule, the strength of the force is shown to be proportional to the vertical gradient of ln(ρ_DS), with the local liquid density on the solid surface being ρ_DS. The intrinsic liquid density on a solid is numerically calculated and compared with the density reconstructed from the force on a probe that is identical or not identical to the solvent molecule.

  8. A Simple Plant Growth Analysis.

    ERIC Educational Resources Information Center

    Oxlade, E.

    1985-01-01

    Describes the analysis of dandelion peduncle growth based on peduncle length, epidermal cell dimensions, and fresh/dry mass. Methods are simple and require no special apparatus or materials. Suggests that limited practical work in this area may contribute to students' lack of knowledge on plant growth. (Author/DH)

  9. Analysis of Simple Neural Networks

    DTIC Science & Technology

    1988-12-20

ANALYSIS OF SIMPLE NEURAL NETWORKS. Chedsada Chinrungrueng. Master's Report under the supervision of Prof. Carlo H. Sequin, Department of... ...and guidance. I have learned a great deal from his teaching, knowledge, and criticism. 1. MOTIVATION: ANALYSIS OF SIMPLE NEURAL NETWORKS. Chedsada Chinrungrueng

  10. Simple theoretical model for ion cooperativity in aqueous solutions of simple inorganic salts and its effect on water surface tension.

    PubMed

    Gao, Yi Qin

    2011-11-03

Careful analysis of experimental data showed that the salt aqueous solution/air surface tension depends in a rather complicated manner on salt composition and points to the importance of ion cooperativity. In this short article, we include the selective binding of anions over cations at interfaces (as revealed from molecular dynamics simulations, spectroscopic measurements, and Record's analysis of the surface tension data) and the anion-cation association (based on the observation of matching water affinity) in a simple theoretical model to understand salt effects on surface tension. The introduction of the surface effect and ion association provides a qualitative explanation of the experimental data, in particular, the strong anion dependence of the cations' rank according to their ability to increase water surface tension. We hope that the physical insight provided by this study can be used to point to new directions for more detailed studies.

  11. Theoretical Aspect of Low Pressure Discharges in Simple Gasses

    DTIC Science & Technology

    1994-03-28

O+, O-, O2-, O3, O3+, O3-, electronically excited oxygen, electrons, and possibly clusters. The number of possible reaction channels is huge, and one... which are not simple chemically to begin with, and which then decay into many species of ions and free radicals, this information is not always easy to... radiative decays. Another reaction path is for the atom to recombine into a highly excited atomic state and then radiatively decay to the ground state

  12. Evaluation of a Simple Theoretical Expression for Hadley Cell Width

    NASA Astrophysics Data System (ADS)

    Rees, K.; Garrett, T. J.; Reichler, T.; Staten, P. W.

    2016-12-01

    The latitudinal width of Earth's Hadley cell has a simple expression that is a function of the planetary radius, and the atmosphere's angular velocity, stability, and depth: the cell width varies as the square root of the Brunt-Vaisala frequency and the tropospheric depth such that a deep, stable atmosphere extends further poleward. There are multiple avenues to this expression based on either the dynamics (Schneider (1977); Held (2000)) or the statistical mechanics (Garrett et al (2016)) of the cell. Here, we contrast these approaches and test the expression for application to other planetary bodies within the solar system. In general, we find the expression accounts for the wide range of cell widths, ranging from very narrow (Jupiter) to very wide (Titan). However, there is a choice of atmospheric depth to be considered in the calculation, and we find that the effective depth that leads to best agreement with observed cell widths is closest to the tropospheric density scale height rather than the depth of the troposphere itself, as has commonly been assumed. There remain significant discrepancies that require explanation, however. For example, the effective depth for Earth is 11.3 km rather than 8.5 km and Uranus has an effective depth of 20 km compared to its tropospheric scale height of 27.7 km. We discuss possible explanations for these differences.
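
    As a rough illustration of the scaling described above, the sketch below evaluates a Held (2000)-style square-root expression, width ∝ sqrt(N·H/(Ω·a)); the functional form, the omission of dimensionless prefactors, and the Earth-like parameter values are assumptions for illustration, not the exact expression tested in this work.

```python
import math

def hadley_width_rad(N, H, omega, a):
    """Square-root scaling for Hadley cell width (radians):
    width ~ sqrt(N * H / (omega * a)).
    A Held (2000)-style form consistent with the abstract's wording;
    the paper's expression may differ by dimensionless prefactors."""
    return math.sqrt(N * H / (omega * a))

# Illustrative Earth-like values (assumed, not taken from the paper):
N = 1.2e-2        # Brunt-Vaisala frequency, 1/s
H = 8.5e3         # tropospheric density scale height, m
omega = 7.29e-5   # planetary rotation rate, 1/s
a = 6.371e6       # planetary radius, m

phi = hadley_width_rad(N, H, omega, a)
print(f"Hadley cell half-width ~ {math.degrees(phi):.0f} degrees")
```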

  13. A theoretical model of sheath fold morphology in simple shear

    NASA Astrophysics Data System (ADS)

    Reber, Jacqueline E.; Dabrowski, Marcin; Galland, Olivier; Schmid, Daniel W.

    2013-04-01

Sheath folds are highly non-cylindrical structures often associated with shear zones. The geometry of sheath folds, especially cross-sections perpendicular to the stretching direction that display eye-patterns, has been used in the field to deduce kinematic information such as shear sense and bulk strain type. However, how sheath folds form and how they evolve with increasing strain is still a matter of debate. We investigate the formation of sheath folds around a weak inclusion acting as a slip surface in simple shear by means of an analytical model. We systematically vary the slip surface orientation and shape and evaluate the impact on the evolving eye-pattern. In addition we compare our results to existing classifications. Based on field observations it has been suggested that the shear sense of a shear zone can be determined by knowing the position of the center of an eye-pattern and the closing direction of the corresponding sheath fold. In our modeled sheath folds we can observe for a given strain that the center of the eye-structure is subject to change in height with respect to the upper edge of the outermost closed contour for different cross-sections perpendicular to the shear direction. This results in a large variability in layer thickness, questioning the usefulness of sheath folds as shear sense indicators. The location of the center of the eye structure, however, is largely invariant to the initial configurations of the slip surface as well as to strain. It has been suggested that the ratio of the aspect ratios of the innermost and outermost closed contours in eye-patterns could be linked to the bulk strain type based on field observations. We apply this classification to our modeled sheath folds and we observe that the values of the aspect ratios of the closed contours within the eye-pattern are dependent on the strain and the cross-section location. The ratio (R') of the aspect ratios of the outermost closed contour (Ryz) and the innermost closed

  14. Simple Numerical Analysis of Longboard Speedometer Data

    ERIC Educational Resources Information Center

    Hare, Jonathan

    2013-01-01

    Simple numerical data analysis is described, using a standard spreadsheet program, to determine distance, velocity (speed) and acceleration from voltage data generated by a skateboard/longboard speedometer (Hare 2012 "Phys. Educ." 47 409-17). This simple analysis is an introduction to data processing including scaling data as well as…

  15. Simple Numerical Analysis of Longboard Speedometer Data

    ERIC Educational Resources Information Center

    Hare, Jonathan

    2013-01-01

    Simple numerical data analysis is described, using a standard spreadsheet program, to determine distance, velocity (speed) and acceleration from voltage data generated by a skateboard/longboard speedometer (Hare 2012 "Phys. Educ." 47 409-17). This simple analysis is an introduction to data processing including scaling data as well as…

  16. Simple numerical analysis of longboard speedometer data

    NASA Astrophysics Data System (ADS)

    Hare, Jonathan

    2013-11-01

    Simple numerical data analysis is described, using a standard spreadsheet program, to determine distance, velocity (speed) and acceleration from voltage data generated by a skateboard/longboard speedometer (Hare 2012 Phys. Educ. 47 409-17). This simple analysis is an introduction to data processing including scaling data as well as simple numerical differentiation and integration. This is an interesting, fun and instructive way to start to explore data manipulation at GCSE and A-level—analysis and skills so essential for the engineer and scientist.
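
    A minimal sketch of the kind of processing described, scaling voltage samples to speed and then numerically integrating and differentiating; the sample data, layout and calibration factor are hypothetical and are not taken from Hare (2012).

```python
import numpy as np

# Hypothetical example data: time (s) and speedometer output voltage (V).
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
v_volts = np.array([0.0, 0.4, 0.9, 1.3, 1.4, 1.2, 0.8])

VOLTS_TO_MS = 2.5                      # assumed calibration factor, m/s per volt
speed = v_volts * VOLTS_TO_MS          # scale raw voltage to speed (m/s)
distance = np.trapz(speed, t)          # numerical integration (trapezium rule)
accel = np.gradient(speed, t)          # simple numerical differentiation (m/s^2)

print(f"distance travelled ~ {distance:.2f} m")
print("acceleration samples:", np.round(accel, 2))
```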

  17. Work Domain Analysis: Theoretical Concepts and Methodology

    DTIC Science & Technology

    2005-02-01

...method to elicit expert knowledge: A case study in the methodology of cognitive task analysis. Human Factors, 40, 254-276. Itoh, J., Sakuma, A... Work Domain Analysis: Theoretical Concepts and Methodology. Neelam Naikar, Robyn Hopcroft, and Anna Moylan, Air Operations... theoretical and methodological approach for work domain analysis (WDA), the first phase of cognitive work analysis. The report: (1) addresses a number of

  18. Simple control-theoretic models of human steering activity in visually guided vehicle control

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.

    1991-01-01

    A simple control theoretic model of human steering or control activity in the lateral-directional control of vehicles such as automobiles and rotorcraft is discussed. The term 'control theoretic' is used to emphasize the fact that the model is derived from a consideration of well-known control system design principles as opposed to psychological theories regarding egomotion, etc. The model is employed to emphasize the 'closed-loop' nature of tasks involving the visually guided control of vehicles upon, or in close proximity to, the earth and to hypothesize how changes in vehicle dynamics can significantly alter the nature of the visual cues which a human might use in such tasks.

  19. Dental Photothermal Radiometry: Theoretical Analysis.

    NASA Astrophysics Data System (ADS)

    Matvienko, Anna; Jeon, Raymond; Mandelis, Andreas; Abrams, Stephen

    2007-03-01

Dental enamel demineralization in its early stages is very difficult to detect with conventional x-rays or visual examination. High-resolution techniques, such as scanning electron microscopy, usually require destruction of the tooth. Photothermal Radiometry (PTR) was recently applied as a safe, non-destructive, and highly sensitive tool for the detection of early dental demineralization, artificially created on the enamel surface. The experiments showed very high sensitivity of the measured signal to incipient changes in the surface structure, emphasizing the clinical capabilities of the method. In order to analyze the biothermophotonic phenomena in a tooth sample during photothermal excitation, a theoretical model featuring coupled diffuse-photon-density-wave and thermal-wave fields was developed. Numerical simulations identified the effects on the PTR signal of changes in the optical and thermal properties of enamel and dentin as a result of demineralization. The model predictions and experimental results will be compared and discussed.

  20. Theoretical analysis of ARC constriction

    SciTech Connect

    Stoenescu, M.L.; Brooks, A.W.; Smith, T.M.

    1980-12-01

The physics of the thermionic converter is governed by strong electrode-plasma interactions (emission, surface scattering, charge exchange) and weak interactions (diffusion, radiation) at the maximum interelectrode plasma radius. The physical processes are thus mostly convective in thin sheaths in front of the electrodes and mostly diffusive and radiative in the plasma bulk. The physical boundaries are open boundaries to particle transfer (electrons emitted or absorbed by the electrodes, all particles diffusing through some maximum plasma radius) and to convective, conductive and radiative heat transfer. In a first approximation the thermionic converter may be described by a one-dimensional classical transport theory. The two-dimensional effects may be significant as a result of the sheath sensitivity to radial plasma variations and of the strong sheath-plasma coupling. The current-voltage characteristic of the converter is thus the result of an integrated current density over the collector area for which the boundary conditions at each r determine the regime (ignited/unignited) of the local current density. A current redistribution strongly weighted at small radii (arc constriction) limits the converter performance and opens questions on constriction reduction possibilities. The questions addressed are the following: (1) what are the main contributors to the loss of current at high voltage in the thermionic converter; and (2) is arc constriction observable theoretically and what are the conditions of its occurrence. The resulting theoretical problem is formulated and results are given. The converter electrical current is estimated directly from the electron and ion particle fluxes based on the spatial distribution of the electron/ion density n, temperatures T_e, T_i, electrical voltage V and on the knowledge of the transport coefficients. (WHK)

  1. Assessment of the tautomeric population of benzimidazole derivatives in solution: a simple and versatile theoretical-experimental approach

    NASA Astrophysics Data System (ADS)

    Diaz, Carlos; Llovera, Ligia; Echevarria, Lorenzo; Hernández, Florencio E.

    2015-02-01

Herein, we present a simple and versatile theoretical-experimental approach to assess the tautomeric distribution of 5(6)-aminobenzimidazole (5(6)-ABZ) derivatives in solution via one-photon absorption. The method is based on the optimized weighted sum of the theoretical spectra of the corresponding tautomers. In this article we show how the choice of exchange-correlation functional (XCF) employed in the calculations becomes crucial for the success of the approach. After a systematic analysis of XCFs with different amounts of exact exchange, we found better performance for B3LYP and PBE0. The direct test of the proposed method on omeprazole, a well-known 5(6)-benzimidazole based pharmacotherapeutic, demonstrates its broader applicability. The proposed approach is expected to find direct applications in the tautomeric analysis of other molecular systems exhibiting similar tautomeric equilibria.

  2. Assessment of the tautomeric population of benzimidazole derivatives in solution: a simple and versatile theoretical-experimental approach.

    PubMed

    Diaz, Carlos; Llovera, Ligia; Echevarria, Lorenzo; Hernández, Florencio E

    2015-02-01

Herein, we present a simple and versatile theoretical-experimental approach to assess the tautomeric distribution of 5(6)-aminobenzimidazole (5(6)-ABZ) derivatives in solution via one-photon absorption. The method is based on the optimized weighted sum of the theoretical spectra of the corresponding tautomers. In this article we show how the choice of exchange-correlation functional (XCF) employed in the calculations becomes crucial for the success of the approach. After a systematic analysis of XCFs with different amounts of exact exchange, we found better performance for B3LYP and PBE0. The direct test of the proposed method on omeprazole, a well-known 5(6)-benzimidazole based pharmacotherapeutic, demonstrates its broader applicability. The proposed approach is expected to find direct applications in the tautomeric analysis of other molecular systems exhibiting similar tautomeric equilibria.

  3. MURR nodal analysis with simple interactive simulation

    NASA Astrophysics Data System (ADS)

    Enani, Mohammad Abdulsamad

The main goal of this research is to design and produce computer codes that perform a nodal analysis of the core of the Missouri University Research Reactor (MURR) with a simple neutron transient simulation. These codes should be executable on any of the family of widely used modern IBM/PC (or IBM/PS) microcomputers (or compatibles). The nodal analysis code should find the power (or flux) distribution inside the reactor core and calculate fuel burnup for each of the fuel elements by using the nodal analysis technique described in chapter 3. The simulator code is a relatively simple, educational aid for MURR reactor kinetics simulation that uses a one-group point-reactor model.

  4. Pure shear and simple shear calcite textures. Comparison of experimental, theoretical and natural data

    USGS Publications Warehouse

    Wenk, H.-R.; Takeshita, T.; Bechler, E.; Erskine, B.G.; Matthies, S.

    1987-01-01

The pattern of lattice preferred orientation (texture) in deformed rocks is an expression of the strain path and the acting deformation mechanisms. A first indication about the strain path is given by the symmetry of pole figures: coaxial deformation produces orthorhombic pole figures, while non-coaxial deformation yields monoclinic or triclinic pole figures. More quantitative information about the strain history can be obtained by comparing natural textures with experimental ones and with theoretical models. For this comparison, a representation in the sensitive three-dimensional orientation distribution space is extremely important and efforts are made to explain this concept. We have been investigating differences between pure shear and simple shear deformation in carbonate rocks and have found considerable agreement between textures produced in plane strain experiments and predictions based on the Taylor model. We were able to simulate the observed changes with strain history (coaxial vs non-coaxial) and the profound texture transition which occurs with increasing temperature. Two natural calcite textures were then selected which we interpreted by comparing them with the experimental and theoretical results. A marble from the Santa Rosa mylonite zone in southern California displays orthorhombic pole figures with patterns consistent with low temperature deformation in pure shear. A limestone from the Tanque Verde detachment fault in Arizona has a monoclinic fabric from which we can interpret that 60% of the deformation occurred by simple shear. © 1987.

  5. Generation of limited-diffraction wave by approximating theoretical X-wave with simple driving

    NASA Astrophysics Data System (ADS)

    Li, Yaqin; Ding, MingYue; Hua, Shaoyan; Ming, Yuchi

    2012-03-01

X-wave is a particular case of limited-diffraction waves which has great potential applications in the enlargement of the field depth in acoustic imaging systems. In practice, the generation of real-time X-wave ultrasonic fields is a complex technology which involves precise and specific excitation voltages for each distinct array element. In order to simplify the X-wave generating process, L. Castellanos proposed an approach to approximate the X-wave excitations with rectangular pulses. The results suggested the possibility of achieving limited-diffraction waves with relatively simple driving waveforms, which could be implemented at a moderate cost in analog electronics. In this work, we attempt to improve L. Castellanos's method by calculating the approximating driving pulse not only from a rectangular but also from a triangular driving pulse. The differences between theoretical X-wave signals and driving pulses, related to their excitation effects, are minimized by an L2 curve criterion, and the driving pulses with the minimal optimization result were chosen. A tradeoff is obtained between the cost of implementing the classical 0-order X-wave and the precision of approximation with the simple pulsed electrical driving. The good agreement of the resulting field distributions with those obtained from the classical X-wave excitations can be justified by the filtering effects induced by the transducer elements in the frequency domain. From the simulation results, we can see that the new approach improves the precision of the approximation: in simulation, the difference between the theoretical X-wave and the new approach is 10 percent lower than the difference between the theoretical X-wave and the rectangular driving pulse.

  6. A theoretical model of the relationship between the h-index and other simple citation indicators.

    PubMed

    Bertoli-Barsotti, Lucio; Lando, Tommaso

    2017-01-01

Of the existing theoretical formulas for the h-index, those recently suggested by Burrell (J Informetr 7:774-783, 2013b) and by Bertoli-Barsotti and Lando (J Informetr 9(4):762-776, 2015) have proved very effective in estimating the actual value of the h-index (Hirsch, Proc Natl Acad Sci USA 102:16569-16572, 2005), at least at the level of the individual scientist. These approaches lead (or may lead) to two slightly different formulas, being based, respectively, on a "standard" and a "shifted" version of the geometric distribution. In this paper, we review the genesis of these two formulas (which we shall call the "basic" and "improved" Lambert-W formulas for the h-index) and compare their effectiveness with that of a number of instances taken from the well-known Glänzel-Schubert class of models for the h-index (based, instead, on a Paretian model) by means of an empirical study. All the formulas considered in the comparison are "ready-to-use", i.e., functions of simple citation indicators such as: the total number of publications; the total number of citations; the total number of cited papers; the number of citations of the most cited paper. The empirical study is based on citation data obtained from two different sets of journals belonging to two different scientific fields: more specifically, 231 journals from the area of "Statistics and Mathematical Methods" and 100 journals from the area of "Economics, Econometrics and Finance", totaling almost 100,000 and 20,000 publications, respectively. The citation data refer to different publication/citation time windows, different types of "citable" documents, and alternative approaches to the analysis of the citation process ("prospective" and "retrospective"). We conclude that, especially in its improved version, the Lambert-W formula for the h-index provides a quite robust and effective ready-to-use rule that should be preferred to other known formulas if one's goal is (simply) to derive a reliable estimate of the h-index.
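
    The Lambert-W and Glänzel-Schubert formulas themselves are not reproduced in the abstract. For reference, the sketch below computes the empirical h-index that those ready-to-use formulas aim to estimate, directly from a list of per-paper citation counts.

```python
def h_index(citations):
    """Empirical Hirsch h-index: the largest h such that at least h papers
    have at least h citations each. This is the quantity the Lambert-W and
    Glanzel-Schubert formulas estimate from aggregate indicators; the
    formulas themselves are not shown here."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h_index([23, 18, 9, 6, 6, 3, 1, 0]))  # -> 5
```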

  7. Theoretical Analysis of Canadian Lifelong Education Development

    ERIC Educational Resources Information Center

    Mukan, Natalia; Barabash, Olena; Busko, Maria

    2014-01-01

    In the article, the problem of Canadian lifelong education development has been studied. The main objectives of the article are defined as theoretical analysis of scientific and pedagogical literature which highlights different aspects of the research problem; periods of lifelong education development; and determination of lifelong learning role…

  8. Theoretical Analysis of Canadian Lifelong Education Development

    ERIC Educational Resources Information Center

    Mukan, Natalia; Barabash, Olena; Busko, Maria

    2014-01-01

    In the article, the problem of Canadian lifelong education development has been studied. The main objectives of the article are defined as theoretical analysis of scientific and pedagogical literature which highlights different aspects of the research problem; periods of lifelong education development; and determination of lifelong learning role…

  9. A simple theoretical framework for understanding heterogeneous differentiation of CD4+ T cells

    PubMed Central

    2012-01-01

Background. CD4+ T cells have several subsets of functional phenotypes, which play critical yet diverse roles in the immune system. Pathogen-driven differentiation of these subsets of cells is often heterogeneous in terms of the induced phenotypic diversity. In vitro recapitulation of heterogeneous differentiation under homogeneous experimental conditions indicates some highly regulated mechanisms by which multiple phenotypes of CD4+ T cells can be generated from a single population of naïve CD4+ T cells. Therefore, conceptual understanding of induced heterogeneous differentiation will shed light on the mechanisms controlling the response of populations of CD4+ T cells under physiological conditions. Results. We present a simple theoretical framework to show how heterogeneous differentiation in a two-master-regulator paradigm can be governed by a signaling network motif common to all subsets of CD4+ T cells. With this motif, a population of naïve CD4+ T cells can integrate the signals from their environment to generate a functionally diverse population with robust commitment of individual cells. Notably, two positive feedback loops in this network motif govern three bistable switches, which in turn, give rise to three types of heterogeneous differentiated states, depending upon particular combinations of input signals. We provide three prototype models illustrating how to use this framework to explain experimental observations and make specific testable predictions. Conclusions. The process in which several types of T helper cells are generated simultaneously to mount complex immune responses upon pathogenic challenges can be highly regulated, and a simple signaling network motif can be responsible for generating all possible types of heterogeneous populations with respect to a pair of master regulators controlling CD4+ T cell differentiation. The framework provides a mathematical basis for understanding the decision-making mechanisms of CD4+ T cells, and it can be
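
    As an illustration of the kind of motif described (two master regulators with auto-activation and mutual inhibition giving switch-like commitment), the sketch below integrates a generic Hill-kinetics version; the equations and parameter values are illustrative assumptions, not the authors' model.

```python
import numpy as np
from scipy.integrate import odeint

def motif(y, t, s1, s2, n=4, k=0.5, d=1.0):
    """Generic two-master-regulator motif: each regulator activates itself
    and represses the other (Hill kinetics). Parameters are illustrative
    assumptions, not taken from the paper."""
    x1, x2 = y
    dx1 = s1 + (x1**n / (k**n + x1**n)) * (k**n / (k**n + x2**n)) - d * x1
    dx2 = s2 + (x2**n / (k**n + x2**n)) * (k**n / (k**n + x1**n)) - d * x2
    return [dx1, dx2]

t = np.linspace(0, 50, 500)
# Same input signals, different initial biases -> cells commit to different fates.
for y0 in ([0.9, 0.1], [0.1, 0.9]):
    x1, x2 = odeint(motif, y0, t, args=(0.05, 0.05)).T
    print(f"start {y0} -> steady state ({x1[-1]:.2f}, {x2[-1]:.2f})")
```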

  10. A simple theoretical model for erbium doped PCF ring lasers design

    NASA Astrophysics Data System (ADS)

    Sánchez-Martín, J. A.; Álvarez, J. M.; Rebolledo, M. A.; Andrés, M. V.; Vallés, J. A.; Martín, J. C.; Berdejo, V.; Díez, A.

    2011-09-01

    In this paper a simple theoretical model is presented where the energy conservation principle is used. The model is based on semi-analytical equations describing the behaviour of an erbium-doped photonic crystal fibre (PCF) inside a ring laser. These semi-analytical equations allow the characterisation of the erbium-doped PCF. Spectral absorption and emission coefficients can be determined through the measurement of the gain in the PCF as a function of pump power attenuation for several fibre lengths by means of a linear fitting. These coefficients are proportional to the erbium concentration and to the corresponding absorption or emission cross section. So if the concentration is known the erbium cross sections can be immediately determined. The model was successfully checked by means of two different home-made erbium doped PCFs. Once the fibres were characterised the values of the spectral absorption and emission coefficients were used to simulate the behaviour of a back propagating ring laser made of each fibre. Passive losses of the components in the cavity were previously calibrated. A good agreement was found between simulated and experimental values of efficiency, pump power threshold and output laser power for a wide set of experimental situations (several values of the input pump power, output coupling factor, laser wavelength and fibre length).

  11. A simple white noise analysis of neuronal light responses.

    PubMed

    Chichilnisky, E J

    2001-05-01

    A white noise technique is presented for estimating the response properties of spiking visual system neurons. The technique is simple, robust, efficient and well suited to simultaneous recordings from multiple neurons. It provides a complete and easily interpretable model of light responses even for neurons that display a common form of response nonlinearity that precludes classical linear systems analysis. A theoretical justification of the technique is presented that relies only on elementary linear algebra and statistics. Implementation is described with examples. The technique and the underlying model of neural responses are validated using recordings from retinal ganglion cells, and in principle are applicable to other neurons. Advantages and disadvantages of the technique relative to classical approaches are discussed.
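
    One standard instance of such a white noise technique is the spike-triggered average; the sketch below estimates a linear filter from a simulated white-noise stimulus and spike train. It is a generic illustration under assumed toy parameters, not necessarily the exact estimator or validation procedure used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated white-noise stimulus and a toy spiking neuron with a known filter.
T, L = 20000, 15                      # number of frames, filter length
stimulus = rng.normal(size=T)
true_filter = np.exp(-np.arange(L) / 4.0) * np.sin(np.arange(L) / 2.0)
drive = np.convolve(stimulus, true_filter, mode="full")[:T]
spikes = (rng.random(T) < 1 / (1 + np.exp(-3 * (drive - 1)))).astype(int)

# Spike-triggered average: mean stimulus segment preceding each spike,
# time-reversed so that index k corresponds to lag k.
spike_times = np.flatnonzero(spikes)
spike_times = spike_times[spike_times >= L]
sta = np.zeros(L)
for ts in spike_times:
    sta += stimulus[ts - L + 1 : ts + 1][::-1]
sta /= len(spike_times)

print("correlation with true filter:", np.corrcoef(sta, true_filter)[0, 1])
```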

  12. Theoretical analysis of intracortical microelectrode recordings

    NASA Astrophysics Data System (ADS)

    Lempka, Scott F.; Johnson, Matthew D.; Moffitt, Michael A.; Otto, Kevin J.; Kipke, Daryl R.; McIntyre, Cameron C.

    2011-08-01

    Advanced fabrication techniques have now made it possible to produce microelectrode arrays for recording the electrical activity of a large number of neurons in the intact brain for both clinical and basic science applications. However, the long-term recording performance desired for these applications is hindered by a number of factors that lead to device failure or a poor signal-to-noise ratio (SNR). The goal of this study was to identify factors that can affect recording quality using theoretical analysis of intracortical microelectrode recordings of single-unit activity. Extracellular microelectrode recordings were simulated with a detailed multi-compartment cable model of a pyramidal neuron coupled to a finite-element volume conductor head model containing an implanted recording microelectrode. Recording noise sources were also incorporated into the overall modeling infrastructure. The analyses of this study would be very difficult to perform experimentally; however, our model-based approach enabled a systematic investigation of the effects of a large number of variables on recording quality. Our results demonstrate that recording amplitude and noise are relatively independent of microelectrode size, but instead are primarily affected by the selected recording bandwidth, impedance of the electrode-tissue interface and the density and firing rates of neurons surrounding the recording electrode. This study provides the theoretical groundwork that allows for the design of the microelectrode and recording electronics such that the SNR is maximized. Such advances could help enable the long-term functionality required for chronic neural recording applications.

  13. Theoretical analysis of intracortical microelectrode recordings

    PubMed Central

    Lempka, Scott F; Johnson, Matthew D; Moffitt, Michael A; Otto, Kevin J; Kipke, Daryl R; McIntyre, Cameron C

    2011-01-01

    Advanced fabrication techniques have now made it possible to produce microelectrode arrays for recording the electrical activity of a large number of neurons in the intact brain for both clinical and basic science applications. However, the long-term recording performance desired for these applications is hindered by a number of factors that lead to device failure or a poor signal-to-noise ratio (SNR). The goal of this study was to identify factors that can affect recording quality using theoretical analysis of intracortical microelectrode recordings of single-unit activity. Extracellular microelectrode recordings were simulated with a detailed multi-compartment cable model of a pyramidal neuron coupled to a finite element volume conductor head model containing an implanted recording microelectrode. Recording noise sources were also incorporated into the overall modeling infrastructure. The analyses of this study would be very difficult to perform experimentally; however, our model-based approach enabled a systematic investigation of the effects of a large number of variables on recording quality. Our results demonstrate that recording amplitude and noise are relatively independent of microelectrode size, but instead are primarily affected by the selected recording bandwidth, impedance of the electrode-tissue interface, and the density and firing rates of neurons surrounding the recording electrode. This study provides the theoretical groundwork that allows for the design of the microelectrode and recording electronics such that the SNR is maximized. Such advances could help enable the long-term functionality required for chronic neural recording applications. PMID:21775783

  14. Second-Order Theoretical Analysis: A Method for Constructing Theoretical Explanation

    ERIC Educational Resources Information Center

    Shkedi, Asher

    2004-01-01

    In this paper a model is offered that allows for the construction of a theoretical explanation on the basis of data accumulated in the field for the purpose of constructing a meaningful description. In this endeavour a distinction is proposed between two methods of theoretical analysis: first-order analysis and second-order analysis. First-order…

  15. Protein detection by Simple Western™ analysis.

    PubMed

    Harris, Valerie M

    2015-01-01

ProteinSimple© has taken a well-known protein detection method, the western blot, and revolutionized it. The Simple Western™ system uses capillary electrophoresis to identify and quantitate a protein of interest. ProteinSimple© provides multiple detection apparatuses (Wes, Sally Sue, or Peggy Sue) that are suggested to save scientists valuable time by allowing the researcher to prepare the protein sample, load it along with the necessary antibodies and substrates, and walk away. Within 3-5 h the protein will be separated by size or charge, immuno-detection of the target protein will be accurately quantitated, and results will be immediately made available. Using the Peggy Sue instrument, one study recently examined changes in MAPK signaling proteins in the sex-determining stage of gonadal development. Here the methodology is described.

  16. Analysis of a theoretically optimized transonic airfoil

    NASA Technical Reports Server (NTRS)

    Lores, M. E.; Burdges, K. P.; Shrewsbury, G. D.

    1978-01-01

    Numerical optimization was used in conjunction with an inviscid, full potential equation, transonic flow analysis computer code to design an upper surface contour for a conventional airfoil to improve its supercritical performance. The modified airfoil was tested in a compressible flow wind tunnel. The modified airfoil's performance was evaluated by comparison with test data for the baseline airfoil and for an airfoil developed by optimization of leading edge of the baseline airfoil. While the leading edge modification performed as expected, the upper surface re-design did not produce all of the expected performance improvements. Theoretical solutions computed using a full potential, transonic airfoil code corrected for viscosity were compared to experimental data for the baseline airfoil and the upper surface modification. These correlations showed that the theory predicted the aerodynamics of the baseline airfoil fairly well, but failed to accurately compute drag characteristics for the upper surface modification.

  17. Theoretical and methodological approaches in discourse analysis.

    PubMed

    Stevenson, Chris

    2004-10-01

Discourse analysis (DA) embodies two main approaches: Foucauldian DA and radical social constructionist DA. Both are underpinned by social constructionism to a lesser or greater extent. Social constructionism has contested areas in relation to power, embodiment, and materialism, although Foucauldian DA does focus on the issue of power. Embodiment and materialism may be especially relevant for researchers of nursing where the physical body is prominent. However, the contested nature of social constructionism allows a fusion of theoretical and methodological approaches tailored to a specific research interest. In this paper, Chris Stevenson suggests a framework for working out and declaring the DA approach to be taken in relation to a research area, as well as to aid anticipating methodological critique. Method, validity, reliability and scholarship are discussed from within a discourse analytic frame of reference.

  18. Theoretical and methodological approaches in discourse analysis.

    PubMed

    Stevenson, Chris

    2004-01-01

Discourse analysis (DA) embodies two main approaches: Foucauldian DA and radical social constructionist DA. Both are underpinned by social constructionism to a lesser or greater extent. Social constructionism has contested areas in relation to power, embodiment, and materialism, although Foucauldian DA does focus on the issue of power. Embodiment and materialism may be especially relevant for researchers of nursing where the physical body is prominent. However, the contested nature of social constructionism allows a fusion of theoretical and methodological approaches tailored to a specific research interest. In this paper, Chris Stevenson suggests a framework for working out and declaring the DA approach to be taken in relation to a research area, as well as to aid anticipating methodological critique. Method, validity, reliability and scholarship are discussed from within a discourse analytic frame of reference.

  19. Theoretical analysis of impact in composite plates

    NASA Technical Reports Server (NTRS)

    Moon, F. C.

    1973-01-01

The calculated stresses and displacements induced in anisotropic plates by short-duration impact forces are presented. The theoretical model attempts to model the response of fiber composite turbine fan blades to impact by foreign objects such as stones and hailstones. In this model the determination of the impact force uses the Hertz impact theory. The plate response treats the laminated blade as an equivalent anisotropic material using a form of Mindlin's theory for crystal plates. The analysis makes use of a computational tool called the fast Fourier transform. Results are presented in the form of stress contour plots in the plane of the plate for various times after impact. Examination of the maximum stresses due to impact versus ply layup angle reveals that the ±15° layup angle gives lower flexural stresses than the 0°, ±30°, and ±45° cases.

  20. Courage and nursing practice: a theoretical analysis.

    PubMed

    Lindh, Inga-Britt; Barbosa da Silva, António; Berg, Agneta; Severinsson, Elisabeth

    2010-09-01

    This article aims to deepen the understanding of courage through a theoretical analysis of classical philosophers' work and a review of published and unpublished empirical research on courage in nursing. The authors sought answers to questions regarding how courage is understood from a philosophical viewpoint and how it is expressed in nursing actions. Four aspects were identified as relevant to a deeper understanding of courage in nursing practice: courage as an ontological concept, a moral virtue, a property of an ethical act, and a creative capacity. The literature review shed light on the complexity of the concept of courage and revealed some lack of clarity in its use. Consequently, if courage is to be used consciously to influence nurses' ethical actions it seems important to recognize its specific features. The results suggest it is imperative to foster courage among nurses and student nurses to prepare them for ethical, creative action and further the development of professional nursing practices.

  1. Simple Analysis of Historical Lime Mortars

    ERIC Educational Resources Information Center

Pires, João

    2015-01-01

    A laboratory experiment is described in which a simple characterization of a historical lime mortar is made by the determination of its approximate composition by a gravimetric method. Fourier transform infrared (FTIR) spectroscopy and X-ray diffraction (XRD) are also used for the qualitative characterization of the lime mortar components. These…

  2. Simple Analysis of Historical Lime Mortars

    ERIC Educational Resources Information Center

Pires, João

    2015-01-01

    A laboratory experiment is described in which a simple characterization of a historical lime mortar is made by the determination of its approximate composition by a gravimetric method. Fourier transform infrared (FTIR) spectroscopy and X-ray diffraction (XRD) are also used for the qualitative characterization of the lime mortar components. These…

  3. Simple yet Hidden Counterexamples in Undergraduate Real Analysis

    ERIC Educational Resources Information Center

    Shipman, Barbara A.; Shipman, Patrick D.

    2013-01-01

    We study situations in introductory analysis in which students affirmed false statements as true, despite simple counterexamples that they easily recognized afterwards. The study draws attention to how simple counterexamples can become hidden in plain sight, even in an active learning atmosphere where students proposed simple (as well as more…

  4. Simple yet Hidden Counterexamples in Undergraduate Real Analysis

    ERIC Educational Resources Information Center

    Shipman, Barbara A.; Shipman, Patrick D.

    2013-01-01

    We study situations in introductory analysis in which students affirmed false statements as true, despite simple counterexamples that they easily recognized afterwards. The study draws attention to how simple counterexamples can become hidden in plain sight, even in an active learning atmosphere where students proposed simple (as well as more…

  5. Free radical scavenging and COX-2 inhibition by simple colon metabolites of polyphenols: A theoretical approach.

    PubMed

    Amić, Ana; Marković, Zoran; Marković, Jasmina M Dimitrić; Jeremić, Svetlana; Lučić, Bono; Amić, Dragan

    2016-12-01

Free radical scavenging and inhibitory potency against cyclooxygenase-2 (COX-2) by two abundant colon metabolites of polyphenols, i.e., 3-hydroxyphenylacetic acid (3-HPAA) and 4-hydroxyphenylpropionic acid (4-HPPA), were theoretically studied. Different free radical scavenging mechanisms are investigated in water and pentyl ethanoate as solvents. By considering electronic properties of scavenged free radicals, hydrogen atom transfer (HAT) and sequential proton loss electron transfer (SPLET) mechanisms are found to be thermodynamically probable and competitive processes in both media. The Gibbs free energy change for the reaction of inactivation of free radicals indicates 3-HPAA and 4-HPPA as potent scavengers. Their reactivity toward free radicals was predicted to decrease as follows: hydroxyl > alkoxyls > phenoxyl ≈ peroxyls > superoxide. The demonstrated free radical scavenging potency of 3-HPAA and 4-HPPA, along with the high μM concentrations produced by microbial colon degradation of polyphenols, could enable at least in situ inactivation of free radicals. Docking analysis with structural forms of 3-HPAA and 4-HPPA indicates dianionic ligands as potent inhibitors of COX-2, an inducible enzyme involved in colon carcinogenesis. The obtained results suggest that suppressing levels of free radicals and COX-2 could be achieved by 3-HPAA and 4-HPPA, indicating that these compounds may contribute to a reduced risk of colon cancer development. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Information-Theoretical Complexity Analysis of Selected Elementary Chemical Reactions

    NASA Astrophysics Data System (ADS)

    Molina-Espíritu, M.; Esquivel, R. O.; Dehesa, J. S.

We investigate the complexity of selected elementary chemical reactions (namely, the hydrogenic-abstraction reaction and the identity SN2 exchange reaction) by means of the following single and composite information-theoretic measures: disequilibrium (D), exponential entropy (L), Fisher information (I), power entropy (J), I-D, D-L and I-J planes, and Fisher-Shannon (FS) and Lopez-Mancini-Calbet (LMC) shape complexities. These quantities, which are functionals of the one-particle density, are computed in both position (r) and momentum (p) spaces. The analysis revealed that the chemically significant regions of these reactions can be identified through most of the single information-theoretic measures and the two-component planes, not only the ones which are commonly revealed by the energy, such as the reactant/product (R/P) and the transition state (TS), but also those that are not present in the energy profile, such as the bond cleavage energy region (BCER), the bond breaking/forming regions (B-B/F) and the charge transfer process (CT). The analysis of the complexities shows that the energy profile of the abstraction reaction bears the same information-theoretical features as the LMC and FS measures; the identity SN2 exchange reaction, however, does not show such a simple behavior with respect to the LMC and FS measures. Most of the chemical features of interest (BCER, B-B/F and CT) are only revealed when particular information-theoretic aspects of localizability (L or J), uniformity (D) and disorder (I) are considered.
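
    The abstract names these measures without defining them; the definitions commonly used in this literature for a normalized one-particle density ρ(r) are sketched below, and it is assumed (not quoted from the paper) that these standard forms are the ones intended.

```latex
% Commonly used definitions (assumed, not quoted from the paper),
% written for a normalized one-particle density \rho(\mathbf{r}):
S = -\int \rho(\mathbf{r}) \ln \rho(\mathbf{r})\, d\mathbf{r}, \qquad
D = \int \rho^{2}(\mathbf{r})\, d\mathbf{r}, \qquad
L = e^{S}, \qquad
J = \frac{1}{2\pi e}\, e^{2S/3}, \qquad
I = \int \frac{|\nabla \rho(\mathbf{r})|^{2}}{\rho(\mathbf{r})}\, d\mathbf{r},
\qquad
C_{\mathrm{LMC}} = D \cdot L, \qquad
C_{\mathrm{FS}} = I \cdot J .
```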

  7. Strategic Competence: Applying Siegler's Theoretical and Methodological Framework to the Domain of Simple Addition

    ERIC Educational Resources Information Center

    Torbeyns, Joke; Verschaffel, Lieven; Ghesquiere, Pol

    2002-01-01

    In this study we investigated the variability, frequency, efficiency, and adaptiveness of young children's strategy use in the domain of simple addition by means of the choice/no-choice method. Seventy-seven beginning second-graders, divided in 3 groups according to general mathematical ability, solved a series of 25 simple additions in 3…

  8. Theoretical Analysis of the F1-ATPase Experimental Data

    PubMed Central

    Perez-Carrasco, Ruben; Sancho, J.M.

    2010-01-01

F1-ATPase is a rotatory molecular motor fueled by ATP nucleotides. Different loads can be attached to the motor axis to show that it rotates in main discrete steps of 120° with substeps of ∼80° and 40°. Experimental data show the dependence of the mean rotational velocity ω on the external control parameters: the nucleotide concentration [ATP] and the friction of the load γL. In this work we present a theoretical analysis of the experimental data whose main results are: (1) a derivation of a simple analytical formula for ω([ATP], γL) that compares favorably with experiments; (2) the introduction of a two-state flashing ratchet model that exhibits experimental phenomenology of a greater specificity than has been, to our knowledge, previously available; (3) the derivation of an argument to obtain the values of the substep sizes; (4) an analysis of the energy constraints of the model; and (5) the theoretical analysis of the coupling ratio between the ATP consumed and the success of a forward step. We also discuss the compatibility of our approach with recent experimental observations. PMID:20513403

  9. Information theoretic analysis of canny edge detection in visual communication

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Rahman, Zia-ur

    2011-06-01

In general edge detection evaluation, the edge detectors are examined, analyzed, and compared either visually or with a metric for a specific application. This analysis is usually independent of the characteristics of the image-gathering, transmission and display processes that do impact the quality of the acquired image and thus, the resulting edge image. We propose a new information theoretic analysis of edge detection that unites the different components of the visual communication channel and assesses edge detection algorithms in an integrated manner based on Shannon's information theory. The edge detection algorithm here is considered to achieve high performance only if the information rate from the scene to the edge approaches the maximum possible. Thus, by setting initial conditions of the visual communication system as constant, different edge detection algorithms could be evaluated. This analysis is normally limited to linear shift-invariant filters, so in order to examine the Canny edge operator in our proposed system, we need to estimate its "power spectral density" (PSD). Since the Canny operator is non-linear and shift variant, we perform the estimation for a set of different system environment conditions using simulations. In our paper we will first introduce the PSD of the Canny operator for a range of system parameters. Then, using the estimated PSD, we will assess the Canny operator using information theoretic analysis. The information-theoretic metric is also used to compare the performance of the Canny operator with other edge-detection operators. This also provides a simple tool for selecting appropriate edge-detection algorithms based on system parameters, and for adjusting their parameters to maximize information throughput.
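
    The information-theoretic assessment itself depends on the full visual-communication model, but the edge operator under study is standard. The sketch below runs Canny edge detection on a synthetic image using OpenCV; the test image, blur step, and hysteresis thresholds are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

# Hypothetical input: a synthetic test image (a bright square on a dark background).
img = np.zeros((128, 128), dtype=np.uint8)
img[32:96, 32:96] = 200
img = cv2.GaussianBlur(img, (5, 5), 1.5)   # mimic image-gathering blur

# Canny edge detection; the hysteresis thresholds are illustrative choices,
# not values taken from the paper.
edges = cv2.Canny(img, threshold1=50, threshold2=150)

print("edge pixels:", int(np.count_nonzero(edges)))
```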

  10. Theoretical Analysis of Rain Attenuation Probability

    NASA Astrophysics Data System (ADS)

    Roy, Surendra Kr.; Jha, Santosh Kr.; Jha, Lallan

    2007-07-01

    Satellite communication technologies are now highly developed, and high-quality, distance-independent services have expanded over a very wide area. For the system design of the Hokkaido integrated telecommunications (HIT) network, outages of satellite links due to rain attenuation in the Ka frequency band must first be overcome. In this paper a theoretical analysis of rain attenuation probability on a slant path is presented. The proposed formula is based on the Weibull distribution and incorporates recent ITU-R recommendations concerning the necessary rain rate and rain height inputs. The error behaviour of the model was tested against the rain attenuation prediction model recommended by ITU-R for a large number of experiments at different probability levels. Compared to the ITU-R model, the novel slant-path rain attenuation prediction model exhibits similar behaviour at low time percentages and a better root-mean-square error performance for probability levels above 0.02%. The presented models have the advantage of low implementation complexity and are considered useful for educational and back-of-the-envelope computations.
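    As a small illustration of the Weibull-based description mentioned above, the sketch below inverts an assumed Weibull exceedance law to get the attenuation exceeded for a given fraction of time. The scale and shape values are placeholders, not the ITU-R-derived inputs used in the paper.

```python
import numpy as np

def attenuation_exceeded(p, scale_db=4.0, shape=0.8):
    """Attenuation (dB) exceeded for a fraction p of the time under the assumed
    Weibull exceedance model  P(A > a) = exp(-(a / scale_db) ** shape)."""
    return scale_db * (-np.log(p)) ** (1.0 / shape)

for p in (0.01, 0.001, 0.0002):            # 1%, 0.1% and 0.02% of the time
    print(f"exceeded {p:7.4%} of the time: {attenuation_exceeded(p):5.1f} dB")
```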

  11. Theoretical analysis of driven magnetic reconnection experiments

    NASA Astrophysics Data System (ADS)

    Uzdensky, Dmitri A.; Kulsrud, Russell M.; Yamada, Masaaki

    1996-04-01

    In this paper we present a theoretical framework for the Magnetic Reconnection Experiment (MRX) [M. Yamada et al., Bull. Am. Phys. Soc. 40, 1877 (1995)] in order to understand the basic physics of the experiment, including the effect of the external driving force, and the difference between co- and counterhelicity cases of the experiment. The problem is reduced to a one-dimensional (1-D) resistive magnetohydrodynamic (MHD) model. A special class of holonomic boundary conditions is defined, under which a unique sequence of global equilibria can be obtained, independent of the rate of reconnection. This enables one to break the whole problem into two parts: a global problem for the ideal region, and a local problem for the resistive reconnection layer. The calculations are then carried out and the global solution for the ideal region is obtained in one particular case of holonomic constraints, the so-called "constant force" regime, for both the co- and counterhelicity cases. After the sequence of equilibria in the ideal region is found, the problem of the rate of reconnection in the resistive reconnection region is considered. This rate tells how fast the plasma proceeds through the sequence of global equilibria but does not affect the sequence itself. Based on a modified Sweet-Parker model for the reconnection layer, the reconnection rate is calculated, and the difference between the co- and counterhelicity cases, as well as the role of the external forces, is demonstrated. The results from the present analysis are qualitatively consistent with the experimental data, predicting a faster reconnection rate for counterhelicity merging and yielding a positive correlation with external forcing.
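    For reference, the classical (unmodified) Sweet-Parker balance that the layer model above builds on ties the inflow speed to the Lundquist number; this is the textbook scaling, not the paper's modified expression:

```latex
\frac{v_{\mathrm{in}}}{v_A} \;\sim\; \frac{\delta}{L} \;\sim\; S^{-1/2},
\qquad S \equiv \frac{\mu_0 \, L \, v_A}{\eta},
```

    where $v_A$ is the Alfvén speed, $\delta$ and $L$ are the half-thickness and length of the reconnection layer, and $\eta$ is the resistivity. The abstract's modified model adjusts this balance for the driven, helicity-dependent conditions of MRX.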

  12. Theoretical analysis of sheet metal formability

    NASA Astrophysics Data System (ADS)

    Zhu, Xinhai

    Sheet metal forming processes are among the most important metal-working operations. These processes account for a sizable proportion of manufactured goods made in industrialized countries each year. Furthermore, to reduce cost and increase the performance of manufactured products, and in response to environmental concerns, more and more lightweight and high-strength materials have been used as substitutes for conventional steel. These materials usually have limited formability; thus, a thorough understanding of the deformation processes and the factors limiting the forming of sound parts is important, not only from a scientific or engineering viewpoint, but also from an economic point of view. An extensive review of previous studies pertaining to theoretical analyses of Forming Limit Diagrams (FLDs) is contained in Chapter I. A numerical model to analyze the neck evolution process is outlined in Chapter II. With the use of strain gradient theory, the effect of the initial defect profile on the necking process is analyzed. In the third chapter, the method proposed by Storen and Rice is adopted to analyze the initiation of localized necking and predict the corresponding FLDs. In view of the fact that the width of the localized neck is narrow, the deformation inside the neck region is constrained by the material in the neighboring homogeneous region. The relative rotation effect may then be assumed to be small and is thus neglected. In Chapter IV, Hill's 1948 yield criterion and strain gradient theory are employed to obtain FLDs for planar anisotropic sheet materials by using bifurcation analysis. The effects of the strain gradient coefficient c and the material anisotropy parameters R's on the orientation of the neck and on the FLDs are analyzed in a systematic manner and compared with experiments. In Chapter V, Hill's 1979 non-quadratic yield criterion with a deformation theory of plasticity is used along with bifurcation analyses to derive a general analytical

  13. Interaction of Simple Ions with Water: Theoretical Models for the Study of Ion Hydration

    ERIC Educational Resources Information Center

    Gancheff, Jorge S.; Kremer, Carlos; Ventura, Oscar N.

    2009-01-01

    A computational experiment aimed to create and systematically analyze models of simple cation hydrates is presented. The changes in the structure (bond distances and angles) and the electronic density distribution of the solvent and the thermodynamic parameters of the hydration process are calculated and compared with the experimental data. The…

  14. Interaction of Simple Ions with Water: Theoretical Models for the Study of Ion Hydration

    ERIC Educational Resources Information Center

    Gancheff, Jorge S.; Kremer, Carlos; Ventura, Oscar N.

    2009-01-01

    A computational experiment aimed to create and systematically analyze models of simple cation hydrates is presented. The changes in the structure (bond distances and angles) and the electronic density distribution of the solvent and the thermodynamic parameters of the hydration process are calculated and compared with the experimental data. The…

  15. Functional analysis in MR urography - made simple.

    PubMed

    Khrichenko, Dmitry; Darge, Kassa

    2010-02-01

    MR urography (MRU) has proved to be a most advantageous imaging modality of the urinary tract in children, providing one-stop comprehensive morphological and functional information, without the utilization of ionizing radiation. The functional analysis of the MRU scan still requires external post-processing using relatively complex software. This has proved to be a limiting factor in widespread routine implementation of MRU functional analysis and use of MRU functional parameters similar to nuclear medicine. We present software, developed in a pediatric radiology department, that not only enables comprehensive automated functional analysis, but is also very user-friendly, fast, easily operated by the average radiologist or MR technician and freely downloadable at www.chop-fmru.com . A copy of IDL Virtual Machine is required for the installation, which is obtained at no charge at www.ittvis.com . The analysis software, known as "CHOP-fMRU," has the potential to help overcome the obstacles to widespread use of functional MRU in children.

  16. Comparisons Between Experimental Transport Analysis and Theoretical Modeling on LHD

    NASA Astrophysics Data System (ADS)

    Yamazaki, Kozo; LHD Group

    2000-10-01

    Helical plasma confinement systems have a great advantage in producing steady-state high-performance plasmas with a built-in divertor. For the experimental analysis and predictive simulation of helical and tokamak plasmas, a simulation code TOTAL (TOroidal Transport Analysis Linkage) has been developed and is applied to the Large Helical Device (LHD, R=3.6~3.9 m, B<3.0 T) experiments. In the LHD experiment, the global plasma confinement is ~1.5-2 times better than the well-known confinement scaling laws, and the effective transport diffusivity is of the same order of magnitude as the neoclassical ion transport under the assumption Ti=Te. The radial electric field has been measured and roughly agrees with theoretical neoclassical values. Simple drift wave transport models are also compared with experimental values. The impurity dynamics are calculated using the predictive part of the TOTAL code and compared with the "breathing plasma" dynamics, and the role of high-Z impurities is clarified. For the analysis of high-beta plasmas, a local ballooning mode analysis will be added to the TOTAL code, and optimized configurations for the future MHR reactor will be searched for.

  17. Comparing a simple theoretical model for protein folding with all-atom molecular dynamics simulations.

    PubMed

    Henry, Eric R; Best, Robert B; Eaton, William A

    2013-10-29

    Advances in computing have enabled microsecond all-atom molecular dynamics trajectories of protein folding that can be used to compare with and test critical assumptions of theoretical models. We show that recent simulations by the Shaw group (10, 11, 14, 15) are consistent with a key assumption of an Ising-like theoretical model that native structure grows in only a few regions of the amino acid sequence as folding progresses. The distribution of mechanisms predicted by simulating the master equation of this native-centric model for the benchmark villin subdomain, with only two adjustable thermodynamic parameters and one temperature-dependent kinetic parameter, is remarkably similar to the distribution in the molecular dynamics trajectories.

  18. Theoretical calculations of the total and ionization cross sections for electron impact on some simple biomolecules

    NASA Astrophysics Data System (ADS)

    Vinodkumar, Minaxi; Joshipura, K. N.; Limbachiya, Chetan; Mason, Nigel

    2006-08-01

    In this paper we report total cross sections (TCS), QT, total elastic cross sections, Qel, and total ionization cross sections, Qion, for electron impact on water, formaldehyde, formic acid, and the formyl radical from about 15 eV to 2 keV. The results are compared, where possible, with previous theoretical and experimental results and, in general, are found to be in good agreement. The total and elastic cross sections for HCHO, HCOOH, and the CHO radical are reported.

  19. Simple Low Level Features for Image Analysis

    NASA Astrophysics Data System (ADS)

    Falcoz, Paolo

    As human beings, we perceive the world around us mainly through our eyes, and we give what we see the status of "reality"; as such, we have historically tried to create ways of recording this reality so we could augment or extend our memory. From early attempts in photography, like the image produced in 1826 by the French inventor Nicéphore Niépce (Figure 2.1), to the latest high definition camcorders, the number of recorded pieces of reality has increased exponentially, posing the problem of managing all that information. Most of the raw video material produced today has lost its memory augmentation function, as it will hardly ever be viewed by any human; pervasive CCTVs are an example. They generate an enormous amount of data each day, but there is not enough "human processing power" to view them. Therefore the need for effective automatic image analysis tools is great, and a lot of effort has been put into them, both by academia and by industry. In this chapter, a review of some of the most important image analysis tools is presented.

  20. Simple Sensitivity Analysis for Orion GNC

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, covering everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool, or CFT) developed to find the input variables, or pairs of variables, which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can indicate where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. The tool found that input variables such as moments, mass, thrust dispersions, and date of launch were significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors that the tool found.
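    The "estimating success probability" idea mentioned above can be illustrated with a toy post-processing step over Monte Carlo outputs: bin each dispersed input, compute the conditional probability of meeting a requirement in each bin, and rank inputs by how much that probability varies. Everything below (variable names, dispersions, and the synthetic pass/fail rule) is fabricated for illustration and is not the Critical Factors Tool itself.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# fabricated dispersed inputs (names and distributions are illustrative only)
inputs = {
    "mass_kg":      rng.normal(9000.0, 200.0, n),
    "thrust_scale": rng.normal(1.0, 0.03, n),
    "launch_day":   rng.integers(0, 30, n).astype(float),
}
# fabricated requirement: touchdown miss distance must stay under a limit,
# with a strong (hidden) dependence on the thrust dispersion only
miss = np.abs(rng.normal(0.0, 1.0, n) + 1.5 * (inputs["thrust_scale"] - 1.0) / 0.03)
success = miss < 2.0

def success_probability_spread(x, ok, bins=10):
    """Spread of the conditional success probability across quantile bins of one
    input; a large spread flags a variable that drives requirement satisfaction."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    p = np.array([ok[idx == b].mean() if np.any(idx == b) else np.nan
                  for b in range(bins)])
    return np.nanmax(p) - np.nanmin(p)

for name, x in inputs.items():
    print(f"{name:13s} spread of P(success) = {success_probability_spread(x, success):.3f}")
```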

  1. Theoretical investigation of the synchronous totally asymmetric simple exclusion process with a roundabout

    NASA Astrophysics Data System (ADS)

    Chen, X.; Zhang, Y.; Liu, Y.; Xiao, S.

    2017-02-01

    The roundabout in a one-dimensional system is studied employing the synchronous totally asymmetric simple exclusion process. At special sites far from the boundaries, particles can attach to and detach from the system irreversibly with respective probabilities p and q. When the system is in a steady state, seven stationary phases are possible. The results of simulations agree well with analytical calculations. The stable state of the low-density/maximal-current/high-density phase corresponds to a critical point and may transition to the six other phases.
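    A minimal synchronous TASEP sketch with a single bulk site where particles attach and detach irreversibly (probabilities p and q, as in the abstract) is shown below. The lattice size, boundary rates, hop probability, and update ordering are illustrative assumptions rather than the paper's roundabout model.

```python
import numpy as np

rng = np.random.default_rng(0)

L_sites = 200                  # lattice size (illustrative, not from the paper)
p_hop = 0.75                   # hop probability in the synchronous update
alpha, beta = 0.3, 0.3         # injection / extraction probabilities at the open boundaries
p_att, q_det = 0.1, 0.1        # irreversible attachment / detachment at the special site
special = L_sites // 2

lattice = np.zeros(L_sites, dtype=int)

for _ in range(20000):
    new = lattice.copy()
    # bulk: every particle with an empty right neighbour hops with probability p_hop
    movable = np.where((lattice[:-1] == 1) & (lattice[1:] == 0))[0]
    movers = movable[rng.random(movable.size) < p_hop]
    new[movers] = 0
    new[movers + 1] = 1
    # open boundaries: inject on the left, extract on the right
    if lattice[0] == 0 and rng.random() < alpha:
        new[0] = 1
    if lattice[-1] == 1 and rng.random() < beta:
        new[-1] = 0
    # special bulk site: attach to an empty site / detach from an occupied one
    if new[special] == 0 and rng.random() < p_att:
        new[special] = 1
    elif new[special] == 1 and rng.random() < q_det:
        new[special] = 0
    lattice = new

print("density left of the special site: ", lattice[:special].mean())
print("density right of the special site:", lattice[special + 1:].mean())
```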

  2. Theoretical and experimental evidence of Fano-like resonances in simple monomode photonic circuits

    NASA Astrophysics Data System (ADS)

    Mouadili, A.; El Boudouti, E. H.; Soltani, A.; Talbi, A.; Akjouj, A.; Djafari-Rouhani, B.

    2013-04-01

    A simple photonic device consisting of two dangling side resonators grafted at two sites on a waveguide is designed in order to obtain sharp resonant states inside the transmission gaps without introducing any defects in the structure. This results from an internal resonance of the structure when such a resonance is situated in the vicinity of a zero of transmission or placed between two zeros of transmission, the so-called Fano resonances. A general analytical expression for the transmission coefficient is given for various systems of this kind. The amplitude of the transmission is obtained following the Fano form. The full width at half maximum of the resonances as well as the asymmetric Fano parameter are discussed explicitly as a function of the geometrical parameters of the system. In addition to the usual asymmetric Fano resonance, we show that this system may exhibit an electromagnetically induced transparency resonance, as well as a particular case where such resonances collapse in the transmission coefficient. Also, we give a comparison between the phase of the determinant of the scattering matrix, the so-called Friedel phase, and the phase of the transmission amplitude. The analytical results are obtained by means of the Green's function method, whereas the experiments are carried out using coaxial cables in the radio-frequency regime. These results should have important consequences for designing integrated devices such as narrow-frequency optical or microwave filters and high-speed switches. This system is proposed as a simpler alternative to coupled microresonators.
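    For orientation, the Fano form referred to above is the standard asymmetric line shape (a generic expression, not the paper's explicit formula in terms of the waveguide and resonator lengths):

```latex
T(\varepsilon) \;\propto\; \frac{(q + \varepsilon)^2}{1 + \varepsilon^2},
\qquad \varepsilon = \frac{2(\omega - \omega_0)}{\Gamma},
```

    where $q$ is the asymmetry parameter, $\omega_0$ the resonance frequency, and $\Gamma$ its width; $q \to 0$ gives a symmetric transparency-like dip and $|q| \to \infty$ a symmetric Lorentzian peak.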

  3. Theoretical analysis of the EWEC report

    NASA Technical Reports Server (NTRS)

    1976-01-01

    This analytic investigation shows how the electromagnetic wave energy conversion (EWEC) device, as used for solar-to-electric power conversion, is significantly different from solar cells, with respect to principles of operation. An optimistic estimate of efficiency is about 80% for a full-wave rectifying configuration with solar radiation normally incident. This compares favorably with the theoretical maximum for a CdTe solar cell (23.5%), as well as with the efficiencies of more familiar cells: Si (19.5%), InP (21.5%), and GaAs (23%). Some key technological issues that must be resolved before the EWEC device can be realized are identified. Those issues include: the fabrication of a pn semi-conductor junction with no permittivity resonances in the optical band; and the efficient channeling of the power received by countless microscopic horn antennas through a relatively few number of wires.

  4. Theoretical and observational analysis of spacecraft fields

    NASA Technical Reports Server (NTRS)

    Neubauer, F. M.; Schatten, K. H.

    1972-01-01

    In order to investigate the nondipolar contributions of spacecraft magnetic fields a simple magnetic field model is proposed. This model consists of randomly oriented dipoles in a given volume. Two sets of formulas are presented which give the rms-multipole field components, for isotropic orientations of the dipoles at given positions and for isotropic orientations of the dipoles distributed uniformly throughout a cube or sphere. The statistical results for an 8 cu m cube together with individual examples computed numerically show the following features: Beyond about 2 to 3 m distance from the center of the cube, the field is dominated by an equivalent dipole. The magnitude of the magnetic moment of the dipolar part is approximated by an expression for equal magnetic moments or generally by the Pythagorean sum of the dipole moments. The radial component is generally greater than either of the transverse components for the dipole portion as well as for the nondipolar field contributions.
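    The statistical statements above are easy to reproduce with a small Monte Carlo experiment: place randomly oriented point dipoles in a cube and accumulate the rms field components at an external observation point. The sketch below is an illustrative reconstruction with assumed dipole moments and counts, not the paper's analytical formulas.

```python
import numpy as np

rng = np.random.default_rng(3)
MU0_4PI = 1e-7                      # mu_0 / (4 pi) in SI units

def dipole_field(m, r):
    """Field of a point dipole with moment m (A m^2) at displacement r (m)."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0_4PI * (3.0 * np.dot(m, rhat) * rhat - m) / rn**3

n_dipoles, n_trials, side = 20, 2000, 2.0        # 8 m^3 cube, as in the abstract
obs = np.array([0.0, 0.0, 4.0])                  # observation point 4 m from the cube centre

fields = np.zeros((n_trials, 3))
for t in range(n_trials):
    pos = rng.uniform(-side / 2, side / 2, size=(n_dipoles, 3))
    m = rng.normal(size=(n_dipoles, 3))
    m = 0.1 * m / np.linalg.norm(m, axis=1, keepdims=True)       # equal 0.1 A m^2 moments
    fields[t] = sum(dipole_field(mi, obs - pi) for mi, pi in zip(m, pos))

rms = np.sqrt((fields ** 2).mean(axis=0))
print("rms field (T), x/y/z:", rms)   # the radial (z) component tends to dominate
```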

  5. [Rapid spectrochemical qualitative analysis of simple OMA system].

    PubMed

    Yang, C; Lin, Y; Xu, H; Wen, X; Lin, L

    1997-10-01

    A simple optical multichannel analyzer (OMA) system, assembled from an ordinary one-dimensional CCD camera, a spectrograph, and a microcomputer, is introduced in this paper. The feasibility of spectrochemical analysis with this OMA system was studied. The software we developed performs wavelength calibration and spectral line identification, and automatically outputs the results of qualitative and semi-quantitative analysis.

  6. Category Theoretic Analysis of Hierarchical Protein Materials and Social Networks

    PubMed Central

    Spivak, David I.; Giesa, Tristan; Wood, Elizabeth; Buehler, Markus J.

    2011-01-01

    Materials in biology span all the scales from Angstroms to meters and typically consist of complex hierarchical assemblies of simple building blocks. Here we describe an application of category theory to describe structural and resulting functional properties of biological protein materials by developing so-called ologs. An olog is like a “concept web” or “semantic network” except that it follows a rigorous mathematical formulation based on category theory. This key difference ensures that an olog is unambiguous, highly adaptable to evolution and change, and suitable for sharing concepts with other ologs. We consider simple cases of beta-helical and amyloid-like protein filaments subjected to axial extension and develop an olog representation of their structural and resulting mechanical properties. We also construct a representation of a social network in which people send text-messages to their nearest neighbors and act as a team to perform a task. We show that the olog for the protein and the olog for the social network feature identical category-theoretic representations, and we proceed to precisely explicate the analogy or isomorphism between them. The examples presented here demonstrate that the intrinsic nature of a complex system, which in particular includes a precise relationship between structure and function at different hierarchical levels, can be effectively represented by an olog. This, in turn, allows for comparative studies between disparate materials or fields of application, and results in novel approaches to derive functionality in the design of de novo hierarchical systems. We discuss opportunities and challenges associated with the description of complex biological materials by using ologs as a powerful tool for analysis and design in the context of materiomics, and we present the potential impact of this approach for engineering, life sciences, and medicine. PMID:21931622

  7. A theoretical analysis of basin-scale groundwater temperature distribution

    NASA Astrophysics Data System (ADS)

    An, Ran; Jiang, Xiao-Wei; Wang, Jun-Zhi; Wan, Li; Wang, Xu-Sheng; Li, Hailong

    2015-03-01

    The theory of regional groundwater flow is critical for explaining heat transport by moving groundwater in basins. Domenico and Palciauskas's (1973) pioneering study on convective heat transport in a simple basin assumed that convection has a small influence on redistributing groundwater temperature. Moreover, there has been no research focused on the temperature distribution around stagnation zones among flow systems. In this paper, the temperature distribution in the simple basin is reexamined and that in a complex basin with nested flow systems is explored. In both basins, compared to the temperature distribution due to conduction, convection leads to a lower temperature in most parts of the basin except for a small part near the discharge area. There is a high-temperature anomaly around the basin-bottom stagnation point where two flow systems converge due to a low degree of convection and a long travel distance, but there is no anomaly around the basin-bottom stagnation point where two flow systems diverge. In the complex basin, there are also high-temperature anomalies around internal stagnation points. Temperature around internal stagnation points could be very high when they are close to the basin bottom, for example, due to the small permeability anisotropy ratio. The temperature distribution revealed in this study could be valuable when using heat as a tracer to identify the pattern of groundwater flow in large-scale basins. Domenico PA, Palciauskas VV (1973) Theoretical analysis of forced convective heat transfer in regional groundwater flow. Geological Society of America Bulletin 84:3803-3814

  8. A Simple Theoretical Model for the Mean Rainfall Field of Tropical Cyclones

    NASA Astrophysics Data System (ADS)

    Langousis, A.; Veneziano, D.

    2006-12-01

    We develop a simple model for the mean rainfall intensity profile in tropical cyclones (TCs) before landfall. The model assumes that rainfall is caused primarily by condensation of the humid outflow at the top of the TC boundary layer. This upward-directed flux originates from convergence of the horizontal winds in the boundary layer. The model combines Holland's (1980) representation of the tangential wind speed in the main vortex, an Ekman-type solution for the horizontal and vertical wind profiles inside the TC boundary layer, and moist air thermodynamics to estimate how the mean rainrate i depends on radial distance r from the low pressure center and azimuth b relative to the direction of motion. We start by studying the axisymmetric component i(r), which is also the mean rainrate profile for zero translational velocity. i(r) depends on the maximum pressure deficit DP (or the maximum tangential wind speed Vmax), Holland's B parameter, the radius of maximum winds Rwind, and the depth-averaged temperature in the boundary layer T. The mean rainrate is zero for r = 0, increases to a maximum imax at a distance Rrain somewhat larger than Rwind, and then decays to zero in an almost exponential way. More intense cyclones tend to have lower Rrain and higher imax. The difference Rrain-Rwind is higher for tangential wind profiles that are more peaked around Rwind. Such wind profiles are generally associated with more intense cyclones and higher B values. When cyclones in the Northern hemisphere move, the mean rainrate intensifies in the north-east quadrant relative to the direction of motion and de-intensifies in the south-west quadrant. These azimuthal effects are stronger for faster-moving storms. For preliminary validation, we compare model estimates of i(r) under representative parameters with ensemble averages from 548 CAT12 and 212 CAT35 TCs extracted from the TRMM dataset (Lonfat et al., Mon. Wea. Rev. 132 (2004): 1645-1660). The model reproduces very well the shape
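    For reference, Holland's (1980) axisymmetric vortex used as the wind input above can be written, in the cyclostrophic limit (Coriolis contribution neglected), as

```latex
p(r) = p_c + \Delta P \, e^{-(R_{\mathrm{wind}}/r)^{B}},
\qquad
V(r) = \sqrt{\frac{B \,\Delta P}{\rho}\left(\frac{R_{\mathrm{wind}}}{r}\right)^{B} e^{-(R_{\mathrm{wind}}/r)^{B}}},
```

    where $p_c$ is the central pressure, $\Delta P$ the pressure deficit, and $\rho$ the air density. Larger $B$ gives a wind profile more sharply peaked around $R_{\mathrm{wind}}$, which is the sense in which the abstract links higher $B$ values to a larger $R_{\mathrm{rain}}-R_{\mathrm{wind}}$ difference.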

  9. A theoretical analysis of the electrogastrogram (EGG).

    PubMed

    Calder, Stefan; Cheng, Leo K; Peng Du

    2014-01-01

    In this study, a boundary element model was developed to investigate the relationship between gastric electrical activity, also known as slow waves, and the electrogastrogram (EGG). A dipole was calculated to represent the equivalent net activity of gastric slow waves. The dipole was then placed in an anatomically realistic torso model to simulate the EGG. The torso model was constructed from a laser-scanned geometry of an adult male torso phantom with 190 electrode sites equally distributed around the torso, so that simulated EGG could be directly compared between the physical model and the mathematical model. The results were analyzed using the Fast Fourier Transform (FFT), the spatial distribution of the EGG potential, and a resultant EGG based on a 3-lead configuration. The FFT results showed that both the dipole and the EGG contained an identical dominant frequency component of 3 cycles per minute (cpm), a result matching known physiological behaviour. The -3 dB point of the EGG was 110 mm from the region directly above the dipole source. Finally, the results indicated that electrode coupling could theoretically be used in a similar fashion to ECG coupling to gain greater understanding of how the EGG correlates with gastric slow waves.

  10. A theoretical analysis of vertical flow equilibrium

    SciTech Connect

    Yortsos, Y.C.

    1992-01-01

    The assumption of Vertical Flow Equilibrium (VFE) and of parallel flow conditions, in general, is often applied to the modeling of flow and displacement in natural porous media. However, the methodology for the development of the various models is rather intuitive, and no rigorous method is currently available. In this paper, we develop an asymptotic theory using as parameter the variable R_L = (L/H)√(k_V/k_H). It is rigorously shown that present models represent the leading order term of an asymptotic expansion with respect to 1/R_L^2. Although this was numerically suspected, it is the first time that it is theoretically proved. Based on the general formulation, a series of models is subsequently obtained. In the absence of strong gravity effects, they generalize previous works by Zapata and Lake (1981), Yokoyama and Lake (1981), and Lake and Hirasaki (1981) on immiscible and miscible displacements. In the limit of gravity-segregated flow, we prove conditions for the fluids to be segregated and derive the Dupuit and Dietz (1953) approximations. Finally, we also discuss effects of capillarity and transverse dispersion.

  11. Catalytic efficiency of enzymes: a theoretical analysis.

    PubMed

    Hammes-Schiffer, Sharon

    2013-03-26

    This brief review analyzes the underlying physical principles of enzyme catalysis, with an emphasis on the role of equilibrium enzyme motions and conformational sampling. The concepts are developed in the context of three representative systems, namely, dihydrofolate reductase, ketosteroid isomerase, and soybean lipoxygenase. All of these reactions involve hydrogen transfer, but many of the concepts discussed are more generally applicable. The factors that are analyzed in this review include hydrogen tunneling, proton donor-acceptor motion, hydrogen bonding, pKa shifting, electrostatics, preorganization, reorganization, and conformational motions. The rate constant for the chemical step is determined primarily by the free energy barrier, which is related to the probability of sampling configurations conducive to the chemical reaction. According to this perspective, stochastic thermal motions lead to equilibrium conformational changes in the enzyme and ligands that result in configurations favorable for the breaking and forming of chemical bonds. For proton, hydride, and proton-coupled electron transfer reactions, typically the donor and acceptor become closer to facilitate the transfer. The impact of mutations on the catalytic rate constants can be explained in terms of the factors enumerated above. In particular, distal mutations can alter the conformational motions of the enzyme and therefore the probability of sampling configurations conducive to the chemical reaction. Methods such as vibrational Stark spectroscopy, in which environmentally sensitive probes are introduced site-specifically into the enzyme, provide further insight into these aspects of enzyme catalysis through a combination of experiments and theoretical calculations.

  12. A theoretical analysis of vertical flow equilibrium

    SciTech Connect

    Yortsos, Y.C.

    1992-01-01

    The assumption of Vertical Flow Equilibrium (VFE) and of parallel flow conditions, in general, is often applied to the modeling of flow and displacement in natural porous media. However, the methodology for the development of the various models is rather intuitive, and no rigorous method is currently available. In this paper, we develop an asymptotic theory using as parameter the variable R_L = (L/H)√(k_V/k_H). It is rigorously shown that present models represent the leading order term of an asymptotic expansion with respect to 1/R_L^2. Although this was numerically suspected, it is the first time that it is theoretically proved. Based on the general formulation, a series of models is subsequently obtained. In the absence of strong gravity effects, they generalize previous works by Zapata and Lake (1981), Yokoyama and Lake (1981), and Lake and Hirasaki (1981) on immiscible and miscible displacements. In the limit of gravity-segregated flow, we prove conditions for the fluids to be segregated and derive the Dupuit and Dietz (1953) approximations. Finally, we also discuss effects of capillarity and transverse dispersion.

  13. Landscape analysis: Theoretical considerations and practical needs

    USGS Publications Warehouse

    Godfrey, A.E.; Cleaves, E.T.

    1991-01-01

    Numerous systems of land classification have been proposed. Most have led directly to or have been driven by an author's philosophy of earth-forming processes. However, the practical need of classifying land for planning and management purposes requires that a system lead to predictions of the results of management activities. We propose a landscape classification system composed of 11 units, from realm (a continental mass) to feature (a splash impression). The classification concerns physical aspects rather than economic or social factors; and aims to merge land inventory with dynamic processes. Landscape units are organized using a hierarchical system so that information may be assembled and communicated at different levels of scale and abstraction. Our classification uses a geomorphic systems approach that emphasizes the geologic-geomorphic attributes of the units. Realm, major division, province, and section are formulated by subdividing large units into smaller ones. For the larger units we have followed Fenneman's delineations, which are well established in the North American literature. Areas and districts are aggregated into regions and regions into sections. Units smaller than areas have, in practice, been subdivided into zones and smaller units if required. We developed the theoretical framework embodied in this classification from practical applications aimed at land use planning and land management in Maryland (eastern Piedmont Province near Baltimore) and Utah (eastern Uinta Mountains). © 1991 Springer-Verlag New York Inc.

  14. LHD Plasma Modeling and Theoretical Analysis

    NASA Astrophysics Data System (ADS)

    Yamazaki, Kozo; Nakajima, Noriyoshi; Murakami, Sadayoshi; Yokoyama, Masayuki

    The transport/heating modeling and equilibrium/stability analysis have been carried out for LHD (Large Helical Device) plasmas. A new simulation code TOTAL (TOroidal Transport Analysis Linkage) has been developed, which consists of the 3-dimensional equilibrium code VMEC, including bootstrap current, and the 1-dimensional transport code HTRANS, including helical-ripple transport as well as anomalous transport. This code clarified the favorable effect of the bootstrap current on neoclassical confinement in LHD. A 3-dimensional stability analysis using the CAS3D code has been carried out and clarified the ballooning mode structure peculiar to LHD high-beta plasmas. A 5-dimensional simulation code has been developed to analyze the NBI and ECH heating power depositions in LHD plasmas, and the particle orbit effects of high-energy particles are clarified. A plasma rotation analysis is also carried out in relation to the possibility of the electric-field transition and plasma confinement improvement in LHD.

  15. Theoretical analysis of HVAC duct hanger systems

    NASA Technical Reports Server (NTRS)

    Miller, R. D.

    1987-01-01

    Several methods are presented which, together, may be used in the analysis of duct hanger systems over a wide range of frequencies. The finite element method (FEM) and component mode synthesis (CMS) method are used for low- to mid-frequency range computations and have been shown to yield reasonably close results. The statistical energy analysis (SEA) method yields predictions which agree with the CMS results for the 800 to 1000 Hz range provided that a sufficient number of modes participate. The CMS approach has been shown to yield valuable insight into the mid-frequency range of the analysis. It has been demonstrated that it is possible to conduct an analysis of a duct/hanger system in a cost-effective way for a wide frequency range, using several methods which overlap for several frequency bands.

  16. Theoretical and experimental analysis of the physics of water rockets

    NASA Astrophysics Data System (ADS)

    Barrio-Perotti, R.; Blanco-Marigorta, E.; Fernández-Francos, J.; Galdo-Vega, M.

    2010-09-01

    A simple rocket can be made using a plastic bottle filled with a volume of water and pressurized air. When opened, the air pressure pushes the water out of the bottle. This causes an increase in the bottle momentum so that it can be propelled to fairly long distances or heights. Water rockets are widely used as an educational activity, and several mathematical models have been proposed to investigate and predict their physics. However, the real equations that describe the physics of the rockets are so complicated that certain assumptions are usually made to obtain models that are easier to use. These models provide relatively good predictions but fail in describing the complex physics of the flow. This paper presents a detailed theoretical analysis of the physics of water rockets that concludes with the proposal of a physical model. The validity of the model is checked by a series of field tests. The tests showed maximum differences with predictions of about 6%. The proposed model is finally used to investigate the temporal evolution of some significant variables during the propulsion and flight of the rocket. The experience and procedure described in this paper can be proposed to graduate students and also at undergraduate level if certain simplifications are assumed in the general equations.
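    To make the kind of simplified model discussed above concrete, the sketch below integrates the water-thrust phase of such a rocket under common textbook assumptions: quasi-steady Bernoulli outflow, adiabatic expansion of the trapped air, and no aerodynamic drag. Bottle volume, fill fraction, pressure, and nozzle size are illustrative values, and the model is deliberately cruder than the one proposed in the paper.

```python
import numpy as np

RHO_W, P_ATM, GAMMA, G = 1000.0, 1.013e5, 1.4, 9.81

V_bottle = 2.0e-3                    # bottle volume, m^3 (2 L)
V_water0 = 0.7e-3                    # initial water volume, m^3
p0 = 6.0e5                           # initial absolute air pressure, Pa
A_nozzle = np.pi * 0.011 ** 2        # nozzle area for a ~22 mm opening, m^2
m_dry = 0.10                         # empty rocket mass, kg

dt, t, v, V_water = 1e-4, 0.0, 0.0, V_water0
while V_water > 0.0:
    V_air = V_bottle - V_water
    p = p0 * ((V_bottle - V_water0) / V_air) ** GAMMA     # adiabatic air expansion
    if p <= P_ATM:
        break                                             # no overpressure left to expel water
    u_e = np.sqrt(2.0 * (p - P_ATM) / RHO_W)              # Bernoulli exhaust speed
    mdot = RHO_W * A_nozzle * u_e                         # water mass flow rate
    m = m_dry + RHO_W * V_water
    v += (mdot * u_e / m - G) * dt                        # thrust minus weight (vertical flight)
    V_water -= A_nozzle * u_e * dt
    t += dt

print(f"water expelled after {t * 1e3:.0f} ms, burnout speed {v:.1f} m/s")
```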

  17. Gender and Physics: A Theoretical Analysis.

    ERIC Educational Resources Information Center

    Rolin, Kristina

    2001-01-01

    Argues that objections raised by Koertge, Gross and Levitt, and Weinberg against feminist scholarship are unwarranted. The concept of gender, as it has been developed in feminist theory, is key to understanding why the first objection is misguided. Social analysis of scientific knowledge is key to understanding why the second and third objections…

  18. Theoretical Analysis of a Model for a Field Displacement Isolator

    DTIC Science & Technology

    1976-06-01

    Thesis by Ram Sharon, Naval Postgraduate School, Monterey, California, June 1976. Thesis advisor: J. B. Knorr. Approved for public release. Available from the NPS Archive (Calhoun): http://hdl.handle.net/10945/17975

  19. Active disturbance rejection control: methodology and theoretical analysis.

    PubMed

    Huang, Yi; Xue, Wenchao

    2014-07-01

    The methodology of ADRC and the progress of its theoretical analysis are reviewed in the paper. Several breakthroughs for control of nonlinear uncertain systems, made possible by ADRC, are discussed. The key in employing ADRC, which is to accurately determine the "total disturbance" that affects the output of the system, is illuminated. The latest results in theoretical analysis of the ADRC-based control systems are introduced.

  20. Empirical and theoretical analysis of complex systems

    NASA Astrophysics Data System (ADS)

    Zhao, Guannan

    structures evolve on a similar timescale to individual level transmission, we investigated the process of transmission through a model population comprising social groups which follow simple dynamical rules for growth and break-up, and the profiles produced bear a striking resemblance to empirical data obtained from social, financial and biological systems. Finally, for better implementation of a widely accepted power-law test algorithm, we have developed a fast testing procedure using parallel computation.

  1. A simple, sensitive graphical method of treating thermogravimetric analysis data

    Treesearch

    Abraham Broido

    1969-01-01

    Thermogravimetric Analysis (TGA) is finding increasing utility in investigations of the pyrolysis and combustion behavior of materials. Although a theoretical treatment of the TGA behavior of an idealized reaction is relatively straightforward, major complications can be introduced when the reactions are complex, e.g., in the pyrolysis of cellulose, and when...
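    A minimal sketch of the graphical treatment usually associated with Broido, plotting ln(ln(1/y)) against 1/T and reading an apparent activation energy from the slope, is shown below for a single idealized reaction. The synthetic data and cutoffs are assumptions; real pyrolysis data (e.g., cellulose) would show the complications the abstract warns about.

```python
import numpy as np

R_GAS = 8.314          # J / (mol K)

def broido_activation_energy(T_kelvin, weight, w_final):
    """Apparent activation energy from TGA data via the ln(ln(1/y)) vs 1/T
    construction, where y is the fraction of undecomposed material and the
    slope of the linear fit is -Ea/R."""
    y = (weight - w_final) / (weight[0] - w_final)
    mask = (y > 0.05) & (y < 0.95)                 # keep the roughly linear mid-range
    slope, _ = np.polyfit(1.0 / T_kelvin[mask], np.log(np.log(1.0 / y[mask])), 1)
    return -slope * R_GAS                          # J/mol

# synthetic single-step decomposition, used purely to exercise the function
T = np.linspace(500.0, 800.0, 200)
y_true = np.exp(-np.exp(25.0 - 1.8e4 / T))         # fabricated curve, not real data
weight = 2.0 + 8.0 * y_true                        # 2 mg residue + 8 mg reactive fraction
print(f"apparent Ea ~ {broido_activation_energy(T, weight, 2.0) / 1e3:.0f} kJ/mol")
```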

  2. Medial Cochlear Efferent Function: A Theoretical Analysis

    NASA Astrophysics Data System (ADS)

    Mountain, David C.

    2011-11-01

    Since the discovery of the cochlear efferent system, many hypotheses have been put forth for its function, ranging from protecting the cochlea from overstimulation to improving the detection of sounds in noise. It is known that the medial efferent system innervates the outer hair cells and that stimulation of this system reduces basilar membrane and auditory nerve sensitivity, which suggests that this system acts to decrease the gain of the cochlear amplifier. Here I present modeling results, as well as analysis of published experimental data, suggesting that the function of the medial efferent reflex is to decrease the cochlear amplifier gain by just the right amount so that the nonlinearity in the basilar membrane response lines up perfectly with the inner hair cell's nonlinear transduction process to produce a hair cell receptor potential that is proportional to the logarithm of the sound pressure level.

  3. Optical pumping spectroscopy of Rb vapour with co-propagating laser beams: line identification by a simple theoretical model

    NASA Astrophysics Data System (ADS)

    Krmpot, Aleksandar J.; Rabasović, Mihailo D.; Jelenković, Branislav M.

    2010-07-01

    In this paper the saturation spectra of rubidium vapour at room temperature, obtained with overlapped co-propagating laser beams, were examined. Unlike the standard saturation spectroscopy, here the transmission of the pump laser beam was detected. The pump laser was locked to an atomic transition of the D2 line, while the probe laser frequency was scanned in a wide frequency range. The pump and probe beams had approximately the same intensities; thus the probe laser can saturate transitions and contribute to optical pumping. This, together with Doppler broadening, leads to rich pump transmission spectra, with many lines appearing due to the interaction of lasers with atoms in different velocity groups. The advantages of this method are well-resolved structures and appearance of spectral lines on a flat, Doppler-free background. Agreement between experimental and theoretical results shows the usefulness of this simple model, based on the rate equations, for identification of lines and determination of relative contribution to the observed line intensity from atoms with different velocities. Theoretical spectra are a useful tool for the calibration of experimental spectra obtained by a nonlinear dependence of the laser frequency on the voltage applied to the piezo used for the laser diode frequency scanning.

  4. On the complex relationship between energy expenditure and longevity: Reconciling the contradictory empirical results with a simple theoretical model.

    PubMed

    Hou, Chen; Amunugama, Kaushalya

    2015-07-01

    The relationship between energy expenditure and longevity has been a central theme in aging studies. Empirical studies have yielded controversial results, which cannot be reconciled by existing theories. In this paper, we present a simple theoretical model based on first principles of energy conservation and allometric scaling laws. The model takes into consideration the energy tradeoffs between life history traits and the efficiency of energy utilization, and offers quantitative and qualitative explanations for a set of seemingly contradictory empirical results. We show that oxidative metabolism can affect cellular damage and longevity in different ways in animals with different life histories and under different experimental conditions. Qualitative data and the linearity between energy expenditure, cellular damage, and lifespan assumed in previous studies are not sufficient to understand the complexity of the relationships. Our model provides a theoretical framework for quantitative analyses and predictions. The model is supported by a variety of empirical studies, including studies on the cellular damage profile during ontogeny; the intra- and inter-specific correlations between body mass, metabolic rate, and lifespan; and the effects on lifespan of (1) diet restriction and genetic modification of growth hormone, (2) the cold and exercise stresses, and (3) manipulations of antioxidant.

  5. Graph theoretical analysis of climate data

    NASA Astrophysics Data System (ADS)

    Zerenner, T.; Hense, A.

    2012-04-01

    Applying methods from graph and network theory to climatological data is a relatively new approach and involves numerous difficulties. The atmosphere is a high-dimensional and complex dynamical system which per se does not show a network-like structure: it does not consist of well-defined nodes and edges. Thus, considering such a system as a network or graph inevitably involves radical simplifications and ambiguities. Nevertheless, network analysis has provided useful results for different kinds of complex systems, for example in biology or medical science (neural and gene interaction networks). The application of these methods to climate data provides interesting results as well. If the network construction is based on the correlation matrix of the underlying data, the resulting network structures show many well-known patterns and characteristics of the atmospheric circulation (Tsonis et al. 2006, Donges et al. 2009). The interpretation of these network structures is nevertheless questionable. Using Pearson correlation for network construction does not allow one to distinguish between direct and indirect dependencies: an edge does not necessarily represent a causal connection. An interpretation of these structures, for instance concerning the stability of the climate system, is therefore doubtful. Gene interaction networks, for example, are often constructed using partial correlations (Wu et al. 2003), which make it possible to distinguish between direct and indirect dependencies. Although a high value of partial correlation does not guarantee causality, it is a step in the direction of measuring causal dependencies. This approach is known as Gaussian Graphical Models (GGMs). For high-dimensional datasets such as climate data, partial correlations can be obtained by calculating the precision matrix, the inverse covariance matrix. Since the maximum likelihood estimates of covariance matrices of climate datasets are singular, the precision matrices can only be estimated, for example, by using the
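    The step from Pearson correlation to partial correlation described above amounts to reading off the off-diagonal entries of the precision matrix. A minimal sketch is shown below; plain (pseudo-)inversion is used only for illustration, whereas the regularised estimators the abstract alludes to at the end would be needed for realistic, high-dimensional climate fields.

```python
import numpy as np

def partial_correlations(data):
    """Partial-correlation matrix from the precision (inverse covariance) matrix:
    rho_ij = -Theta_ij / sqrt(Theta_ii * Theta_jj)."""
    theta = np.linalg.pinv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(theta))
    rho = -theta / np.outer(d, d)
    np.fill_diagonal(rho, 1.0)
    return rho

# toy example: x1 and x2 are linked only indirectly, through their common driver x0
rng = np.random.default_rng(4)
x0 = rng.normal(size=2000)
x1 = x0 + 0.5 * rng.normal(size=2000)
x2 = x0 + 0.5 * rng.normal(size=2000)
print(np.round(partial_correlations(np.column_stack([x0, x1, x2])), 2))
# the (x1, x2) entry is near zero even though their Pearson correlation is large
```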

  6. A systematic theoretical study of harmonic vibrational frequencies: The ammonium ion NH4+ and other simple molecules

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Yukio; Schaefer, Henry F.

    1980-09-01

    Analytic gradient techniques have been used to predict the harmonic vibrational frequencies of HCN, H2CO, H2O, CH4 and NH4+ at several levels of molecular electronic structure theory. Basis sets of double zeta, double zeta plus polarization, and extended plus polarization quality have been used in conjunction with self-consistent-field and configuration interaction methods. For the four spectroscopically characterized molecules, comparison with theory is particularly appropriate because experimental harmonic frequencies are available. For the 16 vibrational frequencies thus considered, the DZ SCF level of theory yields average errors of 166 cm-1 or 8.0%. The DZ+P SCF results are of comparable accuracy, differing on the average from experiment by 176 cm-1 or 8.3%. With the extended basis set, the comparable SCF frequency errors are only slightly less. The explicit incorporation of correlation effects qualitatively improves the agreement between theoretical and experimental harmonic vibrational frequencies. The DZ CI frequencies differ on the average by only 44 cm-1 or 2.0%. Perhaps surprisingly, the use of larger basis sets in conjunction with CI including all singly and doubly excited configurations leads to larger average errors in the vibrational frequencies. For example, the DZ+P CI frequencies have average errors of 80 cm-1 or 3.5%. Thus it seems clear that higher excitations (probably unlinked clusters especially) have a significant effect (order of 50 cm-1) on the theoretical prediction of polyatomic vibrational frequencies. The apparent discrepancy between the theoretical and experimental equilibrium geometry of CH4 is resolved here, and shown to have been a simple consequence of basis set incompleteness. Finally, the gas phase NH4+ equilibrium bond distance is predicted to be 1.022 Å, or 0.01-0.02 Å shorter than found by Ibers and Stevenson for NH4+ in crystalline NH4Cl and NH4F.

  7. Unusual Inorganic Biradicals: A Theoretical Analysis

    SciTech Connect

    Miliordos, Evangelos; Ruedenberg, Klaus; Xantheas, Sotiris S.

    2013-05-27

    Triatomic ions in the series FX2+, where X = O, S, Se, Te, and Po are the terminal atoms, exhibit unusually high biradical characters (0.76 < β < 0.92), as measured from the analysis of Multi-Reference Configuration Interaction (MRCI) wavefunctions. Candidates in this series have the largest biradical character among the homologous, 18-valence-electron CX2(2-), NX2(-), X3, and OX2 (X = O, S, Se, Te, and Po) systems. On the same scale, the biradical character of ozone (O3) is just 0.19, whereas that of trimethylenemethane [C(CH2)3] is 0.97 (β = 1 for an "ideal" biradical). For the 24-electron XO2 series, consisting of molecules with two oxygen atoms and a moiety X that is isoelectronic to oxygen, i.e., X = CH2, NH, O, F+, the singlet (S) state is lower than the triplet (T) one, and the S-T splitting as well as the barrier between their "open" and "ring" configurations was found to depend linearly on the inverse of the biradical character.

  8. Gender and Physics: a Theoretical Analysis

    NASA Astrophysics Data System (ADS)

    Rolin, Kristina

    This article argues that the objections raised by Koertge (1998), Gross and Levitt (1994), and Weinberg (1996) against feminist scholarship on gender and physics are unwarranted. The objections are that feminist science studies perpetuate gender stereotypes, are irrelevant to the content of physics, or promote epistemic relativism. In the first part of this article I argue that the concept of gender, as it has been developed in feminist theory, is a key to understanding why the first objection is misguided. Instead of reinforcing gender stereotypes, feminist science studies scholars can formulate empirically testable hypotheses regarding local and contested beliefs about gender. In the second part of this article I argue that a social analysis of scientific knowledge is a key to understanding why the second and the third objections are misguided. The concept of gender is relevant for understanding the social practice of physics, and the social practice of physics can be of epistemic importance. Instead of advancing epistemic relativism, feminist science studies scholars can make important contributions to a subfield of philosophy called social epistemology.

  9. Development of Novel, Simple Multianalyte Sensors for Remote Environmental Analysis

    SciTech Connect

    Professor Sanford A. Asher

    2003-02-18

    This project dramatically advanced polymerized crystalline colloidal array chemical sensing technology. The investigators fabricated nonselective sensors for determining pH and ionic strength, and developed selective sensors for glucose and for organophosphorus mimics of nerve gas agents. They also developed a trace sensor for cations in water that utilized a novel crosslinking sensing motif. In all of these cases the sensor response was modeled theoretically by extending hydrogel volume phase transition theory. Transient sampling methods were developed to allow the ion sensing methods to operate at high ionic strengths, and a novel optrode was developed to provide simple sampling.

  10. The free energy difference between simple models of B- and Z-DNA: Computer simulation and theoretical predictions

    NASA Astrophysics Data System (ADS)

    Gil Montoro, J. C.; Abascal, J. L. F.

    1997-05-01

    A method recently proposed to calculate by computer simulation the relative free energy between two conformational states of a polyelectrolyte is used for the case of the salt-induced B- to Z-DNA transition. In this method, the calculation of the free energy may be split in two steps, one corresponding to the setup of the uncharged conformer in solution while the other one is the charging process of such a structure. Following the description of the method, simulations are reported to compute the free energy difference between the above mentioned DNA conformers in the presence of monovalent added salt. We use a simple DNA solution model—the DNA is represented by charged spheres at the canonical positions of the phosphate groups, water by a dielectric continuum of appropriate permittivity and counterions and coions are modeled as soft spheres of equal ionic radius—for which theoretical approximations have been proposed. It is seen that the charging term is much more important than the setup contribution at any of the investigated salt concentrations. The variation of the free energy of each conformer as a function of the added NaCl concentration has been calculated. Both the B and Z conformers increase noticeably their stabilities with higher salt concentrations but the effect is more pronounced for the latter. As a consequence, the relative population of B-DNA, which is clearly prevalent at moderate ionic strengths, decreases with the addition of salt. However, up to 4.3 M NaCl a B→Z transition is not predicted for this DNA solution model. Additionally, the theoretical calculations are checked for the first time against computer simulation results. In particular, we have tried to assess the foundations and predictive ability of (especially) the Soumpasis potential of mean force theory and, to a lesser extent, the counterion condensation theory of Manning and the polymer reference interaction site model theory of Hirata and Levy.

  11. Simple enrichment and analysis of plasma lysophosphatidic acids.

    PubMed

    Wang, Jialu; Sibrian-Vazquez, Martha; Escobedo, Jorge O; Lowry, Mark; Wang, Lei; Chu, Yu-Hsuan; Moore, Richard G; Strongin, Robert M

    2013-11-21

    A simple and highly efficient technique for the analysis of lysophosphatidic acid (LPA) subspecies in human plasma is described. The streamlined sample preparation protocol furnishes the five major LPA subspecies with excellent recoveries. Extensive analysis of the enriched sample reveals only trace levels of other phospholipids. This level of purity not only improves MS analyses, but enables HPLC post-column detection in the visible region with a commercially available fluorescent phospholipids probe. Human plasma samples from different donors were analyzed using the above method and validated by LC-ESI/MS/MS.

  12. Morphometric analysis of a fresh simple crater on the Moon.

    NASA Astrophysics Data System (ADS)

    Vivaldi, V.; Ninfo, A.; Massironi, M.; Martellato, E.; Cremonese, G.

    In this research we propose an innovative method to determine and quantify the morphology of a simple fresh impact crater. Linné is a well-preserved impact crater, 2.2 km in diameter, located at 27.7°N, 11.8°E, near the western edge of Mare Serenitatis on the Moon. The crater was photographed by the Lunar Orbiter and Apollo space missions. Its particular morphology may make Linné the most striking example of a small fresh simple crater. Morphometric analysis, conducted on a recent high-resolution DTM from LROC (NASA), quantitatively confirmed the pristine morphology of the crater, revealing a clear inner layering which highlights a sequence of lava emplacement events.

  13. Path Analysis Tests of Theoretical Models of Children's Memory Performance

    ERIC Educational Resources Information Center

    DeMarie, Darlene; Miller, Patricia H.; Ferron, John; Cunningham, Walter R.

    2004-01-01

    Path analysis was used to test theoretical models of relations among variables known to predict differences in children's memory--strategies, capacity, and metamemory. Children in kindergarten to fourth grade (chronological ages 5 to 11) performed different memory tasks. Several strategies (i.e., sorting, clustering, rehearsal, and self-testing)…

  14. Theoretical Notes on the Sociological Analysis of School Reform Networks

    ERIC Educational Resources Information Center

    Ladwig, James G.

    2014-01-01

    Nearly two decades ago, Ladwig outlined the theoretical and methodological implications of Bourdieu's concept of the social field for sociological analyses of educational policy and school reform. The current analysis extends this work to consider the sociological import of one of the most ubiquitous forms of educational reform found around…

  15. Theoretical Notes on the Sociological Analysis of School Reform Networks

    ERIC Educational Resources Information Center

    Ladwig, James G.

    2014-01-01

    Nearly two decades ago, Ladwig outlined the theoretical and methodological implications of Bourdieu's concept of the social field for sociological analyses of educational policy and school reform. The current analysis extends this work to consider the sociological import of one of the most ubiquitous forms of educational reform found around…

  16. Mode Deactivation Therapy (MDT) Family Therapy: A Theoretical Case Analysis

    ERIC Educational Resources Information Center

    Apsche, J. A.; Ward Bailey, S. R.

    2004-01-01

    This case study presents a theoretical analysis of implementing mode deactivation therapy (MDT) (Apsche & Ward Bailey, 2003) family therapy with a 13 year old Caucasian male. MDT is a form of cognitive behavioral therapy (CBT) that combines the balance of dialectical behavior therapy (DBT) (Linehan, 1993), the importance of perception from…

  16. Bioimpedance Analysis: A Guide to Simple Design and Implementation

    PubMed Central

    Aroom, Kevin R.; Harting, Matthew T.; Cox, Charles S.; Radharkrishnan, Ravi S.; Smith, Carter; Gill, Brijesh S.

    2013-01-01

    Background Bioimpedance analysis has found utility in many fields of medical research, yet instrumentation can be expensive and/or complicated to build. Advancements in electronic component design and equipment allow for simple bioimpedance analysis using equipment now commonly found in an engineering lab, combined with a few components exclusive to impedance analysis. Materials and methods A modified Howland bridge circuit was designed on a small circuit board with connections for power and bioimpedance probes. A programmable function generator and an oscilloscope were connected to a laptop computer and were tasked to drive and receive data from the circuit. The software then parsed the received data and inserted it into a spreadsheet for subsequent data analysis. The circuit was validated by testing its current output over a range of frequencies and comparing measured values of impedance across a test circuit to expected values. Results The system was validated over frequencies between 1 and 100 kHz. Maximum fluctuation in current was on the order of micro-Amperes. Similarly, the measured value of impedance in a test circuit followed the pattern of actual impedance over the range of frequencies measured. Conclusions Contemporary generation electronic measurement equipment provides adequate levels of connectivity and programmability to rapidly measure and record data for bioimpedance research. These components allow for the rapid development of a simple but accurate bioimpedance measurement system that can be assembled by individuals with limited knowledge of electronics or programming. PMID:18805550
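
    To illustrate the kind of post-processing such a setup performs, the sketch below estimates a complex impedance from sampled voltage and current waveforms by projecting both channels onto the excitation frequency. It is a generic lock-in style calculation with synthetic data, not the authors' software.

```python
import numpy as np

def impedance_at_frequency(v_meas, i_meas, fs, f0):
    """Estimate complex impedance from sampled voltage and current waveforms.

    v_meas, i_meas : arrays of voltage (V) and current (A) samples
    fs : sampling rate (Hz); f0 : excitation frequency (Hz)
    """
    t = np.arange(len(v_meas)) / fs
    ref = np.exp(-2j * np.pi * f0 * t)   # complex reference at the drive frequency
    v_amp = 2 * np.mean(v_meas * ref)    # complex amplitude of the voltage channel
    i_amp = 2 * np.mean(i_meas * ref)    # complex amplitude of the current channel
    return v_amp / i_amp                 # complex impedance Z = V / I

# Synthetic example: 1 kHz drive, 1 mA current through a 100 ohm resistive load.
fs, f0 = 1_000_000, 1_000
t = np.arange(0, 0.01, 1 / fs)
i = 1e-3 * np.sin(2 * np.pi * f0 * t)
v = 0.1 * np.sin(2 * np.pi * f0 * t)
print(abs(impedance_at_frequency(v, i, fs, f0)))   # ~100 ohm
```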

  17. Theoretical and Experimental Investigation of Random Gust Loads Part I: Aerodynamic Transfer Function of a Simple Wing Configuration in Incompressible Flow

    NASA Technical Reports Server (NTRS)

    Hakkinen, Raimo J; Richardson, A S , Jr

    1957-01-01

    Sinusoidally oscillating downwash and lift produced on a simple rigid airfoil were measured and compared with calculated values. Statistically stationary random downwash and the corresponding lift on a simple rigid airfoil were also measured and the transfer functions between their power spectra determined. The random experimental values are compared with theoretically approximated values. Limitations of the experimental technique and the need for more extensive experimental data are discussed.

  18. Theoretical analysis of subwavelength high contrast grating reflectors.

    PubMed

    Karagodsky, Vadim; Sedgwick, Forrest G; Chang-Hasnain, Connie J

    2010-08-02

    A simple analytical treatment of the ultra-high reflectivity of subwavelength dielectric gratings is developed. The ultra-high reflectivity is explained as a destructive interference effect between the two grating modes. Based on this phenomenon, a design algorithm for broadband grating mirrors is suggested.

  19. Multiresolution analysis over simple graphs for brain computer interfaces

    NASA Astrophysics Data System (ADS)

    Asensio-Cubero, J.; Gan, J. Q.; Palaniappan, R.

    2013-08-01

    Objective. Multiresolution analysis (MRA) offers a useful framework for signal analysis in the temporal and spectral domains, although commonly employed MRA methods may not be the best approach for brain computer interface (BCI) applications. This study aims to develop a new MRA system for extracting tempo-spatial-spectral features for BCI applications based on wavelet lifting over graphs. Approach. This paper proposes a new graph-based transform for wavelet lifting and a tailored simple graph representation for electroencephalography (EEG) data, which results in an MRA system where temporal, spectral and spatial characteristics are used to extract motor imagery features from EEG data. The transformed data is processed within a simple experimental framework to test the classification performance of the new method. Main Results. The proposed method can significantly improve the classification results obtained by various wavelet families using the same methodology. Preliminary results using common spatial patterns as feature extraction method show that we can achieve comparable classification accuracy to more sophisticated methodologies. From the analysis of the results we can obtain insights into the pattern development in the EEG data, which provide useful information for feature basis selection and thus for improving classification performance. Significance. Applying wavelet lifting over graphs is a new approach for handling BCI data. The inherent flexibility of the lifting scheme could lead to new approaches based on the hereby proposed method for further classification performance improvement.
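
    As a minimal illustration of the lifting idea, the sketch below performs one predict/update lifting step for a signal defined on graph nodes, using a Haar-like prediction from even-indexed neighbours. It is an illustrative toy, not the tailored EEG graph transform proposed in the paper.

```python
import numpy as np

def lifting_step(signal, neighbors):
    """One predict/update lifting step on a signal defined over graph nodes.

    signal    : 1-D array of node values
    neighbors : dict mapping each odd node index to a list of even node indices
    """
    approx = signal[::2].astype(float).copy()    # even samples -> approximation
    detail = signal[1::2].astype(float).copy()   # odd samples  -> detail
    for k, odd in enumerate(range(1, len(signal), 2)):
        pred = np.mean([signal[e] for e in neighbors.get(odd, [odd - 1])])
        detail[k] -= pred                        # predict step
    for k in range(len(approx)):
        if k < len(detail):
            approx[k] += 0.5 * detail[k]         # update step (preserves the mean)
    return approx, detail

# Toy EEG-like segment on a simple chain graph (neighbourhoods are illustrative).
x = np.array([1.0, 1.2, 0.9, 1.1, 1.0, 1.3])
nbrs = {1: [0, 2], 3: [2, 4], 5: [4]}
print(lifting_step(x, nbrs))
```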

  1. Game theoretic analysis of physical protection system design

    SciTech Connect

    Canion, B.; Schneider, E.; Bickel, E.; Hadlock, C.; Morton, D.

    2013-07-01

    The physical protection system (PPS) of a fictional small modular reactor (SMR) facility has been modeled as a platform for a game theoretic approach to security decision analysis. To demonstrate the game theoretic approach, a rational adversary with complete knowledge of the facility has been modeled attempting a sabotage attack. The adversary adjusts his decisions in response to investments made by the defender to enhance the security measures. This can lead to a conservative physical protection system design. Since defender upgrades were limited by a budget, cost-benefit analysis may be conducted on security upgrades. One approach to cost-benefit analysis is the efficient frontier, which depicts the reduction in expected consequence per incremental increase in the security budget.
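
    A minimal sketch of the efficient-frontier idea described above: enumerate upgrade portfolios within the budget and keep those that are not dominated in cost and expected consequence. The upgrade names, costs, and risk-reduction factors are invented for illustration.

```python
from itertools import combinations

# Hypothetical upgrades: (name, cost in $k, consequence-reduction factor).
upgrades = [("delay barrier", 150, 0.70), ("extra sensors", 100, 0.80),
            ("response team", 250, 0.55)]
budget, baseline = 350, 100.0   # budget in $k, baseline expected consequence

portfolios = []
for r in range(len(upgrades) + 1):
    for combo in combinations(upgrades, r):
        cost = sum(c for _, c, _ in combo)
        if cost > budget:
            continue
        consequence = baseline
        for _, _, factor in combo:
            consequence *= factor          # assume independent risk reduction
        portfolios.append((cost, consequence, [n for n, _, _ in combo]))

# Efficient frontier: portfolios not dominated by any cheaper-and-safer option.
frontier = [p for p in portfolios
            if not any(q[0] <= p[0] and q[1] < p[1] for q in portfolios)]
for cost, cons, names in sorted(frontier):
    print(f"${cost}k -> expected consequence {cons:.1f}: {names}")
```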

  2. Theoretical considerations and a simple method for measuring alkalinity and acidity in low-pH waters by gran titration

    USGS Publications Warehouse

    Barringer, J.L.; Johnsson, P.A.

    1996-01-01

    Titrations for alkalinity and acidity using the technique described by Gran (1952, Determination of the equivalence point in potentiometric titrations, Part II: The Analyst, v. 77, p. 661-671) have been employed in the analysis of low-pH natural waters. This report includes a synopsis of the theory and calculations associated with Gran's technique and presents a simple and inexpensive method for performing alkalinity and acidity determinations. However, potential sources of error introduced by the chemical character of some waters may limit the utility of Gran's technique. Therefore, the cost- and time-efficient method for performing alkalinity and acidity determinations described in this report is useful for exploring the suitability of Gran's technique in studies of water chemistry.
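
    For readers who want to see the calculation in code, the sketch below performs the standard Gran extrapolation for an alkalinity titration with strong acid: past the equivalence point the Gran function F = (V0 + v)·10^(-pH) is linear in the titrant volume v, and its zero crossing gives the equivalence volume. The data points are synthetic.

```python
import numpy as np

# Synthetic titration of a 50 mL sample with 0.02 M HCl (illustrative numbers).
v0, c_acid = 50.0, 0.02                        # sample volume (mL), acid conc. (M)
v = np.array([3.0, 3.5, 4.0, 4.5, 5.0])        # titrant volumes past equivalence (mL)
ph = np.array([3.72, 3.43, 3.26, 3.13, 3.04])  # measured pH values

f = (v0 + v) * 10.0 ** (-ph)                   # Gran function
slope, intercept = np.polyfit(v, f, 1)         # linear fit of F against v
v_eq = -intercept / slope                      # volume where F extrapolates to zero
alkalinity = v_eq * c_acid / v0 * 1000.0       # meq/L of sample

print(f"equivalence volume = {v_eq:.2f} mL, alkalinity = {alkalinity:.2f} meq/L")
```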

  3. A simple method for positional analysis of phosphatidylcholine.

    PubMed

    Kiełbowicz, Grzegorz; Gładkowski, Witold; Chojnacka, Anna; Wawrzeńczyk, Czesław

    2012-12-15

    A simple and fast method for positional analysis of the fatty acid composition of phosphatidylcholine (PC) from egg yolk and soy has been developed. The key step of the procedure was complete ethanolysis of PC catalyzed by the sn-1,3 specific lipase from Mucor miehei (Lipozyme). 2-Acyl-lysophosphatidylcholine (2-acyl LPC), fatty acid ethyl esters (FAEEs) and free fatty acids (FAs) were formed in this process. No acyl migration was observed during the reaction. The products were completely separated from the product mixture by simple extraction in a water:hexane (2:3 v/v) system. The hexane fraction containing free FAs and FAEEs was treated with BF3/Et2O in ethanol to obtain only FAEEs. The analysis of FAEEs by GC gave the composition of the FAs in the sn-1 position of the PC. 2-Acyl LPC from the water fraction, after precipitation in cold (-20°C) acetone, was converted into FAEEs and analyzed by gas chromatography (GC) to determine the FA composition in the sn-2 position of the PC.

  4. A Simple Pile-up Model for Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Sevilla, Diego J. R.

    2017-07-01

    In this paper, a simple pile-up model is presented. This model calculates the probability P(n|N) of having n counts if N particles collide with a sensor during an exposure time. Through some approximations, an analytic expression depending on only one parameter is obtained. This parameter characterizes the pile-up magnitude, and depends on features of the instrument and the source. The statistical model obtained permits the determination of probability distributions of measured counts from the probability distributions of incoming particles, which is valuable for time series analysis. Applicability limits are discussed, and an example of the improvement that can be achieved in the statistical analysis considering the proposed pile-up model is shown by analyzing real data.
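
    The paper's analytic expression is not reproduced here, but the sketch below illustrates the underlying counting problem with a toy Monte Carlo occupancy model: N incoming particles are scattered over a fixed number of resolution cells and each occupied cell registers exactly one count, so pile-up appears as n < N. All parameters are illustrative.

```python
import random
from collections import Counter

def simulate_counts(n_particles, n_cells, trials=100_000, seed=1):
    """Monte Carlo estimate of P(n | N) for a toy occupancy pile-up model."""
    rng = random.Random(seed)
    hist = Counter()
    for _ in range(trials):
        occupied = {rng.randrange(n_cells) for _ in range(n_particles)}
        hist[len(occupied)] += 1          # each occupied cell yields one count
    return {n: c / trials for n, c in sorted(hist.items())}

# With 10 particles and 20 resolution cells, pile-up losses are substantial.
for n, p in simulate_counts(10, 20).items():
    print(f"P({n} counts | 10 particles) ~ {p:.3f}")
```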

  5. Theoretical Analysis of Heuristic Search Methods for Online POMDPs.

    PubMed

    Ross, Stéphane; Pineau, Joelle; Chaib-Draa, Brahim

    2008-01-01

    Planning in partially observable environments remains a challenging problem, despite significant recent advances in offline approximation techniques. A few online methods have also been proposed recently, and proven to be remarkably scalable, but without the theoretical guarantees of their offline counterparts. Thus it seems natural to try to unify offline and online techniques, preserving the theoretical properties of the former, and exploiting the scalability of the latter. In this paper, we provide theoretical guarantees on an anytime algorithm for POMDPs which aims to reduce the error made by approximate offline value iteration algorithms through the use of an efficient online searching procedure. The algorithm uses search heuristics based on an error analysis of lookahead search, to guide the online search towards reachable beliefs with the most potential to reduce error. We provide a general theorem showing that these search heuristics are admissible, and lead to complete and ε-optimal algorithms. This is, to the best of our knowledge, the strongest theoretical result available for online POMDP solution methods. We also provide empirical evidence showing that our approach is also practical, and can find (provably) near-optimal solutions in reasonable time.

  6. A simple model of hysteresis behavior using spreadsheet analysis

    NASA Astrophysics Data System (ADS)

    Ehrmann, A.; Blachowicz, T.

    2015-01-01

    Hysteresis loops occur in many scientific and technical problems, especially as the field dependent magnetization of ferromagnetic materials, but also as stress-strain curves of materials measured by tensile tests including thermal effects, liquid-solid phase transitions, in cell biology or economics. While several mathematical models exist which aim to calculate hysteresis energies and other parameters, here we offer a simple model for a general hysteretic system, showing different hysteresis loops depending on the defined parameters. The calculation, which is based on basic spreadsheet analysis plus a simple macro code, can be used by students to understand how these systems work and how the parameters influence the reaction of the system to an external field. Importantly, in the step-by-step mode, each change of the system state, compared to the last step, becomes visible. The simple program can be developed further by several changes and additions, enabling the building of a tool which is capable of answering real physical questions in the broad field of magnetism as well as in other scientific areas in which similar hysteresis loops occur.
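
    A minimal sketch of the kind of step-by-step update such a spreadsheet model performs: a single rectangular hysteron whose state flips only when the external field crosses a coercive threshold, swept over one field cycle. The parameters are illustrative and not taken from the paper.

```python
import math

def hysteresis_loop(h_max=2.0, h_c=1.0, m_s=1.0, steps=200):
    """Trace one field cycle for a single rectangular hysteron.

    The magnetisation stays at +/- m_s and flips only when the applied field
    exceeds +h_c or drops below -h_c (the coercive field).
    """
    m = -m_s                       # start saturated in the negative direction
    loop = []
    for i in range(steps):
        h = h_max * math.sin(2 * math.pi * i / steps)   # one full field cycle
        if h > h_c:
            m = m_s
        elif h < -h_c:
            m = -m_s
        loop.append((h, m))        # otherwise m keeps its previous value
    return loop

for h, m in hysteresis_loop(steps=8):
    print(f"H = {h:+.2f}  M = {m:+.1f}")
```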

  7. Theoretical structure of adolescent alienation: a multigroup confirmatory factor analysis.

    PubMed

    Lacourse, Eric; Villeneuve, Martine; Claes, Michel

    2003-01-01

    This study examined the construct validity of adolescent alienation using second-order confirmatory factor analysis of the five dimensions conceptualized by Seeman (1959). Analysis was based on data from 275 high school students aged 14 to 18. The hypothesized multidimensionality of the construct was confirmed for both boys and girls using a second-order factor labeled alienation. Central dimensions of alienation as a latent construct were self-estrangement and powerlessness. Social isolation, meaninglessness, and especially normlessness were poorly explained by the second-order factor, suggesting that these dimensions entail enough specificity to be considered separately. A different theoretical model relating these dimensions is suggested and discussed.

  8. Simple analysis and design of annular ring microstrip antennas

    NASA Astrophysics Data System (ADS)

    El-Khamy, S. E.; El-Awadi, R. M.; El-Sharrawy, E.-B. A.

    1986-06-01

    A simple analysis of thin annular-ring microstrip antennas (AR-MSA), along with a design technique that yields the optimum ring dimensions maximizing the radiation efficiency and the bandwidth, is presented in this paper. Using the cavity model, exact closed form solutions for the radiation fields are derived. The antenna field distribution, resonance dimensions, radiation patterns, directivity, radiation conductance, quality factor and bandwidth are investigated for the different TMnm modes. AR-MSAs operated at the high-order TMn2 modes are found to have better radiation properties and broader bandwidths than the corresponding disk MSAs. A design table of the optimum ring dimensions for different types of dielectric substrate material is also given in the paper.

  9. Python for information theoretic analysis of neural data.

    PubMed

    Ince, Robin A A; Petersen, Rasmus S; Swan, Daniel C; Panzeri, Stefano

    2009-01-01

    Information theory, the mathematical theory of communication in the presence of noise, is playing an increasingly important role in modern quantitative neuroscience. It makes it possible to treat neural systems as stochastic communication channels and gain valuable, quantitative insights into their sensory coding function. These techniques provide results on how neurons encode stimuli in a way which is independent of any specific assumptions on which part of the neuronal response is signal and which is noise, and they can be usefully applied even to highly non-linear systems where traditional techniques fail. In this article, we describe our work and experiences using Python for information theoretic analysis. We outline some of the algorithmic, statistical and numerical challenges in the computation of information theoretic quantities from neural data. In particular, we consider the problems arising from limited sampling bias and from calculation of maximum entropy distributions in the presence of constraints representing the effects of different orders of interaction in the system. We explain how and why using Python has allowed us to significantly improve the speed and domain of applicability of the information theoretic algorithms, allowing analysis of data sets characterized by larger numbers of variables. We also discuss how our use of Python is facilitating integration with collaborative databases and centralised computational resources.
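
    As a small illustration of the kind of computation described, and of why limited-sampling bias matters, the sketch below computes the naive plug-in mutual information between a discrete stimulus and a binned neural response from paired observations. It is a generic estimator, not the toolbox discussed in the article.

```python
import numpy as np

def plugin_mutual_information(stimuli, responses):
    """Naive plug-in estimate of I(S;R) in bits from paired discrete samples.

    With few trials this estimator is biased upward, which is one of the
    sampling problems the article addresses.
    """
    s_vals, s_idx = np.unique(stimuli, return_inverse=True)
    r_vals, r_idx = np.unique(responses, return_inverse=True)
    joint = np.zeros((len(s_vals), len(r_vals)))
    for i, j in zip(s_idx, r_idx):
        joint[i, j] += 1
    joint /= joint.sum()
    ps, pr = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / np.outer(ps, pr)[nz])))

# Toy data: spike counts that weakly depend on a binary stimulus.
rng = np.random.default_rng(0)
stim = rng.integers(0, 2, 1000)
resp = rng.poisson(2 + 3 * stim)
print(f"I(S;R) ~ {plugin_mutual_information(stim, resp):.3f} bits")
```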

  10. Cost analysis and outcomes of simple elbow dislocations

    PubMed Central

    Panteli, Michalis; Pountos, Ippokratis; Kanakaris, Nikolaos K; Tosounidis, Theodoros H; Giannoudis, Peter V

    2015-01-01

    AIM: To evaluate the management, clinical outcome and cost implications of three different treatment regimes for simple elbow dislocations. METHODS: Following institutional board approval, we performed a retrospective review of all consecutive patients treated for simple elbow dislocations in a Level I trauma centre between January 2008 and December 2010. Based on the length of elbow immobilisation (LOI), patients were divided in three groups (Group I, < 2 wk; Group II, 2-3 wk; and Group III, > 3 wk). Outcome was considered satisfactory when a patient could achieve a pain-free range of motion ≥ 100° (from 30° to 130°). The associated direct medical costs for the treatment of each patient were then calculated and analysed. RESULTS: We identified 80 patients who met the inclusion criteria. Due to loss to follow up, 13 patients were excluded from further analysis, leaving 67 patients for the final analysis. The mean LOI was 14 d (median 15 d; range 3-43 d) with a mean duration of hospital engagement of 67 d (median 57 d; range 10-351 d). Group III (prolonged immobilisation) had a statistically significant worse outcome in comparison to Group I and II (P = 0.04 and P = 0.01 respectively); however, there was no significant difference in the outcome between groups I and II (P = 0.30). No statistically significant difference in the direct medical costs between the groups was identified. CONCLUSION: The length of elbow immobilization doesn’t influence the medical cost; however immobilisation longer than three weeks is associated with persistent stiffness and a less satisfactory clinical outcome. PMID:26301180

  11. A Theoretical Analysis of Why Hybrid Ensembles Work

    PubMed Central

    2017-01-01

    Inspired by the group decision making process, ensembles or combinations of classifiers have been found favorable in a wide variety of application domains. Some researchers propose to use a mixture of two different types of classification algorithms to create a hybrid ensemble. Why does such an ensemble work? The question remains. Following the concept of diversity, which is one of the fundamental elements of the success of ensembles, we conduct a theoretical analysis of why hybrid ensembles work, connecting the use of different algorithms to accuracy gain. We also conduct experiments on the classification performance of hybrid ensembles of classifiers created by decision tree and naïve Bayes classification algorithms, each of which is a top data mining algorithm and often used to create non-hybrid ensembles. Therefore, through this paper, we provide a complement to the theoretical foundation of creating and using hybrid ensembles. PMID:28255296
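
    As a concrete, hedged illustration of the kind of hybrid ensemble studied, the sketch below combines a decision tree and a naive Bayes classifier in a soft-voting ensemble with scikit-learn and compares it against each base learner by cross-validation on synthetic data; it is not the authors' experimental setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification task (illustrative only).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

tree = DecisionTreeClassifier(max_depth=5, random_state=0)
nb = GaussianNB()
hybrid = VotingClassifier([("tree", tree), ("nb", nb)], voting="soft")

for name, clf in [("tree", tree), ("naive Bayes", nb), ("hybrid", hybrid)]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name:>12}: accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```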

  12. Theoretical and experimental analysis of modern zoom lens design

    NASA Astrophysics Data System (ADS)

    Wang, Xiangyang; Liu, Weilin

    2017-02-01

    The stability of aberrations and the correction of the image in a zoom lens system must be maintained throughout the zooming process. Our work presents a detailed theoretical and experimental analysis of zoom optical systems with multiple moving groups. We propose methods to determine the basic parameters of such an optical system: the focal lengths of each element of the objective lens and their mutual axial separations. Two different image-stability equations and a cam-curve design method are introduced to calculate these basic parameters. This type of optical system is widespread in practice, mainly in photographic lenses and in surveying instruments (theodolites, leveling instruments, etc.). Furthermore, a detailed analysis of the aberration properties of such optical systems is performed, and methods for measuring the focal lengths of individual elements and their mutual distances without disassembling the investigated optical system are presented. Finally, based on the theoretical and experimental analysis, a zoom optical system with an effective focal length of 27-220 mm has been designed; the first element of the system is fixed, while the other groups move during zooming to provide a continuously variable effective focal length (EFL). Using the optimization capabilities of the optical design software CODE V, we evaluate the imaging quality, including the modulation transfer function (MTF).
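
    A small sketch of the paraxial bookkeeping behind such a design: for two thin lens groups in air, the combined effective focal length follows 1/f = 1/f1 + 1/f2 - d/(f1*f2), so changing the separation d changes the EFL continuously. The focal lengths below are invented and are not the parameters of the 27-220 mm design.

```python
def combined_efl(f1, f2, d):
    """Effective focal length of two thin lens groups separated by d (same units)."""
    phi = 1.0 / f1 + 1.0 / f2 - d / (f1 * f2)   # combined optical power
    return 1.0 / phi

# Hypothetical groups: f1 = 100 mm fixed front group, f2 = 50 mm moving group.
for d in (20.0, 60.0, 100.0):
    print(f"separation {d:5.1f} mm -> EFL {combined_efl(100.0, 50.0, d):7.1f} mm")
```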

  13. Simple Rules, Not So Simple: The Use of International Ovarian Tumor Analysis (IOTA) Terminology and Simple Rules in Inexperienced Hands in a Prospective Multicenter Cohort Study.

    PubMed

    Meys, Evelyne; Rutten, Iris; Kruitwagen, Roy; Slangen, Brigitte; Lambrechts, Sandrina; Mertens, Helen; Nolting, Ernst; Boskamp, Dieuwke; Van Gorp, Toon

    2017-08-23

    Objectives To analyze how well untrained examiners - without experience in the use of International Ovarian Tumor Analysis (IOTA) terminology or simple ultrasound-based rules (simple rules) - are able to apply IOTA terminology and simple rules and to assess the level of agreement between non-experts and an expert. Methods This prospective multicenter cohort study enrolled women with ovarian masses. Ultrasound was performed by non-expert examiners and an expert. Ultrasound features were recorded using IOTA nomenclature, and used for classifying the mass by simple rules. Interobserver agreement was evaluated with Fleiss' kappa and percentage agreement between observers. Results 50 consecutive women were included. We observed 46 discrepancies in the description of ovarian masses when non-experts utilized IOTA terminology. Tumor type was misclassified often (n = 22), resulting in poor interobserver agreement between the non-experts and the expert (kappa = 0.39, 95 %-CI 0.244 - 0.529, percentage of agreement = 52.0 %). Misinterpretation of simple rules by non-experts was observed 57 times, resulting in an erroneous diagnosis in 15 patients (30 %). The agreement for classifying the mass as benign, malignant or inconclusive by simple rules was only moderate between the non-experts and the expert (kappa = 0.50, 95 %-CI 0.300 - 0.704, percentage of agreement = 70.0 %). The level of agreement for all 10 simple rules features varied greatly (kappa index range: -0.08 - 0.74, percentage of agreement 66 - 94 %). Conclusion Although simple rules are useful to distinguish benign from malignant adnexal masses, they are not that simple for untrained examiners. Training with both IOTA terminology and simple rules is necessary before simple rules can be introduced into guidelines and daily clinical practice. © Georg Thieme Verlag KG Stuttgart · New York.
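
    For readers unfamiliar with the agreement statistics used, the sketch below computes percentage agreement and Cohen's kappa for a single non-expert/expert rater pair on a three-class outcome (benign, malignant, inconclusive). The confusion counts are invented, and the study itself reports Fleiss' kappa across raters.

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa for a square confusion matrix of two raters' labels."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    p_observed = np.trace(confusion) / total
    p_expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total**2
    return (p_observed - p_expected) / (1.0 - p_expected)

# Rows: expert classification, columns: non-expert (benign, malignant, inconclusive).
counts = [[20, 3, 4],
          [2, 10, 3],
          [3, 2, 3]]
agreement = np.trace(np.asarray(counts)) / np.sum(counts)
print(f"percentage agreement = {agreement:.2f}, kappa = {cohens_kappa(counts):.2f}")
```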

  14. Theoretical analysis of wave impact forces on platform deck structures

    SciTech Connect

    Kaplan, P.; Murray, J.J.; Yu, W.C.

    1995-12-31

    A description is given of the theoretical analysis procedures used to predict the wave impact forces acting on offshore platform deck structures in large incident waves. Both vertical and horizontal plane forces are considered, in terms of the different type elements that make up such structures and the type of hydrodynamic force mathematical models used to represent the basic forces. Effects of wave surface nonlinearity (including kinematics), deck material porosity, and velocity blockage and shielding are considered in the analysis, which also includes a physical explanation of various observed phenomena. Results of comparison and correlation with experimental model test data are presented, including description of procedures used in data analysis to eliminate extraneous dynamic effects that often contaminate such data. The influence of wave heading angle relative to different structural elements (and overall structures) is also described, including both analytical representations and physical interpretations.

  15. Theoretical analysis of quantum ghost imaging through turbulence

    SciTech Connect

    Chan, Kam Wai Clifford; Simon, D. S.; Sergienko, A. V.; Hardy, Nicholas D.; Shapiro, Jeffrey H.; Dixon, P. Ben; Howland, Gregory A.; Howell, John C.; Eberly, Joseph H.; O'Sullivan, Malcolm N.; Rodenburg, Brandon; Boyd, Robert W.

    2011-10-15

    Atmospheric turbulence generally affects the resolution and visibility of an image in long-distance imaging. In a recent quantum ghost imaging experiment [P. B. Dixon et al., Phys. Rev. A 83, 051803 (2011)], it was found that the effect of the turbulence can nevertheless be mitigated under certain conditions. This paper gives a detailed theoretical analysis to the setup and results reported in the experiment. Entangled photons with a finite correlation area and a turbulence model beyond the phase screen approximation are considered.

  16. Theoretical analysis of transcription process with polymerase stalling

    NASA Astrophysics Data System (ADS)

    Li, Jingwei; Zhang, Yunxin

    2015-05-01

    Experimental evidence shows that in gene transcription RNA polymerase has the possibility to be stalled at a certain position of the transcription template. This may be due to the template damage or protein barriers. Once stalled, polymerase may backtrack along the template to the previous nucleotide to wait for the repair of the damaged site, simply bypass the barrier or damaged site and consequently synthesize an incorrect messenger RNA, or degrade and detach from the template. Thus, the effective transcription rate (the rate to synthesize correct product mRNA) and the transcription effectiveness (the ratio of the effective transcription rate to the effective transcription initiation rate) are both influenced by polymerase stalling events. So far, no theoretical model has been given to discuss the gene transcription process including polymerase stalling. In this study, based on the totally asymmetric simple exclusion process, the transcription process including polymerase stalling is analyzed theoretically. The dependence of the effective transcription rate, effective transcription initiation rate, and transcription effectiveness on the transcription initiation rate, termination rate, as well as the backtracking rate, bypass rate, and detachment (degradation) rate when stalling, are discussed in detail. The results showed that backtracking restart after polymerase stalling is an ideal mechanism to increase both the effective transcription rate and the transcription effectiveness. Without backtracking, detachment of stalled polymerase can also help to increase the effective transcription rate and transcription effectiveness. Generally, the increase of the bypass rate of the stalled polymerase will lead to the decrease of the effective transcription rate and transcription effectiveness. However, when both detachment rate and backtracking rate of the stalled polymerase vanish, the effective transcription rate may also be increased by the bypass mechanism.
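
    To make the modelling framework concrete, the following sketch runs a random-sequential Monte Carlo simulation of a TASEP-like lattice with a single stall site, at which a particle can either bypass at a reduced rate or detach, and reports the effective exit flux. The rates and lattice size are illustrative, not the parameter values analysed in the paper.

```python
import random

def tasep_flux(length=100, stall_site=50, alpha=0.3, beta=0.8,
               bypass=0.2, detach=0.05, steps=1_000_000, seed=7):
    """Effective exit flux (product-release events per Monte Carlo update)."""
    rng = random.Random(seed)
    lattice = [0] * length
    exits = 0
    for _ in range(steps):
        i = rng.randrange(-1, length)            # -1 encodes an injection attempt
        if i == -1:
            if lattice[0] == 0 and rng.random() < alpha:
                lattice[0] = 1                   # initiation at the first site
        elif i == length - 1:
            if lattice[i] == 1 and rng.random() < beta:
                lattice[i] = 0                   # termination / product release
                exits += 1
        elif lattice[i] == 1:
            if i == stall_site:
                r = rng.random()
                if r < detach:
                    lattice[i] = 0               # stalled particle detaches
                elif r < detach + bypass and lattice[i + 1] == 0:
                    lattice[i], lattice[i + 1] = 0, 1   # slow bypass of the obstacle
            elif lattice[i + 1] == 0:
                lattice[i], lattice[i + 1] = 0, 1       # normal forward hop
    return exits / steps

print(f"effective flux ~ {tasep_flux():.4f}")
```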

  17. Theoretical analysis of transcription process with polymerase stalling.

    PubMed

    Li, Jingwei; Zhang, Yunxin

    2015-05-01

    Experimental evidence shows that in gene transcription RNA polymerase has the possibility to be stalled at a certain position of the transcription template. This may be due to the template damage or protein barriers. Once stalled, polymerase may backtrack along the template to the previous nucleotide to wait for the repair of the damaged site, simply bypass the barrier or damaged site and consequently synthesize an incorrect messenger RNA, or degrade and detach from the template. Thus, the effective transcription rate (the rate to synthesize correct product mRNA) and the transcription effectiveness (the ratio of the effective transcription rate to the effective transcription initiation rate) are both influenced by polymerase stalling events. So far, no theoretical model has been given to discuss the gene transcription process including polymerase stalling. In this study, based on the totally asymmetric simple exclusion process, the transcription process including polymerase stalling is analyzed theoretically. The dependence of the effective transcription rate, effective transcription initiation rate, and transcription effectiveness on the transcription initiation rate, termination rate, as well as the backtracking rate, bypass rate, and detachment (degradation) rate when stalling, are discussed in detail. The results showed that backtracking restart after polymerase stalling is an ideal mechanism to increase both the effective transcription rate and the transcription effectiveness. Without backtracking, detachment of stalled polymerase can also help to increase the effective transcription rate and transcription effectiveness. Generally, the increase of the bypass rate of the stalled polymerase will lead to the decrease of the effective transcription rate and transcription effectiveness. However, when both detachment rate and backtracking rate of the stalled polymerase vanish, the effective transcription rate may also be increased by the bypass mechanism.

  18. Isolation of exosomes by differential centrifugation: Theoretical analysis of a commonly used protocol

    NASA Astrophysics Data System (ADS)

    Livshts, Mikhail A.; Khomyakova, Elena; Evtushenko, Evgeniy G.; Lazarev, Vassili N.; Kulemin, Nikolay A.; Semina, Svetlana E.; Generozov, Edward V.; Govorun, Vadim M.

    2015-11-01

    Exosomes, small (40-100 nm) extracellular membranous vesicles, attract enormous research interest because they are carriers of disease markers and a prospective delivery system for therapeutic agents. Differential centrifugation, the prevalent method of exosome isolation, frequently produces dissimilar and improper results because of the faulty practice of using a common centrifugation protocol with different rotors. Moreover, as recommended by suppliers, adjusting the centrifugation duration according to rotor K-factors does not work for “fixed-angle” rotors. For both types of rotors - “swinging bucket” and “fixed-angle” - we express the theoretically expected proportion of pelleted vesicles of a given size and the “cut-off” size of completely sedimented vesicles as dependent on the centrifugation force and duration and the sedimentation path-lengths. The proper centrifugation conditions can be selected using relatively simple theoretical estimates of the “cut-off” sizes of vesicles. Experimental verification on exosomes isolated from HT29 cell culture supernatant confirmed the main theoretical statements. Measured by the nanoparticle tracking analysis (NTA) technique, the concentration and size distribution of the vesicles after centrifugation agree with those theoretically expected. To simplify this “cut-off”-size-based adjustment of centrifugation protocol for any rotor, we developed a web-calculator.
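
    A minimal sketch of the Stokes-law estimate that underlies such cut-off-size reasoning for a swinging-bucket rotor: integrating the sedimentation velocity across the liquid column gives the smallest vesicle diameter that is completely pelleted in a spin of duration t. The rotor geometry, density difference and run parameters below are assumed values, not those of the paper or its web-calculator.

```python
import math

def cutoff_diameter_nm(rpm, t_s, r_min_m, r_max_m, d_rho=150.0, eta=1.0e-3):
    """Smallest vesicle diameter (nm) fully pelleted in a swinging-bucket spin.

    Stokes-law sedimentation in a centrifugal field integrated from r_min to
    r_max: d = sqrt(18*eta*ln(r_max/r_min) / (d_rho * omega**2 * t)).
    d_rho : vesicle-buffer density difference (kg/m^3); eta : viscosity (Pa s).
    """
    omega = 2.0 * math.pi * rpm / 60.0
    d = math.sqrt(18.0 * eta * math.log(r_max_m / r_min_m)
                  / (d_rho * omega**2 * t_s))
    return d * 1e9

# Assumed example: 30,000 rpm for 2 h, sample column from 5 cm to 8 cm radius.
print(f"cut-off diameter ~ {cutoff_diameter_nm(30_000, 7200, 0.05, 0.08):.0f} nm")
```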

  19. Isolation of exosomes by differential centrifugation: Theoretical analysis of a commonly used protocol

    PubMed Central

    Livshts, Mikhail A.; Khomyakova, Elena; Evtushenko, Evgeniy G.; Lazarev, Vassili N.; Kulemin, Nikolay A.; Semina, Svetlana E.; Generozov, Edward V.; Govorun, Vadim M.

    2015-01-01

    Exosomes, small (40–100 nm) extracellular membranous vesicles, attract enormous research interest because they are carriers of disease markers and a prospective delivery system for therapeutic agents. Differential centrifugation, the prevalent method of exosome isolation, frequently produces dissimilar and improper results because of the faulty practice of using a common centrifugation protocol with different rotors. Moreover, as recommended by suppliers, adjusting the centrifugation duration according to rotor K-factors does not work for “fixed-angle” rotors. For both types of rotors – “swinging bucket” and “fixed-angle” – we express the theoretically expected proportion of pelleted vesicles of a given size and the “cut-off” size of completely sedimented vesicles as dependent on the centrifugation force and duration and the sedimentation path-lengths. The proper centrifugation conditions can be selected using relatively simple theoretical estimates of the “cut-off” sizes of vesicles. Experimental verification on exosomes isolated from HT29 cell culture supernatant confirmed the main theoretical statements. Measured by the nanoparticle tracking analysis (NTA) technique, the concentration and size distribution of the vesicles after centrifugation agree with those theoretically expected. To simplify this “cut-off”-size-based adjustment of centrifugation protocol for any rotor, we developed a web-calculator. PMID:26616523

  20. A simple and inexpensive device for biofilm analysis.

    PubMed

    Almshawit, Hala; Macreadie, Ian; Grando, Danilla

    2014-03-01

    The Calgary Biofilm Device (CBD) has been described as a technology for the rapid and reproducible assay of biofilm susceptibilities to antibiotics. In this study a simple and inexpensive alternative to the CBD was developed from polypropylene (PP) microcentrifuge tubes and pipette tip boxes. The utility of the device was demonstrated using Candida glabrata, a yeast that can develop antimicrobial-resistant biofilm communities. Biofilms of C. glabrata were formed on the outside surface of microcentrifuge tubes and examined by quantitative analysis and scanning electron microscopy. Growth of three C. glabrata strains, including a clinical isolate, demonstrated that biofilms could be formed on the microcentrifuge tubes. After 24 h incubation the three C. glabrata strains produced biofilms that were recovered into cell suspension and quantified. The method was found to produce uniform and reproducible results with no significant differences between biofilms formed on PP tubes incubated in various compartments of the device. In addition, the difference between maximum and minimum counts for each strain was comparable to those which have been reported for the CBD device. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Global analysis of a "simple" proteome: Methanococcus jannaschii.

    SciTech Connect

    Giometti, C. S.; Reich, C.; Tollaksen, S.; Babnigg, G.; Lim, H.; Zhu, W.; Olsen, G.; Biosciences Division; Univ. of Illinois; Scripps Inst.

    2002-12-25

    The completed genome of Methanococcus jannaschii, including the main chromosome and two extra-chromosomal elements, predicts a proteome comprised of 1783 proteins. How many of those proteins are expressed at any given time and the relative abundance of the expressed proteins, however, cannot be predicted solely from the genome sequence. Two-dimensional gel electrophoresis coupled with peptide mass spectrometry is being used to identify the proteins expressed by M. jannaschii cells grown under different conditions as part of an effort to correlate protein expression with regulatory mechanisms. Here we describe the identification of 170 of the most abundant proteins found in total lysates of M. jannaschii grown under optimal fermentation conditions. To optimize the number of proteins detected, two different protein specific stains (Coomassie Blue R250 or silver nitrate) and two different first dimension separation methods (isoelectric focusing or nonequilibrium pH gradient electrophoresis) were used. Thirty-two percent of the proteins identified are annotated as hypothetical (21% conserved hypothetical and 11% hypothetical), 21% are enzymes involved in energy metabolism, 12% are proteins required for protein synthesis, and the remainder include proteins necessary for intermediary metabolism, cell division, and cell structure. Evidence of post-translational modification of numerous M. jannaschii proteins has been found, as well as indications of incomplete dissociation of protein-protein complexes. These results demonstrate the complexity of proteome analysis even when dealing with a relatively simple genome.

  2. Global analysis of a 'simple' proteome: Methanococcus jannaschii.

    SciTech Connect

    Giometti, C. S.; Reich, C.; Tollaksen, S.; Babnigg, G.; Lim, H.; Zhu, W.; Yates, J., III; Olsen, G.; Biosciences Division; Univ. of Illinois; The Scripps Inst.

    2002-12-25

    The completed genome of Methanococcus jannaschii, including the main chromosome and two extra-chromosomal elements, predicts a proteome comprised of 1783 proteins. How many of those proteins are expressed at any given time and the relative abundance of the expressed proteins, however, cannot be predicted solely from the genome sequence. Two-dimensional gel electrophoresis coupled with peptide mass spectrometry is being used to identify the proteins expressed by M. jannaschii cells grown under different conditions as part of an effort to correlate protein expression with regulatory mechanisms. Here we describe the identification of 170 of the most abundant proteins found in total lysates of M. jannaschii grown under optimal fermentation conditions. To optimize the number of proteins detected, two different protein specific stains (Coomassie Blue R250 or silver nitrate) and two different first dimension separation methods (isoelectric focusing or nonequilibrium pH gradient electrophoresis) were used. Thirty-two percent of the proteins identified are annotated as hypothetical (21% conserved hypothetical and 11% hypothetical), 21% are enzymes involved in energy metabolism, 12% are proteins required for protein synthesis, and the remainder include proteins necessary for intermediary metabolism, cell division, and cell structure. Evidence of post-translational modification of numerous M. jannaschii proteins has been found, as well as indications of incomplete dissociation of protein-protein complexes. These results demonstrate the complexity of proteome analysis even when dealing with a relatively simple genome.

  3. A P-value model for theoretical power analysis and its applications in multiple testing procedures.

    PubMed

    Zhang, Fengqing; Gou, Jiangtao

    2016-10-10

    Power analysis is a critical aspect of the design of experiments to detect an effect of a given size. When multiple hypotheses are tested simultaneously, multiplicity adjustments to p-values should be taken into account in power analysis. There are a limited number of studies on power analysis in multiple testing procedures. For some methods, the theoretical analysis is difficult and extensive numerical simulations are often needed, while other methods oversimplify the information under the alternative hypothesis. To this end, this paper aims to develop a new statistical model for power analysis in multiple testing procedures. We propose a step-function-based p-value model under the alternative hypothesis, which is simple enough to perform power analysis without simulations, but not too simple to lose the information from the alternative hypothesis. The first step is to transform distributions of different test statistics (e.g., t, chi-square or F) to distributions of corresponding p-values. We then use a step function to approximate each of the p-value's distributions by matching the mean and variance. Lastly, the step-function-based p-value model can be used for theoretical power analysis. The proposed model is applied to problems in multiple testing procedures. We first show how the most powerful critical constants can be chosen using the step-function-based p-value model. Our model is then applied to the field of multiple testing procedures to explain the assumption of monotonicity of the critical constants. Lastly, we apply our model to a behavioral weight loss and maintenance study to select the optimal critical constants. The proposed model is easy to implement and preserves the information from the alternative hypothesis.

  4. Theoretical analysis of dynamic processes for interacting molecular motors

    NASA Astrophysics Data System (ADS)

    Teimouri, Hamid; Kolomeisky, Anatoly B.; Mehrabiani, Kareem

    2015-02-01

    Biological transport is supported by the collective dynamics of enzymatic molecules that are called motor proteins or molecular motors. Experiments suggest that motor proteins interact locally via short-range potentials. We investigate the fundamental role of these interactions by carrying out an analysis of a new class of totally asymmetric exclusion processes, in which interactions are accounted for in a thermodynamically consistent fashion. This allows us to explicitly connect microscopic features of motor proteins with their collective dynamic properties. A theoretical analysis that combines various mean-field calculations and computer simulations suggests that the dynamic properties of molecular motors strongly depend on the interactions, and that the correlations are stronger for interacting motor proteins. Surprisingly, it is found that there is an optimal strength of interactions (weak repulsion) that leads to a maximal particle flux. It is also argued that molecular motor transport is more sensitive to attractive interactions. Applications of these results for kinesin motor proteins are discussed.

  5. Evolution Analysis of Simple Sequence Repeats in Plant Genome.

    PubMed

    Qin, Zhen; Wang, Yanping; Wang, Qingmei; Li, Aixian; Hou, Fuyun; Zhang, Liming

    2015-01-01

    Simple sequence repeats (SSRs) are widespread units in genome sequences and play many important roles in plants. In order to reveal the evolution of plant genomes, we investigated the evolutionary regularities of SSRs during the evolution of plant species and the plant kingdom by analysis of twelve sequenced plant genomes. First, in the twelve studied plant genomes, the main SSRs were those containing repeats of 1-3 nucleotide combinations. Second, in mononucleotide SSRs, the A/T percentage gradually increased along with the evolution of plants (except for P. patens). With the increase of SSR repeat number, the percentage of A/T in C. reinhardtii showed no significant change, while the percentage of A/T in terrestrial plant species gradually declined. Third, in dinucleotide SSRs, the percentage of AT/TA increased along with the evolution of the plant kingdom, and the repeat number increased in terrestrial plant species. This trend was more obvious in dicotyledons than in monocotyledons. The percentage of CG/GC showed the opposite pattern to that of AT/TA. Fourth, in trinucleotide SSRs, the percentages of combinations including two or three A/T were in a rising trend along with the evolution of the plant kingdom; meanwhile, with the increase of SSR repeat number, different species chose different combinations as dominant SSRs. SSRs in C. reinhardtii, P. patens, Z. mays and A. thaliana showed specific patterns related to their evolutionary position or to specific changes of their genome sequences. The results showed that SSRs not only follow a general pattern in the evolution of the plant kingdom, but are also associated with the evolution of the specific genome sequence. The study of the evolutionary regularities of SSRs provides new insights for the analysis of plant genome evolution.
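
    As a hedged illustration of how such SSR surveys are typically carried out computationally, the sketch below scans a DNA string for mononucleotide (at least 10 copies) and dinucleotide (at least 6 copies) repeats with regular expressions and tallies the motifs. The thresholds and the toy sequence are arbitrary, not the criteria used in the study.

```python
import re
from collections import Counter

def find_ssrs(seq, min_mono=10, min_di=6):
    """Count mononucleotide and dinucleotide SSR motifs in a DNA sequence."""
    seq = seq.upper()
    counts = Counter()
    for match in re.finditer(r"([ACGT])\1{%d,}" % (min_mono - 1), seq):
        counts[match.group(1)] += 1                  # e.g. poly-A runs
    for match in re.finditer(r"([ACGT]{2})\1{%d,}" % (min_di - 1), seq):
        if match.group(1)[0] != match.group(1)[1]:   # skip AA, TT, ... (mono runs)
            counts[match.group(1)] += 1              # e.g. AT or CG repeats
    return counts

toy = "GG" + "A" * 12 + "CC" + "AT" * 7 + "GG" + "CG" * 6 + "TTT"
print(find_ssrs(toy))
```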

  6. Inner strength--a theoretical analysis of salutogenic concepts.

    PubMed

    Lundman, Berit; Aléx, Lena; Jonsén, Elisabeth; Norberg, Astrid; Nygren, Björn; Santamäki Fischer, Regina; Strandberg, Gunilla

    2010-02-01

    Theoretical and empirical overlaps between the concepts of resilience, sense of coherence, hardiness, purpose in life, and self-transcendence have earlier been described as some kind of inner strength, but no studies have been found that focus on what attributes these concepts have in common. The objective of this study was to perform a theoretical analysis of the concepts of resilience, sense of coherence, hardiness, purpose in life, and self-transcendence, in order to identify their core dimensions in an attempt to get an overarching understanding of inner strength. PRINT METHOD: An analysis inspired by the procedure of meta-theory construction was performed. The main questions underlying the development of the concepts, the major paradigms and the most prominent assumptions, the critical attributes and the characteristics of the various concepts were identified. The analysis resulted in the identification of four core dimensions of inner strength and the understanding that inner strength relies on the interaction of these dimensions: connectedness, firmness, flexibility, and creativity. These dimensions were validated through comparison with the original descriptions of the concepts. An overarching understanding of inner strength is that it means both to stand steady, to be firm, with both feet on the ground and to be connected to; family, friends, society, nature and spiritual dimensions and to be able to transcend. Having inner strength is to be creative and stretchable, which is to believe in own possibilities to act and to make choices and influence life's trajectory in a perceived meaningful direction. Inner strength is to shoulder responsibility for oneself and others, to endure and deal with difficulties and adversities. This knowledge about inner strength will raise the awareness of the concept and, in turn, hopefully increase our potential to support people's inner strength. Copyright 2009 Elsevier Ltd. All rights reserved.

  7. Analysis of the theoretical bias in dark matter direct detection

    SciTech Connect

    Catena, Riccardo

    2014-09-01

    Fitting the model "A" to dark matter direct detection data, when the model that underlies the data is "B", introduces a theoretical bias in the fit. We perform a quantitative study of the theoretical bias in dark matter direct detection, with a focus on assumptions regarding the dark matter interactions and velocity distribution. We address this problem within the effective theory of isoscalar dark matter-nucleon interactions mediated by a heavy spin-1 or spin-0 particle. We analyze 24 benchmark points in the parameter space of the theory, using frequentist and Bayesian statistical methods. First, we simulate the data of future direct detection experiments assuming a momentum/velocity dependent dark matter-nucleon interaction, and an anisotropic dark matter velocity distribution. Then, we fit a constant scattering cross section, and an isotropic Maxwell-Boltzmann velocity distribution to the simulated data, thereby introducing a bias in the analysis. The best fit values of the dark matter particle mass differ from their benchmark values up to 2 standard deviations. The best fit values of the dark matter-nucleon coupling constant differ from their benchmark values up to several standard deviations. We conclude that common assumptions in dark matter direct detection are a source of potentially significant bias.

  8. Extracellular measurement of anisotropic bidomain myocardial conductivities. I. Theoretical analysis.

    PubMed

    Le Guyader, P; Trelles, F; Savard, P

    2001-10-01

    The passive electrical properties of cardiac tissue, such as the intracellular and interstitial conductivities along the longitudinal and transverse axes, have not been often measured because intracellular electrodes are usually needed for these measurements. In this paper, we present a theoretical analysis of two myocardial models developed to estimate these properties by analyzing potentials recorded with a pair of extracellular electrodes while injecting alternating current between another pair of electrodes. First, the cardiac tissue is represented by a standard bidomain model which includes a membrane capacitance; second, this model is modified by adding an intracellular capacitance representing the intercalated disks. Numerical solutions are computed with a fast Fourier transform algorithm without constraining the anisotropy ratios of the interstitial and intracellular domains. We systematically investigate the effects of changes in the bidomain parameters on the voltage-to-current ratio curves. We also demonstrate how the bidomain parameters can be theoretically estimated by fitting, with a modified Shor's r algorithm, the simulated potentials along the longitudinal and transverse axes for different frequencies between 10 and 10,000 Hz. An important finding is that the interelectrode distance must be similar to the myocardial space constant so as to obtain frequency dependent measurements.

  9. Theoretical analysis of hot electron dynamics in nanorods

    PubMed Central

    Kumarasinghe, Chathurangi S.; Premaratne, Malin; Agrawal, Govind P.

    2015-01-01

    Localised surface plasmons create a non-equilibrium high-energy electron gas in nanostructures that can be injected into other media in energy harvesting applications. Here, we derive the rate of this localised-surface-plasmon mediated generation of hot electrons in nanorods and the rate of injecting them into other media by considering quantum mechanical motion of the electron gas. Specifically, we use the single-electron wave function of a particle in a cylindrical potential well and the electric field enhancement factor of an elongated ellipsoid to derive the energy distribution of electrons after plasmon excitation. We compare the performance of nanorods with equivolume nanoparticles of other shapes such as nanospheres and nanopallets and report that nanorods exhibit significantly better performance over a broad spectrum. We present a comprehensive theoretical analysis of how different parameters contribute to efficiency of hot-electron harvesting in nanorods and reveal that increasing the aspect ratio can increase the hot-electron generation and injection, but the volume shows an inverse dependency when efficiency per unit volume is considered. Further, the electron thermalisation time shows much less influence on the injection rate. Our derivations and results provide the much needed theoretical insight for optimization of hot-electron harvesting process in highly adaptable metallic nanorods. PMID:26202823

  10. Tetrad Analysis: A Practical Demonstration Using Simple Models.

    ERIC Educational Resources Information Center

    Gow, Mary M.; Nicholl, Desmond S. T.

    1988-01-01

    Uses simple models to illustrate the principles of this genetic method of mapping gene loci. Stresses that this system enables a practical approach to be used with students who experience difficulty in understanding the concepts involved. (CW)

  11. Simple models for quorum sensing: Nonlinear dynamical analysis

    NASA Astrophysics Data System (ADS)

    Chiang, Wei-Yin; Li, Yue-Xian; Lai, Pik-Yin

    2011-10-01

    Quorum sensing refers to the change in the cooperative behavior of a collection of elements in response to the change in their population size or density. This behavior can be observed in chemical and biological systems. These elements or cells are coupled via chemicals in the surrounding environment. Here we focus on the change of dynamical behavior, in particular from quiescent to oscillatory, as the cell population changes. For instance, the silent behavior of the elements can become oscillatory as the system concentration or population increases. In this work, two simple models are constructed that can produce the essential representative properties in quorum sensing. The first is an excitable or oscillatory phase model, which is probably the simplest model one can construct to describe quorum sensing. Using the mean-field approximation, the parameter regime for quorum sensing behavior can be identified, and analytical results for the detailed dynamical properties, including the phase diagrams, are obtained and verified numerically. The second model consists of FitzHugh-Nagumo elements coupled to the signaling chemicals in the environment. Nonlinear dynamical analysis of this mean-field model exhibits rich dynamical behaviors, such as infinite period bifurcation, supercritical Hopf, fold bifurcation, and subcritical Hopf bifurcations as the population parameter changes for different coupling strengths. Analytical result is obtained for the Hopf bifurcation phase boundary. Furthermore, two elements coupled via the environment and their synchronization behavior for these two models are also investigated. For both models, it is found that the onset of oscillations is accompanied by the synchronized dynamics of the two elements. Possible applications and extension of these models are also discussed.

  12. First- and second-order polarizabilities of simple merocyanines. An experimental and theoretical reassessment of the two-level model.

    PubMed

    Momicchioli, Fabio; Ponterini, Glauco; Vanossi, Davide

    2008-11-20

    Taking four merocyanines [(CH3)2N-(CH=CH)n-C(CH3)O; n = 1-4] (Mc1-4) as test D-A systems, we performed a close experimental and theoretical examination of the two-level model with reference to its ability to provide correct predictions of both the absolute values and the dependence on conjugation path length of first- and second-order molecular polarizabilities. By ¹H NMR spectroscopy, merocyanines Mc1-4 were found to be approximately 1:1 mixtures of two planar conformers with cis and trans arrangements of the C(CH3)O electron-acceptor group and an all-trans structure of the polyene-like fragment. The degree of bond length alternation (BLA) in the -(CH=CH)n- fragment was quantified by extensive full geometry optimizations at both the semiempirical and ab initio levels. DFT (6-31G**/B3LYP) optimized geometries were considered to be most reliable and were used for calculations of the excited-state properties. The applicability of the two-level model, reducing the general sum-over-states (SOS) expansion to only one term involving the ground state (g) and the lowest-lying ¹(ππ*) CT state (e), was checked by analysis of fluorescence and near-UV absorption spectra. Measurements of the basic two-level model quantities (Ege, μge and Δμeg), by which the dominant components of the α and β tensors are expressed (αXX, βXXX, with X the long molecular axis), were designed to give approximate free-molecule values. In particular, an adjustment of the solvatochromic method for the determination of Δμeg is proposed, based on accurate measurements of absorption spectral shifts in n-hexane/diethyl ether mixtures with small diethyl ether volume fractions. Such an approach led to Mc1-4 βXXX values matching well, in both magnitude and n-dependence, with EFISH data reported in the literature for similar merocyanines. For the fluorescent Mc4, the results were qualitatively well reproduced by an approach which combines absorption and fluorescence solvent

  13. A theoretical analysis of vacuum arc thruster performance

    NASA Technical Reports Server (NTRS)

    Polk, James E.; Sekerak, Mike; Ziemer, John K.; Schein, Jochen; Qi, Niansheng; Binder, Robert; Anders, Andre

    2001-01-01

    In vacuum arc discharges the current is conducted through vapor evaporated from the cathode surface. In these devices very dense, highly ionized plasmas can be created from any metallic or conducting solid used as the cathode. This paper describes theoretical models of performance for several thruster configurations which use vacuum arc plasma sources. This analysis suggests that thrusters using vacuum arc sources can be operated efficiently with a range of propellant options that gives great flexibility in specific impulse. In addition, the efficiency of plasma production in these devices appears to be largely independent of scale because the metal vapor is ionized within a few microns of the cathode electron emission sites, so this approach is well-suited for micropropulsion.

  14. Game-theoretic equilibrium analysis applications to deregulated electricity markets

    NASA Astrophysics Data System (ADS)

    Joung, Manho

    This dissertation examines game-theoretic equilibrium analysis applications to deregulated electricity markets. In particular, three specific applications are discussed: analyzing the competitive effects of ownership of financial transmission rights, developing a dynamic game model considering the ramp rate constraints of generators, and analyzing strategic behavior in electricity capacity markets. In the financial transmission right application, an investigation is made of how generators' ownership of financial transmission rights may influence the effects of the transmission lines on competition. In the second application, the ramp rate constraints of generators are explicitly modeled using a dynamic game framework, and the equilibrium is characterized as the Markov perfect equilibrium. Finally, the strategic behavior of market participants in electricity capacity markets is analyzed and it is shown that the market participants may exaggerate their available capacity in a Nash equilibrium. It is also shown that the more conservative the independent system operator's capacity procurement, the higher the risk of exaggerated capacity offers.

  16. Theoretical analysis of tsunami generation by pyroclastic flows

    USGS Publications Warehouse

    Watts, P.; Waythomas, C.F.

    2003-01-01

    Pyroclastic flows are a common product of explosive volcanism and have the potential to initiate tsunamis whenever thick, dense flows encounter bodies of water. We evaluate the process of tsunami generation by pyroclastic flow by decomposing the pyroclastic flow into two components, the dense underflow portion, which we term the pyroclastic debris flow, and the plume, which includes the surge and coignimbrite ash cloud parts of the flow. We consider five possible wave generation mechanisms. These mechanisms consist of steam explosion, pyroclastic debris flow, plume pressure, plume shear, and pressure impulse wave generation. Our theoretical analysis of tsunami generation by these mechanisms provides an estimate of tsunami features such as a characteristic wave amplitude and wavelength. We find that in most situations, tsunami generation is dominated by the pyroclastic debris flow component of a pyroclastic flow. This work presents information sufficient to construct tsunami sources for an arbitrary pyroclastic flow interacting with most bodies of water. Copyright 2003 by the American Geophysical Union.

  17. Theoretical Analysis of Dynamic Processes for Interacting Molecular Motors.

    PubMed

    Teimouri, Hamid; Kolomeisky, Anatoly B; Mehrabiani, Kareem

    2015-02-13

    Biological transport is supported by collective dynamics of enzymatic molecules that are called motor proteins or molecular motors. Experiments suggest that motor proteins interact locally via short-range potentials. We investigate the fundamental role of these interactions by analyzing a new class of totally asymmetric exclusion processes where interactions are accounted for in a thermodynamically consistent fashion. This allows us to connect explicitly the microscopic features of motor proteins with their collective dynamic properties. Theoretical analysis that combines various mean-field calculations and computer simulations suggests that the dynamic properties of molecular motors strongly depend on interactions, and correlations are stronger for interacting motor proteins. Surprisingly, it is found that there is an optimal strength of interactions (weak repulsion) that leads to a maximal particle flux. It is also argued that molecular motor transport is more sensitive to attractive interactions. Applications of these results for kinesin motor proteins are discussed.
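
    The model class described above is a totally asymmetric simple exclusion process (TASEP) whose hopping rates are modified by nearest-neighbour interactions. The abstract does not give the exact rate prescription, so the sketch below is only a generic illustration: a periodic lattice with random-sequential updates and a Metropolis-like weighting of each hop by the change in nearest-neighbour interaction energy E (in units of kT). The lattice size, density and number of sweeps are placeholder values.

    ```python
    import numpy as np

    def tasep_with_interactions(L=200, density=0.3, E=-0.5, sweeps=4000, seed=0):
        """Monte Carlo sketch of a periodic TASEP with nearest-neighbour
        interactions (illustrative only, not the rates used in the paper)."""
        rng = np.random.default_rng(seed)
        occ = np.zeros(L, dtype=int)
        occ[rng.choice(L, int(density * L), replace=False)] = 1
        hops, measured_sweeps = 0, 0
        for sweep in range(sweeps):
            for _ in range(L):
                i = rng.integers(L)
                j = (i + 1) % L
                if occ[i] == 1 and occ[j] == 0:
                    left, right = occ[(i - 1) % L], occ[(j + 1) % L]
                    dE = E * (right - left)                     # change in bond count times E
                    if rng.random() < min(1.0, np.exp(-dE)):    # Metropolis-like acceptance (assumption)
                        occ[i], occ[j] = 0, 1
                        if sweep >= sweeps // 2:
                            hops += 1
            if sweep >= sweeps // 2:
                measured_sweeps += 1
        return hops / (L * measured_sweeps)                     # particle flux per site

    if __name__ == "__main__":
        for E in (-1.0, 0.0, 1.0):                              # illustrative interaction energies
            print(f"E = {E:+.1f}  flux = {tasep_with_interactions(E=E):.4f}")
    ```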

  18. Self-mixing grating interferometer: theoretical analysis and experimental observations

    NASA Astrophysics Data System (ADS)

    Guo, Dongmei; Wang, Ming; Hao, Hui

    2016-08-01

    By combining a self-mixing interferometer (SMI) and a grating interferometer (GI), a self-mixing grating interferometer (SMGI) is proposed in this paper. Self-mixing interference occurs when light emitted from a laser diode is diffracted by the double-diffraction system and re-enters the laser active cavity, thus generating a modulation of both the amplitude and the frequency of the lasing field. Theoretical analysis and experimental observations show that the SMGI has the same phase sensitivity as the conventional GI and that the direction of the phase movement can be obtained from the inclination of the interference signal. Compared with the traditional SMI, the phase change of the interference signal in the SMGI is independent of the laser wavelength, providing better immunity against environmental disturbances such as temperature, pressure, and humidity variations. Compared with the traditional GI, the SMGI provides a potential displacement sensor with directional discrimination and a quite compact configuration.

  19. Deep and Structured Robust Information Theoretic Learning for Image Analysis.

    PubMed

    Deng, Yue; Bao, Feng; Deng, Xuesong; Wang, Ruiping; Kong, Youyong; Dai, Qionghai

    2016-07-07

    This paper presents a robust information theoretic (RIT) model to reduce the uncertainties, i.e. missing and noisy labels, in general discriminative data representation tasks. The fundamental pursuit of our model is to simultaneously learn a transformation function and a discriminative classifier that maximize the mutual information of data and their labels in the latent space. In this general paradigm, we respectively discuss three types of the RIT implementations with linear subspace embedding, deep transformation and structured sparse learning. In practice, the RIT and deep RIT are exploited to solve the image categorization task whose performances will be verified on various benchmark datasets. The structured sparse RIT is further applied to a medical image analysis task for brain MRI segmentation that allows group-level feature selections on the brain tissues.

  20. Two-dimensional electronic spectroscopy using incoherent light: theoretical analysis.

    PubMed

    Turner, Daniel B; Howey, Dylan J; Sutor, Erika J; Hendrickson, Rebecca A; Gealy, M W; Ulness, Darin J

    2013-07-25

    Electronic energy transfer in photosynthesis occurs over a range of time scales and under a variety of intermolecular coupling conditions. Recent work has shown that electronic coupling between chromophores can lead to coherent oscillations in two-dimensional electronic spectroscopy measurements of pigment-protein complexes measured with femtosecond laser pulses. A persistent issue in the field is to reconcile the results of measurements performed using femtosecond laser pulses with physiological illumination conditions. Noisy-light spectroscopy can begin to address this question. In this work we present the theoretical analysis of incoherent two-dimensional electronic spectroscopy, I^(4) 2D ES. Simulations reveal diagonal peaks, cross peaks, and coherent oscillations similar to those observed in femtosecond two-dimensional electronic spectroscopy experiments. The results also expose fundamental differences between the femtosecond-pulse and noisy-light techniques; the differences lead to new challenges and new opportunities.

  1. Theoretical analysis of sound transmission loss through graphene sheets

    SciTech Connect

    Natsuki, Toshiaki; Ni, Qing-Qing

    2014-11-17

    We examine the potential of using graphene sheets (GSs) as sound insulating materials for nano-devices because of their small size and their superior electronic and mechanical properties. In this study, a theoretical analysis is proposed to predict the sound transmission loss through multi-layered GSs, which are formed as stacks of GSs bound together by van der Waals (vdW) forces between individual layers. The results show that resonant frequencies of the sound transmission loss occur in the multi-layered GSs and that their values are very high. Based on the present analytical solution, we predict the acoustic insulation property for various numbers of layers under both a normally incident wave and an acoustic field from a random-incidence source. The scheme could be useful in vibration absorption applications of nano-devices and materials.
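
    For context, the baseline against which such resonance effects are usually judged is the normal-incidence mass law for a thin limp layer of surface mass density m_s, with ρ0c0 the characteristic impedance of air and ω the angular frequency (a standard acoustics relation, not the multi-layer vdW-coupled result derived in the paper):

    ```latex
    \[ \mathrm{TL} \;=\; 10\,\log_{10}\!\left[\, 1 + \left( \frac{\omega\, m_s}{2\,\rho_0 c_0} \right)^{2} \right] \]
    ```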

  2. How Do Pseudocapacitors Store Energy? Theoretical Analysis and Experimental Illustration.

    PubMed

    Costentin, Cyrille; Porter, Thomas R; Savéant, Jean-Michel

    2017-03-15

    Batteries and electrochemical double layer charging capacitors are two classical means of storing electrical energy. These two types of charge storage can be unambiguously distinguished from one another by the shape and scan-rate dependence of their cyclic voltammetric (CV) current-potential responses. The former shows peak-shaped current-potential responses, proportional to the scan rate v or to v^1/2, whereas the latter displays a quasi-rectangular response proportional to the scan rate. In contrast, the notion of pseudocapacitance, popularized in the 1980s and 1990s for metal oxide systems, has been used to describe a charge storage process that is faradaic in nature yet displays capacitive CV signatures. It has been speculated that a quasi-rectangular CV response resembling that of a truly capacitive response arises from a series of faradaic redox couples with a distribution of potentials, yet this idea has never been justified theoretically. We address this problem by first showing theoretically that this distribution-of-potentials approach is closely equivalent to the more physically meaningful consideration of concentration-dependent activity coefficients resulting from interactions between reactants. The result of the ensuing analysis is that, in either case, the CV responses never yield a quasi-rectangular response proportional to v, identical to that of double layer charging. Instead, broadened peak-shaped responses are obtained. It follows that whenever a quasi-rectangular CV response proportional to the scan rate is observed, such reputed pseudocapacitive behaviors should in fact be ascribed to truly capacitive double layer charging. We compare these results qualitatively with pseudocapacitor reports taken from the literature, including the classic RuO2 and MnO2 examples, and we present a quantitative analysis with phosphate cobalt oxide films. Our conclusions do not invalidate the numerous experimental studies carried out under the pseudocapacitance banner but
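
    For reference, the scan-rate dependences contrasted in this abstract follow the standard electroanalytical relations below (textbook forms, not expressions taken from the paper): capacitive double-layer charging gives a current proportional to the scan rate v, a surface-confined (thin-film) faradaic couple gives a peak current proportional to v, and a diffusion-controlled couple follows the Randles-Sevcik v^1/2 scaling.

    ```latex
    % capacitive double-layer charging (quasi-rectangular CV)
    \[ i_{\mathrm{dl}} = C_{\mathrm{dl}}\, A\, v \]
    % surface-confined (thin-film) faradaic couple, surface coverage \Gamma
    \[ i_p = \frac{n^2 F^2}{4 R T}\, A\, \Gamma\, v \]
    % diffusion-controlled faradaic couple (Randles-Sevcik)
    \[ i_p = 0.4463\, n F A C \sqrt{\frac{n F D v}{R T}} \]
    ```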

  3. Theoretical performance analysis of multislice channelized Hotelling observers

    NASA Astrophysics Data System (ADS)

    Goossens, Bart; Platiša, Ljiljana; Philips, Wilfried

    2012-02-01

    Quality assessment of 3D medical images is becoming increasingly important because clinical practice is rapidly moving toward volumetric imaging. In a recent publication, three multi-slice channelized Hotelling observer (msCHO) models are presented for the task of detecting 3D signals in multi-slice images, where each multi-slice image is inspected in a so-called stack-browsing mode. The observer models are based on the assumption that humans observe multi-slice images in a simple two-stage process, and each of the models implements this principle in a different way. In this paper, we investigate the theoretical performance, in terms of the detection signal-to-noise ratio (SNR), of msCHO models for the task of detecting a separable signal in a Gaussian background with a separable covariance matrix. We find that, despite the differences in architecture of the three models, they all have the same asymptotic performance in this task (i.e., when the number of training images tends to infinity). On the other hand, when backgrounds with nonseparable covariance matrices are considered, the third model, msCHOc, is expected to perform slightly better than the other msCHO models (msCHOa and msCHOb), but only when sufficient training images are provided. These findings suggest that the choice between the msCHO models mainly depends on the experiment setup (e.g., the number of available training samples), while the relation to human observers depends on the particular choice of the "temporal" channels that the msCHO models use.
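
    The detection SNR referred to here is, in the standard channelized Hotelling formalism, computed from channel outputs v = T^T g, where T is the channel matrix and g the image data; the generic figure of merit (not the msCHO-specific derivation of the paper) reads:

    ```latex
    \[ \mathrm{SNR}^2_{\mathrm{CHO}} \;=\; \Delta\bar{v}^{\mathsf{T}}\, K_v^{-1}\, \Delta\bar{v},
    \qquad
    \Delta\bar{v} = T^{\mathsf{T}}\bigl(\bar{g}_{\mathrm{signal}} - \bar{g}_{\mathrm{background}}\bigr),
    \qquad
    K_v = T^{\mathsf{T}} K_g\, T \]
    ```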

  4. Motility of a model bristle-bot: A theoretical analysis

    NASA Astrophysics Data System (ADS)

    Cicconofri, Giancarlo; DeSimone, Antonio

    2015-11-01

    Bristle-bots are legged robots that can be easily made out of a toothbrush head and a small vibrating engine. Despite their simple appearance, the mechanism enabling them to propel themselves by exploiting friction with the substrate is far from trivial. Numerical experiments on a model bristle-bot have been able to reproduce such a mechanism revealing, in addition, the ability to switch direction of motion by varying the vibration frequency. This paper provides a detailed account of these phenomena through a fully analytical treatment of the model. The equations of motion are solved through an expansion in terms of a properly chosen small parameter. The convergence of the expansion is rigorously proven. In addition, the analysis delivers formulas for the average velocity of the robot and for the frequency at which the direction switch takes place. A quantitative description of the mechanism for the friction modulation underlying the motility of the bristle-bot is also provided.

  5. Infrared and theoretical calculations in 2-halocycloheptanones conformational analysis.

    PubMed

    Rozada, Thiago C; Gauze, Gisele F; Favaro, Denize C; Rittner, Roberto; Basso, Ernani A

    2012-08-01

    2-Halocycloheptanones (Halo = F, Cl, Br and I) were synthesized and their conformational analysis was performed using infrared spectroscopy data. The corresponding conformer geometries and energies were obtained by theoretical calculations at the B3LYP/aug-cc-pVDZ level of theory, in the isolated state and in solution. It was observed, by both approaches, that the conformational preferences were very sensitive to the solvent polarity, since an increase in polarity led to an increase in the population of the more polar conformer. An analysis of these conformational equilibria showed that they are also influenced by stereoelectronic effects, such as hyperconjugation and steric effects. These results were interpreted using natural bond orbital (NBO) analysis, which indicated that electronic delocalization into the π*(C=O) orbital is directly involved in the increased stability of conformers I and II. The effect of the halogen's period can also be noted, with changes in the conformational preferences and in the energies involved in the NBO interactions.

  6. Numerical Simulation of Isothermal Micro Flows by Lattice Boltzmann Method and Theoretical Analysis of the Diffuse Scattering Boundary Condition

    NASA Astrophysics Data System (ADS)

    Niu, X. D.; Shu, C.; Chew, Y. T.

    A Lattice Boltzmann model for simulating micro flows has been proposed by us recently (Europhysics Letters, 67(4), 600-606 (2004)). In this paper, we present a further theoretical and numerical validation of the model. In this regard, a theoretical analysis of the diffuse-scattering boundary condition for a simple flow is carried out, and the result is consistent with the conventional slip velocity boundary condition. Numerical validation is highlighted by simulating two-dimensional isothermal pressure-driven micro-channel flows and thin-film gas-bearing lubrication problems, and comparing the simulation results with available experimental data and analytical predictions.
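
    The conventional slip velocity boundary condition with which the diffuse-scattering analysis is stated to be consistent is usually written in the first-order Maxwell form below, where σ_v is the tangential momentum accommodation coefficient and λ the molecular mean free path (standard relation, not the paper's derivation):

    ```latex
    \[ u_s - u_w \;=\; \frac{2 - \sigma_v}{\sigma_v}\,\lambda\,\left.\frac{\partial u}{\partial n}\right|_{\mathrm{wall}} \]
    ```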

  7. Theoretical analysis of the state of balance in bipedal walking.

    PubMed

    Firmani, Flavio; Park, Edward J

    2013-04-01

    This paper presents a theoretical analysis based on classic mechanical principles of balance of forces in bipedal walking. Theories on the state of balance have been proposed in the area of humanoid robotics and although the laws of classical mechanics are equivalent to both humans and humanoid robots, the resulting motion obtained with these theories is unnatural when compared to normal human gait. Humanoid robots are commonly controlled using the zero moment point (ZMP) with the condition that the ZMP cannot exit the foot-support area. This condition is derived from a physical model in which the biped must always walk under dynamically balanced conditions, making the centre of pressure (CoP) and the ZMP always coincident. On the contrary, humans follow a different strategy characterized by a 'controlled fall' at the end of the swing phase. In this paper, we present a thorough theoretical analysis of the state of balance and show that the ZMP can exit the support area, and its location is representative of the imbalance state characterized by the separation between the ZMP and the CoP. Since humans exhibit this behavior, we also present proof-of-concept results of a single subject walking on an instrumented treadmill at different speeds (from slow 0.7 m/s to fast 2.0 m/s walking with increments of 0.1 m/s) with the motion recorded using an optical motion tracking system. In order to evaluate the experimental results of this model, the coefficient of determination (R2) is used to correlate the measured ground reaction forces and the resultant of inertial and gravitational forces (anteroposterior R² = 0.93, mediolateral R² = 0.89, and vertical R² = 0.86) indicating that there is a high correlation between the measurements. The results suggest that the subject exhibits a complete dynamically balanced gait during slow speeds while experiencing a controlled fall (end of swing phase) with faster speeds. This is quantified with the root-mean-square deviation (RMSD
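
    For orientation, in the simplest point-mass (linear inverted pendulum) approximation, which is a common simplification rather than the full multi-segment formulation used in this paper, the ZMP location along the direction of progression is:

    ```latex
    \[ x_{\mathrm{ZMP}} \;=\; x_{\mathrm{CoM}} \;-\; \frac{z_{\mathrm{CoM}}}{\ddot{z}_{\mathrm{CoM}} + g}\,\ddot{x}_{\mathrm{CoM}} \]
    ```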

  8. A Model and Simple Iterative Algorithm for Redundancy Analysis.

    ERIC Educational Resources Information Center

    Fornell, Claes; And Others

    1988-01-01

    This paper shows that redundancy maximization with J. K. Johansson's extension can be accomplished via a simple iterative algorithm based on H. Wold's Partial Least Squares. The model and the iterative algorithm for the least squares approach to redundancy maximization are presented. (TJH)

  9. Reversal of Emergent Simple Discrimination in Children: A Component Analysis.

    ERIC Educational Resources Information Center

    Smeets, Paul M.; And Others

    1995-01-01

    Examined reversal of emergent simple discriminations through stimulus contiguity. In experiment one, Baseline and Reversal phases were positive for most children. Experiments two through four examined protocol aspects that possibly contributed to successful reversal of the form discrimination; found that reversed discrimination usually was a…

  10. Theoretical Analysis of the Electron Spiral Toroid Concept

    NASA Technical Reports Server (NTRS)

    Cambier, Jean-Luc; Micheletti, David A.; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    This report describes the analysis of the Electron Spiral Toroid (EST) concept being promoted by Electron Power Systems Inc. (EPS). The EST is described as a toroidal plasma structure composed of ion and electron shells. It is claimed that the EST requires little or no external confinement, despite the extraordinarily large energy densities resulting from the self-generated magnetic fields. The present analysis is based upon documentation made available by EPS, a previous description of the model by the Massachusetts Institute of Technology (MIT), and direct discussions with EPS and MIT. It is found that claims of absolute stability and large energy storage capacities of the EST concept have not been substantiated. Notably, it can be demonstrated that the ion fluid is fundamentally unstable. Although various scenarios for ion confinement were subsequently suggested by EPS and MIT, none were found to be plausible. Although the experimental data do not prove the existence of EST configurations, there is undeniable experimental evidence that plasma structures of some type, whose characteristics remain to be determined, are observed. However, more realistic theoretical models must first be developed to explain their existence and properties before applications of interest to NASA can be assessed and developed.

  11. Designing novel Sn-Bi, Si-C and Ge-C nanostructures, using simple theoretical chemical similarities

    PubMed Central

    2011-01-01

    A framework of simple, transparent and powerful concepts is presented which is based on isoelectronic (or isovalent) principles, analogies, regularities and similarities. These analogies could be considered as conceptual extensions of the periodic table of the elements, assuming that two atoms or molecules having the same number of valence electrons would be expected to have similar or homologous properties. In addition, such similar moieties should be able, in principle, to replace each other in more complex structures and nanocomposites. This is only partly true and only occurs under certain conditions, which are investigated and reviewed here. When successful, these concepts are very powerful and transparent, leading to a large variety of nanomaterials based on Si and other group 14 elements, similar to well known and well studied analogous materials based on boron and carbon. Such nanomaterials designed in silico include, among many others, Sn-Bi, Si-C and Ge-C clusters, rings, nanowheels, nanorods, nanocages and multidecker sandwiches, as well as silicon planar rings and fullerenes similar to the analogous sp2-bonded carbon structures. It is shown that this pedagogically simple and transparent framework can lead to an endless variety of novel and functional nanomaterials with important potential applications in nanotechnology, nanomedicine and nanobiology. Some of the so-called predicted structures have already been synthesized, although not necessarily with the same rationale and motivation. Finally, it is anticipated that such powerful and transparent rules and analogies, in addition to their predictive power, could also lead to far-reaching interpretations and a deeper understanding of already known results and information. PMID:21711875

  12. Designing novel Sn-Bi, Si-C and Ge-C nanostructures, using simple theoretical chemical similarities

    NASA Astrophysics Data System (ADS)

    Zdetsis, Aristides D.

    2011-04-01

    A framework of simple, transparent and powerful concepts is presented which is based on isoelectronic (or isovalent) principles, analogies, regularities and similarities. These analogies could be considered as conceptual extensions of the periodic table of the elements, assuming that two atoms or molecules having the same number of valence electrons would be expected to have similar or homologous properties. In addition, such similar moieties should be able, in principle, to replace each other in more complex structures and nanocomposites. This is only partly true and only occurs under certain conditions, which are investigated and reviewed here. When successful, these concepts are very powerful and transparent, leading to a large variety of nanomaterials based on Si and other group 14 elements, similar to well known and well studied analogous materials based on boron and carbon. Such nanomaterials designed in silico include, among many others, Sn-Bi, Si-C and Ge-C clusters, rings, nanowheels, nanorods, nanocages and multidecker sandwiches, as well as silicon planar rings and fullerenes similar to the analogous sp2-bonded carbon structures. It is shown that this pedagogically simple and transparent framework can lead to an endless variety of novel and functional nanomaterials with important potential applications in nanotechnology, nanomedicine and nanobiology. Some of the so-called predicted structures have already been synthesized, although not necessarily with the same rationale and motivation. Finally, it is anticipated that such powerful and transparent rules and analogies, in addition to their predictive power, could also lead to far-reaching interpretations and a deeper understanding of already known results and information.

  13. GRETNA: a graph theoretical network analysis toolbox for imaging connectomics

    PubMed Central

    Wang, Jinhui; Wang, Xindi; Xia, Mingrui; Liao, Xuhong; Evans, Alan; He, Yong

    2015-01-01

    Recent studies have suggested that the brain's structural and functional networks (i.e., connectomics) can be constructed by various imaging technologies (e.g., EEG/MEG; structural, diffusion and functional MRI) and further characterized by graph theory. Given the huge complexity of network construction, analysis and statistics, toolboxes incorporating these functions are largely lacking. Here, we developed the GRaph thEoreTical Network Analysis (GRETNA) toolbox for imaging connectomics. GRETNA contains several key features, as follows: (i) an open-source, Matlab-based, cross-platform (Windows and UNIX OS) package with a graphical user interface (GUI); (ii) allowing topological analyses of global and local network properties with parallel computing ability, independent of imaging modality and species; (iii) providing flexible manipulations in several key steps during network construction and analysis, which include network node definition, network connectivity processing, network type selection and choice of thresholding procedure; (iv) allowing statistical comparisons of global, nodal and connectional network metrics and assessments of the relationship between these network metrics and clinical or behavioral variables of interest; and (v) including functionality in image preprocessing and network construction based on resting-state functional MRI (R-fMRI) data. After applying GRETNA to a publicly released R-fMRI dataset of 54 healthy young adults, we demonstrated that human brain functional networks exhibit efficient small-world, assortative, hierarchical and modular organizations and possess highly connected hubs, and that these findings are robust against different analytical strategies. With these efforts, we anticipate that GRETNA will accelerate imaging connectomics in an easy, quick and flexible manner. GRETNA is freely available on the NITRC website. PMID:26175682

  14. A Model and Simple Iterative Algorithm For Redundancy Analysis.

    PubMed

    Fornell, C; Barclay, D W; Rhee, B D

    1988-07-01

    Stewart and Love proposed redundancy as an index for measuring the amount of shared variance between two sets of variables. van den Wollenberg presented a method for maximizing redundancy. Johansson extended the approach to include derivation of optimal Y-variates, given the X-variates. This paper shows that redundancy maximization with Johansson's extension can be accomplished via a simple iterative algorithm based on Wold's Partial Least Squares.
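
    The redundancy index that these papers maximize is commonly computed as the average squared multiple correlation of each Y-variable with the full X set. A minimal numpy sketch of that index (the index itself, not Johansson's optimal-variate algorithm or Wold's PLS iteration) is:

    ```python
    import numpy as np

    def stewart_love_redundancy(X, Y):
        """Average R^2 of each column of Y regressed on the full X block,
        i.e. the Stewart-Love redundancy of Y given X (illustrative sketch)."""
        X = X - X.mean(axis=0)
        Y = Y - Y.mean(axis=0)
        B, *_ = np.linalg.lstsq(X, Y, rcond=None)   # least-squares fit of all Y columns on X
        resid = Y - X @ B
        r2 = 1.0 - (resid ** 2).sum(axis=0) / (Y ** 2).sum(axis=0)
        return float(r2.mean())

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        X = rng.normal(size=(100, 3))
        Y = X @ rng.normal(size=(3, 4)) + 0.5 * rng.normal(size=(100, 4))
        print(f"redundancy of Y given X: {stewart_love_redundancy(X, Y):.3f}")
    ```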

  15. Renormalization group analysis for an asymmetric simple exclusion process.

    PubMed

    Mukherji, Sutapa

    2017-03-01

    A perturbative renormalization group method is used to obtain steady-state density profiles of a totally asymmetric simple exclusion process with particle adsorption and evaporation. This method allows us to obtain a globally valid solution for the density profile without the asymptotic matching of bulk and boundary layer solutions. In addition, we show a nontrivial scaling of the boundary layer width with the system size close to specific phase boundaries.
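
    The continuum mean-field description from which such boundary-layer and renormalization treatments usually start is the steady-state density equation for a TASEP with Langmuir (adsorption/evaporation) kinetics; in a common scaling, with ε of order 1/N and attachment/detachment rates Ω_A and Ω_D, it reads (standard form, the paper's specific scaling may differ):

    ```latex
    \[ \frac{\varepsilon}{2}\,\frac{d^{2}\rho}{dx^{2}}
       \;+\; \bigl(2\rho - 1\bigr)\frac{d\rho}{dx}
       \;+\; \Omega_A\,(1-\rho) \;-\; \Omega_D\,\rho \;=\; 0 \]
    ```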

  16. Advances in multiscale theoretical analysis and imaging aspects of turbulence

    NASA Astrophysics Data System (ADS)

    Shockro, Jennifer

    The work presented in this dissertation is focused on two aspects related to turbulent flow. The first of these is the one-dimensional theoretical analysis of the logarithmic spiral in terms of fractal dimension and spectrum. The second is on imaging methodologies and analysis of turbulent jet scalar interfaces in atmospheric conditions, with broad applicability to various studies where turbulence has a key role, such as urban contaminant dispersion or free-space laser communications. The logarithmic spiral is of particular interest to studies of turbulence and natural phenomena as it appears frequently in nature with the "Golden Ratio" and is thought to play an important role in turbulent mixing. It is also an inherently anisotropic geometric structure and therefore provides information towards examining phenomena in which anisotropic properties might be expected to appear, and it is thought to be present as a structure within the fine scales of the turbulent hierarchy. In this work it is subjected to one-dimensional theoretical analysis, focusing on the development of a probability density function (pdf) for the spiral and the relation of the pdf to its fractal dimension. Results indicate that the logarithmic spiral does not have a constant fractal dimension and thus that it does not exhibit any form of self-similar statistical behavior, supporting previous theoretical suppositions about behavior at the fine scales within the turbulent hierarchy. A signal is developed from the pdf in order to evaluate its power spectrum. Results of this analysis provide information about the manner in which energy is carried at different scales of the spiral. To our knowledge, the logarithmic spiral in particular has not yet been examined in this fashion in the literature. In order to further investigate this object, the multiscale minima meshless (M3) method is extended and applied computationally to the two-dimensional logarithmic spiral as well as to experimental images of a

  17. Comprehensive analysis of Arabidopsis expression level polymorphisms with simple inheritance

    PubMed Central

    Plantegenet, Stephanie; Weber, Johann; Goldstein, Darlene R; Zeller, Georg; Nussbaumer, Cindy; Thomas, Jérôme; Weigel, Detlef; Harshman, Keith; Hardtke, Christian S

    2009-01-01

    In Arabidopsis thaliana, gene expression level polymorphisms (ELPs) between natural accessions that exhibit simple, single locus inheritance are promising quantitative trait locus (QTL) candidates to explain phenotypic variability. It is assumed that such ELPs overwhelmingly represent regulatory element polymorphisms. However, comprehensive genome-wide analyses linking expression level, regulatory sequence and gene structure variation are missing, preventing definite verification of this assumption. Here, we analyzed ELPs observed between the Eil-0 and Lc-0 accessions. Compared with non-variable controls, 5′ regulatory sequence variation in the corresponding genes is indeed increased. However, ∼42% of all the ELP genes also carry major transcription unit deletions in one parent as revealed by genome tiling arrays, representing a >4-fold enrichment over controls. Within the subset of ELPs with simple inheritance, this proportion is even higher and deletions are generally more severe. Similar results were obtained from analyses of the Bay-0 and Sha accessions, using alternative technical approaches. Collectively, our results suggest that drastic structural changes are a major cause for ELPs with simple inheritance, corroborating experimentally observed indel preponderance in cloned Arabidopsis QTL. PMID:19225455

  18. Optimal design of an activated sludge plant: theoretical analysis

    NASA Astrophysics Data System (ADS)

    Islam, M. A.; Amin, M. S. A.; Hoinkis, J.

    2013-06-01

    The design procedure of an activated sludge plant consisting of an activated sludge reactor and a settling tank has been theoretically analyzed, assuming that (1) the Monod equation completely describes the growth kinetics of the microorganisms causing the degradation of biodegradable pollutants and (2) the settling characteristics are fully described by a power law. For a given reactor height, the design parameter of the reactor (reactor volume) reduces to the reactor area. The sum total area of the reactor and the settling tank is then expressed as a function of the activated sludge concentration X and the recycle ratio α. A procedure has been developed to calculate X_opt, for which the total required area of the plant is a minimum for a given microbiological system and recycle ratio. Mathematical relations have been derived to calculate the α-range in which X_opt meets the requirements of the F/M ratio. Results of the analysis are illustrated for varying X and α. Mathematical formulae are proposed to recalculate the recycle ratio in the event that the influent parameters differ from those assumed in the design.
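
    The two constitutive assumptions named in the abstract take the following standard forms (the kinetic constants and the settling exponent are system-specific parameters):

    ```latex
    % Monod growth kinetics: specific growth rate as a function of substrate concentration S
    \[ \mu \;=\; \mu_{\max}\,\frac{S}{K_S + S} \]
    % power-law dependence of the hindered settling velocity on the sludge concentration X
    \[ v_s \;=\; k\,X^{-n} \]
    ```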

  19. The graph theoretical analysis of the SSVEP harmonic response networks.

    PubMed

    Zhang, Yangsong; Guo, Daqing; Cheng, Kaiwen; Yao, Dezhong; Xu, Peng

    2015-06-01

    Steady-state visually evoked potentials (SSVEP) have been widely used in neural engineering and cognitive neuroscience research. Previous studies have indicated that the SSVEP fundamental frequency responses are correlated with the topological properties of the functional networks entrained by the periodic stimuli. Given the different spatial and functional roles of the fundamental frequency and harmonic responses, in this study we further investigated the relation between the harmonic responses and the corresponding functional networks using graph theoretical analysis. We found that the second harmonic responses were positively correlated with the mean functional connectivity, clustering coefficient, and global and local efficiencies, while negatively correlated with the characteristic path lengths of the corresponding networks. In addition, a similar pattern occurred for the third harmonic responses at the lowest stimulus frequency (6.25 Hz). These findings demonstrate that more efficient brain networks are related to larger SSVEP responses. Furthermore, we showed that the main connection pattern of the SSVEP harmonic response networks originates from the interactions between the frontal and parietal-occipital regions. Overall, this study may bring new insights into the understanding of the brain mechanisms underlying SSVEP.
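
    The metrics named above (mean functional connectivity, clustering coefficient, characteristic path length, global and local efficiency) can be reproduced on a thresholded connectivity matrix with standard graph tools; the sketch below uses networkx on a binarized matrix and is only an illustration of the metrics, not the authors' pipeline (the threshold value is arbitrary).

    ```python
    import numpy as np
    import networkx as nx

    def network_metrics(conn, threshold=0.3):
        """Binarize a symmetric connectivity matrix and compute the graph
        metrics discussed in the abstract (illustrative sketch)."""
        adj = (np.abs(conn) >= threshold).astype(int)
        np.fill_diagonal(adj, 0)
        G = nx.from_numpy_array(adj)
        upper = conn[np.triu_indices_from(conn, k=1)]
        metrics = {
            "mean_connectivity": float(np.mean(np.abs(upper))),
            "clustering_coefficient": nx.average_clustering(G),
            "global_efficiency": nx.global_efficiency(G),
            "local_efficiency": nx.local_efficiency(G),
        }
        if nx.is_connected(G):
            metrics["characteristic_path_length"] = nx.average_shortest_path_length(G)
        return metrics

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        c = rng.uniform(0.0, 1.0, size=(30, 30))
        print(network_metrics((c + c.T) / 2))
    ```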

  20. A novel theoretical approach to the analysis of dendritic transients.

    PubMed Central

    Agmon-Snir, H

    1995-01-01

    A novel theoretical framework for analyzing dendritic transients is introduced. This approach, called the method of moments, is an extension of Rall's cable theory for dendrites. It provides analytic investigation of voltage attenuation, signal delay, and synchronization problems in passive dendritic trees. In this method, the various moments of a transient signal are used to characterize the properties of the transient. The strength of the signal is measured by the time integral of the signal, its characteristic time is determined by its centroid ("center of gravity"), and the width of the signal is determined by a measure similar to the standard deviation in probability theory. Using these signal properties, the method of moments provides theorems, expressions, and efficient algorithms for analyzing the voltage response in arbitrary passive trees. The method yields new insights into spatiotemporal integration, coincidence detection mechanisms, and the properties of local interactions between synaptic inputs in dendritic trees. The method can also be used for matching dendritic neuron models to experimental data and for the analysis of synaptic inputs recorded experimentally. PMID:8580308
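
    The three signal properties described here correspond to the zeroth, first and second central temporal moments of the voltage transient V(t); written out (generic moment definitions, not necessarily the paper's notation):

    ```latex
    % signal strength: time integral of the transient
    \[ s \;=\; \int_{0}^{\infty} V(t)\,dt \]
    % characteristic time: centroid ("center of gravity")
    \[ \hat{t} \;=\; \frac{1}{s}\int_{0}^{\infty} t\,V(t)\,dt \]
    % width: second central moment, the analogue of a standard deviation
    \[ W^{2} \;=\; \frac{1}{s}\int_{0}^{\infty} \bigl(t - \hat{t}\bigr)^{2}\,V(t)\,dt \]
    ```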

  1. GraTeLPy: graph-theoretic linear stability analysis

    PubMed Central

    2014-01-01

    Background: A biochemical mechanism with mass action kinetics can be represented as a directed bipartite graph (bipartite digraph), and modeled by a system of differential equations. If the differential equations (DE) model can give rise to some instability such as multistability or Turing instability, then the bipartite digraph contains a structure referred to as a critical fragment. In some cases the existence of a critical fragment indicates that the DE model can display oscillations for some parameter values. We have implemented a graph-theoretic method that identifies the critical fragments of the bipartite digraph of a biochemical mechanism. Results: GraTeLPy lists all critical fragments of the bipartite digraph of a given biochemical mechanism, thus enabling a preliminary analysis on the potential of a biochemical mechanism for some instability based on its topological structure. The correctness of the implementation is supported by multiple examples. The code is implemented in Python, relies on open software, and is available under the GNU General Public License. Conclusions: GraTeLPy can be used by researchers to test large biochemical mechanisms with mass action kinetics for their capacity for multistability, oscillations and Turing instability. PMID:24572152

  2. PDT driven by energy-converting materials: a theoretical analysis

    NASA Astrophysics Data System (ADS)

    Finlay, Jarod C.

    2009-02-01

    Materials have been developed which absorb radiation of one energy and emit light of another. We present a theoretical analysis of the use of these materials as light sources for photodynamic therapy (PDT). The advantage of this strategy is that radiation of higher particle energy (e.g., x-ray or electron beam) or lower photon energy (e.g., infrared) may have more favorable penetration in tissue or more readily available radiation sources than the radiation absorbed by the sensitizer. Our analysis is based on the transfer of energy from radiation fields to visible light. We analyze two scenarios: PDT pumped by (1) infrared light in a two-photon process and (2) ionizing radiation. In each case, we assume that the converting material and the sensitizer are matched sufficiently well that the transfer of energy between them is essentially lossless. For the infinite and semi-infinite geometries typically used in PDT, we calculate the resulting photodynamic dose distribution and compare it to the dose distribution expected for conventional PDT. We also calculate the dose of the incident beam (ionizing or infrared radiation) required to produce PDT-induced tumoricidal effects, and evaluate the expected toxicity in surrounding normal tissue. The toxicity is assumed to arise from thermal effects and acute ionizing radiation effects, for infrared and ionizing radiation, respectively. Our results predict that ionizing radiation will produce dose-limiting toxicity in most conventional geometries as a result of the high toxicity per unit energy of ionizing radiation. For infrared radiation, we predict that the toxicity can be moderated by proper choice of sensitizer and irradiation geometry and fractionation.

  3. Theoretical analysis of magnetic field interactions with aortic blood flow

    SciTech Connect

    Kinouchi, Y.; Yamaguchi, H.; Tenforde, T.S.

    1996-04-01

    The flow of blood in the presence of a magnetic field gives rise to induced voltages in the major arteries of the central circulatory system. Under certain simplifying conditions, such as the assumption that the length of major arteries (e.g., the aorta) is infinite and that the vessel walls are not electrically conductive, the distribution of induced voltages and currents within these blood vessels can be calculated with reasonable precision. However, the propagation of magnetically induced voltages and currents from the aorta into neighboring tissue structures such as the sinuatrial node of the heart has not been previously determined by any experimental or theoretical technique. In the analysis presented in this paper, a solution of the complete Navier-Stokes equation was obtained by the finite element technique for blood flow through the ascending and descending aortic vessels in the presence of a uniform static magnetic field. Spatial distributions of the magnetically induced voltage and current were obtained for the aortic vessel and surrounding tissues under the assumption that the wall of the aorta is electrically conductive. Results are presented for the calculated values of magnetically induced voltages and current densities in the aorta and surrounding tissue structures, including the sinuatrial node, and for their field-strength dependence. In addition, an analysis is presented of magnetohydrodynamic interactions that lead to a small reduction of blood volume flow at high field levels above approximately 10 tesla (T). Quantitative results are presented on the offsetting effects of oppositely directed blood flows in the ascending and descending aortic segments, and a quantitative estimate is made of the effects of assuming an infinite vs. a finite length of the aortic vessel in calculating the magnetically induced voltage and current density distribution in tissue.
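
    As a rough order-of-magnitude check on the induced voltages discussed above, the standard magnetohydrodynamic estimate for a vessel of internal diameter d carrying blood at mean velocity u perpendicular to a static field B is (a textbook relation, not the paper's finite element result):

    ```latex
    \[ V_{\mathrm{ind}} \;=\; \int (\mathbf{u} \times \mathbf{B}) \cdot d\mathbf{l} \;\approx\; u\,B\,d \]
    ```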

  4. A Boundedness Theoretical Analysis for GrADPDesign: A Case Study on Maze Navigation

    DTIC Science & Technology

    2015-08-17

    A new theoretical analysis towards the goal representation adaptive dynamic programming (GrADP) design is presented, with a case study on maze navigation. ... taken over a preset number (in this case study, set to 10), we randomly pick a direction from the remaining choices as the final decision. We

  5. Electrochemical, spectroscopic and theoretical studies of a simple bifunctional cobalt corrole catalyst for oxygen evolution and hydrogen production.

    PubMed

    Lei, Haitao; Han, Ali; Li, Fengwang; Zhang, Meining; Han, Yongzhen; Du, Pingwu; Lai, Wenzhen; Cao, Rui

    2014-02-07

    Six cobalt and manganese corrole complexes were synthesized and examined as single-site catalysts for water splitting. The simple cobalt corrole [Co(tpfc)(py)2] (1, tpfc = 5,10,15-tris(pentafluorophenyl)corrole, py = pyridine) catalyzed both water oxidation and proton reduction efficiently. By coating complex 1 onto indium tin oxide (ITO) electrodes, the turnover frequency for electrocatalytic water oxidation was 0.20 s⁻¹ at 1.4 V (vs. Ag/AgCl, pH = 7), and it was 1010 s⁻¹ for proton reduction at −1.0 V (vs. Ag/AgCl, pH = 0.5). The stability of 1 for catalytic oxygen evolution and hydrogen production was evaluated by electrochemical, UV-vis and mass measurements, scanning electron microscopy (SEM) and energy dispersive X-ray spectroscopy (EDX), which confirmed that 1 was the real molecular catalyst. Titration and UV-vis experiments showed that the pyridine group on Co dissociated at the beginning of catalysis, which was critical to the subsequent activation of water. A proton-coupled electron transfer process was involved, based on the pH dependence of the water oxidation reaction catalyzed by 1. As for manganese corroles 2–6, although their oxidizing powers were comparable to that of 1, they were not as stable as 1 and underwent decomposition at the electrode. Density functional theory (DFT) calculations indicated that water oxidation by 1 was feasible through a proposed catalytic cycle. The formation of an O–O bond was suggested to be the rate-determining step, and the calculated activation barrier of 18.1 kcal mol⁻¹ was in good agreement with that obtained from experiments.

  6. A simple method for HbF analysis.

    PubMed

    von Mandach, U; Tuchschmid, P; Huch, A; Huch, R

    1987-01-01

    Spectrophotometric methods using CO-oximeters for measurement of carboxyhemoglobin and/or oxygen saturation in human blood include a systematic error that depends on the percentage of fetal hemoglobin. It is therefore of clinical importance to estimate the fetal hemoglobin in order to correct the HbCO and SO2 values. The described method is simple and less time consuming than conventional methods such as HPLC or electrophoresis. Two measurements of oxy- and carboxyhemoglobin in the same blood sample at different oxygen saturations are needed for the estimation of HbF, and these can be performed, including the deoxygenation procedure, in about 40 minutes.

  7. Simple exact analysis of the standardised mortality ratio.

    PubMed Central

    Liddell, F D

    1984-01-01

    The standardised mortality ratio is the ratio of deaths observed, D, to those expected, E, on the basis of the mortality rates of some reference population. On the usual assumptions--that D was generated by a Poisson process and that E is based on such large numbers that it can be taken as without error--the long established, but apparently little known, link between the Poisson and χ² distributions provides both an exact test of significance and expressions for obtaining exact (1 − α) confidence limits on the SMR. When a table of the χ² distribution gives values for 1 − α/2 and α/2 with the required degrees of freedom, the procedures are not only precise but very simple. When the required values of χ² are not tabulated, only slightly less simple procedures are shown to be highly reliable for D greater than 5; they are more reliable for all D and α than even the best of three approximate methods. For small D, all approximations can be seriously unreliable. The exact procedures are therefore recommended for use wherever the basic assumptions (Poisson D and fixed E) apply. PMID:6707569
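
    The Poisson-χ² link described above yields exact limits for the SMR directly from χ² quantiles; the sketch below implements the standard exact two-sided limits and an exact two-sided p-value with scipy (the observed and expected counts shown are placeholders).

    ```python
    from scipy.stats import chi2, poisson

    def smr_exact_ci(D, E, alpha=0.05):
        """Exact (1 - alpha) confidence limits for SMR = D/E, treating D as
        Poisson and E as fixed (chi-squared link to the Poisson distribution)."""
        lower = 0.0 if D == 0 else chi2.ppf(alpha / 2, 2 * D) / (2 * E)
        upper = chi2.ppf(1 - alpha / 2, 2 * (D + 1)) / (2 * E)
        return lower, upper

    def smr_exact_p(D, E):
        """Exact two-sided p-value for H0: SMR = 1, doubling the smaller tail."""
        p_upper = poisson.sf(D - 1, E)   # P(X >= D | mean E)
        p_lower = poisson.cdf(D, E)      # P(X <= D | mean E)
        return min(1.0, 2.0 * min(p_upper, p_lower))

    if __name__ == "__main__":
        D, E = 12, 7.3                   # placeholder counts for illustration
        lo, hi = smr_exact_ci(D, E)
        print(f"SMR = {D / E:.2f}, 95% CI ({lo:.2f}, {hi:.2f}), p = {smr_exact_p(D, E):.3f}")
    ```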

  8. [Simple renal cysts in children: analysis of surgical treatment].

    PubMed

    Vrublevskiĭ, S G; Kovarskiĭ, S L; Menovshchikova, L B; Korzinikova, I N; Vrublevskaia, E N; Al'-Mashat, N a; Poddubnyĭ, G S; Feoktistova, E V

    2008-01-01

    The results of treatment of solitary renal cysts (SRC) in children have been analysed. Laparoscopic and puncture methods were used, and an algorithm for the management of SRC patients is proposed. Ninety children with simple renal cysts aged 1 to 15 years were treated in the Moscow city N.F. Filatov children's hospital N 13 in 1996-2007. The diagnosis was made based on the findings of ultrasound investigation, computed tomography and radionuclide scintigraphy. Surgical treatment was applied only in cases of large cysts (2.5 cm and more), communication with the renal collecting system, or clinical manifestations. In small cysts with normal urodynamics the patients were followed up. Laparoscopic excision of the cyst was performed in 71 cases. The punctures were made under ultrasonic guidance, with or without drainage. The cyst contents were aspirated, followed by sclerotherapy with administration of 96% ethanol into the cyst cavity. There were no postoperative complications. Good results (cyst disappearance) were achieved in 64 patients after laparoscopic and in 13 after puncture treatment; satisfactory results (cyst size reduction and relief of clinical symptoms) were obtained in 6 children after laparoscopic and in 4 after percutaneous puncture treatment. Laparoscopic treatment failed in 1 case of an intraparenchymal cyst, which relapsed. It is recommended to begin treatment of simple renal cysts with the percutaneous puncture method, with drainage and subsequent sclerotherapy, given the positive results obtained after the initial intervention.

  9. Working with Simple Machines

    ERIC Educational Resources Information Center

    Norbury, John W.

    2006-01-01

    A set of examples is provided that illustrate the use of work as applied to simple machines. The ramp, pulley, lever and hydraulic press are common experiences in the life of a student, and their theoretical analysis therefore makes the abstract concept of work more real. The mechanical advantage of each of these systems is also discussed so that…
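
    The work-based treatment referred to above rests on the ideal-machine identity that input and output work are equal, from which the mechanical advantage follows in terms of the distances moved (standard textbook relations):

    ```latex
    \[ W_{\mathrm{in}} = W_{\mathrm{out}}
       \;\;\Longrightarrow\;\;
       F_{\mathrm{in}}\, d_{\mathrm{in}} = F_{\mathrm{out}}\, d_{\mathrm{out}}
       \;\;\Longrightarrow\;\;
       \mathrm{MA} = \frac{F_{\mathrm{out}}}{F_{\mathrm{in}}} = \frac{d_{\mathrm{in}}}{d_{\mathrm{out}}} \]
    ```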

  11. Development of Novel, Simple, Multianalyte Sensors for Remote Environmental Analysis

    SciTech Connect

    Asher, Sanford A.

    2000-06-01

    We will develop simple, inexpensive new chemical sensing materials which can be used as visual color test strips to sensitively and selectively report on the concentration and identity of environmental pollutants such as cations of Pb, U, Pu, Sr, Hg, Cs, Co as well as other species. We will develop inexpensive chemical test strips which can be immersed in water to determine these analytes in the field. We will also develop arrays of these chemical sensing materials which will be attached to fiber optic bundles to be used as rugged multichannel optrodes to simultaneously monitor numerous analytes remotely in hostile environments. These sensing materials are based on the intelligent polymerized crystalline colloidal array (PCCA) technology we recently developed. This sensing motif utilizes a mesoscopically periodic array of colloidal particles polymerized into an acrylamide hydrogel. This array Bragg diffracts light in the visible spectral region due to the periodic array of colloidal particles. This material also contains chelating agents for the analytes of interest. When an analyte binds, its charge is immobilized within the acrylamide hydrogel. The resulting Donnan potential causes an osmotic pressure which swells the array proportional to the concentration of analyte bound. The diffracted wavelength shifts and the color changes. The change in the wavelength diffracted reports on the identity and concentration of the target analyte. Our successful development of these simple, inexpensive, highly sensitive chemical sensing optrodes, which are easily coupled to simple optical instrumentation, could revolutionize environmental monitoring. In addition, we will develop highly rugged versions, which can be attached to core penetrometers and which can be used to determine analytes in buried core samples. Research Progress and Implications: This report summarizes work after 21 months of a three-year project. We have developed a new method to crosslink our PCCA sensing

  12. A theoretical analysis of optimum consumer population and its control.

    PubMed

    Jiang, Z; Mao, Z; Wang, H

    1994-01-01

    Material production is related to population consumption in every society. Consumption also constantly transforms materials, energy, and information. In this sense, consumption provides both impetus for material production and a self-adapting mechanism for population development and control. Population structure variables affecting economic production can be divided according to non-adults, working-age work force and the elderly, social status, and urban-rural structure. The consumptive structures among people of different social status reflect different needs for social and economic development. The theoretical calculation of the consumer population in the national economy demonstrates that the national income in a certain year of a given national economy equals consumption fund plus accumulation fund where consumption fund includes social consumption fund and residential consumption fund. Social consumption fund is spent mostly on public utilities, administrative management, national defense, education, public health and urban construction, as well as on environment management and disaster relief. The residential consumption fund can be divided into basic expenditure such as clothing, food, shelter and transportation, and self-improvement expenditure such as recreation, education, and travel. As a result of economic development, not only the percentage of the expenditure on food will decrease and the percentage of the expenditure on clothing, shelter, transportation, and other daily necessities will increase, but expenses on recreation and education also will grow. Residential consumption is divided into subsistence consumption (Type I consumption) and self-improvement (recreation and education) consumption (Type II consumption) in order to determine consumer population and the degree of urbanization and its impact upon social and economic development. A moderate consumer population model of urban and rural areas was established by using the urban and rural

  13. Large deviation analysis of a simple information engine.

    PubMed

    Maitland, Michael; Grosskinsky, Stefan; Harris, Rosemary J

    2015-11-01

    Information thermodynamics provides a framework for studying the effect of feedback loops on entropy production. It has enabled the understanding of novel thermodynamic systems such as the information engine, which can be seen as a modern version of "Maxwell's Dæmon," whereby a feedback controller processes information gained by measurements in order to extract work. Here, we analyze a simple model of such an engine that uses feedback control based on measurements to obtain negative entropy production. We focus on the distribution and fluctuations of the information obtained by the feedback controller. Significantly, our model allows an analytic treatment for a two-state system with exact calculation of the large deviation rate function. These results suggest an approximate technique for larger systems, which is corroborated by simulation data.
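
    For context, the large deviation rate function computed exactly in this work is, in the standard setting, the Legendre-Fenchel transform of the scaled cumulant generating function of the time-averaged observable j_t (generic definitions, not the model-specific expressions of the paper):

    ```latex
    \[ \lambda(s) \;=\; \lim_{t \to \infty} \frac{1}{t}\,\ln\!\bigl\langle e^{\,s\,t\,j_t} \bigr\rangle,
    \qquad
    I(j) \;=\; \sup_{s}\,\bigl[\, s\,j - \lambda(s) \,\bigr],
    \qquad
    P\bigl(j_t \approx j\bigr) \;\asymp\; e^{-t\,I(j)} \]
    ```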

  14. Large deviation analysis of a simple information engine

    NASA Astrophysics Data System (ADS)

    Maitland, Michael; Grosskinsky, Stefan; Harris, Rosemary J.

    2015-11-01

    Information thermodynamics provides a framework for studying the effect of feedback loops on entropy production. It has enabled the understanding of novel thermodynamic systems such as the information engine, which can be seen as a modern version of "Maxwell's Dæmon," whereby a feedback controller processes information gained by measurements in order to extract work. Here, we analyze a simple model of such an engine that uses feedback control based on measurements to obtain negative entropy production. We focus on the distribution and fluctuations of the information obtained by the feedback controller. Significantly, our model allows an analytic treatment for a two-state system with exact calculation of the large deviation rate function. These results suggest an approximate technique for larger systems, which is corroborated by simulation data.

  15. A simple flow analysis of diffuser-getter-diffuser systems

    SciTech Connect

    Klein, J. E.; Howard, D. W.

    2008-07-15

    Tritium clean-up systems typically deploy gas processing technologies between stages of palladium-silver (Pd/Ag) diffusers/permeators. The number of diffusers positioned before and after a gas clean-up process to obtain optimal system performance will vary with the inert composition of the feed gas. A simple method to analyze the optimal diffuser configuration is presented. The method assumes equilibrium across the Pd/Ag tubes and that system flows are limited by the diffuser vacuum pump speeds preceding or following the clean-up process. A plot of system feed as a function of inert feed gas composition for various diffuser configurations allows selection of a diffuser configuration for maximum throughput based on the feed gas composition. (authors)

  16. Simple bounds on limit loads by elastic finite element analysis

    SciTech Connect

    Mackenzie, D.; Nadarajah, C.; Shi, J.; Boyle, J.T. . Dept. of Mechanical Engineering)

    1993-02-01

    A method for bounding limit loads by an iterative elastic continuum finite element analysis procedure, referred to as the elastic compensation method, is proposed. A number of sample problems are considered, based on both exact solutions and finite element analysis, and it is concluded that the method may be used to obtain limit-load bounds for pressure vessel design by analysis applications with useful accuracy.

  17. Confluence Analysis for Distributed Programs: A Model-Theoretic Approach

    DTIC Science & Technology

    2011-12-18

    Dedalus language, we will refer to two running examples for the remainder of the paper. Example 7. A simple asynchronous marriage ceremony...distributed commit protocols and marriage ceremonies [22]. For simplicity (and felicity), Example 7 presents a simple asynchronous voting program with a...fixed set of members: a bride and a groom. The marriage is off (runaway() is true) if one party says "I do" and the other does not. However, the Dedalus

  18. Toward theoretical analysis of long-range proton transfer kinetics in biomolecular pumps.

    PubMed

    König, P H; Ghosh, N; Hoffmann, M; Elstner, M; Tajkhorshid, E; Frauenheim, Th; Cui, Q

    2006-01-19

    Motivated by the long-term goal of theoretically analyzing long-range proton transfer (PT) kinetics in biomolecular pumps, researchers made a number of technical developments in the framework of quantum mechanics-molecular mechanics (QM/MM) simulations. A set of collective reaction coordinates is proposed for characterizing the progress of long-range proton transfers; unlike previous suggestions, the new coordinates can describe PT along highly nonlinear three-dimensional pathways. Calculations using a realistic model of carbonic anhydrase demonstrated that adiabatic mapping using these collective coordinates gives reliable energetics and critical geometrical parameters as compared to minimum energy path calculations, which suggests that the new coordinates can be effectively used as reaction coordinate in potential of mean force calculations for long-range PT in complex systems. In addition, the generalized solvent boundary potential was implemented in the QM/MM framework for rectangular geometries, which is useful for studying reactions in membrane systems. The resulting protocol was found to produce water structure in the interior of aquaporin consistent with previous studies including a much larger number of explicit solvent and lipid molecules. The effect of electrostatics for PT through a membrane protein was also illustrated with a simple model channel embedded in different dielectric continuum environments. The encouraging results observed so far suggest that robust theoretical analysis of long-range PT kinetics in biomolecular pumps can soon be realized in a QM/MM framework.

  19. Towards theoretical analysis of long-range proton transfer kinetics in biomolecular pumps

    PubMed Central

    König, P. H.; Ghosh, N.; Hoffmann, M.; Elstner, M.; Tajkhorshid, E.; Frauenheim, Th.; Cui, Q.

    2008-01-01

    Motivated by the long-term goal of theoretically analyzing long-range proton transfer (PT) kinetics in biomolecular pumps, a number of technical developments were made in the framework of QM/MM simulations. A set of collective reaction co-ordinates is proposed for characterizing the progress of long-range proton transfers; unlike previous suggestions, the new coordinates can describe PT along highly non-linear three-dimensional pathways. Calculations using a realistic model of carbonic anhydrase demonstrated that adiabatic mapping using these collective coordinates gives reliable energetics and critical geometrical parameters as compared to minimum energy path calculations, which suggests that the new coordinates can be effectively used as a reaction coordinate in potential of mean force calculations for long-range PT in complex systems. In addition, the generalized solvent boundary potential was implemented in the QM/MM framework for rectangular geometries, which is useful for studying reactions in membrane systems. The resulting protocol was found to produce water structure in the interior of aquaporin consistent with previous studies including a much larger number of explicit solvent and lipid molecules. The effect of electrostatics for PT through a membrane protein was also illustrated with a simple model channel embedded in different dielectric continuum environments. The encouraging results observed so far suggest that robust theoretical analysis of long-range PT kinetics in biomolecular pumps can soon be realized in a QM/MM framework. PMID:16405327

  20. Pre/Post Data Analysis - Simple or Is It?

    NASA Technical Reports Server (NTRS)

    Feiveson, Al; Fiedler, James; Ploutz-Snyder, Robert

    2011-01-01

    This slide presentation reviews some of the problems of data analysis in analyzing pre and post data. Using as an example ankle extensor strength (AES) experiments from bed-rest studies of bone density loss, the presentation discusses several questions: (1) How should we describe change? (2) Common analysis methods for comparing post to pre results. (3) What do we mean by "% change"? and (4) What are we testing when we compare % changes?
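
    As a small numerical aside (with made-up numbers, not the data behind the presentation), the sketch below contrasts two common ways of describing pre/post change, percent change relative to the pre value and the log-ratio, and runs a paired t-test on the raw differences.

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post measurements for 8 subjects (e.g., a strength metric).
pre  = np.array([100., 95., 110., 102., 98., 105., 99., 101.])
post = np.array([ 92., 96., 100.,  95., 90., 101., 94.,  97.])

pct_change = 100.0 * (post - pre) / pre        # "% change" relative to pre
log_ratio  = np.log(post / pre)                # symmetric alternative

# Paired t-test on the raw differences (one common, simple analysis).
t, p = stats.ttest_rel(post, pre)
print(f"mean % change  : {pct_change.mean():6.2f} %")
print(f"mean log-ratio : {log_ratio.mean():6.4f}")
print(f"paired t-test  : t = {t:.2f}, p = {p:.4f}")
```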

  1. Analysis of Poisson frequency data under a simple crossover trial.

    PubMed

    Lui, Kung-Jong; Chang, Kuang-Chao

    2016-02-01

    When the frequency of occurrence for an event of interest follows a Poisson distribution, we develop asymptotic and exact procedures for testing non-equality, non-inferiority and equivalence, as well as asymptotic and exact interval estimators for the ratio of mean frequencies between two treatments under a simple crossover design. Using Monte Carlo simulations, we evaluate the performance of these test procedures and interval estimators in a variety of situations. We note that all asymptotic test procedures developed here can generally perform well with respect to Type I error and can be preferable to the exact test procedure with respect to power if the number of patients per group is moderate or large. We further find that in these cases the asymptotic interval estimator with the logarithmic transformation can be more precise than the exact interval estimator without sacrificing the accuracy with respect to the coverage probability. However, the exact test procedure and exact interval estimator can be of use when the number of patients per group is small. We use a double-blind randomized crossover trial comparing salmeterol with a placebo in exacerbations of asthma to illustrate the practical use of these estimators.
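
    For orientation, the sketch below shows one standard asymptotic interval for a ratio of Poisson rates using the logarithmic transformation, together with an exact conditional (binomial) test. The counts are hypothetical, and the code is a generic two-sample illustration rather than the crossover-adjusted estimators developed in the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical event counts and exposure times under two treatments.
x1, t1 = 30, 50.0   # events, person-time under treatment A
x2, t2 = 18, 50.0   # events, person-time under treatment B

# Asymptotic 95% CI for the rate ratio using the log transformation:
# log(ratio) is approximately normal with variance 1/x1 + 1/x2.
ratio = (x1 / t1) / (x2 / t2)
se = np.sqrt(1.0 / x1 + 1.0 / x2)
lo, hi = np.exp(np.log(ratio) + np.array([-1, 1]) * 1.96 * se)
print(f"rate ratio = {ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")

# Exact conditional test: given x1 + x2, x1 is Binomial(x1 + x2, p) with
# p = t1 / (t1 + t2) under equal rates, so a binomial test is exact.
p_exact = stats.binomtest(x1, x1 + x2, t1 / (t1 + t2)).pvalue
print(f"exact conditional p-value = {p_exact:.3f}")
```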

  2. Development of Novel, Simple, Multianalyte Sensors For Remote Environmental Analysis

    SciTech Connect

    Asher, Sanford A.

    1999-06-01

    We will develop simple, inexpensive new chemical sensing materials which can be used as visual color test strips to sensitively and selectively report on the concentration and identity of environmental pollutants such as cations of Pb, U, Pu, Sr, Hg, Cs, Co as well as other species. We will develop inexpensive chemical test strips which can be immersed in water to determine these analytes in the field. We will also develop arrays of these chemical sensing materials which will be attached to fiber optic bundles to be used as rugged multichannel optrodes to simultaneously monitor numerous analytes remotely in hostile environments. These sensing materials are based on the intelligent polymerized crystalline colloidal array (PCCA) technology we recently developed. This sensing motif utilizes a mesoscopically periodic array of colloidal particles polymerized into an acrylamide hydrogel. This array Bragg diffracts light in the visible spectral region due to the periodic array of colloidal particles. This material also contains chelating agents for the analytes of interest. When an analyte binds, its charge is immobilized within the acrylamide hydrogel. The resulting Donnan potential causes an osmotic pressure which swells the array proportional to the concentration of analyte bound. The diffracted wavelength shifts and the color changes. The change in the wavelength diffracted reports on the identity and concentration of the target analyte.

  3. Laser frequency stability: a simple approach for a quantitative analysis

    NASA Astrophysics Data System (ADS)

    Cabral, Alexandre; Abreu, Manuel; Rebordão, José M.

    2011-05-01

    The characterization of laser linewidth and laser frequency stability is critically important for evaluating the performance of a metrology system whose working principle is based on interferometric processes. In particular, the midterm stability range, corresponding to noise in the hundreds of hertz to kilohertz bandwidth, strongly affects the final measurement accuracy when working at measurement rates at the ksample/s level. In this case, it is of crucial importance to know the uncertainty associated with the measurement of the laser instantaneous frequency and the variance of this value within the measurement period. In this paper we present a simple method to measure the frequency noise and obtain the Allan variance statistics for an External Cavity Diode Laser (ECDL) used in a Frequency Sweeping Interferometry (FSI) scheme for long-distance, high-accuracy measurements. For this type of laser, the main contributors affecting the midterm stability are current and technical noise, including thermal and mechanical fluctuations and optical feedback, as well as the feedback stabilization techniques employed to reduce acoustic disturbances. The proposed method is based on the principle of delayed interferometry, where the variation of the laser center frequency is characterized for measurement conditions in the kilohertz range. The final accuracy of the metrology system is evaluated in accordance with the laser stability characteristics obtained by this method.
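
    The sketch below computes a plain non-overlapping Allan variance from a sampled frequency record. The 10 ksample/s rate and the white-noise record are arbitrary stand-ins, and the delayed-interferometry front end that actually produces the instantaneous-frequency samples is not modelled.

```python
import numpy as np

def allan_variance(freq, rate_hz, taus):
    """Non-overlapping Allan variance of a sampled frequency record."""
    out = []
    for tau in taus:
        m = int(round(tau * rate_hz))                   # samples per averaging bin
        n = len(freq) // m
        y = freq[: n * m].reshape(n, m).mean(axis=1)    # bin averages over tau
        out.append(0.5 * np.mean(np.diff(y) ** 2))      # Allan variance definition
    return np.array(out)

rate = 10_000.0                                  # 10 ksample/s frequency record
f = 40.0 * np.random.randn(200_000)              # hypothetical white frequency noise [Hz]
taus = np.array([1e-3, 1e-2, 1e-1, 1.0])
adev = np.sqrt(allan_variance(f, rate, taus))
for t, a in zip(taus, adev):
    print(f"tau = {t:7.3f} s   Allan deviation ~ {a:8.3f} Hz")
```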

  4. Stability analysis of a simple rheonomic nonholonomic constrained system

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Liu, Shi-Xing; Mei, Feng-Xing

    2016-12-01

    Studying the stability of rheonomic and nonholonomic mechanical systems is a difficult problem; in particular, it is difficult to construct the Lyapunov function directly from the differential equations. The gradient system, however, is well suited to studying the stability of a dynamical system with the aid of the Lyapunov function. The stability of the solution for a simple rheonomic nonholonomic constrained system is studied in this paper. Firstly, the differential equations of motion of the system are established. Secondly, a problem is posed in which the generalized forces exerted on the system are chosen such that the solution is stable. Finally, stable solutions of the rheonomic nonholonomic system are constructed by using gradient systems. Project supported by the National Natural Science Foundation of China (Grant Nos. 11272050, 11202090, 11472124, 11572034, and 11572145), the Science and Technology Research Project of Liaoning Province, China (Grant No. L2013005), China Postdoctoral Science Foundation (Grant No. 2014M560203), and the Doctor Start-up Fund in Liaoning Province of China (Grant No. 20141050).

  5. Theoretical analysis and experimental verification on optical rotational Doppler effect.

    PubMed

    Zhou, Hailong; Fu, Dongzhi; Dong, Jianji; Zhang, Pei; Zhang, Xinliang

    2016-05-02

    We present a theoretical model, based on the modal expansion method, to investigate the optical rotational Doppler effect. We find that the frequency shift content is determined only by the surface of the spinning object, and that the reduced Doppler shift is linear in the difference of mode index between the input and output orbital angular momentum (OAM) light, as well as linear in the rotating speed of the spinning object. An experiment is carried out to verify the theoretical model. We explicitly suggest that the spatial spiral phase distribution of the spinning object determines the frequency content. The theoretical model gives a better understanding of the physical processes underlying the rotational Doppler effect, and thus has many related applications, such as detection of rotating bodies, surface imaging, and measurement of OAM light.
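
    As a numerical aside, the commonly quoted rotational Doppler relation, delta_f = (l_in - l_out) * Omega / (2*pi), reproduces the linear scalings described above. The relation as written here is the standard textbook form and the numbers are arbitrary, so this is an illustration rather than a result quoted from the paper.

```python
import numpy as np

def rotational_doppler_shift(l_in, l_out, omega_rad_s):
    """Standard rotational Doppler shift (Hz) for an OAM mode-index change."""
    return (l_in - l_out) * omega_rad_s / (2.0 * np.pi)

omega = 2.0 * np.pi * 50.0          # object spinning at 50 revolutions per second
for dl in (1, 2, 4):                # mode-index differences between input and output
    shift = rotational_doppler_shift(dl, 0, omega)
    print(f"mode-index difference {dl}: shift = {shift:.1f} Hz")
```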

  6. A Simple Theoretical Analysis and Experimental Investigation of Burning Processes for Stick Propellant

    DTIC Science & Technology

    1983-07-01

  7. Positive Action Programmes for Women. 1. A Theoretical Analysis.

    ERIC Educational Resources Information Center

    Vogel-Polsky, Eliane

    1985-01-01

    The author discusses the theoretical aspects of positive action programs for women. In looking at the results achieved by the various laws and institutional machinery introduced in Western Europe to enforce equal pay and equal treatment for men and women in employment, she concludes that no notable progress has been made over the past 10 years.…

  8. The analysis of non-linear dynamic behavior (including snap-through) of postbuckled plates by simple analytical solution

    NASA Technical Reports Server (NTRS)

    Ng, C. F.

    1988-01-01

    Static postbuckling and nonlinear dynamic analysis of plates are usually accomplished by multimode analyses, although the methods are complicated and do not give straightforward understanding of the nonlinear behavior. Assuming single-mode transverse displacement, a simple formula is derived for the transverse load displacement relationship of a plate under in-plane compression. The formula is used to derive a simple analytical expression for the static postbuckling displacement and nonlinear dynamic responses of postbuckled plates under sinusoidal or random excitation. Regions with softening and hardening spring behavior are identified. Also, the highly nonlinear motion of snap-through and its effects on the overall dynamic response can be easily interpreted using the single-mode formula. Theoretical results are compared with experimental results obtained using a buckled aluminum panel, using discrete frequency and broadband point excitation. Some important effects of the snap-through motion on the dynamic response of the postbuckled plates are found.
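
    The single-mode picture can be illustrated with a generic Duffing-type oscillator whose negative linear stiffness gives two stable (buckled) equilibria; integrating it under harmonic forcing exhibits the snap-through motion between wells. The coefficients, forcing level, and the explicit Euler integrator below are arbitrary choices for illustration, not the formula derived in the paper.

```python
import numpy as np

# Single-mode idealization: w'' + c w' - k w + b w^3 = F cos(Om t).
# The negative linear stiffness gives two stable equilibria (the two buckled
# states at w = +/- sqrt(k/b)); sufficient forcing drives snap-through.
c, k, b = 0.1, 1.0, 1.0
F, Om = 0.4, 1.2
dt, n = 1e-3, 200_000

w, v = 1.0, 0.0                      # start in one buckled well
ws = np.empty(n)
for i in range(n):
    t = i * dt
    a = F * np.cos(Om * t) - c * v + k * w - b * w ** 3
    v += a * dt                      # simple explicit Euler step (sketch only)
    w += v * dt
    ws[i] = w

print("min/max displacement:", ws.min(), ws.max())
print("snap-through occurred:", ws.min() < 0 < ws.max())
```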

  9. Theoretical and Experimental Analysis of the Physics of Water Rockets

    ERIC Educational Resources Information Center

    Barrio-Perotti, R.; Blanco-Marigorta, E.; Fernandez-Francos, J.; Galdo-Vega, M.

    2010-01-01

    A simple rocket can be made using a plastic bottle filled with a volume of water and pressurized air. When opened, the air pressure pushes the water out of the bottle. This causes an increase in the bottle momentum so that it can be propelled to fairly long distances or heights. Water rockets are widely used as an educational activity, and several…

  11. Developmental Change in the Relation between Simple and Complex Spans: A Meta-Analysis

    ERIC Educational Resources Information Center

    Tillman, Carin M.

    2011-01-01

    In the present meta-analysis the effects of developmental level on the correlation between simple and complex span tasks were investigated. Simple span-complex span correlation coefficients presented in 52 independent samples (7,060 participants) were regressed on a variable representing mean age of sample (range: 4.96-22.80 years), using analyses…

  13. Insights into Fourier Synthesis and Analysis: Part I--Using Simple Programs and Equipment.

    ERIC Educational Resources Information Center

    Moore, Guy S. M.

    1988-01-01

    Introduced is a unique generation method of Fourier series requiring simple mathematical skills and using computer programs. Discusses Fourier synthesis by microcomputer, and Fourier analysis with simple equipment. Shown are a circuit diagram, computer programs, monitor displays and tables of data. (YP)
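
    In the same spirit, a few lines of code suffice to synthesize a square wave from its odd harmonics and then recover the coefficients by numerical projection; this generic example is offered as an illustration and is not one of the programs listed in the article.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
f0 = 1.0                                       # fundamental frequency (Hz)

# Fourier synthesis: sum of odd harmonics approximates a square wave.
square = np.zeros_like(t)
for n in range(1, 20, 2):
    square += (4.0 / (np.pi * n)) * np.sin(2.0 * np.pi * n * f0 * t)

# Fourier analysis: project back onto the harmonics to recover coefficients.
for n in (1, 3, 5):
    bn = 2.0 * np.mean(square * np.sin(2.0 * np.pi * n * f0 * t))
    print(f"b_{n} recovered = {bn:.4f}, expected = {4.0 / (np.pi * n):.4f}")
```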

  14. Theoretical Analysis of Positional Uncertainty in Direct Georeferencing

    NASA Astrophysics Data System (ADS)

    Coskun Kiraci, Ali; Toz, Gonul

    2016-10-01

    A GNSS/INS system, composed of a Global Navigation Satellite System receiver and an Inertial Navigation System, can provide orientation parameters directly from the observations collected during the flight. Orientation parameters can thus be obtained by the GNSS/INS integration process without any need for aerotriangulation after the flight. In general, positional uncertainty can be estimated from the known coordinates of Ground Control Points (GCPs), which require field work such as marker construction and GNSS measurement, adding cost to the project. Here the question arises of what the theoretical uncertainty of point coordinates should be, given the uncertainties of the orientation parameters. In this study the contribution of each orientation parameter to positional uncertainty is examined, and the theoretical positional uncertainty for direct georeferencing is computed without GCP measurements using a graphical user interface developed in MATLAB.
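
    As a rough illustration of the idea (not the MATLAB tool or the full error propagation of the study), a first-order, nadir-looking approximation maps an attitude uncertainty at flying height H into a ground displacement of roughly H times the angle, combined in quadrature with the GNSS position uncertainty; the numbers below are assumed values.

```python
import numpy as np

H = 1500.0                                   # flying height above ground [m]
sigma_angle_deg = 0.005                      # assumed attitude uncertainty [deg]
sigma_gnss = 0.05                            # assumed GNSS position uncertainty [m]

# First-order, nadir-looking approximation: ground displacement ~ H * angle.
sigma_from_attitude = H * np.deg2rad(sigma_angle_deg)
sigma_ground = np.hypot(sigma_from_attitude, sigma_gnss)

print(f"attitude contribution : {sigma_from_attitude:.3f} m")
print(f"combined ground sigma : {sigma_ground:.3f} m")
```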

  15. Simple Sensitivity Analysis for Orion Guidance Navigation and Control

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool ("Critical Factors Tool" or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. Input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for the success of various requirements. Examples are shown in this paper as well as a summary and physics discussion of EFT-1 driving factors that the tool found.
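
    The sketch below mimics, in toy form, the kind of screening such a tool performs: draw dispersed inputs, evaluate a pass/fail requirement, and rank each input by how much the pass probability varies across its quantile bins. The dispersed variables, the requirement, and the binning are invented for illustration and are unrelated to the actual Orion simulations or the CFT's specific sensitivity measures.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Invented dispersed inputs (stand-ins for mass properties, winds, etc.).
X = {
    "mass_offset": rng.normal(0.0, 1.0, n),
    "wind_speed":  rng.normal(5.0, 2.0, n),
    "thrust_disp": rng.normal(0.0, 0.5, n),
}
# Invented requirement: "touchdown miss distance" must stay under a limit.
miss = (2.0 * np.abs(X["wind_speed"] - 5.0)
        + 0.3 * np.abs(X["mass_offset"])
        + rng.normal(0.0, 0.5, n))
passed = miss < 5.0

# Screen each factor: spread of pass probability across 5 quantile bins.
for name, x in X.items():
    edges = np.quantile(x, np.linspace(0, 1, 6))
    idx = np.digitize(x, edges[1:-1])                       # bin index 0..4
    p_bin = np.array([passed[idx == b].mean() for b in range(5)])
    print(f"{name:12s} pass-probability spread = {p_bin.max() - p_bin.min():.3f}")
```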

  16. A Theoretical Model Analysis of Absorption of a Three level Diode Pumped Alkali Laser

    DTIC Science & Technology

    2009-03-01

    AFIT/GAP/ENP/09-M07. This paper models the absorption phenomena of light in a three-level diode pumped alkali laser system. Specifically, this model calculates for a user

  17. A Measurement-Theoretic Analysis of the Fuzzy Logic Model of Perception.

    ERIC Educational Resources Information Center

    Crowther, Court S.; And Others

    1995-01-01

    The fuzzy logic model of perception (FLMP) is analyzed from a measurement-theoretic perspective. The choice rule of FLMP is shown to be equivalent to a version of the Rasch model. In fact, FLMP can be reparameterized as a simple two-category logit model. (SLD)

  19. Simple and clean determination of tetracyclines by flow injection analysis

    NASA Astrophysics Data System (ADS)

    Rodríguez, Michael Pérez; Pezza, Helena Redigolo; Pezza, Leonardo

    2016-01-01

    An environmentally reliable analytical methodology was developed for direct quantification of tetracycline (TC) and oxytetracycline (OTC) using continuous flow injection analysis with spectrophotometric detection. The method is based on the diazo coupling reaction between the tetracyclines and diazotized sulfanilic acid in a basic medium, resulting in the formation of an intense orange azo compound that presents maximum absorption at 434 nm. Experimental design was used to optimize the analytical conditions. The proposed technique was validated over the concentration range of 1 to 40 μg mL⁻¹, and was successfully applied to samples of commercial veterinary pharmaceuticals. The detection (LOD) and quantification (LOQ) limits were 0.40 and 1.35 μg mL⁻¹, respectively. The samples were also analyzed by an HPLC method, and the results showed agreement with the proposed technique. The new flow injection method can be immediately used for quality control purposes in the pharmaceutical industry, facilitating monitoring in real time during the production processes of tetracycline formulations for veterinary use.

  20. Nonlinear Phase Field Theory for Fracture and Twinning with Analysis of Simple Shear

    DTIC Science & Technology

    2015-09-01

    ARL-RP-0546, SEP 2015, US Army Research Laboratory. Nonlinear Phase Field Theory for Fracture and Twinning with Analysis of Simple Shear, by JD Clayton (Weapons and Materials Research Directorate, ARL) and J Knap (Computational and...). Reprint; dates covered: 15 May 2014–20 July 2015.

  1. Statistical analysis of simple repeats in the human genome

    NASA Astrophysics Data System (ADS)

    Piazza, F.; Liò, P.

    2005-03-01

    The human genome contains repetitive DNA at different levels of sequence length, number and dispersion. Highly repetitive DNA is particularly rich in homo- and di-nucleotide repeats, while middle repetitive DNA is rich in families of interspersed, mobile elements hundreds of base pairs (bp) long, to which the Alu families belong. A link between homo- and di-polymeric tracts and mobile elements has been recently highlighted. In particular, the mobility of Alu repeats, which form 10% of the human genome, has been correlated with the length of poly(A) tracts located at one end of the Alu. These tracts have a rigid and non-bendable structure and have an inhibitory effect on nucleosomes, which normally compact the DNA. We performed a statistical analysis of the genome-wide distribution of lengths and inter-tract separations of poly(X) and poly(XY) tracts in the human genome. Our study shows that in humans the length distributions of these sequences reflect the dynamics of their expansion and DNA replication. By means of general tools from linguistics, we show that the latter play the role of highly-significant content-bearing terms in the DNA text. Furthermore, we find that such tracts are positioned in a non-random fashion, with an apparent periodicity of 150 bases. This allows us to extend the link between repetitive, highly mobile elements such as Alus and low-complexity words in human DNA. More precisely, we show that Alus are sources of poly(X) tracts, which in turn affect in a subtle way the combination and diversification of gene expression and the fixation of multigene families.
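
    A minimal version of the tract census underlying such a study can be written in a few lines: scan a sequence for poly(X) runs and tabulate their length distribution. The sequence below is random, so the resulting distribution is roughly geometric rather than the genomic one reported in the paper.

```python
import random
import re
from collections import Counter

random.seed(1)
seq = "".join(random.choice("ACGT") for _ in range(100_000))   # stand-in sequence

# Find homopolymer (poly(X)) tracts of length >= 5 for each base.
lengths = Counter()
for base in "ACGT":
    for m in re.finditer(f"{base}{{5,}}", seq):
        lengths[len(m.group())] += 1

for L in sorted(lengths):
    print(f"tract length {L:2d}: {lengths[L]} occurrences")
```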

  2. Sparsity analysis of DS spread spectrum signals via theoretical analysis and dictionary learning

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Wu, Bin; Wang, Bo

    2017-04-01

    To address the high sampling rates and massive data processing burden brought by high bandwidth in the field of aerospace communication, researchers have applied compressed sensing (CS) theory to spread spectrum signal processing. Sparsity analysis is the prerequisite for the application of CS theory. This paper studies the sparsity of DS spread spectrum signals, which are the most common kind of signal in current TT&C systems. A sparse dictionary is obtained from the theoretical analysis and then optimized with the K-SVD dictionary learning algorithm. The simulation results show that the signals have strong sparsity in the constructed sparse dictionary, which lays a theoretical foundation for TT&C spread spectrum signal processing based on CS theory.

  3. CMS Made Simple: A ROOT-less workflow for educating undergraduates about CMS data analysis

    NASA Astrophysics Data System (ADS)

    Muenkel, Jessica; Bellis, Matthew; CMS Collaboration

    2015-04-01

    Involving students in research is an important part of the undergraduate experience. By working on a problem where the answer is unknown, students apply what they learn in the classroom to a real-world challenge, which reinforces the more theoretical aspects of their courses. Many undergraduates are drawn to the idea of working on big particle physics experiments like CMS (Compact Muon Solenoid) at the Large Hadron Collider (LHC), but the threshold is high for them to contribute to an analysis. Those of us who perform research spend much of our time debugging scripts and C++ code, usually specific to that one experiment. If an undergraduate is not going on to grad school in particle physics, much of that work can be wasted on them. However, there are many general skills that students can learn by working on parts of a particle physics analysis (relativistic kinematics, statistics, coding, etc.), and so it is worth trying to lower the threshold to engage students. In this poster, we present a suite of datasets and tools, built around the Python programming language, that simplify the workflow and allow a student to interact with CMS data immediately. While it is a staple of the particle physics community, we avoid using the ROOT toolkit, so as to stick to more broadly used tools that the students can take with them. These tools are being used to supplement the educational examples for the CERN Open Data Portal, a project to make LHC datasets available to the general public. The successes and limitations of CMS Made Simple will be discussed and links are provided to these tools.
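
    A typical ROOT-free exercise of the kind such a suite enables is computing a dimuon invariant mass directly from four-vector components with plain NumPy; the two muon records below are invented numbers, not CMS data, and the (E, px, py, pz) layout is an assumption for illustration.

```python
import numpy as np

# Hypothetical (E, px, py, pz) four-vectors for two muons, in GeV.
mu1 = np.array([45.2,  20.1, -35.0,  15.3])
mu2 = np.array([48.7, -18.9,  36.2, -20.1])

def invariant_mass(p4a, p4b):
    """Relativistic invariant mass of a two-particle system (natural units)."""
    E, px, py, pz = p4a + p4b
    return np.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

print(f"dimuon invariant mass = {invariant_mass(mu1, mu2):.1f} GeV")
```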

  4. Extending and automating a Systems-Theoretic hazard analysis for requirements generation and analysis.

    SciTech Connect

    Thomas, John

    2012-05-01

    Systems Theoretic Process Analysis (STPA) is a powerful new hazard analysis method designed to go beyond traditional safety techniques - such as Fault Tree Analysis (FTA) - that overlook important causes of accidents like flawed requirements, dysfunctional component interactions, and software errors. While STPA has proven to be very effective on real systems, no formal structure has been defined for it, and its application has been ad hoc, with no rigorous procedures or model-based design tools. This report defines a formal mathematical structure underlying STPA and describes a procedure for systematically performing an STPA analysis based on that structure. A method for using the results of the hazard analysis to generate formal safety-critical, model-based system and software requirements is also presented. Techniques to automate both the analysis and the requirements generation are introduced, as well as a method to detect conflicts between the safety and other functional model-based requirements during early development of the system.

  5. Anion order in perovskites: a group-theoretical analysis.

    PubMed

    Talanov, M V; Shirokov, V B; Talanov, V M

    2016-03-01

    Anion ordering in the structure of cubic perovskite has been investigated by the group-theoretical method. The possibility of the existence of 261 ordered low-symmetry structures, each with a unique space-group symmetry, is established. These results include five binary and 14 ternary anion superstructures. The 261 idealized anion-ordered perovskite structures are considered as aristotypes, giving rise to different derivatives. The structures of these derivatives are formed by tilting of BO6 octahedra, distortions caused by the cooperative Jahn-Teller effect and other physical effects. Some derivatives of aristotypes exist as real substances, and some as virtual ones. A classification of aristotypes of anion superstructures in perovskite is proposed: the AX class (the simultaneous ordering of A cations and anions in cubic perovskite structure), the BX class (the simultaneous ordering of B cations and anions) and the X class (the ordering of anions only in cubic perovskite structure). In most perovskites anion ordering is accompanied by cation ordering. Therefore, the main classes of anion order in perovskites are the AX and BX classes. The calculated structures of some anion superstructures are reported. Comparison of predictions and experimentally investigated anion superstructures shows coherency of theoretical and experimental results.

  6. An information theoretic synthesis and analysis of Compton profiles

    NASA Astrophysics Data System (ADS)

    Sears, Stephen B.; Gadre, Shridhar R.

    1981-11-01

    The information theoretic technique of entropy maximization is applied to Compton profile (CP) data, employing single and double distribution moments. Information theoretic quantities (Shannon entropies, information contents, and surprisals) are also presented. Based upon the "sum" constraint

  7. Task Analysis in Instructional Program Development. Theoretical Paper No. 52.

    ERIC Educational Resources Information Center

    Bernard, Michael E.

    A review of task analysis procedures beginning with the military training and systems development approach and covering the more recent work of Gagne, Klausmeier, Merrill, Resnick, and others is presented along with a plan for effective instruction based on the review of task analysis. Literature dealing with the use of task analysis in programmed…

  8. Experimental and theoretical analysis of long waves transformation on a sloping beach

    NASA Astrophysics Data System (ADS)

    Szmidt, K.; Staroszczyk, R.; Hedzielski, B.

    2009-09-01

    Transformation of long water waves on a sloping beach has been investigated, both experimentally and theoretically. Experiments have been conducted in a 64 m long and 0.6 m wide laboratory flume at the Institute of Hydro-Engineering, Polish Academy of Sciences, in Gdansk, Poland. Plane monochromatic waves have been generated by a piston-type wave maker situated at one end of the flume, and the sloping beach has been modelled by an inclined rigid ramp, of the slope equal to either 10 or 15 per cent, placed at a distance of 12 m from the generator wall. The water wave parameters have been recorded by a set of gauges installed along the flume, both in its constant- and varying-depth parts. Additionally, the run-up of the wave has been measured by a special conductivity gauge mounted on the ramp along the wave propagation direction. The experiments have been carried out for a wide range of wave lengths and amplitudes, falling, however, into the long-wave regime. The theoretical analysis of the wave propagation phenomenon has been performed by solving the problem in Lagrangian coordinates, since this permits simple formulation of boundary conditions on the moving boundaries of the fluid domain. However, the price for it is a more complicated structure of equations describing the fluid motion, compared to more traditional approaches based on the Eulerian description. In order to simplify the analysis, the shallow water approximation is applied. An essential simplification, on which the theoretical formulation proposed in this work is based, is a kinematical assumption that fluid motion is "columnar"; that is, the vertical material lines of fluid particles remain vertical during the motion. Fundamental equations of the theoretical description of the problem have been derived by following the Hamilton principle. Owing to the above kinematical assumption on the fluid motion, all the integrands in the action integral are expressed in terms of only the fluid horizontal

  9. A Theoretical Analysis of the k-Satisfiability Search Space

    NASA Astrophysics Data System (ADS)

    Sutton, Andrew M.; Howe, Adele E.; Whitley, L. Darrell

    Local search algorithms perform surprisingly well on the k-satisfiability (k-SAT) problem. However, few theoretical analyses of the k-SAT search space exist. In this paper we study the search space of the k-SAT problem and show that it can be analyzed by a decomposition. In particular, we prove that the objective function can be represented as a superposition of exactly k elementary landscapes. We show that this decomposition allows us to immediately compute the expectation of the objective function evaluated across neighboring points. We use this result to prove previously unknown bounds for local maxima and plateau width in the 3-SAT search space. We compute these bounds numerically for a number of instances and show that they are non-trivial across a large set of benchmarks.
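
    The quantity that the elementary-landscape decomposition gives in closed form, the average objective value over all Hamming-distance-1 neighbors, can be checked by brute force on a small random instance, as in the sketch below; the instance is random and the closed-form expression itself is not reproduced here.

```python
import random

random.seed(0)
n_vars, n_clauses = 20, 85

# Random 3-SAT instance: each clause is 3 signed literals (var index, polarity).
clauses = [[(random.randrange(n_vars), random.choice([True, False])) for _ in range(3)]
           for _ in range(n_clauses)]

def satisfied(assign):
    """Objective: number of clauses satisfied by the assignment."""
    return sum(any(assign[v] == pol for v, pol in cl) for cl in clauses)

assign = [random.choice([True, False]) for _ in range(n_vars)]

# Average the objective over all n Hamming-distance-1 neighbors (brute force).
neigh_vals = []
for i in range(n_vars):
    assign[i] = not assign[i]
    neigh_vals.append(satisfied(assign))
    assign[i] = not assign[i]

print("f(x)                     =", satisfied(assign))
print("mean f over 1-flip moves =", sum(neigh_vals) / n_vars)
```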

  10. Theoretical analysis of tin incorporated group IV alloy based QWIP

    NASA Astrophysics Data System (ADS)

    Pareek, Prakash; Das, Mukul K.; Kumar, S.

    2017-07-01

    A detailed theoretical investigation of the frequency response, responsivity and detectivity of a tin-incorporated GeSn-based quantum well infrared photodetector (QWIP) is presented in this paper. The rate equation and the continuity equation in the well are solved simultaneously to obtain the photo-generated current. Quantum mechanical carrier transport processes, such as carrier capture in the QW and escape of carriers from the well due to thermionic emission and tunneling, are considered in this calculation. The impact of the Sn composition in the GeSn well on the frequency response, bandwidth, responsivity and detectivity is studied. Results show that the Sn concentration and the applied bias have an important role in the performance of the device. Significant bandwidth is obtained at low reverse bias voltage, e.g., 150 GHz at a 0.14 V bias for a single Ge0.83Sn0.17 layer. Detectivity in the range of 10^7 cm Hz^(1/2) W^(-1) is obtained for particular choices of Sn composition and bias.

  11. Theoretical and software considerations for nonlinear dynamic analysis

    NASA Technical Reports Server (NTRS)

    Schmidt, R. J.; Dodds, R. H., Jr.

    1983-01-01

    In the finite element method for structural analysis, it is generally necessary to discretize the structural model into a very large number of elements to accurately evaluate displacements, strains, and stresses. As the complexity of the model increases, the number of degrees of freedom can easily exceed the capacity of present-day software systems. Improvements to structural analysis software, including more efficient use of existing hardware and improved structural modeling techniques, are discussed. One modeling technique that is used successfully in static linear and nonlinear analysis is multilevel substructuring. This research extends the use of multilevel substructure modeling to include dynamic analysis and defines the requirements for a general purpose software system capable of efficient nonlinear dynamic analysis. The multilevel substructuring technique is presented, the analytical formulations and computational procedures for dynamic analysis and nonlinear mechanics are reviewed, and an approach to the design and implementation of a general purpose structural software system is presented.
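
    The reduction step at the heart of substructuring can be illustrated with static (Guyan) condensation: internal degrees of freedom of a substructure are eliminated from its stiffness matrix, leaving a reduced matrix on the boundary degrees of freedom. The 4x4 matrix and the master/slave split below are arbitrary, and the additional mass-matrix reduction needed for dynamic analysis is not shown.

```python
import numpy as np

# Arbitrary symmetric positive-definite substructure stiffness matrix (4 DOFs).
K = np.array([[ 4., -1.,  0., -1.],
              [-1.,  4., -1.,  0.],
              [ 0., -1.,  4., -1.],
              [-1.,  0., -1.,  4.]])

master = [0, 1]          # boundary DOFs kept at the next level
slave  = [2, 3]          # internal DOFs condensed out

Kmm = K[np.ix_(master, master)]
Kms = K[np.ix_(master, slave)]
Ksm = K[np.ix_(slave, master)]
Kss = K[np.ix_(slave, slave)]

# Guyan / static condensation: K_red = Kmm - Kms Kss^{-1} Ksm.
K_red = Kmm - Kms @ np.linalg.solve(Kss, Ksm)
print(K_red)
```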

  12. Analysis of NASA JP-4 fire tests data and development of a simple fire model

    NASA Technical Reports Server (NTRS)

    Raj, P.

    1980-01-01

    The temperature, velocity and species concentration data obtained during the NASA fire tests (3m, 7.5m and 15m diameter JP-4 fires) were analyzed. Utilizing the data analysis, a simple theoretical model was formulated to predict the temperature and velocity profiles in JP-4 fires. The theoretical model, which does not take into account the detailed chemistry of combustion, is capable of predicting the extent of necking of the fire near its base.

  13. GENERAL: Theoretical investigation of synchronous totally asymmetric simple exclusion process on lattices with two consecutive junctions in multiple-input-multiple-output traffic system

    NASA Astrophysics Data System (ADS)

    Xiao, Song; Cai, Jiu-Ju; Wang, Rui-Li; Liu, Ming-Zhe; Liu, Fei

    2009-12-01

    In this paper, we study the dynamics of the synchronous totally asymmetric simple exclusion process (TASEP) on lattices with two consecutive junctions in a multiple-input-multiple-output (MIMO) traffic system, which consists of m sub-chains for the input and the output, respectively. In the middle of the system, there are n (n < m) sub-chains connected via two consecutive junctions to the m input sub-chains and the m output sub-chains, respectively. This configuration is a type of complex geometry that is relevant to many biological processes as well as to vehicular traffic flow. We use a mean-field approach to analyze this typical geometry and obtain theoretical results for stationary particle currents, density profiles, and a phase diagram. As the values of m and n increase together, the vertical phase boundary moves toward the right and the horizontal phase boundary moves upward in the phase diagram. The boundary conditions of the system, as well as the numbers of inputs and outputs, determine the non-equilibrium stationary states, the stationary-state phases, and the phase boundaries. We compare the results with computer simulations and find that they are in very good agreement with each other.
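
    For a feel of the parallel-update dynamics, the sketch below runs a synchronous TASEP on a single open chain, with every eligible particle attempting a forward hop in the same time step; the junction geometry of the paper is not reproduced, and the entry and exit rates are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
L, alpha, beta, p = 200, 0.3, 0.6, 1.0    # sites, entry rate, exit rate, hop prob.
tau = np.zeros(L, dtype=int)               # site occupation numbers

current = 0
steps = 20_000
for _ in range(steps):
    # Parallel (synchronous) bulk update: occupied site, empty right neighbor.
    hop = (tau[:-1] == 1) & (tau[1:] == 0) & (rng.random(L - 1) < p)
    tau[:-1][hop] = 0
    tau[1:][hop] = 1
    current += hop.sum()
    if tau[0] == 0 and rng.random() < alpha:    # injection at the left boundary
        tau[0] = 1
    if tau[-1] == 1 and rng.random() < beta:    # extraction at the right boundary
        tau[-1] = 0

print("mean bulk density:", tau[L // 4: 3 * L // 4].mean())
print("bulk current per bond per step:", current / steps / (L - 1))
```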

  14. Theoretical Analysis Of A Sagnac Fiber Optic Interferometer

    NASA Astrophysics Data System (ADS)

    Szustakowski, Mieczyslaw; Jaroszewicz, Leszek R.

    1990-04-01

    An analytical description of a closed optical fiber interferometer system, based on the Jones calculus, is presented. Adapting this calculus to the analysis of optical fiber elements allows a uniform description of systems built from single-mode optical fiber. The analysis of a Sagnac fiber optic interferometer is given as an example of the method's application.

  15. Theoretical analysis of single molecule spectroscopy lineshapes of conjugated polymers

    NASA Astrophysics Data System (ADS)

    Devi, Murali

    Conjugated polymers (CPs) exhibit a wide range of highly tunable optical properties. Quantitative and detailed understanding of the nature of the excitons responsible for such rich optical behavior has significant implications for better utilization of CPs in more efficient plastic solar cells and other novel optoelectronic devices. In general, samples of CPs are plagued with substantial inhomogeneous broadening due to various sources of disorder. Single molecule emission spectroscopy (SMES) offers a unique opportunity to investigate the energetics and dynamics of excitons and their interactions with phonon modes. The major subject of the present thesis is to analyze and understand room temperature SMES lineshapes for a particular CP, called poly(2,5-di-(2'-ethylhexyloxy)-1,4-phenylenevinylene) (DEH-PPV). A minimal quantum mechanical model of a two-level system coupled to a Brownian oscillator bath is utilized. The main objective is to identify the set of model parameters best fitting an SMES lineshape for each of about 200 samples of DEH-PPV, from which new insight into the nature of exciton-bath coupling can be gained. This project also entails developing a reliable computational methodology for quantum mechanical modeling of spectral lineshapes in general. Well-known optimization techniques such as gradient descent, genetic algorithms, and heuristic searches have been tested, employing an L2 measure between theoretical and experimental lineshapes for guiding the optimization. However, all of these tend to result in theoretical lineshapes qualitatively different from experimental ones. This is attributed to the ruggedness of the parameter space and the inadequacy of the L2 measure. On the other hand, when the original parameter space was dynamically reduced to a two-parameter space through feature searching, and the search-space paths were visualized using directed acyclic graphs (DAGs), the qualitative nature of the fitting improved significantly. For a more

  16. Graph theoretical analysis of EEG functional connectivity during music perception.

    PubMed

    Wu, Junjie; Zhang, Junsong; Liu, Chu; Liu, Dongwei; Ding, Xiaojun; Zhou, Changle

    2012-11-05

    The present study evaluated the effect of music on the large-scale structure of functional brain networks using graph theoretical concepts. While most studies on music perception have used Western music as the acoustic stimulus, Guqin music, representative of Eastern music, was selected for this experiment to increase our knowledge of music perception. Electroencephalography (EEG) was recorded from non-musician volunteers in three conditions: Guqin music, noise and silence backgrounds. Phase coherence was calculated in the alpha band and between all pairs of EEG channels to construct correlation matrices. Each resulting matrix was converted into a weighted graph using a threshold, and two network measures, the clustering coefficient and the characteristic path length, were calculated. Music perception was found to display a higher mean level of phase coherence. Over the whole range of thresholds, the clustering coefficient was larger while listening to music, whereas the path length was smaller. Networks in the music background still had a shorter characteristic path length even after correction for differences in mean synchronization level among background conditions. This topological change indicated a more optimal structure under music perception. Thus, prominent small-world properties are confirmed in functional brain networks. Furthermore, music perception shows an increase of functional connectivity and an enhancement of small-world network organization.
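
    A minimal sketch of the graph construction and the two measures is given below: threshold a symmetric coherence-like matrix (random here, in place of real alpha-band coherence) into a binary graph and compute the average clustering coefficient and characteristic path length with networkx. An actual analysis repeats this over a range of thresholds and frequency bands.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n = 32                                        # number of EEG channels

# Stand-in for an alpha-band phase-coherence matrix (symmetric, values in [0, 1]).
C = rng.random((n, n))
C = (C + C.T) / 2.0
np.fill_diagonal(C, 0.0)

threshold = 0.6
A = (C > threshold).astype(int)               # binary adjacency matrix
G = nx.from_numpy_array(A)

print("average clustering coefficient:", nx.average_clustering(G))
if nx.is_connected(G):
    print("characteristic path length  :", nx.average_shortest_path_length(G))
else:
    print("graph is disconnected at this threshold")
```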

  17. A game theoretic analysis of research data sharing

    PubMed Central

    Wiersma, Paulien H.; van Weerden, Anne; Schieving, Feike

    2015-01-01

    While reusing research data has evident benefits for the scientific community as a whole, decisions to archive and share these data are primarily made by individual researchers. In this paper we analyse, within a game theoretical framework, how sharing and reuse of research data affect individuals who share or do not share their datasets. We construct a model in which there is a cost associated with sharing datasets whereas reusing such sets implies a benefit. In our calculations, conflicting interests appear for researchers. Individual researchers are always better off not sharing and omitting the sharing cost, at the same time both sharing and not sharing researchers are better off if (almost) all researchers share. Namely, the more researchers share, the more benefit can be gained by the reuse of those datasets. We simulated several policy measures to increase benefits for researchers sharing or reusing datasets. Results point out that, although policies should be able to increase the rate of sharing researchers, and increased discoverability and dataset quality could partly compensate for costs, a better measure would be to directly lower the cost for sharing, or even turn it into a (citation-) benefit. Making data available would in that case become the most profitable, and therefore stable, strategy. This means researchers would willingly make their datasets available, and arguably in the best possible way to enable reuse. PMID:26401453

  18. A theoretical analysis of water transport through chondrocytes.

    PubMed

    Ateshian, G A; Costa, K D; Hung, C T

    2007-01-01

    Because of the avascular nature of adult cartilage, nutrients and waste products are transported to and from the chondrocytes by diffusion and convection through the extracellular matrix. The convective interstitial fluid flow within and around chondrocytes is poorly understood. This theoretical study demonstrates that the incorporation of a semi-permeable membrane when modeling the chondrocyte leads to the following findings: under mechanical loading of an isolated chondrocyte the intracellular fluid pressure is on the order of tens of Pascals and the transmembrane fluid outflow, on the order of picometers per second, takes several days to subside; consequently, the chondrocyte behaves practically as an incompressible solid whenever the loading duration is on the order of minutes or hours. When embedded in its extracellular matrix (ECM), the chondrocyte response is substantially different. Mechanical loading of the tissue leads to a fluid pressure difference between intracellular and extracellular compartments on the order of tens of kilopascals and the transmembrane outflow, on the order of a nanometer per second, subsides in about 1 h. The volume of the chondrocyte decreases concomitantly with that of the ECM. The interstitial fluid flow in the extracellular matrix is directed around the cell, with peak values on the order of tens of nanometers per second. The viscous fluid shear stress acting on the cell surface is several orders of magnitude smaller than the solid matrix shear stresses resulting from the ECM deformation. These results provide new insight toward our understanding of water transport in chondrocytes.

  19. Theoretical limits on detection and analysis of small earthquakes

    NASA Astrophysics Data System (ADS)

    Kwiatek, Grzegorz; Ben-Zion, Yehuda

    2016-08-01

    We investigate theoretical limits on detection and reliable estimates of source characteristics of small earthquakes using synthetic seismograms for shear/tensile dislocations on kinematic circular ruptures and observed seismic noise and properties of several acquisition systems (instrument response, sampling rate). Simulated source time functions for shear/tensile dislocation events with different magnitudes, static stress drops, and rupture velocities provide estimates for the amplitude and frequency content of P and S phases at various observation angles. The source time functions are convolved with a Green's function for a homogenous solid assuming given P, S wave velocities and attenuation coefficients and a given instrument response. The synthetic waveforms are superposed with average levels of the observed ambient seismic noise up to 1 kHz. The combined seismograms are used to calculate signal-to-noise ratios and expected frequency content of P and S phases at various locations. The synthetic simulations of signal-to-noise ratio reproduce observed ratios extracted from several well-recorded data sets. The results provide guidelines on detection of small events in various geological environments, along with information relevant to reliable analyses of earthquake source properties.

  20. A computational and theoretical analysis of falling frequency VLF emissions

    NASA Astrophysics Data System (ADS)

    Nunn, David; Omura, Yoshiharu

    2012-08-01

    Recently much progress has been made in the simulation and theoretical understanding of rising frequency triggered emissions and rising chorus. Both PIC and Vlasov VHS codes produce risers in the region downstream from the equator toward which the VLF waves are traveling. The VHS code only produces fallers or downward hooks with difficulty due to the coherent nature of wave particle interaction across the equator. With the VHS code we now confine the interaction region to be the region upstream from the equator, where inhomogeneity factor S is positive. This suppresses correlated wave particle interaction effects across the equator and the tendency of the code to trigger risers, and permits the formation of a proper falling tone generation region. The VHS code now easily and reproducibly triggers falling tones. The evolution of resonant particle current JE in space and time shows a generation point at -5224 km and the wavefield undergoes amplification of some 25 dB in traversing the nonlinear generation region. The current component parallel to wave magnetic field (JB) is positive, whereas it is negative for risers. The resonant particle trap shows an enhanced distribution function or `hill', whereas risers have a `hole'. According to recent theory (Omura et al., 2008, 2009) sweeping frequency is due primarily to the advective term. The nonlinear frequency shift term is now negative (˜-12 Hz) and the sweep rate of -800 Hz/s is approximately nonlinear frequency shift divided by TN, the transition time, of the order of a trapping time.

  1. Theoretical analysis of high-resolution digital mammography

    NASA Astrophysics Data System (ADS)

    Suryanarayanan, Sankararaman; Karellas, Andrew; Vedantham, Srinivasan; Sechopoulos, Ioannis

    2006-06-01

    The performance of a high-resolution charge coupled device-based full-field digital mammography imager was analysed using a mathematical framework based on an adaptation of cascaded linear systems theory described by other investigators. This work has been conducted in order to understand the impact of various design parameters on the physical performance characteristics of the imager. Specifically, the effect of pixel size, scintillator thickness and packing density, x-ray spectra, air kerma, dark current, charge integration time, and pixel fill-factor on the frequency dependent detective quantum efficiency was studied using a charge-coupled device as a reference platform. The imaging system was modelled as a series of physical processes with gain and spatial spreading. For each stage, the signal and noise power spectra were computed and propagated through the imaging chain as inputs to subsequent stages. Good agreement between experimental and theoretical predictions was obtained for various x-ray spectral conditions that were investigated. The modulation transfer function, MTF(f) and detective quantum efficiency DQE(f) characteristics obtained in this study are encouraging and comparable to other digital mammography systems. The results of this study strongly suggest the feasibility of large area scintillator-based digital mammography imagers with pixel sizes below 100 µm.
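
    The bookkeeping of a cascaded linear-systems model can be sketched with two stages, a quantum gain stage followed by a stochastic blur, propagating the mean signal and the noise power spectrum and forming DQE(f). The incident fluence, gain statistics and blur function below are arbitrary numbers, not the parameters of the detector modelled in the paper.

```python
import numpy as np

# Toy cascaded linear-systems model: Poisson input -> quantum gain -> stochastic blur.
q0 = 1.0e4                      # incident x-ray quanta per unit area (arbitrary)
g, var_g = 500.0, 200.0**2      # mean and variance of the conversion gain (arbitrary)
f = np.linspace(0.0, 5.0, 6)    # spatial frequency axis (cycles/mm)
T = np.exp(-(f / 3.0) ** 2)     # assumed Gaussian blur transfer function

# Stage 1, quantum gain: mean quanta and NPS after amplification of Poisson input.
q1 = g * q0
nps1 = (g**2) * q0 + var_g * q0

# Stage 2, stochastic blur: the correlated part of the NPS is filtered by T^2,
# while the uncorrelated (quantum) part q1 stays white.
nps2 = (nps1 - q1) * T**2 + q1

# Frequency-dependent DQE relative to the Poisson input SNR^2 = q0.
mtf = T
dqe = (q1 * mtf) ** 2 / (nps2 * q0)
for fi, d in zip(f, dqe):
    print(f"f = {fi:3.1f} cy/mm   DQE = {d:.3f}")
```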

  2. Genome mapping by random anchoring: A discrete theoretical analysis

    NASA Astrophysics Data System (ADS)

    Zhang, M. Q.; Marr, T. G.

    1993-11-01

    As a part of the international human genome project, large-scale genomic maps of human and other model organisms are being generated. More recently, mapping using various anchoring (as opposed to the traditional "fingerprinting") strategies have been proposed based largely on mathematical models. In all of the theoretical work dealing with anchoring, an anchor has been idealized as a point on a continuous, infinite-length genome. In general, it is not desirable to make these assumptions, since in practice they may be violated under a variety of actual biological situations. Here we analyze a discrete model that can be used to predict the expected progress made when mapping by random anchoring. By virtue of keeping all three length scales (genome length, clone length, and probe length) finite, our results for the random anchoring strategy are derived in full generality, which contain previous results as special cases and hence can have broad application for planning mapping experiments or assessing the accuracy of the continuum models. Finally, we pose a challenging nonrandom anchoring model corresponding to a more efficient mapping scheme.

  3. Dissecting Situational Strength: Theoretical Analysis and Empirical Tests

    DTIC Science & Technology

    2012-09-01

    stronger task-orientation, regardless of their natural level of trait conscientiousness (Bekkers, 2005; Fleeson, 2007). Some sample...analysis. Journal of Applied Psychology, 92, 410-424. Bekkers, R. (2005). Participation in voluntary associations: Relations with resources, personality

  4. Graph theoretic analysis of protein interaction networks of eukaryotes

    NASA Astrophysics Data System (ADS)

    Goh, K.-I.; Kahng, B.; Kim, D.

    2005-11-01

    Owing to recent progress in high-throughput experimental techniques, datasets of large-scale protein interactions for prototypical multicellular species, the nematode worm Caenorhabditis elegans and the fruit fly Drosophila melanogaster, have been assayed. The datasets are obtained mainly by using the yeast two-hybrid method, which yields both false positives and false negatives. Accordingly, while it is desirable to test such datasets through further wet experiments, here we invoke recently developed network theory to test such high-throughput datasets in a simple way. Based on the fact that the key biological processes indispensable to maintaining life are conserved across eukaryotic species, and on a comparison of structural properties of the protein interaction networks (PINs) of the two species with those of the yeast PIN, we find that while the worm and yeast PIN datasets exhibit similar structural properties, the current fly dataset, though the most comprehensively screened to date, does not correctly reflect generic structural properties as it stands. The modularity is suppressed and the connectivity correlation is lacking. Addition of interologs to the current fly dataset increases the modularity and enhances the occurrence of triangular motifs as well. The connectivity correlation function of the fly, however, remains distinct under such interolog additions, for which we present a possible scenario through in silico modeling.

  5. An Isotopic Dilution Experiment Using Liquid Scintillation: A Simple Two-System, Two-Phase Analysis.

    ERIC Educational Resources Information Center

    Moehs, Peter J.; Levine, Samuel

    1982-01-01

    A simple isotopic dilution analysis whose principles apply to methods of more complex radioanalyses is described. Suitable for clinical and instrumental analysis chemistry students, the experiment keeps manipulations to a minimum, involving only aqueous extraction before counting. Background information, procedures, and results are discussed.…

  7. Graph theoretical analysis of resting magnetoencephalographic functional connectivity networks

    PubMed Central

    Rutter, Lindsay; Nadar, Sreenivasan R.; Holroyd, Tom; Carver, Frederick W.; Apud, Jose; Weinberger, Daniel R.; Coppola, Richard

    2013-01-01

    Complex networks have been observed to comprise small-world properties, believed to represent an optimal organization of local specialization and global integration of information processing at reduced wiring cost. Here, we applied magnitude squared coherence to resting magnetoencephalographic time series in reconstructed source space, acquired from controls and patients with schizophrenia, and generated frequency-dependent adjacency matrices modeling functional connectivity between virtual channels. After configuring undirected binary and weighted graphs, we found that all human networks demonstrated highly localized clustering and short characteristic path lengths. The most conservatively thresholded networks showed efficient wiring, with topographical distance between connected vertices amounting to one-third as observed in surrogate randomized topologies. Nodal degrees of the human networks conformed to a heavy-tailed exponentially truncated power-law, compatible with the existence of hubs, which included theta and alpha bilateral cerebellar tonsil, beta and gamma bilateral posterior cingulate, and bilateral thalamus across all frequencies. We conclude that all networks showed small-worldness, minimal physical connection distance, and skewed degree distributions characteristic of physically-embedded networks, and that these calculations derived from graph theoretical mathematics did not quantifiably distinguish between subject populations, independent of bandwidth. However, post-hoc measurements of edge computations at the scale of the individual vertex revealed trends of reduced gamma connectivity across the posterior medial parietal cortex in patients, an observation consistent with our prior resting activation study that found significant reduction of synthetic aperture magnetometry gamma power across similar regions. The basis of these small differences remains unclear. PMID:23874288

  8. Accuracy Analysis of a Box-wing Theoretical SRP Model

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoya; Hu, Xiaogong; Zhao, Qunhe; Guo, Rui

    2016-07-01

    For the Beidou navigation satellite system (BDS), a high accuracy solar radiation pressure (SRP) model is necessary for high precision applications, especially with the establishment of the global BDS in the future, and the accuracy of the BDS broadcast ephemeris needs to be improved. Therefore, a theoretical box-wing SRP model with fine structure, including a conical shadow factor for the Earth and Moon, was established. We verified this SRP model with the GPS Block IIF satellites; the calculation was done with data from the PRN 1, 24, 25 and 27 satellites. The results show that the physical SRP model used for POD and orbit prediction of the GPS IIF satellites has higher accuracy than the Bern empirical model. The 3D RMS of the orbit is about 20 centimeters. The POD accuracy for both models is similar, but the prediction accuracy with the physical SRP model is more than doubled. We tested 1-day, 3-day and 7-day orbit predictions: the longer the prediction arc length, the more significant the improvement. The orbit prediction accuracies with the physical SRP model for 1-day, 3-day and 7-day arc lengths are 0.4 m, 2.0 m and 10.0 m, respectively, compared with 0.9 m, 5.5 m and 30 m for the Bern empirical model. We apply this approach to the BDS and derive an SRP model for the Beidou satellites, and we then test and verify the model with one month of Beidou data, for testing only. Initial results show that the model is good but needs more data for verification and improvement. The orbit residual RMS is similar to that obtained with our empirical force model, which only estimates forces in the along-track and cross-track directions and a y-bias, but the orbit overlap and SLR observation evaluations show some improvement. The remaining empirical force is reduced significantly for the present Beidou constellation.

  9. Theoretical performance analysis for CMOS based high resolution detectors.

    PubMed

    Jain, Amit; Bednarek, Daniel R; Rudin, Stephen

    2013-03-06

    High resolution imaging capabilities are essential for accurately guiding successful endovascular interventional procedures. Present x-ray imaging detectors are not always adequate due to their inherent limitations. The newly-developed high-resolution micro-angiographic fluoroscope (MAF-CCD) detector has demonstrated excellent clinical image quality; however, further improvement in performance and physical design may be possible using CMOS sensors. We have thus calculated the theoretical performance of two proposed CMOS detectors which may be used as a successor to the MAF. The proposed detectors have a 300 μm thick HL-type CsI phosphor, a 50 μm-pixel CMOS sensor with and without a variable gain light image intensifier (LII), and are designated MAF-CMOS-LII and MAF-CMOS, respectively. For the performance evaluation, linear cascade modeling was used. The detector imaging chains were divided into individual stages characterized by one of the basic processes (quantum gain, binomial selection, stochastic and deterministic blurring, additive noise). Ranges of readout noise and exposure were used to calculate the detectors' MTF and DQE. The MAF-CMOS showed slightly better MTF than the MAF-CMOS-LII, but the MAF-CMOS-LII showed far better DQE, especially for lower exposures. The proposed detectors can have improved MTF and DQE compared with the present high resolution MAF detector. The performance of the MAF-CMOS is excellent for the angiography exposure range; however it is limited at fluoroscopic levels due to additive instrumentation noise. The MAF-CMOS-LII, having the advantage of the variable LII gain, can overcome the noise limitation and hence may perform exceptionally for the full range of required exposures; however, it is more complex and hence more expensive.
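
    A minimal sketch of linear cascaded-systems analysis of the type described above is given below: mean signal and noise power spectrum (NPS) are propagated through quantum-gain, stochastic-blur and additive-noise stages and then combined into DQE(f). The stage parameters are placeholders, not the MAF-CMOS or MAF-CMOS-LII values.

```python
# Hedged sketch of linear cascaded-systems analysis: propagate mean signal and
# noise power spectrum (NPS) through elementary stages, then form MTF and DQE.
# Stage parameters below are placeholders, not the proposed detectors' values.
import numpy as np

f = np.linspace(0.0, 10.0, 256)          # spatial frequency (cycles/mm)

def gain_stage(q, nps, g, var_g):
    """Quantum gain stage: mean gain g, gain variance var_g."""
    return g * q, g**2 * nps + var_g * q

def stochastic_blur(q, nps, T):
    """Stochastic (scatter) blur with transfer function T(f)."""
    return q, (nps - q) * T**2 + q

def additive_noise(q, nps, S_add):
    return q, nps + S_add

# Example chain: x-ray quanta -> CsI absorption -> optical gain -> blur -> readout noise
q0 = 1.0e4                                             # incident quanta per mm^2 (placeholder)
q, nps, mtf = q0, np.full_like(f, q0), np.ones_like(f)

q, nps = gain_stage(q, nps, g=0.8, var_g=0.8 * 0.2)    # absorption (binomial selection)
q, nps = gain_stage(q, nps, g=1000.0, var_g=500.0)     # optical photon gain
T_csi = np.exp(-(f / 4.0) ** 2)                        # assumed CsI blur
q, nps = stochastic_blur(q, nps, T_csi)
mtf *= T_csi
q, nps = additive_noise(q, nps, S_add=1.0e6)           # electronic noise (placeholder)

dqe = (q * mtf) ** 2 / (q0 * nps)                      # DQE(f) = SNR_out^2 / SNR_in^2
```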

  10. An Optimality-Theoretic Analysis of Codas in Brazilian Portuguese

    ERIC Educational Resources Information Center

    Goodin-Mayeda, C. Elizabeth

    2015-01-01

    Brazilian Portuguese allows only /s, N, l, r/ syllable finally, and of these, only /s/ is realized faithfully (as well as /r/ for some speakers). In order to avoid unacceptable codas, dialects of Brazilian Portuguese employ such strategies as epenthesis, nasal absorption, debucalization, and gliding. The current analysis argues that codas in…

  11. Analysis of Theoretical Metaphors with Illustrations from Family Systems Theory.

    ERIC Educational Resources Information Center

    Rosenblatt, Paul C.

    Metaphoric analysis of family systems theory illustrates how metaphors and alternatives to those metaphors identify what a psychological theory has highlighted and obscured about the phenomena at its focus and how it has structured that phenomena. The most commonly used metaphors in family systems theory are the metaphors of system (system…

  12. Theoretical Consideration of Forcible Rape: A Critical Analysis.

    ERIC Educational Resources Information Center

    Clagett, Arthur F.

    1988-01-01

    Examined differences in hypothetical apperceptive fantasies of committing forcible rape, which are held by male subjects, as compared with the hypothetical apperceptive fantasies of being forcibly raped, held by the female subjects. Developed a critical analysis of social and cross-cultural variables affecting rape. (Author/ABL)

  14. Theoretical analysis of wake-induced parachute collapse

    SciTech Connect

    Spahr, H.R.; Wolf, D.F.

    1981-01-01

    During recent drop tests of a prototype weapon system, the parachute collapsed soon after it became fully inflated. The magnitude and duration of the collapses were severe enough to degrade parachute performance drastically. A computer-assisted analysis is presented which models parachute inflation, forebody and parachute wake generation, and interaction between the wake and the inflating or collapsing parachute. Comparison of the analysis results with full-scale drop test results shows good agreement for two parachute sizes; both parachutes were tested with and without permanent reefing. Computer-generated graphics (black and white drawings, color slides, and color movies) show the forebody and inflating parachute, the wake, and the wake and parachute interaction.

  15. Sequential Phenomena in Psychophysical Judgments: A Theoretical Analysis

    NASA Technical Reports Server (NTRS)

    Atkinson, R. C.; Carterette, E. C.; Kinchla, R. A.

    1962-01-01

    This paper deals with an analysis of psychophysical detection experiments designed to assess the limit of a human observer's level of sensitivity. A mathematical theory of the detection process is introduced that, in contrast to previous theories, provides an analysis of the sequential effects observed in psychophysical data. Two variations of the detection task are considered: information feedback and no-information feedback. In the feedback situation the subject is given information concerning the correctness of his responses, whereas in the no-feedback situation he is not. Data from a visual detection experiment with no-information feedback, and from an auditory detection experiment with information feedback, are analyzed in terms of the theory. Finally, some general results are derived concerning the relationship between performance in the feedback situation and the no-feedback situation.

  16. Theoretical Innovations in Combustion Stability Research: Integrated Analysis and Computation

    DTIC Science & Technology

    2011-04-14

    [Abstract not recovered; the record consists largely of report documentation fields. Recoverable information: contract number FA9550-10-C-0088; author David Kassoy; work on the thermomechanics of reactive gases (transient, spatially resolved modeling) and a presentation on this subject at a national conference; subject terms: combustion, thermomechanics, turbulent reacting flow, supercritical gases, rocket engine stability.]

  17. Theoretical modelling and meteorological analysis for the AASE mission

    NASA Technical Reports Server (NTRS)

    Schoeberl, Mark R.; Newman, Paul A.; Rosenfield, Joan E.; Stolarski, Richard S.

    1990-01-01

    Providing real time constituent data analysis and potential vorticity computations in support of the Airborne Arctic Stratospheric Experiment (AASE) is discussed. National Meteorological Center (NMC) meteorological data and potential vorticity computations derived from NMC data are projected onto aircraft coordinates and provided to the investigators in real time. Balloon and satellite constituent data are composited into modified Lagrangian mean coordinates. Various measurements are intercompared, trends deduced and reconstructions of constituent fields performed.

  18. [Analysis on the accuracy of simple selection method of Fengshi (GB 31)].

    PubMed

    Li, Zhixing; Zhang, Haihua; Li, Suhe

    2015-12-01

    To explore the accuracy of the simple selection method of Fengshi (GB 31). Through the study of ancient and modern data, the analysis and integration of acupuncture texts, the comparison of the locations of Fengshi (GB 31) given by doctors of successive dynasties, and the integration of modern anatomy, the modern simple selection method of Fengshi (GB 31) is confirmed to be the same as the traditional one. It is believed that the simple selection method is in accord with the human-oriented thought of TCM. Treatment by acupoints should be based on the nature of the presenting condition and the individual differences of patients. It is also proposed that Fengshi (GB 31) should be located by combining the simple method with body surface anatomical landmarks.

  19. Hydraulically interconnected vehicle suspension: theoretical and experimental ride analysis

    NASA Astrophysics Data System (ADS)

    Smith, Wade A.; Zhang, Nong; Jeyakumaran, Jeku

    2010-01-01

    In this paper, a previously derived model for the frequency-domain analysis of vehicles with hydraulically interconnected suspension (HIS) systems is applied to the ride analysis of a four-degree-of-freedom, roll-plane half-car under a rough road input. The entire road surface is assumed to be a realisation of a two-dimensional Gaussian homogenous and isotropic random process. The frequency responses of the half-car, in terms of bounce and roll acceleration, suspension deflection and dynamic tyre forces, are obtained under the road input of a single profile represented by its power spectral density function. Simulation results obtained for the roll-plane half-car fitted with an HIS and those with conventional suspensions are compared in detail. In addition, a sensitivity analysis of key parameters of the HIS with respect to ride performance is carried out through simulations, as illustrated by the sketch below. The paper also presents the experimental validation of the analytical results of the free and forced vibrations of the roll-plane half-car. The hydraulic and mechanical system layouts, data acquisition system and the external force actuation mechanism of the test set-up are described in detail. The methodology for free and forced vibration tests and the application of mathematical models to account for the effective damper valve pressure loss are explained. Results are provided for the free and forced vibration testing of the half-car with different mean operating pressures. Comparisons are also given between the test results and those obtained from the system model with estimated damper valve loss coefficients. Furthermore, discussions on the deficiencies and practical implications of the proposed model and suggestions for future investigation are provided. Finally, the key findings of the investigation on the ride performance of the roll-plane half-car are summarised.
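
    The core frequency-domain ride computation referred to above can be sketched as follows: the response power spectral density is the squared magnitude of the frequency response function times the road input PSD, and RMS ride measures follow by integration over frequency. The single-degree-of-freedom model and the road-spectrum constants below are assumptions for illustration only.

```python
# Hedged sketch of the frequency-domain ride computation: given a frequency
# response function H(f) from road input to (say) body bounce acceleration,
# the response PSD is |H|^2 times the road PSD, and the RMS follows by
# integration.  The body-model parameters and road spectrum are placeholders.
import numpy as np

f = np.linspace(0.1, 30.0, 3000)            # Hz
w = 2 * np.pi * f

# Placeholder single-DOF body model (mass m, damping c, stiffness k).
m, c, k = 400.0, 1500.0, 20000.0
H_disp = k / (-m * w**2 + 1j * c * w + k)   # road displacement -> body displacement
H_acc = -w**2 * H_disp                      # road displacement -> body acceleration

# Simple ISO-8608-style road displacement PSD (assumed form and constants).
v = 20.0                                    # vehicle speed, m/s
S_road = 1e-5 * (f / v) ** (-2) / v

S_acc = np.abs(H_acc) ** 2 * S_road
rms_acc = np.sqrt(np.sum(S_acc) * (f[1] - f[0]))   # RMS body acceleration
```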

  20. Graph theoretical analysis of complex networks in the brain.

    PubMed

    Stam, Cornelis J; Reijneveld, Jaap C

    2007-07-05

    Since the discovery of small-world and scale-free networks the study of complex systems from a network perspective has taken an enormous flight. In recent years many important properties of complex networks have been delineated. In particular, significant progress has been made in understanding the relationship between the structural properties of networks and the nature of dynamics taking place on these networks. For instance, the 'synchronizability' of complex networks of coupled oscillators can be determined by graph spectral analysis. These developments in the theory of complex networks have inspired new applications in the field of neuroscience. Graph analysis has been used in the study of models of neural networks, anatomical connectivity, and functional connectivity based upon fMRI, EEG and MEG. These studies suggest that the human brain can be modelled as a complex network, and may have a small-world structure both at the level of anatomical as well as functional connectivity. This small-world structure is hypothesized to reflect an optimal situation associated with rapid synchronization and information transfer, minimal wiring costs, as well as a balance between local processing and global integration. The topological structure of functional networks is probably restrained by genetic and anatomical factors, but can be modified during tasks. There is also increasing evidence that various types of brain disease such as Alzheimer's disease, schizophrenia, brain tumours and epilepsy may be associated with deviations of the functional network topology from the optimal small-world pattern.

  1. Theoretical analysis of the 2D thermal cloaking problem

    NASA Astrophysics Data System (ADS)

    Alekseev, G. V.; Spivak, Yu E.; Yashchenko, E. N.

    2017-01-01

    Coefficient inverse problems for a model of heat scattering with variable coefficients, arising in the development of design technologies for thermal cloaking devices, are considered. By the optimization method, these problems are reduced to corresponding control problems. The material parameters (radial and azimuthal conductivities) of the inhomogeneous anisotropic medium filling the thermal cloak play the role of controls, and the heat scattering model acts as a functional constraint. The unique solvability of the direct heat scattering problem in a Sobolev space is proved and new estimates of the solutions are established. Using these results, the solvability of the control problem is proved and the optimality system is derived. Based on an analysis of the optimality system, stability estimates of the optimal solutions are established and an efficient numerical algorithm for solving thermal cloaking problems is proposed.

  2. A Theoretical Analysis of Thermal Radiation from Neutron Stars

    NASA Technical Reports Server (NTRS)

    Applegate, James H.

    1993-01-01

    As soon as it was realized that the direct URCA process is allowed by many modern nuclear equations of state, an analysis of its effect on the cooling of neutron stars was undertaken. An initial study showed that the occurrence of the direct URCA process makes the surface temperature of a neutron star drop suddenly by almost an order of magnitude when the cold wave from the core reaches the surface, when the star is a few years old. The results of this study are published in Page and Applegate. As work in progress, we are presently extending the above study. Improved expressions for the effect of nucleon pairing on the neutrino emissivity and specific heat are now available, and we have incorporated them in a recalculation of the rate of the direct URCA process.

  3. Theoretic analysis on electric conductance of nano-wire transistors

    NASA Astrophysics Data System (ADS)

    Tsai, N.-C.; Chiang, Y.-R.; Hsu, S.-L.

    2010-01-01

    By employing the commercial software nanoMos and Vienna ab Initio Simulation Package ( VASP), the performance of nano-wire field-effect transistors is investigated. In this paper, the Density-Gradient Model (DG Model) is used to describe the carrier transport behavior of the nano-wire transistor under quantum effects. The analysis of the drain current with respect to channel length, body dielectric constant and gate contact work function is presented. In addition, Fermi energy and DOS (Density of State) are introduced to explore the relative stability of carrier transport and electrical conductance for the silicon crystal with dopants. Finally, how the roughness of the surface of the silicon-based crystal is affected by dopants and their allocation can be illuminated by a few broken bonds between atoms near the skin of the crystal.

  4. Theoretical analysis of the density within an orbiting molecular shield

    NASA Technical Reports Server (NTRS)

    Hueser, J. E.; Brock, F. J.

    1976-01-01

    An analytical model based on the kinetic theory of a drifting Maxwellian gas is used to determine the nonequilibrium molecular density distribution within a hemispherical shell open aft with its axis parallel to its velocity. Separate numerical results are presented for the primary and secondary density distribution components due to the drifting Maxwellian gas for speed ratios between 2.5 and 10. An analysis is also made of the density component due to gas desorbed from the wall of the hemisphere, and numerical results are presented for the density distribution. It is shown that the adsorption process may be completely ignored. The results are applicable to orbital trajectories in any planet-atmosphere system and interplanetary transfer trajectories. Application to the earth's atmosphere is mentioned briefly.

  6. Theoretical and numerical analysis of the corneal air puff test

    NASA Astrophysics Data System (ADS)

    Simonini, Irene; Angelillo, Maurizio; Pandolfi, Anna

    2016-08-01

    Ocular analyzers are used in the current clinical practice to estimate, by means of a rapid air jet, the intraocular pressure and other eye's parameters. In this study, we model the biomechanical response of the human cornea to the dynamic test with two approaches. In the first approach, the corneal system undergoing the air puff test is regarded as a harmonic oscillator. In the second approach, we use patient-specific geometries and the finite element method to simulate the dynamic test on surgically treated corneas. In spite of the different levels of approximation, the qualitative response of the two models is very similar, and the most meaningful results of both models are not significantly affected by the inclusion of viscosity of the corneal material in the dynamic analysis. Finite element calculations reproduce the observed snap-through of the corneal shell, including two applanate configurations, and compare well with in vivo images provided by ocular analyzers, suggesting that the mechanical response of the cornea to the air puff test is actually driven only by the elasticity of the stromal tissue. These observations agree with the dynamic characteristics of the test, since the frequency of the air puff impulse is several orders of magnitude larger than the reciprocal of any reasonable relaxation time for the material, downplaying the role of viscosity during the fast snap-through phase.
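
    Since the first approach treats the corneal system as a harmonic oscillator, a minimal lumped-parameter sketch of such a model is given below; the symbols and the forcing term are assumptions, not the authors' formulation.

```latex
% Minimal lumped-parameter sketch of the first (oscillator) approach; the
% symbols and forcing are assumptions, not the model used in the paper.
\[
m_{\mathrm{eff}}\,\ddot{x}(t) + c\,\dot{x}(t) + k\,x(t)
   = A_{\mathrm{jet}}\bigl[p_{\mathrm{jet}}(t) - p_{\mathrm{IOP}}\bigr],
\qquad
\omega_0=\sqrt{k/m_{\mathrm{eff}}},\quad \zeta=\frac{c}{2\sqrt{k\,m_{\mathrm{eff}}}},
\]
where $x(t)$ is the apex displacement, $k$ lumps the elastic stiffness of the
stroma, $c$ a viscous contribution, $p_{\mathrm{jet}}(t)$ the air-puff pressure
history over an effective area $A_{\mathrm{jet}}$, and $p_{\mathrm{IOP}}$ the
intraocular pressure; the observation above that viscosity matters little during
the fast snap-through corresponds to the response being governed essentially by
$k$ and $\omega_0$ rather than by $c$.
```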

  7. Analysis of an information-theoretic model for communication

    NASA Astrophysics Data System (ADS)

    Dickman, Ronald; Moloney, Nicholas R.; Altmann, Eduardo G.

    2012-12-01

    We study the cost-minimization problem posed by Ferrer i Cancho and Solé in their model of communication that aimed at explaining the origin of Zipf’s law (2003 Proc. Nat. Acad. Sci. 100 788). Direct analysis shows that the minimum cost is min{λ,1 - λ}, where λ determines the relative weights of speaker’s and hearer’s costs in the total, as shown in several previous works using different approaches. The nature and multiplicity of the minimizing solution change discontinuously at λ = 1/2, being qualitatively different for λ < 1/2, λ > 1/2, and λ = 1/2. Zipf’s law is found only in a vanishing fraction of the minimum-cost solutions at λ = 1/2 and therefore is not explained by this model. Imposing the further condition of equal costs yields distributions substantially closer to Zipf’s law ones, but significant differences persist. We also investigate the solutions reached with the previously used minimization algorithm and find that they correctly recover global minimum states at the transition.
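
    The sketch below illustrates, by brute-force enumeration over small binary signal-object matrices, the λ-weighted cost described above; the specific entropy definitions follow a commonly cited form of the model and are assumptions here, but with this form the enumeration reproduces the min{λ, 1 − λ} minimum cost.

```python
# Brute-force illustration (assumed form of the costs): enumerate all binary
# signal-object association matrices for small n, m and minimize
#   Omega(lambda) = lambda * H(R|S) + (1 - lambda) * H(S),
# with entropies normalized so that the minimum cost equals min{lambda, 1 - lambda}.
import itertools
import numpy as np

def entropies(a):
    """a: (n_signals, m_objects) binary link matrix; links taken as equiprobable."""
    m = a.shape[1]
    p_joint = a / a.sum()
    p_s = p_joint.sum(axis=1)                        # P(signal)
    h_s = -sum(p * np.log(p) for p in p_s if p > 0)
    h_r_given_s = -sum(p_joint[i, j] * np.log(p_joint[i, j] / p_s[i])
                       for i in range(a.shape[0]) for j in range(m)
                       if p_joint[i, j] > 0)
    return h_s / np.log(m), h_r_given_s / np.log(m)  # normalized entropies

def min_cost(lam, n=3, m=3):
    best = np.inf
    for bits in itertools.product([0, 1], repeat=n * m):
        a = np.array(bits).reshape(n, m)
        if not a.any() or (a.sum(axis=0) == 0).any():   # every object needs a signal
            continue
        h_s, h_rs = entropies(a)
        best = min(best, lam * h_rs + (1 - lam) * h_s)
    return best

print(min_cost(0.3), min_cost(0.7))   # both close to 0.3 = min{lambda, 1 - lambda}
```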

  8. Molecular magnetic properties of heteroporphyrins: a theoretical analysis.

    PubMed

    Campomanes, Pablo; Menéndez, María Isabel; Cárdenas-Jirón, Gloria Inés; Sordo, Tomás Luis

    2007-11-14

    B3LYP/6-31G(d) optimization of porphyrin, tetraphenylporphyrin and their 21,23-diheteroatom substituted derivatives with O, S, and Se heteroatoms was performed. A planar macrocycle was found in all cases except 21,23-dioxatetraphenylporphyrin which presents only slight deviations from planarity. A Bader analysis uncovers the presence of S-S and Se-Se interactions in the four corresponding heteroporphyrins, which appreciably distort the original unsubstituted macrocycles. In the minimum energy structures of heterotetraphenylporphyrins the four meso phenyl groups slant alternatively to right or left so that the two pairs of opposite phenyls present a staggered conformation. The pi current induced by a perpendicular magnetic field in porphyrin bifurcates across both types of pyrrole subunits but the presence of O, S and Se heteroatoms in 21,23-diheteroporphyrins causes a diminution of the current density through the inner section of the two heterorings and, consequently, the current path goes then through the outer section of these rings. The NICS values at the ring critical points of the heterorings are much larger (in absolute value) than those at the pyrrole ring critical points but appreciably smaller than that at the ring critical point of a pyrrole molecule. In agreement with experimental data, the (1)H NMR spectra present appreciable downfield shifts for the beta H atoms of the heterorings in the 21,23-heterosubstituted molecules.

  9. Spectral derivative analysis of solar spectroradiometric measurements: Theoretical basis

    NASA Astrophysics Data System (ADS)

    Hansell, R. A.; Tsay, S.-C.; Pantina, P.; Lewis, J. R.; Ji, Q.; Herman, J. R.

    2014-07-01

    Spectral derivative analysis, a commonly used tool in analytical spectroscopy, is described for studying cirrus clouds and aerosols using hyperspectral, remote sensing data. The methodology employs spectral measurements from the 2006 Biomass-burning Aerosols in Southeast Asia field study to demonstrate the approach. Spectral peaks associated with the first two derivatives of measured/modeled transmitted spectral fluxes are examined in terms of their shapes, magnitudes, and positions from 350 to 750 nm, where variability is largest. Differences in spectral features between media are mainly associated with particle size and imaginary term of the complex refractive index. Differences in derivative spectra permit cirrus to be conservatively detected at optical depths near the optical thin limit of ~0.03 and yield valuable insight into the composition and hygroscopic nature of aerosols. Biomass-burning smoke aerosols/cirrus generally exhibit positive/negative slopes, respectively, across the 500-700 nm spectral band. The effect of cirrus in combined media is to increase/decrease the slope as cloud optical thickness decreases/increases. For thick cirrus, the slope tends to 0. An algorithm is also presented which employs a two model fit of derivative spectra for determining relative contributions of aerosols/clouds to measured data, thus enabling the optical thickness of the media to be partitioned. For the cases examined, aerosols/clouds explain ~83%/17% of the spectral signatures, respectively, yielding a mean cirrus cloud optical thickness of 0.08 ± 0.03, which compared reasonably well with those retrieved from a collocated Micropulse Lidar Network Instrument (0.09 ± 0.04). This method permits extracting the maximum informational content from hyperspectral data for atmospheric remote sensing applications.
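
    A minimal sketch of how such derivative spectra and a two-model fit can be formed from measured fluxes is shown below; the smoothing window and the model aerosol/cirrus spectra are placeholders, not the field-study retrievals.

```python
# Hedged sketch: first- and second-derivative spectra of a transmitted flux
# measurement, plus a two-component least-squares fit of the first-derivative
# spectrum against model aerosol and cirrus derivative spectra.  The smoothing
# window and the model spectra are placeholders, not the field-study retrievals.
import numpy as np

def derivative_spectra(wavelength_nm, flux, window=11):
    """Return smoothed dF/dlambda and d2F/dlambda2 over the measured band."""
    kernel = np.ones(window) / window
    f_s = np.convolve(flux, kernel, mode="same")       # simple boxcar smoothing
    d1 = np.gradient(f_s, wavelength_nm)
    d2 = np.gradient(d1, wavelength_nm)
    return d1, d2

def two_model_fit(d1_measured, d1_aerosol, d1_cirrus):
    """Least-squares weights of aerosol and cirrus derivative spectra."""
    basis = np.column_stack([d1_aerosol, d1_cirrus])
    coeffs, *_ = np.linalg.lstsq(basis, d1_measured, rcond=None)
    fractions = coeffs / coeffs.sum()                   # relative contributions
    return fractions
```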

  10. Theoretical analysis of droplet transition from Cassie to Wenzel state

    NASA Astrophysics Data System (ADS)

    Liu, Tian-Qing; Yan-Jie, Li; Xiang-Qin, Li; Wei, Sun

    2015-11-01

    Whether droplets transit from the Cassie to the Wenzel state (C-W) on a textured surface is the touchstone of whether the superhydrophobicity of the surface is maintained. However, the C-W transition mechanism, especially the spontaneous transition of small droplets, is still not fully understood. In this study, the interface free-energy gradient of a small droplet is first proposed and derived as the driving force for its C-W evolution, based on energy and gradient analysis. A physical and mathematical model of the C-W transition is then established once the C-W driving force (transition pressure), the resistance, and the parameters of the meniscus beneath the droplet are formulated. The results show that the micro/nano structural parameters significantly affect the C-W driving force and resistance. The smaller the pillar diameter and pitch, the smaller the C-W transition pressure and the larger the resistance; consequently, the C-W transition is difficult to complete for droplets on nano-textured surfaces. Meanwhile, if the posts are too short, the front of the curved liquid-air interface below the droplet will easily touch the structured substrate even though the three-phase contact line (TPCL) has not depinned. When the posts are high enough, the TPCL beneath the drop must move before the meniscus can reach the substrate. As a result, a droplet on a textured surface with short pillars easily completes its C-W evolution. On the other hand, the smaller the droplet, the easier the C-W shift, since the transition pressure becomes larger, which explains well why an evaporating drop collapses spontaneously from the composite to the Wenzel state. Besides, both the intrinsic and advancing contact angles affect the C-W transition: the greater the two angles, the harder the transition. In the end, the C-W transition parameters and the critical conditions measured in the literature are calculated and compared, and the calculations accord well with
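
    For reference, the two limiting apparent contact angles involved in the C-W transition are the standard Wenzel and Cassie-Baxter relations below; these are textbook forms, not the paper's specific transition-pressure expressions.

```latex
% The two limiting apparent contact angles referred to in the abstract
% (standard textbook forms, not the paper's driving-force model):
\[
\cos\theta_{\mathrm{W}} = r\,\cos\theta_{\mathrm{Y}},
\qquad
\cos\theta_{\mathrm{CB}} = f\left(\cos\theta_{\mathrm{Y}} + 1\right) - 1,
\]
where $\theta_{\mathrm{Y}}$ is the intrinsic (Young) contact angle, $r\ge 1$ the
roughness ratio of the textured surface, and $f$ the wetted solid fraction in the
composite (Cassie) state; the C-W transition discussed above is the collapse from
the composite state ($\theta_{\mathrm{CB}}$) to the fully wetted Wenzel state
($\theta_{\mathrm{W}}$).
```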

  11. Theoretical analysis of oxygen supply to contracted skeletal muscle.

    PubMed

    Groebe, K; Thews, G

    1986-01-01

    Honig and collaborators reported striking contradictions in current understanding of O2 supply to working skeletal muscle. Therefore we re-examined the problem by means of a new composite computer simulation. As inclusion of erythrocytic O2 desaturation and oxygen transport and consumption inside the muscle cell into a single model would entail immense numerical difficulties, we broke up the whole process into its several components: (1) O2 desaturation of erythrocytes; (2) O2 transport and consumption in the muscle fiber; (3) capillary transit time, characterizing the period of contact between red cell and muscle fiber. The "erythrocyte model" as well as the "muscle fiber model" both consist of a central core cylinder surrounded by a concentric diffusion layer representing the extracellular resistance to O2 diffusion (Fig. 1). Resistance layers in both models are to be conceived of as one and the same anatomical structure--even though in each model their shape is adapted to the respective geometry. By means of this overlap region a spatial connexion between both is given, whereas temporal coherence governing O2 fluxes and red cell spacing is derived from capillary transit time. Analysis of the individual components is outlined as follows: Assuming axial symmetry of the problem, a numerical algorithm was employed to solve the parabolic system of partial differential equations describing red cell O2 desaturation. Hb-O2 reaction kinetics, free and facilitated O2 diffusion in axial and radial directions, and red cell movement in the capillary were considered. Resulting time courses of desaturation, which are considerably faster than the ones computed by Honig et al., are given in a table in the original (not reproduced here; see also Fig. 3). Furthermore, we studied the respective importance of the several processes included in our model: omission of longitudinal diffusion increased desaturation time by 15% to 23%, whereas the effects of reaction kinetics and axial movement were 5% and 2% respectively. For time
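
    For orientation only, the classical steady-state Krogh-type balance that radial-diffusion models of this kind extend is recalled below; the notation is an assumption here, and the full model adds the kinetic and transport effects listed above.

```latex
% For orientation only: the classical steady-state Krogh-type balance that
% radial-diffusion models of this kind extend (notation is an assumption here).
\[
\frac{D\,\alpha}{r}\,\frac{d}{dr}\!\left(r\,\frac{dP_{\mathrm{O_2}}}{dr}\right) = M,
\]
where $D$ is the O$_2$ diffusion coefficient, $\alpha$ the solubility,
$P_{\mathrm{O_2}}(r)$ the oxygen partial pressure at radius $r$, and $M$ the
volumetric consumption rate; the composite model above additionally couples this
balance to Hb-O$_2$ reaction kinetics, facilitated and axial diffusion, and
red-cell motion through the capillary.
```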

  12. Global Study of the Simple Pendulum by the Homotopy Analysis Method

    ERIC Educational Resources Information Center

    Bel, A.; Reartes, W.; Torresi, A.

    2012-01-01

    Techniques are developed to find all periodic solutions in the simple pendulum by means of the homotopy analysis method (HAM). This involves the solution of the equations of motion in two different coordinate representations. Expressions are obtained for the cycles and periods of oscillations with a high degree of accuracy in the whole range of…
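
    For reference, the exact result that the HAM expansions must reproduce is the standard pendulum period expressed through the complete elliptic integral of the first kind:

```latex
% The quantity the HAM series approximates: the exact period of the simple
% pendulum for amplitude theta_0 (standard result, included for reference).
\[
\ddot{\theta} + \frac{g}{L}\sin\theta = 0,
\qquad
T(\theta_0) = 4\sqrt{\frac{L}{g}}\,K\!\left(\sin\frac{\theta_0}{2}\right),
\quad
K(k)=\int_0^{\pi/2}\frac{d\phi}{\sqrt{1-k^2\sin^2\phi}},
\]
so an accurate method must capture the growth of $T$ as $\theta_0 \to \pi$
(the separatrix), which is presumably the "whole range" referred to above.
```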

  13. A Simple Card Trick: Teaching Qualitative Data Analysis Using a Deck of Playing Cards

    ERIC Educational Resources Information Center

    Waite, Duncan

    2011-01-01

    Yet today, despite recent welcome additions, relatively little is written about teaching qualitative research. Why is that? This article reports out a relatively simple, yet appealing, pedagogical move, a lesson the author uses to teach qualitative data analysis. Data sorting and categorization, the use of tacit and explicit theory in data…

  14. Isolating the Effects of Training Using Simple Regression Analysis: An Example of the Procedure.

    ERIC Educational Resources Information Center

    Waugh, C. Keith

    This paper provides a case example of simple regression analysis, a forecasting procedure used to isolate the effects of training from an identified extraneous variable. This case example focuses on results of a three-day sales training program to improve bank loan officers' knowledge, skill-level, and attitude regarding solicitation and sale of…
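
    A minimal sketch of the forecasting procedure described above is given below, with invented numbers: a regression fitted to the pre-training trend is extrapolated past the training date, and the gap between observed and forecast outcomes is attributed to training.

```python
# Hedged sketch of the forecasting procedure: fit a simple regression to the
# pre-training trend of the outcome (here, monthly loan sales) against the
# extraneous variable (time), extrapolate it past the training date, and treat
# the gap between observed and forecast values as the effect attributable to
# training.  All numbers below are invented for illustration.
import numpy as np

months_pre = np.arange(1, 13)                        # 12 pre-training months
sales_pre = 50 + 1.2 * months_pre + np.random.default_rng(0).normal(0, 2, 12)

slope, intercept = np.polyfit(months_pre, sales_pre, deg=1)

months_post = np.arange(13, 19)                      # 6 post-training months
sales_post_observed = 50 + 1.2 * months_post + 8     # invented post-training data
sales_post_forecast = intercept + slope * months_post

training_effect = sales_post_observed - sales_post_forecast
print(f"estimated effect of training: {training_effect.mean():.1f} sales/month")
```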

  15. A Simple Gauss-Newton Procedure for Covariance Structure Analysis with High-Level Computer Languages.

    ERIC Educational Resources Information Center

    Cudeck, Robert; And Others

    1993-01-01

    An implementation of the Gauss-Newton algorithm for the analysis of covariance structure that is specifically adapted for high-level computer languages is reviewed. This simple method for estimating structural equation models is useful for a variety of standard models, as is illustrated. (SLD)
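
    A minimal sketch of a Gauss-Newton iteration for covariance structure analysis is given below; the one-factor model and the unweighted least-squares discrepancy are illustrative assumptions, not the specific models treated in the article.

```python
# Hedged sketch of Gauss-Newton for covariance structure analysis: minimize the
# (unweighted) least-squares discrepancy between the sample covariance s and the
# model-implied covariance sigma(theta).  The one-factor model used here is only
# an example of a "standard model".
import numpy as np

def half_vec(m):
    """Stack the lower triangle (including diagonal) of a symmetric matrix."""
    idx = np.tril_indices(m.shape[0])
    return m[idx]

def sigma(theta, p):
    """One-factor model: loadings lam (length p), unique variances psi (length p)."""
    lam, psi = theta[:p], theta[p:]
    return np.outer(lam, lam) + np.diag(psi)

def gauss_newton(s_sample, theta0, p, n_iter=50, eps=1e-6):
    s = half_vec(s_sample)
    theta = theta0.astype(float)
    for _ in range(n_iter):
        resid = s - half_vec(sigma(theta, p))
        # Numerical Jacobian of half_vec(sigma) with respect to theta.
        jac = np.empty((s.size, theta.size))
        for k in range(theta.size):
            d = np.zeros_like(theta)
            d[k] = eps
            jac[:, k] = (half_vec(sigma(theta + d, p)) - half_vec(sigma(theta, p))) / eps
        step, *_ = np.linalg.lstsq(jac, resid, rcond=None)   # Gauss-Newton step
        theta = theta + step
        if np.linalg.norm(step) < 1e-8:
            break
    return theta
```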

  16. A simple protocol for NMR analysis of the enantiomeric purity of chiral hydroxylamines.

    PubMed

    Tickell, David A; Mahon, Mary F; Bull, Steven D; James, Tony D

    2013-02-15

    A practically simple three-component chiral derivatization protocol for determining the enantiopurity of chiral hydroxylamines by (1)H NMR spectroscopic analysis is described, involving their treatment with 2-formylphenylboronic acid and enantiopure BINOL to afford a mixture of diastereomeric nitrono-boronate esters whose ratio is an accurate reflection of the enantiopurity of the parent hydroxylamine.

  20. Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity

    Treesearch

    Harbin Li; Steven G. McNulty

    2007-01-01

    Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL...

  1. Thermal analysis of a simple-cycle gas turbine in biogas power generation

    SciTech Connect

    Yomogida, D.E.; Thinh, Ngo Dinh

    1995-09-01

    This paper investigates the technical feasibility of utilizing small simple-cycle gas turbines (25 kW to 125 kW) for biogas power generation through thermal analysis. A computer code, GTPower, was developed to evaluate the performance of small simple-cycle gas turbines specifically for biogas combustion. The 125 kW Solar gas turbine (Tital series) was selected as the base-case gas turbine for biogas combustion. After its design parameters and typical operating conditions were entered into GTPower, the code output expected values of thermal efficiency and specific work. For a sensitivity analysis, the GTPower model output thermal efficiency and specific work profiles for the various operating conditions encountered in biogas combustion. These results will assist future research projects in determining the type of combustion device most suitable for biogas power generation.
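
    A textbook-level sketch of the kind of simple-cycle calculation such a code performs is given below; the temperatures, pressure ratio and component efficiencies are placeholders, not GTPower inputs or outputs.

```python
# Hedged, textbook-level sketch of a simple-cycle (Brayton) calculation of the
# kind a code like GTPower performs: thermal efficiency and specific work of an
# ideal-gas cycle with component efficiencies.  All numbers are placeholders.
cp, gamma = 1005.0, 1.4            # J/(kg K), ratio of specific heats (air)
T1, T3 = 298.0, 1200.0             # compressor inlet / turbine inlet temperature, K
r_p = 4.0                          # pressure ratio (small gas turbine, assumed)
eta_c, eta_t = 0.80, 0.85          # compressor / turbine isentropic efficiencies

k = (gamma - 1.0) / gamma
T2s = T1 * r_p**k                  # isentropic compressor exit temperature
T2 = T1 + (T2s - T1) / eta_c
T4s = T3 / r_p**k                  # isentropic turbine exit temperature
T4 = T3 - eta_t * (T3 - T4s)

w_comp = cp * (T2 - T1)            # specific compressor work, J/kg
w_turb = cp * (T3 - T4)            # specific turbine work, J/kg
q_in = cp * (T3 - T2)              # heat added in the combustor (biogas), J/kg

specific_work = w_turb - w_comp    # J/kg of air
thermal_efficiency = specific_work / q_in
print(f"eta = {thermal_efficiency:.3f}, w = {specific_work / 1000:.1f} kJ/kg")
```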

  2. Modeling and Theoretical Analysis of On-Chip Phase-Sensitive Amplifiers

    DTIC Science & Technology

    2016-04-19

    We performed a theoretical study of phase-sensitive amplification in semiconductor optical amplifiers (SOAs), so as to... wavelength mixing in semiconductor optical amplifiers (SOAs) based on coupled-mode equations. The proposed model applies to all kinds of SOA... [Remainder of the record consists of report documentation fields: final report for the period 1-Jun-2014 to 30-Nov-2015, dated 19-04-2016; distribution unlimited.]

  3. Theoretical and empirical dimensions of the Aberdeen Glaucoma Questionnaire: a cross sectional survey and principal component analysis

    PubMed Central

    2013-01-01

    Background To develop patient-reported outcome instruments, statistical techniques (e.g., principal components analysis; PCA) are used to examine correlations among items and identify interrelated item subsets (empirical factors). However, interpretation and labelling of empirical factors is a subjective process, lacking precision or conceptual basis. We report a novel and reproducible method for mapping between theoretical and empirical factor structures. We illustrate the method using the pilot Aberdeen Glaucoma Questionnaire (AGQ), a new measure of glaucoma-related disability developed using the International Classification of Functioning and Disability (ICF) as a theoretical framework and tested in a sample representing the spectrum of glaucoma severity. Methods We used the ICF to code AGQ item content before mailing the AGQ to a UK sample (N = 1349) selected to represent people with a risk factor for glaucoma and people with glaucoma across a range of severity. Reflecting uncertainty in the theoretical framework (items with multiple ICF codes), an exploratory PCA was conducted. The theoretical structure informed our interpretation of the empirical structure and guided the selection of theoretically-derived factor labels. We also explored the discrimination of the AGQ across glaucoma severity groups. Results 656 (49%) completed questionnaires were returned. The data yielded a 7-factor solution with a simple structure (using cut-off point of a loading of 0.5) that together accounted for 63% of variance in the scores. The mapping process resulted in allocation of the following theoretically-derived factor labels: 1) Seeing Functions: Participation; 2) Moving Around & Communication; 3) Emotional Functions; 4) Walking Around Obstacles; 5) Light; 6) Seeing Functions: Domestic & Social Life; 7) Mobility. Using the seven factor scores as independent variables in a discriminant function analysis, the AGQ scores resulted in correct glaucoma severity grading of 32

  4. Theoretical analysis of cell separation based on cell surface marker density.

    PubMed

    Chalmers, J J; Zborowski, M; Moore, L; Mandal, S; Fang, B B; Sun, L

    1998-07-05

    A theoretical analysis was performed to determine the number of fractions a multidisperse, immunomagnetically labeled cell population can be separated into based on the surface marker (antigen) density. A number of assumptions were made in this analysis: that there is a proportionality between the number of surface markers on the cell surface and the number of immunomagnetic labels bound; that this surface marker density is independent of the cell diameter; and that there is only the presence of magnetic and drag forces acting on the cell. Due to the normal distribution of cell diameters, a "randomizing" effect enters into the analysis, and an analogy between the "theoretical plate" analysis of distillation, adsorption, and chromatography can be made. Using the experimentally determined, normal distribution of cell diameters for human lymphocytes and a breast cancer cell line, and fluorescent activated cell screening data of specific surface marker distributions, examples of theoretical plate calculations were made and discussed.

  5. Chapter 2. General theoretical perspectives of narrative analysis of substance use-related dependency.

    PubMed

    Larsson, Sam; Lilja, John; von Braun, Therese; Sjöblom, Yvonne

    2013-11-01

    This chapter provides a short introduction to, and an overview for, using narrative analysis in the understanding of the use and misuse of alcohol and drugs. Important theoretical and methodological dimensions are discussed. Some tentative conclusions, limitations, and unresolved critical issues concerning the use of narrative research methods in the analysis of substance use-related dependency problems are also presented.

  6. A Theoretical Analysis of Potential Extinction Properties of Behavior-Specific Manual Restraint

    ERIC Educational Resources Information Center

    Cipani, Ennio; Thomas, Melvin; Martin, Daniel

    2007-01-01

    This paper will examine possible extinction properties of behavior-specific manual restraint. It will analyze the possibility of extinction being produced via restraint with respect to the target behavior's possible environmental functions. The theoretical analysis will involve the analysis of behavioral properties of restraint during two temporal…

  7. Study on the Theoretical Foundation of Business English Curriculum Design Based on ESP and Needs Analysis

    ERIC Educational Resources Information Center

    Zhu, Wenzhong; Liu, Dan

    2014-01-01

    Based on a review of the literature on ESP and needs analysis, this paper is intended to offer some theoretical supports and inspirations for BE instructors to develop BE curricula for business contexts. It discusses how the theory of need analysis can be used in Business English curriculum design, and proposes some principles of BE curriculum…

  8. Recent applications of theoretical analysis to V/STOL inlet design

    NASA Technical Reports Server (NTRS)

    Stockman, N. O.

    1979-01-01

    The theoretical analysis methods (potential flow and boundary layer) used at Lewis are described. Recent applications to Navy V/STOL aircraft, for both fixed and tilt-nacelle configurations, are presented. A three-dimensional inlet analysis computer program is described and preliminary results are presented. An approach to the optimum design of inlets for high-angle-of-attack operation is discussed.

  9. Theoretical analysis of the particle gradient distribution in centrifugal field during solidification

    SciTech Connect

    Liu, Q.; Jiao, Y.; Yang, Y.; Hu, Z.

    1996-12-01

    A theoretical analysis is presented to obtain the gradient distribution of particles in a centrifugal field, by which the particle distribution in a gradient composite can be predicted. Particle movement in the liquid is described, and the gradient distribution of particles in the composite is calculated for a centrifugal field during solidification. The factors that affect the particle distribution and its gradient are discussed in detail. The theoretical analysis indicates that a composite zone and a blank zone exist in the gradient composite, which can be directed to the outside or inside of the tubular composite by the density difference between the particles and the liquid metal. A comparison of the SiC particle distribution in an Al matrix composite produced by centrifugal casting between the theoretical model and experiment shows that the theoretical analysis is reasonable.

  10. A simple polyol-free synthesis route to Gd2O3 nanoparticles for MRI applications: an experimental and theoretical study

    NASA Astrophysics Data System (ADS)

    Ahrén, Maria; Selegård, Linnéa; Söderlind, Fredrik; Linares, Mathieu; Kauczor, Joanna; Norman, Patrick; Käll, Per-Olov; Uvdal, Kajsa

    2012-08-01

    Chelated gadolinium ions, e.g., Gd-DTPA, are today used clinically as contrast agents for magnetic resonance imaging (MRI). An attractive alternative contrast agent is composed of gadolinium oxide nanoparticles as they have shown to provide enhanced contrast and, in principle, more straightforward molecular capping possibilities. In this study, we report a new, simple, and polyol-free way of synthesizing 4-5-nm-sized Gd2O3 nanoparticles at room temperature, with high stability and water solubility. The nanoparticles induce high proton relaxivity compared to Gd-DTPA, showing r1 and r2 values almost as high as those for free Gd3+ ions in water. The Gd2O3 nanoparticles are capped with acetate and carbonate groups, as shown with infrared spectroscopy, near-edge X-ray absorption spectroscopy, X-ray photoelectron spectroscopy and combined thermogravimetric and mass spectroscopy analysis. Interpretation of infrared spectroscopy data is corroborated by extensive quantum chemical calculations. This nanomaterial is easily prepared and has promising properties to function as a core in a future contrast agent for MRI.

  11. An in-depth analysis of theoretical frameworks for the study of care coordination

    PubMed Central

    Van Houdt, Sabine; Heyrman, Jan; Vanhaecht, Kris; Sermeus, Walter; De Lepeleire, Jan

    2013-01-01

    Introduction Complex chronic conditions often require long-term care from various healthcare professionals. Thus, maintaining quality care requires care coordination. Concepts for the study of care coordination require clarification to develop, study and evaluate coordination strategies. In 2007, the Agency for Healthcare Research and Quality defined care coordination and proposed five theoretical frameworks for exploring care coordination. This study aimed to update current theoretical frameworks and clarify key concepts related to care coordination. Methods We performed a literature review to update existing theoretical frameworks. An in-depth analysis of these theoretical frameworks was conducted to formulate key concepts related to care coordination. Results Our literature review found seven previously unidentified theoretical frameworks for studying care coordination. The in-depth analysis identified fourteen key concepts that the theoretical frameworks addressed. These were ‘external factors’, ‘structure’, ‘tasks characteristics’, ‘cultural factors’, ‘knowledge and technology’, ‘need for coordination’, ‘administrative operational processes’, ‘exchange of information’, ‘goals’, ‘roles’, ‘quality of relationship’, ‘patient outcome’, ‘team outcome’, and ‘(inter)organizational outcome’. Conclusion These 14 interrelated key concepts provide a base to develop or choose a framework for studying care coordination. The relational coordination theory and the multi-level framework are interesting as these are the most comprehensive. PMID:23882171

  12. Quantitative Analysis of the Nanopore Translocation Dynamics of Simple Structured Polynucleotides

    PubMed Central

    Schink, Severin; Renner, Stephan; Alim, Karen; Arnaut, Vera; Simmel, Friedrich C.; Gerland, Ulrich

    2012-01-01

    Nanopore translocation experiments are increasingly applied to probe the secondary structures of RNA and DNA molecules. Here, we report two vital steps toward establishing nanopore translocation as a tool for the systematic and quantitative analysis of polynucleotide folding: 1), Using α-hemolysin pores and a diverse set of different DNA hairpins, we demonstrate that backward nanopore force spectroscopy is particularly well suited for quantitative analysis. In contrast to forward translocation from the vestibule side of the pore, backward translocation times do not appear to be significantly affected by pore-DNA interactions. 2), We develop and verify experimentally a versatile mesoscopic theoretical framework for the quantitative analysis of translocation experiments with structured polynucleotides. The underlying model is based on sequence-dependent free energy landscapes constructed using the known thermodynamic parameters for polynucleotide basepairing. This approach limits the adjustable parameters to a small set of sequence-independent parameters. After parameter calibration, the theoretical model predicts the translocation dynamics of new sequences. These predictions can be leveraged to generate a baseline expectation even for more complicated structures where the assumptions underlying the one-dimensional free energy landscape may no longer be satisfied. Taken together, backward translocation through α-hemolysin pores combined with mesoscopic theoretical modeling is a promising approach for label-free single-molecule analysis of DNA and RNA folding. PMID:22225801

  13. Predicting excitonic gaps of semiconducting single-walled carbon nanotubes from a field theoretic analysis

    DOE PAGES

    Konik, Robert M.; Sfeir, Matthew Y.; Misewich, James A.

    2015-02-17

    We demonstrate that a non-perturbative framework for the treatment of the excitations of single walled carbon nanotubes based upon a field theoretic reduction is able to accurately describe experiment observations of the absolute values of excitonic energies. This theoretical framework yields a simple scaling function from which the excitonic energies can be read off. This scaling function is primarily determined by a single parameter, the charge Luttinger parameter of the tube, which is in turn a function of the tube chirality, dielectric environment, and the tube's dimensions, thus expressing disparate influences on the excitonic energies in a unified fashion. As a result, we test this theory explicitly on the data reported in [Nano Letters 5, 2314 (2005)] and [Phys. Rev. B 82, 195424 (2010)] and so demonstrate the method works over a wide range of reported excitonic spectra.

  14. Predicting excitonic gaps of semiconducting single-walled carbon nanotubes from a field theoretic analysis

    SciTech Connect

    Konik, Robert M.; Sfeir, Matthew Y.; Misewich, James A.

    2015-02-17

    We demonstrate that a non-perturbative framework for the treatment of the excitations of single walled carbon nanotubes based upon a field theoretic reduction is able to accurately describe experiment observations of the absolute values of excitonic energies. This theoretical framework yields a simple scaling function from which the excitonic energies can be read off. This scaling function is primarily determined by a single parameter, the charge Luttinger parameter of the tube, which is in turn a function of the tube chirality, dielectric environment, and the tube's dimensions, thus expressing disparate influences on the excitonic energies in a unified fashion. As a result, we test this theory explicitly on the data reported in [NanoLetters 5, 2314 (2005)] and [Phys. Rev. B 82, 195424 (2010)] and so demonstrate the method works over a wide range of reported excitonic spectra.

  15. Predicting excitonic gaps of semiconducting single-walled carbon nanotubes from a field theoretic analysis

    NASA Astrophysics Data System (ADS)

    Konik, Robert M.; Sfeir, Matthew Y.; Misewich, James A.

    2015-02-01

    We demonstrate that a nonperturbative framework for the treatment of the excitations of single-walled carbon nanotubes based upon a field theoretic reduction is able to accurately describe experiment observations of the absolute values of excitonic energies. This theoretical framework yields a simple scaling function from which the excitonic energies can be read off. This scaling function is primarily determined by a single parameter, the charge Luttinger parameter of the tube, which is in turn a function of the tube chirality, dielectric environment, and the tube's dimensions, thus expressing disparate influences on the excitonic energies in a unified fashion. We test this theory explicitly on the data reported by Dukovic et al. [Nano Lett. 5, 2314 (2005), 10.1021/nl0518122] and Sfeir et al. [Phys. Rev. B 82, 195424 (2010), 10.1103/PhysRevB.82.195424] and so demonstrate the method works over a wide range of reported excitonic spectra.

  16. ViSimpl: Multi-View Visual Analysis of Brain Simulation Data

    PubMed Central

    Galindo, Sergio E.; Toharia, Pablo; Robles, Oscar D.; Pastor, Luis

    2016-01-01

    After decades of independent morphological and functional brain research, a key point in neuroscience nowadays is to understand the combined relationships between the structure of the brain and its components and their dynamics on multiple scales, ranging from circuits of neurons at micro or mesoscale to brain regions at macroscale. With such a goal in mind, there is a vast amount of research focusing on modeling and simulating activity within neuronal structures, and these simulations generate large and complex datasets which have to be analyzed in order to gain the desired insight. In such context, this paper presents ViSimpl, which integrates a set of visualization and interaction tools that provide a semantic view of brain data with the aim of improving its analysis procedures. ViSimpl provides 3D particle-based rendering that allows visualizing simulation data with their associated spatial and temporal information, enhancing the knowledge extraction process. It also provides abstract representations of the time-varying magnitudes supporting different data aggregation and disaggregation operations and giving also focus and context clues. In addition, ViSimpl tools provide synchronized playback control of the simulation being analyzed. Finally, ViSimpl allows performing selection and filtering operations relying on an application called NeuroScheme. All these views are loosely coupled and can be used independently, but they can also work together as linked views, both in centralized and distributed computing environments, enhancing the data exploration and analysis procedures. PMID:27774062

  17. ViSimpl: Multi-View Visual Analysis of Brain Simulation Data.

    PubMed

    Galindo, Sergio E; Toharia, Pablo; Robles, Oscar D; Pastor, Luis

    2016-01-01

    After decades of independent morphological and functional brain research, a key point in neuroscience nowadays is to understand the combined relationships between the structure of the brain and its components and their dynamics on multiple scales, ranging from circuits of neurons at micro or mesoscale to brain regions at macroscale. With such a goal in mind, there is a vast amount of research focusing on modeling and simulating activity within neuronal structures, and these simulations generate large and complex datasets which have to be analyzed in order to gain the desired insight. In such context, this paper presents ViSimpl, which integrates a set of visualization and interaction tools that provide a semantic view of brain data with the aim of improving its analysis procedures. ViSimpl provides 3D particle-based rendering that allows visualizing simulation data with their associated spatial and temporal information, enhancing the knowledge extraction process. It also provides abstract representations of the time-varying magnitudes supporting different data aggregation and disaggregation operations and giving also focus and context clues. In addition, ViSimpl tools provide synchronized playback control of the simulation being analyzed. Finally, ViSimpl allows performing selection and filtering operations relying on an application called NeuroScheme. All these views are loosely coupled and can be used independently, but they can also work together as linked views, both in centralized and distributed computing environments, enhancing the data exploration and analysis procedures.

  18. Theoretical study on the torsional direction of simple ethylenoids after electronic relaxation at the conical intersection in the cis-trans photoisomerization

    NASA Astrophysics Data System (ADS)

    Amatatsu, Yoshiaki

    2015-07-01

    The conical intersections (CIXs) for the cis-trans photoisomerization of simple ethylenoids, such as ethylene, styrene and stilbene, have been calculated by the complete active space self-consistent field method, in order to check whether a simple relationship found for fluorene-based ethylenoids also holds for simple ethylenoids. The four CIXs, which are distinguished by the directions of the wagging and rocking motions of the anionic part against the ethylenic bond, are thereby found to be related to the torsional direction of the ethylenic bond after electronic relaxation at each CIX.

  19. A simple and precise method for quantitative analysis of lumefantrine by planar chromatography

    PubMed Central

    Hamrapurkar, Purnima; Phale, Mitesh; Pawar, Sandeep; Patil, Priti; Gandhi, Mittal

    2010-01-01

    A simple, precise and sensitive high-performance thin-layer chromatographic (HPTLC) method has been developed and validated for lumefantrine, a drug of choice in the treatment of malaria (P. falciparum). Silica gel 60 F254 HPTLC precoated plates were used for quantitative analysis. Methanol-water (9.5 + 0.5, v/v) was used as the solvent system. Densitometric scanning was carried out with a deuterium lamp set at a detection wavelength of 266 nm. The response to lumefantrine was linear in the concentration range of 1.25-12.50 μg/ml. The method was developed and validated in accordance with the requirements of the ICH guidelines (Q2B), and can therefore be applied to the quantitative analysis of lumefantrine in commercial pharmaceutical dosage forms. The proposed method is simple, sensitive, precise and accurate, confirming its pharmaceutical application in routine quality control analysis. PMID:23781415

  20. Petri nets modeling and analysis using extended bag-theoretic relational algebra.

    PubMed

    Kim, Y C; Kim, T G

    1996-01-01

    Petri nets are a powerful modeling tool for studying reactive, concurrent systems. Analysis of the nets can reveal important information concerning the behavior of a modeled system. While various means for the analysis of the nets have been developed, a major limitation in the analysis is the explosion of the state space in simulation. An efficient method to manage large state spaces would overcome such a limitation. This paper proposes a framework for the modeling and analysis of Petri nets using relational database technologies. The formalism of the framework is based on a bag-theoretic relational algebra extended from the conventional one. Within the framework, Petri nets are formalized by bag relations, and analysis algorithms are developed based on such formal relations. Properties associated with the nets are formalized by queries described in terms of the bag-theoretic relational algebra. The framework has been realized in a commercial relational database system using standard SQL.
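
    The multiset (bag) view of Petri net markings and firing that motivates a bag-theoretic relational encoding can be illustrated as follows; this is only an in-memory sketch, not the paper's SQL realization.

```python
# Hedged illustration of the bag (multiset) view of Petri nets that underlies a
# bag-theoretic relational encoding: markings, pre/post bags, enabledness and
# firing are all multiset operations.  This is not the paper's SQL realization.
from collections import Counter

# pre[t] / post[t]: tokens consumed / produced by transition t, as bags of places.
pre = {"t1": Counter({"p1": 1}), "t2": Counter({"p2": 1, "p3": 1})}
post = {"t1": Counter({"p2": 1, "p3": 1}), "t2": Counter({"p1": 1})}

def enabled(marking, t):
    return all(marking[p] >= n for p, n in pre[t].items())

def fire(marking, t):
    assert enabled(marking, t), f"{t} is not enabled"
    new = Counter(marking)
    new.subtract(pre[t])       # remove consumed tokens
    new.update(post[t])        # add produced tokens
    return +new                # drop zero-count entries

m0 = Counter({"p1": 1})
m1 = fire(m0, "t1")            # Counter({'p2': 1, 'p3': 1})
m2 = fire(m1, "t2")            # back to Counter({'p1': 1})
print(m1, m2)
```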

  1. An application of a simple computer program for neutron activation analysis.

    PubMed

    Abdel Basset, N

    2001-01-01

    A simple computer program is designed for the estimation of elemental concentration values in complex samples by the neutron activation analysis technique. The program is applied to an Egyptian cement sample irradiated at the Egyptian Research Reactor-1 (ET-RR-1). The data obtained are compared with the reported values. The time consumed by such calculations is remarkably reduced in comparison with the routine work.
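
    A minimal sketch of the relative (comparator) calculation that such a program typically automates is given below; the nuclide, count data and masses are placeholders, not the ET-RR-1 cement results.

```python
# Hedged sketch of the relative (comparator) calculation such a program
# typically automates: the element concentration in the sample follows from the
# decay-corrected photopeak count rates of sample and standard.  All numbers
# are placeholders, not the ET-RR-1 cement results.
import math

def decay_corrected_rate(counts, live_time_s, cooling_time_s, half_life_s):
    lam = math.log(2.0) / half_life_s
    return counts / live_time_s * math.exp(lam * cooling_time_s)

def concentration(sample, standard, std_conc_ppm, sample_mass_g, std_mass_g):
    """sample/standard: dicts with counts, live_time_s, cooling_time_s keys."""
    half_life_s = 2.24 * 60                       # e.g. Al-28 (placeholder nuclide)
    r_sam = decay_corrected_rate(sample["counts"], sample["live_time_s"],
                                 sample["cooling_time_s"], half_life_s)
    r_std = decay_corrected_rate(standard["counts"], standard["live_time_s"],
                                 standard["cooling_time_s"], half_life_s)
    return std_conc_ppm * (r_sam / r_std) * (std_mass_g / sample_mass_g)

c = concentration({"counts": 15200, "live_time_s": 300, "cooling_time_s": 600},
                  {"counts": 40100, "live_time_s": 300, "cooling_time_s": 300},
                  std_conc_ppm=50000.0, sample_mass_g=0.5, std_mass_g=0.5)
print(f"estimated concentration: {c:.0f} ppm")
```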

  2. Simple Expressions to Predict Actual Performance of Vapor Compression Air-conditioner Concerning Heat Island Analysis

    NASA Astrophysics Data System (ADS)

    Shinomiya, Naruaki; Nishimura, Nobuya; Iyota, Hiroyuki

    Simple expressions to predict the actual performance of vapor compression air-conditioners, such as room air-conditioners and multi-split type air-conditioners, are proposed. In these simple expressions, the coefficient of performance is expressed as a function of outdoor temperature, indoor temperature, air-conditioning load rate, air flow rate of the indoor unit (in the case of room air-conditioners) and the difference in height between the indoor unit and the outdoor unit (in the case of multi-split type air-conditioners). These simple expressions are obtained by neural network analysis (rule extraction from facts, RF5). The actual performance data of air-conditioners, used as training and teaching data for the network, are obtained by numerical simulations developed by the authors. Calculation results of these simple expressions generally agree with the experimental values of other researchers. Furthermore, the amount of exhaust heat from air-conditioners calculated with these expressions is roughly 10% lower than that obtained with the traditional approach, which assumes a constant coefficient of performance.
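
    As a rough illustration of how such a "simple expression" for the coefficient of performance can be fitted, the Python sketch below performs a least-squares fit of COP against the indoor/outdoor temperature difference and the load rate. The synthetic training data and the chosen regression terms are illustrative assumptions only, not the authors' RF5-derived expressions or simulation results.

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative performance data (NOT the authors' simulation results):
        # outdoor temperature (C), indoor temperature (C), load rate (0-1) -> COP
        t_out = rng.uniform(25.0, 40.0, 300)
        t_in = rng.uniform(22.0, 28.0, 300)
        load = rng.uniform(0.2, 1.0, 300)
        cop = 6.0 - 0.08 * (t_out - t_in) - 1.5 * (load - 0.6) ** 2 + rng.normal(0.0, 0.05, 300)

        # "Simple expression": least-squares fit of COP on a handful of terms
        X = np.column_stack([np.ones_like(load), t_out - t_in, load, load ** 2])
        coef, *_ = np.linalg.lstsq(X, cop, rcond=None)
        print(coef)   # COP ~= c0 + c1*(t_out - t_in) + c2*load + c3*load**2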

  3. Theoretical analysis of a ceramic plate thickness-shear mode piezoelectric transformer.

    PubMed

    Xu, Limei; Zhang, Ying; Fan, Hui; Hu, Junhui; Yang, Jiashi

    2009-03-01

    We perform a theoretical analysis on a ceramic plate piezoelectric transformer operating with thickness-shear modes. Mindlin's first-order theory of piezoelectric plates is employed, and a forced vibration solution is obtained. Transforming ratio, resonant frequencies, and vibration mode shapes are calculated, and the effects of plate thickness and electrode dimension are examined.

  4. A Theoretical Analysis of Social Interactions in Computer-based Learning Environments: Evidence for Reciprocal Understandings.

    ERIC Educational Resources Information Center

    Jarvela, Sanna; Bonk, Curtis Jay; Lehtinen, Erno; Lehti, Sirpa

    1999-01-01

    Presents a theoretical and empirical analysis of social interactions in computer-based learning environments. Explores technology use to support reciprocal understanding between teachers and students based on three technology-based learning environments in Finland and the United States, and discusses situated learning, cognitive apprenticeships,…

  5. Forming Future Specialists' Valeological Competency: Theoretical Analysis of Domestic and Foreign Scholars' Views

    ERIC Educational Resources Information Center

    Maksymchuk, Borys

    2016-01-01

    The article deals with the analysis of theoretical and methodical principles of forming students' valeological competency in the process of physical education in higher pedagogical education institutions in domestic and foreign scientific literature. It has been defined that one of the most prominent factors in future teachers' training for…

  6. Security Analysis of Selected AMI Failure Scenarios Using Agent Based Game Theoretic Simulation

    SciTech Connect

    Abercrombie, Robert K; Schlicher, Bob G; Sheldon, Frederick T

    2014-01-01

    Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the Advanced Metering Infrastructure (AMI) functional domain, for which the National Electric Sector Cybersecurity Organization Resource (NESCOR) working group has currently documented 29 failure scenarios. The strategy for the game was developed by analyzing five electric sector representative failure scenarios contained in the AMI functional domain. These five selected scenarios are characterized into three specific threat categories affecting confidentiality, integrity and availability (CIA). The analysis using our ABGT simulation demonstrates how to model the AMI functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA.

  7. The theoretical analysis of an instrument for linear and angular displacements of the steered wheel measuring

    NASA Astrophysics Data System (ADS)

    Wach, K.

    2016-09-01

    The paper presents a theoretical analysis of a measuring instrument for determining the translation and rotation of the stub axle carrying the steered wheel relative to the car body. The instrument is made of nine links with elongation sensors embedded in them. One of several possible structures for an instrument of this kind is presented. Based on the solution of the system of geometrical constraint equations of the device, a numerical analysis of the measurement accuracy was conducted.

  8. Multisensory Bayesian Inference Depends on Synapse Maturation during Training: Theoretical Analysis and Neural Modeling Implementation.

    PubMed

    Ursino, Mauro; Cuppini, Cristiano; Magosso, Elisa

    2017-03-01

    Recent theoretical and experimental studies suggest that in multisensory conditions, the brain performs a near-optimal Bayesian estimate of external events, giving more weight to the more reliable stimuli. However, the neural mechanisms responsible for this behavior, and its progressive maturation in a multisensory environment, are still insufficiently understood. The aim of this letter is to analyze this problem with a neural network model of audiovisual integration, based on probabilistic population coding-the idea that a population of neurons can encode probability functions to perform Bayesian inference. The model consists of two chains of unisensory neurons (auditory and visual) topologically organized. They receive the corresponding input through a plastic receptive field and reciprocally exchange plastic cross-modal synapses, which encode the spatial co-occurrence of visual-auditory inputs. A third chain of multisensory neurons performs a simple sum of auditory and visual excitations. The work includes a theoretical part and a computer simulation study. We show how a simple rule for synapse learning (consisting of Hebbian reinforcement and a decay term) can be used during training to shrink the receptive fields and encode the unisensory likelihood functions. Hence, after training, each unisensory area realizes a maximum likelihood estimate of stimulus position (auditory or visual). In cross-modal conditions, the same learning rule can encode information on prior probability into the cross-modal synapses. Computer simulations confirm the theoretical results and show that the proposed network can realize a maximum likelihood estimate of auditory (or visual) positions in unimodal conditions and a Bayesian estimate, with moderate deviations from optimality, in cross-modal conditions. Furthermore, the model explains the ventriloquism illusion and, looking at the activity in the multimodal neurons, explains the automatic reweighting of auditory and visual inputs
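
    A minimal numerical sketch of the reliability-weighted (inverse-variance) cue combination that such a near-optimal Bayesian estimator implements in cross-modal conditions; the stimulus positions and noise levels below are illustrative assumptions, not values from the model or the study.

        import numpy as np

        def fuse_audiovisual(x_a, sigma_a, x_v, sigma_v):
            # Inverse-variance (reliability) weighting of the two unisensory estimates
            w_a, w_v = 1.0 / sigma_a**2, 1.0 / sigma_v**2
            x_hat = (w_a * x_a + w_v * x_v) / (w_a + w_v)    # fused position estimate
            sigma_hat = np.sqrt(1.0 / (w_a + w_v))           # fused uncertainty
            return x_hat, sigma_hat

        # Illustrative example: a reliable visual cue dominates a noisy auditory cue,
        # reproducing the ventriloquism-like capture of the auditory estimate
        print(fuse_audiovisual(x_a=10.0, sigma_a=4.0, x_v=2.0, sigma_v=1.0))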

  9. Theoretical analysis of BER performance of nonlinearly amplified FBMC/OQAM and OFDM signals

    NASA Astrophysics Data System (ADS)

    Bouhadda, Hanen; Shaiek, Hmaied; Roviras, Daniel; Zayani, Rafik; Medjahdi, Yahia; Bouallegue, Ridha

    2014-12-01

    In this paper, we introduce an analytical study of the impact of high-power amplifier (HPA) nonlinear distortion (NLD) on the bit error rate (BER) of multicarrier techniques. Two schemes of multicarrier modulations are considered in this work: the classical orthogonal frequency division multiplexing (OFDM) and the filter bank-based multicarrier using offset quadrature amplitude modulation (FBMC/OQAM), including different HPA models. According to Bussgang's theorem, the in-band NLD is modeled as a complex gain in addition to an independent noise term for a Gaussian input signal. The BER performance of OFDM and FBMC/OQAM modulations, transmitting over additive white Gaussian noise (AWGN) and Rayleigh fading channels, is theoretically investigated and compared to simulation results. For simple HPA models, such as the soft envelope limiter, it is easy to compute the BER theoretical expression. However, for other HPA models or for real measured HPA, BER derivation is generally intractable. In this paper, we propose a general method based on a polynomial fitting of the HPA characteristics and we give theoretical expressions for the BER for any HPA model.
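
    A minimal sketch of two ingredients used in this kind of analysis, assuming a soft-envelope-limiter HPA and a Gaussian input: a polynomial fit of the AM/AM characteristic and the Bussgang decomposition of the output into a complex gain times the input plus uncorrelated distortion noise. The limiter model and parameter values are illustrative, not the measured HPA of the study.

        import numpy as np

        rng = np.random.default_rng(0)

        # Gaussian (OFDM-like) complex baseband input
        n = 20000
        x = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

        def soft_limiter(s, a_sat=1.0):
            # Simple HPA stand-in: clip the envelope at a_sat, keep the phase
            r = np.abs(s)
            return np.where(r <= a_sat, s, a_sat * s / r)

        y = soft_limiter(x)

        # Polynomial fit of the AM/AM characteristic (output envelope vs. input envelope)
        coeffs = np.polyfit(np.abs(x), np.abs(y), deg=7)

        # Bussgang decomposition: y = K*x + d, with d uncorrelated with x
        K = np.vdot(x, y) / np.vdot(x, x)        # complex Bussgang gain
        d = y - K * x
        print(abs(K), np.mean(np.abs(d) ** 2))   # |K| and distortion-noise power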

  10. Simple dual-spotting procedure enhances nLC-MALDI MS/MS analysis of digests with less specific enzymes.

    PubMed

    Baeumlisberger, Dominic; Rohmer, Marion; Arrey, Tabiwang N; Mueller, Benjamin F; Beckhaus, Tobias; Bahr, Ute; Barka, Guenes; Karas, Michael

    2011-06-03

    The beneficial effect of high mass accuracy in mass spectrometry is especially pronounced when using less specific enzymes as the number of theoretically possible peptides increases dramatically without any cleavage specificity defined. Together with a preceding chromatographic separation, high-resolution mass spectrometers such as the MALDI-LTQ-Orbitrap are therefore well suited for the analysis of protein digests with less specific enzymes. A combination with fast, automated, and informative MALDI-TOF/TOF analysis has already been shown to yield increased total peptide and protein identifications. Here, a simple method for nLC separation and subsequent alternating spotting on two targets for both a MALDI-LTQ-Orbitrap and a MALDI-TOF/TOF instrument is introduced. This allows for simultaneous measurements on both instruments and subsequent combination of both data sets by an in-house written software tool. The performance of this procedure was evaluated using a mixture of four standard proteins digested with elastase. Three replicate runs were examined concerning repeatability and the total information received from both instruments. A cytosolic extract of C. glutamicum was used to demonstrate the applicability to more complex samples. Database search results showed that an additional 32.3% of identified peptides were found using combined data sets in comparison to MALDI-TOF/TOF data sets.

  11. [Theoretical analysis and experimental measurement for secondary electron yield of microchannel plate in extreme ultraviolet region].

    PubMed

    Li, Min; Ni, Qi-liang; Dong, Ning-ning; Chen, Bo

    2010-08-01

    Photon counting detectors based on the microchannel plate have widespread applications in astronomy. The present paper studies in depth the secondary electron emission of the microchannel plate in the extreme ultraviolet. A theoretical model describing the extreme ultraviolet-excited secondary electron yield is presented, and the factors affecting the secondary electron yields of both the electrode and the lead glass that make up the microchannel plate are analyzed according to the theoretical formula derived from the model. The results show that a higher secondary electron yield is obtained when the material thickness is more than 20 nm and the grazing incidence angle is larger than the critical angle. Except at several wavelengths, the secondary electron yields of both the electrode and the lead glass decrease with increasing wavelength. The quantum efficiency of the microchannel plate is also measured using a quantum efficiency test set-up with a laser-produced plasma source as the extreme ultraviolet radiation source, and the experimental result agrees with the theoretical analysis.

  12. Theoretical Noise Analysis on a Position-sensitive Metallic Magnetic Calorimeter

    NASA Technical Reports Server (NTRS)

    Smith, Stephen J.

    2007-01-01

    We report on the theoretical noise analysis for a position-sensitive Metallic Magnetic Calorimeter (MMC), consisting of MMC read-out at both ends of a large X-ray absorber. Such devices are under consideration as alternatives to other cryogenic technologies for future X-ray astronomy missions. We use a finite-element model (FEM) to numerically calculate the signal and noise response at the detector outputs and investigate the correlations between the noise measured at each MMC coupled by the absorber. We then calculate, using the optimal filter concept, the theoretical energy and position resolution across the detector and discuss the trade-offs involved in optimizing the detector design for energy resolution, position resolution and count rate. The results show that, theoretically, the position-sensitive MMC concept offers impressive spectral and spatial resolving capabilities compared to pixel arrays and similar position-sensitive cryogenic technologies using Transition Edge Sensor (TES) read-out.

  13. Theoretical analysis and experimental research on the finline ferrite isolator (abstract)

    NASA Astrophysics Data System (ADS)

    Zhu, Sheng-chuan; Hao, Yan-ming; Zhang, Yao-xi

    1991-04-01

    Recently, finline ferrite devices have attracted considerable attention. Beyer et al. have carried out theoretical and experimental research on a finline isolator.1,2 To facilitate the theoretical design and experimental adjustment of this device, we have developed a synthetic theory and carried out the experimental research successfully.3 In this paper, we present further theoretical analysis and experimental research on the finline ferrite isolator, including impedance matching, the effects of device structure on performance, and the problem of transplanting a waveguide isolator design to a finline isolator. Good agreement between design and experiment is obtained. The performance of an X-band finline isolator is as follows: A+ < 1.5 dB, A- ≳ 18 dB, VSWR < 1.5 over an 8% bandwidth, with a relatively low bias field (about 1000 Oe).

  14. Meta-analysis of mismatch negativity to simple versus complex deviants in schizophrenia.

    PubMed

    Avissar, Michael; Xie, Shanghong; Vail, Blair; Lopez-Calderon, Javier; Wang, Yuanjia; Javitt, Daniel C

    2017-07-11

    Mismatch negativity (MMN) deficits in schizophrenia (SCZ) have been studied extensively since the early 1990s, with the vast majority of studies using simple auditory oddball task deviants that vary in a single acoustic dimension such as pitch or duration. There has been a growing interest in using more complex deviants that violate more abstract rules to probe higher order cognitive deficits. It is still unclear how sensory processing deficits compare to and contribute to higher order cognitive dysfunction, which can be investigated with later attention-dependent auditory event-related potential (ERP) components such as a subcomponent of P300, P3b. In this meta-analysis, we compared MMN deficits in SCZ using simple deviants to more complex deviants. We also pooled studies that measured MMN and P3b in the same study sample and examined the relationship between MMN and P3b deficits within study samples. Our analysis reveals that, to date, studies using simple deviants demonstrate larger deficits than those using complex deviants, with effect sizes in the range of moderate to large. The difference in effect sizes between deviant types was reduced significantly when accounting for magnitude of MMN measured in healthy controls. P3b deficits, while large, were only modestly greater than MMN deficits (d=0.21). Taken together, our findings suggest that MMN to simple deviants may still be optimal as a biomarker for SCZ and that sensory processing dysfunction contributes significantly to MMN deficit and disease pathophysiology. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Engineering design and theoretical analysis of nanoporous carbon membranes for gas separation

    NASA Astrophysics Data System (ADS)

    Acharya, Madhav

    1999-11-01

    Gases are used in a direct or indirect manner in virtually every major industry, such as steel manufacturing, oil production, foodstuffs and electronics. Membranes are being investigated as an alternative to established methods of gas separation such as pressure swing adsorption and cryogenic distillation. Membranes can be used in continuous operation and work very well at ambient conditions, thus representing a tremendous energy and economic saving over the other technologies. In addition, the integration of reaction and separation into a single unit known as a membrane reactor has the potential to revolutionize the chemical industry by making selective reactions a reality. Nanoporous carbons are highly disordered materials obtained from organic polymers or natural sources. They have the ability to separate gas molecules by several different mechanisms, and hence there is a growing effort to form them into membranes. In this study, nanoporous carbon membranes were prepared on macroporous stainless steel supports of both tubular and disk geometries. The precursor used was poly(furfuryl alcohol) and different synthesis protocols were employed. A spray coating method also was developed which allowed reproducible synthesis of membranes with very few defects. High gas selectivities were obtained such as O2/N2 = 6, H2/C2H4 = 70 and CO2/N2 = 20. Membranes also were characterized using SEM and AFM, which revealed thin layers of carbon that were quite uniform and homogeneous. The simulation of nanoporous carbon structures also was carried out using a simple algorithmic approach. 5-, 6- and 7-membered rings were introduced into the structure, thus resulting in considerable curvature. The densities of the structures were calculated and found to compare favorably with experimental findings. Finally, a theoretical analysis of size selective transport was performed using transition state theory concepts. A definite correlation of gas permeance with molecular size was obtained after

  16. Simple imputation methods versus direct likelihood analysis for missing item scores in multilevel educational data.

    PubMed

    Kadengye, Damazo T; Cools, Wilfried; Ceulemans, Eva; Van den Noortgate, Wim

    2012-06-01

    Missing data, such as item responses in multilevel data, are ubiquitous in educational research settings. Researchers in the item response theory (IRT) context have shown that ignoring such missing data can create problems in the estimation of the IRT model parameters. Consequently, several imputation methods for dealing with missing item data have been proposed and shown to be effective when applied with traditional IRT models. Additionally, a nonimputation direct likelihood analysis has been shown to be an effective tool for handling missing observations in clustered data settings. This study investigates the performance of six simple imputation methods, which have been found to be useful in other IRT contexts, versus a direct likelihood analysis, in multilevel data from educational settings. Multilevel item response data were simulated on the basis of two empirical data sets, and some of the item scores were deleted, such that they were missing either completely at random or at random. An explanatory IRT model was used for modeling the complete, incomplete, and imputed data sets. We showed that direct likelihood analysis of the incomplete data sets produced unbiased parameter estimates that were comparable to those from a complete data analysis. Multiple-imputation approaches of the two-way mean and corrected item mean substitution methods displayed varying degrees of effectiveness in imputing data that in turn could produce unbiased parameter estimates. The simple random imputation, adjusted random imputation, item means substitution, and regression imputation methods seemed to be less effective in imputing missing item scores in multilevel data settings.
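
    A minimal sketch of one of the simple approaches compared in such studies, item-mean substitution for item scores deleted completely at random; the score matrix and missingness rate are illustrative assumptions, not the simulated multilevel data of the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        # Illustrative persons-by-items score matrix (1 = correct, 0 = incorrect)
        scores = rng.integers(0, 2, size=(200, 10)).astype(float)

        # Delete 20% of the item scores completely at random
        missing = rng.random(scores.shape) < 0.2
        incomplete = scores.copy()
        incomplete[missing] = np.nan

        # Item-mean substitution: replace each missing score with its item's observed mean
        item_means = np.nanmean(incomplete, axis=0)
        imputed = np.where(np.isnan(incomplete), item_means, incomplete)

        # Largest shift in item means introduced by the imputation
        print(np.abs(scores.mean(axis=0) - imputed.mean(axis=0)).max())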

  17. The Simple Lamb Wave Analysis to Characterize Concrete Wide Beams by the Practical MASW Test.

    PubMed

    Lee, Young Hak; Oh, Taekeun

    2016-06-02

    In recent years, Lamb wave analysis by the multi-channel analysis of surface waves (MASW) has become an effective nondestructive evaluation method for concrete structures, supporting condition assessment and dimension identification through elastic wave velocities and their reflections from boundaries. This study proposes an effective Lamb wave analysis based on the practical application of MASW to concrete wide beams, in an easy and simple manner, to identify the dimension and elastic wave velocity (R-wave) for condition assessment (e.g., the estimation of elastic properties). This is done by identifying the zero-order antisymmetric (A0) and first-order symmetric (S1) modes among multimodal Lamb waves. The MASW data were collected on eight concrete wide beams and compared to the actual depth and to the pressure (P-) wave velocities collected for the same specimens. Information is extracted from multimodal Lamb wave dispersion curves to obtain the elastic stiffness parameters and the thickness of the concrete structures. Due to the simple and cost-effective procedure associated with the MASW processing technique, the characteristics of several fundamental modes in the experimental Lamb wave dispersion curves could be measured. Available reference data are in good agreement with the parameters determined by our analysis scheme.

  18. MOD-score analysis with simple pedigrees: an overview of likelihood-based linkage methods.

    PubMed

    Strauch, Konstantin

    2007-01-01

    A MOD-score analysis, in which the parametric LOD score is maximized with respect to the trait-model parameters, can be a powerful method for the mapping of complex traits. With affected sib pairs, it has been shown before that MOD scores asymptotically follow a mixture of χ2 distributions with 2, 1 and 0 degrees of freedom under the null hypothesis of no linkage. In that context, a MOD-score analysis yields some (albeit limited) information regarding the trait-model parameters, and there is a chance of increased power compared to a simple LOD-score analysis. Here, it is shown that with unilineal affected relative pairs, MOD scores asymptotically follow a mixture of χ2 distributions with 1 and 0 degrees of freedom under the null hypothesis, that is, the same distribution as followed by simple LOD scores. No information regarding the trait model can be obtained in this setting, and no power is gained when compared to a LOD-score analysis. An outlook to larger pedigrees is given. The number of degrees of freedom underlying the null distribution of MOD scores, which depends on the type of pedigrees studied, corresponds to the number of explored dimensions related to power and to the number of parameters that can jointly be estimated. Copyright 2007 S. Karger AG, Basel.

  19. Numerical and Theoretical Analysis of Plastic Response of 5A06 Aluminum Circular Plates Subjected to Underwater Explosion Loading

    NASA Astrophysics Data System (ADS)

    Ren, Peng; Zhang, Wei

    2013-06-01

    Dynamic response analysis of structures subjected to underwater explosion loading has always been an interesting field for researchers. Understanding the deformation and failure mechanisms of simple structures plays an important role in practical projects involving this kind of loading. In this paper, the deformation and failure characteristics of 5A06 aluminum circular plates were investigated computationally and theoretically. The computational study was based on a Johnson-Cook material model, with parameters obtained from several previous studies, which provides a good description of the deformation and failure of 5A06 aluminum circular plates under underwater explosion loading. The deformation history of the clamped circular plate was recorded, and the maximum deflection and thickness reduction of the target plates were measured at different radii. The computational approach provided insight into the relationship between the failure mechanism and the strength of the impact wave, and a formula for the strain field of the specimen was derived based on the volume-conservation principle and a rigid-plastic assumption. The simulation and theoretical calculation results are in good agreement with the experimental results. National Natural Science Foundation of China (NO:11272057).

  20. Development of a code system DEURACS for theoretical analysis and prediction of deuteron-induced reactions

    NASA Astrophysics Data System (ADS)

    Nakayama, Shinsuke; Kouno, Hiroshi; Watanabe, Yukinobu; Iwamoto, Osamu; Ye, Tao; Ogata, Kazuyuki

    2017-09-01

    We have developed an integrated code system dedicated for theoretical analysis and prediction of deuteron-induced reactions, which is called DEUteron-induced Reaction Analysis Code System (DEURACS). DEURACS consists of several calculation codes based on theoretical models to describe respective reaction mechanisms and it was successfully applied to (d,xp) and (d,xn) reactions. In the present work, the analysis of (d,xn) reactions is extended to higher incident energy up to nearly 100 MeV and also DEURACS is applied to (d,xd) reactions at 80 and 100 MeV. The DEURACS calculations reproduce the experimental double-differential cross sections for the (d,xn) and (d,xd) reactions well.

  1. Simple Machines Made Simple.

    ERIC Educational Resources Information Center

    St. Andre, Ralph E.

    Simple machines have become a lost point of study in elementary schools as teachers continue to have more material to cover. This manual provides hands-on, cooperative learning activities for grades three through eight concerning the six simple machines: wheel and axle, inclined plane, screw, pulley, wedge, and lever. Most activities can be…

  3. Experimental and theoretical analysis of a hybrid solar thermoelectric generator with forced convection cooling

    NASA Astrophysics Data System (ADS)

    Sundarraj, Pradeepkumar; Taylor, Robert A.; Banerjee, Debosmita; Maity, Dipak; Sinha Roy, Susanta

    2017-01-01

    Hybrid solar thermoelectric generators (HSTEGs) have garnered significant research attention recently due to their potential ability to cogenerate heat and electricity. In this paper, theoretical and experimental investigations of the electrical and thermal performance of a HSTEG system are reported. In order to validate the theoretical model, a laboratory scale HSTEG system (based on forced convection cooling) is developed. The HSTEG consists of six thermoelectric generator modules, an electrical heater, and a stainless steel cooling block. Our experimental analysis shows that the HSTEG is capable of producing a maximum electrical power output of 4.7 W, an electrical efficiency of 1.2% and thermal efficiency of 61% for an average temperature difference of 92 °C across the TEG modules with a heater power input of 382 W. These experimental results of the HSTEG system are found to be in good agreement with the theoretical prediction. This experimental/theoretical analysis can also serve as a guide for evaluating the performance of the HSTEG system with forced convection cooling.

  4. A simple, objective analysis scheme for scatterometer data. [Seasat A satellite observation of wind over ocean

    NASA Technical Reports Server (NTRS)

    Levy, G.; Brown, R. A.

    1986-01-01

    A simple economical objective analysis scheme is devised and tested on real scatterometer data. It is designed to treat dense data such as those of the Seasat A Satellite Scatterometer (SASS) for individual or multiple passes, and preserves subsynoptic scale features. Errors are evaluated with the aid of sampling ('bootstrap') statistical methods. In addition, sensitivity tests have been performed which establish qualitative confidence in calculated fields of divergence and vorticity. The SASS wind algorithm could be improved; however, the data at this point are limited by instrument errors rather than analysis errors. The analysis error is typically negligible in comparison with the instrument error, but amounts to 30 percent of the instrument error in areas of strong wind shear. The scheme is very economical, and thus suitable for large volumes of dense data such as SASS data.

  5. Simple sequence repeat-based association analysis of fruit traits in eggplant (Solanum melongena).

    PubMed

    Ge, H Y; Liu, Y; Zhang, J; Han, H Q; Li, H Z; Shao, W T; Chen, H Y

    2013-11-18

    Association mapping based on linkage disequilibrium (LD) provides a promising tool to identify quantitative trait loci (QTLs) in plant resources. A total of 141 eggplant (Solanum melongena L.) accessions were selected to detect simple sequence repeat (SSR) markers associated with nine fruit traits. Population structure analysis was performed with 105 SSR markers, which revealed that two subgroups were present in this population. LD analysis exhibited an extensive long-range LD of approximately 11 cM. A total of 49 marker associations related to eight phenotypic traits were identified, involving 24 different markers, although no association was found with the trait of fruit glossiness. To our knowledge, this is the first genome-wide association study in eggplant using SSR markers. These results suggest that the association analysis approach could be a useful alternative to traditional linkage mapping for detecting putative QTLs in eggplant.

  6. A simple optogenetic system for behavioral analysis of freely moving small animals.

    PubMed

    Kawazoe, Yuya; Yawo, Hiromu; Kimura, Koutarou D

    2013-01-01

    We present a new and simple optogenetic system for the behavioral analysis of small animals. This system includes a strong LED ring array, a high-resolution CCD camera, and the improved channelrhodopsin ChRGR. We used the system for behavioral analysis with the nematode Caenorhabditis elegans as a model, and we found that it can stimulate ChRGR expressed in the body wall muscles of the animals to modulate the behavior. Our results indicate that this system may be suitable for optogenetic behavioral analysis of freely moving small animals under various conditions to understand the principles underlying brain functions. Copyright © 2012 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  8. In-silico exploration of thirty alphavirus genomes for analysis of the simple sequence repeats

    PubMed Central

    Alam, Chaudhary Mashhood; Singh, Avadhesh Kumar; Sharfuddin, Choudhary; Ali, Safdar

    2014-01-01

    The compilation of simple sequence repeats (SSRs) in viruses and their analysis with reference to incidence, distribution and variation would be instrumental in understanding the functional and evolutionary aspects of repeat sequences. The present study encompasses the analysis of SSRs across 30 species of alphaviruses. The full-length genome sequences, accessed from NCBI, were used for the extraction and analysis of repeat sequences using the IMEx software. The repeats of different motif sizes (mono- to penta-nucleotide) observed therein exhibited variable incidence across the species. As expected, the mononucleotide A/T was the most prevalent, followed by the dinucleotide AG/GA and the trinucleotide AAG/GAA in these genomes. The conversion of SSRs to imperfect or compound microsatellites (cSSRs) is low; cSSRs, primarily constituted by variant motifs, accounted for up to 12.5% of the SSRs. Interestingly, seven species lacked cSSRs in their genomes. However, the SSRs and cSSRs are predominantly localized to the coding-region ORFs for nonstructural and structural proteins. The relative frequencies of different classes of simple and compound microsatellites within and across genomes have been highlighted. PMID:25606453
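
    A minimal sketch of the kind of scan a tool such as IMEx performs, locating perfect mono-, di- and trinucleotide repeats above a copy-number threshold; the thresholds and the sequence fragment are illustrative assumptions, not IMEx's actual settings or an alphavirus genome.

        import re

        MIN_REPEATS = {1: 6, 2: 3, 3: 3}   # motif length -> minimum tandem copies

        def find_ssrs(seq):
            # Perfect microsatellites: a motif of length k repeated >= MIN_REPEATS[k] times in tandem.
            # (A real tool also filters motifs that are themselves repeats, e.g. "AA".)
            hits = []
            for k, n in MIN_REPEATS.items():
                for m in re.finditer(r"([ACGT]{%d})\1{%d,}" % (k, n - 1), seq):
                    hits.append((m.start(), m.group(1), len(m.group(0)) // k))
            return sorted(hits)

        # Illustrative fragment, not an actual alphavirus genome
        print(find_ssrs("GATCAAAAAAAGAGAGAGCTTACACACACGGT"))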

  9. Use of a simple enzymatic assay for cholesterol analysis in human bile.

    PubMed

    Fromm, H; Amin, P; Klein, H; Kupke, I

    1980-02-01

    An enzymatic technique for cholesterol analysis in serum was applied to human bile. The analytical yield was very satisfactory in experiments in which known amounts of cholesterol were added to untreated, as well as Millipore-filtered, samples of human bile. The analytical results of the enzymatic test agreed closely with those of a method utilizing the Liebermann-Burchard reaction. The enzymatic assay of cholesterol in bile proved to be sensitive and precise. In comparison to other methods of biliary cholesterol determination, it has the advantage of being rapid and simple.

  10. Simple, specific analysis of organophosphorus and carbamate pesticides in sediments using column extraction and gas chromatography

    USGS Publications Warehouse

    Belisle, A.A.; Swineford, D.M.

    1988-01-01

    A simple, specific procedure was developed for the analysis of organophosphorus and carbamate pesticides in sediment. The wet soil was mixed with anhydrous sodium sulfate to bind water, and the residues were column extracted in acetone:methylene chloride (1:1, v/v). Coextracted water was removed by additional sodium sulfate packed below the sample mixture. The eluate was concentrated and analyzed directly by capillary gas chromatography using phosphorus- and nitrogen-specific detectors. Recoveries averaged 93% for sediments extracted shortly after spiking, but decreased significantly as the samples aged.

  11. Enumeration and stability analysis of simple periodic orbits in β-Fermi Pasta Ulam lattice

    SciTech Connect

    Sonone, Rupali L. Jain, Sudhir R.

    2014-04-24

    We study the well-known one-dimensional problem of N particles with a nonlinear interaction. The special case of quadratic and quartic interaction potential among nearest neighbours is the β-Fermi-Pasta-Ulam model. We enumerate and classify the simple periodic orbits for this system and find the stability zones, employing Floquet theory. Such stability analysis is crucial for understanding the transition of the FPU lattice from recurrences to globally chaotic behavior, energy transport in low-dimensional systems, the dynamics of optical lattices, and also its impact on the shape parameters of biopolymers such as DNA and RNA.
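
    A minimal sketch of the β-FPU dynamics underlying such an analysis: nearest-neighbour quadratic-plus-quartic forces with fixed ends, integrated with velocity Verlet. The lattice size, coupling, initial condition and step size are illustrative assumptions, and no periodic-orbit search or Floquet analysis is attempted here.

        import numpy as np

        N, beta, dt, steps = 16, 1.0, 0.01, 10000   # illustrative parameters

        def bond_stretch(q):
            # Fixed ends: q_0 = q_{N+1} = 0
            return np.diff(np.concatenate(([0.0], q, [0.0])))

        def forces(q):
            dr = bond_stretch(q)
            f_bond = dr + beta * dr**3              # quadratic + quartic nearest-neighbour coupling
            return f_bond[1:] - f_bond[:-1]         # net force on each interior particle

        # Excite the lowest linear normal mode as an illustrative initial condition
        q = 0.5 * np.sin(np.pi * np.arange(1, N + 1) / (N + 1))
        p = np.zeros(N)

        for _ in range(steps):                      # velocity Verlet integration
            p += 0.5 * dt * forces(q)
            q += dt * p
            p += 0.5 * dt * forces(q)

        dr = bond_stretch(q)
        energy = 0.5 * p @ p + np.sum(0.5 * dr**2 + 0.25 * beta * dr**4)
        print(energy)                               # total energy, approximately conserved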

  12. A simple method of image analysis to estimate CAM vascularization by APERIO ImageScope software.

    PubMed

    Marinaccio, Christian; Ribatti, Domenico

    2015-01-01

    The chick chorioallantoic membrane (CAM) assay is a well-established method to test the angiogenic stimulation or inhibition induced by molecules and cells administered onto the CAM. The quantification of blood vessels in the CAM assay relies on a semi-manual image analysis approach which can be time consuming when considering large experimental groups. Therefore we present here a simple and fast volumetric method to inspect differences in vascularization between experimental conditions related to the stimulation and inhibition of CAM angiogenesis based on the Positive Pixel Count algorithm embedded in the APERIO ImageScope software.
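
    A minimal sketch of a positive-pixel-count style quantification: threshold the color image for reddish (vessel-like) pixels and report their fraction. The thresholds and the synthetic image are illustrative assumptions that stand in for Aperio's proprietary Positive Pixel Count algorithm rather than reproducing it.

        import numpy as np

        def positive_pixel_fraction(rgb, red_min=120, dominance=20):
            # Count pixels whose red channel dominates green and blue (a crude
            # stand-in for the stained vessels detected by the real algorithm)
            r, g, b = (rgb[..., c].astype(int) for c in range(3))
            positive = (r > red_min) & (r - g > dominance) & (r - b > dominance)
            return positive.sum() / positive.size

        # Illustrative synthetic image: a dark-red "vessel" stripe on a pale background
        img = np.full((100, 100, 3), [230, 210, 210], dtype=np.uint8)
        img[40:45, :] = [180, 60, 60]
        print(positive_pixel_fraction(img))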

  13. Simple Electrolyzer Model Development for High-Temperature Electrolysis System Analysis Using Solid Oxide Electrolysis Cell

    SciTech Connect

    JaeHwa Koh; DuckJoo Yoon; Chang H. Oh

    2010-07-01

    An electrolyzer model for the analysis of a hydrogen-production system using a solid oxide electrolysis cell (SOEC) has been developed, and the effects for principal parameters have been estimated by sensitivity studies based on the developed model. The main parameters considered are current density, area specific resistance, temperature, pressure, and molar fraction and flow rates in the inlet and outlet. Finally, a simple model for a high-temperature hydrogen-production system using the solid oxide electrolysis cell integrated with very high temperature reactors is estimated.

  14. Analysis of utility-theoretic heuristics for intelligent adaptive network routing

    SciTech Connect

    Mikler, A.R.; Honavar, V.; Wong, J.S.K.

    1996-12-31

    Utility theory offers an elegant and powerful theoretical framework for design and analysis of autonomous adaptive communication networks. Routing of messages in such networks presents a real-time instance of a multi-criterion optimization problem in a dynamic and uncertain environment. In this paper, we incrementally develop a set of heuristic decision functions that can be used to guide messages along a near-optimal (e.g., minimum delay) path in a large network. We present an analysis of properties of such heuristics under a set of simplifying assumptions about the network topology and load dynamics and identify the conditions under which they are guaranteed to route messages along an optimal path. The paper concludes with a discussion of the relevance of the theoretical results presented in the paper to the design of intelligent autonomous adaptive communication networks and an outline of some directions of future research.
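
    As a simplified illustration of routing messages along a near-optimal path, the sketch below runs Dijkstra's algorithm with total delay as the (negative) utility of a path; the example network and the single-criterion cost are illustrative assumptions, much simpler than the multi-criterion heuristic decision functions analyzed in the paper.

        import heapq

        def min_delay_path(graph, src, dst):
            # Dijkstra on link delays; the utility of a path is the negative of its total delay
            queue, seen = [(0.0, src, [src])], set()
            while queue:
                delay, node, path = heapq.heappop(queue)
                if node == dst:
                    return delay, path
                if node in seen:
                    continue
                seen.add(node)
                for nxt, d in graph.get(node, {}).items():
                    if nxt not in seen:
                        heapq.heappush(queue, (delay + d, nxt, path + [nxt]))
            return float("inf"), []

        # Illustrative network: link delays in milliseconds
        net = {"A": {"B": 5, "C": 2}, "B": {"D": 4}, "C": {"B": 1, "D": 9}, "D": {}}
        print(min_delay_path(net, "A", "D"))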

  15. Local structure of Se in cancrinite: X-ray absorption fine structure theoretical analysis

    NASA Astrophysics Data System (ADS)

    Soldatov, A. V.; Yalovega, G. E.

    2000-04-01

    A theoretical "ab initio" analysis of the polarized X-ray absorption spectrum of selenium in a cancrinite matrix based on a full multiple-scattering theory has been performed. Comparison of the theoretical spectra with the experimental results shows that Se atoms form dimerized chains in the channels of the cancrinite matrix with an interchain distance of about 4.8 Å. In addition the distribution of unoccupied partial s-, p- and d- electronic states of Se has been obtained. Density of states analysis provides some insight into the chemical bonding of Se in cancrinite. The results suggest that the interaction of Se atoms with the matrix is the cause of the unusually large Se-Se distance in dimers.

  16. Economic Analysis in the Pacific Northwest Land Resources Project: Theoretical Considerations and Preliminary Results

    NASA Technical Reports Server (NTRS)

    Morse, D. R. A.; Sahlberg, J. T.

    1977-01-01

    The Pacific Northwest Land Resources Inventory Demonstration Project is an attempt to combine a whole spectrum of heterogeneous geographic, institutional and applications elements in a synergistic approach to the evaluation of remote sensing techniques. This diversity is the prime motivating factor behind a theoretical investigation of alternative economic analysis procedures. For a multitude of reasons--simplicity, ease of understanding, financial constraints and credibility, among others--cost-effectiveness emerges as the most practical tool for conducting such evaluation determinations in the Pacific Northwest. Preliminary findings in two water resource application areas suggest, in conformity with most published studies, that Landsat-aided data collection methods enjoy substantial cost advantages over alternative techniques. The potential for sensitivity analysis based on cost/accuracy tradeoffs is considered on a theoretical plane in the absence of current accuracy figures concerning the Landsat-aided approach.

  17. Nanoscale deflection detection of a cantilever-based biosensor using MOSFET structure: A theoretical analysis

    NASA Astrophysics Data System (ADS)

    Paryavi, Mohsen; Montazeri, Abbas; Tekieh, Tahereh; Sasanpour, Pezhman

    2016-10-01

    A novel method for the detection of biological species based on measurement of cantilever deflection has been proposed and numerically evaluated. By employing the cantilever as the moving gate of a MOSFET structure, its deflection can be analyzed through the current characteristics of the MOSFET. With the cantilever acting as a suspended gate above the substrate, a change in the distance between the cantilever and the oxide layer changes the carrier concentration. This results in different current-voltage characteristics of the device, which can easily be measured with simple apparatus. To verify the proposed method, the performance of the system has been theoretically analyzed using the COMSOL platform. The simulation results confirm the performance and sensitivity of the proposed method.

  18. Theoretical analysis of mode instability in high-power fiber amplifiers.

    PubMed

    Hansen, Kristian Rymann; Alkeskjold, Thomas Tanggaard; Broeng, Jes; Lægsgaard, Jesper

    2013-01-28

    We present a simple theoretical model of transverse mode instability in high-power rare-earth doped fiber amplifiers. The model shows that efficient power transfer between the fundamental and higher-order modes of the fiber can be induced by a nonlinear interaction mediated through the thermo-optic effect, leading to transverse mode instability. The temporal and spectral characteristics of the instability dynamics are investigated, and it is shown that the instability can be seeded by both quantum noise and signal intensity noise, while pure phase noise of the signal does not induce instability. It is also shown that the presence of a small harmonic amplitude modulation of the signal can lead to generation of higher harmonics in the output intensity when operating near the instability threshold.

  19. Aerodynamic design and analysis system for supersonic aircraft. Part 1: General description and theoretical development

    NASA Technical Reports Server (NTRS)

    Middleton, W. D.; Lundry, J. L.

    1975-01-01

    An integrated system of computer programs has been developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. This part presents a general description of the system and describes the theoretical methods used.

  20. Theoretical Analysis of the Longitudinal Behavior of an Automatically Controlled Supersonic Interceptor During the Attack Phase

    NASA Technical Reports Server (NTRS)

    Gates, Ordway B., Jr.; Woodling, C. H.

    1959-01-01

    Theoretical analysis of the longitudinal behavior of an automatically controlled supersonic interceptor during the attack phase against a nonmaneuvering target is presented. Control of the interceptor's flight path is obtained by use of a pitch rate command system. Topics include lift and pitching moment, effects of initial tracking errors, normal-acceleration limiting, limitations of control-surface rate and deflection, and effects of neglecting forward-velocity changes of the interceptor during the attack phase.

  1. Moving Target Detection with Along-Track SAR Interferometry. A Theoretical Analysis

    DTIC Science & Technology

    2002-08-01

    Along-Track SAR Interferometry: A Theoretical Analysis, by Christoph H. Gierull. Approved for public release; distribution unlimited. (Only cover-page text and reference-list fragments of this record are available, citing work on the intensity and phase statistics of multilook polarimetric and interferometric SAR imagery, IEEE Trans. Geoscience and Remote Sensing, GRS-32.)

  2. A Theoretical Analysis of the Radar Cross Section of the Biconical Corner Reflector.

    DTIC Science & Technology

    1980-05-01

    A Theoretical Analysis of the Radar Cross Section of the Biconical Corner Reflector, J.L. Whitrow, Australia, Technical Report ERL-0134-TR. (Only fragments of this record are available, noting that the enhancement of the radar cross section is not as great as, say, that of the trihedral corner reflector, and that in practice the biconical corner reflector is a useful device where moderate enhancement of the radar cross section …)

  3. Can Computer-Mediated Interventions Change Theoretical Mediators of Safer Sex? A Meta-Analysis

    ERIC Educational Resources Information Center

    Noar, Seth M.; Pierce, Larson B.; Black, Hulda G.

    2010-01-01

    The purpose of this study was to conduct a meta-analysis of computer-mediated interventions (CMIs) aimed at changing theoretical mediators of safer sex. Meta-analytic aggregation of effect sizes from k = 20 studies indicated that CMIs significantly improved HIV/AIDS knowledge, d = 0.276, p less than 0.001, k = 15, N = 6,625; sexual/condom…

  5. Theoretical Basis for CTABS80: A Computer Program for Three-Dimensional Analysis of Building Systems.

    DTIC Science & Technology

    1981-09-01

    AD-A117 635. Theoretical Basis for CTABS80: A Computer Program for Three-Dimensional Analysis of Building Systems, by Edward L. Wilson, H. H. Dovey and Ashraf Habibullah, Computers/Structures International, 4009 Webster, Oakland, Calif., September 1981. The work was sponsored with funds provided to the Automatic Data Processing (ADP) Center, U.S. Army Engineer Waterways …

  6. Exploiting green analytical procedures for acidity and iron assays employing flow analysis with simple natural reagent extracts.

    PubMed

    Grudpan, Kate; Hartwell, Supaporn Kradtap; Wongwilai, Wasin; Grudpan, Supara; Lapanantnoppakhun, Somchai

    2011-06-15

    Green analytical methods employing flow analysis with simple natural reagent extracts have been exploited. Various formats of flow-based analysis systems, including a single-line FIA, a simple lab-on-chip with a webcam camera detector, and a newly developed simple lab-on-chip system with reflective absorption detection, were investigated together with simple extracts from some available local plants, including butterfly pea flower, orchid flower, and beet root, and were shown to be useful as alternative self-indicating reagents for acidity assay. Various tea drinks were explored as chromogenic reagents for iron determination. The benefit of a flow-based system, which allows standards and samples to go through the analysis process under exactly the same conditions, makes it possible to employ simple natural extracts with minimal or no pretreatment or purification. The combination of non-synthetic, minimally processed natural reagent extracts with low-volume flow-based systems creates some unique green chemical analyses.

  7. Simple spot method of image analysis for evaluation of highly marbled beef.

    PubMed

    Irie, M; Kohira, K

    2012-04-01

    The simple method of evaluating highly marbled beef was examined by image analysis. The images of the cross section at the 6th to 7th rib were obtained from 82 carcasses of Wagyu cattle. Using an overall trace method, the surrounding edges of the longissimus thoracis and three muscles were traced automatically and manually with image analysis. In a spot method, 3 to 5 locations (2.5 or 3.0 cm in diameter) for each muscle were rapidly selected with no manual trace. The images were flattened and binarized, and the ratio of fat area to muscle area was determined. The correlation coefficients for marbling were 0.55 to 0.81 between different muscles and 0.89 to 0.97 between the overall trace and spot methods. These results suggest that the simple spot method is fast and almost as useful as the overall trace method as a measuring technique for beef marbling in loin muscles, especially for highly marbled beef.
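
    A minimal sketch of the core computation shared by both methods: binarize a grayscale region of the cross section and take the ratio of fat (bright) pixels to the total area; the threshold and the synthetic spot image are illustrative assumptions, not the actual processing pipeline of the study.

        import numpy as np

        def fat_area_ratio(gray_spot, threshold=140):
            # Marbling fat appears bright against darker lean muscle after flattening
            fat = gray_spot >= threshold
            return fat.sum() / gray_spot.size

        # Illustrative "spot" with a few bright marbling flecks on lean muscle
        spot = np.full((60, 60), 90, dtype=np.uint8)    # lean muscle
        spot[10:14, 20:40] = 200                        # fat fleck
        spot[30:35, 5:25] = 210                         # fat fleck
        print(fat_area_ratio(spot))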

  8. From simple to complex oscillatory behaviour: analysis of bursting in a multiply regulated biochemical system.

    PubMed

    Decroly, O; Goldbeter, A

    1987-01-21

    We analyze the transition from simple to complex oscillatory behaviour in a three-variable biochemical system that consists of the coupling in series of two autocatalytic enzyme reactions. Complex periodic behaviour occurs in the form of bursting in which clusters of spikes are separated by phases of relative quiescence. The generation of such temporal patterns is investigated by a series of complementary approaches. The dynamics of the system is first cast into two different time-scales, and one of the variables is taken as a slowly-varying parameter influencing the behaviour of the two remaining variables. This analysis shows how complex oscillations develop from simple periodic behaviour and accounts for the existence of various modes of bursting as well as for the dependence of the number of spikes per period on key parameters of the model. We further reduce the number of variables by analyzing bursting by means of one-dimensional return maps obtained from the time evolution of the three-dimensional system. The analysis of a related piecewise linear map allows for a detailed understanding of the complex sequence leading from a bursting pattern with p spikes to a pattern with p + 1 spikes per period. We show that this transition possesses properties of self-similarity associated with the occurrence of more and more complex patterns of bursting. In addition to bursting, period-doubling bifurcations leading to chaos are observed, as in the differential system, when the piecewise-linear map becomes nonlinear.
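
    A minimal sketch of how the period of an attractor can be read off an iterated one-dimensional return map; the logistic map used below is a generic stand-in, not the specific piecewise-linear map derived in the paper.

        import numpy as np

        def attractor_period(f, x0=0.3, transient=2000, max_period=64, tol=1e-9):
            # Iterate a one-dimensional return map past its transient, then look for a cycle
            x = x0
            for _ in range(transient):
                x = f(x)
            orbit = [x]
            for _ in range(max_period):
                x = f(x)
                if abs(x - orbit[0]) < tol:
                    return len(orbit)
                orbit.append(x)
            return None   # no short cycle found (possibly chaotic)

        # Illustrative map (logistic), standing in for the return map extracted from the model
        for r in (3.2, 3.5, 3.9):
            print(r, attractor_period(lambda x, r=r: r * x * (1 - x)))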

  9. Analysis and Simple Circuit Design of Double Differential EMG Active Electrode.

    PubMed

    Guerrero, Federico Nicolás; Spinelli, Enrique Mario; Haberman, Marcelo Alejandro

    2016-06-01

    In this paper we present an analysis of the voltage amplifier needed for double differential (DD) sEMG measurements and a novel, very simple circuit for implementing DD active electrodes. The three-input amplifier that standalone DD active electrodes require is inherently different from a differential amplifier, and general knowledge about its design is scarce in the literature. First, the figures of merit of the amplifier are defined through a decomposition of its input signal into three orthogonal modes. This analysis reveals a mode containing EMG crosstalk components that the DD electrode should reject. Then, the effect of finite input impedance is analyzed. Because there are three terminals, minimum bounds for interference rejection ratios due to electrode and input impedance unbalances with two degrees of freedom are obtained. Finally, a novel circuit design is presented, including only a quadruple operational amplifier and a few passive components. This design is nearly as simple as the branched electrode and much simpler than the three instrumentation amplifier design, while providing robust EMG crosstalk rejection and better input impedance using unity gain buffers for each electrode input. The interference rejection limits of this input stage are analyzed. An easily replicable implementation of the proposed circuit is described, together with a parameter design guideline to adjust it to specific needs. The electrode is compared with the established alternatives, and sample sEMG signals are obtained, acquired on different body locations with dry contacts, successfully rejecting interference sources.

  10. Analysis and Simple Circuit Design of Double Differential EMG Active Electrode.

    PubMed

    Guerrero, Federico Nicolas; Spinelli, Enrique Mario; Haberman, Marcelo Alejandro

    2015-12-22

    In this paper we present an analysis of the voltage amplifier needed for double differential (DD) sEMG measurements and a novel, very simple circuit for implementing DD active electrodes. The three-input amplifier that standalone DD active electrodes require is inherently different from a differential amplifier, and general knowledge about its design is scarce in the literature. First, the figures of merit of the amplifier are defined through a decomposition of its input signal into three orthogonal modes. This analysis reveals a mode containing EMG crosstalk components that the DD electrode should reject. Then, the effect of finite input impedance is analyzed. Because there are three terminals, minimum bounds for interference rejection ratios due to electrode and input impedance unbalances with two degrees of freedom are obtained. Finally, a novel circuit design is presented, including only a quadruple operational amplifier and a few passive components. This design is nearly as simple as the branched electrode and much simpler than the three instrumentation amplifier design, while providing robust EMG crosstalk rejection and better input impedance using unity gain buffers for each electrode input. The interference rejection limits of this input stage are analyzed. An easily replicable implementation of the proposed circuit is described, together with a parameter design guideline to adjust it to specific needs. The electrode is compared with the established alternatives, and sample sEMG signals are obtained, acquired on different body locations with dry contacts, successfully rejecting interference sources.

  11. A Theoretical Analysis of the Influence of Electroosmosis on the Effective Ionic Mobility in Capillary Zone Electrophoresis

    ERIC Educational Resources Information Center

    Hijnen, Hens

    2009-01-01

    A theoretical description of the influence of electroosmosis on the effective mobility of simple ions in capillary zone electrophoresis is presented. The mathematical equations derived from the space-charge model contain the pK[subscript a] value and the density of the weak acid surface groups as parameters characterizing the capillary. It is…

  13. Spectrum analysis of radar life signal in the three kinds of theoretical models

    NASA Astrophysics Data System (ADS)

    Yang, X. F.; Ma, J. F.; Wang, D.

    2017-02-01

    In a single-frequency continuous-wave radar life-detection system based on the Doppler effect, the theoretical model of the radar life signal is usually expressed as a real function, which leads to a prediction that is not confirmed by experiment: when the phase generated by the distance between the measured object and the radar head is an integer multiple of π, the main spectral components of the life signal (respiration and heartbeat) are absent from the radar life signal, whereas when this phase is an odd multiple of π/2, the respiration and heartbeat spectral components are strongest. In this paper, taking the Doppler effect as the basic theory, we establish theoretical models of the radar life signal using three different mathematical expressions: a real function, a complex exponential function, and a Bessel-function expansion. Simulation analysis reveals that the Bessel-expansion model resolves the problem of the real-function form. Compared with the complex-exponential model, the number of derived spectral lines is greatly reduced in the Bessel-expansion model, which is more consistent with the actual situation.
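
    A minimal numerical sketch of the effect discussed: for the real-function model cos(θ0 + β sin(2π f_b t)), the spectral line at the breathing frequency nearly vanishes when the range-dependent phase θ0 is an integer multiple of π and is strongest near odd multiples of π/2; all parameter values are illustrative assumptions.

        import numpy as np

        fs, f_breath, beta = 100.0, 0.3, 0.6      # sample rate (Hz), breathing rate (Hz), modulation index
        t = np.arange(0, 60, 1 / fs)

        def breathing_line(theta0):
            # Real-valued Doppler model: fixed range phase theta0 plus chest-motion phase modulation
            s = np.cos(theta0 + beta * np.sin(2 * np.pi * f_breath * t))
            spectrum = np.abs(np.fft.rfft(s - s.mean())) / len(t)
            freqs = np.fft.rfftfreq(len(t), 1 / fs)
            return spectrum[np.argmin(np.abs(freqs - f_breath))]

        # Breathing component is strongest near theta0 = pi/2 and nearly vanishes near theta0 = 0
        print(breathing_line(np.pi / 2), breathing_line(0.0))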

  14. A simple element for multilayer beams in NASTRAN thermal stress analysis

    NASA Technical Reports Server (NTRS)

    Chen, W. T.; Wadhwa, S. K.

    1978-01-01

    In the application of NASTRAN, structural members are usually represented by bar elements with multipoint constraint cards to enforce the interface conditions. While this is a very powerful method in principle, it was found that in practice the process for specification of constraints became tedious and error prone, unless the geometry was simple and the number of grid points low. An alternative approach was found within the framework of the NASTRAN program. This approach made use of the idea that the thermal distortion of a multilayer beam may be similar to that of a homogeneous beam with a thermal gradient across the cross section. The exact mathematical derivation for the equivalent beam, and all the necessary formulae for the equivalent parameters in NASTRAN analysis, are presented. Some numerical examples illustrate the simplicity and ease of this approach for finite element analysis.
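    The report derives the exact equivalence; as a rough illustration only, the sketch below uses a standard transformed-section calculation (an assumption, not necessarily the report's derivation) to reduce a multilayer beam with uniform layer temperatures to modulus-weighted section properties, a free thermal curvature, and the linear through-thickness gradient that would give a homogeneous reference beam the same curvature.

```python
import numpy as np

# Hypothetical three-layer beam, per unit width; each layer = (E [Pa], alpha [1/K], t [m], dT [K])
layers = [(70e9, 23e-6, 0.002, 40.0),    # metal face sheet (values assumed)
          ( 3e9, 60e-6, 0.010, 25.0),    # core
          (70e9, 23e-6, 0.002, 10.0)]    # metal face sheet

E, alp, t, dT = (np.array(c) for c in zip(*layers))
A = t * 1.0                                   # area per unit width
y = np.cumsum(t) - t/2                        # layer centroid heights from the bottom surface

EA   = np.sum(E*A)
ybar = np.sum(E*A*y) / EA                     # modulus-weighted neutral axis
EI   = np.sum(E*(t**3/12 + A*(y - ybar)**2))  # equivalent bending stiffness

M_T   = np.sum(E*alp*dT*A*(y - ybar))         # thermal moment about the neutral axis
kappa = M_T / EI                              # free thermal curvature of the multilayer beam

alpha_ref = 23e-6                             # expansion coefficient chosen for the equivalent beam
grad_eq   = kappa / alpha_ref                 # linear gradient giving the same curvature
print(f"EI = {EI:.3e} N*m^2, curvature = {kappa:.3e} 1/m, equivalent gradient = {grad_eq:.1f} K/m")
```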

  15. Post-buckling and Large Amplitude Free Vibration Analysis of Composite Beams: Simple Intuitive Formulation

    NASA Astrophysics Data System (ADS)

    Gunda, Jagadish Babu; Venkateswara Rao, Gundabathula

    2016-04-01

    Post-buckling and large amplitude free vibration analysis of composite beams with axially immovable ends is investigated in the present study using a simple intuitive formulation. Geometric nonlinearity of the von Kármán type is considered in the analysis, which accounts for the mid-plane stretching action of the beam. The intuitive formulation uses only two parameters: the critical bifurcation point and the axial stretching force developed due to the membrane stretching action of the beam. Hinged-hinged, clamped-clamped and clamped-hinged boundary conditions are considered. The numerical accuracy of the proposed analytical closed-form solutions obtained from the intuitive formulation is assessed by comparison with available finite element solutions for symmetric and asymmetric layup schemes of laminated composite beams, which indicates the confidence that can be placed in the present formulation.

  16. A simple apparatus for quick qualitative analysis of CR39 nuclear track detectors

    SciTech Connect

    Gautier, D. C.; Kline, J. L.; Flippo, K. A.; Gaillard, S. A.; Letzring, S. A.; Hegelich, B. M.

    2008-10-15

    Quantifying the ion pits in Columbia Resin 39 (CR39) nuclear track detector from Thomson parabolas is a time consuming and tedious process using conventional microscope based techniques. A simple inventive apparatus for fast screening and qualitative analysis of CR39 detectors has been developed, enabling efficient selection of data for a more detailed analysis. The system consists simply of a green He-Ne laser and a high-resolution digital single-lens reflex camera. The laser illuminates the edge of the CR39 at grazing incidence and couples into the plastic, acting as a light pipe. Subsequently, the laser illuminates all ion tracks on the surface. A high-resolution digital camera is used to photograph the scattered light from the ion tracks, enabling one to quickly determine charge states and energies measured by the Thomson parabola.

  17. A simple apparatus for quick qualitative analysis of CR39 nuclear track detectors.

    PubMed

    Gautier, D C; Kline, J L; Flippo, K A; Gaillard, S A; Letzring, S A; Hegelich, B M

    2008-10-01

    Quantifying the ion pits in Columbia Resin 39 (CR39) nuclear track detector from Thomson parabolas is a time consuming and tedious process using conventional microscope based techniques. A simple inventive apparatus for fast screening and qualitative analysis of CR39 detectors has been developed, enabling efficient selection of data for a more detailed analysis. The system consists simply of a green He-Ne laser and a high-resolution digital single-lens reflex camera. The laser illuminates the edge of the CR39 at grazing incidence and couples into the plastic, acting as a light pipe. Subsequently, the laser illuminates all ion tracks on the surface. A high-resolution digital camera is used to photograph the scattered light from the ion tracks, enabling one to quickly determine charge states and energies measured by the Thomson parabola.

  18. Experimental and theoretical analysis of the performance of Stirling engine with pendulum type displacer

    SciTech Connect

    Isshiki, Seita; Isshiki, Naotsugu; Takanose, Eiichiro; Igawa, Yoshiharu

    1995-12-31

    This paper describes the detailed experimental and theoretical performance of a new type of Stirling engine with a pendulum-type displacer (PDSE), which was proposed last year. This kind of engine has a pendulum-type displacer suspended by a hinge shaft, which swings right and left in the displacer space. The present paper mainly discusses the PDSE-3B, an atmospheric 30 W engine heated by fuel and cooled by water. It is shown that the power required to drive the pendulum-type displacer motion is expressed as a simple equation consisting of a viscous flow loss term proportional to the square of rotational speed and a dynamic pressure loss term proportional to the cube of rotational speed. It is also shown that the theoretical engine power, defined as the difference between the experimental indicated power and the power required to drive the pendulum-type displacer motion, agrees well with the experimental engine power, and that the measured Nusselt number of the regenerator's wire meshes agrees with the equation of a previous study. In conclusion, the PDSE is considered effective for measuring many aspects of the performance of the Stirling engine.
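    The two-term loss model stated in the abstract, P(n) = a·n² (viscous flow loss) + b·n³ (dynamic pressure loss), is linear in the coefficients and can be fitted by least squares; the sketch below uses made-up speed/power pairs purely to illustrate the fit.

```python
import numpy as np

# Illustrative data only (rotational speed [rev/s] vs. measured displacer drive power [W])
n = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
P = np.array([0.06, 0.29, 0.70, 1.35, 2.30, 3.60])

X = np.column_stack([n**2, n**3])            # P = a*n^2 + b*n^3
(a, b), *_ = np.linalg.lstsq(X, P, rcond=None)
print(f"viscous term a = {a:.4f} W/(rev/s)^2, dynamic term b = {b:.4f} W/(rev/s)^3")
print("fitted power:", np.round(X @ np.array([a, b]), 3))
```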

  19. Simple preparation of plant epidermal tissue for laser microdissection and downstream quantitative proteome and carbohydrate analysis.

    PubMed

    Falter, Christian; Ellinger, Dorothea; von Hülsen, Behrend; Heim, René; Voigt, Christian A

    2015-01-01

    The outwardly directed cell wall and associated plasma membrane of epidermal cells represent the first layers of plant defense against intruding pathogens. Cell wall modifications and the formation of defense structures at sites of attempted pathogen penetration are decisive for plant defense. A precise isolation of these stress-induced structures would allow a specific analysis of regulatory mechanisms and cell wall adaptation. However, methods for large-scale epidermal tissue preparation from the model plant Arabidopsis thaliana, which would allow proteome and cell wall analysis of complete, laser-microdissected epidermal defense structures, have not been provided. We developed the adhesive tape-liquid cover glass technique (ACT) for simple leaf epidermis preparation from A. thaliana, which is also applicable to grass leaves. This method is compatible with subsequent staining techniques to visualize stress-related cell wall structures, which were precisely isolated from the epidermal tissue layer by laser microdissection (LM) coupled to laser pressure catapulting. We successfully demonstrated that these specific epidermal tissue samples could be used for quantitative downstream proteome and cell wall analysis. The development of the ACT for simple leaf epidermis preparation and its compatibility with LM and downstream quantitative analysis open new possibilities for the precise examination of stress- and pathogen-related cell wall structures in epidermal cells. Because the developed tissue processing is also applicable to A. thaliana, well-established model pathosystems that include the interaction with powdery mildews can be studied to determine principal regulatory mechanisms in plant-microbe interaction, with potential outreach into crop breeding.

  20. Bias in meta-analysis detected by a simple, graphical test.

    PubMed Central

    Egger, M.; Davey Smith, G.; Schneider, M.; Minder, C.

    1997-01-01

    OBJECTIVE: Funnel plots (plots of effect estimates against sample size) may be useful to detect bias in meta-analyses that were later contradicted by large trials. We examined whether a simple test of asymmetry of funnel plots predicts discordance of results when meta-analyses are compared to large trials, and we assessed the prevalence of bias in published meta-analyses. DESIGN: Medline search to identify pairs consisting of a meta-analysis and a single large trial (concordance of results was assumed if effects were in the same direction and the meta-analytic estimate was within 30% of the trial); analysis of funnel plots from 37 meta-analyses identified from a hand search of four leading general medicine journals 1993-6 and 38 meta-analyses from the second 1996 issue of the Cochrane Database of Systematic Reviews. MAIN OUTCOME MEASURE: Degree of funnel plot asymmetry as measured by the intercept from regression of standard normal deviates against precision. RESULTS: In the eight pairs of meta-analysis and large trial that were identified (five from cardiovascular medicine, one from diabetic medicine, one from geriatric medicine, one from perinatal medicine) there were four concordant and four discordant pairs. In all cases discordance was due to meta-analyses showing larger effects. Funnel plot asymmetry was present in three out of four discordant pairs but in none of the concordant pairs. In 14 (38%) journal meta-analyses and 5 (13%) Cochrane reviews, funnel plot asymmetry indicated that there was bias. CONCLUSIONS: A simple analysis of funnel plots provides a useful test for the likely presence of bias in meta-analyses, but as the capacity to detect bias will be limited when meta-analyses are based on a limited number of small trials, the results from such analyses should be treated with considerable caution. PMID:9310563
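    The asymmetry measure described here is the intercept from a regression of standard normal deviates (effect/SE) against precision (1/SE), i.e. Egger's test. A minimal sketch with invented trial data is shown below; it illustrates the test itself, not a reanalysis of the paper's meta-analyses.

```python
import numpy as np
from scipy import stats

# Invented meta-analysis data: per-trial effect estimates (log odds ratios) and standard errors
effect = np.array([-0.90, -0.45, -0.60, -0.15, -0.70, -0.25, -0.05, -0.55])
se     = np.array([ 0.45,  0.30,  0.40,  0.15,  0.50,  0.20,  0.10,  0.35])

z    = effect / se           # standard normal deviate of each trial
prec = 1.0 / se              # precision

res = stats.linregress(prec, z)          # the intercept measures funnel-plot asymmetry
n = len(z)
resid  = z - (res.intercept + res.slope*prec)
s2     = np.sum(resid**2) / (n - 2)
se_int = np.sqrt(s2*(1/n + prec.mean()**2/np.sum((prec - prec.mean())**2)))
t_stat = res.intercept / se_int
p      = 2*stats.t.sf(abs(t_stat), df=n - 2)
print(f"intercept = {res.intercept:.2f}, t = {t_stat:.2f}, p = {p:.3f} (small p suggests asymmetry)")
```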

  1. Direct power comparisons between simple LOD scores and NPL scores for linkage analysis in complex diseases.

    PubMed

    Abreu, P C; Greenberg, D A; Hodge, S E

    1999-09-01

    Several methods have been proposed for linkage analysis of complex traits with unknown mode of inheritance. These methods include the LOD score maximized over disease models (MMLS) and the "nonparametric" linkage (NPL) statistic. In previous work, we evaluated the increase of type I error when maximizing over two or more genetic models, and we compared the power of MMLS to detect linkage, in a number of complex modes of inheritance, with analysis assuming the true model. In the present study, we compare MMLS and NPL directly. We simulated 100 data sets with 20 families each, using 26 generating models: (1) 4 intermediate models (penetrance of heterozygote between that of the two homozygotes); (2) 6 two-locus additive models; and (3) 16 two-locus heterogeneity models (admixture alpha = 1.0, 0.7, 0.5, and 0.3; alpha = 1.0 replicates simple Mendelian models). For LOD scores, we assumed dominant and recessive inheritance with 50% penetrance. We took the higher of the two maximum LOD scores and subtracted 0.3 to correct for multiple tests (MMLS-C). We compared expected maximum LOD scores and power, using MMLS-C and NPL as well as the true model. Since NPL uses only the affected family members, we also performed an affecteds-only analysis using MMLS-C. The MMLS-C was uniformly more powerful than NPL for most cases we examined, except when linkage information was low, and was close to the results for the true model under locus heterogeneity. We still found better power for the MMLS-C compared with NPL in affecteds-only analysis. The results show that use of two simple modes of inheritance at a fixed penetrance can have more power than NPL when the trait mode of inheritance is complex and when there is heterogeneity in the data set.
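    For readers unfamiliar with the building block being maximized, the sketch below computes a two-point LOD score for phase-known, fully informative meioses under an assumed recombination fraction; it is a simplified illustration, not the MMLS-C or NPL machinery evaluated in the paper.

```python
import numpy as np

def lod(theta, n_meioses, n_recomb):
    """Two-point LOD score for phase-known, fully informative meioses (simplified illustration)."""
    like_theta = theta**n_recomb * (1 - theta)**(n_meioses - n_recomb)
    like_null  = 0.5**n_meioses
    return np.log10(like_theta / like_null)

thetas = np.linspace(0.01, 0.5, 50)
scores = [lod(th, n_meioses=20, n_recomb=2) for th in thetas]
best = thetas[int(np.argmax(scores))]
print(f"max LOD = {max(scores):.2f} at theta = {best:.2f}")
# MMLS maximizes such scores over assumed dominant and recessive models and,
# in the MMLS-C variant, subtracts 0.3 to correct for testing two models.
```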

  2. The Probabilistic Analysis of Language Acquisition: Theoretical, Computational, and Experimental Analysis

    ERIC Educational Resources Information Center

    Hsu, Anne S.; Chater, Nick; Vitanyi, Paul M. B.

    2011-01-01

    There is much debate over the degree to which language learning is governed by innate language-specific biases, or acquired through cognition-general principles. Here we examine the probabilistic language acquisition hypothesis on three levels: We outline a novel theoretical result showing that it is possible to learn the exact "generative model"…

  3. Blade loss transient dynamics analysis, volume 1. Task 2: TETRA 2 theoretical development

    NASA Technical Reports Server (NTRS)

    Gallardo, Vincente C.; Black, Gerald

    1986-01-01

    The theoretical development of the forced steady state analysis of the structural dynamic response of a turbine engine having nonlinear connecting elements is discussed. Based on modal synthesis and the principle of harmonic balance, the governing relations are the compatibility of displacements at the nonlinear connecting elements. There are four displacement compatibility equations at each nonlinear connection, which are solved by iteration for the principal harmonic of the excitation frequency. The resulting computer program, TETRA 2, combines the original TETRA transient analysis (with flexible bladed disk) with the steady state capability. A more versatile nonlinear rub or bearing element, which contains a hardening (or softening) spring, with or without deadband, is also incorporated.

  4. SPARTA: Simple Program for Automated reference-based bacterial RNA-seq Transcriptome Analysis.

    PubMed

    Johnson, Benjamin K; Scholz, Matthew B; Teal, Tracy K; Abramovitch, Robert B

    2016-02-04

    Many tools exist in the analysis of bacterial RNA sequencing (RNA-seq) transcriptional profiling experiments to identify differentially expressed genes between experimental conditions. Generally, the workflow includes quality control of reads, mapping to a reference, counting transcript abundance, and statistical tests for differentially expressed genes. In spite of the numerous tools developed for each component of an RNA-seq analysis workflow, easy-to-use bacterially oriented workflow applications to combine multiple tools and automate the process are lacking. With many tools to choose from for each step, the task of identifying a specific tool, adapting the input/output options to the specific use-case, and integrating the tools into a coherent analysis pipeline is not a trivial endeavor, particularly for microbiologists with limited bioinformatics experience. To make bacterial RNA-seq data analysis more accessible, we developed a Simple Program for Automated reference-based bacterial RNA-seq Transcriptome Analysis (SPARTA). SPARTA is a reference-based bacterial RNA-seq analysis workflow application for single-end Illumina reads. SPARTA is turnkey software that simplifies the process of analyzing RNA-seq data sets, making bacterial RNA-seq analysis a routine process that can be undertaken on a personal computer or in the classroom. The easy-to-install, complete workflow processes whole transcriptome shotgun sequencing data files by trimming reads and removing adapters, mapping reads to a reference, counting gene features, calculating differential gene expression, and, importantly, checking for potential batch effects within the data set. SPARTA outputs quality analysis reports, gene feature counts and differential gene expression tables and scatterplots. SPARTA provides an easy-to-use bacterial RNA-seq transcriptional profiling workflow to identify differentially expressed genes between experimental conditions. This software will enable microbiologists with

  5. [The properties of simple medicines according to Avicenna (980-1037): analysis of some sections of the Canon].

    PubMed

    Ayari-Lassueur, Sylvie

    2012-01-01

    Avicenna spoke on pharmacology in several works, and this article considers his discussions in the Canon, a vast synthesis of the greco-arabian medicine of his time. More precisely, it focuses on book II, which treats simple medicines. This text makes evident that the Persian physician's central preoccupation was the efficacy of the treatment, since it concentrates on the properties of medicines. In this context, the article examines their different classifications and related topics, such as the notion of temperament, central to Avicenna's thought, and the concrete effects medicines have on the body. Yet, these theoretical notions only have sense in practical application. For Avicenna, medicine is both a theoretical and a practical science. For this reason, the second book of the Canon ends with an imposing pharmacopoeia, where the properties described theoretically at the beginning of the book appear in the list of simple medicines, so that the physician can select them according to the intended treatment's goals. The article analyzes a plant from this pharmacopoeia as an example of this practical application, making evident the logic Avicenna uses in detailing the different properties of each simple medicine.

  6. Creativity in theoretical physics: A situational analysis of the fifth Solvay Council on Physics, 1927

    NASA Astrophysics Data System (ADS)

    First, Leili K.

    This dissertation investigates the intersections and interactions of factors which enhance and inhibit creativity in theoretical physics research, using a situational analysis of the fifth Solvay Council on Physics of 1927 (Solvay 1927), a pivotal point in the history of quantum physics. Situational analysis is a postmodern variant of the grounded theory method which views a situation as the unit of analysis and adds situational mapping as an analytic tool. This method specifically works against normalizing or simplifying the points of view, instead drawing out diversity, complexity, and contradiction. It results in "theorizing" rather than theory. This research differs from other analyses of the development of quantum mechanics in looking at technical issues as well as individual, collective, and societal factors. Data examined in this historical analysis includes theoretical papers, conference proceedings, personal letters, and commentary and analysis, both contemporaneous and modern. Literature related to scientific creativity was also consulted. Mapping the situation as a master discourse of Niels Bohr overlapping and interacting with co-existent major discourses on matrix mechanics/Copenhagen interpretation, wave mechanics, and the pilot-wave theory resulted in the most descriptive illustration of the factors influencing scientific creativity before and after Solvay 1927. The master discourse strongly influenced the major discourses and generated the "Copenhagen spirit" which effectively marginalized discourses other than matrix mechanics/Copenhagen interpretation after Solvay 1927.

  7. Security Analysis of Smart Grid Cyber Physical Infrastructures Using Modeling and Game Theoretic Simulation

    SciTech Connect

    Abercrombie, Robert K; Sheldon, Frederick T.

    2015-01-01

    Cyber physical computing infrastructures typically consist of a number of interconnected sites. Their operation depends critically on both cyber components and physical components. Both types of components are subject to attacks of different kinds and frequencies, which must be accounted for in the initial provisioning and subsequent operation of the infrastructure via information security analysis. Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the electric sector failure scenarios and impact analyses produced by the NESCOR Working Group Study. From the Section 5 electric sector representative failure scenarios, we extracted the four generic failure scenarios and grouped them into three specific threat categories (confidentiality, integrity, and availability) to the system. These specific failure scenarios serve as a demonstration of our simulation. The analysis using our ABGT simulation demonstrates how to model the electric sector functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the cyber physical infrastructure network with respect to confidentiality, integrity, and availability.

  8. Elastic Cherenkov effects in transversely isotropic soft materials-I: Theoretical analysis, simulations and inverse method

    NASA Astrophysics Data System (ADS)

    Li, Guo-Yang; Zheng, Yang; Liu, Yanlin; Destrade, Michel; Cao, Yanping

    2016-11-01

    A body force concentrated at a point and moving at a high speed can induce shear-wave Mach cones in dusty-plasma crystals or soft materials, as observed experimentally and named the elastic Cherenkov effect (ECE). The ECE in soft materials forms the basis of the supersonic shear imaging (SSI) technique, an ultrasound-based dynamic elastography method applied in clinics in recent years. Previous studies on the ECE in soft materials have focused on isotropic material models. In this paper, we investigate the existence and key features of the ECE in anisotropic soft media, by using both theoretical analysis and finite element (FE) simulations, and we apply the results to the non-invasive and non-destructive characterization of biological soft tissues. We also theoretically study the characteristics of the shear waves induced in a deformed hyperelastic anisotropic soft material by a source moving with high speed, considering that contact between the ultrasound probe and the soft tissue may lead to finite deformation. On the basis of our theoretical analysis and numerical simulations, we propose an inverse approach to infer both the anisotropic and hyperelastic parameters of incompressible transversely isotropic (TI) soft materials. Finally, we investigate the properties of the solutions to the inverse problem by deriving the condition numbers in analytical form and performing numerical experiments. In Part II of the paper, both ex vivo and in vivo experiments are conducted to demonstrate the applicability of the inverse method in practical use.

  9. An Experimental-Theoretical Analysis of Protein Adsorption on Peptidomimetic Polymer Brushes

    PubMed Central

    Lau, K.H. Aaron; Ren, Chunlai; Park, Sung Hyun; Szleifer, Igal; Messersmith, Phillip B.

    2012-01-01

    Surface-grafted water soluble polymer brushes are being intensely investigated for preventing protein adsorption to improve biomedical device function, prevent marine fouling, and enable applications in biosensing and tissue engineering. In this contribution, we present an experimental-theoretical analysis of a peptidomimetic polymer brush system with regard to the critical brush density required for preventing protein adsorption at varying chain lengths. A mussel adhesive-inspired DOPA-Lys pentapeptide surface grafting motif enabled aqueous deposition of our peptidomimetic polypeptoid brushes over a wide range of chain densities. Critical densities of 0.88 nm⁻² for a relatively short polypeptoid 10-mer to 0.42 nm⁻² for a 50-mer were identified from measurements of protein adsorption. The experiments were also compared with the protein adsorption isotherms predicted by a molecular theory. Excellent agreement in terms of both the polymer brush structure and the critical chain density was obtained. Furthermore, atomic force microscopy (AFM) imaging is shown to be useful in verifying the critical brush density for preventing protein adsorption. The present co-analysis of experimental and theoretical results demonstrates the significance of characterizing the critical brush density in evaluating the performance of an anti-fouling polymer brush system. The high fidelity of the agreement between the experiments and molecular theory also indicates that the theoretical approach presented can aid in the practical design of antifouling polymer brush systems. PMID:22107438

  10. Discussion on information theoretic and simulation analysis of linear shift-invariant edge detection operators

    NASA Astrophysics Data System (ADS)

    Jiang, Bo

    2013-02-01

    Generally, the designs of digital image processing algorithms and image gathering devices remain separate. However, experiments show that the image gathering process profoundly impacts the performance of digital image processing and the quality of the resulting images. We proposed an end-to-end information theory based system to assess linear shift-invariant edge detection algorithms, where the different parts, such as scene, image gathering, and processing, are assessed in an integrated manner using Shannon's information theory. We evaluated the performance of the different algorithms as a function of the characteristics of the scene and the parameters, such as sampling and additive noise, that define the image gathering system. The edge detection algorithm is regarded as having high performance only if the information rate from the scene to the edge image approaches its maximum possible value. This goal can be achieved only by jointly optimizing all processes. To validate our information-theoretic conclusions, a series of experiments simulating the whole image acquisition process was conducted. Comparison and discussion of the theoretical and simulation analyses lead to the conclusion that the proposed information-theoretic assessment provides a new tool that allows us to compare different linear shift-invariant edge detectors in a common environment.
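    The figure of merit here is the information rate from the scene to the edge image. As a crude, self-contained stand-in for the paper's end-to-end Shannon analysis (an assumption, not the authors' estimator), the sketch below estimates the mutual information between a synthetic scene and an edge map of its noisy acquisition from a joint histogram.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based estimate of I(A;B) in bits between two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log2(p_ab[nz] / (p_a @ p_b)[nz])))

rng = np.random.default_rng(0)
scene = np.kron(rng.integers(0, 4, (16, 16)).astype(float), np.ones((8, 8)))   # blocky synthetic scene
acquired = scene + rng.normal(0, 0.5, scene.shape)                             # image-gathering noise
edges = np.abs(np.gradient(acquired, axis=0)) + np.abs(np.gradient(acquired, axis=1))
print(f"I(scene; edge image) ~ {mutual_information(scene, edges):.2f} bits")
```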

  11. Asynchronous cellular automaton-based neuron: theoretical analysis and on-FPGA learning.

    PubMed

    Matsubara, Takashi; Torikai, Hiroyuki

    2013-05-01

    A generalized asynchronous cellular automaton-based neuron model is a special kind of cellular automaton that is designed to mimic the nonlinear dynamics of neurons. The model can be implemented as an asynchronous sequential logic circuit and its control parameter is the pattern of wires among the circuit elements that is adjustable after implementation in a field-programmable gate array (FPGA) device. In this paper, a novel theoretical analysis method for the model is presented. Using this method, stabilities of neuron-like orbits and occurrence mechanisms of neuron-like bifurcations of the model are clarified theoretically. Also, a novel learning algorithm for the model is presented. An equivalent experiment shows that an FPGA-implemented learning algorithm enables an FPGA-implemented model to automatically reproduce typical nonlinear responses and occurrence mechanisms observed in biological and model neurons.

  12. Theoretical and experimental analysis of the lubricating system of a high speed multiplier

    NASA Astrophysics Data System (ADS)

    Marian, V. G.; Mirică, R. F.; Prisecaru, T.

    2017-02-01

    Flywheel-based energy storage systems store energy in the form of kinetic energy using a flywheel rotating at high speed. In order to achieve this high rotational speed, a high speed multiplier can be used to increase the rotation speed of a conventional motor. This article presents a theoretical and experimental analysis of the lubricating system of a high speed multiplier used in a flywheel-based energy storage system. The necessary oil flow is theoretically computed using analytical formulas. The oil is used for lubricating the gears, the roller bearings, and the sliding bearings. An experimental test rig is used to measure the oil flow. Finally, the two results are compared.

  13. Experimental and theoretical oscillator strengths of Mg i for accurate abundance analysis

    NASA Astrophysics Data System (ADS)

    Pehlivan Rhodin, A.; Hartman, H.; Nilsson, H.; Jönsson, P.

    2017-02-01

    Context. With the aid of stellar abundance analysis, it is possible to study Galactic formation and evolution. Magnesium is an important element for tracing the α-element evolution in our Galaxy. For chemical abundance analysis, such as magnesium abundance, accurate and complete atomic data are essential. Inaccurate atomic data lead to uncertain abundances and prevent discrimination between different evolution models. Aims: We study the spectrum of neutral magnesium from laboratory measurements and theoretical calculations. Our aim is to improve the oscillator strengths (f-values) of Mg i lines and to create a complete set of accurate atomic data, particularly for the near-IR region. Methods: We derived oscillator strengths by combining the experimental branching fractions with radiative lifetimes reported in the literature and computed in this work. A hollow cathode discharge lamp was used to produce free atoms in the plasma, and a Fourier transform spectrometer recorded the intensity-calibrated high-resolution spectra. In addition, we performed theoretical calculations using the multiconfiguration Hartree-Fock program ATSP2K. Results: This project provides a set of experimental and theoretical oscillator strengths. We derived 34 experimental oscillator strengths. Except for the Mg i optical triplet lines (3p 3P°0,1,2-4s 3S1), these oscillator strengths are measured for the first time. The theoretical oscillator strengths are in very good agreement with the experimental data and complement the missing transitions of the experimental data up to n = 7 from even and odd parity terms. We present an evaluated set of oscillator strengths, gf, with uncertainties as small as 5%. The new oscillator strengths of the Mg i optical triplet lines (3p 3P°0,1,2-4s 3S1) are 0.08 dex larger than the previous measurements.
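    The experimental f-values come from combining branching fractions with upper-level radiative lifetimes; a minimal sketch of that conversion step is shown below, using the standard A-to-gf relation (λ in Å, A in s⁻¹) with placeholder numbers that are not the measured Mg i values.

```python
import math

tau_u = 9.7e-9                 # assumed upper-level radiative lifetime [s]
g_u = 3                        # statistical weight of the upper level (here 4s 3S1)
lines = [(5183.6, 0.56),       # (wavelength [Angstrom], branching fraction) - placeholder values
         (5172.7, 0.33),
         (5167.3, 0.11)]

for lam, bf in lines:
    A = bf / tau_u                               # transition probability A_ul [1/s]
    gf = 1.4992e-16 * lam**2 * g_u * A           # g_l * f_lu, standard A-to-gf conversion
    print(f"lambda = {lam:.1f} A   A = {A:.3e} s^-1   log gf = {math.log10(gf):+.3f}")
```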

  14. A simple and compact fluorescence detection system for capillary electrophoresis and its application to food analysis.

    PubMed

    Zhai, Haiyun; Yuan, Kaisong; Yu, Xiao; Chen, Zuanguang; Liu, Zhenping; Su, Zihao

    2015-10-01

    A novel fluorescence detection system for CE was described and evaluated. Two miniature laser pointers were used as the excitation source. A Y-style optical fiber was used to transmit the excitation light and a four-branch optical fiber was used to collect the fluorescence. The optical fiber and optical filter were imported into a photomultiplier tube without any extra fixing device. A simplified PDMS detection cell was designed with guide channels through which the optical fibers were easily aligned to the detection window of separation capillary. According to different requirements, laser pointers and different filters were selected by simple switching and replacement. The fluorescence from four different directions was collected at the same detecting point. Thus, the sensitivity was enhanced without peak broadening. The fluorescence detection system was simple, compact, low-cost, and highly sensitive, with its functionality demonstrated by the separation and determination of red dyes and fluorescent whitening agents. The detection limit of rhodamine 6G was 7.7 nM (S/N = 3). The system was further applied to determine illegal food dyes. The CE system is potentially eligible for food safety analysis. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Computer-assisted immunohistochemical analysis of cervical cancer biomarkers using low-cost and simple software.

    PubMed

    Hammes, Luciano Serpa; Korte, Jeffrey E; Tekmal, Rajeshwar Rao; Naud, Paulo; Edelweiss, Maria Isabel; Valente, Philip T; Longatto-Filho, Adhemar; Kirma, Nameer; Cunha-Filho, João Sabino

    2007-12-01

    The study of biomarkers by immunohistochemistry (IHC) for cervical cancer and intraepithelial lesions is a promising field. However, manual interpretation of IHC and reproducibility of the scoring systems can be highly subjective. In this article, we present a novel and simple computer-assisted IHC interpretation method based on cyan-magenta-yellow-black (CMYK) color format, for tissues with diaminobenzidine cytoplasmatic staining counterstained with methyl green. This novel method is more easily interpreted than previous computer-assisted methods based on red-green-blue (RGB) color format and presents a strong correlation with the manual H-score. It is simple, objective, and requires only low-cost software and minimal computer skills. Briefly, a total of 67 microscopic images of cervical carcinoma, normal cervix, and negative controls were analyzed in Corel Photo Paint X3 software in CMYK and RGB color format, and compared with manual H-score IHC assessments. The clearest and best positive correlation with the H-score was obtained using the image in CMYK color format and crude values of magenta color (Spearman correlation coefficient=0.84; agreement of 93.33%, P<0.001). To obtain this value, only 3 steps were necessary: convert the image to CMYK format, select the area of interest for analysis, and open the color histogram tool to visualize the magenta value.
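    The key processing step is converting the image to CMYK and reading out the magenta channel as a proxy for diaminobenzidine staining. The sketch below uses a naive, profile-free RGB-to-CMYK conversion (an assumption; Corel Photo Paint applies a color profile, so its numbers will differ) to show why green-counterstained pixels contribute essentially no magenta.

```python
import numpy as np

def magenta_channel(rgb):
    """Naive, profile-free RGB -> CMYK conversion; returns the magenta channel in [0, 1]."""
    rgb = np.asarray(rgb, dtype=float) / 255.0
    k = 1.0 - rgb.max(axis=-1)
    with np.errstate(invalid="ignore", divide="ignore"):
        m = (1.0 - rgb[..., 1] - k) / (1.0 - k)
    return np.nan_to_num(m)                      # pure black pixels -> magenta 0

# Toy 2x2 image: brown DAB-like pixels (top row) vs. green methyl-green-like pixels (bottom row)
img = np.array([[[139,  90,  43], [139,  90,  43]],
                [[120, 190, 120], [120, 190, 120]]], dtype=np.uint8)
m = magenta_channel(img)
print("per-pixel magenta:\n", np.round(m, 3))
print("mean magenta (crude staining score):", round(float(m.mean()), 3))
```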

  16. A Simple, Approximate Method for Analysis of Kerr-Newman Black Hole Dynamics and Thermodynamics

    NASA Astrophysics Data System (ADS)

    Pankovic, V.; Ciganovic, S.; Glavatovic, R.

    2009-06-01

    In this work we present a simple approximate method for analysis of the basic dynamical and thermodynamical characteristics of Kerr-Newman black hole. Instead of the complete dynamics of the black hole self-interaction, we consider only the stable (stationary) dynamical situations determined by condition that the black hole (outer) horizon "circumference" holds the integer number of the reduced Compton wave lengths corresponding to mass spectrum of a small quantum system (representing the quantum of the black hole self-interaction). Then, we show that Kerr-Newman black hole entropy represents simply the ratio of the sum of static part and rotation part of the mass of black hole on one hand, and the ground mass of small quantum system on the other hand. Also we show that Kerr-Newman black hole temperature represents the negative value of the classical potential energy of gravitational interaction between a part of black hole with reduced mass and a small quantum system in the ground mass quantum state. Finally, we suggest a bosonic great canonical distribution of the statistical ensemble of given small quantum systems in the thermodynamical equilibrium with (macroscopic) black hole as thermal reservoir. We suggest that, practically, only the ground mass quantum state is significantly degenerate while all the other, excited mass quantum states, are non-degenerate. Kerr-Newman black hole entropy is practically equivalent to the ground mass quantum state degeneration. Given statistical distribution admits a rough (qualitative) but simple modeling of Hawking radiation of the black hole too.

  17. Analysis of the electrochemical behaviour of polymer electrolyte fuel cells using simple impedance models

    NASA Astrophysics Data System (ADS)

    Danzer, Michael A.; Hofer, Eberhard P.

    The analysis of the electrochemical behaviour of polymer electrolyte fuel cells (PEFC) in both the time and frequency domains requires appropriate impedance models. Simple impedance models with lumped parameters such as resistances, capacitances, or Warburg impedances do have limitations: the validity is often limited to a certain frequency range, and effects at very low or very high frequencies cannot be described properly. However, these models have their usefulness for engineering applications, e.g. to distinguish the major loss terms, to estimate the membrane resistance, and to observe the changes of internal losses of fuel cells over time without the need for additional sensors. The work discusses different impedance configurations and their applicability to impedance spectra of a fuel cell stack. Impedance spectra at points along the DC polarization curve, as well as spectra at various operating conditions, are analysed and identified by a complex nonlinear least squares method. Finally, the connection of the impedance data with, and the assignment of the parameters to, physical phenomena are discussed. The examination shows that simple impedance models are well suited to describe the electrochemical behaviour over a wide frequency range at all operating conditions.
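    One of the simplest lumped-parameter configurations of this kind is a membrane resistance in series with a charge-transfer resistance in parallel with a double-layer capacitance; the sketch below generates such a spectrum with assumed parameter values (not fitted to the stack in the paper), in a form ready for complex nonlinear least-squares identification.

```python
import numpy as np

# Z(w) = R_mem + R_ct / (1 + j*w*R_ct*C_dl); parameter values are assumed for illustration
R_mem, R_ct, C_dl = 5e-3, 20e-3, 0.8        # ohm, ohm, farad

f = np.logspace(-1, 4, 200)                 # 0.1 Hz ... 10 kHz
w = 2*np.pi*f
Z = R_mem + R_ct / (1 + 1j*w*R_ct*C_dl)

print(f"high-frequency limit ~ {Z[-1].real*1e3:.2f} mOhm (membrane resistance)")
print(f"low-frequency limit  ~ {Z[0].real*1e3:.2f} mOhm (membrane + charge-transfer resistance)")
nyquist = np.column_stack([Z.real, -Z.imag])   # (Re Z, -Im Z) pairs for a Nyquist plot / CNLS fit
```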

  18. A simple method for the analysis of neutron resonance capture spectra

    SciTech Connect

    Clarijs, Martijn C.; Bom, Victor R.; Eijk, Carel W. E. van

    2009-03-15

    Neutron resonance capture analysis (NRCA) is a method used to determine the bulk composition of various kinds of objects and materials. It is based on analyzing direct capture resonance peaks. However, the analysis is complicated by scattering followed by capture effects in the object itself. These effects depend on the object's shape and size. In this paper the new Delft elemental analysis program (DEAP) is presented which can automatically and quickly analyze multiple NRCA spectra in a practical and simple way, yielding the elemental bulk composition of an object, largely independent of its shape and size. The DEAP method is demonstrated with data obtained with a Roman bronze water tap excavated in Nijmegen (The Netherlands). DEAP will also be used in the framework of the Ancient Charm project as data analysis program for neutron resonance capture imaging (NRCI) experiments. NRCI provides three-dimensional visualization and quantification of the internal structure of archaeological objects by performing scanning measurements with narrowly collimated neutron beams on archaeological objects in computed tomography based experimental setups. The large amounts (hundreds to thousands) of spectra produced during a NRCI experiment can automatically and quickly be analyzed by DEAP.

  19. Quarkonium production at the LHC: A data-driven analysis of remarkably simple experimental patterns

    NASA Astrophysics Data System (ADS)

    Faccioli, Pietro; Lourenço, Carlos; Araújo, Mariana; Knünz, Valentin; Krätschmer, Ilse; Seixas, João

    2017-10-01

    The LHC quarkonium production data reveal a startling observation: the J/ψ, ψ(2S), χc1, χc2 and ϒ(nS) pT-differential cross sections in the central rapidity region are compatible with one universal momentum scaling pattern. Considering also the absence of strong polarizations of directly and indirectly produced S-wave mesons, we conclude that there is currently no evidence of a dependence of the partonic production mechanisms on the quantum numbers and mass of the final state. The experimental observations supporting this universal production scenario are remarkably significant, as shown by a new analysis approach, unbiased by specific theoretical calculations of partonic cross sections, which are only considered a posteriori, in comparisons with the data-driven results.

  20. A simple method to 'point count' silt using scanning electron microscopy aided by image analysis.

    PubMed

    Ware, C I

    2003-11-01

    This study presents a simple method to 'point count' silt-sized grains using backscattered scanning electron microscopy together with image analysis. The work materialized out of the need to determine the heavy mineral abundance within silt obtained from coastal dunes to aid in the interpretation of dune weathering. This technique allows two broad mineral groups to be quantified according to their modal abundance. The groups are characterized by their dominant atomic elements present; atomic numbers >20 are classified as 'high' (metal oxides, zircon, monazite, carbonates, pyroxenes and amphiboles) and those <20 as 'low' (quartz, feldspars and organics). As a check on this technique, X-ray fluorescence was used. This showed a strong positive correlation (r2=0.85) with the developed point counting technique.
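    The classification into 'high' and 'low' mean-atomic-number groups rests on backscattered-electron brightness; the toy sketch below thresholds a synthetic 8-bit BSE image and reports the modal abundance of the bright fraction. The threshold value and the synthetic image are assumptions for illustration only.

```python
import numpy as np

def modal_abundance(bse_image, threshold):
    """Fraction of pixels whose backscatter brightness exceeds `threshold` (bright ~ high mean Z)."""
    return float((np.asarray(bse_image, dtype=float) > threshold).mean())

rng = np.random.default_rng(3)
img = rng.normal(90, 10, (200, 200))            # quartz/feldspar-like grey background
img[rng.random((200, 200)) < 0.03] = 220        # ~3% bright "heavy mineral" pixels (assumed)
frac = modal_abundance(np.clip(img, 0, 255), threshold=160)
print(f"'high' (Z > 20) modal abundance ~ {100*frac:.1f}%")
```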

  1. Entropy analysis reveals a simple linear relation between laser speckle and blood flow.

    PubMed

    Miao, Peng; Chao, Zhen; Zhang, Yiguang; Li, Nan; Thakor, Nitish V

    2014-07-01

    Dynamic laser speckles contain motion information of scattering particles, which can be estimated by laser speckle contrast analysis (LASCA). In this work, an entropy-based method was proposed to provide a more robust estimation of motion speed. An in vitro flow simulation experiment confirmed a simple linear relation between entropy, exposure time, and speed. A multimodality optical imaging setup was developed to validate the advantages of the entropy method based on laser speckle imaging, green light imaging, and fluorescence imaging. The entropy method outperforms traditional LASCA, with less noise interference, and extracts more visible and detailed vasculature in vivo. Furthermore, the entropy method provides a more accurate estimation and a stable pattern of blood flow activations in the rat's somatosensory area under multitrial hand-paw stimulations.
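    The estimator is the Shannon entropy of the speckle intensity distribution; a minimal sketch is shown below on synthetic speckle, where temporal averaging stands in (as an assumption) for the blurring produced by faster flow during a finite exposure.

```python
import numpy as np

def speckle_entropy(img, bins=64):
    """Shannon entropy (bits) of the intensity histogram of a speckle image."""
    hist, _ = np.histogram(img.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p*np.log2(p)).sum())

rng = np.random.default_rng(1)
static = rng.exponential(1.0, (256, 256))                              # fully developed speckle
moving = (static + rng.exponential(1.0, (5, 256, 256)).mean(0)) / 2    # exposure-time averaging ~ motion
print("entropy (static):", round(speckle_entropy(static), 3))
print("entropy (moving):", round(speckle_entropy(moving), 3))
```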

  2. PyKE: Reduction and analysis of Kepler Simple Aperture Photometry data

    NASA Astrophysics Data System (ADS)

    Still, Martin; Barclay, Tom

    2012-08-01

    PyKE is a python-based PyRAF package that can also be run as a stand-alone program within a unix-based shell without compiling against PyRAF. It is a group of tasks developed for the reduction and analysis of Kepler Simple Aperture Photometry (SAP) data of individual targets with individual characteristics. The main purposes of these tasks are to i) re-extract light curves from manually-chosen pixel apertures and ii) cotrend and/or detrend the data in order to reduce or remove systematic noise structure using methods tunable to user and target-specific requirements. PyKE is an open source project and contributions of new tasks or enhanced functionality of existing tasks by the community are welcome.

  3. Poly: a quantitative analysis tool for simple sequence repeat (SSR) tracts in DNA

    PubMed Central

    Bizzaro, Jeff W; Marx, Kenneth A

    2003-01-01

    Background: Simple sequence repeats (SSRs), microsatellites, or polymeric sequences are common in DNA and are important biologically. From mononucleotide to trinucleotide repeats and beyond, they can be found in long (> 6 repeating units) tracts and may be characterized by quantifying the frequencies in which they are found and their tract lengths. However, most of the existing computer programs that find SSR tracts do not include these methods. Results: A computer program named Poly has been written not only to find SSR tracts but also to analyze the results quantitatively. Conclusions: Poly is significant in its use of non-standard, quantitative methods of analysis. With its flexible object model and data structure, Poly and its generated data can be used for even more sophisticated analyses. PMID:12791171
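    Poly's exact algorithm is not reproduced here; the sketch below shows a common regex-based way (an assumption, not Poly's implementation) to find perfect SSR tracts and report their unit and tract length, the quantities the abstract says should be tabulated.

```python
import re

def find_ssrs(seq, max_unit=6, min_repeats=6):
    """Find perfect SSR tracts (unit length 1..max_unit, at least min_repeats units) in a DNA string."""
    seq = seq.upper()
    hits = []
    for unit_len in range(1, max_unit + 1):
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (unit_len, min_repeats - 1))
        for m in pattern.finditer(seq):
            hits.append((m.start(), m.group(1), len(m.group(0)) // unit_len))
    return hits

demo = "GGCC" + "AT"*8 + "TTGC" + "A"*10 + "CGTA"       # toy sequence with (AT)8 and (A)10 tracts
for start, unit, n in find_ssrs(demo):
    print(f"position {start:3d}  unit {unit:<6s} x {n}")
```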

  4. In-silico analysis of simple and imperfect microsatellites in diverse tobamovirus genomes.

    PubMed

    Alam, Chaudhary Mashhood; Singh, Avadhesh Kumar; Sharfuddin, Choudhary; Ali, Safdar

    2013-11-10

    An in-silico analysis of simple sequence repeats (SSRs) in 30 species of tobamoviruses was done. SSRs (mono to hexa) were present with variant frequency across species. Compound microsatellites, primarily of variant motifs accounted for up to 11.43% of the SSRs. Motif duplications were observed for A, T, AT, and ACA repeats. (AG)-(TC) was the most prevalent SSR-couple. SSRs were differentially localized in the coding region with ~54% on the 128 kDa protein while 20.37% was exclusive to 186 kDa protein. Characterization of such variations is important for elucidating the origin, sequence variations, and structure of these widely used, but incompletely understood sequences.

  5. Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity.

    PubMed

    Li, Harbin; McNulty, Steven G

    2007-10-01

    Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL estimates to the national scale could be developed. Specifically, we wanted to quantify CAL uncertainty under natural variability in 17 model parameters, and determine their relative contributions in predicting CAL. Results indicated that uncertainty in CAL came primarily from components of base cation weathering (BC(w); 49%) and acid neutralizing capacity (46%), whereas the most critical parameters were BC(w) base rate (62%), soil depth (20%), and soil temperature (11%). Thus, improvements in estimates of these factors are crucial to reducing uncertainty and successfully scaling up SMBE for national assessments of CAL.
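    The propagation of parameter uncertainty through a simple mass balance is straightforward to sketch with Monte Carlo sampling. The cut-down four-term balance and the input distributions below are placeholders (the actual SMBE in the paper has 17 parameters), intended only to show how per-parameter contributions to CAL variance can be ranked.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Placeholder inputs [keq ha^-1 yr^-1]; distributions are assumed, not the paper's
inputs = {
    "BC_dep": rng.normal(0.30, 0.05, N),              # base cation deposition
    "BC_w":   rng.lognormal(np.log(0.8), 0.4, N),     # base cation weathering
    "BC_u":   rng.normal(0.25, 0.05, N),              # net base cation uptake
    "ANC_le": rng.normal(0.20, 0.08, N),              # critical ANC leaching
}
cal = inputs["BC_dep"] + inputs["BC_w"] - inputs["BC_u"] - inputs["ANC_le"]   # cut-down SMBE

print(f"CAL mean = {cal.mean():.2f}, 5-95% range = "
      f"[{np.percentile(cal, 5):.2f}, {np.percentile(cal, 95):.2f}]")
for name, x in inputs.items():                        # crude importance ranking
    r = np.corrcoef(x, cal)[0, 1]
    print(f"{name:7s} ~ {100*r*r:4.1f}% of CAL variance")
```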

  6. A simple, low-cost staining method for rapid-throughput analysis of tumor spheroids.

    PubMed

    Eckerdt, Frank; Alvarez, Angel; Bell, Jonathan; Arvanitis, Constadina; Iqbal, Asneha; Arslan, Ahmet D; Hu, Bo; Cheng, Shi-Yuan; Goldman, Stewart; Platanias, Leonidas C

    2016-01-01

    Tumor spheroids are becoming an important tool for the investigation of cancer stem cell (CSC) function in tumors; thus, low-cost and high-throughput methods for drug screening of tumor spheroids are needed. Using neurospheres as non-adherent three-dimensional (3-D) cultures, we developed a simple, low-cost acridine orange (AO)-based method that allows for rapid analysis of live neurospheres by fluorescence microscopy in a 96-well format. This assay measures the cross-section area of a spheroid, which corresponds to cell viability. Our novel method allows rapid screening of a panel of anti-proliferative drugs to assess inhibitory effects on the growth of cancer stem cells in 3-D cultures.

  7. Analysis of simple 2-D and 3-D metal structures subjected to fragment impact

    NASA Technical Reports Server (NTRS)

    Witmer, E. A.; Stagliano, T. R.; Spilker, R. L.; Rodal, J. J. A.

    1977-01-01

    Theoretical methods were developed for predicting the large-deflection elastic-plastic transient structural responses of metal containment or deflector (C/D) structures designed to cope with rotor burst fragment impact attack. For two-dimensional C/D structures, both finite element and finite difference analysis methods were employed to analyze structural response produced by either prescribed transient loads or fragment impact. For the latter category, two time-wise step-by-step analysis procedures were devised to predict the structural responses resulting from a succession of fragment impacts: the collision force method (CFM), which utilizes an approximate prediction of the force applied to the attacked structure during fragment impact, and the collision imparted velocity method (CIVM), in which the impact-induced velocity increment acquired by a region of the impacted structure near the impact point is computed. The merits and limitations of these approaches are discussed. For the analysis of 3-D responses of C/D structures, only the CIVM approach was investigated.

  8. Analysis and Modeling of the Arctic Oscillation Using a Simple Barotropic Model with Baroclinic Eddy Forcing.

    NASA Astrophysics Data System (ADS)

    Tanaka, H. L.

    2003-06-01

    In this study, a numerical simulation of the Arctic Oscillation (AO) is conducted using a simple barotropic model that considers the barotropic-baroclinic interactions as the external forcing. The model is referred to as a barotropic S model since the external forcing is obtained statistically from the long-term historical data, solving an inverse problem. The barotropic S model has been integrated for 51 years under a perpetual January condition, and the dominant empirical orthogonal function (EOF) modes in the model have been analyzed. The results are compared with the EOF analysis of the barotropic component of the real atmosphere based on the daily NCEP-NCAR reanalysis for 50 yr from 1950 to 1999. According to the results, the first EOF of the model atmosphere appears to be the AO, similar to the observation. The annular structure of the AO and the two centers of action over the Pacific and Atlantic are simulated nicely by the barotropic S model. Therefore, the atmospheric low-frequency variabilities have been captured satisfactorily even by the simple barotropic model. The EOF analysis is further applied to the external forcing of the barotropic S model. The structure of the dominant forcing shows the characteristics of synoptic-scale disturbances of zonal wavenumber 6 along the Pacific storm track. The forcing is induced by the barotropic-baroclinic interactions associated with baroclinic instability. The result suggests that the AO can be understood as the natural variability of the barotropic component of the atmosphere induced by the inherent barotropic dynamics, which is forced by the barotropic-baroclinic interactions. The fluctuating upscale energy cascade from planetary waves and synoptic disturbances to the zonal motion plays the key role in the excitation of the AO.
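    The EOF analysis itself is a standard singular value decomposition of the anomaly field; the sketch below runs it on a toy (time x space) field with an assumed annular-mode-like signal, not on the model or reanalysis data used in the study.

```python
import numpy as np

def eof_analysis(field, n_modes=3):
    """EOFs of a (time, space) anomaly field via SVD; returns patterns, PCs, explained variance."""
    anom = field - field.mean(axis=0)                 # remove the time mean at each grid point
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    var_frac = s**2 / np.sum(s**2)
    return vt[:n_modes], u[:, :n_modes]*s[:n_modes], var_frac[:n_modes]

rng = np.random.default_rng(0)
space = np.cos(np.linspace(0, 2*np.pi, 20, endpoint=False))        # assumed wavenumber-1 pattern
field = rng.normal(0, 1, (600, 1)) @ space[None, :] + 0.5*rng.normal(0, 1, (600, 20))
eofs, pcs, var = eof_analysis(field)
print("variance explained by the leading EOFs:", np.round(var, 2))
```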

  9. AppEEARS: Simple and Intuitive Access to Analysis Ready Data

    NASA Astrophysics Data System (ADS)

    Quenzer, R.; Friesz, A. M.

    2015-12-01

    Many search and discovery tools for satellite land remote sensing data archives are often catalog-based and can only be queried at a granule level, requiring remote sensing data users to download and process entire data files before science questions can be addressed. Methods for accessing remote sensing data archives must become more precise in order to allow users to concisely extract study-relevant information from rapidly expanding archives. To address this need, NASA's Land Processes Distributed Active Archive Center (LP DAAC) is developing AppEEARS (Application for Extracting and Exploring Analysis Ready Samples). Built on top of middleware services, the AppEEARS user interface facilitates input of precise sample locations, such as field study sites or flux towers, to extract analysis-ready data from land MODIS products held by NASA's LP DAAC. AppEEARS provides simple and intuitive access to LP DAAC's land MODIS products. For a given set of sample locations, AppEEARS returns pixel values that intersect with the provided locations over the requested date range. Additionally, the AppEEARS user interface provides exploratory data analysis services (e.g. time series and scatter plots) allowing users to interact with and explore the requested data and its associated quality information before downloading. AppEEARS delivers study-relevant data sets requiring little further processing, allowing users to spend less time performing data preparation routines and more time answering questions.

  10. Activation of attention networks using frequency analysis of a simple auditory-motor paradigm.

    PubMed

    Astrakas, Loukas G; Teicher, Martin; Tzika, A Aria

    2002-04-01

    The purpose of this study was to devise a paradigm that stimulates attention using a frequency-based analysis of the data acquired during a motor task. Six adults (30-40 years of age) and one child (10 years) were studied. Each subject was requested to attend to "start" and "stop" commands every 20 s alternately and had to respond with the motor task every second time. Attention was stimulated during a block-designed motor paradigm in which a start-stop command cycle produced activation at the fourth harmonic of the motor frequency. We disentangled the motor and attention functions using statistical analysis with subspaces spanned by vectors generated by truncated trigonometric series at the motor and attention frequencies. During our auditory-motor paradigm, all subjects showed activation in areas that belong to an extensive attention network. Attention and motor functions were coactivated but at different frequencies. While the motor-task-related areas were activated at a slower frequency than attention, the activation in the attention-related areas was enhanced every time the subject had to start or end the motor task. We suggest that although a simple block-designed auditory-motor paradigm stimulates the attention network, motor preparation, and motor inhibition concurrently, a frequency-based analysis can distinguish attention from motor functions. Due to its simplicity, the paradigm can be valuable in studying children with attention deficit disorders.

  11. Simple sequence repeat analysis of genetic diversity in primary core collection of peach (Prunus persica).

    PubMed

    Li, Tian-Hong; Li, Yin-Xia; Li, Zi-Chao; Zhang, Hong-Liang; Qi, Yong-Wen; Wang, Tao

    2008-01-01

    In this study, the genetic diversity of 51 cultivars in the primary core collection of peach (Prunus persica (L.) Batsch) was evaluated by using simple sequence repeats (SSRs). The phylogenetic relationships and the evolutionary history among different cultivars were determined on the basis of SSR data. Twenty-two polymorphic SSR primer pairs were selected, and a total of 111 alleles were identified in the 51 cultivars, with an average of 5 alleles per locus. According to traditional Chinese classification of peach cultivars, the 51 cultivars in the peach primary core collection belong to six variety groups. The SSR analysis revealed that the levels of the genetic diversity within each variety group were ranked as Sweet peach > Crisp peach > Flat peach > Nectarine > Honey Peach > Yellow fleshed peach. The genetic diversity among the Chinese cultivars was higher than that among the introduced cultivars. Cluster analysis by the unweighted pair group method with arithmetic averaging (UPGMA) placed the 51 cultivars into five linkage clusters. Cultivar members from the same variety group were distributed in different UPGMA clusters and some members from different variety groups were placed under the same cluster. Different variety groups could not be differentiated in accordance with SSR markers. The SSR analysis revealed rich genetic diversity in the peach primary core collection, representative of genetic resources of peach.

  12. Dream-reality confusion in borderline personality disorder: a theoretical analysis

    PubMed Central

    Skrzypińska, Dagna; Szmigielska, Barbara

    2015-01-01

    This paper presents an analysis of dream-reality confusion (DRC) in relation to the characteristics of borderline personality disorder (BPD), based on research findings and theoretical considerations. It is hypothesized that people with BPD are more likely to experience DRC compared to people in non-clinical population. Several variables related to this hypothesis were identified through a theoretical analysis of the scientific literature. Sleep disturbances: problems with sleep are found in 15–95.5% of people with BPD (Hafizi, 2013), and unstable sleep and wake cycles, which occur in BPD (Fleischer et al., 2012), are linked to DRC. Dissociation: nearly two-thirds of people with BPD experience dissociative symptoms (Korzekwa and Pain, 2009) and dissociative symptoms are correlated with a fantasy proneness; both dissociative symptoms and fantasy proneness are related to DRC (Giesbrecht and Merckelbach, 2006). Negative dream content: People with BPD have nightmares more often than other people (Semiz et al., 2008); dreams that are more likely to be confused with reality tend to be more realistic and unpleasant, and are reflected in waking behavior (Rassin et al., 2001). Cognitive disturbances: Many BPD patients experience various cognitive disturbances, including problems with reality testing (Fiqueierdo, 2006; Mosquera et al., 2011), which can foster DRC. Thin boundaries: People with thin boundaries are more prone to DRC than people with thick boundaries, and people with BPD tend to have thin boundaries (Hartmann, 2011). The theoretical analysis on the basis of these findings suggests that people who suffer from BPD may be more susceptible to confusing dream content with actual waking events. PMID:26441768

  13. Dream-reality confusion in borderline personality disorder: a theoretical analysis.

    PubMed

    Skrzypińska, Dagna; Szmigielska, Barbara

    2015-01-01

    This paper presents an analysis of dream-reality confusion (DRC) in relation to the characteristics of borderline personality disorder (BPD), based on research findings and theoretical considerations. It is hypothesized that people with BPD are more likely to experience DRC compared to people in non-clinical population. Several variables related to this hypothesis were identified through a theoretical analysis of the scientific literature. Sleep disturbances: problems with sleep are found in 15-95.5% of people with BPD (Hafizi, 2013), and unstable sleep and wake cycles, which occur in BPD (Fleischer et al., 2012), are linked to DRC. Dissociation: nearly two-thirds of people with BPD experience dissociative symptoms (Korzekwa and Pain, 2009) and dissociative symptoms are correlated with a fantasy proneness; both dissociative symptoms and fantasy proneness are related to DRC (Giesbrecht and Merckelbach, 2006). Negative dream content: People with BPD have nightmares more often than other people (Semiz et al., 2008); dreams that are more likely to be confused with reality tend to be more realistic and unpleasant, and are reflected in waking behavior (Rassin et al., 2001). Cognitive disturbances: Many BPD patients experience various cognitive disturbances, including problems with reality testing (Fiqueierdo, 2006; Mosquera et al., 2011), which can foster DRC. Thin boundaries: People with thin boundaries are more prone to DRC than people with thick boundaries, and people with BPD tend to have thin boundaries (Hartmann, 2011). The theoretical analysis on the basis of these findings suggests that people who suffer from BPD may be more susceptible to confusing dream content with actual waking events.

  14. Cepstral peak sensitivity: A theoretic analysis and comparison of several implementations

    PubMed Central

    Skowronski, Mark D.; Shrivastav, Rahul; Hunter, Eric J.

    2014-01-01

    Objective: The aim of this study was to develop a theoretic analysis of the cepstral peak, to compare several cepstral peak software programs, and to propose methods for reducing variability in cepstral peak estimation. Study Design: Descriptive, experimental study. Methods: The theoretic cepstral peak value of a pulse train was derived and compared to estimates computed for pulse train WAV files using available cepstral peak software programs: 1) Hillenbrand’s cepstral peak prominence (CPP) software, 2) KayPENTAX Multi-Speech implementation of CPP, and 3) a MATLAB implementation using cepstral interpolation. Cepstral peak variation was also investigated for synthetic breathy vowels. Results: For pulse trains with period T samples, the theoretic cepstral peak is 1/2+ε/T, |ε|<0.1 for all pulse trains (ε=0 for integer T). For fundamental frequencies between 70 and 230 Hz, cepstral peak mean ± st. dev. was 0.496±0.002 using cepstral interpolation and 0.29±0.03 using Hillenbrand’s software, whereas CPP was 35.0±3.8 dB using Hillenbrand’s software and 20.5±2.7 dB using KayPENTAX’s software. CP and CPP vs. signal-to-noise ratio for synthetic breathy vowels were fit to a logistic model for the Hillenbrand (R2 = 0.92) and KayPENTAX (R2 = 0.82) estimators as well as an ideal estimator (R2 = 0.98) which used a period-synchronous analysis. Conclusions: The findings indicate that several variables unrelated to the signal itself impact cepstral peak values, with some factors introducing large variability in cepstral peak values that would otherwise be attributed to the signal (e.g., voice quality). Variability may be reduced by using a period-synchronous analysis with Hann windows. PMID:25944288

  15. Cepstral Peak Sensitivity: A Theoretic Analysis and Comparison of Several Implementations.

    PubMed

    Skowronski, Mark D; Shrivastav, Rahul; Hunter, Eric J

    2015-11-01

    The aim of this study was to develop a theoretic analysis of the cepstral peak (CP), to compare several CP software programs, and to propose methods for reducing variability in CP estimation. Descriptive, experimental study. The theoretic CP value of a pulse train was derived and compared with estimates computed for pulse train WAV files using available CP software programs: (1) Hillenbrand's CP prominence (CPP) software (Western Michigan University, Kalamazoo, MI), (2) KayPENTAX (Montvale, NJ) Multi-Speech implementation of CPP, and (3) a MATLAB (The Mathworks, Natick, MA, version R2014a) implementation using cepstral interpolation. The CP variation was also investigated for synthetic breathy vowels. For pulse trains with period T samples, the theoretic CP is 1/2+ε/T, |ε|<0.1 for all pulse trains (ε=0 for integer T). For fundamental frequencies between 70 and 230Hz, the CP mean±standard deviation was 0.496±0.002 using cepstral interpolation and 0.29±0.03 using Hillenbrand's software, whereas CPP was 35.0±3.8dB using Hillenbrand's software and 20.5±2.7dB using KayPENTAX's software. The CP and CPP versus signal-to-noise ratio for synthetic breathy vowels were fit to a logistic model for the Hillenbrand (R(2)=0.92) and KayPENTAX (R(2)=0.82) estimators as well as an ideal estimator (R(2)=0.98), which used a period-synchronous analysis. The findings indicate that several variables unrelated to the signal itself impact CP values, with some factors introducing large variability in CP values that would otherwise be attributed to the signal (eg, voice quality). Variability may be reduced by using a period-synchronous analysis with Hann windows. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
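
    As an illustration of the cepstral-interpolation estimator described above, the following Python sketch computes the real cepstrum of a Hann-windowed frame and refines the peak location by quadratic interpolation. It is a minimal sketch, not the authors' implementation; the sampling rate, search band and normalization conventions are assumptions, and the exact numerical value obtained for a pulse train depends on those conventions.

    import numpy as np

    def cepstral_peak(frame, fs, f0_min=70.0, f0_max=230.0):
        """Real cepstrum of a Hann-windowed frame and its peak in the quefrency
        band corresponding to [f0_min, f0_max] Hz, refined by quadratic
        interpolation around the raw integer-quefrency maximum."""
        x = frame * np.hanning(len(frame))
        log_mag = np.log(np.abs(np.fft.rfft(x)) + 1e-12)
        cep = np.fft.irfft(log_mag)
        q_lo, q_hi = int(fs / f0_max), int(fs / f0_min)
        k = q_lo + np.argmax(cep[q_lo:q_hi])
        # Parabolic interpolation around the maximum sample
        y0, y1, y2 = cep[k - 1], cep[k], cep[k + 1]
        denom = y0 - 2 * y1 + y2
        delta = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
        peak_value = y1 - 0.25 * (y0 - y2) * delta
        return peak_value, fs / (k + delta)   # (CP estimate, F0 estimate in Hz)

    # Example: 200 Hz pulse train at 16 kHz (the paper's theoretic value for such
    # trains is about 1/2; the number obtained here depends on the conventions above)
    fs, f0, n = 16000, 200, 4096
    pulses = np.zeros(n)
    pulses[:: fs // f0] = 1.0
    print(cepstral_peak(pulses, fs))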

  16. Exploring the shape deformation of biomembrane tubes with theoretical analysis and computer simulation.

    PubMed

    Liu, Xuejuan; Tian, Falin; Yue, Tongtao; Zhang, Xianren; Zhong, Chongli

    2016-11-09

    The shape deformation of membrane nanotubes is studied by a combination of theoretical analysis and molecular simulation. First we perform free energy analysis to demonstrate the effects of various factors on two ideal states for the pearling transition, and then we carry out dissipative particle dynamics simulations, through which various types of membrane tube deformation are found, including membrane pearling, buckling, and bulging. Different models for inducing tube deformation, including the osmotic pressure, area difference and spontaneous curvature models, are considered to investigate tubular instabilities. Combined with free energy analysis, our simulations show that the origin of the deformation of membrane tubes in different models can be classified into two categories: effective spontaneous curvature and membrane tension. We further demonstrate that for different models, a positive membrane tension is required for the pearling transition. Finally we show that different models can be coupled to effectively deform the membrane tube.

  17. Infrared, Raman and ultraviolet with circular dichroism analysis and theoretical calculations of tedizolid

    NASA Astrophysics Data System (ADS)

    Michalska, Katarzyna; Mizera, Mikołaj; Lewandowska, Kornelia; Cielecka-Piontek, Judyta

    2016-07-01

    Tedizolid is the newest antibacterial agent from the oxazolidinone class. For its identification, FT-IR (2000-400 cm-1) and Raman (2000-400 cm-1) analyses were proposed. Studies of the enantiomeric purity of tedizolid were conducted based on ultraviolet-circular dichroism (UV-CD) analysis. Density functional theory (DFT) with the B3LYP hybrid functional and 6-311G(2df,2pd) basis set was used to support the analysis of the FT-IR and Raman spectra. Theoretical methods made it possible to conduct HOMO and LUMO analysis, which was used to determine the charge transfer for two tedizolid enantiomers. Molecular electrostatic potential maps were calculated with the DFT method for both tedizolid enantiomers. The relationship between the results of ab initio calculations and knowledge about the chemical-biological properties of R- and S-tedizolid enantiomers is also discussed.
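
    The HOMO-LUMO analysis mentioned above can be reproduced in outline with any quantum chemistry package. The sketch below is a minimal example using PySCF (an assumption; the record does not name the software) at the B3LYP level on a small stand-in molecule, since a tedizolid calculation would require its optimized geometry and the much larger 6-311G(2df,2pd) basis used in the study.

    import numpy as np
    from pyscf import gto, dft

    # Stand-in molecule (water) with a small basis; the paper used
    # B3LYP/6-311G(2df,2pd) on the tedizolid enantiomers.
    mol = gto.M(atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587",
                basis="6-31g*")
    mf = dft.RKS(mol)
    mf.xc = "b3lyp"          # hybrid functional named in the abstract
    mf.kernel()

    occupied = np.where(mf.mo_occ > 0)[0]
    homo = mf.mo_energy[occupied[-1]]
    lumo = mf.mo_energy[occupied[-1] + 1]
    print("HOMO %.4f Ha  LUMO %.4f Ha  gap %.4f Ha" % (homo, lumo, lumo - homo))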

  18. Theoretical analysis and experimental evaluation of a CsI(Tl)-based electronic portal imaging system.

    PubMed

    Sawant, Amit; Zeman, Herbert; Samant, Sanjiv; Lovhoiden, Gunnar; Weinberg, Brent; DiBianca, Frank

    2002-06-01

    This article discusses the design and analysis of a portal imaging system based on a thick transparent scintillator. A theoretical analysis using Monte Carlo simulation was performed to calculate the x-ray quantum detection efficiency (QDE), signal to noise ratio (SNR) and the zero frequency detective quantum efficiency [DQE(0)] of the system. A prototype electronic portal imaging device (EPID) was built, using a 12.7 mm thick, 20.32 cm diameter, CsI(Tl) scintillator, coupled to a liquid nitrogen cooled CCD TV camera. The system geometry of the prototype EPID was optimized to achieve high spatial resolution. The experimental evaluation of the prototype EPID involved the determination of contrast resolution, depth of focus, light scatter and mirror glare. Images of humanoid and contrast detail phantoms were acquired using the prototype EPID and were compared with those obtained using conventional and high contrast portal film and a commercial EPID. A theoretical analysis was also carried out for a proposed full field of view system using a large area, thinned CCD camera and a 12.7 mm thick CsI(Tl) crystal. Results indicate that this proposed design could achieve DQE(0) levels up to 11%, due to its order of magnitude higher QDE compared to phosphor screen-metal plate based EPID designs, as well as significantly higher light collection compared to conventional TV camera based systems.

  19. Feasibility of theoretical formulas on the anisotropy of shale based on laboratory measurement and error analysis

    NASA Astrophysics Data System (ADS)

    Xie, Jianyong; Di, Bangrang; Wei, Jianxin; Luan, Xinyuan; Ding, Pinbo

    2015-04-01

    This paper designs a full-angle ultrasonic test method to measure the P-wave velocities (vp) and the vertically and horizontally polarized shear-wave velocities (vsv and vsh) at all angles to the bedding plane on different kinds of strongly anisotropic shale. The observations are compared with theoretical curves calculated from several vertical transversely isotropic (TI) medium theories, in order to assess how well each theory captures the dynamic behavior of the TI medium, to identify the most accurate and precise formulation among them, and to establish its range of validity for characterizing the strong anisotropy of shale. At low phase angles (theta < 10 degrees), the three theoretical curves are consistent with the observations; they then diverge as the phase angle increases, with the Thomsen curves deviating most seriously, while the Berryman expressions agree much better with the measured vp and vsv on shale. All three theories also show larger deviations in the approximation of vsv than of vp and vsh. Furthermore, we created synthetic ideal physical models (coarse bakelite, cambric bakelite, and paper bakelite) as supplements to natural shale, used to model shale with different degrees of anisotropy, in order to study how the anisotropy parameters affect the applicability of the preferred TI theories, especially for vsv. We found that when the P-wave anisotropy ε and the S-wave anisotropy γ exceed 0.25, the Berryman curves provide the best fit for vp and vsv on shale.
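
    For reference, one of the TI formulations compared above, Thomsen's weak-anisotropy approximation, can be written down in a few lines. The sketch below evaluates the phase velocities vp, vsv and vsh versus phase angle for illustrative (not measured) shale-like parameters; it is a minimal sketch of the textbook formulas, not the paper's full comparison, which also involves exact and Berryman expressions.

    import numpy as np

    def thomsen_velocities(theta_deg, vp0, vs0, eps, delta, gamma):
        """Thomsen (1986) weak-anisotropy phase velocities for a VTI medium.
        theta_deg is the phase angle from the symmetry (bedding-normal) axis."""
        t = np.radians(theta_deg)
        s2, c2 = np.sin(t) ** 2, np.cos(t) ** 2
        vp  = vp0 * (1.0 + delta * s2 * c2 + eps * s2 ** 2)
        vsv = vs0 * (1.0 + (vp0 / vs0) ** 2 * (eps - delta) * s2 * c2)
        vsh = vs0 * (1.0 + gamma * s2)
        return vp, vsv, vsh

    # Illustrative shale-like values (not the paper's measurements)
    angles = np.arange(0, 91, 10)
    vp, vsv, vsh = thomsen_velocities(angles, vp0=3300.0, vs0=1800.0,
                                      eps=0.25, delta=0.1, gamma=0.3)
    for a, p, sv, sh in zip(angles, vp, vsv, vsh):
        print(f"{a:3d} deg  vp={p:7.1f}  vsv={sv:7.1f}  vsh={sh:7.1f} m/s")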

  20. Analysis of poetic literature using B. F. Skinner's theoretical framework from verbal behavior

    PubMed Central

    Luke, Nicole M.

    2003-01-01

    This paper examines Skinner's work on verbal behavior in the context of literature as a particular class of written verbal behavior. It looks at contemporary literary theory and analysis and the contributions that Skinner's theoretical framework can make. Two diverse examples of poetic literature are chosen and analyzed following Skinner's framework, examining the dynamic interplay between the writer and reader that takes place within the bounds of the work presented. It is concluded that Skinner's hypotheses about verbal behavior and the functional approach to understanding it have much to offer literary theorists in their efforts to understand literary works and should be more carefully examined.

  1. Theoretical Analysis of Orientation Distribution Function Reconstruction of Textured Polycrystal by Parametric X-rays

    NASA Astrophysics Data System (ADS)

    Lobach, I.; Benediktovitch, A.

    2016-07-01

    The possibility of quantitative texture analysis by means of parametric x-ray radiation (PXR) from relativistic electrons with energies above 50 MeV in a polycrystal is considered theoretically. For a rather smooth orientation distribution function (ODF) and a large detector (θD >> 1/γ, where γ is the electron Lorentz factor), a universal relation between the ODF and the intensity distribution is presented. It is shown that if the ODF is independent of one of the Euler angles, then the texture is fully determined by the angular intensity distribution. Application of the method to simulated data shows the stability of the proposed algorithm.

  2. Information-theoretic analysis of x-ray scatter and phase architectures for anomaly detection

    NASA Astrophysics Data System (ADS)

    Coccarelli, David; Gong, Qian; Stoian, Razvan-Ionut; Greenberg, Joel A.; Gehm, Michael E.; Lin, Yuzhang; Huang, Liang-Chih; Ashok, Amit

    2016-05-01

    Conventional performance analysis of detection systems confounds the effects of the system architecture (sources, detectors, system geometry, etc.) with the effects of the detection algorithm. Previously, we introduced an information-theoretic approach to this problem by formulating a performance metric, based on Cauchy-Schwarz mutual information, that is analogous to the channel capacity concept from communications engineering. In this work, we discuss the application of this metric to study novel screening systems based on x-ray scatter or phase. Our results show how effective use of this metric can impact design decisions for x-ray scatter and phase systems.

  3. Morphology of synthetic chrysoberyl and alexandrite crystals: Analysis of experimental data and theoretical modeling

    NASA Astrophysics Data System (ADS)

    Gromalova, N. A.; Eremin, N. N.; Dorokhova, G. I.; Urusov, V. S.

    2012-07-01

    A morphological analysis of chrysoberyl and alexandrite crystals obtained by flux crystallization has been performed. Seven morphological types of crystals are selected. The surface energies of the faces of chrysoberyl and alexandrite crystals and their isostructural analogs, BeCr2O4 and BeFe2O4, have been calculated by atomistic computer modeling using the Metadise program. A "combined" approach is proposed which takes into account both the structural geometry and the surface energy of the faces and thus provides better agreement between the theoretical and experimentally observed faceting of chrysoberyl and alexandrite crystals.

  4. Estimation of ozone with total ozone portable spectroradiometer instruments. I. Theoretical model and error analysis

    NASA Astrophysics Data System (ADS)

    Flynn, Lawrence E.; Labow, Gordon J.; Beach, Robert A.; Rawlins, Michael A.; Flittner, David E.

    1996-10-01

    Inexpensive devices to measure solar UV irradiance are available to monitor atmospheric ozone, for example, total ozone portable spectroradiometers (TOPS instruments). A procedure to convert these measurements into ozone estimates is examined. For well-characterized filters with 7-nm FWHM bandpasses, the method provides ozone values (from 304- and 310-nm channels) with less than 0.4% error attributable to inversion of the theoretical model. Analysis of sensitivity to model assumptions and parameters yields estimates of 3% bias in total ozone results, with dependence on total ozone and path length. Unmodeled effects of atmospheric constituents and instrument components can result in additional 2% errors.
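
    The conversion from two-channel UV irradiances to total ozone can be illustrated with a bare Beer-Lambert log-ratio, which is only a caricature of the full theoretical model analyzed above (it ignores bandpass, scattering and instrument effects). The absorption coefficients, irradiances and air-mass factor in the sketch are hypothetical values chosen for the example.

    import numpy as np

    def ozone_two_channel(i304, i310, i0_304, i0_310, mu,
                          alpha304=4.0, alpha310=1.6):
        """Simple Beer-Lambert two-channel ozone retrieval (illustrative only).
        i*    : measured irradiances in the 304 and 310 nm channels
        i0_*  : extraterrestrial (top-of-atmosphere) irradiances
        mu    : ozone air-mass factor along the slant path
        alpha*: ozone absorption coefficients per atm-cm (hypothetical values)
        Returns column ozone in atm-cm (multiply by 1000 for Dobson units)."""
        n = np.log(i0_304 / i304) - np.log(i0_310 / i310)
        return n / ((alpha304 - alpha310) * mu)

    # Example with made-up numbers; recovers roughly 300 DU
    print(1000.0 * ozone_two_channel(i304=0.02, i310=0.35,
                                     i0_304=0.02 * np.exp(4.0 * 1.5 * 0.3),
                                     i0_310=0.35 * np.exp(1.6 * 1.5 * 0.3),
                                     mu=1.5))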

  5. Theoretical analysis for ozone yield of a high frequency silent discharge

    NASA Astrophysics Data System (ADS)

    Facta, Mochammad; Hermawan, Salam, Z.; Buntat, Z.; Smith, I. R.

    2017-03-01

    The paper uses dimensional analysis to develop a theoretical prediction of the yield of a high frequency silent discharge ozone generator at atmospheric pressure. To verify the viability of the resulting yield equation, a rectangular-shaped chamber with a 0.75 mm air gap was constructed. Aluminium mesh electrodes were used, with metal tape and a planar mica sheet forming a dielectric barrier. The power supply to the chamber was from a modified class E resonant power inverter. It is established that the prediction of the yield equation, which is based on a fractional function, closely matches the experimental data.

  6. Theoretical conformational analysis of the bovine adrenal medulla 12 residue peptide molecule

    NASA Astrophysics Data System (ADS)

    Akhmedov, N. A.; Tagiyev, Z. H.; Hasanov, E. M.; Akverdieva, G. A.

    2003-02-01

    The spatial structure and conformational properties of the bovine adrenal medulla 12 residue peptide Tyr1-Gly2-Gly3-Phe4-Met5-Arg6-Arg7-Val8-Gly9-Arg10-Pro11-Glu12 (BAM-12P) molecule were studied by theoretical conformational analysis. It is revealed that this molecule can exist in several stable states. The energy and geometrical parameters for the low-energy conformations are obtained. The conformationally rigid and labile segments of this molecule were revealed.

  7. Adsorption and Coadsorption of CO and NO on the RH(100) Surface. A Theoretical Analysis

    DTIC Science & Technology

    1989-04-14

    Vutkovic, Dragan Lj.; Jansen, Susan A.; Hoffmann, Roald

  8. Plasmid stability analysis based on a new theoretical model employing stochastic simulations

    PubMed Central

    Werbowy, Olesia; Werbowy, Sławomir

    2017-01-01

    Here, we present a simple theoretical model to study plasmid stability, based on a single input parameter, the copy number of plasmids present in a host cell. The Monte Carlo approach was used to analyze random fluctuations affecting plasmid replication and segregation leading to gradual reduction in the plasmid population within the host cell. This model was employed to investigate maintenance of pEC156 derivatives, a high-copy number ColE1-type Escherichia coli plasmid that carries an EcoVIII restriction-modification system. Plasmid stability was examined in selected Escherichia coli strains (MG1655, wild-type; MG1655 pcnB, and hyper-recombinogenic JC8679 sbcA). We have compared the experimental data concerning plasmid maintenance with the simulations and found that the theoretical stability patterns exhibited excellent agreement with those tested empirically. In our simulations, we have investigated the influence of replication failures (α parameter), of uneven partitioning resulting from failures of multimer resolution (δ parameter), and of the post-segregational killing factor (β parameter). All of these factors act at the same time and affect plasmid inheritance at different levels. In the case of pEC156 derivatives we concluded that multimerization is a major determinant of plasmid stability. Our data indicate that even small changes in the fidelity of segregation can have serious effects on plasmid stability. Use of the proposed mathematical model can provide a valuable description of plasmid maintenance, as well as enable prediction of the probability of plasmid loss. PMID:28846713
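
    In the spirit of the copy-number-based Monte Carlo model described above, the sketch below follows independent cell lineages through replication failures (α), binomial segregation and optional post-segregational killing (β). It is a minimal illustrative model, not the authors' code, and it omits the multimer-resolution parameter δ; all parameter values are assumptions.

    import random

    def plasmid_retention(copy_number, generations, lineages=10000,
                          alpha=0.02, beta=0.0):
        """Monte Carlo sketch of plasmid stability: follow independent cell
        lineages; each generation every plasmid copy replicates unless it
        fails (prob. alpha), the copies are partitioned binomially between
        two daughters, and the lineage follows one daughter at random.
        beta is the post-segregational killing probability for a cell that
        has just lost the plasmid.  Returns the plasmid-bearing fraction."""
        bearing, killed = 0, 0
        for _ in range(lineages):
            n = copy_number
            for _ in range(generations):
                total = n + sum(1 for _ in range(n) if random.random() > alpha)
                n = sum(1 for _ in range(total) if random.random() < 0.5)
                if n == 0:
                    if random.random() < beta:
                        killed += 1
                    break
            else:
                bearing += 1
        return bearing / (lineages - killed) if lineages > killed else 0.0

    # A high-copy plasmid (about 20 copies) is lost only rarely over 30 generations
    print(plasmid_retention(copy_number=20, generations=30))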

  9. PLANS: A finite element program for nonlinear analysis of structures. Volume 1: Theoretical manual

    NASA Technical Reports Server (NTRS)

    Pifko, A.; Levine, H. S.; Armen, H., Jr.

    1975-01-01

    The PLANS system is described which is a finite element program for nonlinear analysis. The system represents a collection of special purpose computer programs each associated with a distinct physical problem class. Modules of PLANS specifically referenced and described in detail include: (1) REVBY, for the plastic analysis of bodies of revolution; (2) OUT-OF-PLANE, for the plastic analysis of 3-D built-up structures where membrane effects are predominant; (3) BEND, for the plastic analysis of built-up structures where bending and membrane effects are significant; (4) HEX, for the 3-D elastic-plastic analysis of general solids; and (5) OUT-OF-PLANE-MG, for material and geometrically nonlinear analysis of built-up structures. The SATELLITE program for data debugging and plotting of input geometries is also described. The theoretical foundations upon which the analysis is based are presented. Discussed are the form of the governing equations, the methods of solution, plasticity theories available, a general system description and flow of the programs, and the elements available for use.

  10. Theoretical analysis of coverage-dependent rotational hindrance of PF 3 adsorbed on Ru(001)

    NASA Astrophysics Data System (ADS)

    Kaji, H.; Kakitani, K.; Yagi, Y.; Yoshimori, A.

    1996-08-01

    Distribution of the azimuthal orientation of PF 3 molecules adsorbed on Ru(001) measured by ESDIAD shows interesting temperature and coverage dependences. It is interpreted in this analysis as due to the short range order in the locative distribution of the PF 3 molecules. Monte Carlo simulations are performed to obtain the temperature and coverage-dependent distribution of the adsorbed molecules. The distribution of the azimuthal orientation of the molecule is discussed on the basis of the obtained locative distribution of the molecules by using simple models for rotational hindrance to be compared with the experimental results.

  11. Spectroscopic and theoretical analysis of Pd2+-Cl--H2O system

    NASA Astrophysics Data System (ADS)

    Podborska, Agnieszka; Wojnicki, Marek

    2017-01-01

    Time-dependent density functional theory (TD-DFT) and spectrophotometric methods were used for speciation analysis in the Pd2+-Cl--H2O system. It was shown that there is excellent agreement between the TD-DFT calculated UV-VIS spectra and those recorded spectrophotometrically. It was also shown that, even in a simple electrolyte, several different forms of Pd(II) appear simultaneously. Thanks to the TD-DFT method, it was possible to deconvolute the experimental UV-VIS spectrum and determine which Pd(II) complexes are present in the solution.
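
    Spectral deconvolution of the kind used above to assign Pd(II) species can be sketched as a least-squares fit of the measured absorbance to a sum of Gaussian bands, with TD-DFT transition energies providing the initial guesses. The example below uses synthetic data and illustrative band parameters; it is not the authors' procedure.

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian_bands(wl, *params):
        """Sum of Gaussian bands; params = (A1, c1, w1, A2, c2, w2, ...)."""
        out = np.zeros_like(wl, dtype=float)
        for a, c, w in zip(params[0::3], params[1::3], params[2::3]):
            out += a * np.exp(-0.5 * ((wl - c) / w) ** 2)
        return out

    # Synthetic "measured" spectrum: two overlapping bands plus noise
    wl = np.linspace(250, 500, 500)
    true = gaussian_bands(wl, 0.8, 325.0, 18.0, 0.4, 410.0, 30.0)
    measured = true + np.random.normal(0, 0.01, wl.size)

    # Initial guesses could come from calculated transition energies
    p0 = [0.7, 320.0, 20.0, 0.3, 400.0, 25.0]
    popt, _ = curve_fit(gaussian_bands, wl, measured, p0=p0)
    print(np.round(popt, 2))   # recovered amplitudes, centres (nm) and widths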

  12. Stability analysis of BWR nuclear-coupled thermal-hyraulics using a simple model

    SciTech Connect

    Karve, A.A.; Rizwan-uddin; Dorning, J.J.

    1995-09-01

    A simple mathematical model is developed to describe the dynamics of the nuclear-coupled thermal-hydraulics in a boiling water reactor (BWR) core. The model, which incorporates the essential features of neutron kinetics, and single-phase and two-phase thermal-hydraulics, leads to simple dynamical system comprised of a set of nonlinear ordinary differential equations (ODEs). The stability boundary is determined and plotted in the inlet-subcooling-number (enthalpy)/external-reactivity operating parameter plane. The eigenvalues of the Jacobian matrix of the dynamical system also are calculated at various steady-states (fixed points); the results are consistent with those of the direct stability analysis and indicate that a Hopf bifurcation occurs as the stability boundary in the operating parameter plane is crossed. Numerical simulations of the time-dependent, nonlinear ODEs are carried out for selected points in the operating parameter plane to obtain the actual damped and growing oscillations in the neutron number density, the channel inlet flow velocity, and the other phase variables. These indicate that the Hopf bifurcation is subcritical, hence, density wave oscillations with growing amplitude could result from a finite perturbation of the system even where the steady-state is stable. The power-flow map, frequently used by reactor operators during start-up and shut-down operation of a BWR, is mapped to the inlet-subcooling-number/neutron-density (operating-parameter/phase-variable) plane, and then related to the stability boundaries for different fixed inlet velocities corresponding to selected points on the flow-control line. The stability boundaries for different fixed inlet subcooling numbers corresponding to those selected points, are plotted in the neutron-density/inlet-velocity phase variable plane and then the points on the flow-control line are related to their respective stability boundaries in this plane.
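
    The fixed-point eigenvalue calculation described above can be sketched generically: locate a steady state, form a numerical Jacobian, and inspect its eigenvalues for a complex pair crossing the imaginary axis. The system below is a two-state toy with a known Hopf bifurcation, standing in for the BWR nuclear-coupled thermal-hydraulics ODEs, which are not reproduced here.

    import numpy as np
    from scipy.optimize import fsolve

    def numerical_jacobian(f, x0, h=1e-6):
        """Central-difference Jacobian of f at x0."""
        x0 = np.asarray(x0, dtype=float)
        n = x0.size
        J = np.zeros((n, n))
        for j in range(n):
            dx = np.zeros(n); dx[j] = h
            J[:, j] = (np.asarray(f(x0 + dx)) - np.asarray(f(x0 - dx))) / (2 * h)
        return J

    def rhs(x, mu=0.5):
        """Toy two-state system with a Hopf bifurcation at mu = 0
        (a stand-in for the nuclear-coupled thermal-hydraulics ODEs)."""
        u, v = x
        r2 = u * u + v * v
        return [mu * u - v - u * r2, u + mu * v - v * r2]

    fixed_point = fsolve(rhs, [0.01, 0.01])
    eigvals = np.linalg.eigvals(numerical_jacobian(rhs, fixed_point))
    print(eigvals)   # complex pair with positive real part => unstable (post-Hopf)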

  13. A simple dosing scheme for intravenous busulfan based on retrospective population pharmacokinetic analysis in korean patients.

    PubMed

    Choe, Sangmin; Kim, Gayeong; Lim, Hyeong-Seok; Cho, Sang-Heon; Ghim, Jong-Lyul; Jung, Jin Ah; Kim, Un-Jib; Noh, Gyujeong; Bae, Kyun-Seop; Lee, Dongho

    2012-08-01

    Busulfan is an antineoplastic agent with a narrow therapeutic window. A post-hoc population pharmacokinetic analysis of a prospective randomized trial for comparison of four-times daily versus once-daily intravenous busulfan was carried out to search for predictive factors of intravenous busulfan (iBu) pharmacokinetics (PK). In this study the population PK of iBu was characterized to provide suitable dosing recommendations. Patients were randomized to receive iBu, either as 0.8 mg/kg every 6 h or 3.2 mg/kg daily over 4 days prior to hematopoietic stem cell transplantation. In total, 295 busulfan concentrations were analyzed with NONMEM. Actual body weight and sex were significant covariates affecting the PK of iBu. Sixty patients were included in the study (all Korean; 23 women, 37 men; mean [SD] age, 36.5 [10.9] years; weight, 66.5 [11.3] kg). Population estimates for a typical patient weighing 65 kg were: clearance (CL) 7.6 L/h and volume of distribution (Vd) 32.2 L for men and 29.1 L for women. Inter-individual random variabilities of CL and Vd were 16% and 9%. Based on a CL estimate from the final PK model, a simple dosage scheme to achieve the target AUC(0-inf) (defined as the median AUC(0-inf) with a once-daily dosage) of 26.18 mg/L·h was proposed: 24.79·ABW^0.5 mg q24h, where ABW represents the actual body weight in kilograms. The dosing scheme reduced the unexplained interindividual variabilities of CL and Vd of iBu, with ABW being a significant covariate affecting the clearance of iBu. We propose a new simple dosing scheme for iBu based only on ABW.
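
    The proposed scheme reduces to a one-line formula; the sketch below evaluates it and back-calculates the implied exposure, assuming that clearance scales with ABW^0.5 around the reported typical value (7.6 L/h at 65 kg). That scaling assumption is illustrative and consistent with the fixed-AUC design, but it is not stated explicitly in the record.

    def busulfan_daily_dose_mg(abw_kg):
        """Once-daily IV busulfan dose from the proposed scheme: 24.79 * ABW^0.5 mg."""
        return 24.79 * abw_kg ** 0.5

    def predicted_auc_mg_h_per_l(abw_kg, cl_typical=7.6, abw_typical=65.0):
        """Predicted AUC(0-inf) = dose / CL, assuming CL is proportional to ABW^0.5
        around the reported typical value; this scaling is an illustrative
        assumption, and the result lands near the 26.18 mg*h/L target."""
        cl = cl_typical * (abw_kg / abw_typical) ** 0.5
        return busulfan_daily_dose_mg(abw_kg) / cl

    for w in (50, 65, 80):
        print(w, "kg:", round(busulfan_daily_dose_mg(w), 1), "mg,",
              round(predicted_auc_mg_h_per_l(w), 2), "mg*h/L")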

  14. A Simple Dosing Scheme for Intravenous Busulfan Based on Retrospective Population Pharmacokinetic Analysis in Korean Patients

    PubMed Central

    Choe, Sangmin; Kim, Gayeong; Lim, Hyeong-Seok; Cho, Sang-Heon; Ghim, Jong-Lyul; Jung, Jin Ah; Kim, Un-Jib; Noh, Gyujeong; Lee, Dongho

    2012-01-01

    Busulfan is an antineoplastic agent with a narrow therapeutic window. A post-hoc population pharmacokinetic analysis of a prospective randomized trial for comparison of four-times daily versus once-daily intravenous busulfan was carried out to search for predictive factors of intravenous busulfan (iBu) pharmacokinetics (PK). In this study the population PK of iBu was characterized to provide suitable dosing recommendations. Patients were randomized to receive iBu, either as 0.8 mg/kg every 6 h or 3.2 mg/kg daily over 4 days prior to hematopoietic stem cell transplantation. In total, 295 busulfan concentrations were analyzed with NONMEM. Actual body weight and sex were significant covariates affecting the PK of iBu. Sixty patients were included in the study (all Korean; 23 women, 37 men; mean [SD] age, 36.5 [10.9] years; weight, 66.5 [11.3] kg). Population estimates for a typical patient weighing 65 kg were: clearance (CL) 7.6 l/h and volume of distribution (Vd) 32.2 l for men and 29.1 L for women. Inter-individual random variabilities of CL and Vd were 16% and 9%. Based on a CL estimate from the final PK model, a simple dosage scheme to achieve the target AUC0-inf (defined as median AUC0-inf with a once-daily dosage) of 26.18 mg/l·hr, was proposed: 24.79·ABW0.5 mg q24h, where ABW represents the actual body weight in kilograms. The dosing scheme reduced the unexplained interindividual variabilities of CL and Vd of iBu with ABW being a significant covariate affecting clearance of iBU. We propose a new simple dosing scheme for iBu based only on ABW. PMID:22915993

  15. A simple method of observation impact analysis for operational storm surge forecasting systems

    NASA Astrophysics Data System (ADS)

    Sumihar, Julius; Verlaan, Martin

    2016-04-01

    In this work, a simple method is developed for analyzing the impact of assimilating observations in improving forecast accuracy of a model. The method simply makes use of observation time series and the corresponding model output that are generated without data assimilation. These two time series are usually available in an operational database. The method is therefore easy to implement. Moreover, it can be used before actually implementing any data assimilation to the forecasting system. In this respect, it can be used as a tool for designing a data assimilation system, namely for searching for an optimal observing network. The method can also be used as a diagnostic tool, for example, for evaluating an existing operational data assimilation system to check if all observations are contributing positively to the forecast accuracy. The method has been validated with some twin experiments using a simple one-dimensional advection model as well as with an operational storm surge forecasting system based on the Dutch Continental Shelf model version 5 (DCSMv5). It has been applied for evaluating the impact of observations in the operational data assimilation system with DCSMv5 and for designing a data assimilation system for the new model DCSMv6. References: Verlaan, M. and J. Sumihar (2016), Observation impact analysis methods for storm surge forecasting systems, Ocean Dynamics, ODYN-D-15-00061R1 (in press) Zijl, F., J. Sumihar, and M. Verlaan (2015), Application of data assimilation for improved operational water level forecasting of the northwest European shelf and North Sea, Ocean Dynamics, 65, Issue 12, pp 1699-1716.

  16. Charge tunneling across strongly inhomogeneous potential barriers in metallic heterostructures: A simplified theoretical analysis and possible experimental tests

    NASA Astrophysics Data System (ADS)

    Belogolovskii, Mikhail

    2014-09-01

    Universal aspects of the charge transport through strongly disordered potential barriers in metallic heterojunctions are analyzed. A simple theoretical formalism for two kinds of transmission probability distribution functions valid for smooth tunneling barriers and those with abrupt boundaries is presented. We argue that their universality has simple mathematical origin and can arise in totally different physical contexts. Finally, we analyze possible applications of superconducting junctions to test the universality of transport characteristics in structurally disordered insulating films without any fitting parameters and point out that the proposed approach can be useful in understanding the dynamics of surface screening currents in superconductors with an inhomogeneous near-surface region.

  17. A Simple Method of Genomic DNA Extraction from Human Samples for PCR-RFLP Analysis

    PubMed Central

    Ghatak, Souvik; Muthukumaran, Rajendra Bose; Nachimuthu, Senthil Kumar

    2013-01-01

    Isolation of DNA from blood and buccal swabs in adequate quantities is an integral part of forensic research and analysis. The present study was performed to determine the quality and the quantity of DNA extracted from four commonly available samples and to estimate the time duration of the ensuing PCR amplification. Here, we demonstrate that hair and urine samples can also become an alternate source for reliably obtaining a small quantity of PCR-ready DNA. We developed a rapid, cost-effective, and noninvasive method of sample collection and simple DNA extraction from buccal swabs, urine, and hair using the phenol-chloroform method. Buccal samples were subjected to DNA extraction, immediately or after refrigeration (4–6°C) for 3 days. The purity and the concentration of the extracted DNA were determined spectrophotometrically, and the adequacy of DNA extracts for the PCR-based assay was assessed by amplifying a 1030-bp region of the mitochondrial D-loop. Although DNA from all the samples was suitable for PCR, the blood and hair samples provided good-quality DNA for restriction analysis of the PCR product compared with the buccal swab and urine samples. In the present study, hair samples proved to be a good source of genomic DNA for PCR-based methods. Hence, DNA of hair samples can also be used for genomic disorder analysis in addition to forensic analysis as a result of the ease of sample collection in a noninvasive manner, lower sample volume requirements, and good storage capability. PMID:24294115

  18. A simple method of genomic DNA extraction from human samples for PCR-RFLP analysis.

    PubMed

    Ghatak, Souvik; Muthukumaran, Rajendra Bose; Nachimuthu, Senthil Kumar

    2013-12-01

    Isolation of DNA from blood and buccal swabs in adequate quantities is an integral part of forensic research and analysis. The present study was performed to determine the quality and the quantity of DNA extracted from four commonly available samples and to estimate the time duration of the ensuing PCR amplification. Here, we demonstrate that hair and urine samples can also become an alternate source for reliably obtaining a small quantity of PCR-ready DNA. We developed a rapid, cost-effective, and noninvasive method of sample collection and simple DNA extraction from buccal swabs, urine, and hair using the phenol-chloroform method. Buccal samples were subjected to DNA extraction, immediately or after refrigeration (4-6°C) for 3 days. The purity and the concentration of the extracted DNA were determined spectrophotometrically, and the adequacy of DNA extracts for the PCR-based assay was assessed by amplifying a 1030-bp region of the mitochondrial D-loop. Although DNA from all the samples was suitable for PCR, the blood and hair samples provided good-quality DNA for restriction analysis of the PCR product compared with the buccal swab and urine samples. In the present study, hair samples proved to be a good source of genomic DNA for PCR-based methods. Hence, DNA of hair samples can also be used for genomic disorder analysis in addition to forensic analysis as a result of the ease of sample collection in a noninvasive manner, lower sample volume requirements, and good storage capability.

  19. Analysis system for characterisation of simple, low-cost microfluidic components

    NASA Astrophysics Data System (ADS)

    Smith, Suzanne; Naidoo, Thegaran; Nxumalo, Zandile; Land, Kevin; Davies, Emlyn; Fourie, Louis; Marais, Philip; Roux, Pieter

    2014-06-01

    There is an inherent trade-off between cost and operational integrity of microfluidic components, especially when intended for use in point-of-care devices. We present an analysis system developed to characterise microfluidic components for performing blood cell counting, enabling the balance between function and cost to be established quantitatively. Microfluidic components for sample and reagent introduction, mixing and dispensing of fluids were investigated. A simple inlet port plugging mechanism is used to introduce and dispense a sample of blood, while a reagent is released into the microfluidic system through compression and bursting of a blister pack. Mixing and dispensing of the sample and reagent are facilitated via air actuation. For these microfluidic components to be implemented successfully, a number of aspects need to be characterised for development of an integrated point-of-care device design. The functional components were measured using a microfluidic component analysis system established in-house. Experiments were carried out to determine: 1. the force and speed requirements for sample inlet port plugging and blister pack compression and release using two linear actuators and load cells for plugging the inlet port, compressing the blister pack, and subsequently measuring the resulting forces exerted, 2. the accuracy and repeatability of total volumes of sample and reagent dispensed, and 3. the degree of mixing and dispensing uniformity of the sample and reagent for cell counting analysis. A programmable syringe pump was used for air actuation to facilitate mixing and dispensing of the sample and reagent. Two high speed cameras formed part of the analysis system and allowed for visualisation of the fluidic operations within the microfluidic device. Additional quantitative measures such as microscopy were also used to assess mixing and dilution accuracy, as well as uniformity of fluid dispensing - all of which are important requirements towards the

  20. Vibration Control Of A Flexible Beam Using A Rotational Internal Resonance Controller, Part I: Theoretical Development And Analysis

    NASA Astrophysics Data System (ADS)

    Tuer, K. L.; Duquette, A. P.; Golnaraghi, M. F.

    1993-10-01

    In this paper, an unconventional technique to control the vibrations of a cantilevered flexible beam, based on a non-linear dynamics phenomenon known as internal resonance, is proposed. The controller consists of a DC motor, itself a part of a simple regulated feedback system, with a rigid beam/tip mass configuration attached to the motor shaft. The addition of the controller to the tip of the flexible beam introduces quadratic, dynamic non-linearities into an otherwise linear system. Under the proper circumstances, these non-linearities can be used to generate a coupling effect between the modes of vibration of the system. An internally resonant state exists if the equations of motion are characterized by frequency-amplitude interactions and the first two natural frequencies of the linear portion of the non-linear equations of motion are commensurable or nearly commensurable. Once a resonance condition is established, a transfer of energy transpires between the modes of vibration. Thus, due to modal coupling, energy is, in effect, transferred from the flexible beam to the secondary beam, where it is dissipated through velocity feedback of the motor. Theoretical analysis predicts that the planar oscillations of a cantilevered beam displaced at its tip a distance equal to 18 percent of its length can be reduced to a relatively small amplitude in approximately four cycles. This controller has proven to be most effective in controlling large amplitude, low frequency oscillations which are typical for large flexible structures.

  1. CAPTURING SUBJECT VARIABILITY IN FMRI DATA : A GRAPH-THEORETICAL ANALYSIS OF GICA VS. IVA

    PubMed Central

    Laney, Jonathan; Westlake, Kelly; Ma, Sai; Woytowicz, Elizabeth; Calhoun, Vince D.; Adalı, Tülay

    2016-01-01

    Background: Recent studies using simulated functional magnetic resonance imaging (fMRI) data show that independent vector analysis (IVA) is a superior solution for capturing spatial subject variability when compared with the widely used group independent component analysis (GICA). Retaining such variability is of fundamental importance for identifying spatially localized group differences in intrinsic brain networks. New Methods: Few studies on capturing subject variability and order selection have evaluated real fMRI data. Comparison of multivariate components generated by multiple algorithms is not straightforward. The main difficulties are finding concise methods to extract meaningful features and comparing multiple components despite lack of a ground truth. In this paper, we present a graph-theoretic approach to effectively compare the ability of multiple multivariate algorithms to capture subject variability for real fMRI data for effective group comparisons. Results: Discriminating trends in features calculated from IVA- and GICA-generated components show that IVA better preserves the qualities of centrality and small worldness in fMRI data. IVA also produced components with more activated voxels leading to larger area under the curve (AUC) values. Comparison with Existing Method: IVA is compared with widely used GICA for the purpose of group discrimination in terms of graph-theoretic features. In addition, masks are applied for motor related components generated by both algorithms. Conclusions: Results show IVA better captures subject variability, producing more activated voxels and generating components with less mutual information in the spatial domain than Group ICA. IVA-generated components result in smaller p-values and clearer trends in graph-theoretic features. PMID:25797843
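
    The graph-theoretic features referred to above (centrality, clustering, characteristic path length, small-worldness) can be computed from a thresholded connectivity matrix with NetworkX. The sketch below is illustrative only: the threshold, the random matrix standing in for an IVA- or GICA-derived component, and the small-world estimator settings are all assumptions.

    import numpy as np
    import networkx as nx

    def graph_features(connectivity, threshold=0.3):
        """Binarize a symmetric connectivity matrix at `threshold` and return
        a few of the graph-theoretic features used for group comparison."""
        adj = (np.abs(connectivity) > threshold).astype(int)
        np.fill_diagonal(adj, 0)
        g = nx.from_numpy_array(adj)
        feats = {
            "mean_degree_centrality": np.mean(list(nx.degree_centrality(g).values())),
            "clustering": nx.average_clustering(g),
        }
        if nx.is_connected(g):
            feats["char_path_length"] = nx.average_shortest_path_length(g)
            feats["small_world_sigma"] = nx.sigma(g, niter=5, nrand=5)
        return feats

    # Illustrative random "component" connectivity matrix
    rng = np.random.default_rng(0)
    c = rng.uniform(0, 1, (30, 30)); c = (c + c.T) / 2
    print(graph_features(c))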

  2. Analysis of the genetic diversity of Lonicera japonica Thunb. using inter-simple sequence repeat markers.

    PubMed

    He, H Y; Zhang, D; Qing, H; Yang, Y

    2017-01-23

    Inter-simple sequence repeats (ISSRs) were used to analyze the genetic diversity of 21 accessions obtained from four provinces in China: Shandong, Henan, Hebei, and Sichuan. A total of 272 scored bands were generated using the eight primers previously screened across 21 accessions, of which 267 were polymorphic (98.16%). Genetic similarity coefficients varied from 0.4816 to 0.9118, with an average of 0.6337. The UPGMA dendrogram grouped the 21 accessions into two main clusters. Cluster A comprised four Lonicera macranthoides Hand.-Mazz. accessions, of which J10 was found to be from Sichuan, and J17, J18, and J19 were found to be from Shandong. Cluster B comprised 17 Lonicera japonica Thunb. accessions, divided into the wild accession J16 and the other 16 cultivars. The results of the principal component analysis were comparable to those of the cluster analysis. Therefore, the ISSR markers could be effectively used to distinguish interspecific and intraspecific variations, which may facilitate identification of Lonicera japonica cultivars for planting, medicinal use, and germplasm conservation.
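
    The band-scoring workflow summarized above (binary band matrix, pairwise similarity, UPGMA dendrogram) can be sketched with SciPy's average-linkage clustering. The Dice coefficient used here is one common choice; the record does not say which similarity coefficient was applied, and the band matrix below is random illustrative data.

    import numpy as np
    from scipy.cluster.hierarchy import linkage
    from scipy.spatial.distance import squareform

    def dice_similarity(bands):
        """Pairwise Dice similarity for a 0/1 band-scoring matrix
        (rows = accessions, columns = ISSR bands)."""
        n = bands.shape[0]
        sim = np.eye(n)
        for i in range(n):
            for j in range(i + 1, n):
                a, b = bands[i], bands[j]
                shared = np.sum((a == 1) & (b == 1))
                sim[i, j] = sim[j, i] = 2.0 * shared / (a.sum() + b.sum())
        return sim

    rng = np.random.default_rng(1)
    bands = rng.integers(0, 2, (6, 20))          # 6 accessions x 20 scored bands
    sim = dice_similarity(bands)
    dist = squareform(1.0 - sim, checks=False)   # condensed distance matrix
    tree = linkage(dist, method="average")       # UPGMA = average linkage
    print(np.round(sim, 3))
    print(tree)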

  3. Genome-Wide Analysis of Simple Sequence Repeats in Bitter Gourd (Momordica charantia).

    PubMed

    Cui, Junjie; Cheng, Jiaowen; Nong, Dingguo; Peng, Jiazhu; Hu, Yafei; He, Weiming; Zhou, Qianjun; Dhillon, Narinder P S; Hu, Kailin

    2017-01-01

    Bitter gourd (Momordica charantia) is widely cultivated as a vegetable and medicinal herb in many Asian and African countries. After the sequencing of the cucumber (Cucumis sativus), watermelon (Citrullus lanatus), and melon (Cucumis melo) genomes, bitter gourd became the fourth cucurbit species whose whole genome was sequenced. However, a comprehensive analysis of simple sequence repeats (SSRs) in bitter gourd, including a comparison with the three aforementioned cucurbit species has not yet been published. Here, we identified a total of 188,091 and 167,160 SSR motifs in the genomes of the bitter gourd lines 'Dali-11' and 'OHB3-1,' respectively. Subsequently, the SSR content, motif lengths, and classified motif types were characterized for the bitter gourd genomes and compared among all the cucurbit genomes. Lastly, a large set of 138,727 unique in silico SSR primer pairs were designed for bitter gourd. Among these, 71 primers were selected, all of which successfully amplified SSRs from the two bitter gourd lines 'Dali-11' and 'K44'. To further examine the utilization of unique SSR primers, 21 SSR markers were used to genotype a collection of 211 bitter gourd lines from all over the world. A model-based clustering method and phylogenetic analysis indicated a clear separation among the geographic groups. The genomic SSR markers developed in this study have considerable potential value in advancing bitter gourd research.
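
    Genome-wide SSR identification of the kind reported above boils down to scanning sequences for short tandemly repeated motifs. The sketch below is a minimal regex-based scanner for perfect repeats; the minimum repeat counts are common MISA-style defaults, not the thresholds used in the study, and the demo sequence is made up.

    import re

    # Minimum repeat counts per motif length (illustrative, MISA-style thresholds)
    MIN_REPEATS = {1: 10, 2: 6, 3: 5, 4: 5, 5: 5, 6: 5}

    def find_ssrs(seq):
        """Return (start, motif, repeat_count) for perfect SSRs in `seq`."""
        seq = seq.upper()
        hits = []
        for k, min_rep in MIN_REPEATS.items():
            pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (k, min_rep - 1))
            for m in pattern.finditer(seq):
                motif = m.group(1)
                if len(set(motif)) == 1 and k > 1:
                    continue   # skip e.g. "AA" repeats already found as mononucleotide
                hits.append((m.start(), motif, len(m.group(0)) // k))
        return hits

    demo = "GATTACA" + "AG" * 8 + "CCGT" + "ATT" * 6 + "TTTTTTTTTTTT" + "ACGT"
    for start, motif, n in find_ssrs(demo):
        print(f"pos {start}: ({motif})x{n}")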

  4. A Simple Force-Motion Relation for Migrating Cells Revealed by Multipole Analysis of Traction Stress

    PubMed Central

    Tanimoto, Hirokazu; Sano, Masaki

    2014-01-01

    For biophysical understanding of cell motility, the relationship between mechanical force and cell migration must be uncovered, but it remains elusive. Since cells migrate at small scale in dissipative circumstances, the inertia force is negligible and all forces should cancel out. This implies that one must quantify the spatial pattern of the force instead of just the summation to elucidate the force-motion relation. Here, we introduced multipole analysis to quantify the traction stress dynamics of migrating cells. We measured the traction stress of Dictyostelium discoideum cells and investigated the lowest two moments, the force dipole and quadrupole moments, which reflect rotational and front-rear asymmetries of the stress field. We derived a simple force-motion relation in which cells migrate along the force dipole axis with a direction determined by the force quadrupole. Furthermore, as a complementary approach, we also investigated fine structures in the stress field that show front-rear asymmetric kinetics consistent with the multipole analysis. The tight force-motion relation enables us to predict cell migration only from the traction stress patterns. PMID:24411233

  5. A simple force-motion relation for migrating cells revealed by multipole analysis of traction stress.

    PubMed

    Tanimoto, Hirokazu; Sano, Masaki

    2014-01-07

    For biophysical understanding of cell motility, the relationship between mechanical force and cell migration must be uncovered, but it remains elusive. Since cells migrate at small scale in dissipative circumstances, the inertia force is negligible and all forces should cancel out. This implies that one must quantify the spatial pattern of the force instead of just the summation to elucidate the force-motion relation. Here, we introduced multipole analysis to quantify the traction stress dynamics of migrating cells. We measured the traction stress of Dictyostelium discoideum cells and investigated the lowest two moments, the force dipole and quadrupole moments, which reflect rotational and front-rear asymmetries of the stress field. We derived a simple force-motion relation in which cells migrate along the force dipole axis with a direction determined by the force quadrupole. Furthermore, as a complementary approach, we also investigated fine structures in the stress field that show front-rear asymmetric kinetics consistent with the multipole analysis. The tight force-motion relation enables us to predict cell migration only from the traction stress patterns. Copyright © 2014 Biophysical Society. Published by Elsevier Inc. All rights reserved.
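
    The dipole and quadrupole moments discussed above are weighted sums of the traction field over the substrate. The sketch below computes them for a sampled 2-D traction field; the tensor conventions (no symmetrization, no subtraction of the cell centroid) are simplifying assumptions and may differ from the paper's definitions.

    import numpy as np

    def traction_moments(x, y, tx, ty, dA):
        """Lowest multipole moments of a 2-D traction field sampled on a grid.
        x, y   : positions of the sampling points (arrays of the same shape)
        tx, ty : traction components at those points
        dA     : area per sampling point
        Returns (total force, dipole matrix M_ij = sum x_i T_j dA,
        quadrupole tensor Q_ijk = sum x_i x_j T_k dA)."""
        pos = np.stack([x.ravel(), y.ravel()], axis=1)            # (N, 2)
        trac = np.stack([tx.ravel(), ty.ravel()], axis=1)         # (N, 2)
        force = trac.sum(axis=0) * dA
        dipole = pos.T @ trac * dA                                # (2, 2)
        quad = np.einsum("ni,nj,nk->ijk", pos, pos, trac) * dA    # (2, 2, 2)
        return force, dipole, quad

    # Illustrative contractile dipole: two opposing point-like tractions on the x axis
    x, y = np.array([-5.0, 5.0]), np.array([0.0, 0.0])
    tx, ty = np.array([1.0, -1.0]), np.array([0.0, 0.0])
    f, M, Q = traction_moments(x, y, tx, ty, dA=1.0)
    print("net force", f)     # ~0, as required by force balance
    print("dipole\n", M)      # negative Mxx indicates contraction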

  6. A simple and sensitive spectrofluorimetric method for analysis of some nitrofuran drugs in pharmaceutical preparations.

    PubMed

    Belal, Tarek Saied

    2008-09-01

    A simple, rapid, selective and sensitive spectrofluorimetric method was described for the analysis of three nitrofuran drugs, namely, nifuroxazide (NX), nitrofurantoin (NT) and nitrofurazone (NZ). The method involved the alkaline hydrolysis of the studied drugs by warming with 0.1 M sodium hydroxide solution then dilution with distilled water for NX or 2-propanol for NT and NZ. The formed fluorophores were measured at 465 nm (lambda (Ex) 265 nm), 458 nm (lambda (Ex) 245 nm) and 445 nm (lambda (Ex) 245 nm) for NX, NT and NZ, respectively. The reaction pathway was discussed and the structures of the fluorescent products were proposed. The different experimental parameters were studied and optimized. Regression analysis showed good correlation between fluorescence intensity and concentration over the ranges 0.08-1.00, 0.02-0.24 and 0.004-0.050 microg ml(-1) for NX, NT and NZ, respectively. The limits of detection of the method were 8.0, 1.9 and 0.3 ng ml(-1) for NX, NT and NZ, respectively. The proposed method was validated in terms of accuracy, precision and specificity, and it was successfully applied for the assay of the three nitrofurans in their different dosage forms. No interference was observed from common pharmaceutical adjuvants. The results were favorably compared with those obtained by reference spectrophotometric methods.
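
    The calibration step behind figures of merit like those above (linearity range, detection limit) is a straight-line least-squares fit. The sketch below fits a synthetic calibration set and reports slope, intercept, correlation and an LOD/LOQ estimate; the 3.3*s/slope convention and all numbers are assumptions, not values from the paper.

    import numpy as np

    def calibration(conc, signal):
        """Least-squares calibration line and a common detection-limit estimate
        (LOD = 3.3 * s_residual / slope; LOQ = 10 * s_residual / slope)."""
        slope, intercept = np.polyfit(conc, signal, 1)
        fitted = slope * conc + intercept
        resid_sd = np.sqrt(np.sum((signal - fitted) ** 2) / (len(conc) - 2))
        r = np.corrcoef(conc, signal)[0, 1]
        return {"slope": slope, "intercept": intercept, "r": r,
                "LOD": 3.3 * resid_sd / slope, "LOQ": 10 * resid_sd / slope}

    # Synthetic fluorescence calibration over 0.08-1.00 microgram/mL
    conc = np.array([0.08, 0.2, 0.4, 0.6, 0.8, 1.0])
    signal = 820.0 * conc + 5.0 + np.random.normal(0, 3.0, conc.size)
    print(calibration(conc, signal))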

  7. Genome-Wide Analysis of Simple Sequence Repeats in Bitter Gourd (Momordica charantia)

    PubMed Central

    Cui, Junjie; Cheng, Jiaowen; Nong, Dingguo; Peng, Jiazhu; Hu, Yafei; He, Weiming; Zhou, Qianjun; Dhillon, Narinder P. S.; Hu, Kailin

    2017-01-01

    Bitter gourd (Momordica charantia) is widely cultivated as a vegetable and medicinal herb in many Asian and African countries. After the sequencing of the cucumber (Cucumis sativus), watermelon (Citrullus lanatus), and melon (Cucumis melo) genomes, bitter gourd became the fourth cucurbit species whose whole genome was sequenced. However, a comprehensive analysis of simple sequence repeats (SSRs) in bitter gourd, including a comparison with the three aforementioned cucurbit species has not yet been published. Here, we identified a total of 188,091 and 167,160 SSR motifs in the genomes of the bitter gourd lines ‘Dali-11’ and ‘OHB3-1,’ respectively. Subsequently, the SSR content, motif lengths, and classified motif types were characterized for the bitter gourd genomes and compared among all the cucurbit genomes. Lastly, a large set of 138,727 unique in silico SSR primer pairs were designed for bitter gourd. Among these, 71 primers were selected, all of which successfully amplified SSRs from the two bitter gourd lines ‘Dali-11’ and ‘K44’. To further examine the utilization of unique SSR primers, 21 SSR markers were used to genotype a collection of 211 bitter gourd lines from all over the world. A model-based clustering method and phylogenetic analysis indicated a clear separation among the geographic groups. The genomic SSR markers developed in this study have considerable potential value in advancing bitter gourd research. PMID:28690629

  8. Theoretical foundations for finite-time transient stability and sensitivity analysis of power systems

    NASA Astrophysics Data System (ADS)

    Dasgupta, Sambarta

    Transient stability and sensitivity analysis of power systems are problems of enormous academic and practical interest. These classical problems have received renewed interest, because of the advancement in sensor technology in the form of phasor measurement units (PMUs). The advancement in sensor technology has provided unique opportunity for the development of real-time stability monitoring and sensitivity analysis tools. Transient stability problem in power system is inherently a problem of stability analysis of the non-equilibrium dynamics, because for a short time period following a fault or disturbance the system trajectory moves away from the equilibrium point. The real-time stability decision has to be made over this short time period. However, the existing stability definitions and hence analysis tools for transient stability are asymptotic in nature. In this thesis, we discover theoretical foundations for the short-term transient stability analysis of power systems, based on the theory of normally hyperbolic invariant manifolds and finite time Lyapunov exponents, adopted from geometric theory of dynamical systems. The theory of normally hyperbolic surfaces allows us to characterize the rate of expansion and contraction of co-dimension one material surfaces in the phase space. The expansion and contraction rates of these material surfaces can be computed in finite time. We prove that the expansion and contraction rates can be used as finite time transient stability certificates. Furthermore, material surfaces with maximum expansion and contraction rate are identified with the stability boundaries. These stability boundaries are used for computation of stability margin. We have used the theoretical framework for the development of model-based and model-free real-time stability monitoring methods. Both the model-based and model-free approaches rely on the availability of high resolution time series data from the PMUs for stability prediction. The problem of
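
    Finite-time Lyapunov exponents of the kind invoked above measure the maximal stretching of the flow map over a finite horizon. The sketch below estimates the FTLE for a toy single-machine swing equation by finite-differencing perturbed trajectories; the model, horizon and parameters are stand-ins, not the thesis's power-system model or method.

    import numpy as np
    from scipy.integrate import solve_ivp

    def swing(t, x, p_m=0.8, d=0.1):
        """Toy single-machine swing equation (stand-in for a power-system model)."""
        delta, omega = x
        return [omega, p_m - np.sin(delta) - d * omega]

    def ftle(x0, t_final, eps=1e-6):
        """Finite-time Lyapunov exponent at x0 over horizon t_final, estimated
        from the finite-difference flow-map Jacobian of perturbed trajectories."""
        x0 = np.asarray(x0, dtype=float)
        n = x0.size
        phi = np.zeros((n, n))
        base = solve_ivp(swing, (0, t_final), x0, rtol=1e-8).y[:, -1]
        for j in range(n):
            dx = np.zeros(n); dx[j] = eps
            pert = solve_ivp(swing, (0, t_final), x0 + dx, rtol=1e-8).y[:, -1]
            phi[:, j] = (pert - base) / eps
        sigma_max = np.linalg.svd(phi, compute_uv=False)[0]
        return np.log(sigma_max) / t_final

    print(ftle([0.5, 0.0], t_final=5.0))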

  9. Flying high: a theoretical analysis of the factors limiting exercise performance in birds at altitude.

    PubMed

    Scott, Graham R; Milsom, William K

    2006-11-01

    The ability of some bird species to fly at extreme altitude has fascinated comparative respiratory physiologists for decades, yet there is still no consensus about what adaptations enable high altitude flight. Using a theoretical model of O(2) transport, we performed a sensitivity analysis of the factors that might limit exercise performance in birds. We found that the influence of individual physiological traits on oxygen consumption (Vo2) during exercise differed between sea level, moderate altitude, and extreme altitude. At extreme altitude, haemoglobin (Hb) O(2) affinity, total ventilation, and tissue diffusion capacity for O(2) (D(To2)) had the greatest influences on Vo2; increasing these variables should therefore have the greatest adaptive benefit for high altitude flight. There was a beneficial interaction between D(To2) and the P(50) of Hb, such that increasing D(To2) had a greater influence on Vo2 when P(50) was low. Increases in the temperature effect on P(50) could also be beneficial for high flying birds, provided that cold inspired air at extreme altitude causes a substantial difference in temperature between blood in the lungs and in the tissues. Changes in lung diffusion capacity for O(2), cardiac output, blood Hb concentration, the Bohr coefficient, or the Hill coefficient likely have less adaptive significance at high altitude. Our sensitivity analysis provides theoretical suggestions of the adaptations most likely to promote high altitude flight in birds and provides direction for future in vivo studies.

  10. Graph theoretic network analysis reveals protein pathways underlying cell death following neurotropic viral infection

    PubMed Central

    Ghosh, Sourish; Kumar, G. Vinodh; Basu, Anirban; Banerjee, Arpan

    2015-01-01

    Complex protein networks underlie any cellular function. Certain proteins play a pivotal role in many network configurations, and disruption of their expression proves fatal to the cell. An efficient method to tease out such key proteins in a network is still unavailable. Here, we used graph-theoretic measures on protein-protein interaction data (interactome) to extract biophysically relevant information about individual protein regulation and network properties such as formation of function specific modules (sub-networks) of proteins. We took 5 major proteins that are involved in neuronal apoptosis following Chandipura Virus (CHPV) infection as seed proteins in a database to create a meta-network of immediately interacting proteins (1st order network). Graph theoretic measures were employed to rank the proteins in terms of their connectivity and the degree up to which they can be organized into smaller modules (hubs). We repeated the analysis on the 2nd order interactome, which includes proteins connected directly with proteins of the 1st order. FADD and Casp-3 were connected maximally to other proteins in both analyses, thus indicating their importance in neuronal apoptosis. Thus, our analysis provides a blueprint for the detection and validation of protein networks disrupted by viral infections. PMID:26404759

  11. Growth and spectral analysis of piperazinium L-tartrate salt: A combined experimental and theoretical approach

    NASA Astrophysics Data System (ADS)

    Mathammal, R.; Sudha, N.; Shankar, R.; Rajaboopathi, M.; Janagi, S.; Prabavathi, B.

    2017-03-01

    This report discusses the crystal structure, molecular arrangements, vibrational analysis, UV-Vis-NIR spectrum, fluorescence emission and second harmonic generation (SHG) efficiency of piperazinium L-tartrate (PPZ2+·Tart2-) crystals with the support of theoretical analysis. Good optical quality PPZ2+·Tart2- crystals were grown by slow evaporation of an aqueous solution. The PPZ2+·Tart2- crystal belongs to the monoclinic system with the non-centrosymmetric space group P21. The charge transfer from donor to acceptor moieties and the corresponding changes in the bond lengths and bond angles have been observed. The observed functional group vibrations in the experimental FTIR and Raman spectra were assigned and compared with the theoretical wavenumbers of PPZ2+·Tart2-. The electron distribution on the donor and acceptor in PPZ2+·Tart2- has been clearly visualised using a molecular electrostatic potential map. Compared with L-tartaric acid, a red shift was observed in the absorption and fluorescence spectra. The low values of the dielectric constant and dielectric loss at higher frequency and the high second harmonic efficiency suggest that the PPZ2+·Tart2- crystal is relatively defect-free and suitable for NLO applications.

  12. Information theoretic analysis of linear shift-invariant edge-detection operators

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Rahman, Zia-ur

    2012-06-01

    Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the influences by the image gathering process. However, experiments show that the image gathering process has a profound impact on the performance of digital image processing and the quality of the resulting images. Huck et al. proposed one definitive theoretic analysis of visual communication channels, where the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. We perform an end-to-end information theory based system analysis to assess linear shift-invariant edge-detection algorithms. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and the parameters, such as sampling, additive noise etc., that define the image gathering system. The edge-detection algorithm is regarded as having high performance only if the information rate from the scene to the edge image approaches its maximum possible. This goal can be achieved only by jointly optimizing all processes. Our information-theoretic assessment provides a new tool that allows us to compare different linear shift-invariant edge detectors in a common environment.

  13. Theoretical analysis of maximum flow declination rate versus maximum area declination rate in phonation.

    PubMed

    Titze, Ingo R

    2006-04-01

    Maximum flow declination rate (MFDR) in the glottis is known to correlate strongly with vocal intensity in voicing. This declination, or negative slope on the glottal airflow waveform, is in part attributable to the maximum area declination rate (MADR) and in part to the overall inertia of the air column of the vocal tract (lungs to lips). The purpose of this theoretical study was to show the possible contributions of air inertance and MADR to MFDR. A simplified computational model of the kinematics of vocal fold movement was utilized to compute a glottal area function. The glottal flow was computed interactively with lumped vocal tract parameters in the form of resistance and inertive reactance. It was shown that MADR depends almost entirely on the ratio of vibrational amplitudes of the lower to upper margins of the vocal fold tissue. Adduction, vertical phase difference, and prephonatory convergence of the glottis have a lesser effect on MADR. A relatively simple rule was developed that relates MFDR to a vibrational amplitude ratio and vocal tract inertance. It was concluded that speakers and singers have multiple options for control of intensity, some of which involve more source-filter interaction than others.

  14. NDARC-NASA Design and Analysis of Rotorcraft Theoretical Basis and Architecture

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2010-01-01

    The theoretical basis and architecture of the conceptual design tool NDARC (NASA Design and Analysis of Rotorcraft) are described. The principal tasks of NDARC are to design (or size) a rotorcraft to satisfy specified design conditions and missions, and then analyze the performance of the aircraft for a set of off-design missions and point operating conditions. The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated. The aircraft attributes are obtained from the sum of the component attributes. NDARC provides a capability to model general rotorcraft configurations, and estimate the performance and attributes of advanced rotor concepts. The software has been implemented with low-fidelity models, typical of the conceptual design environment. Incorporation of higher-fidelity models will be possible, as the architecture of the code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis and optimization.

  15. A Thematic Analysis of Theoretical Models for Translational Science in Nursing: Mapping the Field

    PubMed Central

    Mitchell, Sandra A.; Fisher, Cheryl A.; Hastings, Clare E.; Silverman, Leanne B.; Wallen, Gwenyth R.

    2010-01-01

    Background The quantity and diversity of conceptual models in translational science may complicate rather than advance the use of theory. Purpose This paper offers a comparative thematic analysis of the models available to inform knowledge development, transfer, and utilization. Method Literature searches identified 47 models for knowledge translation. Four thematic areas emerged: (1) evidence-based practice and knowledge transformation processes; (2) strategic change to promote adoption of new knowledge; (3) knowledge exchange and synthesis for application and inquiry; (4) designing and interpreting dissemination research. Discussion This analysis distinguishes the contributions made by leaders and researchers at each phase in the process of discovery, development, and service delivery. It also informs the selection of models to guide activities in knowledge translation. Conclusions A flexible theoretical stance is essential to simultaneously develop new knowledge and accelerate the translation of that knowledge into practice behaviors and programs of care that support optimal patient outcomes. PMID:21074646

  16. A thematic analysis of theoretical models for translational science in nursing: mapping the field.

    PubMed

    Mitchell, Sandra A; Fisher, Cheryl A; Hastings, Clare E; Silverman, Leanne B; Wallen, Gwenyth R

    2010-01-01

    The quantity and diversity of conceptual models in translational science may complicate rather than advance the use of theory. This paper offers a comparative thematic analysis of the models available to inform knowledge development, transfer, and utilization. Literature searches identified 47 models for knowledge translation. Four thematic areas emerged: (1) evidence-based practice and knowledge transformation processes, (2) strategic change to promote adoption of new knowledge, (3) knowledge exchange and synthesis for application and inquiry, and (4) designing and interpreting dissemination research. This analysis distinguishes the contributions made by leaders and researchers at each phase in the process of discovery, development, and service delivery. It also informs the selection of models to guide activities in knowledge translation. A flexible theoretical stance is essential to simultaneously develop new knowledge and accelerate the translation of that knowledge into practice behaviors and programs of care that support optimal patient outcomes.

  17. Theoretical and experimental analysis of bispectrum of vibration signals for fault diagnosis of gears

    NASA Astrophysics Data System (ADS)

    Guoji, Shen; McLaughlin, Stephen; Yongcheng, Xu; White, Paul

    2014-02-01

    Condition monitoring and fault diagnosis are important for gearbox maintenance and safety. The critical step in such activities is to extract reliable features representative of the condition of the gears or gearbox. In this paper a framework is presented for the application of the bispectrum to the analysis of gearbox vibration. The bispectrum of a composite signal consisting of multiple periodic components has peaks at the bifrequencies that correspond to closely related components, which can be produced by any nonlinearity. As a result, biphase verification is necessary to reduce false alarms in any bispectrum-based method. A model based on modulated signals is adopted to reveal the bispectrum characteristics of the vibration of a faulty gear, and the corresponding amplitude and phase of the bispectrum expression are deduced. A diagnostic approach based on this theoretical result is then derived and verified by the analysis of a set of vibration signals from a helicopter gearbox.
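
    As a rough illustration of the quantity involved (not the authors' code), a direct FFT-based estimate of the bispectrum B(f1, f2) = E[X(f1) X(f2) X*(f1 + f2)] can be computed by averaging over signal segments; the test signal below is synthetic.

```python
# Minimal sketch of a segment-averaged bispectrum estimate; signal, segment
# length, and windowing are illustrative assumptions.
import numpy as np

def bispectrum(x, nfft=128):
    segs = [x[i:i + nfft] for i in range(0, len(x) - nfft + 1, nfft)]
    B = np.zeros((nfft // 2, nfft // 2), dtype=complex)
    for s in segs:
        X = np.fft.fft(s * np.hanning(nfft))
        for f1 in range(nfft // 2):
            for f2 in range(nfft // 2):
                B[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
    return B / len(segs)

fs = 1000
t = np.arange(0, 2, 1 / fs)
# 200 Hz = 60 Hz + 140 Hz with matching phases: a phase-coupled triplet that
# produces a bispectrum peak at the corresponding bifrequency.
x = (np.cos(2 * np.pi * 60 * t) + 0.5 * np.cos(2 * np.pi * 140 * t)
     + 0.3 * np.cos(2 * np.pi * 200 * t))
B = bispectrum(x)
print("Peak bispectrum magnitude:", np.abs(B).max())
```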

  18. Elastic responses of underground circular arches considering dynamic soil-structure interaction: A theoretical analysis

    NASA Astrophysics Data System (ADS)

    Chen, Hai-Long; Jin, Feng-Nian; Fan, Hua-Lin

    2013-02-01

    Due to the wide application of arches in underground protective structures, dynamic analysis of circular arches including soil-structure interaction is important. In this paper, an exact solution for the forced vibration of circular arches subjected to subsurface detonation forces is obtained. The dynamic soil-structure interaction is considered by introducing an interfacial damping between the structural element and the surrounding soil into the equation of motion. By neglecting the influences of shear, rotary inertia and tangential forces and assuming the arch to be incompressible, the equations of motion of the buried arches were set up. Analytical solutions for the dynamic responses of the protective arches were deduced by means of modal superposition. Arches with different opening angles, acoustic impedances and rise-span ratios were analyzed to discuss the influence of these parameters on the arch response. The theoretical analysis suggests blast loads for elastic design and predicts the potential failure modes of buried protective arches.

  19. Open Source Tools for the Information Theoretic Analysis of Neural Data

    PubMed Central

    Ince, Robin A. A.; Mazzoni, Alberto; Petersen, Rasmus S.; Panzeri, Stefano

    2009-01-01

    Open source software tools for the analysis of neurophysiological datasets consisting of simultaneous multiple recordings of spikes, field potentials and other neural signals have developed recently and rapidly. They hold the promise of a significant advance in the standardization, transparency, quality, reproducibility and variety of techniques used to analyze neurophysiological data, and in the integration of information obtained at different spatial and temporal scales. In this review we focus on recent advances in open source toolboxes for the information theoretic analysis of neural responses. We also present examples of their use to investigate the role of spike timing precision, correlations across neurons, and field potential fluctuations in the encoding of sensory information. These information toolboxes, available in both MATLAB and Python programming environments, hold the potential to enlarge the domain of application of information theory to neuroscience and to lead to new discoveries about how neurons encode and transmit information. PMID:20730105
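
    A minimal sketch of the core computation such toolboxes perform, assuming a simple plug-in estimator and simulated data (this is not code from any of the reviewed toolboxes):

```python
# Plug-in estimate of the mutual information I(S; R) between a discrete
# stimulus S and binned spike-count responses R; the data are simulated.
import numpy as np

def mutual_information(stim, resp):
    """Plug-in MI estimate in bits from paired discrete samples."""
    joint = np.zeros((stim.max() + 1, resp.max() + 1))
    for s, r in zip(stim, resp):
        joint[s, r] += 1
    joint /= joint.sum()
    ps, pr = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / np.outer(ps, pr)[nz])))

rng = np.random.default_rng(0)
stim = rng.integers(0, 4, size=5000)      # 4 stimulus classes
resp = rng.poisson(lam=2 + 3 * stim)      # spike counts depend on the stimulus
print("I(S;R) ~", round(mutual_information(stim, resp), 3), "bits")
```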

  20. Structural modeling and analysis of an effluent treatment process for electroplating--a graph theoretic approach.

    PubMed

    Kumar, Abhishek; Clement, Shibu; Agrawal, V P

    2010-07-15

    An attempt is made to address a few ecological and environmental issues by developing different structural models for an effluent treatment system for electroplating. The effluent treatment system is defined with the help of different subsystems contributing to waste minimization. A hierarchical tree and a block diagram showing all possible interactions among subsystems are proposed. These non-mathematical diagrams are converted into mathematical models for design improvement, analysis, comparison, storage and retrieval, and commercial off-the-shelf purchase of different subsystems. This is achieved by developing a graph theoretic model, matrix models and a variable permanent function model. Analysis is carried out by permanent function, hierarchical tree and block diagram methods. Storage and retrieval are done using the matrix models. The methodology is illustrated with the help of an example. Benefits to the electroplaters/end users are identified. © 2010 Elsevier B.V. All rights reserved.
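
    The permanent function mentioned above is the matrix permanent, which sums every permutation product without the alternating signs of the determinant and so retains all subsystem and interaction terms. A hedged sketch with an invented 4-subsystem matrix:

```python
# Direct expansion of the permanent over all permutations (O(n!*n)); adequate
# for the small system matrices used in such structural models.
import itertools
import numpy as np

def permanent(A):
    n = A.shape[0]
    return sum(np.prod([A[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))

# Illustrative 4-subsystem matrix: diagonal = subsystem contributions,
# off-diagonal = interconnection strengths (all values are made up).
A = np.array([[3.0, 1.0, 0.0, 0.5],
              [1.0, 2.0, 1.0, 0.0],
              [0.0, 1.0, 4.0, 1.0],
              [0.5, 0.0, 1.0, 2.5]])
print("Permanent function value:", permanent(A))
```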

  1. Analysis of the genetic diversity of beach plums by simple sequence repeat markers.

    PubMed

    Wang, X M; Wu, W L; Zhang, C H; Zhang, Y P; Li, W L; Huang, T

    2015-08-19

    The purpose of this study was to measure the genetic diversity of wild beach plum and cultivated species, and to determine the species relationships using simple sequence repeat (SSR) markers. An analysis of genetic diversity of ten beach plum germplasms was carried out using 11 SSR primers selected from 35 primers to generate distinct PCR products. From this plant material, 44 allele variations were detected, with 3-5 alleles identified from each primer. The analysis showed that the genetic similarity coefficient varied from 0.721 ± 0.155 to 0.848 ± 0.136 within each of the ten beach plum germplasms and ranged from 0.551 ± 0.084 to 0.695 ± 0.073 between any two pairs of germplasms. According to the genetic dissimilarity coefficient matrix, a cluster analysis of the SSR data using the unweighted pair group mean average method in the NTSYSpc 2.10 software revealed that the ten germplasms could be divided into two groups at a dissimilarity coefficient of 0.606. Class I included 77.8, 12.5, 30, and 33.3% of MM, MI, NY, and CM, respectively. Class II contained the remaining 9 beach plum germplasms. The markers generated by the 11 SSR primers proved very effective in distinguishing the beach plum germplasm resources. It was clear that the geographical distribution did not correspond with the genetic relationships among the different beach plum strains. This result will be of value to beach plum breeding programs.
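
    The UPGMA clustering step can be reproduced generically from a dissimilarity matrix; the sketch below uses SciPy with made-up values and labels and is not the NTSYSpc analysis itself.

```python
# UPGMA (average-linkage) clustering of germplasms from a genetic
# dissimilarity matrix; the 4x4 values are invented for illustration.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

labels = ["MM", "MI", "NY", "CM"]
D = np.array([[0.00, 0.55, 0.62, 0.60],
              [0.55, 0.00, 0.58, 0.63],
              [0.62, 0.58, 0.00, 0.52],
              [0.60, 0.63, 0.52, 0.00]])

Z = linkage(squareform(D), method="average")            # "average" = UPGMA
groups = fcluster(Z, t=0.606, criterion="distance")     # cut at a chosen level
print(dict(zip(labels, groups)))
```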

  2. Dependence of velocity fluctuations on solar wind speeds: A simple analysis with IPS method

    NASA Technical Reports Server (NTRS)

    Misawa, H.; Kojima, M.

    1995-01-01

    A number of theoretical works have suggested that MHD plasma fluctuations in the solar wind should play an important role, particularly in the acceleration of high speed winds inside or near 0.1 AU from the sun. Since velocity fluctuations in the solar wind are expected to be caused by these MHD plasma fluctuations, measurements of the velocity fluctuations give clues to the acceleration process of the solar wind. We made interplanetary scintillation (IPS) observations in the region beyond 0.1 AU to investigate the dependence of velocity fluctuations on flow speed. To evaluate the velocity fluctuation of a flow, we selected the IPS data set acquired at two separate antennas located along the projected flow direction on the baseline plane, and compared the skewness of the observed cross correlation function (CCF) with the skewness of modeled CCFs in which the velocity fluctuations were parametrized. The integration effect of IPS along the ray path was also taken into account in the estimation of the modeled CCFs. Although this analysis method can only derive fluctuation components parallel to the flow direction, preliminary analyses show the following results: (1) High speed winds (Vsw greater than or equal to 500 km/s beyond 0.3 AU) indicate an enhancement of velocity fluctuations near 0.1 AU; and (2) Low speed winds (Vsw less than or equal to 400 km/s beyond 0.3 AU) indicate small velocity fluctuations at all distances.

  4. A simple iterative independent component analysis algorithm for vibration source signal identification of complex structures

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Sup; Cho, Dae-Seung; Kim, Kookhyun; Jeon, Jae-Jin; Jung, Woo-Jin; Kang, Myeng-Hwan; Kim, Jae-Ho

    2015-01-01

    Independent Component Analysis (ICA), one of the blind source separation methods, can be used to extract unknown source signals from the received signals alone. This is accomplished by finding the statistical independence of the signal mixtures, and it has been successfully applied in many fields such as medical science and image processing. Nevertheless, inherent problems have been reported when using this technique: instability and invalid ordering of the separated signals, particularly when a conventional ICA technique is used for vibration source signal identification in complex structures. In this study, a simple iterative variant of the conventional ICA algorithm is proposed to mitigate these problems. To extract more stable source signals with a valid order, the proposed method iteratively extracts and reorders the mixing matrix until the reconstructed source signals converge, using the magnitudes of the correlation coefficients between the intermediately separated signals and signals measured on or near the sources. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses have been carried out for a virtual response model and a 30 m class submarine model. Moreover, in order to investigate the applicability of the proposed method to a real problem involving a complex structure, an experiment has been carried out on a scaled submarine mockup. The results show that the proposed method resolves the inherent problems of the conventional ICA technique.
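
    A minimal sketch of the reordering idea described above (one pass only; the paper iterates it until the separated signals converge), using a standard FastICA implementation and invented signals:

```python
# Separate two mixed signals, then assign and sign-correct the ICA components
# by their correlation with signals measured near the sources.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 2000)
sources = np.c_[np.sin(2 * np.pi * 7 * t), np.sign(np.sin(2 * np.pi * 3 * t))]
X = sources @ np.array([[1.0, 0.6], [0.4, 1.0]])            # mixed "measurements"
refs = sources + 0.3 * rng.standard_normal(sources.shape)   # near-source signals

S = FastICA(n_components=2, random_state=0).fit_transform(X)
C = np.corrcoef(S.T, refs.T)[:2, 2:]          # components (rows) vs references (cols)
assign = np.argmax(np.abs(C), axis=0)         # best-matching component per reference
S_ordered = S[:, assign] * np.sign(C[assign, np.arange(2)])
print("ICA component matched to each reference signal:", assign)
```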

  5. Genetic diversity analysis of okra (Abelmoschus esculentus L.) by inter-simple sequence repeat (ISSR) markers.

    PubMed

    Yuan, C Y; Zhang, C; Wang, P; Hu, S; Chang, H P; Xiao, W J; Lu, X T; Jiang, S B; Ye, J Z; Guo, X H

    2014-04-25

    Okra (Abelmoschus esculentus L.) is not only a nutrient-rich vegetable but also an important medicinal herb. Inter-simple sequence repeat (ISSR) markers were employed to investigate the genetic diversity and differentiation of 24 okra genotypes. In this study, the PCR products were separated by electrophoresis on 8% nondenaturing polyacrylamide gel and visualized by silver staining. The 22 ISSR primers produced 289 amplified DNA fragments, of which 145 (50%) were polymorphic. The 289 markers were used to construct a dendrogram based on unweighted pair-group method with arithmetic average (UPGMA) cluster analysis. The dendrogram indicated that the 24 okra genotypes clustered into 4 geographically distinct groups. The average polymorphism information content (PIC) was 0.531929, which showed that the majority of primers were informative. The high values of allele frequency, genetic diversity, and heterozygosity showed that the primer-sample combinations produced measurable fragments. The mean distances ranged from 0.045455 to 0.454545. The dendrogram indicated that the ISSR markers succeeded in distinguishing most of the 24 varieties in relation to their genetic backgrounds and geographical origins.

  6. Final Analysis and Results of the Phase II SIMPLE Dark Matter Search

    NASA Astrophysics Data System (ADS)

    Felizardo, M.; Girard, T. A.; Morlat, T.; Fernandes, A. C.; Ramos, A. R.; Marques, J. G.; Kling, A.; Puibasset, J.; Auguste, M.; Boyer, D.; Cavaillou, A.; Poupeney, J.; Sudre, C.; Miley, H. S.; Payne, R. F.; Carvalho, F. P.; Prudêncio, M. I.; Gouveia, A.; Marques, R.

    2012-05-01

    We report the final results of the Phase II SIMPLE measurements, comprising two run stages of 15 superheated droplet detectors each, with the second stage including an improved neutron shielding. The analyses include a refined signal analysis, and a revised nucleation efficiency based on a reanalysis of previously reported monochromatic neutron irradiations. The combined results yield a contour minimum of σ_p = 5.7 × 10^-3 pb at 35 GeV/c^2 in the spin-dependent sector of weakly interacting massive particle (WIMP)-proton interactions, the most restrictive to date for M_W ≤ 60 GeV/c^2 from a direct search experiment and overlapping, for the first time, with results previously obtained only indirectly. In the spin-independent sector, a minimum of 4.7 × 10^-6 pb at 35 GeV/c^2 is achieved, with the exclusion contour challenging a significant part of the light-mass WIMP region of current interest.

  7. Simple cost analysis of a rural dental training facility in Australia.

    PubMed

    Lalloo, Ratilal; Massey, Ward

    2013-06-01

    Student clinical placements away from the university dental school clinics are an integral component of dental training programs. In 2009, the School of Dentistry and Oral Health, Griffith University, commenced a clinical placement in a remote rural community in Australia. This paper presents a simple cost analysis of the project from mid-2008 to mid-2011. All expenditures of the project are audited by the Financial and Planning Services unit of the university. The budget was divided into capital and operational costs, and the latter were further subdivided into salary and non-salary costs, and these were further analysed for the various types of expenditures incurred. The value of the treatments provided and income generated is also presented. Remote rural placements have additional (to the usual university dental clinic) costs in terms of salary incentives, travel, accommodation and subsistence support. However, the benefits of the placement to both the students and the local community might outweigh the additional costs of the placement. Because of high costs of rural student clinical placements, the financial support of partners, including the local Shire Council, state/territory and Commonwealth governments, is crucial in the establishment and ongoing sustainability of rural dental student clinical placements. © 2013 The Authors. Australian Journal of Rural Health © National Rural Health Alliance Inc.

  8. Modeling and analysis of smart piezoelectric beams using simple higher order shear deformation theory

    NASA Astrophysics Data System (ADS)

    Adnan Elshafei, M.; Alraiess, Fuzy

    2013-03-01

    In the current work, a finite element formulation is developed for the modeling and analysis of isotropic as well as orthotropic composite beams with distributed piezoelectric actuators subjected to both mechanical and electrical loads. The proposed model is based on a simple higher order shear deformation theory in which the displacement field accounts for a parabolic distribution of the shear strain and the nonlinearity of in-plane displacements across the thickness, so that a shear correction factor is not required. The virtual displacement method is used to formulate the equations of motion of the structural system. The model is valid for both segmented and continuous piezoelectric elements, which can be either surface bonded or embedded in the laminated beams. A two-node element with four mechanical degrees of freedom and one electrical degree of freedom per node is used in the finite element formulation. The electric potential is considered a function of the thickness and the length of the beam element. A MATLAB code is developed to compute the static deformation and free vibration parameters of the beams with distributed piezoelectric actuators. The results obtained from the proposed model are compared with the available analytical results and the finite element results of other researchers.

  9. Experimental and Theoretical Modal Analysis of Full-Sized Wood Composite Panels Supported on Four Nodes

    PubMed Central

    Guan, Cheng; Zhang, Houjiang; Wang, Xiping; Miao, Hu; Zhou, Lujing; Liu, Fenglu

    2017-01-01

    Key elastic properties of full-sized wood composite panels (WCPs) must be accurately determined not only for safety, but also serviceability demands. In this study, the modal parameters of full-sized WCPs supported on four nodes were analyzed for determining the modulus of elasticity (E) in both major and minor axes, as well as the in-plane shear modulus of panels by using a vibration testing method. The experimental modal analysis was conducted on three full-sized medium-density fiberboard (MDF) and three full-sized particleboard (PB) panels of three different thicknesses (12, 15, and 18 mm). The natural frequencies and mode shapes of the first nine modes of vibration were determined. Results from experimental modal testing were compared with the results of a theoretical modal analysis. A sensitivity analysis was performed to identify the sensitive modes for calculating E (major axis: Ex and minor axis: Ey) and the in-plane shear modulus (Gxy) of the panels. Mode shapes of the MDF and PB panels obtained from modal testing are in a good agreement with those from theoretical modal analyses. A strong linear relationship exists between the measured natural frequencies and the calculated frequencies. The frequencies of modes (2, 0), (0, 2), and (2, 1) under the four-node support condition were determined as the characteristic frequencies for calculation of Ex, Ey, and Gxy of full-sized WCPs. The results of this study indicate that the four-node support can be used in free vibration test to determine the elastic properties of full-sized WCPs. PMID:28773043

  10. The beauty of simple adaptive control and new developments in nonlinear systems stability analysis

    SciTech Connect

    Barkana, Itzhak

    2014-12-10

    Although various adaptive control techniques have been around for a long time, and in spite of successful proofs of stability and even successful demonstrations of performance, the eventual use of adaptive control methodologies in practical real-world systems has met rather strong resistance from practitioners and has remained limited. Apparently, it is difficult to guarantee or even understand the conditions that can guarantee stable operation of adaptive control systems under realistic operational environments. Besides, it is difficult to measure the robustness of adaptive control system stability and to compare it with the common and widely used measures of phase margin and gain margin utilized by present, mainly LTI, controllers. Furthermore, customary stability analysis methods seem to imply that the mere stability of adaptive systems may be adversely affected by any tiny deviation from the rather idealistic and presumably required stability conditions. This paper first revisits the fundamental qualities of customary direct adaptive control methodologies, in particular the classical Model Reference Adaptive Control, and shows that some of their basic drawbacks have been addressed and eliminated within the so-called Simple Adaptive Control methodology. Moreover, recent developments in the stability analysis methods of nonlinear systems show that prior conditions that were customarily assumed to be needed for stability are only apparent and can be eliminated. As a result, sufficient conditions that guarantee stability are clearly stated and lead to similarly clear proofs of stability. As many real-world applications show, once robust stability of the adaptive systems can be guaranteed, the added value of using Add-On Adaptive Control along with classical control design techniques pushes the desired performance beyond any previous limits.
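
    As a hedged, textbook-style illustration of direct adaptive control of the kind discussed (not Barkana's formulation), a first-order plant can be made to track a reference model with gains adapted from the output error; all parameter values below are assumptions.

```python
# Direct model-reference adaptation on a first-order plant: u = Ke*ey + Km*ym + Ku*um,
# with the gains driven by the output tracking error (integral adaptation law).
import numpy as np

dt, T = 0.001, 10.0
a_p, b_p = -1.0, 2.0                     # "unknown" plant:  x' = a_p*x + b_p*u
a_m, b_m = -3.0, 3.0                     # reference model: ym' = a_m*ym + b_m*um
gamma = np.array([50.0, 10.0, 10.0])     # adaptation rates (tuning assumption)

x = ym = 0.0
K = np.zeros(3)                          # adaptive gains [Ke, Km, Ku]
for k in range(int(T / dt)):
    um = 1.0 if (k * dt) % 4 < 2 else -1.0       # square-wave command
    ey = ym - x                                   # output tracking error
    r = np.array([ey, ym, um])
    u = K @ r
    K += gamma * ey * r * dt                      # adapt the gains
    x += (a_p * x + b_p * u) * dt                 # explicit Euler steps
    ym += (a_m * ym + b_m * um) * dt
print("Final tracking error:", round(ym - x, 4))
```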

  11. Finite-fault source inversion using teleseismic P waves: Simple parameterization and rapid analysis

    USGS Publications Warehouse

    Mendoza, C.; Hartzell, S.

    2013-01-01

    We examine the ability of teleseismic P waves to provide a timely image of the rupture history for large earthquakes using a simple, 2D finite‐fault source parameterization. We analyze the broadband displacement waveforms recorded for the 2010 Mw∼7 Darfield (New Zealand) and El Mayor‐Cucapah (Baja California) earthquakes using a single planar fault with a fixed rake. Both of these earthquakes were observed to have complicated fault geometries following detailed source studies conducted by other investigators using various data types. Our kinematic, finite‐fault analysis of the events yields rupture models that similarly identify the principal areas of large coseismic slip along the fault. The results also indicate that the amount of stabilization required to spatially smooth the slip across the fault and minimize the seismic moment is related to the amplitudes of the observed P waveforms and can be estimated from the absolute values of the elements of the coefficient matrix. This empirical relationship persists for earthquakes of different magnitudes and is consistent with the stabilization constraint obtained from the L‐curve in Tikhonov regularization. We use the relation to estimate the smoothing parameters for the 2011 Mw 7.1 East Turkey, 2012 Mw 8.6 Northern Sumatra, and 2011 Mw 9.0 Tohoku, Japan, earthquakes and invert the teleseismic P waves in a single step to recover timely, preliminary slip models that identify the principal source features observed in finite‐fault solutions obtained by the U.S. Geological Survey National Earthquake Information Center (USGS/NEIC) from the analysis of body‐ and surface‐wave data. These results indicate that smoothing constraints can be estimated a priori to derive a preliminary, first‐order image of the coseismic slip using teleseismic records.
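
    A hedged sketch of the underlying regularized inversion, min ||Gm - d||^2 + lambda^2 ||Lm||^2, with the smoothing weight scaled from the mean absolute value of the coefficient matrix in the spirit of the abstract; the kernels, data, and scaling factor are all invented.

```python
# Tikhonov-regularized least squares for a 1-D slip distribution, solved by
# stacking a first-difference smoothing operator under the data equations.
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_patch = 120, 20
G = rng.standard_normal((n_obs, n_patch))                        # stand-in kernels
m_true = np.exp(-0.5 * ((np.arange(n_patch) - 8) / 3.0) ** 2)    # smooth slip pulse
d = G @ m_true + 0.05 * rng.standard_normal(n_obs)

L = np.eye(n_patch)[:-1] - np.eye(n_patch, k=1)[:-1]   # penalizes rough slip
lam = np.abs(G).mean() * 5.0                            # heuristic scaling, assumed

A = np.vstack([G, lam * L])
b = np.concatenate([d, np.zeros(L.shape[0])])
m_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print("Recovered peak slip index:", int(np.argmax(m_est)))
```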

  12. In Vitro Cell Death Discrimination and Screening Method by Simple and Cost-Effective Viability Analysis.

    PubMed

    Helm, Katharina; Beyreis, Marlena; Mayr, Christian; Ritter, Markus; Jakab, Martin; Kiesslich, Tobias; Plaetzer, Kristjan

    2017-01-01

    For in vitro cytotoxicity testing, discrimination of apoptosis and necrosis represents valuable information. Viability analysis performed at two different time points post treatment could serve such a purpose because the dynamics of the metabolic activity of apoptotic and necrotic cells differ: cellular metabolism declines more rapidly during necrosis, whereas it is maintained during the entire execution phase of apoptosis. This study describes a straightforward approach to distinguish apoptosis and necrosis. A431 human epidermoid carcinoma cells were treated with different concentrations/doses of actinomycin D (Act-D), 4,5,6,7-tetrabromo-2-azabenzimidazole (TBB), Ro 31-8220, H2O2 and photodynamic treatment (PDT). The resazurin viability signal was recorded at 2 and 24 hrs post treatment. Apoptosis and necrosis were verified by measuring caspase 3/7 activity and membrane integrity. Calculation of the difference between the 2 and 24 hrs resazurin signals yields the following information: a positive difference indicates apoptosis (i.e., high metabolic activity at early time points and a low signal at 24 hrs post treatment), while an early reduction of the viability signal indicates necrosis. For all treatments, this dose-dependent sequence of cellular responses could be confirmed by independent assays. Simple and cost-effective viability analysis provides reliable information about the dose ranges of a cytotoxic agent where apoptosis or necrosis occurs. This may serve as a starting point for further in-depth characterisation of cytotoxic treatments. © 2017 The Author(s). Published by S. Karger AG, Basel.

  13. The beauty of simple adaptive control and new developments in nonlinear systems stability analysis

    NASA Astrophysics Data System (ADS)

    Barkana, Itzhak

    2014-12-01

    Although various adaptive control techniques have been around for a long time, and in spite of successful proofs of stability and even successful demonstrations of performance, the eventual use of adaptive control methodologies in practical real-world systems has met rather strong resistance from practitioners and has remained limited. Apparently, it is difficult to guarantee or even understand the conditions that can guarantee stable operation of adaptive control systems under realistic operational environments. Besides, it is difficult to measure the robustness of adaptive control system stability and to compare it with the common and widely used measures of phase margin and gain margin utilized by present, mainly LTI, controllers. Furthermore, customary stability analysis methods seem to imply that the mere stability of adaptive systems may be adversely affected by any tiny deviation from the rather idealistic and presumably required stability conditions. This paper first revisits the fundamental qualities of customary direct adaptive control methodologies, in particular the classical Model Reference Adaptive Control, and shows that some of their basic drawbacks have been addressed and eliminated within the so-called Simple Adaptive Control methodology. Moreover, recent developments in the stability analysis methods of nonlinear systems show that prior conditions that were customarily assumed to be needed for stability are only apparent and can be eliminated. As a result, sufficient conditions that guarantee stability are clearly stated and lead to similarly clear proofs of stability. As many real-world applications show, once robust stability of the adaptive systems can be guaranteed, the added value of using Add-On Adaptive Control along with classical control design techniques pushes the desired performance beyond any previous limits.

  14. Aerosol rebreathing method for assessment of airway abnormalities: theoretical analysis and validation.

    PubMed

    Kim, C S; Brown, L K; Lewars, G G; Sackner, M A

    1983-05-01

    An aerosol rebreathing method which determines total aerosol deposition in the lung by rebreathing a non-radioactive inert aerosol was investigated theoretically for its performance characteristics. The method was then validated experimentally by examining the system response to various operating parameters, its reproducibility and its convenience in clinical use. It was found from the theoretical analysis that optimum performance would be achieved by breathing an aerosol of particles 1 micrometer in diameter with a 500-cm³ tidal volume at a breathing rate of 30 breaths/min. With these optimum parameters, experimental results of 10 normals and 10 patients with obstructive airway disease revealed an excellent measurement reproducibility within subjects (± 10% from the means). There was a wide separation between the two groups in terms of the number of rebreathing breaths to reach 90% aerosol deposition (N90) (mean ± S.E. = 10.8 ± 1.6 for normals vs. 3.9 ± 1.1 for patients) and the cumulative percentage of aerosol deposition at the fourth breath (AD4) (mean ± S.E. = 68 ± 4.4% for normals vs. 90 ± 3.5% for patients).

  15. Combined Theoretical and Experimental Analysis of Processes Determining Cathode Performance in Solid Oxide Fuel Cells

    SciTech Connect

    Kukla, Maija M.; Kotomin, Eugene Alexej; Merkle, R.; Mastrikov, Yuri; Maier, J.

    2013-02-11

    Solid oxide fuel cells (SOFC) have been under intensive investigation since the 1980s, as these devices open the way for the ecologically clean direct conversion of chemical energy into electricity, avoiding the Carnot-cycle efficiency limitation of thermochemical conversion. However, the practical development of SOFC faces a number of unresolved fundamental problems, in particular concerning the kinetics of the electrode reactions, especially the oxygen reduction reaction. We review recent experimental and theoretical achievements in the current understanding of cathode performance by exploring and comparing mainly three materials: (La,Sr)MnO3 (LSM), (La,Sr)(Co,Fe)O3 (LSCF) and (Ba,Sr)(Co,Fe)O3 (BSCF). Special attention is paid to a critical evaluation of the advantages and disadvantages of BSCF, which shows the best cathode kinetics known so far for oxides. We demonstrate that it is the combined experimental and theoretical analysis of all major elementary steps of the oxygen reduction reaction that allows us to predict the rate-determining steps for a given material under specific operational conditions and thus to control and improve SOFC performance.

  16. Combined theoretical and experimental analysis of processes determining cathode performance in solid oxide fuel cells.

    PubMed

    Kuklja, M M; Kotomin, E A; Merkle, R; Mastrikov, Yu A; Maier, J

    2013-04-21

    Solid oxide fuel cells (SOFC) have been under intensive investigation since the 1980s, as these devices open the way for the ecologically clean direct conversion of chemical energy into electricity, avoiding the Carnot-cycle efficiency limitation of thermochemical conversion. However, the practical development of SOFC faces a number of unresolved fundamental problems, in particular concerning the kinetics of the electrode reactions, especially the oxygen reduction reaction. We review recent experimental and theoretical achievements in the current understanding of cathode performance by exploring and comparing mainly three materials: (La,Sr)MnO3 (LSM), (La,Sr)(Co,Fe)O3 (LSCF) and (Ba,Sr)(Co,Fe)O3 (BSCF). Special attention is paid to a critical evaluation of the advantages and disadvantages of BSCF, which shows the best cathode kinetics known so far for oxides. We demonstrate that it is the combined experimental and theoretical analysis of all major elementary steps of the oxygen reduction reaction that allows us to predict the rate-determining steps for a given material under specific operational conditions and thus to control and improve SOFC performance.

  17. Recent theoretical advances in analysis of AIRS/AMSU sounding data

    NASA Astrophysics Data System (ADS)

    Susskind, Joel

    2007-04-01

    The AIRS Science Team Version 5.0 retrieval algorithm will become operational at the Goddard DAAC in early 2007 in the near real-time analysis of AIRS/AMSU sounding data. This algorithm contains many significant theoretical advances over the AIRS Science Team Version 4.0 retrieval algorithm used previously. Three very significant developments are: 1) the development and implementation of a very accurate Radiative Transfer Algorithm (RTA), which allows for accurate treatment of non-Local Thermodynamic Equilibrium (non-LTE) effects on shortwave sounding channels; 2) the development of methodology to obtain very accurate case-by-case product error estimates, which are in turn used for quality control; and 3) the development of an accurate AIRS-only cloud clearing and retrieval system. Taken together, these theoretical improvements enabled a new methodology to be developed which further improves soundings in partially cloudy conditions, without the need for microwave observations in the cloud clearing step as has been done previously. In this methodology, longwave CO2 channel observations in the spectral region 700 cm^-1 to 750 cm^-1 are used exclusively for cloud clearing purposes, while shortwave CO2 channels in the spectral region 2195 cm^-1 to 2395 cm^-1 are used for temperature sounding purposes. The new methodology is described briefly and results are shown, including comparison with those using AIRS Version 4, as well as a forecast impact experiment assimilating AIRS Version 5.0 retrieval products in the Goddard GEOS-5 Data Assimilation System.

  18. Predominant information quality scheme for the essential amino acids: an information-theoretical analysis.

    PubMed

    Esquivel, Rodolfo O; Molina-Espíritu, Moyocoyani; López-Rosa, Sheila; Soriano-Correa, Catalina; Barrientos-Salcedo, Carolina; Kohout, Miroslav; Dehesa, Jesús S

    2015-08-24

    In this work we undertake a pioneering information-theoretical analysis of 18 selected amino acids extracted from a natural protein, bacteriorhodopsin (1C3W). The conformational structures of each amino acid are analyzed by use of various quantum chemistry methodologies at high levels of theory: HF, M062X and CISD(Full). The Shannon entropy, Fisher information and disequilibrium are determined to grasp the spatial spreading features of delocalizability, order and uniformity of the optimized structures. These three entropic measures uniquely characterize all amino acids through a predominant information-theoretic quality scheme (PIQS), which gathers all chemical families by means of three major spreading features: delocalization, narrowness and uniformity. This scheme recognizes four major chemical families: aliphatic (delocalized), aromatic (delocalized), electro-attractive (narrowed) and tiny (uniform). All chemical families recognized by the existing energy-based classifications are embraced by this entropic scheme. Finally, novel chemical patterns are shown in the information planes associated with the PIQS entropic measures.
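
    For a one-dimensional, normalized density the three measures reduce to simple integrals; the sketch below (not the paper's quantum-chemical workflow) evaluates them numerically for a Gaussian and checks against the analytic values.

```python
# Shannon entropy S, Fisher information I, and disequilibrium D = integral of
# rho^2 for a normalized density rho(x) on a grid; a Gaussian is the example.
import numpy as np

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
sigma = 1.5
rho = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

S = -np.sum(rho * np.log(rho)) * dx            # Shannon entropy
grad = np.gradient(rho, dx)
I = np.sum(grad**2 / rho) * dx                 # Fisher information
D = np.sum(rho**2) * dx                        # disequilibrium

# Analytic values for a Gaussian: S = 0.5*ln(2*pi*e*sigma^2), I = 1/sigma^2,
# D = 1/(2*sigma*sqrt(pi)).
print(S, 0.5 * np.log(2 * np.pi * np.e * sigma**2))
print(I, 1 / sigma**2)
print(D, 1 / (2 * sigma * np.sqrt(np.pi)))
```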

  19. Theoretical analysis and experimental study on the PV-IESAHP system

    NASA Astrophysics Data System (ADS)

    Wu, Xingying

    2017-05-01

    Solar photovoltaic/thermal (PV/T) integration and heat pumps are important techniques for energy conservation and carbon reduction. A practical design for an integrated PV/T and heat pump system is presented; the integrated system is called the Photovoltaic-Indirect Expansion Solar Assisted Heat Pump (PV-IESAHP) system, which uses the heat supplied by the PV/T collector as an evaporating heat source and can achieve a high coefficient of performance (COP). It is essential to master the complete design approach and thermodynamic analysis method for the integration of PV/T and heat pumps. In this study a theoretical performance-matching model of the PV-IESAHP system is established, and a series of experiments were conducted to study the performance of the heat-pipe type PV/T experimental system and the heat pump experimental system. Moreover, energy and exergy analyses were used to investigate the performance of the systems. The results show that the COP of the PV-IESAHP system is about 4.0 and the exergy efficiency about 0.05, which not only implies that the PV-IESAHP system has a significant energy-saving potential, but also provides a theoretical basis for the application and parameter design of the PV-IESAHP system in the future.

  20. A game theoretical analysis of the mating sign behavior in the honey bee.

    PubMed

    Wilhelm, M; Chhetri, M; Rychtář, J; Rueppell, O

    2011-03-01

    Queens of the honey bee, Apis mellifera (L.), exhibit extreme polyandry, mating with up to 45 different males (drones). This increases the genetic diversity of their colonies, and consequently their fitness. After copulation, drones leave a mating sign in the genital opening of the queen, which has been shown to promote additional mating of the queen. On one hand, this signing behavior is beneficial for the drone because it increases the genetic diversity of the resulting colony that is to perpetuate his genes. On the other hand, it decreases the proportion of the drone's personal offspring among colony members, which reduces drone fitness. We analyze the adaptiveness and evolutionary stability of this drone behavior with a game-theoretical model. We find that, theoretically, both the strategy of leaving a mating sign and the strategy of not leaving a mating sign can be evolutionarily stable, depending on natural parameters. However, the signing strategy is not favored in most scenarios, including the cases that are biologically plausible with reference to empirical data. We conclude that leaving a sign is not in the interest of the drone unless it serves biological functions other than increasing subsequent queen mating chances. Nevertheless, our analysis can also explain the prevalence of such behavior in honey bee drones by a very low evolutionary pressure for an invasion of the nonsigning strategy.
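
    A toy illustration of the evolutionary stability check (with an invented payoff structure, not the authors' model): a strategy is an ESS if a resident population playing it cannot be invaded by a rare mutant playing the alternative.

```python
# Fitness of a focal drone trades off colony genetic diversity (grows with the
# number of matings) against the drone's paternity share (shrinks with it).
# All parameter values are assumptions made for illustration.
def drone_fitness(focal_signs, resident_signs, base=10, resident_extra=5, b=0.02):
    """Expected fitness of a focal drone given its own and the residents' behavior."""
    n = base + (resident_extra if resident_signs else 0) + (1 if focal_signs else 0)
    paternity = 1.0 / n                    # focal drone's share of colony offspring
    colony_quality = 1.0 + b * n           # diversity benefit grows with matings
    return paternity * colony_quality

for resident_signs in (True, False):
    w_resident = drone_fitness(resident_signs, resident_signs)
    w_mutant = drone_fitness(not resident_signs, resident_signs)
    label = "sign" if resident_signs else "no-sign"
    print(f"resident={label}: resident fitness={w_resident:.4f}, "
          f"mutant fitness={w_mutant:.4f}, ESS={w_resident >= w_mutant}")
```

    With these assumed parameters the non-signing strategy is the ESS, broadly in line with the abstract's conclusion that signing is not favored in most scenarios.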

  1. Theoretical Analysis of Triple Liquid Stub Tuner Impedance Matching for ICRH on Tokamaks

    NASA Astrophysics Data System (ADS)

    Du, Dan; Gong, Xueyu; Yin, Lan; Xiang, Dong; Li, Jingchun

    2015-12-01

    Impedance matching is crucial for continuous wave operation of ion cyclotron resonance heating (ICRH) antennae with high power injection into plasmas. A sudden increase in the reflected radio frequency power due to an impedance mismatch of the ICRH system is an issue which must be solved for present-day and future fusion reactors. This paper presents a method for the theoretical analysis of ICRH system impedance matching with a triple liquid stub tuner under plasma operational conditions. The relationship of the antenna input impedance with the plasma parameters and operating frequency is first obtained using a global solution. Then, the relations of the plasma parameters and operating frequency with the matching liquid heights are obtained indirectly through numerical simulation according to transmission line theory and the matching conditions. The method provides an alternative theoretical approach, rather than measurements, to study triple liquid stub tuner impedance matching for ICRH, which may be beneficial for the design of ICRH systems on tokamaks. Supported by the National Magnetic Confinement Fusion Science Program of China (Nos. 2014GB108002, 2013GB107001), the National Natural Science Foundation of China (Nos. 11205086, 11205053, 11375085, and 11405082), the Construct Program of Fusion and Plasma Physics Innovation Team in Hunan Province, China (No. NHXTD03), and the Natural Science Foundation of Hunan Province, China (No. 2015JJ4044).

  2. A framework for biodynamic feedthrough analysis--part I: theoretical foundations.

    PubMed

    Venrooij, Joost; van Paassen, Marinus M; Mulder, Mark; Abbink, David A; Mulder, Max; van der Helm, Frans C T; Bulthoff, Heinrich H

    2014-09-01

    Biodynamic feedthrough (BDFT) is a complex phenomenon, which has been studied for several decades. However, there is little consensus on how to approach the BDFT problem in terms of definitions, nomenclature, and mathematical descriptions. In this paper, a framework for biodynamic feedthrough analysis is presented. The goal of this framework is two-fold. First, it provides some common ground between the seemingly large range of different approaches existing in the BDFT literature. Second, the framework itself allows for gaining new insights into BDFT phenomena. It will be shown how relevant signals can be obtained from measurement, how different BDFT dynamics can be derived from them, and how these different dynamics are related. Using the framework, BDFT can be dissected into several dynamical relationships, each relevant in understanding BDFT phenomena in more detail. The presentation of the BDFT framework is divided into two parts. This paper, Part I, addresses the theoretical foundations of the framework. Part II, which is also published in this issue, addresses the validation of the framework. The work is presented in two separate papers to allow for a detailed discussion of both the framework's theoretical background and its validation.

  3. Limitations of the spike-triggered averaging for estimating motor unit twitch force: a theoretical analysis.

    PubMed

    Negro, Francesco; Yavuz, Ş Utku; Yavuz, Utku Ş; Farina, Dario

    2014-01-01

    Contractile properties of human motor units provide information on the force capacity and fatigability of muscles. The spike-triggered averaging technique (STA) is a conventional method used to estimate the twitch waveform of single motor units in vivo by averaging the joint force signal. Several limitations of this technique have been previously discussed in an empirical way, using simulated and experimental data. In this study, we provide a theoretical analysis of this technique in the frequency domain and describe its intrinsic limitations. By analyzing the analytical expression of STA, first we show that a certain degree of correlation between the motor unit activities prevents an accurate estimation of the twitch force, even from relatively long recordings. Second, we show that the quality of the twitch estimates by STA is highly related to the relative variability of the inter-spike intervals of motor unit action potentials. Interestingly, if this variability is extremely high, correct estimates could be obtained even for high discharge rates. However, for physiological inter-spike interval variability and discharge rate, the technique performs with relatively low estimation accuracy and high estimation variance. Finally, we show that the selection of the triggers that are most distant from the previous and next, which is often suggested, is not an effective way for improving STA estimates and in some cases can even be detrimental. These results show the intrinsic limitations of the STA technique and provide a theoretical framework for the design of new methods for the measurement of motor unit force twitch.
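
    The basic STA computation is sketched below on simulated data (not the authors' analysis code); with overlapping twitches at a physiological discharge rate the averaged estimate exceeds the true twitch peak, illustrating the kind of bias discussed above.

```python
# Spike-triggered average of a force signal around each discharge of one unit.
# Twitch shape, discharge statistics, and noise level are invented.
import numpy as np

fs = 1000                                               # Hz
t = np.arange(0, 60, 1 / fs)
twitch_t = np.arange(0, 0.3, 1 / fs)
twitch = twitch_t * np.exp(-twitch_t / 0.05)            # simple twitch waveform

rng = np.random.default_rng(3)
spikes = np.cumsum(rng.normal(0.1, 0.02, size=500))     # ~10 Hz jittered discharges
spikes = spikes[spikes < t[-1] - 0.3]

force = np.zeros_like(t)
for s in spikes:                                        # superimpose twitches
    i = int(s * fs)
    force[i:i + len(twitch)] += twitch
force += 0.02 * rng.standard_normal(len(force))         # other units + noise

win = len(twitch)
sta = np.mean([force[int(s * fs): int(s * fs) + win] for s in spikes], axis=0)
print("Estimated twitch peak:", round(sta.max(), 4),
      "| true twitch peak:", round(twitch.max(), 4))
```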

  4. Improved Analysis of Earth System Models and Observations using Simple Climate Models

    NASA Astrophysics Data System (ADS)

    Nadiga, B. T.; Urban, N. M.

    2016-12-01

    Earth system models (ESMs) are the most comprehensive tools we have to study climate change and develop climate projections. However, the computational infrastructure required and the cost incurred in running such ESMs preclude the direct use of such models in conjunction with a wide variety of tools that can further our understanding of climate. Here we are referring to tools that range from dynamical systems tools, which give insight into underlying flow structure and topology, through various applied mathematical and statistical techniques central to quantifying stability, sensitivity, uncertainty and predictability, to machine learning tools that are now being rapidly developed or improved. Our approach to facilitate the use of such models is to analyze the output of ESM experiments (cf. CMIP) using a range of simpler models that consider integral balances of important quantities such as mass and/or energy, in a Bayesian framework. We highlight the use of this approach in the context of the uptake of heat by the world oceans during the ongoing global warming. Indeed, since in excess of 90% of the anomalous radiative forcing due to greenhouse gas emissions is sequestered in the world oceans, the nature of ocean heat uptake crucially determines the surface warming that is realized (cf. climate sensitivity). Nevertheless, ESMs themselves are never run long enough to directly assess climate sensitivity. We therefore consider a range of models based on integral balances, balances that have to be realized in all first-principles-based models of the climate system, including the most detailed state-of-the-art climate simulations. The models range from simple energy balance models to those that consider dynamically important ocean processes such as the conveyor-belt circulation (Meridional Overturning Circulation, MOC), North Atlantic Deep Water (NADW) formation, the Antarctic Circumpolar Current (ACC) and eddy mixing. Results from Bayesian analysis of such models using
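
    A generic two-layer energy-balance model of the kind alluded to (surface and deep-ocean boxes exchanging heat under a prescribed forcing) can be integrated in a few lines; the parameter values below are assumptions for illustration, not results from the abstract.

```python
# Two-layer energy balance: surface temperature Ts and deep-ocean temperature
# Td respond to a radiative forcing F with feedback lam and exchange gamma.
import numpy as np

C_s, C_d = 8.0, 100.0        # heat capacities (W yr m^-2 K^-1), assumed
lam, gamma = 1.3, 0.7        # feedback and ocean heat-exchange coefficients (W m^-2 K^-1)
years = np.arange(251)
F = 3.7 * np.log2(np.minimum(1 + years / 70.0, 2.0))   # idealized CO2 ramp, capped at 2x

Ts = Td = 0.0
history = []
for f in F:
    dTs = (f - lam * Ts - gamma * (Ts - Td)) / C_s
    dTd = gamma * (Ts - Td) / C_d
    Ts, Td = Ts + dTs, Td + dTd              # 1-year explicit Euler step
    history.append((Ts, Td))
print("Surface warming after 250 yr: %.2f K (equilibrium estimate %.2f K)"
      % (history[-1][0], F[-1] / lam))
```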

  5. HackAttack: Game-Theoretic Analysis of Realistic Cyber Conflicts

    SciTech Connect

    Ferragut, Erik M; Brady, Andrew C; Brady, Ethan J; Ferragut, Jacob M; Ferragut, Nathan M; Wildgruber, Max C

    2016-01-01

    Game theory is appropriate for studying cyber conflict because it allows for an intelligent and goal-driven adversary. Applications of game theory have led to a number of results regarding optimal attack and defense strategies. However, the overwhelming majority of applications explore overly simplistic games, often ones in which each participant's actions are visible to every other participant. These simplifications strip away the fundamental properties of real cyber conflicts: probabilistic alerting, hidden actions, and unknown opponent capabilities. In this paper, we demonstrate that it is possible to analyze a more realistic game, one in which different resources have different weaknesses, players have different exploits, and moves occur in secrecy, yet can be detected. Certainly, more advanced and complex games are possible, but the game presented here is more realistic than any other game we know of in the scientific literature. While optimal strategies can be found for simpler games using calculus, case-by-case analysis, or, for stochastic games, Q-learning, our more complex game is more naturally analyzed using the same methods used to study other complex games, such as checkers and chess. We define a simple evaluation function and employ multi-step searches to create strategies. We show that such scenarios can be analyzed, and find that in cases of extreme uncertainty it is often better to ignore one's opponent's possible moves. Furthermore, we show that a simple evaluation function in a complex game can lead to interesting and nuanced strategies.
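
    A toy sketch of the search approach (the game, moves, and evaluation function below are invented, not the paper's): depth-limited alternating search in which the attacker maximizes and the defender minimizes a simple score.

```python
# Depth-limited search over a tiny attacker/defender game with a hand-made
# evaluation function; state = (foothold, patches, resources_compromised).
MOVES = {
    "attacker": [("exploit", (1, 0, 1)), ("recon", (1, 0, 0)), ("wait", (0, 0, 0))],
    "defender": [("patch", (0, 1, 0)), ("monitor", (0, 0, 0))],
}

def evaluate(state):
    foothold, patches, compromised = state
    return 3 * compromised + foothold - 2 * patches   # from the attacker's viewpoint

def search(state, player, depth):
    if depth == 0:
        return evaluate(state), None
    best = None
    for name, delta in MOVES[player]:
        nxt = tuple(max(0, s + d) for s, d in zip(state, delta))
        other = "defender" if player == "attacker" else "attacker"
        val, _ = search(nxt, other, depth - 1)
        better = (best is None
                  or (player == "attacker" and val > best[0])
                  or (player == "defender" and val < best[0]))
        if better:
            best = (val, name)
    return best

print(search((0, 0, 0), "attacker", depth=4))
```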

  6. Theoretical and experimental examination of SFG polarization analysis at acetonitrile-water solution surfaces.

    PubMed

    Saito, Kengo; Peng, Qiling; Qiao, Lin; Wang, Lin; Joutsuka, Tatsuya; Ishiyama, Tatsuya; Ye, Shen; Morita, Akihiro

    2017-03-16

    Sum frequency generation (SFG) spectroscopy is widely used to observe molecular orientation at interfaces through a combination of various types of polarization. The present work thoroughly examines the relation between the polarization dependence of SFG signals and the molecular orientation by comparing SFG measurements and molecular dynamics (MD) simulations of acetonitrile/water solutions. The present SFG experiments and MD simulations yield quite consistent results for the ratios of the χ(2) elements, supporting the reliability of both means. However, the subsequent polarization analysis tends to derive more upright tilt angles of acetonitrile than the direct MD calculations. The reasons for this discrepancy are examined in terms of three issues: (i) the anisotropy of the Raman tensor, (ii) cross-correlation, and (iii) the orientational distribution. The analysis revealed that issues (i) and (iii) are the main causes of errors in the conventional polarization analysis of SFG spectra. In methyl CH stretching, the anisotropy of the Raman tensor cannot be estimated from the simple bond polarizability model. The neglect of the orientational distribution is shown to systematically underestimate the tilt angle of acetonitrile. Further refined use of polarization analysis in collaboration with MD simulations is therefore proposed.

  7. Theoretical analysis and simulation study of a high-resolution zoom-in PET system.

    PubMed

    Zhou, Jian; Qi, Jinyi

    2009-09-07

    We study a novel PET system that integrates a high-resolution zoom-in detector into an existing PET scanner to provide higher resolution and sensitivity in a target region. In contrast to a full-ring PET insert, the proposed system is designed to focus on the target region close to the face of the high-resolution detector. The proposed design is easier to implement than a full-ring insert and provides flexibility for adaptive PET imaging. We developed a maximum a posteriori (MAP) image reconstruction method for the proposed system. Theoretical analysis of the resolution and noise properties of the MAP reconstruction is performed. We show that the proposed PET system offers better performance in terms of resolution-noise tradeoff and lesion detectability. The results are validated using computer simulations.

  8. Theoretical analysis of AC electric field transmission into biological tissue through frozen saline for electroporation.

    PubMed

    Xiao, Chunyan; Rubinsky, Boris

    2014-12-01

    An analytical model was used to explore the feasibility of sinusoidal electric field transmission across a frozen saline layer into biological tissue. The study is relevant to electroporation and permeabilization of the cell membrane by electric fields. The concept was analyzed for frequencies in the range of conventional electroporation frequencies and electric field intensity. Theoretical analysis for a variety of tissues show that the transmission of electroporation type electric fields through a layer of frozen saline into tissue is feasible and the behavior of this composite system depends on tissue type, frozen domain temperature, and frequency. Freezing could become a valuable method for adherence of electroporation electrodes to moving tissue surfaces, such as the heart in the treatment of atrial fibrillation or blood vessels for the treatment of restenosis. © 2014 Wiley Periodicals, Inc.

  9. Theoretical Analysis of Heat Pump Cycle Characteristics with Pure Refrigerants and Binary Refrigerant Mixtures

    NASA Astrophysics Data System (ADS)

    Kagawa, Noboru; Uematsu, Masahiko; Watanabe, Koichi

    In recent years there has been increasing interest in the use of nonazeotropic binary mixtures to improve performance in heat pump systems and to restrict the consumption of chlorofluorocarbon (CFC) refrigerants, as internationally agreed upon in the Montreal Protocol. However, the available knowledge of the thermophysical properties of mixtures is very limited, particularly with respect to quantitative information. In order to systematize the cycle performance of the Refrigerant 12 (CCl2F2) + Refrigerant 22 (CHClF2) and Refrigerant 22 + Refrigerant 114 (CClF2-CClF2) systems, which are technically important halogenated refrigerant mixtures, the heat pump cycle using these mixtures was studied theoretically. It became clear that the maximum coefficients of performance with various pure refrigerants and binary refrigerant mixtures were obtained at a reduced condensing temperature of 0.9 when the same temperature difference between the condensing and evaporating temperatures was chosen.

  10. Pipette aspiration of hyperelastic compliant materials: Theoretical analysis, simulations and experiments

    NASA Astrophysics Data System (ADS)

    Zhang, Man-Gong; Cao, Yan-Ping; Li, Guo-Yang; Feng, Xi-Qiao

    2014-08-01

    This paper explores the pipette aspiration test of hyperelastic compliant materials. Explicit expressions of the relationship between the imposed pressure and the aspiration length are developed, which serve as fundamental relations to deduce the material parameters from experimental responses. Four commonly used hyperelastic constitutive models, namely the neo-Hookean, Mooney-Rivlin, Fung, and Arruda-Boyce models, are investigated. Through dimensional analysis and nonlinear finite element simulations, we establish the relations between the experimental responses and the constitutive parameters of hyperelastic materials in explicit form, upon which inverse approaches for determining the hyperelastic properties of materials are developed. The reliability of the results given by the proposed methods has been verified both theoretically and numerically. Experiments have been carried out on an elastomer (polydimethylsiloxane, 1:50) and porcine liver to validate the applicability of the inverse approaches in practical measurements.
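
    For readers unfamiliar with the constitutive models listed above, the following is a minimal Python sketch of the neo-Hookean and Mooney-Rivlin strain energy densities evaluated for an incompressible uniaxial stretch; the material constants are illustrative placeholders, not values fitted in the study.

      import numpy as np

      def invariants_uniaxial(lam):
          # First and second strain invariants for an incompressible uniaxial stretch lam
          I1 = lam**2 + 2.0 / lam
          I2 = 2.0 * lam + 1.0 / lam**2
          return I1, I2

      def neo_hookean(lam, C1=0.1):              # C1 in MPa (assumed placeholder)
          I1, _ = invariants_uniaxial(lam)
          return C1 * (I1 - 3.0)

      def mooney_rivlin(lam, C1=0.08, C2=0.02):  # constants are assumed placeholders
          I1, I2 = invariants_uniaxial(lam)
          return C1 * (I1 - 3.0) + C2 * (I2 - 3.0)

      stretch = np.linspace(1.0, 1.5, 6)
      print(neo_hookean(stretch))
      print(mooney_rivlin(stretch))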

  11. Systems theoretic analysis of the central dogma of molecular biology: some recent results.

    PubMed

    Gao, Rui; Yu, Juanyi; Zhang, Mingjun; Tarn, Tzyh-Jong; Li, Jr-Shin

    2010-03-01

    This paper extends our earlier study on a mathematical formulation of the central dogma of molecular biology and focuses on recent insights obtained by employing advanced systems theoretic analysis. The goal of this paper is to mathematically represent and interpret the genetic information flow at the molecular level, and to explore the fundamental principle of molecular biology at the system level. Specifically, group theory was employed to interpret concepts and properties of gene mutation and to predict the backbone torsion angle along the peptide chain. Finite state machine theory was extensively applied to interpret key concepts and analyze the processes related to DNA hybridization. Using the proposed model, we have transformed the character-based model of molecular biology into a sophisticated mathematical model for calculation and interpretation.

  12. Theoretical analysis of electronic absorption spectra of vitamin B12 models

    NASA Astrophysics Data System (ADS)

    Andruniow, Tadeusz; Kozlowski, Pawel M.; Zgierski, Marek Z.

    2001-10-01

    Time-dependent density-functional theory (TD-DFT) is applied to analyze the electronic absorption spectra of vitamin B12. To accomplish this, two model systems were considered: CN-[CoIII-corrin]-CN (dicyanocobinamide, DCC) and imidazole-[CoIII-corrin]-CN (cyanocobalamin, ImCC). For both models the 30 lowest excited states were calculated together with transition dipole moments. When the results of the TD-DFT calculations were directly compared with experiment, it was found that the theoretical values systematically overestimate the experimental data by approximately 0.5 eV. A uniform adjustment of the calculated transition energies allowed a detailed analysis of the electronic absorption spectra of the vitamin B12 models. All absorption bands in the spectral range 2.0-5.0 eV were readily assigned. In particular, the TD-DFT calculations were able to explain the origin of the shift of the lowest absorption band caused by replacement of the -CN axial ligand by imidazole.

  13. Theoretical analysis of integral neutron transport equation using collision probability method with quadratic flux approach

    SciTech Connect

    Shafii, Mohammad Ali Meidianti, Rahma Wildian, Fitriyani, Dian; Tongkukut, Seni H. J.; Arkundato, Artoto

    2014-09-30

    A theoretical analysis of the integral neutron transport equation using the collision probability (CP) method with a quadratic flux approach has been carried out. In general, the solution of the neutron transport equation with the CP method is performed with a flat flux approach. In this research, the CP method is implemented for a cylindrical nuclear fuel cell whose spatial mesh is treated with a non-flat flux approach, meaning that the neutron flux at different points in the fuel cell is allowed to differ, following a quadratic flux distribution. The result, presented here in the form of the quadratic flux, gives a better representation of the real conditions in the cell calculation and serves as a starting point for computational implementation.

  14. A theoretical basis for the analysis of multiversion software subject to coincident errors

    NASA Technical Reports Server (NTRS)

    Eckhardt, D. E., Jr.; Lee, L. D.

    1985-01-01

    Fundamental to the development of redundant software techniques (known as fault-tolerant software) is an understanding of the impact of multiple joint occurrences of errors, referred to here as coincident errors. A theoretical basis for the study of redundant software is developed which: (1) provides a probabilistic framework for empirically evaluating the effectiveness of a general multiversion strategy when component versions are subject to coincident errors, and (2) permits an analytical study of the effects of these errors. An intensity function, called the intensity of coincident errors, has a central role in this analysis. This function describes the propensity of programmers to introduce design faults in such a way that software components fail together when executing in the application environment. A condition under which a multiversion system is a better strategy than relying on a single version is given.

  15. Parallel Path Magnet Motor: Development of the Theoretical Model and Analysis of Experimental Results

    NASA Astrophysics Data System (ADS)

    Dirba, I.; Kleperis, J.

    2011-01-01

    Analytical and numerical modelling is performed for the linear actuator of a parallel path magnet motor. In the model based on finite-element analysis, the 3D problem is reduced to a 2D problem, which is sufficiently precise for design purposes and allows modelling the principle of a parallel path motor. The paper also describes a relevant numerical model and gives a comparison with experimental results. The numerical model includes all geometrical and physical characteristics of the motor components. The magnetic flux density and magnetic force are simulated using FEMM 4.2 software. An experimental model has also been developed and verified for the core of a switchable-magnetic-flux linear actuator and motor. The results of experiments are compared with those of theoretical/analytical and numerical modelling.

  16. Disturbed connectivity of EEG functional networks in alcoholism: a graph-theoretic analysis.

    PubMed

    Cao, Rui; Wu, Zheng; Li, Haifang; Xiang, Jie; Chen, Junjie

    2014-01-01

    Generally, an alcoholic's brain shows evident damage. However, in cognitive tasks, the correlation between the topological structural changes of the brain networks and the brain damage is still unclear. Scalp electrodes and synchronization likelihood (SL) were applied to the construction of the EEG functional networks of 28 alcoholics and 28 healthy volunteers. The graph-theoretic analysis showed that in cognitive tasks, compared with the healthy control group, the brain networks of alcoholics had smaller clustering coefficients in the β1 band, shorter characteristic path lengths, increased global efficiency, but similar small-world properties. The abnormal topological structure of the alcoholics may be related to local-function brain damage and the compensation mechanism adopted to complete tasks. This conclusion provides a new perspective for alcohol-related brain damage.
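
    The graph metrics named in this abstract (clustering coefficient, characteristic path length, global efficiency) can be computed with standard tools; the Python sketch below uses networkx on a random stand-in connectivity matrix, since the study's synchronization-likelihood data and threshold are not given here.

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(0)
      n_channels = 19                              # hypothetical EEG montage size
      sl = rng.random((n_channels, n_channels))    # stand-in synchronization likelihood
      sl = (sl + sl.T) / 2.0                       # symmetrise
      np.fill_diagonal(sl, 0.0)

      adjacency = (sl > 0.6).astype(int)           # threshold into a binary graph (assumed cutoff)
      G = nx.from_numpy_array(adjacency)

      clustering = nx.average_clustering(G)        # mean clustering coefficient
      efficiency = nx.global_efficiency(G)         # global efficiency
      giant = G.subgraph(max(nx.connected_components(G), key=len))
      path_length = nx.average_shortest_path_length(giant)   # characteristic path length

      print(clustering, path_length, efficiency)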

  17. Theoretical analysis for the specific heat and thermal parameters of solid C60

    NASA Astrophysics Data System (ADS)

    Soto, J. R.; Calles, A.; Castro, J. J.

    1997-08-01

    We present the results of a theoretical analysis for the thermal parameters and the phonon contribution to the specific heat in solid C60. The phonon contribution to the specific heat is calculated through the solution of the corresponding dynamical matrix for different points in the Brillouin zone, and the construction of the partial and generalized phonon densities of states. The force constants are obtained from a first-principles calculation, using an SCF Hartree-Fock wave function from the Gaussian 92 program. The thermal parameters reported are the effective temperatures and vibrational amplitudes as a function of temperature. Using this model we present a parametrization scheme in order to reproduce the general behaviour of the experimental specific heat for these materials.

  18. [A theoretical analysis of coordination in the field of health care: application to coordinated care systems].

    PubMed

    Sebai, Jihane

    2016-01-01

    Various organizational, functional or structural issues have led to a review of the foundations of the former health care system based on a traditional market segmentation between general practice and hospital medicine, and between health and social sectors and marked by competition between private and public sectors. The current reconfiguration of the health care system has resulted in “new” levers explained by the development of a new organizational reconfiguration of the primary health care model. Coordinated care structures (SSC) have been developed in this context by making coordination the cornerstone of relations between professionals to ensure global, continuous and quality health care. This article highlights the contributions of various theoretical approaches to the understanding of the concept of coordination in the analysis of the current specificity of health care.

  19. Experimental and theoretical analysis of particle entrainment in dry, uniform-unsteady granular flows

    NASA Astrophysics Data System (ADS)

    Larcher, Michele; Fraccarollo, Luigi; Prati, Anna

    2017-04-01

    flow evolving in time, but without any variability in the longitudinal direction, apart from some disturbances localized at the extremities of the flume. Using a high-speed camera and particle tracking algorithms, we measured the time evolution of the flow depth, of the normal-to-bed velocity profile and of the granular concentration, and compared them with the predictions of a simple theory. We solved for the time evolution of the momentum balance in the flow direction, assuming self-similarity of the velocity profile, with the same shape as in a fully developed flow. This was obtained from the algebraic integration of an extended kinetic theory for a dense collisional flow of dissipative spheres. The concentration was assumed, to a first approximation, to be constant throughout the flow depth. Despite the very simple theoretical approach, we found good agreement between the predictions for the time evolution of the velocity profile and of the flow depth and the experimental measurements. The proposed model relies only on measured physical properties of the particles, without any ad hoc calibrated parameter.

  20. Theoretical analysis of tablet hardness prediction using chemoinformetric near-infrared spectroscopy.

    PubMed

    Tanabe, Hideaki; Otsuka, Kuniko; Otsuka, Makoto

    2007-07-01

    In order to clarify the theoretical basis of the variability in the measurement of tablet hardness with compression pressure, NIR spectroscopic methods were used to predict the tablet hardness of the formulations. Tablets (200 mg, 8 mm in diameter) consisting of berberine chloride, lactose, and potato starch were formed at various compression pressures (59, 78, 98, 127, 195 MPa). The hardness and the distribution of micropores were measured. The reflectance NIR spectra of the variously compressed tablets were used as a calibration set to establish a calibration model to predict tablet hardness by principal component regression (PCR) analysis. The distribution of micropores shifted to smaller pore sizes with increasing compression pressure. The total pore volume of the tablets decreased as the compression pressure increased. The hardness increased as the compression pressure increased. The hardness could be predicted using a calibration model consisting of 7 principal components (PCs) obtained by PCR. The relationship between the predicted and the actual hardness values was a straight line with an R² of 0.925. In order to understand the theoretical basis (scientific background) of the calibration models used to evaluate tablet hardness, the standard error of cross validation (SEV) values, the loading vectors of each PC and the regression vector were investigated. The result obtained with the calibration models for hardness suggested that the regression vector might involve physical and chemical factors. In contrast, the porosity could be predicted using a calibration model composed of 2 PCs. The relationship between the predicted and the actual total pore volume was a straight line with R² = 0.801. The regression vector of the total pore volume might be due to physical factors.
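
    Principal component regression of the kind used for the hardness calibration can be sketched as follows; the code is a generic Python/scikit-learn illustration on synthetic spectra, not the study's data or exact preprocessing.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      X = rng.normal(size=(40, 500))   # 40 tablets x 500 NIR wavelengths (synthetic)
      y = rng.normal(size=40)          # tablet hardness values (synthetic)

      # PCA followed by linear regression on the leading components (7 PCs, as in the abstract)
      pcr = make_pipeline(PCA(n_components=7), LinearRegression())
      cv_r2 = cross_val_score(pcr, X, y, cv=5, scoring="r2")

      pcr.fit(X, y)
      predicted_hardness = pcr.predict(X)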

  1. Theoretical analysis of selectivity mechanisms in molecular transport through channels and nanopores

    SciTech Connect

    Agah, Shaghayegh; Pasquali, Matteo; Kolomeisky, Anatoly B.

    2015-01-28

    Selectivity is one of the most fundamental concepts in the natural sciences, and it is also critically important in various technological, industrial, and medical applications. Although there are many experimental methods that allow molecules to be separated, they are frequently expensive and inefficient. Recently, a new method of separation of chemical mixtures based on the utilization of channels and nanopores has been proposed and successfully tested in several systems. However, the mechanisms of selectivity in the molecular transport during translocation are still not well understood. Here, we develop a simple theoretical approach to explain the origin of selectivity in molecular fluxes through channels. Our method utilizes discrete-state stochastic models that take into account all relevant chemical transitions and can be solved analytically. More specifically, we analyze channels with one and two binding sites employed for separating mixtures of two types of molecules. The effects of the symmetry and the strength of the molecular-pore interactions are examined. It is found that for one-site binding channels, the differences in the strength of interactions for the two species drive the separation. At the same time, in more realistic two-site systems, the symmetry of the interaction potential also becomes important. The most efficient separation is predicted when the specific binding site is located near the entrance to the nanopore. In addition, the selectivity is higher for large entrance rates into the channel. It is also found that the molecular transport is more selective for repulsive interactions than for attractive interactions. The physical-chemical origin of the observed phenomena is discussed.

  2. Graph theoretical analysis of resting-state MEG data: Identifying interhemispheric connectivity and the default mode.

    PubMed

    Maldjian, Joseph A; Davenport, Elizabeth M; Whitlow, Christopher T

    2014-08-01

    Interhemispheric connectivity with resting state MEG has been elusive, and demonstration of the default mode network (DMN) yet more challenging. Recent seed-based MEG analyses have shown interhemispheric connectivity using power envelope correlations. The purpose of this study is to compare graph theoretic maps of brain connectivity generated using MEG with and without signal leakage correction to evaluate for the presence of interhemispheric connectivity. Eight minutes of resting state eyes-open MEG data were obtained in 22 normal male subjects enrolled in an IRB-approved study (ages 16-18). Data were processed using an in-house automated MEG processing pipeline and projected into standard (MNI) source space at 7 mm resolution using a scalar beamformer. Mean beta-band amplitude was sampled in 2.5-second epochs from the source space time series. Leakage correction was performed in the time domain of the source space beamformed signal prior to amplitude transformation. Graph theoretic voxel-wise source space correlation connectivity analysis was performed for leakage corrected and uncorrected data. Degree maps were thresholded across subjects for the top 20% of connected nodes to identify hubs. Additional degree maps for sensory, visual, motor, and temporal regions were generated to identify interhemispheric connectivity using laterality indices. Hubs for the uncorrected MEG networks were predominantly symmetric and midline, bearing some resemblance to fMRI networks. These included the cingulate cortex, bilateral inferior frontal lobes, bilateral hippocampal formations and bilateral cerebellar hemispheres. These uncorrected networks, however, demonstrated little to no interhemispheric connectivity using the ROI-based degree maps. Leakage corrected MEG data identified the DMN, with hubs in the posterior cingulate and biparietal areas. These corrected networks demonstrated robust interhemispheric connectivity for the ROI-based degree maps. Graph theoretic analysis of

  3. convISA: A simple, convoluted method for isotopomer spectral analysis of fatty acids and cholesterol.

    PubMed

    Tredwell, Gregory D; Keun, Hector C

    2015-11-01

    Isotopomer spectral analysis (ISA) is a simple approach for modelling the cellular synthesis of fatty acids and cholesterol in a stable isotope labelling experiment. In the simplest model, fatty acid biosynthesis is described by two key parameters: the fractional enrichment of acetyl-CoA from the labelled substrate, D, and the fractional de novo synthesis of the fatty acid during the exposure to the labelled substrate, g(t). The model can also be readily extended to include synthesis via elongation of unlabelled shorter fatty acids. This modelling strategy is less complex than metabolic flux analysis and only requires the measurement of the mass isotopologues of a single metabolite. However, software tools to perform these calculations are not freely available. We have developed an algorithm (convISA), implemented in MATLAB™, which employs the convolution (Cauchy product) of mass isotopologue distributions (MIDs) for ISA of fatty acids and cholesterol. In our method, the MIDs of each molecule are constructed as a single entity rather than deriving equations for individual isotopologues. The flexibility of this method allows the model to be applied to raw data as well as to data that has been corrected for natural isotope abundance. To test the algorithm, convISA was applied to 238 MIDs of methyl palmitate available from the literature, for which ISA parameters had been calculated via other methods. A very high correlation was observed between estimates of the D and g(t) parameters from convISA with both published values, and estimates generated by our own metabolic flux analysis using a simplified stoichiometric model (r=0.981 and 0.944, and 0.996 and 0.942). We also demonstrate the application of the convolution ISA approach to cholesterol biosynthesis; the model was applied to measurements made on MCF7 cells cultured in U-¹³C-glucose. In conclusion, we believe that convISA offers a convenient, flexible and transparent framework for metabolic modelling that
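
    The convolution (Cauchy product) idea can be illustrated with a short numeric sketch: the MID of a de novo fatty acid is the repeated convolution of its precursor-unit MIDs, mixed with the pre-existing pool. The Python code below uses assumed values of D and g and is not the convISA implementation.

      import numpy as np

      D = 0.4      # fractional 13C enrichment of the acetyl-CoA pool (assumed)
      g = 0.6      # fractional de novo synthesis during labelling (assumed)

      # Acetyl-CoA unit MID from a uniformly labelled substrate: M+0 with prob 1-D, M+2 with prob D
      acetyl_mid = np.array([1.0 - D, 0.0, D])

      # Palmitate (C16) is built from 8 acetyl units: convolve the unit MID 8 times
      palmitate_de_novo = np.array([1.0])
      for _ in range(8):
          palmitate_de_novo = np.convolve(palmitate_de_novo, acetyl_mid)

      # The observed MID is a mixture of the pre-existing (unlabelled) and de novo pools
      pre_existing = np.zeros_like(palmitate_de_novo)
      pre_existing[0] = 1.0
      observed_mid = (1.0 - g) * pre_existing + g * palmitate_de_novo

      print(observed_mid[:6])   # fractions of M+0 .. M+5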

  4. Theoretical analysis of rotating two phase detonation in a rocket motor

    NASA Technical Reports Server (NTRS)

    Shen, I.; Adamson, T. C., Jr.

    1973-01-01

    Tangential mode, non-linear wave motion in a liquid propellant rocket engine is studied, using a two phase detonation wave as the reaction model. Because the detonation wave is followed immediately by expansion waves, due to the side relief in the axial direction, it is a Chapman-Jouguet wave. The strength of this wave, which may be characterized by the pressure ratio across the wave, as well as the wave speed and the local wave Mach number, are related to design parameters such as the contraction ratio, chamber speed of sound, chamber diameter, propellant injection density and velocity, and the specific heat ratio of the burned gases. In addition, the distribution of flow properties along the injector face can be computed. Numerical calculations show favorable comparison with experimental findings. Finally, the effects of drop size are discussed and a simple criterion is found to set the lower limit of validity of this strong wave analysis.

  5. Topological analysis of electron density and the electrostatic properties of isoniazid: an experimental and theoretical study.

    PubMed

    Rajalakshmi, Gnanasekaran; Hathwar, Venkatesha R; Kumaradhas, Poomani

    2014-04-01

    Isoniazid (isonicotinohydrazide) is an important first-line antitubercular drug that targets the InhA enzyme, which synthesizes a critical component of the mycobacterial cell wall. An experimental charge-density analysis of isoniazid has been performed to understand its structural and electronic properties in the solid state. High-resolution single-crystal X-ray intensity data were collected at 90 K. An aspherical multipole refinement was carried out to explore the topological and electrostatic properties of the isoniazid molecule. The experimental results were compared with theoretical charge-density calculations performed using CRYSTAL09 with the B3LYP/6-31G** method. A topological analysis of the electron density reveals that the Laplacian of the electron density of the N-N bond is significantly less negative, which indicates that the charge at the b.c.p. (bond-critical point) of the bond is least accumulated, and so the bond is considered to be weak. As expected, a strong negative electrostatic potential region is present in the vicinity of the O1, N1 and N3 atoms, which are the reactive locations of the molecule. The C-H···N, C-H···O and N-H···N types of intermolecular hydrogen-bonding interactions stabilize the crystal structure. The topological analysis of the electron density on hydrogen bonding shows the strength of the intermolecular interactions.

  6. Synthesis, spectroscopic analysis and theoretical study of new pyrrole-isoxazoline derivatives

    NASA Astrophysics Data System (ADS)

    Rawat, Poonam; Singh, R. N.; Baboo, Vikas; Niranjan, Priydarshni; Rani, Himanshu; Saxena, Rajat; Ahmad, Sartaj

    2017-02-01

    In the present work, we have efficiently synthesized the pyrrole-isoxazoline derivatives (4a-d) by cyclization of substituted 4-chalconylpyrroles (3a-d) with hydroxylamine hydrochloride. The reactivity of the substituted 4-chalconylpyrroles (3a-d) towards the nucleophile hydroxylamine hydrochloride was evaluated on the basis of electrophilic reactivity descriptors (f_k^+, s_k^+, ω_k^+), which were found to be high at the unsaturated β carbon of the chalconylpyrrole, indicating that it is more prone to nucleophilic attack and thereby favoring the formation of the reported new pyrrole-isoxazoline compounds (4a-d). The structures of the newly synthesized pyrrole-isoxazoline derivatives were derived from IR, ¹H NMR, mass, UV-Vis and elemental analysis. All experimental spectral data corroborate well with the calculated spectral data. The FT-IR analysis shows red shifts in the νN-H and νC=O stretching due to dimer formation through intermolecular hydrogen bonding. With basis set superposition error correction, the intermolecular interaction energies for (4a-d) are found to be 10.10, 9.99, 10.18, 11.01 and 11.19 kcal/mol, respectively. The calculated first hyperpolarizability (β0) values of the (4a-d) molecules are in the range of 7.40-9.05 × 10⁻³⁰ esu, indicating their suitability for non-linear optical (NLO) applications. The experimental spectral results, theoretical data, and analysis of the chalcone intermediates and pyrrole-isoxazolines are useful for the advancement of pyrrole-azole chemistry.

  7. Data Analysis and Theoretical Studies of the Upper Mesosphere and Lower Thermosphere

    NASA Technical Reports Server (NTRS)

    Burns, Alan; Killeen, Timothy L.

    1996-01-01

    Three separate tasks were proposed under this award. The first involved extending our continuing study of electrodynamical feedback between the thermosphere/ionosphere and the magnetosphere. The second was a model-experiment comparison study of global dynamics and the third was a 'spectral energetics' analysis of tidal dissipation and energy exchange mechanisms. The Earth's mesosphere and lower-thermosphere/ionosphere (MLTI), between approximately 60 and 180 km altitude, is the most poorly understood region of the Earth's atmosphere, primarily because of its relative inaccessibility. This lack of knowledge has been widely recognized and has provided important scientific rationale for the upcoming NASA TIMED mission. While the data gathered during the TIMED era will revolutionize our understanding of the MLTI region, much work can be done prior to the mission, both to develop data-analysis and modeling techniques and to study the more limited relevant experimental data from previous missions. The grant reported on here continues and extends an existing successful program of scientific research into the energetics, dynamics and electrodynamics of the MLTI, using available theoretical and data analysis tools.

  8. A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading

    NASA Astrophysics Data System (ADS)

    Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo

    A simple exact error rate analysis is presented for random binary direct sequence code division multiple access (DS-CDMA) considering a general pulse shape and a flat Nakagami fading channel. First, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression for the characteristic function (CF) of the MAI is developed in a straightforward manner. Finally, an exact expression for the error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be evaluated much more easily than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).
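
    The general idea of obtaining an error probability from a characteristic function can be sketched with Gil-Pelaez inversion; the Python example below uses a Gaussian decision variable purely so the result can be checked against the Q-function, and is not the paper's MAI/Nakagami model.

      import numpy as np
      from scipy.integrate import quad
      from scipy.stats import norm

      mu, sigma = 1.0, 0.5      # assumed mean and std of the decision variable

      def cf(t):
          # characteristic function of N(mu, sigma^2)
          return np.exp(1j * mu * t - 0.5 * (sigma * t) ** 2)

      # Gil-Pelaez: P(Z <= 0) = 1/2 - (1/pi) * integral_0^inf Im(cf(t)) / t dt
      integral, _ = quad(lambda t: np.imag(cf(t)) / t, 1e-12, 200.0, limit=500)
      p_error = 0.5 - integral / np.pi

      print(p_error, norm.cdf(-mu / sigma))   # the two values should agree closely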

  9. Survey and analysis of simple sequence repeats (SSRs) in three genomes of Candida species.

    PubMed

    Jia, Dongmei

    2016-06-15

    Simple sequence repeats (SSRs), or microsatellites, which are composed of tandemly repeated short units of 1-6 bp, have received continuous attention. Here, the distribution, composition and polymorphism of microsatellites and compound microsatellites were analyzed in three available genomes of Candida species (Candida dubliniensis, Candida glabrata and Candida orthopsilosis). The results show that there were 118,047, 66,259 and 61,119 microsatellites in the genomes of C. dubliniensis, C. glabrata and C. orthopsilosis, respectively. The SSRs covered more than one-third of the genome length in the three species. The microsatellites that consist only of the bases A and (or) T, such as (A)n, (T)n, (AT)n, (TA)n, (AAT)n, (TAA)n, (TTA)n, (ATA)n, (ATT)n and (TAT)n, were predominant in the three genomes. The lengths of the microsatellites were concentrated at 6 bp and 9 bp, both in the three genomes and in their coding sequences. Moreover, the relative abundance (19.89/kbp) and relative density (167.87 bp/kbp) of SSRs in the mitochondrial sequence of C. glabrata were significantly greater than those in any of the genomes or chromosomes of the three species. In addition, the distance between any two adjacent microsatellites was an important factor influencing the formation of compound microsatellites. The analysis may be helpful for further studying the roles of microsatellites in the origination, organization and evolution of the genomes of Candida species. Copyright © 2016 Elsevier B.V. All rights reserved.
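
    Locating perfect tandem repeats of 1-6 bp motifs in a sequence is straightforward with a regular expression; the Python sketch below is a generic illustration (overlapping hits are reported and the thresholds are arbitrary), not the pipeline used in the survey.

      import re

      def find_ssrs(seq, min_unit=1, max_unit=6, min_copies=3):
          """Yield (start, motif, copies) for perfect tandem repeats in seq."""
          pattern = re.compile(
              r"(?=(([ACGT]{%d,%d})\2{%d,}))" % (min_unit, max_unit, min_copies - 1)
          )
          for m in pattern.finditer(seq):
              repeat, motif = m.group(1), m.group(2)
              yield m.start(), motif, len(repeat) // len(motif)

      demo = "GGATATATATATCCCAAATTTAGCAGCAGCAGCTT"   # toy sequence
      for start, motif, copies in find_ssrs(demo):
          print(start, motif, copies)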

  10. Palaeomagnetic analysis of plunging fold structures: Errors and a simple fold test

    NASA Astrophysics Data System (ADS)

    Stewart, Simon A.

    1995-02-01

    The conventional corrections for bedding dip in palaeomagnetic studies involve untilting either about strike or about some inclined axis; the choice is usually governed by the perceived fold hinge orientation. While it has been recognised that untilting bedding about strike can be erroneous if the beds lie within plunging fold structures, there are several types of fold which have plunging hinges but whose limbs have rotated about horizontal axes. Examples are interference structures and forced folds; restoration about inclined axes may be incorrect in these cases. The angular errors imposed upon palaeomagnetic lineation data by the wrong choice of rotation axis during unfolding are calculated here and presented for lineations in any orientation that could be associated with an upright, symmetrical fold. This extends to palaeomagnetic data previous analyses that applied only to bedding-parallel lineations. This numerical analysis highlights the influence of the various parameters describing fold geometry and relative lineation orientation upon the angular error imparted to lineation data by the wrong unfolding method. The effect of each parameter is described, and the interaction of the parameters in producing the final error is discussed. Structural and palaeomagnetic data are cited from two field examples of fold structures which illustrate the alternative kinematic histories. Both are from thin-skinned thrust belts, but the data show that one is a true plunging fold, formed by rotation about its inclined hinge, whereas the other is an interference structure produced by rotation of the limbs about non-parallel horizontal axes. Since the angle between the palaeomagnetic lineations and the inclined fold hinge is equal on both limbs in the former type of structure, but varies from limb to limb in the latter, a simple test can be defined which uses palaeomagnetic lineation data to identify rotation axes and hence fold type. This test can use pre- or syn
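
    The untilting operations discussed above amount to rotating a lineation vector about a chosen axis; the Python sketch below rotates a vector about a horizontal strike axis with scipy, using arbitrary example values. The sign of the rotation depends on the dip-direction convention, so this is an illustration of the mechanics only.

      import numpy as np
      from scipy.spatial.transform import Rotation

      strike_deg, dip_deg = 30.0, 40.0          # assumed bedding attitude
      strike = np.radians(strike_deg)
      # Horizontal unit vector along strike in (east, north, up) coordinates
      strike_axis = np.array([np.sin(strike), np.cos(strike), 0.0])

      # Example lineation as a unit vector (east, north, up)
      lineation = np.array([0.5, 0.7, -0.3])
      lineation /= np.linalg.norm(lineation)

      # Untilt about strike: rotate by the dip angle about the strike axis
      restore = Rotation.from_rotvec(-np.radians(dip_deg) * strike_axis)
      restored_lineation = restore.apply(lineation)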

  11. Performance of the Tariff Method: validation of a simple additive algorithm for analysis of verbal autopsies

    PubMed Central

    2011-01-01

    Background Verbal autopsies provide valuable information for studying mortality patterns in populations that lack reliable vital registration data. Methods for transforming verbal autopsy results into meaningful information for health workers and policymakers, however, are often costly or complicated to use. We present a simple additive algorithm, the Tariff Method (termed Tariff), which can be used for assigning individual cause of death and for determining cause-specific mortality fractions (CSMFs) from verbal autopsy data. Methods Tariff calculates a score, or "tariff," for each cause, for each sign/symptom, across a pool of validated verbal autopsy data. The tariffs are summed for a given response pattern in a verbal autopsy, and this sum (score) provides the basis for predicting the cause of death in a dataset. We implemented this algorithm and evaluated the method's predictive ability, both in terms of chance-corrected concordance at the individual cause assignment level and in terms of CSMF accuracy at the population level. The analysis was conducted separately for adult, child, and neonatal verbal autopsies across 500 pairs of train-test validation verbal autopsy data. Results Tariff is capable of outperforming physician-certified verbal autopsy in most cases. In terms of chance-corrected concordance, the method achieves 44.5% in adults, 39% in children, and 23.9% in neonates. CSMF accuracy was 0.745 in adults, 0.709 in children, and 0.679 in neonates. Conclusions Verbal autopsies can be an efficient means of obtaining cause of death data, and Tariff provides an intuitive, reliable method for generating individual cause assignment and CSMFs. The method is transparent and flexible and can be readily implemented by users without training in statistics or computer science. PMID:21816107
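
    The additive scoring described above can be sketched in a few lines of Python; the tariff table and symptom set below are made-up illustrations, not values from the validation study.

      tariffs = {
          "cause_A": {"fever": 2.0, "cough": 0.5, "chest_pain": 3.0},
          "cause_B": {"fever": 0.5, "cough": 4.0, "chest_pain": 1.0},
      }

      def assign_cause(symptoms, tariffs):
          """Sum the tariffs of endorsed symptoms per cause; return the top cause and all scores."""
          scores = {
              cause: sum(table.get(s, 0.0) for s in symptoms)
              for cause, table in tariffs.items()
          }
          return max(scores, key=scores.get), scores

      best, scores = assign_cause({"fever", "chest_pain"}, tariffs)
      print(best, scores)   # cause_A, {'cause_A': 5.0, 'cause_B': 1.5}

      # Cause-specific mortality fractions follow from counting assignments over a dataset
      assignments = [best]
      csmf = {c: assignments.count(c) / len(assignments) for c in tariffs}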

  12. Inter simple sequence repeat (ISSR) analysis of genetic diversity in tef [Eragrostis tef (Zucc.) Trotter].

    PubMed

    Assefa, Kebebew; Merker, Arnulf; Tefera, Hailu

    2003-01-01

    The DNA polymorphism among 92 selected tef genotypes belonging to eight origin groups was assessed using eight inter simple sequence repeat (ISSR) primers. The objectives were to examine the possibility of using ISSR markers for unravelling genetic diversity in tef, and to assess the extent and pattern of genetic diversity in the test germplasm with respect to origin groups. The eight primers were able to separate or distinguish all of the 92 tef genotypes based on a total of 110 polymorphic bands among the test lines. The Jaccard similarity coefficient among the test genotypes ranged from 0.26 to 0.86, and at about a 60% similarity level the clustering of this matrix using the unweighted pair-group method with arithmetic average (UPGMA) resulted in the formation of six major clusters of 2 to 37 lines, with a further eight lines remaining ungrouped. The standardized Nei genetic distance among the eight groups of origin ranged between 0.03 and 0.32. The UPGMA clustering using the standardized genetic distance matrix resulted in the identification of three clusters of the eight groups of origin, with bootstrap values ranging from 56 to 97. The overall mean Shannon-Weaver diversity index of the test lines was 0.73, indicating better resolution of genetic diversity in tef with ISSR markers than with the phenotypic (morphological) traits used in previous studies. This can be attributed mainly to the larger number of loci generated for evaluation with ISSR analysis, as compared to the small number of phenotypic traits amenable to assessment, which are furthermore greatly affected by environment and genotype × environment interaction. Analysis of variance of mean Shannon-Weaver diversity indices revealed substantial (P ≤ 0.05) variation in the level of diversity among the eight groups of origin. In conclusion, our results indicate that ISSR can be useful as DNA-based molecular markers for studying genetic diversity and phylogenetic relationships, DNA fingerprinting for the
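
    The Jaccard-plus-UPGMA route named above can be reproduced with standard scientific Python tools; the sketch below runs on a random 0/1 band matrix standing in for the ISSR scores.

      import numpy as np
      from scipy.spatial.distance import pdist
      from scipy.cluster.hierarchy import linkage

      rng = np.random.default_rng(3)
      bands = rng.integers(0, 2, size=(10, 110))   # 10 genotypes x 110 polymorphic bands (synthetic)

      # pdist's "jaccard" metric returns a distance, i.e. 1 - Jaccard similarity
      jaccard_dist = pdist(bands.astype(bool), metric="jaccard")
      tree = linkage(jaccard_dist, method="average")   # "average" linkage = UPGMA

      print(1.0 - jaccard_dist[:5])   # a few pairwise Jaccard similarities
      # scipy.cluster.hierarchy.dendrogram(tree) can be used to plot the tree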

  13. Control Theoretic Approach to Iterative Methods for Large-scale Toeplitz-type Systems with Application to Magnetic Field Analysis

    NASA Astrophysics Data System (ADS)

    Oda, Tomohito; Kashima, Kenji; Imura, Jun-Ichi; Miyazaki, Shuji; Morita, Hiroshi

    In this paper, stationary iterative methods for large-scale Toeplitz-type systems are investigated from a control theoretic point of view. We utilize spatially invariant structure of Toeplitz matrices, to avoid the curse of dimensionality arising in analysis and design of the convergence properties. Nonlinearities in the system are theoretically handled within the small gain and stability analysis for Lur'e systems. This theory enables us to achieve the desired global convergence of the proposed numerical scheme. We also evaluate the efficacy of the proposed method through an application to magnetic field analysis.
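
    As a concrete instance of the class of schemes analysed here, the Python sketch below runs a plain Jacobi iteration on a small symmetric, diagonally dominant Toeplitz system; the matrix, splitting and stopping rule are generic textbook choices, not the authors' design.

      import numpy as np
      from scipy.linalg import toeplitz

      n = 200
      col = np.zeros(n)
      col[0], col[1] = 4.0, 1.0        # diagonally dominant, symmetric Toeplitz matrix
      A = toeplitz(col)
      b = np.ones(n)

      # Jacobi splitting A = D + R; iterate x_{k+1} = D^{-1} (b - R x_k)
      D = np.diag(A)
      R = A - np.diag(D)
      x = np.zeros(n)
      for k in range(500):
          x_new = (b - R @ x) / D
          converged = np.linalg.norm(x_new - x, np.inf) < 1e-10
          x = x_new
          if converged:
              break

      print(k, np.linalg.norm(A @ x - b, np.inf))   # iterations used and final residual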

  14. Genome wide characterization of simple sequence repeats in watermelon genome and their application in comparative mapping and genetic diversity analysis

    USDA-ARS?s Scientific Manuscript database

    Simple sequence repeats (SSR) or microsatellite markers are one of the most informative and versatile DNA-based markers. The use of next-generation sequencing technologies allow whole genome sequencing and make it possible to develop large numbers of SSRs through bioinformatic analysis of genome da...

  15. Is BAMM Flawed? Theoretical and Practical Concerns in the Analysis of Multi-Rate Diversification Models.

    PubMed

    Rabosky, Daniel L; Mitchell, Jonathan S; Chang, Jonathan

    2017-07-01

    Bayesian analysis of macroevolutionary mixtures (BAMM) is a statistical framework that uses reversible jump Markov chain Monte Carlo to infer complex macroevolutionary dynamics of diversification and phenotypic evolution on phylogenetic trees. A recent article by Moore et al. (MEA) reported a number of theoretical and practical concerns with BAMM. Major claims from MEA are that (i) BAMM's likelihood function is incorrect, because it does not account for unobserved rate shifts; (ii) the posterior distribution on the number of rate shifts is overly sensitive to the prior; and (iii) diversification rate estimates from BAMM are unreliable. Here, we show that these and other conclusions from MEA are generally incorrect or unjustified. We first demonstrate that MEA's numerical assessment of the BAMM likelihood is compromised by their use of an invalid likelihood function. We then show that "unobserved rate shifts" appear to be irrelevant for biologically plausible parameterizations of the diversification process. We find that the purportedly extreme prior sensitivity reported by MEA cannot be replicated with standard usage of BAMM v2.5, or with any other version when conventional Bayesian model selection is performed. Finally, we demonstrate that BAMM performs very well at estimating diversification rate variation across the ~20% of simulated trees in MEA's data set for which it is theoretically possible to infer rate shifts with confidence. Due to ascertainment bias, the remaining 80% of their purportedly variable-rate phylogenies are statistically indistinguishable from those produced by a constant-rate birth-death process and were thus poorly suited for the summary statistics used in their performance assessment. We demonstrate that inferences about diversification rates have been accurate and consistent across all major previous releases of the BAMM software. We recognize an acute need to address the theoretical foundations of rate-shift models for

  16. Site-city interaction: theoretical, numerical and experimental crossed-analysis

    NASA Astrophysics Data System (ADS)

    Schwan, L.; Boutin, C.; Padrón, L. A.; Dietz, M. S.; Bard, P.-Y.; Taylor, C.

    2016-05-01

    The collective excitation of city structures by a seismic wavefield and the subsequent multiple Structure-Soil-Structure Interactions (SSSIs) between the buildings are usually disregarded in conventional seismology and earthquake engineering practice. The objective here is to qualify and quantify these complex multiple SSSIs through the design of an elementary study case, which serves as a benchmark for theoretical, numerical and experimental crossed-analysis. The experimental specimen consists of an idealized site-city setup with up to 37 anisotropic resonant structures arranged at the top surface of an elastic layer and in co-resonance with it. The experimental data from shaking table measurements is compared with the theoretical and numerical results provided respectively by an equivalent city-impedance model derived analytically from homogenization in the long-wavelength approximation and a model based on boundary elements. The signatures of the site-city interactions are identified in the frequency, time and space domain, and in particular consist of a frequency-dependent free/rigid switch in the surface condition at the city resonance, beatings in the records and the depolarization of the wavefield. A parametric study on the city density shows that multiple SSSIs among the city structures (five are sufficient) can have significant effects on both the seismic response of its implantation site and that of the buildings. Key parameters are provided to assess site-city interactions in the low seismic frequency range: They involve the mass and rigidity of the city compared to those of the soil and the damping of the building.

  17. Theoretical Analysis on the Effect of Tunnel Excavation on Building strip foundation

    NASA Astrophysics Data System (ADS)

    Tian, Xiaoyan; Gu, Shuancheng; Huang, Rongbin

    2017-09-01

    In this paper, based on the characteristics of the ground settlement trough curves, the influence of tunnel excavation on a building strip foundation is first studied by inverse analysis. The differential equation for the combined action of the strip foundation and the underlying soil during tunnel excavation is established from the equilibrium of forces on a micro-element. The corresponding homogeneous equation is then solved using the initial parameter method. Based on the plane-section assumption, combined with the basic theory of the mechanics of materials and the differentiation properties of hyperbolic trigonometric functions, and using MATLAB/Mathematica software, theoretical expressions are obtained for the displacement and internal forces of a strip foundation undercrossed by a tunnel. Finally, through an engineering case study, the influence on the foundation response of the relative position between the tunnel and the foundation and of the main parameters is examined. The results show that the zone of influence of the tunnel on the foundation extends over [-0.5 to 1.5] times the foundation length, and the settlement is largest when the tunnel centre is at the end of the foundation. The soil loss rate, the excavation cross-section and the burial depth of the tunnel have a great influence on the foundation response, and the foundation height has a great influence on its internal forces.

  18. Theoretical Analysis of the Optical Propagation Characteristics in a Fiber-Optic Surface Plasmon Resonance Sensor

    PubMed Central

    Liu, Linlin; Yang, Jun; Yang, Zhong; Wan, Xiaoping; Hu, Ning; Zheng, Xiaolin

    2013-01-01

    Surface plasmon resonance (SPR) sensors are widely used for their high precision and real-time analysis. Fiber-optic SPR sensors are easy to miniaturize, so they are commonly used in the development of portable detection equipment. They can also be used for remote, real-time, and online detection. In this study, a wavelength-modulation fiber-optic SPR sensor is designed, and a theoretical analysis of the optical propagation in the optical fiber is also performed. Compared with existing methods, both the transmission of skew rays and the influence of chromatic dispersion are discussed. The resonance wavelength is calculated for two different cases in which the chromatic dispersion in the fiber core is considered. Based on the simulation results, a novel multi-channel fiber-optic SPR sensor is also designed to avoid the errors introduced by the complicated computation of the skew rays as well as the chromatic dispersion. Avoiding the impact of skew rays does much to improve the precision of this kind of sensor.

  19. Theoretical analysis of electromigration-induced failure of metallic thin films due to transgranular void propagation

    SciTech Connect

    Gungor, M.R.; Maroudas, D.

    1999-02-01

    Failure of metallic thin films driven by electromigration is among the most challenging materials reliability problems in microelectronics toward ultra-large-scale integration. One of the most serious failure mechanisms in thin films with bamboo grain structure is the propagation of transgranular voids, which may lead to open-circuit failure. In this article, a comprehensive theoretical analysis is presented of the complex nonlinear dynamics of transgranular voids in metallic thin films as determined by capillarity-driven surface diffusion coupled with drift induced by electromigration. Our analysis is based on self-consistent dynamical simulations of void morphological evolution and it is aided by the conclusions of an approximate linear stability theory. Our simulations emphasize that the strong dependence of surface diffusivity on void surface orientation, the strength of the applied electric field, and the void size play important roles in the dynamics of the voids. The simulations predict void faceting, formation of wedge-shaped voids due to facet selection, propagation of slit-like features emanating from void surfaces, open-circuit failure due to slit propagation, as well as appearance and disappearance of soliton-like features on void surfaces prior to failure. These predictions are in very good agreement with recent experimental observations during accelerated electromigration testing of unpassivated metallic films. The simulation results are used to establish conditions for the formation of various void morphological features and discuss their serious implications for interconnect reliability. © 1999 American Institute of Physics.

  20. Cognitive eloquence in neurosurgery: Insight from graph theoretical analysis of complex brain networks.

    PubMed

    Lang, Stefan

    2017-01-01

    The structure and function of the brain can be described by complex network models, and the topological properties of these models can be quantified by graph theoretical analysis. This has given insight into brain regions, known as hubs, which are critical for integrative functioning and information transfer, both fundamental aspects of cognition. In this manuscript a hypothesis is put forward for the concept of cognitive eloquence in neurosurgery; that is, regions (cortical, subcortical and white matter) of the brain which may not necessarily have readily identifiable neurological function, but which, if injured, may result in disproportionate cognitive morbidity. To this end, the effects of neurosurgical resection on cognition are reviewed and an overview of the role of complex network analysis in the understanding of brain structure and function is provided. The literature describing the network, behavioral, and cognitive effects resulting from lesions to, and disconnections of, centralized hub regions is emphasized as evidence for the espousal of the concept of cognitive eloquence. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Limit analysis and homogenization of porous materials with Mohr-Coulomb matrix. Part I: Theoretical formulation

    NASA Astrophysics Data System (ADS)

    Anoukou, K.; Pastor, F.; Dufrenoy, P.; Kondo, D.

    2016-06-01

    The present two-part study aims at investigating the specific effects of a Mohr-Coulomb matrix on the strength of ductile porous materials by using a kinematic limit analysis approach. While in Part II static and kinematic bounds are numerically derived and used for validation purposes, the present Part I focuses on the theoretical formulation of a macroscopic strength criterion for porous Mohr-Coulomb materials. To this end, we consider a hollow sphere model with a rigid perfectly plastic Mohr-Coulomb matrix, subjected to axisymmetric uniform strain rate boundary conditions. Taking advantage of an appropriate family of three-parameter trial velocity fields accounting for the specific plastic deformation mechanisms of the Mohr-Coulomb matrix, we then provide a solution of the constrained minimization problem required for the determination of the macroscopic dissipation function. The macroscopic strength criterion is then obtained by means of the Lagrangian method combined with the Karush-Kuhn-Tucker conditions. After a careful analysis and discussion of the plastic admissibility condition associated with the Mohr-Coulomb criterion, the above procedure leads to a parametric closed-form expression of the macroscopic strength criterion. The latter explicitly shows a dependence on the three stress invariants. In the special case of a friction angle equal to zero, the established criterion reduces to recently available results for porous Tresca materials. Finally, the effects of both the matrix friction angle and the porosity are briefly illustrated and, for completeness, the macroscopic plastic flow rule and the void evolution law are fully furnished.

  2. Theoretical Analysis of Heat Pump Cycle Characteristics with Pure Refrigerants and Binary Refrigerant Mixtures

    NASA Astrophysics Data System (ADS)

    Kagawa, Noboru; Uematsu, Masahiko; Watanabe, Koichi

    In recent years there has been increasing interest in the use of nonazeotropic binary mixtures to improve performance in heat pump systems and to restrict the consumption of chlorofluorocarbon (CFC) refrigerants, as internationally agreed in the Montreal Protocol. However, the available knowledge on the thermophysical properties of mixtures is very limited, particularly with respect to quantitative information. In order to examine the cycle performance of the Refrigerant 12 (CCl2F2) + Refrigerant 22 (CHClF2) and Refrigerant 22 + Refrigerant 114 (CClF2-CClF2) systems, which are technically important halogenated refrigerant mixtures, a heat pump cycle analysis using the pure Refrigerants 12, 22 and 114 was carried out theoretically in the present paper. For the purpose of systematizing the heat pump cycle characteristics with pure refrigerants, the cycle analysis for Refrigerants 502, 13B1, 152a, 717 (NH3) and 290 (C3H8) was also examined. It became clear that the maximum coefficients of performance with the various refrigerants were obtained at a reduced condensing temperature of 0.9 when the same temperature difference between the condensing and evaporating temperatures was chosen.

  3. Wettability of graphitic-carbon and silicon surfaces: MD modeling and theoretical analysis

    SciTech Connect

    Ramos-Alvarado, Bladimir; Kumar, Satish; Peterson, G. P.

    2015-07-28

    The wettability of graphitic carbon and silicon surfaces was numerically and theoretically investigated. A multi-response method has been developed for the analysis of conventional molecular dynamics (MD) simulations of droplet wettability. The contact angle and indicators of the quality of the computations are tracked as a function of the data sets analyzed over time. This method of analysis allows accurate calculation of the contact angle obtained from the MD simulations. Analytical models were also developed for the calculation of the work of adhesion using mean-field theory, accounting for the interfacial entropy changes. A calibration method is proposed to provide better predictions of the respective contact angles under different solid-liquid interaction potentials. Estimates of the binding energy between a water monomer and graphite match those previously reported. In addition, a breakdown in the relationship between the binding energy and the contact angle was observed. The macroscopic contact angles obtained from the MD simulations were found to match those predicted by the mean-field model for graphite under different wettability conditions, as well as the contact angles of Si(100) and Si(111) surfaces. Finally, an assessment of the effect of the Lennard-Jones cutoff radius was conducted to provide guidelines for future comparisons between numerical simulations and analytical models of wettability.

  4. Theoretical analysis of the kinetics of DNA hybridization with gel-immobilized oligonucleotides.

    PubMed Central

    Livshits, M A; Mirzabekov, A D

    1996-01-01

    A new method of DNA sequencing by hybridization using a microchip containing a set of immobilized oligonucleotides is being developed. A theoretical analysis is presented of the kinetics of DNA hybridization with deoxynucleotide molecules chemically tethered in a polyacrylamide gel layer. The analysis has shown that long-term evolution of the spatial distribution and of the amount of DNA bound in a hybridization cell is governed by "retarded diffusion," i.e., diffusion of the DNA interrupted by repeated association and dissociation with immobile oligonucleotide molecules. Retarded diffusion determines the characteristic time of establishing a final equilibrium state in a cell, i.e., the state with the maximum quantity and a uniform distribution of bound DNA. In the case of cells with the most stable, perfect duplexes, the characteristic time of retarded diffusion (which is proportional to the equilibrium binding constant and to the concentration of binding sites) can be longer than the duration of the real hybridization procedure. This conclusion is indirectly confirmed by the observation of nonuniform fluorescence of labeled DNA in perfect-match hybridization cells (brighter at the edges). For optimal discrimination of perfect duplexes from duplexes with mismatches the hybridization process should be brought to equilibrium under low-temperature nonsaturation conditions for all cells. The kinetic differences between perfect and nonperfect duplexes in the gel allow further improvement in the discrimination through additional washing at low temperature after hybridization. PMID:8913616

  5. Electrochemical and theoretical analysis of the reactivity of shikonin derivatives: dissociative electron transfer in esterified compounds.

    PubMed

    Armendáriz-Vidales, Georgina; Frontana, Carlos

    2014-09-07

    An electrochemical and theoretical analysis of a series of shikonin derivatives in aprotic media is presented. The results showed that the first electrochemical reduction signal is a reversible monoelectronic transfer generating a stable semiquinone intermediate; the corresponding E⁰(I) values were correlated with calculated values of the electroaccepting power (ω⁺) and adiabatic electron affinities (A_Ad), obtained with BHandHLYP/6-311++G(2d,2p) and considering the solvent effect, revealing the influence of intramolecular hydrogen bonding and of the substituting group at position C-2 on the experimental reduction potential. For the second reduction step, the esterified compounds isobutyryl- and isovalerylshikonin presented a coupled chemical reaction following dianion formation. Analysis of the variation of the dimensionless cathodic peak potential (ξ_p) as a function of the scan rate (v), together with complementary experiments in benzonitrile, suggested that this process follows a dissociative electron transfer, in which the rate of heterogeneous electron transfer is slow (~0.2 cm s⁻¹) and the rate constant of the chemical process is at least 10⁵ times larger.

  6. Theoretical and experimental analysis of liquid layer instability in hybrid rocket engines

    NASA Astrophysics Data System (ADS)

    Kobald, Mario; Verri, Isabella; Schlechtriem, Stefan

    2015-03-01

    The combustion behavior of different hybrid rocket fuels has been analyzed in the frame of this research. Tests have been performed in a 2D slab burner configuration with windows on two sides. Four different liquefying paraffin-based fuels, hydroxyl terminated polybutadiene (HTPB) and high-density polyethylene (HDPE) have been tested in combination with gaseous oxygen (GOX). Experimental high-speed video data have been analyzed manually and with the proper orthogonal decomposition (POD) technique. Application of POD enables the recognition of the main structures of the flow field and the combustion flame appearing in the video data. These results include spatial and temporal analysis of the structures. For liquefying fuels these spatial values relate to the wavelengths associated to the Kelvin Helmholtz Instability (KHI). A theoretical long-wave solution of the KHI problem shows good agreement with the experimental results. Distinct frequencies found in the POD analysis can be related to the precombustion chamber configuration which can lead to vortex shedding phenomena.

  7. Accounting for the kinetics in order parameter analysis: lessons from theoretical models and a disordered peptide.

    PubMed

    Berezovska, Ganna; Prada-Gracia, Diego; Mostarda, Stefano; Rao, Francesco

    2012-11-21

    Molecular simulations as well as single-molecule experiments have been widely analyzed in terms of order parameters, the latter representing candidate probes for the relevant degrees of freedom. Although this approach is very intuitive, mounting evidence has shown that such descriptions can be inaccurate, leading to ambiguous definitions of states and wrong kinetics. To overcome these limitations, a framework making use of order parameter fluctuations in conjunction with complex network analysis is investigated. Derived from recent advances in the analysis of single-molecule time traces, this approach takes into account the fluctuations around each time point to distinguish between states that have similar values of the order parameter but different dynamics. Snapshots with similar fluctuations are used as nodes of a transition network, the clusterization of which into states provides accurate Markov state models of the system under study. Application of the methodology to theoretical models with a noisy order parameter, as well as to the dynamics of a disordered peptide, illustrates the possibility of building accurate descriptions of molecular processes on the sole basis of order parameter time series, without using any supplementary information.
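
    The final clustering step produces a Markov state model; the Python sketch below shows the generic bookkeeping of estimating a transition matrix from a discretised order-parameter trajectory, with crude histogram binning standing in for the fluctuation-based network clustering described in the abstract.

      import numpy as np

      rng = np.random.default_rng(5)
      series = np.cumsum(rng.normal(size=5000))            # toy order-parameter trajectory
      edges = np.linspace(series.min(), series.max(), 10)
      states = np.digitize(series, edges)                  # crude state assignment

      n_states = states.max() + 1
      lag = 10                                             # lag time in frames (assumed)
      counts = np.zeros((n_states, n_states))
      for i, j in zip(states[:-lag], states[lag:]):
          counts[i, j] += 1.0

      # Row-normalise the counts into transition probabilities, skipping empty rows
      row_sums = counts.sum(axis=1, keepdims=True)
      T = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)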

  8. Longitudinal, transverse, and single-particle dynamics in liquid Zn: Ab initio study and theoretical analysis

    NASA Astrophysics Data System (ADS)

    del Rio, B. G.; González, L. E.

    2017-06-01

    We perform ab initio molecular dynamics simulations of liquid Zn near the melting point in order to study the longitudinal and transverse dynamic properties of the system. We find two propagating excitations in both the longitudinal and transverse dynamics over a wide range of wave vectors. This is in agreement with some experimental observations of the dynamic structure factor in the region around half the position of the main peak. Moreover, a two-mode structure in the transverse and longitudinal current correlation functions had also previously been observed in high-pressure liquid metallic systems. We perform a theoretical analysis to investigate the possible origin of the two components by resorting to mode-coupling theories. These are found to describe qualitatively the appearance of two modes in the dynamics, but their relative intensities are not quantitatively reproduced. We suggest some possible improvements of the theory through the analysis of the structure of the memory functions. We also analyze the single-particle dynamics embedded in the velocity autocorrelation function and explain its characteristics through mode-coupling concepts.
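
    The single-particle quantity mentioned last, the velocity autocorrelation function, is straightforward to estimate from an MD velocity trajectory. The sketch below is a generic estimator, not the authors' analysis code; the array shapes, the stride over time origins, and the synthetic velocities are assumptions.

        # Normalized velocity autocorrelation function Z(t), averaged over atoms and time origins.
        import numpy as np

        def vacf(vel, origin_stride=10):
            """vel: (n_steps, n_atoms, 3) particle velocities (hypothetical input)."""
            n_steps = vel.shape[0]
            c = np.zeros(n_steps)
            counts = np.zeros(n_steps)
            for t0 in range(0, n_steps, origin_stride):
                dots = np.einsum("tij,ij->t", vel[t0:], vel[t0])   # sum over atoms and x, y, z
                c[:dots.size] += dots
                counts[:dots.size] += 1
            Z = c / counts
            return Z / Z[0]                                        # normalize so that Z(0) = 1

        vel = np.random.default_rng(2).normal(size=(2000, 108, 3))  # stand-in for ab initio velocities
        Z = vacf(vel)
        dos = np.abs(np.fft.rfft(Z))   # the spectrum of Z(t) gives the single-particle density of states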

  9. Vibrational analysis and formation mechanism of typical deep eutectic solvents: An experimental and theoretical study.

    PubMed

    Zhu, Siwen; Li, Hongping; Zhu, Wenshuai; Jiang, Wei; Wang, Chao; Wu, Peiwen; Zhang, Qi; Li, Huaming

    2016-07-01

    Deep eutectic solvents (DESs), ionic-liquid analogues used as green solvents, have gained increasing attention in chemistry. In this work, three typical DESs (ChCl/Gly, ChCl/AcOH and ChCl/Urea) were successfully synthesized and characterized by Fourier transform infrared (FTIR) and Raman spectroscopy. Comprehensive and systematic analyses were then performed using density functional theory (DFT). Two methods (B3LYP/6-311++G(2d,p) and dispersion-corrected B3LYP-D3/6-311++G(2d,p)) were employed to investigate the structures and vibrational frequencies of the DESs and to assign the vibrational modes. Nearly all the experimental characteristic IR and Raman peaks were identified according to the calculated results. Linear fitting of the calculated versus experimental vibrational frequencies shows that both methods reproduce the experimental results well. In addition, hydrogen bonds were shown to exist in the DESs by the IR spectra, structural analysis, and reduced density gradient (RDG) analysis. This work aims at predicting and understanding the vibrational spectra of the three typical DESs on the basis of DFT methods. Moreover, the comparison of experimental and theoretical results provides a deeper understanding of the formation mechanisms of DESs. Copyright © 2016 Elsevier Inc. All rights reserved.
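
    The linear fitting of calculated against experimental frequencies mentioned above is commonly done with a single least-squares scaling factor. The sketch below only illustrates that procedure; the band positions are placeholder numbers, not values from this study.

        # Least-squares scaling factor through the origin: nu_exp ~ k * nu_calc.
        import numpy as np

        calc = np.array([1045.0, 1250.0, 1480.0, 1660.0, 2950.0, 3400.0])  # harmonic DFT frequencies (cm-1, placeholders)
        expt = np.array([1030.0, 1235.0, 1455.0, 1630.0, 2880.0, 3320.0])  # observed IR/Raman bands (cm-1, placeholders)

        k = np.sum(calc * expt) / np.sum(calc**2)    # absorbs the systematic overestimation of harmonic frequencies
        resid = expt - k * calc
        r2 = 1.0 - np.sum(resid**2) / np.sum((expt - expt.mean())**2)
        print(f"scaling factor = {k:.4f}, R^2 = {r2:.4f}")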

  10. Moral distress: a comparative analysis of theoretical understandings and inter-related concepts.

    PubMed

    Lützén, Kim; Kvist, Beatrice Ewalds

    2012-03-01

    Research on ethical dilemmas in health care has become increasingly salient during the last two decades, resulting in confusion about the concept of moral distress. The aim of the present paper is to provide an overview and a comparative analysis of the theoretical understandings of moral distress and related concepts. The focus is on five concepts: moral distress, moral stress, stress of conscience, moral sensitivity, and ethical climate. It is suggested that moral distress connects mainly to a psychological perspective, stress of conscience more to a theological-philosophical standpoint, and moral stress mostly to a physiological perspective. Further analysis indicates that these concepts can be linked to moral sensitivity and ethical climate through their relationship to moral agency. Moral agency comprises a moral awareness of moral problems and moral responsibility for others. It is suggested that moral distress may serve as a positive catalyst in exercising moral agency. An interdisciplinary approach in research and practice broadens our understanding of moral distress and its impact on health care personnel and patient care.

  11. Wettability of graphitic-carbon and silicon surfaces: MD modeling and theoretical analysis

    NASA Astrophysics Data System (ADS)

    Ramos-Alvarado, Bladimir; Kumar, Satish; Peterson, G. P.

    2015-07-01

    The wettability of graphitic carbon and silicon surfaces was numerically and theoretically investigated. A multi-response method was developed for the analysis of conventional molecular dynamics (MD) simulations of droplet wettability. The contact angle and indicators of the quality of the computations are tracked as a function of the data sets analyzed over time. This method of analysis allows accurate calculation of the contact angle obtained from the MD simulations. Analytical models were also developed for the calculation of the work of adhesion using mean-field theory, accounting for the interfacial entropy changes. A calibration method is proposed to provide better predictions of the respective contact angles under different solid-liquid interaction potentials. Estimates of the binding energy between a water monomer and graphite match those previously reported. In addition, a breakdown in the relationship between the binding energy and the contact angle was observed. The macroscopic contact angles obtained from the MD simulations were found to match those predicted by the mean-field model for graphite under different wettability conditions, as well as the contact angles of Si(100) and Si(111) surfaces. Finally, an assessment of the effect of the Lennard-Jones cutoff radius was conducted to provide guidelines for future comparisons between numerical simulations and analytical models of wettability.
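
    A common way to extract a contact angle from an MD droplet, consistent in spirit with the analysis described above but not taken from it, is to fit a circle to the time-averaged droplet boundary profile and read the angle where the circle meets the solid surface. Everything in the sketch below (the boundary points, the surface height z0, the function name) is an illustrative assumption.

        # Algebraic circle fit to droplet boundary points (r, z), then the contact angle
        # from the circle geometry at the solid surface z = z0.
        import numpy as np

        def contact_angle(r, z, z0=0.0):
            # Fit r^2 + z^2 = a*r + b*z + c (least squares) -> center (a/2, b/2) and radius.
            A = np.column_stack([r, z, np.ones_like(r)])
            a, b, c = np.linalg.lstsq(A, r**2 + z**2, rcond=None)[0]
            zc = b / 2.0
            R = np.sqrt(c + a**2 / 4.0 + b**2 / 4.0)
            return np.degrees(np.arccos((z0 - zc) / R))   # 90 degrees when the center lies on the surface

        # Synthetic boundary points of a spherical-cap profile with a ~110 degree angle, plus noise
        theta = np.linspace(-0.3, np.pi + 0.3, 60)
        r_pts = 20.0 * np.cos(theta)
        z_pts = 20.0 * np.sin(theta) + 6.8 + 0.2 * np.random.default_rng(3).normal(size=60)
        print(f"fitted contact angle ~ {contact_angle(r_pts, z_pts):.1f} degrees")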

  12. Correlation between a 2D simple image analysis method and 3D bony motion during the pivot shift test.

    PubMed

    Arilla, Fabio V; Rahnemai-Azar, Amir Ata; Yacuzzi, Carlos; Guenther, Daniel; Engel, Benjamin S; Fu, Freddie H; Musahl, Volker; Debski, Richard E

    2016-12-01

    The pivot shift test is the most specific clinical test for detecting anterior cruciate ligament injury. The purpose of this study was to determine the correlation between a 2D simple image analysis method and the 3D bony motion of the knee during the pivot shift test, and to assess intra- and inter-examiner agreement. Three orthopedic surgeons performed three trials of the standardized pivot shift test in seven knees. Two devices were used to measure motion of the lateral knee compartment simultaneously: 1) the 2D simple image analysis method, in which translation was determined using a tablet computer with custom motion-tracking software that quantified movement of three markers attached to the skin over bony landmarks; and 2) 3D bony motion, in which an electromagnetic tracking system was used to measure movement of the same bony landmarks. The 2D simple image analysis method demonstrated a good correlation with the 3D bony motion (Pearson correlation: 0.75, 0.76 and 0.79). The 3D bony translation increased by 2.7 to 3.5 times for every unit increase measured by the 2D simple image analysis method. The mean intra-class correlation coefficients for the three examiners were 0.6 and 0.75 for the 3D bony motion and 2D image analyses, respectively, while the corresponding inter-examiner agreement was 0.65 and 0.73. The 2D simple image analysis method results are thus related to the 3D bony motion of the lateral knee compartment, even with skin artifacts present. This technique is a non-invasive and repeatable tool to quantify the motion of the lateral knee compartment during the pivot shift test. Copyright © 2016 Elsevier B.V. All rights reserved.
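
    The agreement statistics quoted above can be reproduced schematically from paired measurements. The sketch below only illustrates how a Pearson correlation, a scaling slope, and a simple one-way intraclass correlation coefficient might be computed; the numbers are placeholders, not data from this study.

        # Pearson correlation and slope between the 2D and 3D translations, plus ICC(1,1).
        import numpy as np
        from scipy import stats

        two_d   = np.array([2.1, 3.4, 1.8, 4.0, 2.9, 3.6, 2.4])     # 2D image-analysis translation (placeholder, mm)
        three_d = np.array([6.0, 10.1, 5.2, 13.0, 8.5, 11.8, 7.1])  # 3D bony translation (placeholder, mm)

        r, p = stats.pearsonr(two_d, three_d)
        slope = np.polyfit(two_d, three_d, 1)[0]                    # mm of 3D motion per mm of 2D motion
        print(f"Pearson r = {r:.2f} (p = {p:.3f}), slope = {slope:.2f}")

        def icc_1_1(ratings):
            """One-way random-effects ICC(1,1); ratings has shape (subjects, repeated trials)."""
            n, k = ratings.shape
            msb = k * np.sum((ratings.mean(axis=1) - ratings.mean())**2) / (n - 1)            # between subjects
            msw = np.sum((ratings - ratings.mean(axis=1, keepdims=True))**2) / (n * (k - 1))  # within subjects
            return (msb - msw) / (msb + (k - 1) * msw)

        trials = np.column_stack([two_d, two_d + 0.3, two_d - 0.2])  # three hypothetical repeated trials
        print(f"ICC(1,1) = {icc_1_1(trials):.2f}")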

  13. Component Analysis of Simple Span vs. Complex Span Adaptive Working Memory Exercises: A Randomized, Controlled Trial

    PubMed Central

    Gibson, Bradley S.; Kronenberger, William G.; Gondoli, Dawn M.; Johnson, Ann C.; Morrissey, Rebecca A.; Steeger, Christine M.

    2012-01-01

    There has been growing interest in using adaptive training interventions such as Cogmed-RM to increase the capacity of working memory (WM), but this intervention may not be optimally designed. For instance, Cogmed-RM can target the primary memory (PM) component of WM capacity, but not the secondary memory (SM) component. The present study hypothesized that Cogmed-RM does not target SM capacity because the simple span exercises it uses may not cause a sufficient amount of information to be lost from PM during training. To investigate, we randomly assigned participants to either a standard (simple span; N = 31) or a modified (complex span; N = 30) training condition. The main findings showed that SM capacity did not improve, even in the modified training condition. Hence, the potency of span-based WM interventions cannot be increased simply by converting simple span exercises into complex span exercises. PMID:23066524

  14. Theoretical analysis of saturation and limit cycles in short pulse FEL oscillators

    SciTech Connect

    Piovella, N.; Chaix, P.; Jaroszynski, D.

    1995-12-31

    We derive a model for the nonlinear evolution of a short-pulse oscillator from low signal up to saturation in the small-gain regime. This system is controlled by only two independent parameters: cavity detuning and losses. Using a closure relation, the model reduces to a closed set of five nonlinear partial differential equations for the EM field and the moments of the electron distribution. An analysis of the linearised system makes it possible to define and calculate the eigenmodes characterising the small-signal regime. An arbitrary solution of the complete nonlinear system can then be expanded in terms of these eigenmodes. This allows various observed nonlinear behaviours to be interpreted, including steady-state saturation, limit cycles, and the transition to chaos. The single-mode approximation reduces to a Landau-Ginzburg equation and yields the gain, nonlinear frequency shift, and efficiency as functions of cavity detuning and cavity losses. A generalisation to two modes gives a simple description of the limit-cycle behaviour as a competition between these two modes. An analysis of the transitions to more complex dynamics is also given. Finally, the analytical results are compared with experimental data from the FELIX experiment.
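
    For orientation only, a cubic Landau-Ginzburg (complex Ginzburg-Landau) amplitude equation for a slowly varying single-mode amplitude a(τ) generically takes the form below; the actual coefficients of the model depend on cavity detuning and losses and are not reproduced here.

        \frac{da}{d\tau} = (g - i\,\delta)\, a - (\beta_r + i\,\beta_i)\,\lvert a \rvert^{2}\, a

    In such a generic form, g would represent the net small-signal gain minus cavity losses, δ a detuning-induced frequency shift, and the cubic term gain saturation; steady-state saturation then corresponds to the stable nonzero fixed point |a|² = g/β_r for g > 0.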

  15. Theoretical analysis of NMR experiments in normal and superconducting states of high- Tc superconductors

    NASA Astrophysics Data System (ADS)

    Mack, Frank; Kulić, Miodrag L.; Mehring, Michael

    1998-01-01

    The Knight shift and the T₁ and T₂ rates of YBa₂Cu₃O₆₊ₓ in the normal and superconducting states are modeled by calculating the magnetic susceptibility in the bilayer Hubbard model within various approximations. An optimal set of parameters (OSP) is found within the RPA approximation that fits experiments on YBCO at optimal and nearly optimal doping. The analysis of the self-consistent FLEX approximation for the particle self-energy and susceptibility shows that the latter is renormalized quantitatively but not qualitatively. The differences between the oxygen and copper T₁ rates are explained by using the OSP parameters and assuming a finite hyperfine coupling C′ between ¹⁷O and next-nearest-neighbor Cu spins. The numerical analysis of T₁⁻¹ and T₂⁻¹ and of the ratio ⁶³T₁,ab⁻¹/⁶³T₁,c⁻¹ in the superconducting state strongly supports the idea of d-wave pairing in YBa₂Cu₃O₇, with much stronger intraplane than interplane pairing. It is also shown that the simple RPA or FLEX approximations are inadequate for explaining NMR data in underdoped YBCO systems, where antiferromagnetic fluctuations are very pronounced.

  16. Experimental and Theoretical Analysis of Central Hβ Asymmetry

    SciTech Connect

    Demura, A. V.; Demchenko, G. V.; Djurovic, S.; Cirisan, M.; Nikolic, D.; Gigosos, M. A.; Gonzalez, M. A.

    2008-10-22

    The hydrogen Balmer beta line has long been used as a plasma diagnostic tool. It is well known that experimental profiles of the Hβ line exhibit an asymmetry, while some of the most commonly used theoretical models, because of the approximations they employ, give unshifted and symmetrical profiles. In the present work the central part of the Hβ profile is reanalyzed experimentally and in terms of two theoretical approaches, based respectively on the Standard Theory (ST) assumptions and on the electric-field computer simulation method. The present experimental and theoretical results are compared with earlier experimental and theoretical data.

  17. Hemodynamic energy dissipation in the cardiovascular system: generalized theoretical analysis on disease states.

    PubMed

    Dasi, Lakshmi P; Pekkan, Kerem; de Zelicourt, Diane; Sundareswaran, Kartik S; Krishnankutty, Resmi; Delnido, Pedro J; Yoganathan, Ajit P

    2009-04-01

    We present a fundamental theoretical framework for the analysis of energy dissipation in any component of the circulatory system and formulate the full energy budget for both the venous and arterial circulations. New indices allowing disease-specific subject-to-subject comparisons and disease-to-disease hemodynamic evaluation (quantifying the hemodynamic severity of one vascular disease type relative to another) are presented based on this formalism. Dimensional analysis of the energy dissipation rate for the human circulation shows that the rate of energy dissipation is inversely proportional to the square of the patient's body surface area and directly proportional to the cube of the cardiac output. This result verifies the established formulae for energy loss in aortic stenosis, which were derived solely from empirical clinical experience. Three new indices are introduced to evaluate more complex disease states: (1) the circulation energy dissipation index (CEDI), (2) the aortic valve energy dissipation index (AV-EDI), and (3) the total cavopulmonary connection energy dissipation index (TCPC-EDI). CEDI is based on the full energy budget of the circulation and is the proper measure of the work performed by the ventricle relative to the net energy spent in overcoming frictional forces. It is shown to be 4.01 ± 0.16 for healthy individuals and above 7.0 for patients with severe aortic stenosis. Application of the CEDI index to single-ventricle venous physiology reveals that the surgically created Fontan circulation, which is indeed palliative, progressively degrades in hemodynamic efficiency with growth (p < 0.001), with the net dissipation in a typical Fontan patient (body surface area = 1.0 m²) being equivalent to that of an average case of severe aortic stenosis. AV-EDI is shown to be the proper index for gauging the hemodynamic severity of stenosed aortic valves, as it accurately reflects energy loss. It is about 0.28 ± 0.12 for healthy human valves. Moderate aortic stenosis has an AV-EDI one
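
    The dimensional-analysis result quoted above can be written compactly as the scaling relation below; the proportionality constant and the exact normalizations used to define the indices are not reproduced here.

        \dot{E}_{\mathrm{diss}} \;\propto\; \frac{\mathrm{CO}^{3}}{\mathrm{BSA}^{2}}

    Here CO is the cardiac output and BSA the body surface area; a dissipation index of this kind would normalize a measured dissipation by such a characteristic scale, so that subjects of different size and output become comparable (the paper's exact definitions are not reproduced here).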

  18. An effectiveness analysis of healthcare systems using a systems theoretic approach

    PubMed Central

    Chuang, Sheuwen; Inder, Kerry

    2009-01-01

    Background The use of accreditation and of quality measurement and reporting to improve healthcare quality and patient safety has been widespread across many countries. A review of the literature reveals no association between the accreditation system and the quality measurement and reporting systems, even when hospital compliance with these systems is satisfactory. Improvement of health care outcomes needs to be based on an appreciation of the whole system that contributes to those outcomes. The research literature currently lacks an appropriate analysis and is fragmented among activities. This paper aims to propose an integrated research model of these two systems and to demonstrate the usefulness of the resulting model for strategic research planning. Methods/design To achieve these aims, a systematic integration of the healthcare accreditation and quality measurement/reporting systems is structured hierarchically. A holistic systems relationship model of the administration segment is developed to act as an investigation framework. A literature-based empirical study is used to validate the proposed relationships derived from the model. Australian experiences are used as evidence for the system-effectiveness analysis and as a design basis for an adaptive-control study proposal, to show the usefulness of the system model for guiding strategic research. Results Three basic relationships were revealed and validated from the research literature. The systemic weaknesses of the accreditation system and of the quality measurement/reporting system were examined from a system-flow perspective. The approach provides a systems-thinking structure to assist the design of quality improvement strategies. The proposed model reveals a fourth, implicit relationship: a feedback between the quality-performance reporting components and the choice of accreditation components that is likely to play an important role in health care outcomes. An example involving accreditation surveyors is developed that

  19. Field theoretic analysis of a class of planar microwave and optoelectronic structures

    NASA Astrophysics Data System (ADS)

    Hahm, Yeon-Chang

    2000-11-01

    With increasing operating frequencies in CMOS RF/microwave integrated circuits, the performance of on-chip interconnects is becoming significantly affected by the lossy substrate. The purpose of the first part of this thesis is to develop a rigorous field-theoretic analysis approach for the efficient characterization of single and multiple coupled interconnects on a silicon substrate that is applicable over a wide range of substrate resistivities. The frequency-dependent transmission line parameters of a microstrip on silicon are determined by a new formulation based on a quasi-electrostatic and quasi-magnetostatic spectral domain approach. It is demonstrated that this new quasi-static formulation provides the complete frequency-dependent interconnect characteristics for all three major transmission line modes of operation. In particular, it is shown that in the case of heavily doped CMOS substrates, the distributed series inductance and series resistance parameters are significantly affected by the presence of longitudinal substrate currents giving rise to the substrate skin effect. The method is further extended to multiple coupled single- and multi-level interconnect structures with a ground plane and to multiple coupled coplanar stripline structures without a ground plane. The finite conductor thickness is taken into account in terms of a stacked conductor model. The new quasi-static approach is validated by comparison with results obtained with a full-wave spectral domain method and the commercial planar full-wave electromagnetic field solver HP/Momentum®, as well as with published simulation and measurement data. In the second part of this thesis, coupled planar optical interconnect structures are investigated based on a rigorous field-theoretic analysis combined with an application of normal mode theory for coupled transmission lines. A new transfer matrix description for a general optical directional coupler is presented. Based on this transfer matrix formulation

  20. Language in the brain at rest: new insights from resting state data and graph theoretical analysis

    PubMed Central

    Muller, Angela M.; Meyer, Martin

    2014-01-01

    In humans, the most obvious functional lateralization is the specialization of the left hemisphere for language. Against this background, the involvement of the right hemisphere in language is one of the most remarkable findings of the last two decades of fMRI research, yet its importance continues to be underestimated. We examined the interaction between the two hemispheres and the role of the right hemisphere in language. From two seeds representing Broca's area, we conducted a seed correlation analysis (SCA) of resting state fMRI data and identified a resting state network (RSN) overlapping to a significant extent with a language network generated by an automated meta-analysis tool. To elucidate the relationship between the clusters of this RSN, we then performed graph theoretical analyses (GTA) using the same resting state dataset. We show that the right hemisphere is clearly involved in language. A modularity analysis revealed that the interaction between the two hemispheres is mediated by three partitions: a bilateral frontal partition consisting of nodes representing the classical left-sided language regions as well as two right-sided homologs; a second bilateral partition consisting of nodes from the right frontal cortex and the left inferior parietal cortex as well as two nodes within the posterior cerebellum; and a third, also bilateral, partition comprising five regions extending from the posterior midline parts of the brain to the temporal and frontal cortex, two of whose nodes are prominent default-mode nodes. The involvement of this last partition in a language-relevant function is a novel finding. PMID:24808843
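
    The graph-theoretical step described above — building a network from inter-regional correlations and decomposing it into modules — can be sketched generically as follows. This is not the authors' pipeline; the synthetic time series, the correlation threshold, and the greedy modularity algorithm are illustrative assumptions.

        # Correlation network from ROI time series, then a modularity-based partition.
        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(4)
        latent = rng.normal(size=(300, 2))                        # two hidden "systems"
        mixing = np.zeros((2, 40))
        mixing[0, :20] = 1.0
        mixing[1, 20:] = 1.0
        ts = latent @ mixing + 0.5 * rng.normal(size=(300, 40))   # 40 synthetic ROI time series

        corr = np.corrcoef(ts.T)                                  # ROI-by-ROI correlation matrix
        np.fill_diagonal(corr, 0.0)
        G = nx.from_numpy_array((corr > 0.4).astype(int))         # binarize with an arbitrary threshold

        communities = nx.algorithms.community.greedy_modularity_communities(G)
        Q = nx.algorithms.community.modularity(G, communities)
        print(f"{len(communities)} modules, modularity Q = {Q:.2f}")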