Sample records for modeling TLM method

  1. Predicting plant protein subcellular multi-localization by Chou's PseAAC formulation based multi-label homolog knowledge transfer learning.

    PubMed

    Mei, Suyu

    2012-10-07

    Recent years have witnessed much progress in computational modeling for protein subcellular localization. However, there are far fewer computational models for predicting plant protein subcellular multi-localization. In this paper, we propose a multi-label multi-kernel transfer learning model for predicting multiple subcellular locations of plant proteins (MLMK-TLM). The method proposes a multi-label confusion matrix and adapts one-against-all multi-class probabilistic outputs to the multi-label learning scenario, based on which we further extend our published work MK-TLM (multi-kernel transfer learning based on Chou's PseAAC formulation for protein submitochondria localization) to plant protein subcellular multi-localization. By proper homolog knowledge transfer, MLMK-TLM is applicable to novel plant protein subcellular localization in the multi-label learning scenario. The experiments on a plant protein benchmark dataset show that MLMK-TLM outperforms the baseline model. Unlike the existing models, MLMK-TLM also reports its misleading tendency, which is important for a comprehensive survey of a model's multi-labeling performance. Copyright © 2012 Elsevier Ltd. All rights reserved.
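
    The abstract's key mechanical step, mapping one-against-all probabilistic outputs onto a multi-label prediction, can be illustrated compactly. The sketch below is a generic thresholding scheme, not the MLMK-TLM decision rule (which the abstract does not specify); the threshold value and the argmax fallback are assumptions for illustration.

    ```python
    import numpy as np

    def multilabel_from_ova(prob, threshold=0.5):
        """Turn one-vs-all class probabilities into multi-label predictions.

        prob: (n_samples, n_classes) probabilities from independent
        one-against-all classifiers. Every sample keeps at least its
        top-scoring class so no prediction is empty.
        """
        prob = np.asarray(prob, dtype=float)
        labels = prob >= threshold
        empty = ~labels.any(axis=1)                  # samples with no label yet
        labels[empty, prob[empty].argmax(axis=1)] = True
        return labels

    # Example: three proteins scored against four subcellular locations.
    p = np.array([[0.9, 0.7, 0.1, 0.2],
                  [0.3, 0.2, 0.4, 0.1],
                  [0.6, 0.1, 0.8, 0.5]])
    print(multilabel_from_ova(p))
    ```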

  2. Tetrahedral node for Transmission-Line Modeling (TLM) applied to Bio-heat Transfer.

    PubMed

    Milan, Hugo F M; Gebremedhin, Kifle G

    2016-12-01

    Transmission-Line Modeling (TLM) is a numerical method used to solve complex and time-domain bio-heat transfer problems. In TLM, parallelepipeds are used to discretize three-dimensional problems. The drawback in using parallelepiped shapes is that instead of refining only the domain of interest, a large additional domain would also have to be refined, which results in increased computational time and memory space. In this paper, we developed a tetrahedral node for TLM applied to bio-heat transfer that does not have the drawback associated with the parallelepiped node. The model includes heat source, blood perfusion, boundary conditions and initial conditions. The boundary conditions could be adiabatic, temperature, heat flux, or convection. The predicted temperature and heat flux were compared against results from an analytical solution, and the results agreed within 2% for a mesh size of 69,941 nodes and a time step of 5 ms. The method was further validated against published results of maximum skin-surface temperature difference in a breast with and without tumor, and the results agreed within 6%. The published results were obtained from a model that used the parallelepiped TLM node. Open-source software, TLMBHT, written using the theory developed herein, is available for download free of charge. Copyright © 2016 Elsevier Ltd. All rights reserved.
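
    Entries 2, 3 and 18 all rest on the same TLM scatter/connect kernel: temperature-analogue pulses travel along transmission lines, scatter at nodes, and are exchanged with neighbours each time step. The following 1D sketch assumes a de Cogan-style link-line network (node potential phi = V1 + V2, series resistor R, line impedance Z); R, Z and the mesh size are illustrative placeholders, not the calibrated thermal parameters of these papers.

    ```python
    import numpy as np

    # Minimal 1D link-line TLM for diffusion (scatter/connect form).
    # Each node sees two lines of impedance Z through a series resistor R.
    # Node potential (temperature analogue):  phi = VL + VR
    # Reflected pulse on a port:  Vr = ((R - Z)*Vi + Z*phi) / (R + Z)
    N, steps = 101, 500
    R, Z = 1.0, 0.5                    # illustrative, not material-calibrated

    VL = np.zeros(N)                   # incident pulses, left port of each node
    VR = np.zeros(N)                   # incident pulses, right port
    VL[N // 2] = VR[N // 2] = 50.0     # initial spike: phi = 100 at the centre

    for _ in range(steps):
        phi = VL + VR                                  # node potentials
        rL = ((R - Z) * VL + Z * phi) / (R + Z)        # scatter, left ports
        rR = ((R - Z) * VR + Z * phi) / (R + Z)        # scatter, right ports
        # Connect: a pulse leaving node k rightward arrives at node k+1's
        # left port one time step later (and vice versa).
        VL[1:] = rR[:-1]
        VR[:-1] = rL[1:]
        VL[0] = rL[0]                  # adiabatic (open-circuit) boundaries:
        VR[-1] = rR[-1]                # pulses reflect back unchanged

    print(phi[N // 2 - 2 : N // 2 + 3])                # the spike has diffused
    ```

    The tetrahedral and triangular nodes of these records generalize exactly this update to unstructured meshes; only the per-node scattering coefficients change.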

  3. Triangular node for Transmission-Line Modeling (TLM) applied to bio-heat transfer.

    PubMed

    Milan, Hugo F M; Gebremedhin, Kifle G

    2016-12-01

    Transmission-Line Modeling (TLM) is a numerical method used to solve complex and time-domain bio-heat transfer problems. In TLM, rectangles are used to discretize two-dimensional problems. The drawback in using rectangular shapes is that instead of refining only the domain of interest, a large additional domain will also be refined in the x and y axes, which results in increased computational time and memory space. In this paper, we developed a triangular node for TLM applied to bio-heat transfer that does not have the drawback associated with the rectangular nodes. The model includes heat source, blood perfusion (advection), boundary conditions and initial conditions. The boundary conditions could be adiabatic, temperature, heat flux, or convection. A matrix equation for TLM, which simplifies the solution of time-domain problems or solves steady-state problems, was also developed. The predicted results were compared against results obtained from the solution of a simplified two-dimensional problem, and they agreed within 1% for a mesh length of triangular faces of 59 µm ± 9 µm (mean ± standard deviation) and a time step of 1 ms. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Man-machine Integration Design and Analysis System (MIDAS) Task Loading Model (TLM) experimental and software detailed design report

    NASA Technical Reports Server (NTRS)

    Staveland, Lowell

    1994-01-01

    This is the experimental and software detailed design report for the prototype task loading model (TLM) developed as part of the man-machine integration design and analysis system (MIDAS), as implemented and tested in phase 6 of the Army-NASA Aircrew/Aircraft Integration (A3I) Program. The A3I program is an exploratory development effort to advance the capabilities and use of computational representations of human performance and behavior in the design, synthesis, and analysis of manned systems. The MIDAS TLM computationally models the demands designs impose on operators to aid engineers in the conceptual design of aircraft crewstations. This report describes the TLM and the results of a series of experiments run in this phase to test its capabilities as a predictive task demand modeling tool. Specifically, it includes discussions of: the inputs and outputs of the TLM, the theories underlying it, the results of the test experiments, the use of the TLM as both a stand-alone tool and as part of a complete human operator simulation, and a brief introduction to the TLM software design.

  5. Advanced RF Sources Based on Novel Nonlinear Transmission Lines

    DTIC Science & Technology

    2015-01-26

    ...microwave (HPM) sources. It is also critical to thin film devices and integrated circuits, carbon nanotube based cathodes and interconnects, field emitters... line model (TLM) in Fig. 6b. Our model is compared with TLM, shown in Fig. 7a. When the interface resistance rc is small, TLM becomes inaccurate... due to current crowding. Fig. 6. (a) Electrical contact including specific interfacial resistivity ρc, and (b) its transmission line model.

  6. Preparation and Characterization of Three Tilmicosin-loaded Lipid Nanoparticles: Physicochemical Properties and in-vitro Antibacterial Activities.

    PubMed

    Al-Qushawi, Alwan; Rassouli, Ali; Atyabi, Fatemeh; Peighambari, Seyed Mostafa; Esfandyari-Manesh, Mehdi; Shams, Gholam Reza; Yazdani, Azam

    2016-01-01

    Tilmicosin (TLM) is an important antibiotic in veterinary medicine with low bioavailability and safety. This study aimed to formulate and evaluate the physicochemical properties, storage stability after lyophilization, and antibacterial activity of three TLM-loaded lipid nanoparticles (TLM-LNPs): solid lipid nanoparticles (SLNs), nanostructured lipid carriers (NLCs), and lipid-core nanocapsules (LNCs). Physicochemical parameters such as mean particle diameter, polydispersity index, zeta potential, drug encapsulation efficiency (EE), loading capacity, and morphology of the formulations were evaluated, and the effects of various cryoprotectants during lyophilization and storage for 8 weeks were also studied. The profiles of TLM release and the antibacterial activities of the TLM-LNP suspensions (against Escherichia coli and Staphylococcus aureus) were tested in comparison with their corresponding powders. TLM-LNP suspensions were in the nano-scale range, with mean diameters of 186.3 ± 1.5, 149.6 ± 3.0, and 85.0 ± 1.0 nm, and EE of 69.1, 86.3, and 94.3% for TLM-SLNs, TLM-NLCs, and TLM-LNCs, respectively. TLM-LNCs gave the best results, with significantly lower particle size and higher EE (p < 0.05). Mannitol was the most effective cryoprotectant for lyophilization and storage of TLM-LNPs. The drug release profiles were biphasic, and release times were longer at pH 7.4, where TLM-NLC and TLM-LNC powders showed the longest release times. In microbiological tests, S. aureus was about 4 times more sensitive than E. coli to TLM-LNPs, with minimum inhibitory concentration ranges of 0.5-1.0 and 2-4 µg/mL respectively, and TLM-LNCs exhibited the best antibacterial activities. In conclusion, TLM-LNP formulations, especially TLM-LNCs and TLM-NLCs, are promising carriers for TLM with better drug encapsulation capacity, release behavior, and antibacterial activity.

  7. Evaluation of Lightning Induced Effects in a Graphite Composite Fairing Structure

    NASA Technical Reports Server (NTRS)

    Trout, Dawn H.; Stanley, James E.; Wahid, Parveen F.

    2011-01-01

    Defining the electromagnetic environment inside a graphite composite fairing due to nearby lightning strikes is of interest to spacecraft developers. This effort develops a transmission-line-matrix (TLM) model with CST Microstripes to examine induced voltages on interior wire loops in a composite fairing due to a simulated nearby lightning strike. A physical vehicle-like composite fairing test fixture is constructed to anchor a TLM model in the time domain and a FEKO method-of-moments model in the frequency domain. Results show that a typical graphite composite fairing provides adequate shielding, resulting in a significant reduction in induced voltages on high impedance circuits despite minimal attenuation of peak magnetic fields propagating through space in nearby lightning strike conditions.

  8. Preparation and Characterization of Three Tilmicosin-loaded Lipid Nanoparticles: Physicochemical Properties and in-vitro Antibacterial Activities

    PubMed Central

    Al-Qushawi, Alwan; Rassouli, Ali; Atyabi, Fatemeh; Peighambari, Seyed Mostafa; Esfandyari-Manesh, Mehdi; Shams, Gholam Reza; Yazdani, Azam

    2016-01-01

    Tilmicosin (TLM) is an important antibiotic in veterinary medicine with low bioavailability and safety. This study aimed to formulate and evaluate the physicochemical properties, storage stability after lyophilization, and antibacterial activity of three TLM-loaded lipid nanoparticles (TLM-LNPs): solid lipid nanoparticles (SLNs), nanostructured lipid carriers (NLCs), and lipid-core nanocapsules (LNCs). Physicochemical parameters such as mean particle diameter, polydispersity index, zeta potential, drug encapsulation efficiency (EE), loading capacity, and morphology of the formulations were evaluated, and the effects of various cryoprotectants during lyophilization and storage for 8 weeks were also studied. The profiles of TLM release and the antibacterial activities of the TLM-LNP suspensions (against Escherichia coli and Staphylococcus aureus) were tested in comparison with their corresponding powders. TLM-LNP suspensions were in the nano-scale range, with mean diameters of 186.3 ± 1.5, 149.6 ± 3.0, and 85.0 ± 1.0 nm, and EE of 69.1, 86.3, and 94.3% for TLM-SLNs, TLM-NLCs, and TLM-LNCs, respectively. TLM-LNCs gave the best results, with significantly lower particle size and higher EE (p < 0.05). Mannitol was the most effective cryoprotectant for lyophilization and storage of TLM-LNPs. The drug release profiles were biphasic, and release times were longer at pH 7.4, where TLM-NLC and TLM-LNC powders showed the longest release times. In microbiological tests, S. aureus was about 4 times more sensitive than E. coli to TLM-LNPs, with minimum inhibitory concentration ranges of 0.5-1.0 and 2-4 µg/mL respectively, and TLM-LNCs exhibited the best antibacterial activities. In conclusion, TLM-LNP formulations, especially TLM-LNCs and TLM-NLCs, are promising carriers for TLM with better drug encapsulation capacity, release behavior, and antibacterial activity. PMID:28261309

  9. Comprehensive study of numerical anisotropy and dispersion in 3-D TLM meshes

    NASA Astrophysics Data System (ADS)

    Berini, Pierre; Wu, Ke

    1995-05-01

    This paper presents a comprehensive analysis of the numerical anisotropy and dispersion of 3-D TLM meshes constructed using several generalized symmetrical condensed TLM nodes. The dispersion analysis is performed in isotropic lossless, isotropic lossy and anisotropic lossless media and yields a comparison of the simulation accuracy for the different TLM nodes. The effect of mesh grading on the numerical dispersion is also determined. The results compare meshes constructed with Johns' symmetrical condensed node (SCN), two hybrid symmetrical condensed nodes (HSCN) and two frequency domain symmetrical condensed nodes (FDSCN). It has been found that under certain circumstances, the time domain nodes may introduce numerical anisotropy when modelling isotropic media.

  10. Parallel 3D-TLM algorithm for simulation of the Earth-ionosphere cavity

    NASA Astrophysics Data System (ADS)

    Toledo-Redondo, Sergio; Salinas, Alfonso; Morente-Molinera, Juan Antonio; Méndez, Antonio; Fornieles, Jesús; Portí, Jorge; Morente, Juan Antonio

    2013-03-01

    A parallel 3D algorithm for solving time-domain electromagnetic problems with arbitrary geometries is presented. The technique employed is the Transmission Line Modeling (TLM) method implemented in Shared Memory (SM) environments. The benchmarking performed reveals that the maximum speedup depends on the memory size of the problem as well as multiple hardware factors, such as the disposition of CPUs, cache, or memory. A maximum speedup of 15 has been measured for the largest problem. In certain circumstances of low memory requirements, superlinear speedup is achieved using our algorithm. The method is employed to model the Earth-ionosphere cavity, thus enabling a study of the natural electromagnetic phenomena that occur in it. The algorithm allows complete 3D simulations of the cavity with a resolution of 10 km within a reasonable timescale.
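
    The parallelization details are specific to that work, but the serial kernel of any such code is the TLM scatter/connect pair. Below is a minimal sketch of the textbook 2D shunt node (not the authors' 3D node): the four incident pulses per node are scattered by Vr_k = (1/2)*sum(V) - V_k and then exchanged with the facing ports of neighbouring nodes. Grid size, source and boundaries are arbitrary choices for illustration.

    ```python
    import numpy as np

    # 2D shunt-node TLM: four incident pulses per node (0:-x, 1:+x, 2:-y, 3:+y).
    nx = ny = 200
    V = np.zeros((4, nx, ny))

    for t in range(300):
        V[:, nx // 2, ny // 2] += 0.25 * np.sin(2 * np.pi * t / 30)  # soft source
        tot = V.sum(axis=0)
        r = 0.5 * tot - V                     # scatter: r_k = (1/2)*sum(V) - V_k
        # Connect: each reflected pulse becomes the incident pulse on the
        # facing port of the neighbouring node at the next time step.
        V[0, 1:, :] = r[1, :-1, :]
        V[1, :-1, :] = r[0, 1:, :]
        V[2, :, 1:] = r[3, :, :-1]
        V[3, :, :-1] = r[2, :, 1:]
        # Open-circuit walls (reflection coefficient +1) at the mesh edges.
        V[0, 0, :] = r[0, 0, :]
        V[1, -1, :] = r[1, -1, :]
        V[2, :, 0] = r[2, :, 0]
        V[3, :, -1] = r[3, :, -1]

    Vz = 0.5 * V.sum(axis=0)                  # node voltage (field analogue)
    print(Vz[nx // 2 - 5, ny // 2])
    ```

    Parallelizing this kernel is natural because scatter is node-local and connect touches only nearest neighbours, which is what makes shared-memory speedups like those reported achievable.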

  11. Persufflation Improves Pancreas Preservation When Compared With the Two-Layer Method

    PubMed Central

    Scott, W.E.; O'Brien, T.D.; Ferrer-Fabrega, J.; Avgoustiniatos, E.S.; Weegman, B.P.; Anazawa, T.; Matsumoto, S.; Kirchner, V.A.; Rizzari, M.D.; Murtaugh, M.P.; Suszynski, T.M.; Aasheim, T.; Kidder, L.S.; Hammer, B.E.; Stone, S.G.; Tempelman, L.; Sutherland, D.E.R.; Hering, B.J.; Papas, K.K.

    2010-01-01

    Islet transplantation is emerging as a promising treatment for patients with type 1 diabetes. It is important to maximize viable islet yield for each organ due to scarcity of suitable human donor pancreata, high cost, and the high dose of islets required for insulin independence. However, organ transport for 8 hours using the two-layer method (TLM) frequently results in lower islet yields. Since efficient oxygenation of the core of larger organs (e.g., pig, human) in TLM has recently come under question, we investigated oxygen persufflation as an alternative way to supply the pancreas with oxygen during preservation. Porcine pancreata were procured from non–heart-beating donors and preserved by either TLM or persufflation for 24 hours and fixed. Biopsies were collected from several regions of the pancreas, sectioned, stained with hematoxylin and eosin, and evaluated by a histologist. Persufflated tissues exhibited distended capillaries due to gas perfusion and significantly less autolysis/cell death than regions not exposed to persufflation or tissues exposed to TLM. The histology presented here suggests that after 24 hours of preservation, persufflation dramatically improves tissue health when compared with TLM. These results indicate the potential for persufflation to improve viable islet yields and extend the duration of preservation, allowing more donor organs to be utilized. PMID:20692396

  12. Antimycobacterial action of thiolactomycin: an inhibitor of fatty acid and mycolic acid synthesis.

    PubMed Central

    Slayden, R A; Lee, R E; Armour, J W; Cooper, A M; Orme, I M; Brennan, P J; Besra, G S

    1996-01-01

    Thiolactomycin (TLM) possesses in vivo antimycobacterial activity against the saprophytic strain Mycobacterium smegmatis mc2155 and the virulent strain M. tuberculosis Erdman, resulting in complete inhibition of growth on solid media at 75 and 25 micrograms/ml, respectively. Use of an in vitro murine macrophage model also demonstrated the killing of viable intracellular M. tuberculosis in a dose-dependent manner. Through the use of in vivo [1,2-14C]acetate labeling of M. smegmatis, TLM was shown to inhibit the synthesis of both fatty acids and mycolic acids. However, synthesis of the shorter-chain alpha'-mycolates of M. smegmatis was not inhibited by TLM, whereas synthesis of the characteristic longer-chain alpha-mycolates and epoxymycolates was almost completely inhibited at 75 micrograms/ml. The use of M. smegmatis cell extracts demonstrated that TLM specifically inhibited the mycobacterial acyl carrier protein-dependent type II fatty acid synthase (FAS-II) but not the multifunctional type I fatty acid synthase (FAS-I). In addition, selective inhibition of long-chain mycolate synthesis by TLM was demonstrated in a dose-response manner in purified, cell wall-containing extracts of M. smegmatis cells. The in vivo and in vitro data and knowledge of the mechanism of TLM resistance in Escherichia coli suggest that two distinct TLM targets exist in mycobacteria, the beta-ketoacyl-acyl carrier protein synthases involved in FAS-II and the elongation steps leading to the synthesis of the alpha-mycolates and oxygenated mycolates. The efficacy of TLM against M. smegmatis and M. tuberculosis provides the prospects of identifying fatty acid and mycolic acid biosynthetic genes and revealing a novel range of chemotherapeutic agents directed against M. tuberculosis. PMID:9124847

  13. Global Modeling and Data Assimilation. Volume 11; Documentation of the Tangent Linear and Adjoint Models of the Relaxed Arakawa-Schubert Moisture Parameterization of the NASA GEOS-1 GCM; 5.2

    NASA Technical Reports Server (NTRS)

    Suarez, Max J. (Editor); Yang, Wei-Yu; Todling, Ricardo; Navon, I. Michael

    1997-01-01

    A detailed description of the development of the tangent linear model (TLM) and its adjoint model of the Relaxed Arakawa-Schubert moisture parameterization package used in the NASA GEOS-1 C-Grid GCM (Version 5.2) is presented. The notational conventions used in the TLM and its adjoint codes are described in detail.
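
    Documentation of a tangent linear model and its adjoint is conventionally accompanied by two correctness checks, which the sketch below illustrates on a toy scalar map standing in for the GEOS-1 moist physics (the model M, its TLM and its adjoint here are illustrative assumptions, not the Relaxed Arakawa-Schubert code).

    ```python
    import numpy as np

    # Standard TLM/adjoint checks on a toy nonlinear model M(x).
    # 1) Taylor test:  ||M(x + e*dx) - M(x) - e*L(x)dx|| -> O(e^2)
    # 2) Dot product:  <L dx, y> == <dx, L^T y>  (to machine precision)
    def M(x):                      # toy nonlinear "forecast" step
        return x + 0.1 * x * (1.0 - x)

    def tlm(x, dx):                # tangent linear of M about x
        return dx + 0.1 * (1.0 - 2.0 * x) * dx

    def adj(x, y):                 # adjoint; Jacobian is diagonal here,
        return y + 0.1 * (1.0 - 2.0 * x) * y   # so it is self-adjoint

    rng = np.random.default_rng(0)
    x, dx, y = rng.random(5), rng.random(5), rng.random(5)

    for e in (1e-1, 1e-2, 1e-3):
        err = np.linalg.norm(M(x + e * dx) - M(x) - e * tlm(x, dx))
        print(f"eps={e:.0e}  Taylor residual={err:.3e}")   # shrinks like e**2

    lhs = np.dot(tlm(x, dx), y)
    rhs = np.dot(dx, adj(x, y))
    print("dot-product test:", lhs - rhs)                  # ~ machine zero
    ```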

  14. On the Direct Assimilation of Along-track Sea Surface Height Observations into a Free-surface Ocean Model Using a Weak Constraints Four Dimensional Variational (4dvar) Method

    NASA Astrophysics Data System (ADS)

    Ngodock, H.; Carrier, M.; Smith, S. R.; Souopgui, I.; Martin, P.; Jacobs, G. A.

    2016-02-01

    The representer method is adopted for solving a weak constraints 4dvar problem for the assimilation of ocean observations including along-track SSH, using a free surface ocean model. Direct 4dvar assimilation of SSH observations along the satellite tracks requires that the adjoint model be integrated with Dirac impulses on the right hand side of the adjoint equations for the surface elevation equation. The solution of this adjoint model will inevitably include surface gravity waves, and it constitutes the forcing for the tangent linear model (TLM) according to the representer method. This yields an analysis that is contaminated by gravity waves. A method for avoiding the generation of the surface gravity waves in the analysis is proposed in this study; it consists of removing the adjoint of the free surface from the right hand side (rhs) of the free surface mode in the TLM. The information from the SSH observations will still propagate to all other variables via the adjoint of the balance relationship between the barotropic and baroclinic modes, resulting in the correction to the surface elevation. Two assimilation experiments are carried out in the Gulf of Mexico: one with adjoint forcing included on the rhs of the TLM free surface equation, and the other without. Both analyses are evaluated against the assimilated SSH observations, SSH maps from Aviso and independent surface drifters, showing that the analysis that did not include adjoint forcing in the free surface is more accurate. This study shows that when a weak constraint 4dvar approach is considered for the assimilation of along-track SSH observations using a free surface model, with the aim of correcting the mesoscale circulation, an independent model error should not be assigned to the free surface.

  15. A Theoretical Model of Team-Licensed Merchandise Purchasing (TLMP)

    ERIC Educational Resources Information Center

    Lee, Donghun; Trail, Galen

    2011-01-01

    Although it is evident that sales of team licensed merchandise (TLM) contribute to the overall consumption of sport, research efforts that comprehensively describe what triggers the consumption of TLM are lacking (Lee, Trail, Kwon, & Anderson, 2011). Therefore, based on multiple theories (i.e., values theory, identity theory, attitude theory, and…

  16. A hybrid Boundary Element Unstructured Transmission-line (BEUT) method for accurate 2D electromagnetic simulation

    NASA Astrophysics Data System (ADS)

    Simmons, Daniel; Cools, Kristof; Sewell, Phillip

    2016-11-01

    Time domain electromagnetic simulation tools have the ability to model transient, wide-band applications, and non-linear problems. The Boundary Element Method (BEM) and the Transmission Line Modeling (TLM) method are both well established numerical techniques for simulating time-varying electromagnetic fields. The former surface based method can accurately describe outwardly radiating fields from piecewise uniform objects and efficiently deals with large domains filled with homogeneous media. The latter volume based method can describe inhomogeneous and non-linear media and has been proven to be unconditionally stable. Furthermore, the Unstructured TLM (UTLM) enables modelling of geometrically complex objects by using triangular meshes which removes staircasing and unnecessary extensions of the simulation domain. The hybridization of BEM and UTLM which is described in this paper is named the Boundary Element Unstructured Transmission-line (BEUT) method. It incorporates the advantages of both methods. The theory and derivation of the 2D BEUT method is described in this paper, along with any relevant implementation details. The method is corroborated by studying its correctness and efficiency compared to the traditional UTLM method when applied to complex problems such as the transmission through a system of Luneburg lenses and the modelling of antenna radomes for use in wireless communications.

  17. A hybrid Boundary Element Unstructured Transmission-line (BEUT) method for accurate 2D electromagnetic simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simmons, Daniel, E-mail: daniel.simmons@nottingham.ac.uk; Cools, Kristof; Sewell, Phillip

    Time domain electromagnetic simulation tools have the ability to model transient, wide-band applications, and non-linear problems. The Boundary Element Method (BEM) and the Transmission Line Modeling (TLM) method are both well established numerical techniques for simulating time-varying electromagnetic fields. The former surface based method can accurately describe outwardly radiating fields from piecewise uniform objects and efficiently deals with large domains filled with homogeneous media. The latter volume based method can describe inhomogeneous and non-linear media and has been proven to be unconditionally stable. Furthermore, the Unstructured TLM (UTLM) enables modelling of geometrically complex objects by using triangular meshes which removes staircasing and unnecessary extensions of the simulation domain. The hybridization of BEM and UTLM which is described in this paper is named the Boundary Element Unstructured Transmission-line (BEUT) method. It incorporates the advantages of both methods. The theory and derivation of the 2D BEUT method is described in this paper, along with any relevant implementation details. The method is corroborated by studying its correctness and efficiency compared to the traditional UTLM method when applied to complex problems such as the transmission through a system of Luneburg lenses and the modelling of antenna radomes for use in wireless communications.

  18. Graded meshes in bio-thermal problems with transmission-line modeling method.

    PubMed

    Milan, Hugo F M; Carvalho, Carlos A T; Maia, Alex S C; Gebremedhin, Kifle G

    2014-10-01

    In this study, the transmission-line modeling (TLM) method applied to bio-thermal problems was improved by incorporating several novel computational techniques, including graded meshes, which ran 9 times faster and used only a fraction (16%) of the computational resources required by regular meshes in analyzing heat flow through heterogeneous media. Graded meshes, unlike regular meshes, allow heat sources to be modeled in all segments of the mesh. A new boundary condition that considers thermal properties, and thus results in more realistic modeling of complex problems, is introduced. A new way of calculating an error parameter is also introduced. The calculated temperatures between nodes were compared against results from the literature and agreed within less than 1%. It is reasonable, therefore, to conclude that the improved TLM model described herein has great potential in heat transfer modeling of biological systems. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Time-lapse microscopy and image analysis in basic and clinical embryo development research.

    PubMed

    Wong, C; Chen, A A; Behr, B; Shen, S

    2013-02-01

    Mammalian preimplantation embryo development is a complex process in which the exact timing and sequence of events are as essential as the accurate execution of the events themselves. Time-lapse microscopy (TLM) is an ideal tool to study this process since the ability to capture images over time provides a combination of morphological, dynamic and quantitative information about developmental events. Here, we systematically review the application of TLM in basic and clinical embryo research. We identified all relevant preimplantation embryo TLM studies published in English up to May 2012 using PubMed and Google Scholar. We then analysed the technical challenges involved in embryo TLM studies and how these challenges may be overcome with technological innovations. Finally, we reviewed the different types of TLM embryo studies, with a special focus on how TLM can benefit clinical assisted reproduction. Although new parameters predictive of embryo development potential may be discovered and used clinically to potentially increase the success rate of IVF, adopting TLM to routine clinical practice will require innovations in both optics and image analysis. Combined with such innovations, TLM may provide embryologists and clinicians with an important tool for making critical decisions in assisted reproduction. In this review, we perform a literature search of all published early embryo development studies that used time-lapse microscopy (TLM). From the literature, we discuss the benefits of TLM over traditional time-point analysis, as well as the technical difficulties and solutions involved in implementing TLM for embryo studies. We further discuss research that has successfully derived non-invasive markers that may increase the success rate of assisted reproductive technologies, primarily IVF. Most notably, we extend our discussion to highlight important considerations for the practical use of TLM in research and clinical settings. Copyright © 2012 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  20. Transoral robotic surgery vs transoral laser microsurgery for resection of supraglottic cancer: a pilot surgery.

    PubMed

    Ansarin, Mohssen; Zorzi, Stefano; Massaro, Maria Angela; Tagliabue, Marta; Proh, Michele; Giugliano, Gioacchino; Calabrese, Luca; Chiesa, Fausto

    2014-03-01

    Transoral laser microsurgery (TLM) is a mature approach to supraglottic cancer, while transoral robotic surgery (TORS) is emerging. The present study compared these approaches. The first 10 patients (2002-2005) given TLM were compared with the first 10 (2007-2011) given TORS for cT1-3 cN0-cN2c supraglottic cancer. A feeding tube was used in four TLM and seven TORS patients. Margins were more often positive, but operating times shorter, in TORS. All 10 TORS patients are without evidence of disease, but only six TLM patients remain disease-free after much longer follow-up. TORS was considerably less uncomfortable and fatiguing for the surgeon. TORS seems as safe and effective as TLM. Shorter TORS operating times are probably attributable to prior experience with TLM. For laryngeal exposure, length of tube placement and margin evaluability, TLM was superior; however, this may change as TORS develops and transoral robotic instruments are optimized. Copyright © 2013 John Wiley & Sons, Ltd.

  21. Discrete Huygens’ modeling for the characterization of a sound absorbing medium

    NASA Astrophysics Data System (ADS)

    Chai, L.; Kagawa, Y.

    2007-07-01

    Based on the equivalence between wave propagation in electrical transmission lines and in acoustic tubes, the authors have proposed the transmission-line matrix (TLM) method as a time-domain solution method for sound fields. TLM is well known in the electromagnetic engineering community and is equivalent to discrete Huygens' modeling. Wave propagation is simulated by tracing the sequences of transmission and scattering of impulses. The theory and demonstrated examples are presented in the references, in which a sound absorbing field was preliminarily modeled as a medium with a simple acoustic resistance, independent of frequency and of the angle of incidence, for an absorbing layer placed on the room wall surface. The present work is concerned with the time-domain response for the characterization of sound absorbing materials. A lossy component with variable propagation velocity is introduced for sound absorbing materials to facilitate the energy consumption. The frequency characteristics of the absorption coefficient are also considered for normal, oblique and random incidence. Numerical demonstrations show that the present model provides reasonable time-domain modeling of homogeneous sound absorbing materials.

  22. Method for contact resistivity measurements on photovoltaic cells and cell adapted for such measurement

    NASA Technical Reports Server (NTRS)

    Burger, Dale R. (Inventor)

    1986-01-01

    A method is disclosed for scribing at least three grid contacts of a photovoltaic cell to electrically isolate them from the grid contact pattern used to collect solar current generated by the cell, and using the scribed segments for determining parameters of the cell by a combination of contact end resistance (CER) measurements using a minimum of three equally or unequally spaced lines, and transmission line model (TLM) measurements using a minimum of four unequally spaced lines. TLM measurements may be used to determine the sheet resistance under the contact, R_sk, while CER measurements are used to determine the contact resistivity, ρ_c, from a nomograph of contact resistivity as a function of contact end resistance and sheet resistivity under the contact. In some cases, such as that of silicon photovoltaic cells, sheet resistivity under the contact may be assumed to be equal to the known sheet resistance, R_s, of the semiconductor material, thereby obviating the need for TLM measurements to determine R_sk.

  23. A mixed-methods evaluation of complementary therapy services in palliative care: yoga and dance therapy.

    PubMed

    Selman, L E; Williams, J; Simms, V

    2012-01-01

    To inform service provision and future research, we evaluated two complementary therapy services: yoga classes and dance therapy [The Lebed Method (TLM)]. Both were run as 6-week group courses. Patients completed the Measure Yourself Concerns and Wellbeing questionnaire pre- and post-course. Mean change over time was calculated for patient-nominated concern and well-being scores. Qualitative data regarding factors affecting health other than the therapy, and benefits of the service, were analysed using content analysis. Eighteen patients participated (mean age 63.8 years; 16 female; 14 cancer diagnoses); 10 were doing yoga, five TLM, and three both yoga and TLM; 14 completed more than one assessed course. Patients' most prevalent concerns were: mobility/fitness (n = 20), breathing problems (n = 20), arm, shoulder and neck problems (n = 18), difficulty relaxing (n = 8), back/postural problems (n = 8), and fear/anxiety (n = 5). Factors affecting patients' health other than the therapy were prevalent and predominantly negative (e.g. treatment side effects). Patients reported psycho-spiritual, physical and social benefits. Concern scores improved significantly (P < 0.001) for both therapies; improved well-being was clinically significant for yoga. Evaluations of group complementary therapy services are feasible, can be conducted effectively and have implications for future research. Yoga and TLM may be of benefit in this population. © 2011 Blackwell Publishing Ltd.

  24. Influences of Detection Pinhole and Sample Flow on Thermal Lens Detection in Microfluidic Systems

    NASA Astrophysics Data System (ADS)

    Liu, Mingqiang; Franko, Mladen

    2014-12-01

    Thermal lens microscopy (TLM), due to its high temporal and spatial resolution, has been coupled to lab-on-chip chemistry for detection of a variety of compounds in chemical and biological fields. Due to the very short optical path length (usually below 100 µm) in a microchip, the sensitivity of TLM is unfortunately still 10 to 100 times lower than conventional TLS with a 1 cm sample length. Optimization of the TLM optical configuration was made with respect to different pinhole aperture-to-beam size ratios for the best signal-to-noise ratio. In the static mode, the instrumental noise comes mainly from the shot noise of the probe beam when the chopper frequency is over 1 kHz, or from the flicker noise of the probe beam at low frequencies. In the flowing mode, the flow-induced noise becomes dominant when the flow rate is high. At a given flow rate, fluids with a higher density and/or a higher viscosity will cause larger flow-induced noise. As an application, a combined microfluidic flow injection analysis (FIA)-TLM device was developed for rapid determination of pollutants by colorimetric reactions. Hexavalent chromium [Cr(VI)] was measured as a model analyte. Analytical signals for 12 sample injections in 1 min have been recorded by the FIA-TLM. For injections of sub-µL samples into the microfluidic stream in a deep microchannel, a low limit of detection was achieved for Cr(VI) in water at 60 mW excitation power.
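
    The path-length argument in this record (a sub-100 µm chip versus a 1 cm cuvette) can be made concrete with the usual thermal-lens enhancement parameter, theta = -P·α·l·(dn/dT)/(λ_p·κ). A back-of-envelope sketch follows; the property values are rough literature figures for ethanol, and the power and wavelength are illustrative, not this paper's exact operating point.

    ```python
    # Thermal-lens enhancement parameter (Shen's mode-mismatched model):
    #   theta = -P * alpha * l * (dn/dT) / (lambda_p * kappa)
    P     = 100e-3    # excitation power, W
    alpha = 1.0       # absorption coefficient, 1/m
    dn_dT = -4.0e-4   # thermo-optic coefficient, 1/K (approx., ethanol)
    kappa = 0.17      # thermal conductivity, W/(m K) (approx., ethanol)
    lam_p = 633e-9    # probe wavelength, m

    for l, name in ((1e-2, "1 cm cuvette"), (100e-6, "100 um microchannel")):
        theta = -P * alpha * l * dn_dT / (lam_p * kappa)
        print(f"{name}: theta = {theta:.3g}")
    # Equal absorbance per metre gives a ~100x weaker signal in the chip,
    # which is why TLM work optimizes pinholes, beam geometry and noise.
    ```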

  25. Early body composition, but not body mass, is associated with future accelerated decline in muscle quality

    PubMed Central

    Chiles Shaffer, Nancy; Gonzalez‐Freire, Marta; Shardell, Michelle D.; Zoli, Marco; Studenski, Stephanie A.; Ferrucci, Luigi

    2017-01-01

    Background: Muscle quality (MQ) or strength-to-mass ratio declines with aging, but the rate of MQ change with aging is highly heterogeneous across individuals. The identification of risk factors for accelerated MQ decline may offer clues to identify the underpinning physiological mechanisms and indicate targets for prevention and treatment. Using data from the Baltimore Longitudinal Study of Aging, we tested whether measures of body mass and body composition are associated with differential rates of changes in MQ with aging. Methods: Participants included 511 men and women, aged 50 years or older, followed for an average of 4 years (range: 1-8). MQ was operationalized as the ratio between knee-extension isokinetic strength and CT thigh muscle cross-sectional area. Predictors included body mass and body composition measures: weight (kg), body mass index (BMI, kg/m2), dual-energy x-ray absorptiometry-measured total body fat mass (TFM, kg) and lean mass (TLM, kg), and body fatness (TFM/weight). Covariates were baseline age, sex, race, and body height. Results: Muscle quality showed a significant linear decline over the follow-up (average rate of decline 0.02 Nm/cm2 per year, P < .001). Independent of covariates, neither baseline body weight (P = .756) nor BMI (P = .777) was predictive of the longitudinal rate of decline in MQ. Instead, higher TFM and lower TLM at baseline predicted steeper longitudinal decline in MQ (P = .036 and P < .001, respectively). In particular, participants with both high TFM and low TLM at baseline experienced the most dramatic decline compared with those with low TFM and high TLM (about 3% per year vs. 0.5% per year, respectively). Participants in the higher tertile of baseline body fatness presented a significantly faster decline of MQ than the rest of the population (P = .021). Similar results were observed when body mass, TFM, and TLM were modeled as time-dependent predictors. Conclusions: Body composition, but not weight nor BMI, is associated with future MQ decline, suggesting that preventive strategies aimed at maintaining good MQ with aging should specifically target body composition features. PMID:28198113

  26. Contact resistance extraction methods for short- and long-channel carbon nanotube field-effect transistors

    NASA Astrophysics Data System (ADS)

    Pacheco-Sanchez, Anibal; Claus, Martin; Mothes, Sven; Schröter, Michael

    2016-11-01

    Three different methods for the extraction of the contact resistance, based on both the well-known transfer length method (TLM) and two variants of the Y-function method, have been applied to simulation and experimental data of short- and long-channel CNTFETs. While TLM requires special CNT test structures, standard electrical device characteristics are sufficient for the Y-function methods. The methods have been applied to CNTFETs with low and high channel resistance. It turned out that the standard Y-function method fails to deliver the correct contact resistance in the case of a channel resistance that is high relative to the contact resistances. A physics-based validation is also given for the application of these methods, based on applying traditional Si MOSFET theory to quasi-ballistic CNTFETs.
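
    Note that TLM in this record is the transfer length method: the total two-terminal resistance grows linearly with pad spacing, R_tot(d) = R_sh·d/W + 2·R_c, so a straight-line fit yields the sheet and contact resistances, and from those the transfer length and specific contact resistivity. A minimal sketch with synthetic numbers (the geometry and resistance values are invented for illustration, not CNTFET data):

    ```python
    import numpy as np

    # Transfer length method: fit R_tot(d) = R_sh * d / W + 2 * R_c, then
    # L_T = R_c * W / R_sh and rho_c ~= R_sh * L_T**2 (long-contact limit).
    W = 100e-6                                   # contact width, m (example)
    d = np.array([5, 10, 20, 40, 80]) * 1e-6     # pad spacings, m
    R = np.array([52.0, 104.0, 208.0, 416.0, 832.0]) + 2 * 25.0  # ohms (synthetic)

    slope, intercept = np.polyfit(d, R, 1)
    R_sh = slope * W                             # sheet resistance, ohm/sq
    R_c = intercept / 2                          # one contact, ohms
    L_T = R_c * W / R_sh                         # transfer length, m
    rho_c = R_sh * L_T**2                        # specific contact resistivity

    print(f"R_sh = {R_sh:.1f} ohm/sq, R_c = {R_c:.1f} ohm, "
          f"L_T = {L_T*1e6:.2f} um, rho_c = {rho_c:.2e} ohm*m^2")
    ```

    The record's caveat maps onto this picture directly: when the channel term R_sh·d/W dwarfs 2·R_c, the intercept (and hence R_c) becomes poorly conditioned, which is where the Y-function variants earn their keep.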

  27. Membrane Mediated Antimicrobial and Antitumor Activity of Cathelicidin 6: Structural Insights from Molecular Dynamics Simulation on Multi-Microsecond Scale

    PubMed Central

    Sahoo, Bikash Ranjan; Fujiwara, Toshimichi

    2016-01-01

    The cathelicidin-derived bovine antimicrobial peptide BMAP27 exhibits an effective microbicidal activity and moderate cytotoxicity towards erythrocytes. Irrespective of its therapeutic and multidimensional potentiality, the structural studies are still elusive. Moreover, the mechanism of BMAP27-mediated pore formation in heterogeneous lipid membrane systems is poorly explored. Here, we studied the effect of BMAP27 in model cell-membrane systems such as zwitterionic, anionic, thymocytes-like (TLM) and leukemia-like membranes (LLM) by performing molecular dynamics (MD) simulations longer than 100 μs. All-atom MD studies revealed a stable helical conformation in the presence of anionic lipids; however, significant loss of helicity was identified in TLM and zwitterionic systems. A peptide tilt (~45°) and central kink (at residue F10) was found in anionic and LLM models, respectively, with an average membrane penetration of < 0.5 nm. Coarse-grained (CG) MD analysis on a multi-μs scale shed light on the membrane-dependent peptide and lipid organization. Stable micelle and end-to-end like oligomers were formed in zwitterionic and TLM models, respectively. In contrast, unstable oligomer formation and monomeric BMAP27 penetration were observed in anionic and LLM systems with selective anionic lipid aggregation (in LLM). Peptide penetration up to ~1.5 nm was observed in CG-MD systems with the BMAP27 C-terminal oriented towards the bilayer core. Structural inspection suggested membrane penetration by micelle/end-to-end like peptide oligomers (carpet-model like) in the zwitterionic/TLM systems, and transmembrane-mode (toroidal-pore like) in the anionic/LLM systems, respectively. Structural insights and energetic interpretation in a BMAP27 mutant highlighted the role of F10 and hydrophobic residues in mediating a membrane-specific peptide interaction. Free energy profiling showed a favorable (-4.58 kcal/mol for LLM) and unfavorable (+0.17 kcal/mol for TLM) peptide insertion in anionic and neutral systems, respectively. This determination can be exploited to regulate cell-specific BMAP27 cytotoxicity for the development of potential drugs and antibiotics. PMID:27391304

  28. 76 FR 39033 - Airworthiness Directives; Rolls-Royce Deutschland Ltd & Co KG (RRD) BR700-710 Series Turbofan...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-05

    ... limitation section (ALS) of their approved maintenance program (Time Limits Manual (TLM), chapters 05-00-01... airplanes used for pilot training. Revise their ALS of their approved maintenance program (TLM chapters 05... limitations section (ALS) of the operators approved maintenance program (reference the Time Limits Manual (TLM...

  29. Advancing Detached-Eddy Simulation

    DTIC Science & Technology

    2007-01-01

    ...fluxes leads to an improvement in the stability of the solution. This matrix is solved iteratively using a symmetric Gauss-Seidel procedure. Newton's sub... model (TLM) is a zonal approach, proposed by Balaras and Benocci (5) and Balaras et al. (4). The method involved the solution of filtered Navier... LES mesh. The method was subsequently used by Cabot (6) and Diurno et al. (7) to obtain the solution of the flow over a backward-facing step and by...

  30. Antitumoral, antihypertensive, antimicrobial, and antioxidant effects of an octanuclear copper(II)-telmisartan complex with a hydrophobic nanometer hole.

    PubMed

    Islas, María S; Martínez Medina, Juan J; López Tévez, Libertad L; Rojo, Teófilo; Lezama, Luis; Griera Merino, Mercedes; Calleros, Laura; Cortes, María A; Rodriguez Puyol, Manuel; Echeverría, Gustavo A; Piro, Oscar E; Ferrer, Evelina G; Williams, Patricia A M

    2014-06-02

    A new Cu(II) complex with the antihypertensive drug telmisartan, [Cu8Tlm16]·24H2O (CuTlm), was synthesized and characterized by elemental analysis and by electronic, FTIR, Raman and electron paramagnetic resonance spectroscopy. The crystal structure (at 120 K) was solved by X-ray diffraction methods. The octanuclear complex is a hydrate of, but otherwise isostructural to, the previously reported [Cu8Tlm16] complex. [Cu8Tlm16]·24H2O crystallizes in the tetragonal P4/ncc space group with a = b = 47.335(1) Å, c = 30.894(3) Å and Z = 4 molecules per unit cell, giving a macrocyclic ring with a double helical structure. The Cu(II) ions are in a distorted bipyramidal environment with a somewhat twisted square basis, cis-coordinated at their N2O2 core to two carboxylate oxygen and two terminal benzimidazole nitrogen atoms. Cu8Tlm16 has a toroidal-like shape with a hydrophobic nanometer hole, and its crystal packing defines nanochannels that extend along the crystal c-axis. Several biological activities of the complex and of the parent ligand were examined in vitro. The antioxidant measurements indicate that the complex behaves as a superoxide dismutase mimic with improved superoxide scavenging power as compared with the native sartan. The capacities of telmisartan and its copper complex to expand human mesangial cells (previously contracted by angiotensin II treatment) are similar to each other. The antihypertensive effect of the compounds is attributed to their strong binding affinity to the angiotensin II type 1 receptor and not to antioxidant effects. The cytotoxic activity of the complex and of its components was determined against the lung cancer cell line A549 and three prostate cancer cell lines (LNCaP, PC-3, and DU 145). The complex displays some inhibitory effect on the A549 line and a large viability decrease on the LNCaP (androgen-sensitive) line. From flow cytometric analysis, an apoptotic mechanism was established for the latter cell line. Telmisartan and CuTlm show antibacterial and antifungal activities in various strains, and CuTlm displays improved activity against the Staphylococcus aureus strain as compared with unbound copper(II).

  31. Effectiveness of E-TLM in Learning Vocabulary in English

    ERIC Educational Resources Information Center

    Singaravelu, G.

    2011-01-01

    The study examines the effectiveness of e-TLM in learning vocabulary in English at Standard VI. Objectives of the study: 1. To find out the problems of conventional TLM in learning vocabulary in English. 2. To find out the significant difference in achievement mean score between the pre test of control group and the post test of control group.…

  32. Gender-specific association between dietary acid load and total lean body mass and its dependency on protein intake in seniors.

    PubMed

    Faure, A M; Fischer, K; Dawson-Hughes, B; Egli, A; Bischoff-Ferrari, H A

    2017-12-01

    Diet-related mild metabolic acidosis may play a role in the development of sarcopenia. We investigated the relationship between dietary acid load and total lean body mass in male and female seniors age ≥ 60 years. We found that a more alkaline diet was associated with a higher %TLM only among senior women. The aim of this study was to determine if dietary acid load is associated with total lean body mass in male and female seniors age ≥ 60 years. We investigated 243 seniors (mean age 70.3 ± 6.3; 53% women) age ≥ 60 years who participated in the baseline assessment of a clinical trial on vitamin D treatment and rehabilitation after unilateral knee replacement due to severe knee osteoarthritis. The potential renal acid load (PRAL) was assessed based on individual nutrient intakes derived from a food frequency questionnaire. Body composition including percentage of total lean body mass (%TLM) was determined using dual-energy X-ray absorptiometry. Cross-sectional analyses were performed for men and women separately using multivariable regression models controlling for age, physical activity, smoking status, protein intake (g/kg BW per day), energy intake (kcal), and serum 25-hydroxyvitamin D concentration. We included a pre-defined subgroup analysis by protein intake (< 1 g/kg BW per day, > 1 g/kg BW per day) and by age group (< 70 years, ≥ 70 years). Adjusted %TLM decreased significantly across PRAL quartiles only among women (P-trend = 0.004). Moreover, in subgroup analysis, the negative association between the PRAL and %TLM was most pronounced among women with low protein intake (< 1 g/kg BW per day) and age below 70 years (P = 0.002). Among men, there was no association between the PRAL and %TLM. The association between dietary acid load and %TLM seems to be gender-specific, with a negative impact on total lean mass only among senior women. Therefore, an alkaline diet may be beneficial for preserving total lean mass in senior women, especially in those with low protein intake.
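
    In most studies, the PRAL score used here is computed from five nutrient intakes with the Remer & Manz coefficients. A minimal sketch, assuming that standard formula (the example intakes below are invented, not data from this cohort):

    ```python
    # Potential renal acid load (PRAL) from daily nutrient intakes, using the
    # widely cited Remer & Manz coefficients (mEq/day).
    def pral(protein_g, phosphorus_mg, potassium_mg, magnesium_mg, calcium_mg):
        return (0.49 * protein_g
                + 0.037 * phosphorus_mg
                - 0.021 * potassium_mg
                - 0.026 * magnesium_mg
                - 0.013 * calcium_mg)

    # Example day: 70 g protein, 1200 mg P, 3500 mg K, 320 mg Mg, 900 mg Ca.
    print(f"PRAL = {pral(70, 1200, 3500, 320, 900):+.1f} mEq/day")
    # Negative values indicate a net alkaline (base-forming) diet, the kind
    # the study associates with better-preserved %TLM in senior women.
    ```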

  33. Angiotensin II type 1 receptor blockade by telmisartan prevents stress-induced impairment of memory via HPA axis deactivation and up-regulation of brain-derived neurotrophic factor gene expression.

    PubMed

    Wincewicz, D; Juchniewicz, A; Waszkiewicz, N; Braszko, J J

    2016-09-01

    Physical and psychological aspects of chronic stress continue to be a persistent clinical problem for which new pharmacological treatment strategies are aggressively sought. Our previous work demonstrated that telmisartan (TLM), an angiotensin type 1 receptor (AT1) blocker (ARB) and partial agonist of peroxisome proliferator-activated receptor gamma (PPARγ), alleviates stress-induced cognitive decline. Understanding of the mechanistic background of this phenomenon is hampered by both the dual binding sites of TLM and limited data on the consequences of central AT1 blockade and PPARγ activation. Therefore, a critical need exists for progress in the characterization of this target for pro-cognitive drug discovery. The unusual ability of novel ARBs to exert various PPARγ binding activities is commonly viewed as predominant over angiotensin blockade in terms of neuroprotection. Here we aimed to verify this hypothesis using an animal model of chronic psychological stress (Wistar rats restrained 2.5 h daily for 21 days) with simultaneous oral administration of TLM (1 mg/kg), GW9662, a PPARγ receptor antagonist (0.5 mg/kg), or both in combination, followed by a battery of behavioral tests (open field, elevated plus maze, inhibitory avoidance - IA, object recognition - OR), quantitative determination of serum corticosterone (CORT) and evaluation of brain-derived neurotrophic factor (BDNF) gene expression in the medial prefrontal cortex (mPFC) and hippocampus (HIP). Stressed animals displayed decreased recall of the IA behavior (p < 0.001), decreased OR (p < 0.001), a substantial CORT increase (p < 0.001) and significantly downregulated expression of BDNF in the mPFC (p < 0.001), which were attenuated in rats receiving TLM and TLM+GW9662. These data indicate that the procognitive effect of ARBs in stressed subjects does not result from PPARγ activation, but from AT1 blockade and subsequent hypothalamus-pituitary-adrenal axis deactivation associated with changes in primarily cortical gene expression. This study confirms the dual activities of TLM, which controls hypertension and cognition through AT1 blockade. Copyright © 2016 Elsevier Inc. All rights reserved.

  34. Mode-mismatched confocal thermal-lens microscope with collimated probe beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cabrera, Humberto, E-mail: hcabrera@ictp.it; Centro Multidisciplinario de Ciencias, Instituto Venezolano de Investigaciones Científicas; Korte, Dorota

    2015-05-15

    We report a thermal lens microscope (TLM) based on an optimized mode-mismatched configuration. It takes advantage of coaxial counter-propagating tightly focused excitation and collimated probe beams, instead of both being focused at the sample, as in currently known TLM setups. A simple mathematical model that takes into account the main features of the instrument is presented. The confocal detection scheme and the introduction of a highly collimated probe beam enhance the versatility, limit of detection (LOD), and sensitivity of the instrument. The theory is experimentally verified by measuring ethanol's absorption coefficient at 532.8 nm. Additionally, the presented technique is applied to the detection of ultra-trace amounts of Cr(III) in liquid solution. The achieved LOD is 1.3 ppb, which represents a 20-fold enhancement compared to transmission-mode spectrometric techniques and a 7.5-fold improvement compared to previously reported methods for Cr(III) based on the thermal lens effect.

  35. Creep grazing and early weaning effects on cow and calf productivity.

    PubMed

    Harvey, R W; Burns, J C

    1988-05-01

    One hundred fifty Simmental-Hereford cows and calves were used in a 3-yr study to evaluate three creep grazing treatments and an early weaning treatment on cow and calf performance during midsummer (July to September). Calves were approximately 150 d of age and averaged 178.6 kg when treatments were initiated. Tifleaf pearl millet (Pennisetum americanum (L.) Leeke) was used as the forage for two of the creep treatments, representing two cow stocking intensities of .466 (TLM1) and .239 (TLM2) ha of base hill land pasture/cow, and as pasture for early weaned calves. A red clover (Trifolium pratense L.)-Kentucky bluegrass (Poa pratensis L.) mixture was used as the other creep forage. Hill land pastures were similar for the mature cow units in all creep treatments. Calf average daily gains ranged from .93 to 1.10 kg and were not influenced (P > .05) by treatment. Calf gains per hectare were similar for the control, red clover and TLM1 treatments. The TLM2 and early weaning treatments resulted in increases of 105.4 and 39.1 kg of calf gain/ha (P < .05) compared with the control. When calves were allowed to creep graze millet, decreasing the forage area from .466 to .239 ha per cow-calf unit resulted in an increase of 97.7 kg of calf gain/ha with no reduction in calf gain. Cows on the more intensively grazed millet creep treatment (TLM2) lost more weight (P < .05) during midsummer than those on the TLM1 treatment, but they gained 18.5 kg more (P < .10) weight than TLM1 cows between weaning and the start of winter feeding.

  36. Assessing Aromatic-Hydrocarbon Toxicity to Fish Early Life Stages Using Passive-Dosing Methods and Target-Lipid and Chemical-Activity Models.

    PubMed

    Butler, Josh D; Parkerton, Thomas F; Redman, Aaron D; Letinski, Daniel J; Cooper, Keith R

    2016-08-02

    Aromatic hydrocarbons (AH) are known to impair fish early life stages (ELS). However, poorly defined exposures often confound ELS-test interpretation. Passive dosing (PD) overcomes these challenges by delivering consistent, controlled exposures. The objectives of this study were to apply PD to obtain 5 d acute embryo lethality and developmental data and 30 d chronic embryo-larval survival and growth-effects data using zebrafish with different AHs; to analyze study and literature toxicity data using target-lipid (TLM) and chemical-activity (CA) models; and to extend PD to a mixture and test the assumption of AH additivity. PD maintained targeted exposures over a concentration range of 6 orders of magnitude. AH toxicity increased with log Kow up to pyrene (5.2). Pericardial edema was the most sensitive sublethal effect that often preceded embryo mortality, although some AHs did not produce developmental effects at concentrations causing mortality. Cumulative embryo-larval mortality was more sensitive than larval growth, with acute-to-chronic ratios of <10. More-hydrophobic AHs did not exhibit toxicity at aqueous saturation. The relationship and utility of the TLM-CA models for characterizing fish ELS toxicity is discussed. Application of these models indicated that concentration addition provided a conservative basis for predicting ELS effects for the mixture investigated.
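
    The "TLM" in this record is the target lipid model, whose core is a single linear free-energy relationship: log10(LC50) = log10(CL*) - 0.936·log10(Kow), with a universal narcosis slope near 0.936 and a species-specific critical target-lipid body burden CL*. The sketch below uses commonly cited values illustratively; the CL* and log Kow figures are not the paper's fitted zebrafish parameters.

    ```python
    import math

    # Target lipid model (TLM), assumed standard form:
    #   log10(LC50, mmol/L) = log10(CL*) - 0.936 * log10(Kow)
    def lc50_mmol_per_L(log_kow, cl_star=35.0):   # cl_star: illustrative CL*
        return 10 ** (math.log10(cl_star) - 0.936 * log_kow)

    for name, log_kow in (("naphthalene", 3.36), ("phenanthrene", 4.57),
                          ("pyrene", 5.19)):
        print(f"{name}: predicted LC50 ~ {lc50_mmol_per_L(log_kow):.4f} mmol/L")
    # Predicted toxicity rises (LC50 falls) with log Kow, consistent with the
    # record's observation that effects increased up to pyrene and then cut
    # off at aqueous solubility for more hydrophobic AHs.
    ```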

  37. Transmission Line Modeling Applied to Hot Corrosion of Fe-40at.pctAl in Molten LiCl-KCl

    NASA Astrophysics Data System (ADS)

    Barraza-Fierro, Jesus Israel; Espinosa-Medina, Marco Antonio; Castaneda, Homero

    2015-12-01

    The effect of Cu and Li additions to the intermetallic alloy Fe-40at.pctAl on its corrosion performance in an LiCl-55 wt pct KCl molten eutectic salt was studied by means of electrochemical impedance spectroscopy, transmission line modeling (TLM), and cathodic polarization. The tests were done at 723 K, 773 K, and 823 K (450 °C, 500 °C, and 550 °C), for 60 and 720 minutes. The element additions could improve the corrosion resistance of Fe-40at.pctAl in molten LiCl-KCl, while TLM could characterize and quantify the interfacial processes in hot corrosion. The polarization curves helped to establish the possible cathodic reactions under the experimental conditions.

  38. Microfluidic Flow Injection Analysis with Thermal Lens Microscopic Detection for Determination of NGAL

    NASA Astrophysics Data System (ADS)

    Radovanović, Tatjana; Liu, Mingqiang; Likar, Polona; Klemenc, Matjaž; Franko, Mladen

    2015-06-01

    A combined microfluidic flow injection analysis-thermal lens microscopy (FIA-TLM) system was applied for determination of neutrophil gelatinase-associated lipocalin (NGAL), a biomarker of acute kidney injury. NGAL was determined following a commercial ELISA assay and transfer of the resulting solution into the FIA-TLM system with a 100 µm deep microchannel. At an excitation power of 100 mW, the FIA-TLM provided about seven times lower limits of detection (1.5 pg) as compared to a conventional ELISA test, and a sample throughput of six samples per minute, which compares favorably with the throughput of a microtiter plate reader, which reads 96 wells in about 30 min. Comparison of results for NGAL in plasma samples from healthy individuals, and for NGAL dynamics in patients undergoing coronary angiography, measured with transmission-mode spectrometry on a microtiter plate reader and with FIA-TLM, showed good agreement. In addition to the improved LOD, the high sensitivity of FIA-TLM offers the possibility of further reducing the total reaction time of the NGAL ELISA test by sacrificing some of the sensitivity while shortening individual incubation steps.

  19. AC impedance study of degradation of porous nickel battery electrodes

    NASA Technical Reports Server (NTRS)

    Lenhart, Stephen J.; Macdonald, D. D.; Pound, B. G.

    1987-01-01

    AC impedance spectra of porous nickel battery electrodes were recorded periodically during charge/discharge cycling in concentrated KOH solution at various temperatures. A transmission line model (TLM) was adopted to represent the impedance of the porous electrodes, and various model parameters were adjusted in a curve fitting routine to reproduce the experimental impedances. Degradation processes were deduced from changes in model parameters with electrode cycling time. In developing the TLM, impedance spectra of planar (nonporous) electrodes were used to represent the pore wall and backing plate interfacial impedances. These data were measured over a range of potentials and temperatures, and an equivalent circuit model was adopted to represent the planar electrode data. Cyclic voltammetry was used to study the characteristics of the oxygen evolution reaction on planar nickel electrodes during charging, since oxygen evolution can affect battery electrode charging efficiency and ultimately electrode cycle life if the overpotential for oxygen evolution is sufficiently low.
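
    For readers unfamiliar with how a transmission line model reproduces porous-electrode impedance, the following sketch evaluates the classical de Levie expression Z(ω) = sqrt(r·z(ω))·coth(L·sqrt(r/z(ω))) for a single pore, with a simple RC interface standing in for the measured pore-wall impedance. All parameter values are illustrative assumptions, not fits to the nickel electrodes of this study.

    ```python
    import numpy as np

    # Hedged sketch: de Levie transmission-line impedance of a porous electrode.
    # r_ion : ionic resistance of electrolyte in the pore, per unit length [ohm/cm]
    # z_int : interfacial impedance per unit length [ohm*cm], here a charge-transfer
    #         resistance in parallel with a double-layer capacitance.
    # All values below are illustrative, not fitted to this study's electrodes.
    r_ion = 50.0        # ohm/cm (assumed)
    r_ct = 200.0        # ohm*cm (assumed)
    c_dl = 1e-4         # F/cm  (assumed)
    L = 0.05            # pore length, cm (assumed)

    def pore_impedance(freq_hz: np.ndarray) -> np.ndarray:
        w = 2 * np.pi * freq_hz
        z_int = r_ct / (1 + 1j * w * r_ct * c_dl)          # interfacial impedance per length
        gamma = np.sqrt(r_ion / z_int)                     # propagation constant, 1/cm
        return np.sqrt(r_ion * z_int) / np.tanh(gamma * L) # coth form of the TLM

    freqs = np.logspace(-2, 4, 7)
    for f, z in zip(freqs, pore_impedance(freqs)):
        print(f"{f:10.2e} Hz:  Z = {z.real:8.2f} + {z.imag:8.2f}j ohm")
    ```

    Fitting the parameters of such an expression to measured spectra, as the study describes, is what allows degradation to be tracked through changes in r_ion, r_ct, and c_dl analogues over cycling time.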

  20. Tubular lipid membranes pulled from vesicles: Dependence of system equilibrium on lipid bilayer curvature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Golushko, I. Yu., E-mail: vaniagolushko@yandex.ru; Rochal, S. B.

    2016-01-15

    Conditions of joint equilibrium and stability are derived for a spherical lipid vesicle and a tubular lipid membrane (TLM) pulled from this vesicle. The obtained equations establish relationships between the geometric and physical characteristics of the system and the external parameters, which have been found to be controllable in recent experiments. In particular, the proposed theory shows that, in addition to the pressure difference between internal and external regions of the system, the variable spontaneous average curvature of the lipid bilayer (forming the TLM) also influences the stability of the lipid tube. The conditions for stability of the cylindrical phase of TLMs after switching off the external force that initially formed the TLM from a vesicle are discussed. The loss of system stability under the action of a small axial force compressing the TLM is considered.

  1. The use of transmission line modelling to test the effectiveness of I-kaz as autonomous selection of intrinsic mode function

    NASA Astrophysics Data System (ADS)

    Yusop, Hanafi M.; Ghazali, M. F.; Yusof, M. F. M.; PiRemli, M. A.; Karollah, B.; Rusman

    2017-10-01

    Pressure transient signals occur in fluid-filled pipeline systems due to rapid pressure and flow fluctuations, such as those caused by quickly opening or closing a valve. The Hilbert-Huang Transform (HHT) was used in this research to analyse the pressure transient signal. However, this method suffers from the difficulty of selecting a suitable intrinsic mode function (IMF) for further post-processing with the Hilbert Transform (HT). This paper proposes the implementation of the Integrated Kurtosis-based Algorithm for z-filter Technique (I-kaz) to kurtosis ratio (I-kaz-kurtosis), which allows automatic selection of the IMF that should be used; a sketch of such a selection rule follows this abstract. In this work, a synthetic pressure transient signal was generated using transmission line modelling (TLM) in order to test the effectiveness of I-kaz for autonomous selection of the IMF. A straight fluid network was designed in TLM, with a higher resistance fixed at one point to act as a leak and connections to pipe features (junction, pipe fitting or blockage). The analysis results using the I-kaz-kurtosis ratio revealed that the method can be used for automatic selection of the IMF even when the noise-level ratio of the signal is low. The I-kaz-kurtosis ratio is therefore recommended for automatic IMF selection in HHT analysis.
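
    The sketch below shows one plausible form of kurtosis-based IMF selection: the signal is decomposed with empirical mode decomposition and the IMF maximizing a kurtosis-based score is retained. It uses the PyEMD package and plain excess kurtosis as a simplified stand-in for the authors' I-kaz-kurtosis statistic, whose exact definition is not given in the abstract.

    ```python
    import numpy as np
    from PyEMD import EMD               # pip install EMD-signal
    from scipy.stats import kurtosis

    # Simplified stand-in for I-kaz-kurtosis IMF selection: decompose the signal
    # with EMD and keep the IMF with the highest kurtosis-based score. The real
    # I-kaz statistic is more elaborate; this only illustrates the selection idea.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 2000)
    signal = np.sin(2 * np.pi * 5 * t)                  # slow background
    signal[1000:1040] += 3.0                            # transient "leak" feature
    signal += 0.2 * rng.standard_normal(t.size)         # measurement noise

    imfs = EMD()(signal)                                # rows are IMF1..IMFn
    scores = [kurtosis(imf) for imf in imfs]            # impulsiveness per IMF
    best = int(np.argmax(scores))
    print(f"selected IMF {best + 1} of {len(imfs)} (score {scores[best]:.2f})")
    ```

    Because kurtosis rewards impulsive content, the IMF carrying the transient feature tends to score highest, which is the behaviour an automatic selection rule needs.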

  2. 76 FR 68663 - Airworthiness Directives; Rolls-Royce plc (RR) RB211-Trent 800 Series Turbofan Engines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-07

    ... limitations section (ALS) * * * Time Limits manual (TLM) dated June 15, 2009'' to ``(1) Revise the airworthiness limitations section (ALS) * * * Time Limits manual (TLM) dated no earlier than June 15, 2009...

  3. Micro-scale abrasive wear behavior of medical implant material Ti-25Nb-3Mo-3Zr-2Sn alloy on various friction pairs.

    PubMed

    Wang, Zhenguo; Huang, Weijiu; Ma, Yanlong

    2014-09-01

    The micro-scale abrasion behaviors of surgical implant materials have often been reported in the literature. However, little work has been reported on the micro-scale abrasive wear behavior of Ti-25Nb-3Mo-3Zr-2Sn (TLM) titanium alloy in simulated body fluids, especially with respect to friction pairs. Therefore, a TE66 Micro-Scale Abrasion Tester was used to study the micro-scale abrasive wear behavior of the TLM alloy. This study covers the friction coefficient and wear loss of the TLM alloy induced by various friction pairs. Different friction pairs comprised of ZrO2, Si3N4 and Al2O3 ceramic balls with 25.4 mm diameters were employed. The micro-scale abrasive wear mechanisms and the synergistic effect between corrosion and micro-abrasion of the TLM alloy were investigated under various wear-corrosion conditions employing a SiC abrasive (3.5 ± 0.5 μm) in two test solutions, Hanks' solution and distilled water. Before the test, the specimens were heat treated at 760°C/1.0/AC+550°C/6.0/AC. It was found that the friction coefficient values of the TLM alloy in Hanks' solution are larger than those in distilled water, regardless of the friction pair used, because of the corrosiveness of Hanks' solution. It was also found that the friction coefficient was volatile at the beginning of wear testing and became more stable as the test progressed. Because the ceramic balls have different properties, especially Vickers hardness (Hv), the wear loss of the TLM alloy increased as ball hardness increased. In addition, the wear loss of the TLM alloy in Hanks' solution was greater than that in distilled water, owing to the synergistic effect of micro-abrasion and corrosion, with micro-abrasion playing the leading role in the wear process. The micro-scale abrasive wear mechanism of the TLM alloy gradually changed from two-body to mixed abrasion and then to three-body abrasion as the Vickers hardness of the balls increased. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Study of the adsorption of Cd and Zn onto an activated carbon: Influence of pH, cation concentration, and adsorbent concentration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seco, A.; Marzal, P.; Gabaldon, C.

    1999-06-01

    The single adsorption of Cd and Zn from aqueous solutions has been investigated on Scharlau Ca 346 granular activated carbon over a wide range of experimental conditions: pH, metal concentration, and carbon concentration. The results showed the efficiency of the activated carbon as a sorbent for both metals. Metal removals increase on raising the pH and carbon concentration, and decrease on raising the initial metal concentration. The adsorption processes have been modeled using the surface complex formation (SCF) Triple Layer Model (TLM). The adsorbent TLM parameters were determined. Modeling has been performed assuming a single surface bidentate species or an overall surface species with fractional stoichiometry. The bidentate stoichiometry successfully predicted cadmium and zinc removals under all the experimental conditions. The Freundlich isotherm has also been checked.
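
    As a hedged illustration of the isotherm check mentioned above, the sketch below fits the Freundlich form q = K·C^(1/n) to equilibrium data with scipy. The data points and fitted constants are invented for demonstration; they are not the Cd/Zn results of this study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Freundlich isotherm: q = K * C**(1/n), with q the sorbed amount (mg/g)
    # and C the equilibrium solution concentration (mg/L).
    def freundlich(c, k, inv_n):
        return k * c ** inv_n

    # Invented example data -- NOT the Cd/Zn measurements of this study.
    c_eq = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])      # mg/L
    q_obs = np.array([2.1, 2.9, 4.0, 6.2, 8.5, 11.8])      # mg/g

    (k_fit, inv_n_fit), _ = curve_fit(freundlich, c_eq, q_obs, p0=(1.0, 0.5))
    print(f"K = {k_fit:.2f} (mg/g)(L/mg)^(1/n),  1/n = {inv_n_fit:.2f}")
    ```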

  5. Transoral laser microsurgery for laryngeal cancer: A primer and review of laser dosimetry

    PubMed Central

    Rubinstein, Marc

    2010-01-01

    Transoral laser microsurgery (TLM) is an emerging technique for the management of laryngeal and other head and neck malignancies. It is increasingly being used in place of traditional open surgery because of lower morbidity and improved organ preservation. Since the surgery is performed from the inside working outward, as opposed to working from the outside in, there is less damage to the supporting structures that lie external to the tumor. Coupling the laser to a micromanipulator and a microscope allows precise tissue cutting and hemostasis, thereby improving visualization and enabling precise ablation. The basic approach and principles of performing TLM, the devices currently in use, and the associated dosimetry parameters will be discussed. The benefits of using TLM over conventional surgery, common complications, and the different settings used depending on the location of the tumor will also be discussed. Although the CO2 laser is the most versatile and best-suited laser for TLM applications, a variety of lasers and different parameters are used in the treatment of laryngeal cancer. Improved instrumentation has led to increased utilization of TLM by head and neck cancer surgeons and has resulted in improved outcomes. Laser energy levels and spot size are adjusted to vary the precision of cutting and the amount of hemostasis obtained. PMID:20835840

  6. High fat diet induced atherosclerosis is accompanied with low colonic bacterial diversity and altered abundances that correlates with plaque size, plasma A-FABP and cholesterol: a pilot study of high fat diet and its intervention with Lactobacillus rhamnosus GG (LGG) or telmisartan in ApoE-/- mice.

    PubMed

    Chan, Yee Kwan; Brar, Manreetpal Singh; Kirjavainen, Pirkka V; Chen, Yan; Peng, Jiao; Li, Daxu; Leung, Frederick Chi-Ching; El-Nezami, Hani

    2016-11-08

    Atherosclerosis appears to have multifactorial causes: microbial components like lipopolysaccharides (LPS) and other pathogen-associated molecular patterns may be plausible factors. The gut microbiota is an ample source of such stimulants, and its dependent metabolites and an altered gut metagenome are established links to atherosclerosis. In this exploratory pilot study, we aimed to elucidate whether microbial intervention with the probiotic L. rhamnosus GG (LGG) or the pharmaceutical telmisartan (TLM) could improve atherosclerosis in a gut microbiota associated manner. An atherosclerotic phenotype was established by 12 weeks of feeding a high fat (HF) diet, as opposed to normal chow diet (ND), in apolipoprotein E knockout (ApoE -/- ) mice. LGG or TLM supplementation to the HF diet was studied. Both LGG and TLM significantly reduced atherosclerotic plaque size and improved various biomarkers, including endotoxin, to different extents. Colonic microbiota analysis revealed that TLM restored the HF diet induced increase in Firmicutes/Bacteroidetes ratio and decrease in alpha diversity, and led to a more distinct microbial clustering closer to ND in the PCoA plot. Eubacteria, Anaeroplasma, Roseburia, Oscillospira and Dehalobacteria appeared to be protective against atherosclerosis and showed significant negative correlations with atherosclerotic plaque size, plasma adipocyte fatty acid binding protein (A-FABP) and cholesterol. LGG and TLM improved atherosclerosis, with TLM producing a more distinct alteration in the colonic gut microbiota. Altered bacterial genera and reduced alpha diversity correlated significantly with atherosclerotic plaque size, plasma A-FABP and cholesterol. Future studies on such bacterial functional influence in lipid metabolism will be warranted.

  7. Transoral laser microsurgery for oral squamous cell carcinoma: Oncologic outcomes and prognostic factors

    PubMed Central

    Sinha, Parul; Hackman, Trevor; Nussenbaum, Brian; Wu, Ningying; Lewis, James S.; Haughey, Bruce H.

    2014-01-01

    Background Modest survival rates are published for treatment of oral squamous cell carcinoma (OSCC) using conventional approaches. Few cohort studies are available for transoral resection of OSCC. Methods Analysis of recurrence, survival, and prognosis of patients with OSCC treated with transoral laser microsurgery (TLM) ± neck dissection was obtained from a prospective database. Results Ninety-five patients (71 with stages T1-T2 and 24 with stages T3-T4 disease) with a minimum follow-up of 24 months met criteria and demonstrated negative margins in 95%. Five-year local control (LC) and disease-specific survival (DSS) were 78% and 76%, respectively. Surgical salvage achieved an absolute final locoregional control of 92%. Immune compromise and final margins were prognostic for LC, whereas T classification, N classification, TNM stage, comorbidity, and perineural invasion were additionally significant for DSS. Conclusion We document a large series of patients with OSCC treated with TLM, incorporating T1 to T4 primaries. The significant proportion of stage III/IV cases demonstrates the feasibility of TLM in higher stages, with final margin positivity of 5%, LC greater than 90%, and comparable survival outcomes. PMID:23729304

  8. Transoral Laser Microsurgery in Early Glottic Lesions.

    PubMed

    Sjögren, E V

    2017-01-01

    To give an overview of the evolution of transoral laser microsurgery (TLM) in the treatment of early glottic carcinoma and highlight the contribution of recent literature. The indications and limits of TLM have been well specified. Effects on swallowing have been well documented. The introduction of narrow-band imaging (NBI) and diffusion-weighted magnetic resonance imaging has been shown to be of additional value for outcomes. The first reports on transoral robotic surgery show that it may be of added value in the future. TLM for early glottic carcinoma (Tis-T2) has very good oncological outcomes, with indications of higher larynx preservation with TLM than with radiotherapy. The anterior commissure is a risk factor if involved in the cranio-caudal plane, and reduced vocal fold mobility is a risk factor when it is due to arytenoid involvement. The best voice results are achieved when the anterior commissure can be left intact along with part of the vocal fold muscle, although even in larger resections patient self-reported voice handicap is still limited.

  9. Trumpet Laminectomy Microdecompression for Lumbal Canal Stenosis

    PubMed Central

    Yasuda, Muneyoshi; Arifin, Muhammad Zafrullah; Takayasu, Masakazu; Faried, Ahmad

    2014-01-01

    Microsurgery techniques are useful innovations for minimizing the surgical insult in canal stenosis. Here, we describe the trumpet laminectomy microdecompression (TLM) technique and its advantages and disadvantages. Sixty-two patients underwent TLM for lumbar disc herniation, facet hypertrophy, yellow ligament hypertrophy, or intracanal granulation tissue. Symptoms included low back pain, dysesthesia, and severe pain in both legs. Operated spine levels ranged from Th11 to S1. Of the patients who had trumpet-type fenestration, 62.9% had hypertrophy of the facet joint, 11.3% had intracanal granulation tissue, 79.1% had hypertrophy of the yellow ligament, and 64.5% had disc herniation. The average procedure duration was 68.9 min, and intraoperative blood loss was 47.4 mL. Intraoperative complications were found in 3.2% of patients, with dural damage but without cerebrospinal fluid leakage. TLM can be performed at all ages and all levels of spinal canal stenosis, without the complication of spondylolisthesis. TLM has a shorter duration, with minimal intraoperative blood loss. PMID:25346821

  10. Antioxidant activity, total phenolic and total flavonoid contents of whole plant extracts Torilis leptophylla L

    PubMed Central

    2012-01-01

    Background The aim of this study was to screen various solvent extracts of the whole plant of Torilis leptophylla for potent antioxidant activity in vitro and in vivo, and to determine total phenolic and flavonoid contents, in order to find possible sources of novel antioxidants for food and pharmaceutical formulations. Material and methods A detailed study was performed on the antioxidant activity of the methanol extract of the whole plant of Torilis leptophylla (TLM) and its derived fractions {n-hexane (TLH), chloroform (TLC), ethyl acetate (TLE), n-butanol (TLB) and residual aqueous fraction (TLA)} by in vitro chemical analyses and by carbon tetrachloride (CCl4) induced hepatic injuries (lipid peroxidation and glutathione contents) in male Sprague-Dawley rats. The total yield, total phenolic content (TPC) and total flavonoid content (TFC) of all the fractions were also determined. TLM was also subjected to preliminary phytochemical screening tests for various constituents. Results The total phenolic content (TPC) (121.9±3.1 mg GAE/g extract) of TLM and the total flavonoid content (TFC) of TLE (60.9±2.2 mg RTE/g extract) were found to be significantly higher than those of the other solvent fractions. Phytochemical screening of TLM revealed the presence of alkaloids, anthraquinones, cardiac glycosides, coumarins, flavonoids, saponins, phlobatannins, tannins and terpenoids. The EC50 values based on DPPH (41.0±1 μg/ml), ABTS (10.0±0.9 μg/ml) and phosphomolybdate (10.7±2 μg/ml) for TLB, hydroxyl radicals (8.0±1 μg/ml) for TLC, superoxide radicals (57.0±0.3 μg/ml) for TLM and hydrogen peroxide radicals (68.0±2 μg/ml) for TLE were generally lower, showing potential antioxidant properties. A significant but marginal positive correlation was found between TPC and EC50 values for DPPH, hydroxyl, phosphomolybdate and ABTS, whereas another weak positive correlation was determined between TFC and EC50 values for superoxide anion and hydroxyl radicals. Results of the in vivo experiment revealed that administration of CCl4 caused a significant increase in lipid peroxidation (TBARS) and a decrease in the GSH contents of liver. In contrast, co-treatment with TLM (200 mg/kg bw) and silymarin (50 mg/kg bw) effectively prevented these alterations and maintained the antioxidant status. Conclusion Data from the present results revealed that Torilis leptophylla acts as an antioxidant agent due to its free radical scavenging and cytoprotective activity. PMID:23153304

  11. Photoelectric return-stroke velocity and peak current estimates in natural and triggered lightning

    NASA Technical Reports Server (NTRS)

    Mach, Douglas M.; Rust, W. David

    1989-01-01

    Two-dimensional photoelectric return stroke velocities from 130 strokes are presented, including 86 negative natural, 41 negative triggered, one positive triggered, and two positive natural return strokes. For strokes starting near the ground and exceeding 500 m in length, the average velocity is 1.3 ± 0.3 × 10^8 m/s for natural return strokes and 1.2 ± 0.3 × 10^8 m/s for triggered return strokes. For strokes with lengths less than 500 m, the average velocities are slightly higher. Using the transmission line model (TLM), the shortest-segment one-dimensional return stroke velocity, and either the maximum or plateau electric field, it is shown that natural strokes have a peak current distribution that is lognormal with a median value of 16 kA (maximum E) or 12 kA (plateau E). Triggered lightning has a median peak current value of 21 kA (maximum E) or 15 kA (plateau E). Correlations are found between TLM peak currents and velocities for triggered and natural subsequent return strokes, but not between TLM peak currents and natural first return stroke velocities.
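
    In the lightning transmission line model, the peak radiation field at distance D from a stroke with return-stroke speed v is related to the peak channel current by E_peak = v·I_peak/(2π·ε0·c²·D), so the current can be estimated from field measurements. The sketch below inverts this relation for illustrative field, distance, and velocity values, not data from the study.

    ```python
    import math

    EPS0 = 8.854e-12     # vacuum permittivity, F/m
    C = 2.998e8          # speed of light, m/s

    def tlm_peak_current(e_peak: float, distance: float, v_stroke: float) -> float:
        """Peak current (A) from the TLM radiation-field relation
        E = v * I / (2*pi*eps0*c**2 * D), solved for I."""
        return 2 * math.pi * EPS0 * C**2 * distance * e_peak / v_stroke

    # Illustrative values only -- not measurements from this study.
    e_field = 5.0        # peak electric field, V/m
    dist = 50e3          # range to the stroke, m
    v = 1.3e8            # return stroke speed, m/s

    print(f"TLM peak current ~ {tlm_peak_current(e_field, dist, v)/1e3:.1f} kA")
    ```

    With these placeholder inputs the estimate is on the order of 10 kA, the same order as the median peak currents reported above.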

  12. Beneficial Effects of cART Initiated during Primary and Chronic HIV-1 Infection on Immunoglobulin-Expression of Memory B-Cell Subsets

    PubMed Central

    Pensieroso, Simone; Tolazzi, Monica; Chiappetta, Stefania; Nozza, Silvia; Lazzarin, Adriano; Tambussi, Giuseppe; Scarlatti, Gabriella

    2015-01-01

    Introduction During HIV-1 infection the B-cell compartment undergoes profound changes towards terminal differentiation, which are only partially restored by antiretroviral therapy (cART). Materials and Methods To investigate the impact of infection as early as during primary HIV-1 infection (PHI) we assessed the distribution of B-cell subsets in 19 PHI and 25 chronic HIV-1-infected (CHI) individuals before and during 48 weeks of cART as compared to healthy controls (n = 23). We also analysed the Immunoglobulin-expression of memory B-cell subsets to identify alterations in Immunoglobulin-maturation. Results Determination of B-cell subsets at baseline showed that total and Naive B-cells were decreased whereas Activated Memory (AM), Tissue-like Memory (TLM) B-cells and Plasma cells were increased in both PHI and CHI patients. After 4 weeks of cART total B-cells increased, while AM, TLM B-cells and Plasma cells decreased, although without reaching normal levels in either group of individuals. This trend was maintained until week 48, though only total B-cells normalized in both PHI and CHI. Resting Memory (RM) B-cells were preserved from baseline. This subset remained stable in CHI, while it was expanded by an early initiation of cART during PHI. Untreated CHI patients showed IgM-overexpression at the expense of switched (IgM-IgD-) phenotypes of the memory subsets. Interestingly, in PHI patients a significant alteration of Immunoglobulin-expression was evident at baseline in TLM cells, and after 4 weeks, despite treatment, in AM and RM subsets. After 48 weeks of therapy, Immunoglobulin-expression of AM and RM almost normalized, but remained perturbed in TLM cells in both groups. Conclusions In conclusion, aberrant activated and exhausted B-cell phenotypes arose already during PHI, while most of the alterations in Ig-expression seen in CHI appeared later, despite 4 weeks of effective cART. After 48 weeks of cART, B-cell subset distribution improved although without full normalization, while Immunoglobulin-expression normalized among AM and RM, remaining perturbed in TLM B-cells of PHI and CHI. PMID:26474181

  13. Surface complexation modeling of zinc sorption onto ferrihydrite.

    PubMed

    Dyer, James A; Trivedi, Paras; Scrivner, Noel C; Sparks, Donald L

    2004-02-01

    A previous study involving lead(II) [Pb(II)] sorption onto ferrihydrite over a wide range of conditions highlighted the advantages of combining molecular- and macroscopic-scale investigations with surface complexation modeling to predict Pb(II) speciation and partitioning in aqueous systems. In this work, an extensive collection of new macroscopic and spectroscopic data was used to assess the ability of the modified triple-layer model (TLM) to predict single-solute zinc(II) [Zn(II)] sorption onto 2-line ferrihydrite in NaNO(3) solutions as a function of pH, ionic strength, and concentration. Regression of constant-pH isotherm data, together with potentiometric titration and pH edge data, was a much more rigorous test of the modified TLM than fitting pH edge data alone. When coupled with valuable input from spectroscopic analyses, good fits of the isotherm data were obtained with a one-species, one-Zn-sorption-site model using the bidentate-mononuclear surface complex (≡FeO)2Zn; however, surprisingly, both the density of Zn(II) sorption sites and the value of the best-fit equilibrium "constant" for the bidentate-mononuclear complex had to be adjusted with pH to adequately fit the isotherm data. Although spectroscopy provided some evidence for multinuclear surface complex formation at surface loadings approaching site saturation at pH ≥ 6.5, the assumption of a bidentate-mononuclear surface complex provided acceptable fits of the sorption data over the entire range of conditions studied. Regressing edge data in the absence of isotherm and spectroscopic data resulted in a fair number of surface-species/site-type combinations that provided acceptable fits of the edge data, but unacceptable fits of the isotherm data. A linear relationship between log K((≡FeO)2Zn) and pH was found, given by log K((≡FeO)2Zn, at 1 g/L) = 2.058(pH) - 6.131. In addition, a surface activity coefficient term was introduced to the model to reduce the ionic strength dependence of sorption. The results of this research and previous work with Pb(II) indicate that the existing thermodynamic framework for the modified TLM is able to reproduce the metal sorption data only over a limited range of conditions. For this reason, much work still needs to be done in fine-tuning the thermodynamic framework and databases for the TLM.

  14. TLM-Tracker: software for cell segmentation, tracking and lineage analysis in time-lapse microscopy movies.

    PubMed

    Klein, Johannes; Leupold, Stefan; Biegler, Ilona; Biedendieck, Rebekka; Münch, Richard; Jahn, Dieter

    2012-09-01

    Time-lapse imaging in combination with fluorescence microscopy techniques enables the investigation of gene regulatory circuits and has uncovered phenomena like culture heterogeneity. In this context, computational image processing for the analysis of single cell behaviour plays an increasing role in systems biology and mathematical modelling approaches. Consequently, we developed a software package with a graphical user interface for the analysis of single bacterial cell behaviour. The new software, TLM-Tracker, allows for flexible and user-friendly segmentation, tracking and lineage analysis of microbial cells in time-lapse movies. The software package, including manual, tutorial video and examples, is available as Matlab code or executable binaries at http://www.tlmtracker.tu-bs.de.
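
    To make the tracking step concrete, here is a minimal frame-to-frame linking sketch of the kind such pipelines use: cell centroids in consecutive frames are matched greedily by nearest distance. It is a generic illustration in Python, not TLM-Tracker's actual Matlab implementation.

    ```python
    import numpy as np
    from scipy.spatial.distance import cdist

    def link_frames(prev_centroids, next_centroids, max_dist=10.0):
        """Greedily match each cell centroid in the previous frame to its nearest
        unclaimed centroid in the next frame. Returns a list of (i, j) links.
        Generic illustration only -- not TLM-Tracker's algorithm."""
        dists = cdist(prev_centroids, next_centroids)
        links, taken = [], set()
        for i in np.argsort(dists.min(axis=1)):      # most confident cells first
            for j in np.argsort(dists[i]):
                if j not in taken and dists[i, j] <= max_dist:
                    links.append((int(i), int(j)))
                    taken.add(j)
                    break
        return links

    prev_c = np.array([[10.0, 10.0], [40.0, 12.0]])
    next_c = np.array([[41.0, 13.5], [11.0, 11.0], [70.0, 70.0]])  # third cell is new
    print(link_frames(prev_c, next_c))   # -> [(0, 1), (1, 0)]
    ```

    Unmatched centroids in the new frame (such as the third one above) would start new tracks, which is how division events enter a lineage analysis.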

  15. Pancreas Oxygen Persufflation Increases ATP Levels as Shown by Nuclear Magnetic Resonance

    PubMed Central

    Scott, W.E.; Weegman, B.P.; Ferrer-Fabrega, J.; Stein, S.A.; Anazawa, T.; Kirchner, V.A.; Rizzari, M.D.; Stone, J.; Matsumoto, S.; Hammer, B.E.; Balamurugan, A.N.; Kidder, L.S.; Suszynski, T.M.; Avgoustiniatos, E.S.; Stone, S.G.; Tempelman, L.A.; Sutherland, D.E.R.; Hering, B.J.; Papas, K.K.

    2010-01-01

    Background Islet transplantation is a promising treatment for type 1 diabetes. Due to a shortage of suitable human pancreata, the high cost, and the large dose of islets presently required for long-term diabetes reversal, it is important to maximize viable islet yield. Traditional methods of pancreas preservation have been identified as suboptimal due to insufficient oxygenation. Enhanced oxygen delivery is a key area of improvement. In this paper, we explored improved oxygen delivery by persufflation (PSF), ie, vascular gas perfusion. Methods Human pancreata were obtained from brain-dead donors. Porcine pancreata were procured by en bloc viscerectomy from heparinized donation-after-cardiac-death donors and were preserved by either the two-layer method (TLM) or PSF. Following procurement, organs were transported to a 1.5-T magnetic resonance (MR) system for 31P nuclear magnetic resonance spectroscopy to investigate their bioenergetic status by measuring the ratio of adenosine triphosphate to inorganic phosphate (ATP:Pi), and for assessing PSF homogeneity by MRI. Results Human and porcine pancreata can be effectively preserved by PSF. MRI showed that pancreatic tissue was homogeneously filled with gas. TLM can effectively raise ATP:Pi levels in rat pancreata but not in larger porcine pancreata; ATP:Pi levels were almost undetectable in porcine organs preserved with TLM. When human or porcine organs were preserved by PSF, ATP:Pi was elevated to levels similar to those observed in rat pancreata. Conclusion The methods developed for human and porcine pancreas PSF homogeneously deliver oxygen throughout the organ. This elevates ATP levels during preservation and may improve islet isolation outcomes while enabling the use of marginal donors, thus expanding the usable donor pool. PMID:20692395

  16. Contributions of Body-Composition Characteristics to Critical Power and Anaerobic Work Capacity.

    PubMed

    Byrd, M Travis; Switalla, Jonathan Robert; Eastman, Joel E; Wallace, Brian J; Clasey, Jody L; Bergstrom, Haley C

    2018-02-01

    Critical power (CP) and anaerobic work capacity (AWC) from the CP test represent distinct parameters related to metabolic characteristics of the whole body and active muscle tissue, respectively. To examine the contribution of whole-body composition characteristics and local lean mass to further elucidate the differences in metabolic characteristics between CP and AWC as they relate to whole-body and local factors. Fifteen anaerobically trained men were assessed for whole-body (% body fat and mineral-free lean mass [LBM]) and local mineral-free thigh lean mass (TLM) composition characteristics. CP and AWC were determined from the 3-min all-out CP test. Statistical analyses included Pearson product-moment correlations and stepwise multiple-regression analyses (P ≤ .05). Only LBM contributed significantly to the prediction of CP (CP = 2.3 [LBM] + 56.7 [r² = .346, standard error of the estimate (SEE) = 31.4 W, P = .021]), and only TLM to AWC (AWC = 0.8 [TLM] + 3.7 [r² = .479, SEE = 2.2 kJ, P = .004]). The aerobic component (CP) of the CP test was most closely related to LBM, and the anaerobic component (AWC) was more closely related to TLM. These findings support the theory that CP and AWC are separate measures of whole-body metabolic capabilities and the energy stores in the activated local muscle groups, respectively. Thus, training programs to improve CP and AWC should be designed to include resistance-training exercises to increase whole-body LBM and local TLM.
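
    Since the abstract reports the fitted regression equations directly, a small worked example can show how they are used; the body-composition values plugged in below are invented for illustration.

    ```python
    # Worked example using the regression equations reported in the abstract:
    #   CP  (W)  = 2.3 * LBM + 56.7   (LBM: whole-body mineral-free lean mass, kg)
    #   AWC (kJ) = 0.8 * TLM + 3.7    (TLM: mineral-free thigh lean mass, kg)
    # The input values below are invented for illustration.

    def critical_power(lbm_kg: float) -> float:
        return 2.3 * lbm_kg + 56.7

    def anaerobic_work_capacity(tlm_kg: float) -> float:
        return 0.8 * tlm_kg + 3.7

    lbm, tlm = 65.0, 18.0   # hypothetical athlete
    print(f"CP  ~ {critical_power(lbm):.0f} W")
    print(f"AWC ~ {anaerobic_work_capacity(tlm):.1f} kJ")
    ```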

  17. TLM-Quant: an open-source pipeline for visualization and quantification of gene expression heterogeneity in growing microbial cells.

    PubMed

    Piersma, Sjouke; Denham, Emma L; Drulhe, Samuel; Tonk, Rudi H J; Schwikowski, Benno; van Dijl, Jan Maarten

    2013-01-01

    Gene expression heterogeneity is a key driver for microbial adaptation to fluctuating environmental conditions, cell differentiation and the evolution of species. This phenomenon therefore has enormous implications, not only for life in general, but also for biotechnological applications, where unwanted subpopulations of non-producing cells can emerge in large-scale fermentations. Only time-lapse fluorescence microscopy allows real-time measurements of gene expression heterogeneity. A major limitation in the analysis of time-lapse microscopy data is the lack of fast, cost-effective, open, simple and adaptable protocols. Here we describe TLM-Quant, a semi-automatic pipeline for the analysis of time-lapse fluorescence microscopy data that enables the user to visualize and quantify gene expression heterogeneity. Importantly, our pipeline builds on the open-source packages ImageJ and R. To validate TLM-Quant, we selected three possible scenarios, namely homogeneous expression, highly 'noisy' heterogeneous expression, and bistable heterogeneous expression in the Gram-positive bacterium Bacillus subtilis. This bacterium is both a paradigm for systems-level studies on gene expression and a highly appreciated biotechnological 'cell factory'. We conclude that the temporal resolution of such analyses with TLM-Quant is limited only by the number of recorded images.

  18. The study of muscle remodeling in Drosophila metamorphosis using in vivo microscopy and bioimage informatics

    PubMed Central

    2012-01-01

    Background Metamorphosis in insects transforms the larval into an adult body plan and comprises the destruction and remodeling of larval tissues and the generation of adult tissues. The remodeling of larval into adult muscles promises to be a genetic model for human atrophy since it is associated with dramatic alterations in cell size. Furthermore, muscle development is amenable to 3D in vivo microscopy at high cellular resolution. However, multi-dimensional image acquisition leads to sizeable amounts of data that demand novel approaches in image processing and analysis. Results To handle, visualize and quantify time-lapse datasets recorded in multiple locations, we designed a workflow comprising three major modules. First, the previously introduced TLM-converter concatenates stacks of single time-points. The second module, TLM-2D-Explorer, creates maximum intensity projections for rapid inspection and allows the temporal alignment of multiple datasets. The transition between prepupal and pupal stage serves as a reference point to compare datasets of different genotypes or treatments. We demonstrate how the temporal alignment can reveal novel insights into the east gene, which is involved in muscle remodeling. The third module, TLM-3D-Segmenter, performs semi-automated segmentation of selected muscle fibers over multiple frames. The 3D image segmentation consists of three stages. First, the user places a seed into a muscle of a key frame and performs surface detection based on level-set evolution. Second, the surface is propagated to subsequent frames. Third, automated segmentation detects nuclei inside the muscle fiber. The detected surfaces can be used to visualize and quantify the dynamics of cellular remodeling. To estimate the accuracy of our segmentation method, we performed a comparison with a manually created ground truth. Key and predicted frames achieved a performance of 84% and 80%, respectively. Conclusions We describe an analysis pipeline for the efficient handling and analysis of time-series microscopy data that enhances productivity and facilitates the phenotypic characterization of genetic perturbations. Our methodology can easily be scaled up for genome-wide genetic screens using readily available resources for RNAi-based gene silencing in Drosophila and other animal models. PMID:23282138

  19. Non-gravimetric contributions to QCR sensor response.

    PubMed

    Lucklum, Ralf

    2005-11-01

    Quartz crystal resonator (QCR) sensors are commonly known as mass-sensitive devices, usually called QCM (Quartz Crystal Microbalance). This restricted view should not be applied to biosensor applications: in many cases the sensor response is strongly influenced or even governed by non-gravimetric effects, and the QCR sensor does not act as a microbalance. For a better understanding of the sensor response, as well as for sensor optimization, a more general description of the sensing principle is required. The Transmission Line Model (TLM) is a powerful tool to describe the transduction scheme of QCR and other acoustic-wave-based sensors. It is therefore applied here to the analysis of the sensor behavior under several conditions that can be expected in biochemical experiments. The generalization of acoustic parameters provides a concept to overcome some of the limiting assumptions of the present TLM.
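
    One well-known non-gravimetric contribution is viscous liquid loading: in the transmission-line picture, a semi-infinite Newtonian liquid presents a shear impedance that shifts the resonance even with zero deposited mass. The sketch below evaluates the standard Kanazawa-Gordon expression Δf = -f0^(3/2)·sqrt(ρ_L·η_L/(π·ρ_q·μ_q)) for a 5 MHz crystal in water; it is offered as an illustration of the effect, not as part of this paper's model.

    ```python
    import math

    # Kanazawa-Gordon frequency shift of a QCR fully immersed in a Newtonian
    # liquid -- a purely non-gravimetric response (no deposited mass involved).
    RHO_Q = 2648.0       # quartz density, kg/m^3
    MU_Q = 2.947e10      # quartz shear stiffness, Pa

    def kanazawa_shift(f0: float, rho_liq: float, eta_liq: float) -> float:
        """Resonance shift (Hz) for fundamental frequency f0 (Hz) in a liquid
        of density rho_liq (kg/m^3) and viscosity eta_liq (Pa*s)."""
        return -f0**1.5 * math.sqrt(rho_liq * eta_liq / (math.pi * RHO_Q * MU_Q))

    # 5 MHz crystal in water at ~20 C: roughly -0.7 kHz, with no mass added.
    print(f"df = {kanazawa_shift(5e6, 998.0, 1.0e-3):.0f} Hz")
    ```

    A shift of this magnitude with nothing adsorbed illustrates why interpreting QCR biosensor data purely as a microbalance can mislead.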

  20. 75 FR 45560 - Airworthiness Directives; Rolls-Royce plc (RR) RB211-Trent 800 Series Turbofan Engines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-03

    ... limitations section (ALS) of your instructions for continued airworthiness (ICA) to incorporate Task 05-10-01... ALS of your ICA by incorporating any revision of the Rolls-Royce Trent 800 TLM dated prior to the June... ALS of the ICA, any revision of the Rolls-Royce Trent 800 TLM earlier than the June 15, 2009. Other...

  1. The application of electrochemical impedance spectroscopy for characterizing the degradation of Ni(OH)2/NiOOH electrodes

    NASA Technical Reports Server (NTRS)

    Macdonald, D. D.; Pound, B. G.; Lenhart, S. J.

    1989-01-01

    Electrochemical impedance spectra of rolled-and-bonded and sintered porous nickel battery electrodes were recorded periodically during charge/discharge cycling in concentrated KOH solution at various temperatures. A transmission line model (TLM) was adopted to represent the impedance of the porous electrodes, and various model parameters were adjusted in a curve-fitting routine to reproduce the experimental impedances. Degradation processes for rolled-and-bonded electrodes were deduced from changes in model parameters with electrode cycling time. In developing the TLM, impedance spectra of planar (non-porous) electrodes were used to represent the pore wall and backing plate interfacial impedances. These data were measured over a range of potentials and temperatures, and an equivalent circuit model was adopted to represent the planar electrode data. Cyclic voltammetry was used to study the characteristics of the oxygen evolution reaction on planar nickel electrodes during charging, since oxygen evolution can affect battery electrode charging efficiency and ultimately electrode cycle life if the overpotential for oxygen evolution is sufficiently low. Transmission line modeling results suggest that porous rolled-and-bonded nickel electrodes undergo restructuring during charge/discharge cycling prior to failure.

  2. Trap-induced charge transfer/transport at energy harvesting assembly

    NASA Astrophysics Data System (ADS)

    Cho, Seongeun; Paik, Hanjong; Kim, Tae Wan; Park, Byoungnam

    2017-02-01

    Understanding interfacial electronic properties between electron donors and acceptors in hybrid optoelectronic solar cells is crucial for governing the device parameters associated with energy harvesting. To probe the localized electronic states at an electron donor/acceptor interface representative of a hybrid solar cell, we investigated the electrical contact properties between Al-doped zinc oxide (AZO) and poly(3-hexylthiophene) (P3HT) using AZO as the source and drain electrodes, pumping carriers from AZO into P3HT. The injection efficiency was evaluated using the transmission line method (TLM) in combination with field effect transistor characterization. Highly conductive AZO films served as the source and drain electrodes in the devices for TLM and field effect measurements. The comparable contact resistances of the AZO/P3HT/AZO and Au/P3HT/Au structures contradict the expectation, based on the Schottky-Mott model, that a far larger energy barrier exists for electrons and holes between AZO and P3HT than between P3HT and Au. It is suggested that band-to-band tunneling accounts for the contradiction through the initial hop from AZO to P3HT for hole injection. The involvement of the tunneling mechanism in determining the contact resistance implies a high density of electronic traps on the organic side.
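
    In this electrical sense, the transmission line method extracts contact resistance by measuring total resistance across channels of several lengths and extrapolating the linear fit to zero length. The sketch below performs that extraction on invented data; the resistances and geometry are placeholders, not the AZO/P3HT measurements.

    ```python
    import numpy as np

    # Transmission line method (TLM) for contact resistance: for channel length L
    # and width W, R_total(L) = R_sheet * L / W + 2 * R_c. A linear fit versus L
    # gives 2*R_c as the intercept and R_sheet from the slope.
    W = 1e-3                                        # channel width, m (assumed)
    lengths = np.array([5, 10, 20, 40, 80]) * 1e-6  # channel lengths, m (assumed)
    r_total = np.array([2.1e5, 3.2e5, 5.3e5, 9.6e5, 18.1e5])  # ohm, invented data

    slope, intercept = np.polyfit(lengths, r_total, 1)
    r_sheet = slope * W                 # sheet resistance, ohm/sq
    r_c = intercept / 2                 # contact resistance per contact, ohm
    l_t = r_c * W / r_sheet             # transfer length, m

    print(f"R_sheet = {r_sheet:.3g} ohm/sq, R_c = {r_c:.3g} ohm, L_T = {l_t:.3g} m")
    ```

    Comparing the intercepts extracted this way for AZO versus Au contacts is what allows the contact-resistance comparison discussed in the abstract.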

  3. TLM-Quant: An Open-Source Pipeline for Visualization and Quantification of Gene Expression Heterogeneity in Growing Microbial Cells

    PubMed Central

    Piersma, Sjouke; Denham, Emma L.; Drulhe, Samuel; Tonk, Rudi H. J.; Schwikowski, Benno; van Dijl, Jan Maarten

    2013-01-01

    Gene expression heterogeneity is a key driver for microbial adaptation to fluctuating environmental conditions, cell differentiation and the evolution of species. This phenomenon therefore has enormous implications, not only for life in general, but also for biotechnological applications, where unwanted subpopulations of non-producing cells can emerge in large-scale fermentations. Only time-lapse fluorescence microscopy allows real-time measurements of gene expression heterogeneity. A major limitation in the analysis of time-lapse microscopy data is the lack of fast, cost-effective, open, simple and adaptable protocols. Here we describe TLM-Quant, a semi-automatic pipeline for the analysis of time-lapse fluorescence microscopy data that enables the user to visualize and quantify gene expression heterogeneity. Importantly, our pipeline builds on the open-source packages ImageJ and R. To validate TLM-Quant, we selected three possible scenarios, namely homogeneous expression, highly ‘noisy’ heterogeneous expression, and bistable heterogeneous expression in the Gram-positive bacterium Bacillus subtilis. This bacterium is both a paradigm for systems-level studies on gene expression and a highly appreciated biotechnological ‘cell factory’. We conclude that the temporal resolution of such analyses with TLM-Quant is limited only by the number of recorded images. PMID:23874729

  4. Intraoperative narrow band imaging better delineates superficial resection margins during transoral laser microsurgery for early glottic cancer.

    PubMed

    Garofolo, Sabrina; Piazza, Cesare; Del Bon, Francesca; Mangili, Stefano; Guastini, Luca; Mora, Francesco; Nicolai, Piero; Peretti, Giorgio

    2015-04-01

    The high rate of positive margins after transoral laser microsurgery (TLM) remains a matter of debate. This study investigates the effect of intraoperative narrow band imaging (NBI) examination on the incidence of positive superficial surgical margins in early glottic cancer treated by TLM. Between January 2012 and October 2013, 82 patients affected by Tis-T1a glottic cancer were treated with TLM by type I or II cordectomies. Intraoperative NBI evaluation was performed using 0-degree and 70-degree rigid telescopes. Surgical specimens were oriented by marking the superior edge with black ink and sent to a dedicated pathologist. Comparison of the rate of positive superficial margins between the present cohort and a matched historical control group treated in the same way without intraoperative NBI was performed with the chi-square test. At histopathological examination, all surgical margins were negative in 70 patients, whereas 7 patients had positive deep margins, 2 had close margins, and 3 had positive superficial margins. The rate of positive superficial margins was thus 3.6% in the present group and 23.7% in the control cohort (P<.001). Routine use of intraoperative NBI increases the accuracy of evaluating superficial neoplastic spread during TLM for early glottic cancer. © The Author(s) 2014.

  5. Transoral laser microsurgery for managing laryngeal stenosis after reconstructive partial laryngectomies.

    PubMed

    Lucioni, Marco; Bertolin, Andy; Lionello, Marco; Giacomelli, Luciano; Ghirardo, Guido; Rizzotto, Giuseppe; Marioni, Gino

    2017-02-01

    To retrospectively analyze our experience of transoral laser microsurgery (TLM) for treating postoperative laryngeal obstruction (POLO) after supracricoid and supratracheal laryngectomy (open partial horizontal laryngectomy [OPHL] types 2 and 3), and to investigate potential relationships between patients' clinical features and their functional outcomes. A retrospective cohort study. The prognostic influence of clinical and surgical parameters on functional outcomes was investigated in a univariate statistical setting in terms of decannulation rate (DR), time to tracheostomy closure (TTC), and number of laser procedures required (NLP). OPHL type 2 was associated with a better functional outcome than OPHL type 3 in terms of DR, TTC, and NLP (P = .03, P = .02, and P = .02, respectively). Annular and semicircumferential stenoses developed more frequently after OPHL type 3 and were particularly difficult to manage with TLM. Fixation of the residual arytenoid was a negative prognostic factor for functional outcome in terms of DR, TTC, and NLP (P = .0002, P = .08, and P = .08, respectively). There is no standardized laser treatment for POLO; it must be tailored to individual patients. Identifying prognostic factors influencing functional outcome could help surgeons to identify patients less likely to benefit from TLM for the treatment of POLO and enable adequate preoperative counseling, given the high probability of repeat postoperative TLM procedures. Level of evidence: 4. Laryngoscope, 127:359-365, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  6. Acute and subacute toxicity of copper sulfate pentahydrate (CuSO(4)5.H(2)O) in the guppy (Poecilia reticulata).

    PubMed

    Park, Keehae; Heo, Gang-Joon

    2009-03-01

    Chemicals are used for treatment of aquatic diseases, but there is little data available about copper sulfate in small ornamental fish. The aim of the present study was to determine the TLm(24h) and evaluate the toxicity of copper sulfate in the guppy (Poecilia reticulata). The fish were subjected to an acute toxicity test for 24 hr, and the results showed a TLm(24h) value of 1.17 ppm. Severe hyperplasia and exfoliation of the epithelial cells of gill lamellae and obstruction of the internal cavities of renal tubules with necrotized renal epithelial cells sloughed from the basement membrane were observed. However, no significant changes, except for mild curling of gill lamellae, were found in a subacute toxicity test in which fish were exposed to 1/10 of the TLm(24h) value for 1 week. Therefore, use of less than 0.12 ppm of copper sulfate may be recommended as a therapeutic level.

  7. Glottic and supraglottic pT3 squamous cell carcinoma: outcomes with transoral laser microsurgery.

    PubMed

    Pantazis, Dimitrios; Liapi, Georgia; Kostarelos, Dimitrios; Kyriazis, Georgios; Pantazis, Theodoros-Leonidas; Riga, Maria

    2015-08-01

    Patients diagnosed with T3 squamous cell laryngeal carcinomas are nowadays offered either organ-preserving surgical or non-surgical treatment, with the optimum approach remaining undefined. No direct comparison of organ-preserving therapeutic options, stratified by anatomical subsite, is available in the literature. The aim of this study is to present institutional treatment outcomes for transoral laser microsurgery (TLM) of laryngeal T3 squamous cell carcinomas and review the relevant literature. Sixty-four consecutive, previously untreated patients were evaluated. Twenty-four supraglottic and 19 glottic patients were treated with TLM and neck dissection, with tumor exposure and postoperative upstaging of the tumors through pathology evaluation of the specimens being the only exclusion criteria. Five-year disease-specific survival and organ preservation rates for supraglottic carcinomas were both 91.7%. The respective values for glottic carcinomas were 63.2 and 73.3%. TLM-treated T3 supraglottic tumors seem to yield better outcomes than T3 glottic tumors in terms of recurrence-free survival, organ preservation and local control (p = 0.01, <0.0001 and 0.01, respectively). The results of this study suggest that TLM-treated T3 supraglottic tumors have a good prognosis, substantially better than that of glottic tumors. A literature review, on the other hand, attributes a considerably poorer prognosis to chemoradiation-treated T3 supraglottic tumors. Further studies of populations homogeneous in anatomical subsite are needed in order to reach a consensus regarding treatment of T3 laryngeal tumors.

  8. Four dimensional variational inversion of atmospheric chemical sources in WRFDA

    NASA Astrophysics Data System (ADS)

    Guerrette, J. J.

    Atmospheric aerosols are known to affect health, weather, and climate, but their impacts on regional scales are uncertain due to heterogeneous source, transport, and transformation mechanisms. The Weather Research and Forecasting model with chemistry (WRF-Chem) can account for aerosol-meteorology feedbacks as it simultaneously integrates equations of dynamical and chemical processes. Here we develop and apply incremental four dimensional variational (4D-Var) data assimilation (DA) capabilities in WRF-Chem to constrain chemical emissions (WRFDA-Chem). We develop adjoint (ADM) and tangent linear (TLM) model descriptions of boundary layer mixing, emission, aging, dry deposition, and advection of black carbon (BC) aerosol. ADM and TLM performance is verified against finite difference derivative approximations. A second order checkpointing scheme is used to reduce memory costs and enable simulations longer than six hours. We apply WRFDA-Chem to constraining anthropogenic and biomass burning sources of BC throughout California during the 2008 Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) field campaign. Manual corrections to the prior emissions and subsequent inverse modeling reduce the spread in total emitted BC mass between two biomass burning inventories from a factor of 10 to only a factor of 2 across three days of measurements. We quantify posterior emission variance using an eigendecomposition of the cost function Hessian matrix. We also address the limited scalability of 4D-Var, which traditionally uses a sequential optimization algorithm (e.g., conjugate gradient) to approximate these Hessian eigenmodes. The Randomized Incremental Optimal Technique (RIOT) uses an ensemble of TLM and ADM instances to perform a Hessian singular value decomposition. While RIOT requires more ensemble members than Lanczos requires iterations to converge to a comparable posterior control vector, the wall-time of RIOT is about 10× shorter since the ensemble is executed in parallel. This work demonstrates that RIOT improves the scalability of 4D-Var for high-dimensional nonlinear problems. Overall, WRFDA-Chem and RIOT provide a framework for air quality forecasting, campaign planning, and emissions constraint that can be used to refine our understanding of the interplay between atmospheric chemistry, meteorology, climate, and human health.
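
    The Hessian eigendecomposition step that RIOT parallelizes can be illustrated with a randomized sketch: the Hessian is probed with a block of random vectors (each product standing in for one parallel TLM+ADM ensemble member), and the dominant eigenpairs are recovered from a small projected matrix. The dimensions and the synthetic Hessian below are arbitrary stand-ins, not the WRFDA-Chem operators.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic SPD "Hessian" with a rapidly decaying spectrum, standing in for
    # the 4D-Var cost-function Hessian. In RIOT, each H @ vector product would
    # correspond to one TLM+ADM ensemble member, run in parallel.
    n, k = 500, 10
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    H = U @ np.diag(np.logspace(2, -3, n)) @ U.T

    # Randomized eigendecomposition: probe H with k random vectors in one batch.
    Omega = rng.standard_normal((n, k))
    Q, _ = np.linalg.qr(H @ Omega)            # orthonormal basis for the range
    B = Q.T @ H @ Q                           # small projected Hessian
    evals, evecs = np.linalg.eigh(B)
    modes = Q @ evecs                         # approximate leading eigenvectors

    print("leading eigenvalue estimates:", np.round(evals[::-1][:3], 2))
    resid = np.linalg.norm(H @ modes[:, -1] - evals[-1] * modes[:, -1])
    print(f"residual of top mode: {resid:.2e}")
    ```

    The key scalability point is that all k Hessian products happen in a single batch rather than one after another, which is the contrast with sequential Lanczos iterations drawn in the abstract.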

  9. Beneficial Effects of cART Initiated during Primary and Chronic HIV-1 Infection on Immunoglobulin-Expression of Memory B-Cell Subsets.

    PubMed

    Pogliaghi, Manuela; Ripa, Marco; Pensieroso, Simone; Tolazzi, Monica; Chiappetta, Stefania; Nozza, Silvia; Lazzarin, Adriano; Tambussi, Giuseppe; Scarlatti, Gabriella

    2015-01-01

    During HIV-1 infection the B-cell compartment undergoes profound changes towards terminal differentiation, which are only partially restored by antiretroviral therapy (cART). To investigate the impact of infection as early as during primary HIV-1 infection (PHI) we assessed the distribution of B-cell subsets in 19 PHI and 25 chronic HIV-1-infected (CHI) individuals before and during 48 weeks of cART as compared to healthy controls (n = 23). We also analysed the Immunoglobulin-expression of memory B-cell subsets to identify alterations in Immunoglobulin-maturation. Determination of B-cell subsets at baseline showed that total and Naive B-cells were decreased whereas Activated Memory (AM), Tissue-like Memory (TLM) B-cells and Plasma cells were increased in both PHI and CHI patients. After 4 weeks of cART total B-cells increased, while AM, TLM B-cells and Plasma cells decreased, although without reaching normal levels in either group of individuals. This trend was maintained until week 48, though only total B-cells normalized in both PHI and CHI. Resting Memory (RM) B-cells were preserved from baseline. This subset remained stable in CHI, while it was expanded by an early initiation of cART during PHI. Untreated CHI patients showed IgM-overexpression at the expense of switched (IgM-IgD-) phenotypes of the memory subsets. Interestingly, in PHI patients a significant alteration of Immunoglobulin-expression was evident at baseline in TLM cells, and after 4 weeks, despite treatment, in AM and RM subsets. After 48 weeks of therapy, Immunoglobulin-expression of AM and RM almost normalized, but remained perturbed in TLM cells in both groups. In conclusion, aberrant activated and exhausted B-cell phenotypes arose already during PHI, while most of the alterations in Ig-expression seen in CHI appeared later, despite 4 weeks of effective cART. After 48 weeks of cART, B-cell subset distribution improved although without full normalization, while Immunoglobulin-expression normalized among AM and RM, remaining perturbed in TLM B-cells of PHI and CHI.

  10. Formal verification of a set of memory management units

    NASA Technical Reports Server (NTRS)

    Schubert, E. Thomas; Levitt, K.; Cohen, Gerald C.

    1992-01-01

    This document describes the verification of a set of memory management units (MMU). The verification effort demonstrates the use of hierarchical decomposition and abstract theories. The MMUs can be organized into a complexity hierarchy. Each new level in the hierarchy adds a few significant features or modifications to the lower level MMU. The units described include: (1) a page check translation look-aside module (TLM); (2) a page check TLM with supervisor line; (3) a base bounds MMU; (4) a virtual address translation MMU; and (5) a virtual address translation MMU with memory resident segment table.

  11. Linear and nonlinear equivalent circuit modeling of CMUTs.

    PubMed

    Lohfink, Annette; Eccardt, Peter-Christian

    2005-12-01

    Using piston radiator and plate capacitance theory, capacitive micromachined ultrasound transducer (CMUT) membrane cells can be described by one-dimensional (1-D) model parameters. This paper describes in detail a new method, which derives a 1-D model for CMUT arrays from finite-element method (FEM) simulations. A few static and harmonic FEM analyses of a single CMUT membrane cell are sufficient to derive the mechanical and electrical parameters of an equivalent piston as the moving part of the cell area. For an array of parallel-driven cells, the acoustic parameters are derived as a complex mechanical fluid impedance, depending on the membrane shape form. As a main advantage, the nonlinear behavior of the CMUT can be investigated much more easily and quickly than with FEM simulations, e.g., for a design of the maximum applicable voltage depending on the input signal. The 1-D parameter model allows an easy description of the CMUT behavior in air and fluids and simplifies the investigation of wave propagation within the connecting fluid represented by FEM or transmission line matrix (TLM) models.
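
    A common use of such a 1-D lumped model is estimating the pull-in (collapse) voltage from the equivalent piston's stiffness, gap, and electrode area via the parallel-plate relation V_pi = sqrt(8·k·g³/(27·ε0·A)). The sketch below does so for invented parameter values, which in practice would come from the FEM-derived fits rather than from this paper.

    ```python
    import math

    EPS0 = 8.854e-12   # vacuum permittivity, F/m

    def pull_in_voltage(k: float, gap: float, area: float) -> float:
        """Parallel-plate pull-in voltage of a 1-D piston model:
        V_pi = sqrt(8 * k * g**3 / (27 * eps0 * A))."""
        return math.sqrt(8 * k * gap**3 / (27 * EPS0 * area))

    # Invented 1-D parameters (in practice extracted from static/harmonic FEM runs).
    k_eff = 2.0e3      # effective spring constant, N/m
    gap = 150e-9       # vacuum gap, m
    area = (40e-6)**2  # equivalent piston area, m^2

    print(f"V_pull-in ~ {pull_in_voltage(k_eff, gap, area):.1f} V")
    ```

    Evaluating such closed-form expressions over many candidate geometries is far cheaper than re-running a full FEM sweep, which is the speed advantage the abstract points to.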

  12. Functional Outcomes after Salvage Transoral Laser Microsurgery for Laryngeal Squamous Cell Carcinoma.

    PubMed

    Fink, Daniel S; Sibley, Haley; Kunduk, Melda; Schexnaildre, Mell; Sutton, Collin; Kakade-Pawar, Anagha; McWhorter, Andrew J

    2016-10-01

    Transoral laser microsurgery (TLM) has been increasingly used in lieu of total laryngectomy to treat malignancy after definitive radiation. There are few data in the literature regarding functional outcomes. We retrospectively reviewed voice and swallowing outcomes in patients who underwent TLM for recurrent laryngeal carcinoma. Case series with chart review. Tertiary care center. Forty-two patients were identified with recurrent squamous cell carcinoma of the larynx after definitive radiation therapy from 2001 to 2013: 28 patients with glottic recurrence and 14 with supraglottic recurrence. Swallowing outcomes were evaluated by gastrostomy tube dependence, the MD Anderson Dysphagia Inventory, and the Functional Oral Intake Scale. Voice outcomes were evaluated by the Voice Handicap Index and observer-rated perceptual analysis. No significant difference was noted between mean pre- and postoperative MD Anderson Dysphagia Inventory scores: 78.25 and 74.9, respectively (P = .118, t = 1.6955). Mean Functional Oral Intake Scale scores after TLM for supraglottic and glottic recurrences were 6.4 and 6.6, respectively. Of 42 patients, 17 (40.5%) required a gastrostomy tube either during radiation or in conjunction with the salvage procedure. Of those 17 patients, 15 resumed a sufficient oral diet for tube removal. Patients' mean Voice Handicap Index score increased from 34.3 to 51.5 (P = .047), and their mean perceptual score decreased from 60.0 to 45.3 (P = .005). However, at 1-year follow-up, there was no significant difference in perceptual score: 61.1 to 57.1 (P = .722). TLM is a successful surgical option for recurrent laryngeal cancer with acceptable functional outcomes. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2016.

  13. Examining the Associations among Fibrocystic Breast Change, Total Lean Mass, and Percent Body Fat.

    PubMed

    Chen, Yuan-Yuei; Fang, Wen-Hui; Wang, Chung-Ching; Kao, Tung-Wei; Chang, Yaw-Wen; Yang, Hui-Fang; Wu, Chen-Jung; Sun, Yu-Shan; Chen, Wei-Liang

    2018-06-15

    Fibrocystic breast change (FBC) is extremely common, occurring in 90% of women during their lives. The association between body composition and risk of breast cancer is well established. We hypothesized that a similar effect might exist during the development of FBC. Our aim was to examine the relationships of total lean mass (TLM) and percent body fat (PBF) with FBC in a general female population. In total, 8477 female subjects aged 20 years or older were enrolled in the study at the Tri-Service General Hospital in Taiwan from 2011 to 2016. Comprehensive examinations including biochemical data, measurements of body composition and breast ultrasound were performed. PBF was positively associated with the presence of FBC (OR = 1.039, 95%CI: 1.018-1.060), and TLM showed the opposite result (OR = 0.893, 95%CI: 0.861-0.926). Metabolic syndrome (MetS), diabetes (DM) and fatty liver modified the association between PBF and FBC (P < 0.001, P = 0.032 and P = 0.007, respectively). Female subjects diagnosed with MetS, DM, and fatty liver had a higher risk of developing FBC than control subjects (OR = 1.110, 95%CI: 1.052-1.171; OR = 1.144, 95%CI: 1.024-1.278; OR = 1.049, 95%CI: 1.019-1.080). Subjects with higher PBF had an increased risk of developing FBC (highest versus lowest quartile, OR = 2.451, 95%CI: 1.523-3.944), whereas those with higher TLM had a reduced risk (highest versus lowest quartile, OR = 0.279, 95%CI: 0.171-0.455). In conclusion, increased PBF and reduced TLM were likely to predict the risk of the presence of FBC in a general female population.

  14. Vertical distribution of overpotentials and irreversible charge losses in lithium ion battery electrodes.

    PubMed

    Klink, Stefan; Schuhmann, Wolfgang; La Mantia, Fabio

    2014-08-01

    Porous lithium ion battery electrodes are characterized by a vertical distribution of cross-currents. In an appropriate simplification, this distribution can be described by a transmission line model (TLM) consisting of infinitely thin electrode layers. To investigate the vertical distribution of currents, overpotentials, and irreversible charge losses in a porous graphite electrode in situ, a multi-layered working electrode (MWE) was developed as the experimental analogue of a TLM. In this MWE, each layer is in ionic contact with, but electrically insulated from, the other layers by a porous separator. It was found that negative graphite electrodes are lithiated and delithiated stage-by-stage and layer-by-layer. Several mass-transport- as well as non-mass-transport-limited processes could be identified. Local current densities can reach double the average, especially in the outermost layer at the beginning of each intercalation stage. Furthermore, graphite particles close to the counter electrode act as an "electrochemical sieve", reducing impurities present in the electrolyte such as water. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
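
    For orientation, the ladder-network picture above lends itself to a compact numerical sketch. The following C++ example (a minimal sketch, not the paper's code; the layer count and all parameter values are illustrative assumptions) folds an N-layer transmission line of per-layer ionic resistances and interfacial capacitances into a single input impedance, the discrete analogue of the infinitely-thin-layer TLM:

        // Minimal TLM sketch: input impedance of a porous electrode modeled as an
        // N-element ladder of series ionic resistances and shunt interfacial
        // capacitances. The electronic rail is taken as ideally conducting and
        // the far end as ionically blocked. All parameter values are assumptions.
        #include <cmath>
        #include <complex>
        #include <cstdio>

        int main() {
            const double PI = 3.141592653589793;
            const int N = 100;           // number of thin layers (assumed)
            const double R_ion = 10.0;   // total ionic rail resistance, ohm (assumed)
            const double C_dl  = 1e-3;   // total interfacial capacitance, F (assumed)
            const double r = R_ion / N;  // per-layer series resistance
            const double c = C_dl / N;   // per-layer shunt capacitance

            for (int e = -2; e <= 4; ++e) {
                const double f = std::pow(10.0, e);
                const std::complex<double> jw(0.0, 2.0 * PI * f);
                const std::complex<double> zc = 1.0 / (jw * c); // one shunt element
                std::complex<double> z = r + zc;    // last layer: open beyond it
                for (int k = 1; k < N; ++k)
                    z = r + (zc * z) / (zc + z);    // fold in: series r, shunt zc
                std::printf("f = %8.2e Hz  |Z| = %9.3f ohm  phase = %6.1f deg\n",
                            f, std::abs(z), std::arg(z) * 180.0 / PI);
            }
            return 0;
        }

    As N grows, the folded ladder converges to the familiar continuous TLM result, Z = sqrt(R_ion/(jwC_dl)) * coth(sqrt(jw R_ion C_dl)).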

  15. Immediate liposuction could shorten the time for endoscopic axillary lymphadenectomy in breast cancer patients.

    PubMed

    Shi, Fujun; Huang, Zonghai; Yu, Jinlong; Zhang, Pusheng; Deng, Jianwen; Zou, Linhan; Zhang, Cheng; Luo, Yunfeng

    2017-01-31

    Endoscopic axillary lymphadenectomy (EALND) was introduced into clinical work to reduce the side effects of conventional axillary lymphadenectomy, but the lipolysis and liposuction steps of EALND make the procedure more time-consuming. The aim of the study was to determine whether immediate liposuction after tumescent solution injection into the axilla could shorten the total time of EALND. Fifty-nine patients were enrolled in the study; 30 of them received EALND with the traditional liposuction method (TLM), and the remaining 29 patients received EALND with the immediate liposuction method (ILM). The operation time, cosmetic result, drainage amount, and hospitalization time of the two groups were compared. The median EALND operation times of the TLM group and the ILM group were 68 and 46 min, respectively; the difference was significant (P < 0.05). The median cosmetic results of the two groups were 6.6 and 6.4, respectively; the median drainage amounts were 366 and 385 ml, respectively; and the hospitalization times were 15 and 16 days, respectively. For the last three measures, no significant difference was found (P > 0.05). Our work suggests that immediate liposuction could shorten the endoscopic axillary lymphadenectomy process without compromising operative results. However, due to the limitations of the research, more work needs to be done to establish the validity and feasibility of immediate liposuction.

  16. [Embryo selection in IVF/ICSI cycles using time-lapse microscopy and the clinical outcomes].

    PubMed

    Chen, Minghao; Huang, Jun; Zhong, Ying; Quan, Song

    2015-12-01

    To compare the clinical outcomes of embryos selected using time-lapse microscopy versus the traditional morphological method in IVF/ICSI cycles and to evaluate the clinical value of time-lapse microscopy in early embryo monitoring and selection, we retrospectively analyzed the clinical data of 139 IVF/ICSI cycles with embryo selection based on time-lapse monitoring (TLM group, n=68) or the traditional morphological method (control group, n=71). The βHCG-positive rate, clinical pregnancy rate and embryo implantation rate were compared between the 2 groups. Subgroup analysis was performed according to female patient age and fertilization type. The βHCG-positive rate, clinical pregnancy rate and implantation rate were 66.2%, 61.8% and 47.1% in the TLM group, significantly higher than those in the control group (47.9%, 43.7% and 30.3%, respectively; P<0.05). Compared with patients below 30 years of age, patients aged between 31 and 35 years benefited more from time-lapse monitoring, with improved clinical outcomes. Time-lapse monitoring significantly increased the βHCG-positive rate, clinical pregnancy rate and implantation rate for patients undergoing IVF cycles, but not for those undergoing ICSI or TESA cycles. Compared with those selected using the traditional morphological method, the embryos selected with time-lapse microscopy had better clinical outcomes, especially in older patients (31-35 years of age) and in IVF cycles.

  17. Selective Dry Etch for Defining Ohmic Contacts for High Performance ZnO TFTs

    DTIC Science & Technology

    2014-03-27

    …scale, high-frequency ZnO thin-film transistors (TFTs) could be fabricated. Molybdenum, tantalum, titanium-tungsten 10-90, and tungsten metallic contact… [indexing fragment; thesis front-matter abbreviations include TFT, thin-film transistor; TLM, transmission line model; UV, ultraviolet]

  18. Evaluation of Lightning Induced Effects in a Graphite Composite Fairing Structure. Parts 1 and 2

    NASA Technical Reports Server (NTRS)

    Trout, Dawn H.; Stanley, James E.; Wahid, Parveen F.

    2011-01-01

    Defining the electromagnetic environment inside a graphite composite fairing due to lightning is of interest to spacecraft developers. This paper is the first in a two-part series and studies the shielding effectiveness of a graphite composite model fairing using derived equivalent properties. A frequency-domain Method of Moments (MoM) model is developed, and comparisons are made with shielding test results obtained using a vehicle-like composite fairing. The comparison results show that the analytical models can adequately predict the test results. Both measured and model data indicate that graphite composite fairings provide significant attenuation to magnetic fields as frequency increases. Diffusion effects are also discussed. Part 2 examines the time-domain effects through the development of loop-based induced-field testing, and a Transmission-Line-Matrix (TLM) model is developed in the time domain to study how the composite fairing affects lightning-induced magnetic fields. Comparisons are made with shielding test results obtained using a vehicle-like composite fairing in the time domain. The comparison results show that the analytical models can adequately predict the test and industry results.

  19. Real-Time Noninvasive Assessment of Pancreatic ATP Levels During Cold Preservation

    PubMed Central

    Scott, W.E.; Matsumoto, S.; Tanaka, T.; Avgoustiniatos, E.S.; Graham, M.L.; Williams, P.C.; Tempelman, L.A.; Sutherland, D.E.; Hering, B.J.; Hammer, B.E.; Papas, K.K.

    2008-01-01

    31P-NMR spectroscopy was utilized to investigate rat and porcine pancreatic ATP:Pi ratios to assess the efficacy of existing protocols for cold preservation (CP) in maintaining organ quality. Following sacrifice, rat pancreata were immediately excised or left enclosed in the body for 15 minutes of warm ischemia (WI). After excision, rat pancreata were stored at 6°C to 8°C using histidine-tryptophan-ketoglutarate solution (HTK) presaturated with air (S1), HTK presaturated with O2 (S2), or the HTK/perfluorodecalin two-layer method (TLM) with both liquids presaturated with O2 (S3). 31P-NMR spectra were sequentially collected at 3, 6, 9, 12, and 24 hours of CP from pancreata stored with each of the three protocols examined. The ATP:Pi ratio for rat pancreata exposed to 15 minutes of WI and stored with S3 increased during the first 9 hours of CP, approaching values observed for organs procured with no WI. A marked reduction in the ATP:Pi ratio was observed beyond 12 hours of CP with S3. After 6 hours of CP, the ATP:Pi ratio was highest for S3, substantially decreased for S2, and below detection for S1. In sharp contrast to the rat model, ATP was barely detectable in porcine pancreata exposed to minimal warm ischemia (<15 minutes) stored with the TLM regardless of CP time. We conclude that 31P-NMR spectroscopy is a powerful tool that can be used to (1) noninvasively evaluate pancreata prior to islet isolation, (2) assess the efficacy of different preservation protocols, (3) precisely define the timing of reversible versus irreversible damage, and (4) assess whether intervention will extend this timing. PMID:18374082

  20. Comparative cold resistance of three Columbia River organisms [thermal stresses, fishes, crustaceans]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, C.H.; Genoway, R.G.; Schneider, M.J.

    1977-01-01

    Resistance to abrupt and gradual cold shock was determined in bioassays with pumpkinseed (Lepomis gibbosus), rainbow trout (Salmo gairdneri) and a northwestern crayfish (Pacifastacus leniusculus) acclimated to higher temperatures at 5 C increments. Test criteria were median tolerance limits (TLm) for 96-h exposures after abrupt cold shock, and 50% loss of equilibrium (LE50) for decline rates of 18, 15, 10, 5 and 1 C/h during gradual cold shock. Cold resistance depended on original acclimation temperature (AT) and varied among species under both test conditions in the order: pumpkinseed < rainbow trout < crayfish. The lower TLm limit for pumpkinseed was 12.3 C at 30 C AT, 9.6 C at 25 C AT, 4.5 C at 20 C AT, and 2.7 C at 15 C AT. Rainbow trout at 20, 15 and 10 C AT survived abrupt exposures to cold down to 3.3, 1.4 and 0.5 C, respectively. Crayfish at 25, 20 and 15 C AT survived exposures down to 2.5, 0.4 and 0.0 C, respectively. TLm values were slightly above LE50 values for both fish species but well below for crayfish. Partial adaptation significantly lowered LE50 values at decline rates below 18 C/h for pumpkinseed, and to a lesser extent for the other two species, thus extending the lower margin of cold resistance.

  1. Transmission and reflection of the fundamental Lamb modes in a metallic plate with a semi-infinite horizontal crack.

    PubMed

    Ramadas, C; Hood, Avinash; Khan, Irfan; Balasubramaniam, Krishnan; Joshi, M

    2013-03-01

    Numerical simulations were carried out to quantify the reflection and transmission characteristics of the fundamental Lamb modes propagating in aluminium sub-plates formed by a semi-infinite horizontal crack. It was observed that a Lamb mode propagating in a sub-plate, when incident at the edge of a crack, undergoes reflection and transmits through the main plate as well as the other sub-plate. The mode transmitted through the sub-plate has been termed the 'Turning Lamb Mode' (TLM). Furthermore, a mode-converted mode also propagates along with the TLM; this mode has been termed the 'Mode Converted Turning Lamb Mode' (MCTLM). Reflection and transmission characteristics of the fundamental Lamb modes in aluminium sub-plates were studied at frequencies of 150 kHz, 175 kHz, and 200 kHz. Experiments conducted to validate the observations made in the numerical simulations confirmed that the transmission and reflection characteristics depend on the thickness ratio. From this study it is surmised that when a Lamb mode propagates through a plate containing a horizontal crack, the TLM and the MCTLM start propagating from one sub-plate to the other at the rear edge of the crack, and the amplitude of these modes depends on the location of the crack across the plate thickness. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Encapsulation efficiency of CdSe/ZnS quantum dots by liposomes determined by thermal lens microscopy

    PubMed Central

    Batalla, Jessica; Cabrera, Humberto; San Martín-Martínez, Eduardo; Korte, Dorota; Calderón, Antonio; Marín, Ernesto

    2015-01-01

    In this study the encapsulation of core-shell carboxyl CdSe/ZnS quantum dots (QDs) by phospholipid liposome complexes is presented. Encapsulation makes the quantum dots water-soluble and photostable. Fluorescence self-quenching of the QDs inside the liposomes was observed; therefore, thermal lens microscopy (TLM) was found to be a useful tool for measuring the encapsulation efficiency of the QDs by the liposomes, for which an optimum value of 36% was determined. The obtained limit of detection (LOD) for determining QD concentration by TLM was 0.13 nM. Moreover, the encapsulated QDs showed no prominent cytotoxicity toward the breast cancer cell line MDA-MB-231. This study was supported by UV-visible spectroscopy, high-resolution transmission electron microscopy (HRTEM) and dynamic light scattering (DLS) measurements. PMID:26504640

  3. Spitzer Telemetry Processing System

    NASA Technical Reports Server (NTRS)

    Stanboli, Alice; Martinez, Elmain M.; McAuley, James M.

    2013-01-01

    The Spitzer Telemetry Processing System (SirtfTlmProc) was designed to address the objectives of JPL's Multi-mission Image Processing Lab (MIPL) in processing spacecraft telemetry and distributing the resulting data to the science community. To minimize costs and maximize operability, the software design focused on automated error recovery, performance, and information management. The system processes telemetry from the Spitzer spacecraft and delivers Level 0 products to the Spitzer Science Center. SirtfTlmProc is a unique system, with automated error notification and recovery and a real-time continuous service that can go quiescent after periods of inactivity. The software can process 2 GB of telemetry and deliver Level 0 science products to the end user in four hours. It provides analysis tools so the operator can manage the system and troubleshoot problems. It automates telemetry processing in order to reduce staffing costs.

  4. Risk factors for islet loss during culture prior to transplantation.

    PubMed

    Kin, Tatsuya; Senior, Peter; O'Gorman, Doug; Richer, Brad; Salam, Abdul; Shapiro, Andrew Mark James

    2008-11-01

    Culturing islets can add great flexibility to a clinical islet transplant program. However, a reduction in islet mass has frequently been observed during culture, and its degree varies. The aim of this study was to identify the risk factors associated with significant islet loss during culture. One hundred four islet preparations cultured with the intention of transplantation constituted this study. After culture for 20 h (median), islet yield significantly decreased from 363,309 ± 12,647 to 313,035 ± 10,862 islet equivalents (IE) (mean ± SE), accompanied by a reduction in packed tissue volume from 3.9 ± 0.1 to 3.0 ± 0.1 ml and in islet index (IE/islet particle count) from 1.20 ± 0.04 to 1.05 ± 0.04. Culture did not markedly alter islet purity or the percentage of trapped islets. Morphology score and viability were significantly improved after culture. Of the 104 islet preparations, 37 suffered a substantial islet loss (>20%) over culture. Factors significantly associated with risk of islet loss identified by univariate analysis were longer cold ischemia time, two-layer method (TLM) preservation, lower islet purity, and higher islet index. Multivariate analysis revealed that the independent predictors of islet loss were higher islet index and the use of TLM. This study provides novel information on the link between donor and isolation factors and islet loss during culture.

  5. Report of Survey Conducted at Bell Helicopter Textron, Inc., Fort Worth, Texas

    DTIC Science & Technology

    1988-10-01

    Automated Tape Laying … automated tape laying for the lower wing skin of the V-22 aircraft. BHTI uses a 10-axis Ingersoll tape laying machine (TLM) which has up to a +30… [indexing fragment; table-of-contents lines omitted]

  6. Trunk lean mass and its association with 4 different measures of thoracic kyphosis in older community dwelling persons

    PubMed Central

    Yamamoto, J.; Bergstrom, J.; Davis, A.; Wing, D.; Schousboe, J. T.; Nichols, J. F.

    2017-01-01

    Background The causes of age-related hyperkyphosis (HK) include osteoporosis, but only 1/3 of those most severely affected have vertebral fractures, suggesting that there are other important, and potentially modifiable causes. We hypothesized that muscle mass and quality may be important determinants of kyphosis in older persons. Methods We recruited 72 persons >65 years to participate in a prospective study designed to evaluate kyphosis and fall risk. At the baseline visit, participants had their body composition measures completed using Dual Energy X-ray Absorptiometry (DXA). They had kyphosis measured in either the standing [S] or lying [L] position: 1) Cobb angle from DXA [L]; 2) Debrunner kyphometer [S]; 3) architect’s flexicurve ruler [S]; and 4) blocks method [L]. Multivariable linear/logistic regression analyses were done to assess the association between each body composition and 4 kyphosis measures. Results Women (n = 52) were an average age of 76.8 (SD 6.7) and men 80.5 (SD 7.8) years. They reported overall good/excellent health (93%), the average body mass index was 25.3 (SD 4.6) and 35% reported a fall in the past year. Using published cut-offs, about 20–30% were determined to have HK. For the standing assessments of kyphosis only, after adjusting for age, sex, weight and hip BMD, persons with lower TLM were more likely to be hyperkyphotic. Conclusions Lower TLM is associated with HK in older persons. The results were stronger when standing measures of kyphosis were used, suggesting that the effects of muscle on thoracic kyphosis are best appreciated under spinal loading conditions. PMID:28369088

  7. Presence of S100A8/Gr1-Positive Myeloid-Derived Suppressor Cells in Primary Tumors and Visceral Organs Invaded by Breast Carcinoma Cells.

    PubMed

    Tanriover, Gamze; Eyinc, Mehmet Berk; Aliyev, Elnur; Dilmac, Sayra; Erin, Nuray

    2018-04-26

    Increased S100A8/A9 expression in Gr1-positive cells has been shown in myeloid-derived suppressor cells and may play a role in the formation of a metastatic milieu. We aimed to determine S100A8/A9 expression, alone and with coexpression of Gr1 (a myeloid marker), in primary tumors and visceral tissues invaded by metastatic breast carcinoma. Female BALB/c mice were injected orthotopically with 4TLM, 4THM, or 67NR cells at 75%-80% confluence. Primary tumor, lung, liver, and spleen tissue samples were removed 26 days after injection. Peripheral blood smears, metastasis assays, and immunohistochemical staining were performed. S100A8/A9 immunoreactivity, alone or coexpressed with Gr1, was found in primary tumors formed by 4TLM and 4THM cells and was markedly higher than in primary tumors formed by nonmetastatic 67NR cells. Similarly, liver and lung tissues obtained from mice injected with 4TLM or 4THM cells were invaded by S100A8/A9-positive and Gr1-positive cells. Double-positive cells were markedly fewer in liver and lung tissues of animals injected with 67NR cells. S100A8/A9-positive cells were mostly localized in the red pulp of spleens. We observed an increased number of neutrophils in the peripheral blood of mice injected with metastatic breast carcinoma cells. Tumor-derived factors may increase S100A8/A9-positive cells locally and systemically, and S100A8/A9-positive cells may provide an appropriate milieu for the formation of metastasis. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. CD101, a Novel Echinocandin, Possesses Potent Antibiofilm Activity against Early and Mature Candida albicans Biofilms.

    PubMed

    Chandra, Jyotsna; Ghannoum, Mahmoud A

    2018-02-01

    Currently available echinocandins are generally effective against Candida biofilms, but the recent emergence of resistance has underscored the importance of developing new antifungal agents that are effective against biofilms. CD101 is a long-acting novel echinocandin with distinctive pharmacokinetic properties and improved stability and safety relative to other drugs in the same class. CD101 is currently being evaluated as a once-weekly intravenous (i.v.) infusion for the treatment of candidemia and invasive candidiasis. In this study, we determined (i) the effect of CD101 against early- and mature-phase biofilms formed by C. albicans in vitro and (ii) the temporal effect of CD101 on the formation of biofilms using time-lapse microscopy (TLM). Early- or mature-phase biofilms were formed on silicone elastomer discs, exposed to the test compounds for 24 h, and quantified by measuring their metabolic activity. Separate batches were observed under a confocal microscope or used to capture TLM images from 0 to 16 h. Measurements of metabolic activity showed that CD101 (0.25 or 1 μg/ml) significantly prevented adhesion-phase cells from developing into mature biofilms (P = 0.0062 and 0.0064, respectively) and eradicated preformed mature biofilms (P = 0.04 and 0.01, respectively) compared to untreated controls. Confocal microscopy showed significant reductions in biofilm thickness for both early and mature phases (P < 0.05). TLM showed that CD101 stopped the growth of adhesion- and early-phase biofilms within minutes. CD101-treated hyphae failed to grow into mature biofilms. These results suggest that CD101 may be effective in the prevention and treatment of biofilm-associated nosocomial infections. Copyright © 2018 Chandra and Ghannoum.

  9. Reconstructive transoral laser microsurgery for posterior glottic web with stenosis.

    PubMed

    Atallah, Ihab; Manjunath, M Krishniah; Omari, Ahmad Al; Righini, Christian Adrien; Castellanos, Paul F

    2017-03-01

    To demonstrate that reconstructive transoral laser microsurgical (R-TLM) techniques can be used for the treatment of symptomatic laryngeal posterior glottic web-based stenosis (PGWS) in a large cohort of patients utilizing a postcricoid mucosal advancement flap (PCMAF). Retrospective cohort review. A consecutive series of patients with PGWS who underwent R-TLM using a PCMAF were reviewed for outcomes. After laser excision of the PGWS scar and mobilization of fixed cricoarytenoid joints, a PCMAF was raised using microinstruments and a scanning free-beam CO2 laser. The flap was advanced and attached over the scar bed using a technique with multiple novel features that make it easy to adopt. Fifty-two patients were treated. Of the cases, 42.3% had a tracheostomy at presentation with grade II to IV PGWS, and 46% of cases had grade III to IV PGWS. In all cases, R-TLM was the only treatment approach. No open reconstructions were performed. No airway stents were used. Patients without tracheostomy, regardless of the grade of stenosis, did not require a tracheostomy to undergo this operation. All tracheostomy patients were successfully decannulated. All patients without a tracheostomy had significant improvement of their respiratory symptoms on the Dyspnea Index (mean Δ = 14.75, P value <.01). R-TLM using the PCMAF is a feasible, safe, and effective alternative to open approaches for airway reconstruction for PGWS. This novel transoral technique includes a much simpler endoscopic suturing alternative to knot tying, among other new features. It is reproducible and reliable for laryngologists familiar with laryngeal microsurgery. Level of evidence: 4. Laryngoscope, 127:685-690, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  10. Secretomes reveal several novel proteins as well as TGF-β1 as the top upstream regulator of metastatic process in breast cancer.

    PubMed

    Erin, Nuray; Ogan, Nur; Yerlikaya, Azmi

    2018-03-20

    Metastatic breast cancer is resistant to many conventional treatments, and novel therapeutic targets are needed. We previously isolated subsets of 4T1 murine breast cancer cells that metastasized to liver (4TLM), brain (4TBM), and heart (4THM). Among these cells, 4TLM is the most aggressive, demonstrating a mesenchymal phenotype. Here we compared proteins secreted from 4TLM, 4TBM, and 4THM cells with those of the poorly metastatic 67NR cells to detect differentially secreted factors involved in organ-specific metastasis. A label-free LC-MS/MS proteomic technique was used to detect the differentially secreted proteins. Eighty-five of over 500 secreted proteins were significantly altered in metastatic breast cancer cells. Differential expression of several proteins such as fibulin-4, Bone Morphogenetic Protein 1, TGF-β1, MMP-3, MMP-9, and Thymic Stromal Lymphopoietin was further verified using ELISA or Western blotting. Many of these identified proteins were also present in human metastatic breast carcinomas. Annexin A1 and A5, laminin beta 1, and neutral alpha-glucosidase AB were commonly found in at least three of the six studies examined here. Ingenuity Pathway Analysis showed that proteins differentially secreted from metastatic cells are involved primarily in carcinogenesis and that TGF-β1 is the top upstream regulator in all metastatic cells. Cells that metastasized to different organs displayed significant differences in several of the secreted proteins; among these were fibronectin, insulin-like growth factor-binding protein 7, and procollagen-lysine, 2-oxoglutarate 5-dioxygenase 1. On the other hand, many exosomal proteins were common to all metastatic cells, demonstrating the involvement of key universal factors in the distant metastatic process.

  11. Phonons around a soliton in a continuum model of t-(CH)x

    NASA Astrophysics Data System (ADS)

    Ono, Y.; Terai, A.; Wada, Y.

    1986-05-01

    The eigenvalue problem for phonons around a soliton in a continuum model of trans-polyacetylene t-(CH)x, the so-called TLM model (Takayama et al., 1980), is reinvestigated using a kernel which satisfies the correct boundary condition. The three localized modes are reproduced, two with even parity and one with odd parity. The phase-shift analysis of the extended modes confirms their existence if the one-dimensional version of Levinson's theorem is applicable to the present problem. It is found that the phase shifts of even and odd modes differ from each other in the long-wavelength limit. The conclusion of Ito et al. (1984), that the scattering of phonons by the soliton is reflectionless, has to be modified in this limit, where phonons suffer reflection from the soliton.
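
    For orientation, the TLM (Takayama-Lin-Liu-Maki) continuum Hamiltonian and its soliton solution in a standard form (normalization conventions vary between papers; this is quoted from the general literature, not reproduced from the article):

        H = \int dx\, \Psi^{\dagger}(x)\left[-i\hbar v_F\,\sigma_3\,\partial_x + \Delta(x)\,\sigma_1\right]\Psi(x)
            + \frac{1}{\pi\lambda\hbar v_F}\int dx\,\Delta(x)^2,
        \qquad
        \Delta_{\mathrm{sol}}(x) = \Delta_0 \tanh(x/\xi),\quad \xi = \hbar v_F/\Delta_0 .

    The phonons discussed in the abstract are the eigenmodes of small fluctuations \delta\Delta(x,t) about \Delta_{\mathrm{sol}}.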

  12. Cd1-xZnxTe photodetectors with transparent conductive ZnO contacts

    NASA Astrophysics Data System (ADS)

    Tang, Ke; Huang, Jian; Lu, Yuanxi; Hu, Yan; Shen, Yibin; Zhang, Jijun; Gu, Qingmiao; Wang, Linjun; Lu, Yicheng

    2018-03-01

    High-quality Cd1-xZnxTe (CZT) films were prepared using the close-spaced sublimation (CSS) technique. CZT film UV (ultraviolet) photodetectors were fabricated with B and Ga co-doped ZnO (BGZO) transparent conductive interdigitated contacts. The contact properties of BGZO/CZT were investigated by the transmission line model (TLM). The results indicate that a good ohmic contact is formed between BGZO and CZT, with a very low contact resistivity of about 0.26 Ω·cm^2. Compared with CZT photodetectors with Au contacts, the detectors with BGZO contacts show a higher UV photoresponse.
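
    For orientation, the TLM contact analysis named above is, in essence, a straight-line fit: the resistance measured between two pads grows linearly with their spacing d, R(d) = 2 R_c + (R_sh / W) d, so the slope gives the sheet resistance and the intercept the contact resistance. A minimal C++ sketch follows (the data points and pad width are illustrative assumptions, not the paper's measurements):

        // TLM contact-resistance extraction by least squares: slope = Rsh/W,
        // intercept = 2*Rc, transfer length LT = Rc*W/Rsh, rho_c ~ Rc*LT*W.
        // All data values below are invented for illustration.
        #include <cstdio>

        int main() {
            const double W = 100e-4;                        // pad width, cm (assumed)
            const double d[] = {5e-4, 10e-4, 20e-4, 40e-4}; // pad spacings, cm (assumed)
            const double R[] = {12.1, 18.9, 32.4, 59.7};    // measured R, ohm (assumed)
            const int n = 4;

            double sx = 0, sy = 0, sxx = 0, sxy = 0;
            for (int i = 0; i < n; ++i) {
                sx += d[i]; sy += R[i]; sxx += d[i] * d[i]; sxy += d[i] * R[i];
            }
            const double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx); // Rsh/W
            const double icpt  = (sy - slope * sx) / n;                     // 2*Rc
            const double Rsh   = slope * W;   // sheet resistance, ohm/sq
            const double Rc    = icpt / 2.0;  // contact resistance, ohm
            const double LT    = Rc / slope;  // transfer length, cm
            const double rho_c = Rc * LT * W; // specific contact resistivity, ohm*cm^2

            std::printf("Rsh = %.1f ohm/sq  Rc = %.2f ohm  LT = %.2f um  "
                        "rho_c = %.2e ohm*cm^2\n", Rsh, Rc, LT * 1e4, rho_c);
            return 0;
        }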

  13. Conformational and mechanical changes of DNA upon transcription factor binding detected by a QCM and transmission line model.

    PubMed

    de-Carvalho, Jorge; Rodrigues, Rogério M M; Tomé, Brigitte; Henriques, Sílvia F; Mira, Nuno P; Sá-Correia, Isabel; Ferreira, Guilherme N M

    2014-04-21

    A novel quartz crystal microbalance (QCM) analytical method is developed based on the transmission line model (TLM) algorithm to analyze the binding of transcription factors (TFs) to immobilized DNA oligoduplexes. The method is used to characterize the mechanical properties of biological films through the estimation of the film dynamic shear moduli, G' and G'', and the film thickness. Using the Saccharomyces cerevisiae transcription factor Haa1 (Haa1DBD) as a biological model, two sensors were prepared by immobilizing DNA oligoduplexes, one containing the Haa1 recognition element (HRE(wt)) and another with a random sequence (HRE(neg)) used as a negative control. The immobilization of the DNA oligoduplexes was followed in real time, and we show that the DNA strands initially adsorb with low or no tilting, lying flat close to the surface, and then lift off the surface, leading to final film tilting angles of 62.9° and 46.7° for HRE(wt) and HRE(neg), respectively. Furthermore, we show that the binding of Haa1DBD to HRE(wt) leads to a more ordered and compact film and forces a 31.7° bending of the immobilized HRE(wt) oligoduplex. This work demonstrates the suitability of the QCM to monitor the specific binding of TFs to immobilized DNA sequences and provides an analytical methodology to study protein-DNA biophysics and kinetics.
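
    For context, the rigid-thin-film limiting case of the QCM response is the well-known Sauerbrey relation (the TLM used above generalizes it to viscoelastic films characterized by G' and G''):

        \Delta f = -\frac{2 f_0^{2}}{A\sqrt{\rho_q\,\mu_q}}\,\Delta m,

    where f_0 is the fundamental resonance frequency, A the active electrode area, and \rho_q, \mu_q the density and shear modulus of quartz.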

  14. Fostering Leadership Skills in Pre-Service Teachers

    ERIC Educational Resources Information Center

    Xu, Yuejin; Patmor, George

    2012-01-01

    Teacher leadership is about empowering teachers to take a more active role in school improvement. Current pathways to teacher leadership, namely the Teacher Leader Master (TLM) degree program and teacher-led professional development, mainly target in-service teachers. Less attention has been paid to teacher leadership training in current teacher…

  15. Trunk lean mass and its association with 4 different measures of thoracic kyphosis in older community dwelling persons.

    PubMed

    Yamamoto, J; Bergstrom, J; Davis, A; Wing, D; Schousboe, J T; Nichols, J F; Kado, D M

    2017-01-01

    The causes of age-related hyperkyphosis (HK) include osteoporosis, but only 1/3 of those most severely affected have vertebral fractures, suggesting that there are other important, and potentially modifiable causes. We hypothesized that muscle mass and quality may be important determinants of kyphosis in older persons. We recruited 72 persons >65 years to participate in a prospective study designed to evaluate kyphosis and fall risk. At the baseline visit, participants had their body composition measures completed using Dual Energy X-ray Absorptiometry (DXA). They had kyphosis measured in either the standing [S] or lying [L] position: 1) Cobb angle from DXA [L]; 2) Debrunner kyphometer [S]; 3) architect's flexicurve ruler [S]; and 4) blocks method [L]. Multivariable linear/logistic regression analyses were done to assess the association between each body composition and 4 kyphosis measures. Women (n = 52) were an average age of 76.8 (SD 6.7) and men 80.5 (SD 7.8) years. They reported overall good/excellent health (93%), the average body mass index was 25.3 (SD 4.6) and 35% reported a fall in the past year. Using published cut-offs, about 20-30% were determined to have HK. For the standing assessments of kyphosis only, after adjusting for age, sex, weight and hip BMD, persons with lower TLM were more likely to be hyperkyphotic. Lower TLM is associated with HK in older persons. The results were stronger when standing measures of kyphosis were used, suggesting that the effects of muscle on thoracic kyphosis are best appreciated under spinal loading conditions.

  16. Copper-Based Ohmic Contacts for the Si/SiGe Heterojunction Bipolar Transistor Structure

    NASA Technical Reports Server (NTRS)

    Das, Kalyan; Hall, Harvey

    1999-01-01

    Silicon-based heterojunction bipolar transistors (HBTs) with a SiGe base are potentially important devices for high-speed and high-frequency microelectronics. These devices are particularly attractive as they can be fabricated using standard Si processing technology. However, in order to realize the full potential of devices fabricated in this material system, it is essential to be able to form low-resistance ohmic contacts using low-thermal-budget process steps with full compatibility with VLSI/ULSI processing. Therefore, a study was conducted in order to better understand contact formation and to develop optimized low-resistance contacts to layers with doping densities corresponding to the p-type SiGe base and n-type Si emitter regions of the HBTs. These as-grown doped layers were implanted with BF2 up to 1 x 10^16/cm^2 and As up to 5 x 10^15/cm^2, both at 30 keV, for the p-type SiGe base and n-type Si emitter layers, respectively, in order to produce a low-sheet-resistance surface layer. Standard transfer length method (TLM) contact pads on both p- and n-type layers were deposited using an e-beam evaporated layered structure of Ti/Cu/Ti/Al (25 A/1500 A/250 A/1000 A). The TLM pads were delineated by a photoresist lift-off procedure. These contacts in the as-deposited state were ohmic, with specific contact resistances for the highest implant doses of the order of 10^-7 ohm-cm^2 and lower.

  17. SoCRocket: A Virtual Platform for SoC Design

    NASA Astrophysics Data System (ADS)

    Fossati, Luca; Schuster, Thomas; Meyer, Rolf; Berekovic, Mladen

    2013-08-01

    Both in the commercial and in the aerospace domain, the continuous increase of transistor density on a single die is leading towards the production of more and more complex systems on a single chip, with an increasing number of components. This led to the introduction of the System-on-Chip (SoC) architecture, which integrates all the elements of a full system on a single circuit. This drive for efficient utilization of the available silicon has triggered several paradigm shifts in system design. Similarly to what happened in the early 1990s, when VHDL and Verilog took over from schematic design, today SystemC and Transaction Level Modeling [1] are about to raise the design abstraction level further. Such descriptions have to be accurate enough to describe the entire system throughout the phases of its development, and have to provide enough flexibility to be refined iteratively up to the point where the actual device can be produced using current process technology. Besides requiring new languages and methodologies, the complexity of current and future SoCs (SCOC3 [16] and NGMP [5] are examples in the space domain) forces the SoC design process to rely on pre-designed or third-party components. Components obtained from different providers, and even those designed by different teams of the same company, may be heterogeneous in several aspects: design domains, interfaces, abstraction levels, granularity, etc. Therefore, component integration is required at system level. Only by applying design re-use is it possible to design such complex SoCs successfully and on time. This transition to new languages and design methods is also motivated by the implementation in software of an increasing amount of system functionality. Hence the need for methodologies that enable early software development and allow the analysis of the performance of the combined Hw/Sw system, as their design and configuration cannot be performed separately. Virtual Prototyping is a key approach in this sense, enabling embedded software developers to start development earlier in the system design cycle and cutting the dependency on the physical system hardware. In order to successfully implement the described methodologies, access to a wide selection of IP-Cores (and related SystemC/TLM models) and to the latest Electronic Design Automation (EDA, [17]) tools is required. On the one hand, for what concerns the European Space landscape, such IP-Cores are provided by the European Space Agency [4] and a few other suppliers (e.g. Aeroflex Gaisler with GRLIB [2]). On the other hand, for what concerns the related high-abstraction models and design methodologies (partly depicted in Figure 1), the European Space Agency, through the Technische Universität Braunschweig, has started the development of the SoCRocket Virtual Platform [8]. Together with the Virtual Platform infrastructure, SoCRocket contains a library of IP-Core models. The SoCRocket library has been built around the TrapGen LEON instruction set simulator [15]. The library contains a variety of SystemC simulation models such as caches, memory management unit, AMBA interconnect, memory controller, memories, interrupt controller, timer and more. All models are TLM2.0 compliant and come in both loosely-timed and approximately-timed coding styles.
As presented in more detail later, the runtime reconfiguration, the completeness of the tools and models, and the fact that all simulation IPs have a freely available RTL counterpart differentiate SoCRocket from other commercially available Virtual Platforms. Moreover, due to their TLM2.0 compliance, the provided models are not bound to the SoCRocket environment but can be used with alternative tools, such as Cadence Virtual Platform [3] or Synopsys Platform Architect [10]. The paper is organized as follows: Section 2 presents the architecture of SoCRocket and the related library of SystemC models. Finally, Section 3 shows how SoCRocket was used to optimize the design of a LEON3-based SoC targeted at the execution of an implementation of the CCSDS standard no. 123 for the lossless compression of hyperspectral images.
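
    For orientation, the flavour of the TLM2.0 models mentioned above can be conveyed by a minimal, generic SystemC sketch (not SoCRocket code; it assumes a SystemC 2.3+ installation with the TLM-2.0 headers): one initiator writes a word to a memory-like target through a blocking transport call, the core mechanism of the loosely-timed coding style:

        // Generic TLM-2.0 loosely-timed sketch: initiator -> blocking write -> memory.
        #include <cstring>
        #include <iostream>
        #include <systemc>
        #include <tlm>
        #include <tlm_utils/simple_initiator_socket.h>
        #include <tlm_utils/simple_target_socket.h>

        struct Memory : sc_core::sc_module {
            tlm_utils::simple_target_socket<Memory> socket;
            unsigned char mem[256] = {0};
            SC_CTOR(Memory) : socket("socket") {
                socket.register_b_transport(this, &Memory::b_transport);
            }
            void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
                unsigned char* ptr = trans.get_data_ptr();
                const sc_dt::uint64 addr = trans.get_address();
                if (trans.get_command() == tlm::TLM_WRITE_COMMAND)
                    std::memcpy(&mem[addr], ptr, trans.get_data_length());
                else
                    std::memcpy(ptr, &mem[addr], trans.get_data_length());
                delay += sc_core::sc_time(10, sc_core::SC_NS); // modeled access time
                trans.set_response_status(tlm::TLM_OK_RESPONSE);
            }
        };

        struct Initiator : sc_core::sc_module {
            tlm_utils::simple_initiator_socket<Initiator> socket;
            SC_CTOR(Initiator) : socket("socket") { SC_THREAD(run); }
            void run() {
                int data = 42;
                sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
                tlm::tlm_generic_payload trans;
                trans.set_command(tlm::TLM_WRITE_COMMAND);
                trans.set_address(0x10);
                trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
                trans.set_data_length(sizeof data);
                trans.set_streaming_width(sizeof data);
                trans.set_byte_enable_ptr(nullptr);
                socket->b_transport(trans, delay); // blocking transport call
                wait(delay);                       // consume the annotated time
                std::cout << "write done at " << sc_core::sc_time_stamp() << std::endl;
            }
        };

        int sc_main(int, char**) {
            Initiator init("init");
            Memory mem("mem");
            init.socket.bind(mem.socket);
            sc_core::sc_start();
            return 0;
        }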

  18. Improving tribological properties of Ti-5Zr-3Sn-5Mo-15Nb alloy by double glow plasma surface alloying

    NASA Astrophysics Data System (ADS)

    Guo, Lili; Qin, Lin; Kong, Fanyou; Yi, Hong; Tang, Bin

    2016-12-01

    Molybdenum, an alloying element, was deposited and diffused on a Ti-5Zr-3Sn-5Mo-15Nb (TLM) substrate by double glow plasma surface alloying technology at 900, 950 and 1000 °C. The microstructure, composition distribution and micro-hardness of the Mo-modified layers were analyzed. Contact angles with deionized water and the wear behaviors of the samples against corundum balls in simulated human body fluids were investigated. Results show that the surface microhardness is significantly enhanced after alloying and increases with treatment temperature, and the contact angles are lowered to some extent. More importantly, compared to the as-received TLM alloy, the Mo-modified samples, especially the one treated at 1000 °C, exhibit a significant improvement in tribological properties in reciprocating wear tests, with lower specific wear rate and friction coefficient. In conclusion, Mo alloying treatment is an effective approach to obtaining excellent comprehensive properties, including optimal wear resistance and improved wettability, which ensure the lasting and safe application of titanium alloys as biomedical implants.

  19. Automatic Generation of Cycle-Approximate TLMs with Timed RTOS Model Support

    NASA Astrophysics Data System (ADS)

    Hwang, Yonghyun; Schirner, Gunar; Abdi, Samar

    This paper presents a technique for automatically generating cycle-approximate transaction level models (TLMs) for multi-process applications mapped to embedded platforms. It incorporates three key features: (a) basic-block-level timing annotation, (b) RTOS model integration, and (c) RTOS overhead delay modeling. The inputs to TLM generation are the application's C processes and their mapping to processors in the platform. A processor data model, including the pipelined datapath, memory hierarchy and branch delay model, is used to estimate basic block execution delays. The delays are annotated to the C code, which is then integrated with a generated SystemC RTOS model. Our abstract RTOS provides dynamic scheduling and inter-process communication (IPC) with processor- and RTOS-specific pre-characterized timing. Our experiments using an MP3 decoder and a JPEG encoder show that timed TLMs, with integrated RTOS models, can be automatically generated in less than a minute. Our generated TLMs simulated three times faster than real time and showed less than 10% timing error compared to board measurements.
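
    For orientation, feature (a), basic-block timing annotation, can be illustrated with a hedged SystemC sketch (not the generator's actual output; the block delays are invented): each basic block of the application code contributes its estimated delay to an accumulator, which is released to the simulation kernel at a synchronization point:

        // Sketch of basic-block timing annotation in a generated TLM process.
        // Delay values per basic block (BB) are invented for illustration.
        #include <systemc>

        struct DecoderProc : sc_core::sc_module {
            sc_core::sc_time acc = sc_core::SC_ZERO_TIME; // accumulated BB delays
            SC_CTOR(DecoderProc) { SC_THREAD(run); }
            void annotate(double ns) { acc += sc_core::sc_time(ns, sc_core::SC_NS); }
            void run() {
                for (int frame = 0; frame < 4; ++frame) {
                    annotate(120.0);       // BB1: parse frame header
                    if (frame % 2 == 0)
                        annotate(340.0);   // BB2: even-frame decode path
                    else
                        annotate(410.0);   // BB3: odd-frame decode path
                    wait(acc);             // sync point: consume accumulated time
                    acc = sc_core::SC_ZERO_TIME;
                }
                sc_core::sc_stop();
            }
        };

        int sc_main(int, char**) {
            DecoderProc p("p");
            sc_core::sc_start();
            return 0;
        }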

  20. TLM-PSD model for optimization of energy and power density of vertically aligned carbon nanotube supercapacitor

    PubMed Central

    Ghosh, Arunabha; Le, Viet Thong; Bae, Jung Jun; Lee, Young Hee

    2013-01-01

    Electrochemical capacitors with fast charging-discharging rates are very promising for the hybrid electric vehicle industry as well as for portable electronics. Complicated pore structures have been implemented in active materials to increase energy storage capacity, which often degrades the dynamic response of the ions. In order to understand this trade-off phenomenon, we report a theoretical model based on the transmission line model, further combined with a pore size distribution function. The model successfully explains how the pore length and pore radius of the active materials and the electrolyte conductivity affect the capacitance and dynamic performance of different capacitors. The power of the model was confirmed by comparison with experimental results from a micro-supercapacitor consisting of vertically aligned multiwalled carbon nanotubes (v-MWCNTs), which revealed a linear current increase up to a 600 V s−1 scan rate, demonstrating ultrafast dynamic behavior superior to a randomly entangled single-walled carbon nanotube device; this behavior is clearly explained by the theoretical model. PMID:24145831
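
    For orientation, the TLM-PSD idea can be sketched numerically: treat each pore as a de Levie transmission line and average the pore admittance over a log-normal pore-size distribution. The C++ example below is a minimal illustration under assumed parameter values, not the paper's model or data:

        // TLM-PSD sketch: electrode impedance from de Levie pore impedances
        // Z(r) = sqrt(R'/(jwC')) / tanh(L*sqrt(jw R' C')) averaged over a
        // log-normal pore-radius distribution. All parameters are assumptions.
        #include <cmath>
        #include <complex>
        #include <cstdio>

        using cd = std::complex<double>;
        const double PI = 3.141592653589793;

        cd pore_impedance(double w, double r, double L, double rho, double cdl) {
            const double Rp = rho / (PI * r * r); // ionic resistance per length
            const double Cp = 2.0 * PI * r * cdl; // interfacial capacitance per length
            const cd jw(0.0, w);
            const cd g = std::sqrt(jw * Rp * Cp); // propagation constant
            return std::sqrt(Rp / (jw * Cp)) / std::tanh(g * L); // coth = 1/tanh
        }

        int main() {
            const double L = 50e-6, rho = 0.5, cdl = 0.2; // m, ohm*m, F/m^2 (assumed)
            const double mu = std::log(20e-9), sg = 0.4;  // log-normal PSD (assumed)
            const double n_pores = 1e12;                  // pore count (assumed)

            for (int e = -1; e <= 5; ++e) {
                const double w = 2.0 * PI * std::pow(10.0, e);
                cd Y = 0.0;
                const int M = 200;                        // midpoint rule in ln(r)
                const double lo = mu - 4 * sg, hi = mu + 4 * sg, dx = (hi - lo) / M;
                for (int i = 0; i < M; ++i) {
                    const double x = lo + (i + 0.5) * dx; // x = ln r
                    const double p = std::exp(-0.5 * (x - mu) * (x - mu) / (sg * sg))
                                     / (sg * std::sqrt(2.0 * PI));
                    Y += p * dx / pore_impedance(w, std::exp(x), L, rho, cdl);
                }
                const cd Z = 1.0 / (n_pores * Y);
                std::printf("f = %8.2e Hz  |Z| = %10.3e ohm  phase = %6.1f deg\n",
                            w / (2.0 * PI), std::abs(Z), std::arg(Z) * 180.0 / PI);
            }
            return 0;
        }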

  1. Antifungal and Ichthyotoxic Sesquiterpenoids from Santalum album Heartwood.

    PubMed

    Kim, Tae Hoon; Hatano, Tsutomu; Okamoto, Keinosuke; Yoshida, Takashi; Kanzaki, Hiroshi; Arita, Michiko; Ito, Hideyuki

    2017-07-08

    In our continuing survey of biologically active natural products from the heartwood of Santalum album (Southwest Indian origin), we newly found potent fish-toxic activity of an n-hexane-soluble extract upon primary screening using killifish (medaka) and characterized α-santalol and β-santalol as the active components. The toxicity of α-santalol (median tolerance limit (TLm) after 24 h of 1.9 ppm) was comparable with that of a positive control, inulavosin (TLm after 24 h of 1.3 ppm). These fish-toxic compounds, including inulavosin, were also found to show a significant antifungal effect against a dermatophytic fungus, Trichophyton rubrum. Based on the similarity of the morphological changes of the immobilized Trichophyton hyphae in scanning electron micrographs between treatments with α-santalol and griseofulvin (used as the positive control), the inhibitory effect of α-santalol on mitosis (the antifungal mechanism proposed for griseofulvin) was assessed using sea urchin embryos. As a result, α-santalol was revealed to be a potent antimitotic agent acting through interference with microtubule assembly. These data suggest that α-santalol or sandalwood oil would be a promising candidate for further practical investigation as a therapeutic agent for cancers as well as fungal skin infections.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Dong-Suk; Kang, Yu-Jin; Park, Jae-Hyung

    Highlights: • We developed and investigated source/drain electrodes in oxide TFTs. • The Mo S/D electrodes showed good output characteristics. • Intrinsic TFT parameters were calculated by the transmission line method. - Abstract: This paper investigates the feasibility of a low-resistivity electrode material (Mo) for source/drain (S/D) electrodes in thin film transistors (TFTs). The effective resistances between Mo source/drain electrodes and amorphous zinc-tin-oxide (a-ZTO) thin film transistors were studied. Intrinsic TFT parameters were calculated by the transmission line method (TLM) using a series of TFTs with different channel lengths measured at a low source/drain voltage. The TFTs fabricated with Mo source/drain electrodes showed good transfer characteristics with a field-effect mobility of 10.23 cm^2/V·s. In spite of slight current-crowding effects, the Mo source/drain electrodes showed good output characteristics with a steep rise in the low drain-to-source voltage (V_DS) region.

  3. ArtDeco: a beam-deconvolution code for absolute cosmic microwave background measurements

    NASA Astrophysics Data System (ADS)

    Keihänen, E.; Reinecke, M.

    2012-12-01

    We present a method for beam-deconvolving cosmic microwave background (CMB) anisotropy measurements. The code takes as input the time-ordered data along with the corresponding detector pointings and known beam shapes, and produces as output the harmonic a^T_lm, a^E_lm, and a^B_lm coefficients of the observed sky. From these one can derive temperature and Q and U polarisation maps. The method is applicable to absolute CMB measurements with wide sky coverage, and is independent of the scanning strategy. We tested the code with extensive simulations, mimicking the resolution and data volume of the Planck 30 GHz and 70 GHz channels, but with exaggerated beam asymmetry. We applied it to multipoles up to l = 1700 and examined the results in both pixel space and harmonic space. We also tested the method in the presence of white noise. The code is released under the terms of the GNU General Public License and can be obtained from http://sourceforge.net/projects/art-deco/
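
    For orientation, the coefficients named above are those of the standard spherical-harmonic expansions of the CMB temperature and polarization fields (sign conventions for E/B vary between codes):

        T(\hat{n}) = \sum_{\ell m} a^{T}_{\ell m}\, Y_{\ell m}(\hat{n}), \qquad
        (Q \pm iU)(\hat{n}) = \sum_{\ell m} a^{\pm 2}_{\ell m}\; {}_{\pm 2}Y_{\ell m}(\hat{n}), \qquad
        a^{\pm 2}_{\ell m} = -\left(a^{E}_{\ell m} \pm i\, a^{B}_{\ell m}\right).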

  4. On the role of periodic structures in the lower jaw of the Atlantic bottlenose dolphin (Tursiops truncatus).

    PubMed

    Dible, S A; Flint, J A; Lepper, P A

    2009-03-01

    This paper proposes the application of band-gap theory to hearing in the Atlantic bottlenose dolphin (Tursiops truncatus). Using the transmission line modelling (TLM) technique and published computed tomography (CT) data of an Atlantic bottlenose dolphin, a series of sound propagation experiments has been carried out. It is shown that the teeth in the lower jaw can be viewed as a periodic array of scattering elements which result in the formation of an acoustic stop band (or band gap) that is angular-dependent. It is shown through simple and complex geometry simulations that performance enhancements such as improved gain and isolation between the two receive paths can be achieved. This mechanism has the potential to be exploited in direction-finding sonar.
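
    As a rough orientation (a textbook one-dimensional estimate, not the paper's three-dimensional, angle-dependent result), the first stop band of a periodic array of scatterers with spacing a in a medium of sound speed c is centered near the Bragg frequency

        f_1 \approx \frac{c}{2a} \qquad (\text{Bragg condition } \lambda = 2a),

    which is why a regularly spaced tooth row can act as an angle-dependent acoustic filter.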

  5. A Review of Communications Satellites and Related Spacecraft for Factors Influencing Mission Success. Volume 2

    DTIC Science & Technology

    1975-11-17

    [Indexing fragment: report front-matter acronym list, including TLM, T/M (telemetry).]

  6. Improvements in High Resolution Laryngeal Magnetic Resonance Imaging for Preoperative Transoral Laser Microsurgery and Radiotherapy Considerations in Early Lesions

    PubMed Central

    Ruytenberg, Thomas; Verbist, Berit M.; Vonk-Van Oosten, Jordi; Astreinidou, Eleftheria; Sjögren, Elisabeth V.; Webb, Andrew G.

    2018-01-01

    As the benefits, limitations, and contraindications of transoral laser microsurgery (TLM) in glottic carcinoma treatment become better defined, pretreatment imaging has become more important to assess the case-specific suitability of TLM and to predict functional outcomes, both for treatment consideration and for patient counseling. Magnetic resonance imaging (MRI) is the preferred modality for imaging such laryngeal tumors, even though imaging the larynx using MRI can be difficult. The first challenge is that there are no commercial radiofrequency (RF) coils specifically designed for imaging the larynx, and performance in terms of coverage and signal-to-noise ratio is compromised using general-purpose RF coils. Second, motion in the neck region induced by breathing, swallowing, and vessel pulsation can induce severe image artifacts, sometimes rendering the images unusable. In this paper, we design a dedicated RF coil array which allows high-quality, high-resolution imaging of the larynx. In addition, we show that introducing respiratory-triggered acquisition improves the diagnostic quality of the images by minimizing breathing and swallowing artifacts. Together, these developments enable robust, essentially artifact-free images of the full larynx with an isotropic resolution of 1 mm to be acquired within a few minutes. PMID:29928638

  7. A Study on Characterization of Light-Induced Electroless Plated Ni Seed Layer and Silicide Formation for Solar Cell Application

    NASA Astrophysics Data System (ADS)

    Takaloo, Ashkan Vakilipour; Joo, Seung Ki; Es, Firat; Turan, Rasit; Lee, Doo Won

    2018-03-01

    Light-induced electroless plating (LIEP) is an easy and inexpensive method that has been widely used for seed layer deposition in Nickel/Copper (Ni/Cu)-based metallization of solar cells. In this study, the material characterization of the Ni seed layer and of Ni silicide formation at different bath conditions and annealing temperatures on the n-side of a silicon diode structure has been examined to achieve optimum cell contacts. The effects of the morphology and chemical composition of the Ni film on its electrical conductivity were evaluated and described by a quantum mechanical model; a correlation was found between the theoretical and experimental conductivity of the Ni film. Residual stress and the phase transformation of Ni silicide as a function of annealing temperature were evaluated using Raman and XRD techniques. Finally, the transmission line measurement (TLM) technique was employed to determine the contact resistance of the Ni/Si stack after thermal treatment and to understand its correlation with the chemical-structural properties. Results indicated that a low-resistivity nickel mono-silicide (NiSi) phase with a contact resistance as low as 5 mΩ·cm^2 was obtained.

  8. Transoral laser microsurgery for locally advanced (T3-T4a) supraglottic squamous cell carcinoma: Sixteen years of experience.

    PubMed

    Vilaseca, Isabel; Blanch, José Luis; Berenguer, Joan; Grau, Juan José; Verger, Eugenia; Muxí, África; Bernal-Sprekelsen, Manuel

    2016-07-01

    Controversy exists regarding the treatment of advanced laryngeal cancer. The purpose of this study was to evaluate the oncologic and functional outcomes of T3 to T4a supraglottic squamous carcinomas treated with transoral laser microsurgery (TLM). We conducted a retrospective analysis from an SPSS database. Primary outcomes were locoregional control, overall survival (OS), disease-specific survival (DSS), laryngectomy-free survival, and function-preservation rates. Secondary objectives were the rates of tracheostomies and gastrostomies according to age. Risk factors for local control and larynx preservation were also evaluated. One hundred fifty-four consecutive patients were included in this study. Median follow-up was 40.7 ± 32.8 months. Five- and 10-year OS, DSS, and laryngectomy-free survival were 55.6% and 47%, 67.6% and 58.6%, and 75.2% and 59.5%, respectively. Paraglottic involvement was an independent factor for larynx preservation. Six patients (3.9%) needed a definitive tracheostomy, a gastrostomy, or both. The gastrostomy rate was higher in the group of patients above 65 years of age (p = .03). Five-year laryngectomy-free survival with preserved function was 74.5%. TLM constitutes a true alternative for organ preservation in locally advanced supraglottic carcinomas, with good oncologic and functional outcomes. © 2016 Wiley Periodicals, Inc. Head Neck 38: 1050-1057, 2016. © 2016 Wiley Periodicals, Inc.

  9. Predicting surface vibration from underground railways through inhomogeneous soil

    NASA Astrophysics Data System (ADS)

    Jones, Simon; Hunt, Hugh

    2012-04-01

    Noise and vibration from underground railways is a major source of disturbance to inhabitants near subways. To help designers meet noise and vibration limits, numerical models are used to understand vibration propagation from these underground railways. However, the models commonly assume the ground is homogeneous and neglect local variability in the soil properties. Such simplifying assumptions add a level of uncertainty to the predictions which is not well understood. The goal of the current paper is to quantify the effect of soil inhomogeneity on surface vibration. The thin-layer method (TLM) is suggested as an efficient and accurate means of simulating vibration from underground railways in arbitrarily layered half-spaces. Stochastic variability of the soil's elastic modulus is introduced using a Karhunen-Loève (K-L) expansion; the modulus is assumed to have a log-normal distribution and a modified exponential covariance kernel. The effect of horizontal soil variability is investigated by comparing the stochastic results for soils varied only in the vertical direction with those for soils with 2D variability. Results suggest that local soil inhomogeneity can significantly affect surface velocity predictions; 90 percent confidence intervals with average widths of 8 dB and peak values up to 12 dB are computed. This is a significant source of uncertainty and should be considered when using predictions from models that assume homogeneous soil properties. Furthermore, the effect of horizontal variability of the elastic modulus on the confidence interval appears to be negligible. This suggests that only vertical variation needs to be taken into account when modelling ground vibration from underground railways.
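
    For orientation, a standard truncated Karhunen-Loève construction of a log-normal random field of the kind described above (an illustrative textbook form; the paper's "modified exponential" kernel is not reproduced here) is

        E(x) = \exp\!\Big(\mu(x) + \sum_{i=1}^{M} \sqrt{\lambda_i}\,\varphi_i(x)\,\xi_i\Big),
        \qquad \xi_i \sim \mathcal{N}(0,1) \ \text{i.i.d.},

    where the eigenpairs (\lambda_i, \varphi_i) solve \int_D C(x,x')\,\varphi_i(x')\,dx' = \lambda_i\,\varphi_i(x) for the chosen covariance kernel C; exponentiating the truncated Gaussian exponent yields the log-normal marginals.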

  10. In-plane isotropic magnetic and electrical properties of MnAs/InAs/GaAs (111) B hybrid structure

    NASA Astrophysics Data System (ADS)

    Islam, Md. Earul; Akabori, Masashi

    2018-03-01

    We characterized the in-plane magnetic and electrical properties of a MnAs/InAs/GaAs (111)B hybrid structure grown by molecular beam epitaxy (MBE). We observed isotropic easy magnetization in two crystallographic in-plane directions, [2̅110] and [01̅10] of hexagonal MnAs, i.e. [1̅10] and [112̅] of cubic InAs. We also fabricated transmission line model (TLM) devices and observed almost isotropic electrical properties in the two crystallographic in-plane directions [1̅10] and [112̅] of cubic InAs. In addition, we attempted to fabricate and characterize lateral spin-valve (LSV) devices from the hybrid structure, from which we could roughly estimate the spin injection efficiency and the spin diffusion length at room temperature in the [112̅] direction. We believe that such hybrid structures are helpful for designing spintronic devices with good in-plane flexibility.

  11. Functional swallowing outcomes following treatment for oropharyngeal carcinoma: a systematic review of the evidence comparing trans-oral surgery versus non-surgical management.

    PubMed

    Dawe, N; Patterson, J; O'Hara, J

    2016-08-01

    Trans-oral surgical and non-surgical management options for oropharyngeal squamous cell carcinoma (OPSCC) appear to offer similar survival outcomes. Functional outcomes, in particular swallowing, have become of increasing interest in the debate regarding treatment options. Contemporary reviews on function following treatment frequently include surrogate markers and limit the value of comparative analysis. A systematic review was performed to establish whether direct comparisons of swallowing outcomes could be made between trans-oral surgical approaches (trans-oral laser microsurgery (TLM)/trans-oral robotic surgery (TORS)) and (chemo)radiotherapy ((C)RT). Systematic review. MEDLINE, Embase and Cochrane databases were interrogated using the following MeSH terms: antineoplastic protocols, chemotherapy, radiotherapy, deglutition disorders, swallowing, lasers, and trans-oral surgery. Two authors performed independent systematic reviews and consensus was sought if opinions differed. The WHO ICF classification was applied to generate analysis based around body functions and structure, activity limitations and participation restriction. Thirty-seven citations were included in the analysis. Twenty-six papers reported the outcomes for OPSCC treatment following primary (C)RT in 1377 patients, and 15 papers following contemporary trans-oral approaches in 768 patients. Meta-analysis was not feasible due to varying methodology and heterogeneity of outcome measures. Instrumental swallowing assessments were presented in 13/26 (C)RT versus 2/15 TLM/TORS papers. However, reporting methods of these studies were not standardised. This variety of outcome measures and the wide-ranging intentions of authors applying the measures in individual studies limit any practical direct comparisons of the effects of treatment on swallowing outcomes between interventions. From the current evidence, no direct comparisons could be made of swallowing outcomes between the surgical and non-surgical modalities. Swallowing is a multidimensional construct, and the range of assessments utilised by authors reflects the variety of available reporting methods. The MD Anderson Dysphagia Inventory is a subjective measure that allows limited comparison between the currently available heterogeneous data, and is explored in detail. The findings highlight that further research may identify the most appropriate tools for measuring swallowing in patients with OPSCC. Consensus should allow their standardised integration into future studies and randomised control trials. © 2015 John Wiley & Sons Ltd.

  12. A Digital Methodology for the Design Process of Aerospace Assemblies with Sustainable Composite Processes & Manufacture

    NASA Astrophysics Data System (ADS)

    McEwan, W.; Butterfield, J.

    2011-05-01

    The well established benefits of composite materials are driving a significant shift in design and manufacture strategies for original equipment manufacturers (OEMs). Thermoplastic composites have advantages over the traditional thermosetting materials with regards to sustainability and environmental impact, features which are becoming increasingly pertinent in the aerospace arena. However, when sustainability and environmental impact are considered as design drivers, integrated methods for part design and product development must be developed so that any benefits of sustainable composite material systems can be assessed during the design process. These methods must include mechanisms to account for process induced part variation and techniques related to re-forming, recycling and decommissioning, which are in their infancy. It is proposed in this paper that predictive techniques related to material specification, part processing and product cost of thermoplastic composite components, be integrated within a Through Life Management (TLM) product development methodology as part of a larger strategy of product system modeling to improve disciplinary concurrency, realistic part performance, and to place sustainability at the heart of the design process. This paper reports the enhancement of digital manufacturing tools as a means of drawing simulated part manufacturing scenarios, real time costing mechanisms, and broader lifecycle performance data capture into the design cycle. The work demonstrates predictive processes for sustainable composite product manufacture and how a Product-Process-Resource (PPR) structure can be customised and enhanced to include design intent driven by `Real' part geometry and consequent assembly.

  13. Cost comparison of open approach, transoral laser microsurgery and transoral robotic surgery for partial and total laryngectomies.

    PubMed

    Dombrée, Manon; Crott, Ralph; Lawson, Georges; Janne, Pascal; Castiaux, Annick; Krug, Bruno

    2014-10-01

    Activity-based costing is used to give better insight into the actual cost structure of open, transoral laser microsurgery (TLM) and transoral robotic surgery (TORS) supraglottic and total laryngectomies. Cost data were obtained from hospital administration, personnel and vendor structured interviews. A process map identified 17 activities, to which the detailed cost data were related. One-way sensitivity analyses on the patient throughput, the cost of the equipment or operative times were performed. The total costs for the supraglottic open (135-203 min), TLM (110-210 min) and TORS (35-130 min) approaches were 3,349 euro (3,193-3,499 euro), 3,461 euro (3,207-3,664 euro) and 5,650 euro (4,297-5,974 euro), respectively. For total laryngectomy, the overall costs were 3,581 euro (3,215-3,846 euro) for open and 6,767 euro (6,418-7,389 euro) for TORS. The TORS cost is mostly influenced by equipment (54%), whereas the other procedures are predominantly determined by personnel cost (about 45%). Even when we doubled the yearly case-load, used the shortest operative times or calculated without robot equipment costs, we did not reach cost equivalence. TORS is more expensive than the standard approaches, and its cost is mainly influenced by purchase and maintenance costs and the use of proprietary instruments. Further trials on long-term outcomes and costs following TORS are needed to evaluate its cost-effectiveness.
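
    The activity-based costing logic above amortizes equipment over the yearly case-load and scales personnel cost with operative time, and the one-way sensitivity analysis varies one input at a time. A minimal sketch with placeholder figures, not the study's cost data:

    ```python
    # Minimal sketch of activity-based costing: equipment cost is amortized
    # over the yearly case-load, personnel cost scales with operative time,
    # and a one-way sensitivity analysis varies the case-load alone.
    # Every figure below is an assumed placeholder, not a value from the study.
    def cost_per_case(equipment_per_year, cases_per_year,
                      personnel_rate_per_min, operative_min, consumables):
        return (equipment_per_year / cases_per_year
                + personnel_rate_per_min * operative_min
                + consumables)

    base = dict(equipment_per_year=180_000, cases_per_year=60,
                personnel_rate_per_min=9.0, operative_min=90, consumables=800)
    print("base cost:", cost_per_case(**base))

    # One-way sensitivity: double the case-load, all else fixed.
    print("doubled case-load:", cost_per_case(**{**base, "cases_per_year": 120}))
    ```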

  14. Two dimensional simulation of patternable conducting polymer electrode based organic thin film transistor

    NASA Astrophysics Data System (ADS)

    Nair, Shiny; Kathiresan, M.; Mukundan, T.

    2018-02-01

    Device characteristics of organic thin film transistors (OTFTs) fabricated with conducting polyaniline:polystyrene sulphonic acid (PANi-PSS) electrodes, patterned by the Parylene lift-off method, are systematically analyzed by way of two-dimensional numerical simulation. The device simulation was performed taking into account field-dependent mobility, a low-mobility layer at the electrode-semiconductor interface, the trap distribution in the pentacene film and trapped charge at the organic/insulator interface. The electrical characteristics of bottom-contact thin film transistors with PANi-PSS electrodes and a pentacene active layer are superior to those with palladium electrodes due to a lower charge injection barrier. The contact resistance was extracted in both cases by the transfer line method (TLM). The charge concentration and potential profile extracted from the two-dimensional numerical simulation were used to explain the observed electrical characteristics. The simulated device characteristics not only matched the experimental electrical characteristics, but also gave insight into the charge injection, transport and trap properties of the OTFTs as a function of different electrode materials from the perspective of transistor operation.
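
    The transfer line method referenced above extracts contact resistance by measuring total device resistance at several channel lengths and extrapolating the linear fit to zero length; the intercept gives twice the contact resistance. A minimal Python sketch of that extraction, using invented example data rather than the paper's measurements:

    ```python
    # Minimal sketch of a transfer-line-method (TLM) extraction: total device
    # resistance is measured for several channel lengths and fitted to
    # R_total = 2*R_c + (R_sheet / W) * L; the zero-length intercept gives
    # twice the contact resistance.  All data values are illustrative.
    import numpy as np

    W = 1000e-6                                      # channel width, m (assumed)
    L = np.array([5, 10, 20, 40, 80]) * 1e-6         # channel lengths, m
    R_total = np.array([210, 320, 540, 980, 1860])   # measured resistance, ohm

    slope, intercept = np.polyfit(L, R_total, 1)
    R_contact = intercept / 2                        # ohm (per contact)
    R_sheet = slope * W                              # ohm per square

    print(f"R_c = {R_contact:.1f} ohm, R_sheet = {R_sheet:.1f} ohm/sq")
    ```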

  15. History of the 4950th Test Group (N)

    DTIC Science & Technology

    1957-10-15


  16. Smoke and Obscurants; a Health and Environmental Effects Data Base Assessment. A First-Order, Environmental Screening and Ranking of Army Smokes and Obscurants

    DTIC Science & Technology

    1985-02-01

    SCREENING. A. Phosphorus smokes (P): 1. White phosphorus (WP), white phosphorus/felt wedges (WP/FW), plasticized white phosphorus (PWP), white phosphorus... exposure. The oral LD50 value for all phosphorus smokes was estimated as that for orthophosphoric acid. When WP or RP are combusted, the reaction... WP and RP smoke types were of insignificant toxicity (see Table 8). The TLm96 values for WP and RP were values for orthophosphoric acid, and the

  17. Rational enhancement of second-order nonlinearity: bis-(4-methoxyphenyl)hetero-aryl-amino donor-based chromophores: design, synthesis, and electrooptic activity.

    PubMed

    Davies, Joshua A; Elangovan, Arumugasamy; Sullivan, Philip A; Olbricht, Benjamin C; Bale, Denise H; Ewy, Todd R; Isborn, Christine M; Eichinger, Bruce E; Robinson, Bruce H; Reid, Philip J; Li, Xiaosong; Dalton, Larry R

    2008-08-13

    Two new highly hyperpolarizable chromophores, based on N,N-bis-(4-methoxyphenyl)aryl-amino donors and a phenyl-trifluoromethyl-tricyanofuran (CF3-Ph-TCF) acceptor linked together via π-conjugation through 2,5-divinylenethienyl moieties as the bridge, have been designed and synthesized successfully for the first time. The aryl moieties on the donor side of the chromophore molecules were varied to be thiophene and 1-n-hexylpyrrole. The linear and nonlinear optical (NLO) properties of all compounds were evaluated in addition to recording relevant thermal and electrochemical data. The properties of the two new molecules were comparatively studied. These results are critically analyzed along with two other compounds, reported earlier from our laboratories and our collaborator's, that contain (i) aliphatic chain-bearing aniline and (ii) dianisylaniline as donors, keeping the bridge (2,5-divinylenethienyl-) and the acceptor (CF3-Ph-TCF) constant. Trends in theoretically (density functional theory, DFT) predicted, zero-frequency gas-phase hyperpolarizability [β(0;0,0)] values are shown to be consistent with the trends in βHRS(−2ω;ω,ω), as measured by Hyper-Rayleigh Scattering (HRS), when corrected to zero frequency using the two-level model (TLM) approximation. Similarly, trends in poling efficiency data (r33/E(p)) and wavelength dispersion measured by reflection ellipsometry (using a Teng-Man apparatus) and attenuated total reflection (ATR) are found to fit the TLM and DFT predictions. A 3-fold enhancement in bulk nonlinearity (r33) is realized as the donor subunits are changed from alkylaniline to dianisylaminopyrrole donors. The results of these studies provide insight into the complicated effects on molecular hyperpolarizability of substituting heteroaromatic subunits into the donor group structures. These studies also demonstrate that, when frequency dependence and electric-field-induced ordering behavior are correctly accounted for, ab initio DFT-generated β(0;0,0) is effective as a predictor of changes in r33 behavior based on chromophore structure modification. Thus DFT can provide valuable insight into the electronic structure origin of complex optical phenomena in organic media.
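
    The two-level model correction mentioned above scales a hyperpolarizability measured at finite frequency to its zero-frequency limit through the dispersion factor ω0⁴/[(ω0² − ω²)(ω0² − 4ω²)]. A minimal sketch of that correction, with illustrative wavelengths rather than the paper's values:

    ```python
    # Minimal sketch of the two-level-model (TLM) dispersion correction used to
    # extrapolate a hyperpolarizability measured by HRS at frequency w to its
    # zero-frequency value beta(0;0,0).  With x = lam_max/lam_meas (= w/w0),
    # the TLM factor is 1/((1 - x^2)(1 - 4x^2)).  All inputs are illustrative.
    def beta_zero(beta_hrs, lam_max_nm, lam_meas_nm):
        """beta(-2w;w,w) = beta0 * w0^4 / ((w0^2 - w^2)(w0^2 - 4w^2))."""
        x = lam_max_nm / lam_meas_nm          # valid for x < 0.5 (off-resonance)
        dispersion = 1.0 / ((1 - x**2) * (1 - 4 * x**2))
        return beta_hrs / dispersion

    # e.g. a chromophore with lambda_max = 650 nm measured at 1907 nm
    # (beta in units of 1e-30 esu; numbers invented for illustration)
    print(beta_zero(beta_hrs=3000e-30, lam_max_nm=650, lam_meas_nm=1907))
    ```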

  18. Body composition and sarcopenia in patients with chronic obstructive pulmonary disease.

    PubMed

    Munhoz da Rocha Lemos Costa, Tatiana; Costa, Fabio Marcelo; Jonasson, Thaísa Hoffman; Moreira, Carolina Aguiar; Boguszewski, César Luiz; Borba, Victória Zeghbi Cochenski

    2018-04-01

    Changes in body composition are commonly present in chronic obstructive pulmonary disease (COPD). The main aim of this study was to evaluate changes in body composition and the prevalence of pre-sarcopenia and sarcopenia in patients with COPD compared with two control groups, and to correlate these parameters with indices of COPD severity (FEV1 and GOLD) and prognosis (BODE). This was a cross-sectional study in COPD patients (DG) who underwent body composition assessment by DXA. Two control groups were used: smokers without COPD (smokers group, SG) and healthy never-smokers (never smokers group, NSG). DG comprised 121 patients (65 women, mean age 67.9 ± 8.6 years). The percentage of total body fat mass (TFM) was significantly lower in DG in both genders, despite no difference in BMI. Both BMI and the relative skeletal muscle mass index (RSMI) decreased with worsening GOLD stage in men and women, as did the TFM and total lean mass (TLM) in men. As BODE worsened, BMI and RSMI decreased in both sexes, as did TLM in men. The prevalence of pre-sarcopenia in the DG was 46.3%, not different from controls. In DG, 12.4% were sarcopenic. Patients with sarcopenia were older and had a worse prognosis. The higher the BODE prognostic index, the higher the prevalence of sarcopenia (OR 3.5, 95% CI 1.06-11.56, p = 0.035). This study showed alterations in body composition parameters in patients with COPD, a high prevalence of sarcopenia, and an association of sarcopenia with a worse prognostic index.

  19. Self-aligned Ni-P ohmic contact scheme for silicon solar cells by electroless deposition

    NASA Astrophysics Data System (ADS)

    Lee, Eun Kyung; Lim, Dong Chan; Lee, Kyu Hwan; Lim, Jae-Hong

    2012-08-01

    We report a Ni-P metallization scheme for low-resistance ohmic contacts to n-type Si for silicon solar cells. As-deposited Ni-P contacts to n-type Si showed a specific contact resistance of 6.42 × 10⁻⁴ Ω·cm². The specific contact resistance decreased with increasing thermal annealing temperature. When the Ni-P contact was annealed at 600 °C for 30 min in ambient air, the specific contact resistance was greatly decreased, to 6.37 × 10⁻⁵ Ω·cm². The improved ohmic property was attributed to the decrease in the work function due to the formation of Ni silicides from Ni in-diffusion during the thermal annealing process. Effects of the annealing process on the electrical and crystal properties of the contacts were investigated by means of various resistivity measurements (circular transmission line method (c-TLM), 4-point probe), glancing angle x-ray diffraction (GAXRD), and x-ray photoelectron spectroscopy (XPS).
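
    For the circular transmission line method cited above, the resistance between a central dot contact and a surrounding ring depends on the gap, the sheet resistance and the transfer length; fitting measured resistances to this relation yields the specific contact resistance ρc = Rsh·LT². A minimal sketch under assumed, illustrative parameters:

    ```python
    # Minimal sketch of the circular TLM (c-TLM) model: resistance between an
    # inner dot of radius r1 and an outer ring at radius r2 = r1 + gap.
    # Fitting measured resistances to this expression gives the sheet
    # resistance R_sh and transfer length L_T, from which the specific contact
    # resistance follows as rho_c = R_sh * L_T**2.  All values are assumed.
    import numpy as np

    def ctlm_resistance(gap, R_sh, L_T, r1=100e-6):
        r2 = r1 + gap
        return (R_sh / (2 * np.pi)) * (np.log(r2 / r1) + L_T * (1/r1 + 1/r2))

    R_sh, L_T = 50.0, 2e-6                  # ohm/sq and m (illustrative)
    print("rho_c =", R_sh * L_T**2 * 1e4, "ohm*cm^2")  # 1 m^2 = 1e4 cm^2
    print("R(10 um gap) =", ctlm_resistance(10e-6, R_sh, L_T), "ohm")
    ```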

  20. Low Temperature Ohmic Contact Formation of Ni2Si on N-type 4H-SiC and 6H-SiC

    NASA Technical Reports Server (NTRS)

    Elsamadicy, A. M.; Ila, D.; Zimmerman, R.; Muntele, C.; Evelyn, L.; Muntele, I.; Poker, D. B.; Hensley, D.; Hirvonen, J. K.; Demaree, J. D.

    2001-01-01

    Nickel silicide (Ni2Si) is investigated as a possible ohmic contact to heavily nitrogen-doped n-type 4H-SiC and 6H-SiC. Nickel silicide was deposited via electron gun with various thicknesses on both the Si and C faces of the SiC substrates. The Ni2Si contacts were formed at room temperature as well as at elevated temperatures (400 to 1000 K). Contact resistivities and I-V characteristics were measured at temperatures between 100 and 700 C. To investigate the electrical properties, I-V characteristics were studied and the Transmission Line Method (TLM) was used to determine the specific contact resistance of the samples at each annealing temperature. Both Rutherford Backscattering Spectroscopy (RBS) and Auger Electron Spectroscopy (AES) were used for depth profiling of the Ni2Si, Si, and C. X-ray Photoemission Spectroscopy (XPS) was used to study the chemical structure of the Ni2Si/SiC interface.

  1. Methode d'identification parametrique pour la surveillance in situ des joints a recouvrement par propagation d'ondes vibratoires

    NASA Astrophysics Data System (ADS)

    Francoeur, Dany

    This doctoral thesis is part of CRIAQ (Consortium de recherche et d'innovation en aerospatiale du Quebec) projects oriented toward the development of embedded approaches for detecting defects in aeronautical structures. The originality of this thesis lies in the development and validation of a new method for detecting, quantifying and localizing a notch in a lap-joint structure by the propagation of vibration waves. The first part reviews the state of knowledge on defect identification in the context of Structural Health Monitoring (SHM), as well as lap-joint modeling. Chapter 3 develops the wave propagation model of a lap joint damaged by a notch, for a flexural wave in the mid-frequency range (10-50 kHz). To this end, a transmission line model (TLM) is built to represent a one-dimensional (1D) joint. This 1D model is then adapted to a two-dimensional (2D) joint under the assumption of a plane incident wavefront perpendicular to the joint. A parametric identification method is then developed to allow both the calibration of the healthy lap-joint model and the detection and characterization of the notch located on the joint. This method is coupled with an algorithm that performs an exhaustive search of the entire parameter space. This technique makes it possible to extract an uncertainty zone related to the parameters of the optimal model. A sensitivity study of the identification is also carried out. Several measurements on 1D and 2D lap joints were performed, allowing the study of the repeatability of the results and the variability of different damage cases. The results of this study first demonstrate that the proposed detection method is very effective and makes it possible to track damage progression. Very good notch quantification and localization results were obtained for the various joints tested (1D and 2D). It is expected that the use of Lamb waves would extend the validity range of the method to smaller damage. This work is aimed primarily at the in-situ monitoring of lap-joint structures, but other types of defects (such as disbonds) and complex structures can also be considered. Keywords: lap joint, in situ monitoring, damage localization and characterization
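
    The exhaustive parameter-space search described above can be sketched as a grid evaluation of a forward joint model against the measured response, retaining the near-optimal parameter sets as an uncertainty zone. The forward model and all numbers below are hypothetical stand-ins, not the thesis's actual lap-joint TLM:

    ```python
    # Minimal sketch of the exhaustive parametric identification: every
    # (depth, position) pair on a grid is scored against the measured
    # response, and parameter sets within a tolerance of the optimum are kept
    # as an "uncertainty zone".  model() is a hypothetical stand-in for the
    # lap-joint transmission line model; all numbers are illustrative.
    import itertools
    import numpy as np

    def model(depth, position, freqs):
        # Hypothetical forward model of the joint's frequency response.
        return np.cos(2 * np.pi * freqs * position) * np.exp(-depth * freqs / 50e3)

    freqs = np.linspace(10e3, 50e3, 200)           # 10-50 kHz band
    measured = model(0.3, 1.2e-4, freqs)           # synthetic "measurement"

    grid = itertools.product(np.linspace(0.1, 0.5, 41),      # notch depth
                             np.linspace(0.5e-4, 2e-4, 61))  # notch position
    scored = [(np.sum((model(d, p, freqs) - measured) ** 2), d, p)
              for d, p in grid]
    best = min(scored)
    zone = [(d, p) for err, d, p in scored if err <= best[0] + 1e-6]
    print("best fit:", best[1:], "uncertainty zone size:", len(zone))
    ```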

  2. The 1985 ARI Survey of Army Recruits: Tabular Description of NPS (active) Army Accessions. Volume 2

    DTIC Science & Technology

    1987-04-01


  3. Dysfunction in gap junction intercellular communication induces aberrant behavior of the inner cell mass and frequent collapses of expanded blastocysts in mouse embryos.

    PubMed

    Togashi, Kazue; Kumagai, Jin; Sato, Emiko; Shirasawa, Hiromitsu; Shimoda, Yuki; Makino, Kenichi; Sato, Wataru; Kumazawa, Yukiyo; Omori, Yasufumi; Terada, Yukihiro

    2015-06-01

    We investigated the role of gap junctions (GJs) in embryological differentiation, and observed the morphological behavior of the inner cell mass (ICM) by time-lapse movie observation (TLM) with gap junction inhibitors (GJis). ICR mouse embryos were exposed to two types of GJis in CZB medium: oleamide (0 to 50 μM) and 1-heptanol (0 to 10 mM). We compared the rate of blastocyst formation at embryonic day 4.5 (E4.5) with E5.5. We also observed and evaluated the times from the second cleavage to each embryonic developing stage by TLM. We investigated embryonic distribution of DNA, Nanog protein, and Connexin 43 protein with immunofluorescent staining. In the comparison of E4.5 with E5.5, inhibition of gap junction intercellular communication (GJIC) delayed embryonic blastocyst formation. The times from the second cleavage to blastocyst formation were significantly extended in the GJi-treated embryos (control vs with oleamide, 2224 ± 179 min vs 2354 ± 278 min, p = 0.013). Morphological differences were traced in control versus GJi-treated embryos until the hatching stage. Oleamide induced frequent severe collapses of expanded blastocysts (77.4 % versus 26.3 %, p = 0.0001) and aberrant ICM divisions connected to sticky strands (74.3 % versus 5.3 %, p = 0.0001). Immunofluorescent staining indicated Nanog-positive cells were distributed in each divided ICM. GJIC plays an important role in blastocyst formation, collapses of expanded blastocysts, and the ICM construction in mouse embryos.

  4. Identification of Thiotetronic Acid Antibiotic Biosynthetic Pathways by Target-directed Genome Mining.

    PubMed

    Tang, Xiaoyu; Li, Jie; Millán-Aguiñaga, Natalie; Zhang, Jia Jia; O'Neill, Ellis C; Ugalde, Juan A; Jensen, Paul R; Mantovani, Simone M; Moore, Bradley S

    2015-12-18

    Recent genome sequencing efforts have led to the rapid accumulation of uncharacterized or "orphaned" secondary metabolic biosynthesis gene clusters (BGCs) in public databases. This increase in DNA-sequenced big data has given rise to significant challenges in the applied field of natural product genome mining, including (i) how to prioritize the characterization of orphan BGCs and (ii) how to rapidly connect genes to biosynthesized small molecules. Here, we show that by correlating putative antibiotic resistance genes that encode target-modified proteins with orphan BGCs, we predict the biological function of pathway specific small molecules before they have been revealed in a process we call target-directed genome mining. By querying the pan-genome of 86 Salinispora bacterial genomes for duplicated house-keeping genes colocalized with natural product BGCs, we prioritized an orphan polyketide synthase-nonribosomal peptide synthetase hybrid BGC (tlm) with a putative fatty acid synthase resistance gene. We employed a new synthetic double-stranded DNA-mediated cloning strategy based on transformation-associated recombination to efficiently capture tlm and the related ttm BGCs directly from genomic DNA and to heterologously express them in Streptomyces hosts. We show the production of a group of unusual thiotetronic acid natural products, including the well-known fatty acid synthase inhibitor thiolactomycin that was first described over 30 years ago, yet never at the genetic level in regards to biosynthesis and autoresistance. This finding not only validates the target-directed genome mining strategy for the discovery of antibiotic producing gene clusters without a priori knowledge of the molecule synthesized but also paves the way for the investigation of novel enzymology involved in thiotetronic acid natural product biosynthesis.

  5. Studies on Phase Shifting Mechanism in Pulse Tube Cryocooler

    NASA Astrophysics Data System (ADS)

    Padmanabhan; Gurudath, C. S.; Srikanth, Thota; Ambirajan, A.; Basavaraj, SA; Dinesh, Kumar; Venkatarathnam, G.

    2017-02-01

    Pulse tube cryocoolers (PTC) are used extensively in spacecraft for applications such as sensor cooling due to their simple construction and long life owing to a fully passive cold head. Efforts at ISRO to develop a PTC for space use have resulted in a unit with a cooling capacity of 1 W at 80 K with an input of 45 W. This paper presents the results of a study with this PTC on the phase shifting characteristics of an inertance tube in conjunction with a reservoir. The aim was to obtain an optimum phase angle between the mass flow (ṁ) and dynamic pressure (p̃) at the pulse tube cold end that results in the largest possible heat lift from this unit. A theoretical model was developed using phasor analysis and a Transmission Line Model (TLM), and values of optimum frequency and phase angle were predicted for different mass flows. These were compared with experimental data from the PTC for different configurations of the inertance tube/reservoir at various frequencies and charge pressures. These studies were carried out to characterise an existing cryocooler and to design an optimised phase shifter with the aim of improving the performance with respect to specific power input.
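
    The transmission line model mentioned above treats the inertance tube as an acoustic line terminated by the reservoir compliance; the resulting input impedance sets the phase angle between dynamic pressure and mass flow at the cold end. A minimal lossless sketch with assumed geometry and impedances:

    ```python
    # Minimal sketch of a transmission-line model of an inertance tube
    # terminated by a reservoir: the input acoustic impedance
    # Zin = Z0 (ZL + Z0 tanh(gL)) / (Z0 + ZL tanh(gL)) fixes the phase angle
    # between dynamic pressure and mass flow at the cold end.  All gas
    # properties and geometry are illustrative, not from the paper.
    import numpy as np

    def input_impedance(freq, length, Z0, gamma_per_m, Z_load):
        g = gamma_per_m(freq) * length
        return Z0 * (Z_load + Z0 * np.tanh(g)) / (Z0 + Z_load * np.tanh(g))

    f = 60.0                                   # drive frequency, Hz (assumed)
    c = 340.0                                  # effective sound speed, m/s (assumed)
    Z0 = 5e6                                   # characteristic impedance (assumed)
    gamma = lambda f: 1j * 2 * np.pi * f / c   # lossless propagation constant
    Z_res = 1 / (1j * 2 * np.pi * f * 1e-9)    # reservoir as a pure compliance

    Zin = input_impedance(f, 2.5, Z0, gamma, Z_res)   # 2.5 m tube (assumed)
    print("phase(p/mdot) =", np.degrees(np.angle(Zin)), "deg")
    ```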

  6. Proceedings of NATO Advanced Research Workshop on the Formation, Transport and Consequences of Particles in Plasmas Held in Castera-Verduzan, France on 30 August-3 September 1993

    DTIC Science & Technology

    1993-09-03


  7. High-performance metal mesh/graphene hybrid films using prime-location and metal-doped graphene.

    PubMed

    Min, Jung-Hong; Jeong, Woo-Lim; Kwak, Hoe-Min; Lee, Dong-Seon

    2017-08-31

    We introduce high-performance metal mesh/graphene hybrid transparent conductive layers (TCLs) using prime-location and metal-doped graphene in near-ultraviolet light-emitting diodes (NUV LEDs). Despite the transparency and sheet resistance values being similar for hybrid TCLs, there were huge differences in the NUV LEDs' electrical and optical properties depending on the location of the graphene layer. We achieved better physical stability and current spreading when the graphene layer was located beneath the metal mesh, in direct contact with the p-GaN layer. We further improved the contact properties by adding a very thin Au mesh between the thick Ag mesh and the graphene layer to produce a dual-layered metal mesh. The Au mesh effectively doped the graphene layer to create a p-type electrode. Using Raman spectra, work function variations, and the transfer length method (TLM), we verified the effect of doping the graphene layer after depositing a very thin metal layer on the graphene layers. From our results, we suggest that the nature of the contact is an important criterion for improving the electrical and optical performance of hybrid TCLs, and the method of doping graphene layers provides new opportunities for solving contact issues in other semiconductor devices.

  8. Mechanisms of antimony adsorption onto soybean stover-derived biochar in aqueous solutions.

    PubMed

    Vithanage, Meththika; Rajapaksha, Anushka Upamali; Ahmad, Mahtab; Uchimiya, Minori; Dou, Xiaomin; Alessi, Daniel S; Ok, Yong Sik

    2015-03-15

    Limited mechanistic knowledge is available on the interaction of biochar with trace elements (Sb and As) that exist predominantly as oxoanions. Soybean stover biochars were produced at 300 °C (SBC300) and 700 °C (SBC700), and characterized by BET, Boehm titration, FT-IR, NMR and Raman spectroscopy. Bound protons were quantified by potentiometric titration, and two acidic sites were used to model the biochar by surface complexation modeling based on Boehm titration and NMR observations. The zero point of charge was observed at pH 7.20 and 7.75 for SBC300 and SBC700, respectively. Neither antimonate (Sb(V)) nor antimonite (Sb(III)) showed ionic strength dependency (0.1, 0.01 and 0.001 M NaNO3), indicating inner sphere complexation. Greater adsorption of Sb(III) and Sb(V) was observed for SBC300, which has a higher -OH content than SBC700. Sb(III) removal (85%) was greater than Sb(V) removal (68%). The maximum adsorption density for Sb(III) was calculated as 1.88 × 10⁻⁶ mol m⁻². The Triple Layer Model (TLM) successfully described surface complexation of Sb onto soybean stover-derived biochar at pH 4-9, and suggested the formation of monodentate mononuclear and binuclear complexes. Spectroscopic investigations by Raman, FT-IR and XPS further confirmed strong chemisorptive binding of Sb to biochar surfaces. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Treatment, Conservation and Restoration of the Bedouin Dyed Textiles in the Museum of Jordanian Heritage.

    NASA Astrophysics Data System (ADS)

    Abdel-Kareem, O.; Alfaisal, R.

    This study aims to establish and design effective methods to conserve two Bedouin dyed textile objects selected from the Museum of Jordanian Heritage and to improve the physical and environmental conditions in which the items are kept, to optimize their long-term chances of survival. The conservation processes used on the selected objects can serve as a guide for conservators conserving other similar textile objects. Investigations and analysis were used to identify the fibers and the extent of deterioration by noninvasive methods. Transmitted Light Microscopy (TLM) and Scanning Electron Microscopy with EDAX (SEM-EDAX) were used for identifying the fibers and the deterioration. The results showed that the textile artifacts studied were very dirty and exhibited white spots occupying cavities and holes, wrinkles and creases, and fiber damage. This damage may be due to improper display methods in the museum or to the incompatible environmental conditions surrounding the artifacts during exhibition, such as light, temperature, relative humidity, pollutants and microorganisms. For these reasons, the textile objects were cleaned using wet cleaning methods, which improved the physical and mechanical properties of the textile objects and returned them to their original shape as much as possible. The textile objects were then mounted and supported by stitching onto backing fabric stretched on wooden frames. Finally, and according to the requirements of the museum, the objects were displayed temporarily inside showcases in an aesthetically pleasing manner.

  10. An Investigation of Feasibility and Safety of Bi-Modal Stimulation for the Treatment of Tinnitus: An Open-Label Pilot Study.

    PubMed

    Hamilton, Caroline; D'Arcy, Shona; Pearlmutter, Barak A; Crispino, Gloria; Lalor, Edmund C; Conlon, Brendan J

    2016-12-01

    Tinnitus is the perception of sound in the absence of an external auditory stimulus. It is widely believed that tinnitus, in patients with associated hearing loss, is a neurological phenomenon primarily affecting the central auditory structures. However, there is growing evidence for the involvement of the somatosensory system in this form of tinnitus. For this reason it has been suggested that the condition may be amenable to bi-modal stimulation of the auditory and somatosensory systems. We conducted a pilot study to investigate the feasibility and safety of a device that delivers simultaneous auditory and somatosensory stimulation to treat the symptoms of chronic tinnitus. A cohort of 54 patients used the stimulation device for 10 weeks. Auditory stimulation was delivered via headphones and somatosensory stimulation was delivered via electrical stimulation of the tongue. Patient usage, logged by the device, was used to classify patients as compliant or noncompliant. Safety was assessed by reported adverse events and changes in tinnitus outcome measures. Response to treatment was assessed using tinnitus outcome measures: Minimum Masking Level (MML), Tinnitus Loudness Matching (TLM), and Tinnitus Handicap Inventory (THI). The device was well tolerated by patients and no adverse events or serious difficulties using the device were reported. Overall, 68% of patients met the defined compliance threshold. Compliant patients (N = 30) demonstrated statistically significant improvements in mean outcome measures after 10 weeks of treatment: THI (-11.7 pts, p < 0.001), TLM (-7.5 dB, p < 0.001), and MML (-9.7 dB, p < 0.001). The noncompliant group (N = 14) demonstrated no statistical improvements. This study demonstrates the feasibility and safety of a new bi-modal stimulation device and supports the potential efficacy of this new treatment for tinnitus. © 2016 Neuromod Devices Ltd. Neuromodulation: Technology at the Neural Interface published by Wiley Periodicals, Inc. on behalf of International Neuromodulation Society.

  11. ALOX12 polymorphisms are associated with fat mass but not peak bone mineral density in Chinese nuclear families.

    PubMed

    Xiao, W-J; He, J-W; Zhang, H; Hu, W-W; Gu, J-M; Yue, H; Gao, G; Yu, J-B; Wang, C; Ke, Y-H; Fu, W-Z; Zhang, Z-L

    2011-03-01

    Arachidonate 12-lipoxygenase (ALOX12) is a member of the lipoxygenase superfamily, which catalyzes the incorporation of molecular oxygen into polyunsaturated fatty acids. The products of ALOX12 reactions serve as endogenous ligands for peroxisome proliferator-activated receptor γ (PPARG). The activation of the PPARG pathway in marrow-derived mesenchymal progenitors stimulates adipogenesis and inhibits osteoblastogenesis. Our objective was to determine whether polymorphisms in the ALOX12 gene were associated with variations in peak bone mineral density (BMD) and obesity phenotypes in young Chinese men. All six tagging single-nucleotide polymorphisms (SNPs) in the ALOX12 gene were genotyped in a total of 1215 subjects from 400 Chinese nuclear families by allele-specific polymerase chain reaction. The BMD at the lumbar spine and hip, total fat mass (TFM) and total lean mass (TLM) were measured using dual-energy X-ray absorptiometry. The pairwise linkage disequilibrium among SNPs was measured, and the haplotype blocks were inferred. Both the individual SNP markers and the haplotypes were tested for an association with the peak BMD, body mass index, TFM, TLM and percentage fat mass (PFM) using the quantitative transmission disequilibrium test (QTDT). Using the QTDT, significant within-family association was found between the rs2073438 polymorphism in the ALOX12 gene and the TFM and PFM (P=0.007 and 0.012, respectively). Haplotype analyses were combined with our individual SNP results and remained significant even after correction for multiple testing. However, we failed to find significant within-family associations between ALOX12 SNPs and the BMD at any bone site in young Chinese men. Our present results suggest that the rs2073438 polymorphism of ALOX12 contributes to the variation of obesity phenotypes in young Chinese men, although we failed to replicate the association with the peak BMD variation in this sample. Further independent studies are needed to confirm our findings.

  12. Special Issue on a Fault Tolerant Network on Chip Architecture

    NASA Astrophysics Data System (ADS)

    Janidarmian, Majid; Tinati, Melika; Khademzadeh, Ahmad; Ghavibazou, Maryam; Fekr, Atena Roshan

    2010-06-01

    In this paper, a fast and efficient spare switch selection algorithm is presented for a reliable NoC architecture, called FERNA, based on a specific application mapped onto a mesh topology. Based on the ring concept used in FERNA, this algorithm achieves results equivalent to those of an exhaustive algorithm with much less run time while improving two parameters. The inputs to the FERNA algorithm for minimizing system response time and extra communication cost are derived from transaction-level simulation using SystemC TLM and from a mathematical formulation, respectively. The results demonstrate that improving the above-mentioned parameters advances whole-system reliability, which is calculated analytically. The mapping algorithm is also investigated as a factor affecting the extra bandwidth requirement and system reliability.

  13. Usability Operations on Touch Mobile Devices for Users with Autism.

    PubMed

    Quezada, Angeles; Juárez-Ramírez, Reyes; Jiménez, Samantha; Noriega, Alan Ramírez; Inzunza, Sergio; Garza, Arnulfo Alanis

    2017-10-14

    Autism Spectrum Disorder is a cognitive disorder that affects cognitive and motor skills; because of this, affected users may be unable to perform digital and fine motor tasks. It is necessary to create software applications that adapt to the abilities of these users. In recent years there has been an increase in research on the use of technology to support autistic users in developing their communication skills and improving learning. However, the usability of applications for disabled users is not assessed objectively, as the existing models do not consider interaction operators for disabled users. This article focuses on identifying the operations that can easily be performed by autistic users following the metrics of KLM-GOMS, TLM and FLM. In addition, users of typical development were included in order to compare both types of users. The experiment was carried out using four applications designed for autistic users. Participants were divided into groups: level 1 and level 2 autistic users, and a group of typically developing users. During the experiment, users performed a use case for each application, and the time needed to perform each task was measured. Results show that the easiest operations for autistic users are Keystroke (K), Drag (D), Initial Act (I) and Tapping (T).
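
    Keystroke-level models such as KLM-GOMS, TLM and FLM predict task duration as the sum of unit times for elementary operators. A minimal sketch of that calculation; the unit times below are assumed placeholders, not the calibrated values from this study:

    ```python
    # Minimal sketch of a keystroke-level timing estimate: a task is a
    # sequence of elementary operators, and its predicted duration is the
    # sum of per-operator unit times.  The unit times are assumed
    # placeholders, not the calibrated values from the study.
    UNIT_TIME_S = {
        "K": 0.28,   # keystroke
        "T": 0.30,   # tapping
        "D": 0.90,   # drag
        "I": 0.45,   # initial act (e.g., reaching for the screen)
        "M": 1.35,   # mental preparation
    }

    def predict_task_time(operators):
        return sum(UNIT_TIME_S[op] for op in operators)

    task = ["I", "M", "T", "D", "T"]        # hypothetical use-case sequence
    print(f"predicted time: {predict_task_time(task):.2f} s")
    ```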

  14. Toxicity and sublethal effects of No. 2 fuel oil on the supralittoral isopod Lygia exotica

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dillon, T.M.; Neff, J.M.; Warner, J.S.

    1978-09-01

    No. 2 fuel oil was of relatively low toxicity to the intertidal isopod Lygia exotica, as indicated by TLm values of over 100% for the WSF, and 73 ppm at 24 and 48 hours and 36.5 ppm at 96 hours for the OWD. Respiration was not significantly affected by short-term exposure to several concentrations of No. 2 fuel oil prepared as either a WSF or OWD. Lygia contaminated by a spill of No. 2 fuel oil and Bunker C residual oil contained high concentrations of dibenzothiophenes. It is not known whether the dibenzothiophenes were accumulated by the Lygia tissues or adsorbed to the exoskeleton. Therefore, the high mortality of Lygia following the spill cannot yet be attributed to the dibenzothiophenes.
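
    A median tolerance limit (TLm) such as those quoted above is the concentration at which half the test organisms survive a given exposure, commonly read off by interpolating percent survival against log concentration. A minimal sketch with invented bioassay data:

    ```python
    # Minimal sketch of estimating a median tolerance limit (TLm): linearly
    # interpolate percent survival against log10(concentration) between the
    # two test concentrations that bracket 50% survival.  The bioassay
    # numbers are invented for illustration.
    import numpy as np

    conc = np.array([10.0, 32.0, 56.0, 100.0])      # ppm
    survival = np.array([95.0, 80.0, 40.0, 10.0])   # % surviving at 96 h

    # np.interp needs increasing x, so interpolate on the reversed arrays.
    log_tlm = np.interp(50.0, survival[::-1], np.log10(conc)[::-1])
    print(f"96-h TLm ~ {10**log_tlm:.1f} ppm")
    ```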

  15. Nature vs nurture: interplay between the genetic control of telomere length and environmental factors.

    PubMed

    Harari, Yaniv; Romano, Gal-Hagit; Ungar, Lior; Kupiec, Martin

    2013-11-15

    Telomeres are nucleoprotein structures that cap the ends of the linear eukaryotic chromosomes, thus protecting their stability and integrity. They play important roles in DNA replication and repair and are central to our understanding of aging and cancer development. In rapidly dividing cells, telomere length is maintained by the activity of telomerase. About 400 TLM (telomere length maintenance) genes have been identified in yeast, as participants of an intricate homeostasis network that keeps telomere length constant. Two papers have recently shown that despite this extremely complex control, telomere length can be manipulated by external stimuli. These results have profound implications for our understanding of cellular homeostatic systems in general and of telomere length maintenance in particular. In addition, they point to the possibility of developing aging and cancer therapies based on telomere length manipulation.

  16. Observations and Laboratory Data of Planetary Organics

    NASA Technical Reports Server (NTRS)

    Roush, Ted L.

    2002-01-01

    Many efforts are underway to search for evidence of prebiotic materials in the outer solar system. Current and planned Mars missions obtain remote sensing observations that can be used to address the potential presence of prebiotic materials. Additional missions to, and continuing earth-based observations of, more distant solar system objects will also provide remote sensing observations that can be used to address the potential presence of prebiotic materials. I will present an overview of on-going observations, associated laboratory investigations of candidate materials, and theoretical modeling of observational data. In the past, the room temperature reflectance spectra of many residues created from HC-bearing gases and solids have been reported. The results of an investigation of the effect that temperatures more representative of outer solar system surfaces (50-140 K) have on the reflectance spectra of these residues, and the associated interpretations, will be presented. The relatively organic-rich Tagish Lake meteorite has been suggested as a spectral analog for D-type asteroids. Using a new approach that relies upon iterative use of Hapke theory and Kramers-Kronig analysis, the optical constants of TLM were estimated. The approach and results of the analysis will be presented. Use of optical constants in scattering theories, such as the Hapke theory, provides the ability to determine quantitative estimates of the relative abundances and grain sizes of candidate surface components. This approach has been applied to interpret the reflectance spectra of several outer solar system surfaces. A summary will be provided describing the results of such modeling efforts.

  17. Feeding behavior and temperature and light tolerance of Mysis relicta in the laboratory

    USGS Publications Warehouse

    DeGraeve, G.M.; Reynolds, James B.

    1975-01-01

    Live specimens of Mysis relicta from Lake Michigan were held for one year in the laboratory to determine feeding behavior and tolerance to light and temperature. Mysids fed by moving with rapid, horizontal jerking motions toward food as it settled toward the bottom and by swimming slowly, upside down, to gather particles floating on the surface. Scavenging was common. Mysids tolerated considerably higher temperatures than previously reported. Temperature increases (from 5 C) of 1 C per day and 1 C per minute resulted in TLm values of 20.5 C and 20.4 C, respectively. Mortality increased rapidly at temperatures above 13 C. The upper lethal limit for mysids acclimated to 5 C was about 22 C. Survival under continuous, high light intensity (32 foot-candles) was considerably higher than previously reported. Low water temperature (5 C) may have increased light tolerance.

  18. Cumulative dose 60Co gamma irradiation effects on AlGaN/GaN Schottky diodes and its area dependence

    NASA Astrophysics Data System (ADS)

    Sharma, Chandan; Laishram, Robert; Rawal, Dipendra Singh; Vinayak, Seema; Singh, Rajendra

    2018-04-01

    Cumulative-dose gamma radiation effects on the current-voltage characteristics of GaN Schottky diodes have been investigated. Diodes of different areas were fabricated on an AlGaN/GaN high electron mobility transistor (HEMT) epi-layer structure grown over a SiC substrate and irradiated with doses up to the order of 10⁴ Gray (Gy). Post-irradiation characterization shows a shift in the turn-on voltage and an improvement in the reverse leakage current. Other calculated parameters include the Schottky barrier height, ideality factor and reverse saturation current. The Schottky barrier height decreased whereas the reverse saturation current increased after irradiation, with an improvement in the ideality factor. Transfer length measurement (TLM) characterization shows an improvement in the contact resistance. Finally, diodes with larger area show more variation in the calculated parameters due to the induced local heating effect.
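
    Parameters such as the ideality factor and barrier height quoted above are typically extracted from the forward I-V curve using the thermionic emission model I = I0·exp(qV/nkT). A minimal sketch on synthetic data, with an assumed diode area and Richardson constant:

    ```python
    # Minimal sketch of extracting the ideality factor and barrier height
    # from a Schottky diode's forward I-V using the thermionic-emission model
    # I = I0 * exp(qV / (n*k*T)).  The diode data, area and Richardson
    # constant are synthetic/assumed, not the measured values from the paper.
    import numpy as np

    q, k, T = 1.602e-19, 1.381e-23, 300.0
    A, A_star = 1e-4, 26.4        # diode area (cm^2) and A* for GaN (A/cm^2 K^2)

    V = np.linspace(0.2, 0.6, 9)                  # forward bias, V
    I = 1e-12 * np.exp(q * V / (1.8 * k * T))     # synthetic data with n = 1.8

    slope, lnI0 = np.polyfit(V, np.log(I), 1)
    n = q / (slope * k * T)                       # ideality factor
    phi_B = (k * T / q) * np.log(A * A_star * T**2 / np.exp(lnI0))
    print(f"n = {n:.2f}, phi_B = {phi_B:.2f} eV")
    ```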

  19. Acid-base behavior of the gaspeite (NiCO3(s)) surface in NaCl solutions.

    PubMed

    Villegas-Jiménez, Adrián; Mucci, Alfonso; Pokrovsky, Oleg S; Schott, Jacques

    2010-08-03

    Gaspeite is a low reactivity, rhombohedral carbonate mineral and a suitable surrogate to investigate the surface properties of other more ubiquitous carbonate minerals, such as calcite, in aqueous solutions. In this study, the acid-base properties of the gaspeite surface were investigated over a pH range of 5 to 10 in NaCl solutions (0.001, 0.01, and 0.1 M) at near ambient conditions (25 +/- 3 degrees C and 1 atm) by means of conventional acidimetric and alkalimetric titration techniques and microelectrophoresis. Over the entire experimental pH range, surface protonation and electrokinetic mobility are strongly affected by the background electrolyte, leading to a significant decrease of the pH of zero net proton charge (PZNPC) and the pH of isoelectric point (pH(iep)) at increasing NaCl concentrations. This challenges the conventional idea that carbonate mineral surfaces are chemically inert to background electrolyte ions. Multiple sets of surface complexation reactions (i.e., ionization and ion adsorption) were formulated within the framework of three electrostatic models (CCM, BSM, and TLM) and their ability to simulate proton adsorption and electrokinetic data was evaluated. A one-site, 3-pK, constant capacitance surface complexation model (SCM) reproduces the proton adsorption data at all ionic strengths and qualitatively predicts the electrokinetic behavior of gaspeite suspensions. Nevertheless, the strong ionic strength dependence exhibited by the optimized SCM parameters reveals that the influence of the background electrolyte on the surface reactivity of gaspeite is not fully accounted for by conventional electrostatic and surface complexation models and suggests that future refinements to the underlying theories are warranted.
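
    In the 2-pK surface ionization scheme underlying models like the CCM, BSM and TLM, the surface hydroxyl protonates and deprotonates in two steps; neglecting the electrostatic terms, the pH of zero net proton charge falls at the mean of the two pKa values. A minimal sketch with assumed pKa values:

    ```python
    # Minimal sketch of a 2-pK surface ionization scheme (electrostatic terms
    # of the CCM/BSM/TLM omitted for brevity): >SOH2+ <-> >SOH + H+ (pKa1)
    # and >SOH <-> >SO- + H+ (pKa2).  In this simplification the pH of zero
    # net proton charge is (pKa1 + pKa2)/2.  The pKa values are assumed.
    pKa1, pKa2 = 5.0, 9.5

    def site_fractions(pH):
        h = 10.0 ** (-pH)
        K1, K2 = 10.0 ** (-pKa1), 10.0 ** (-pKa2)
        soh = 1.0                   # relative activity of neutral >SOH
        soh2 = h / K1               # >SOH2+ / >SOH = [H+] / Ka1
        so = K2 / h                 # >SO-  / >SOH = Ka2 / [H+]
        total = soh + soh2 + so
        return soh2 / total, soh / total, so / total

    for pH in (4, (pKa1 + pKa2) / 2, 10):
        pos, neu, neg = site_fractions(pH)
        print(f"pH {pH}: >SOH2+ {pos:.2f}  >SOH {neu:.2f}  >SO- {neg:.2f}")

    print("PZNPC =", (pKa1 + pKa2) / 2)   # positive and negative sites balance
    ```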

  20. Effect of oxygenated perfluorocarbon on isolated islets during transportation.

    PubMed

    Terai, Sachio; Tsujimura, Toshiaki; Li, Shiri; Hori, Yuichi; Toyama, Hirochika; Shinzeki, Makoto; Matsumoto, Ippei; Kuroda, Yoshikazu; Ku, Yonson

    2010-08-01

    Previous studies demonstrated the efficacy of the two-layer method (TLM) using oxygenated perfluorochemicals (PFC) for pancreas preservation. The current study investigated the effect of oxygenated PFC on isolated islets during transportation. Purified rat islets were stored in an airtight conical tube for 24h in RPMI culture medium at 22 degrees C or University of Wisconsin solution (UW) at 4 degrees C, either with or without oxygenated PFC. After storage, the islets were assessed for in vitro viability by static incubation (SI), FDA/PI staining, and energy status (ATP, energy charge, and ADP/ATP ratio) and for in vivo viability by a transplantation study. UW at 4 degrees C and RPMI medium at 22 degrees C maintained islet quality almost equally in both in vitro and in vivo assessments. The ATP levels and energy status in the groups with PFC were significantly lower than those without PFC. The groups with PFC showed a significantly higher ADP/ATP ratio than those without PFC. In the transplantation study, blood glucose levels and AUC in the UW+PFC group were significantly higher than those in UW group. UW at 4 degrees C and RPMI medium at 22 degrees C maintained islet quality equally under the conditions for islet transportation. The addition of oxygenated PFC, while advantageous for pancreas preservation, is not useful for islet transportation. Copyright 2010 Elsevier Inc. All rights reserved.

  1. Study on the performance of 2.6 μm In0.83Ga0.17As detector with different etch gases

    NASA Astrophysics Data System (ADS)

    Li, Ping; Tang, Hengjing; Li, Tao; Li, Xue; Shao, Xiumei; Ma, Yingjie; Gong, Haimei

    2017-09-01

    In order to obtain a low-damage recipe for ICP processing, ICP-induced damage using Cl2/CH4 etch gases in extended-wavelength In0.83Ga0.17As detector materials was studied in this paper. The effect of ICP etching on In0.83Ga0.17As samples was characterized qualitatively by photoluminescence (PL) measurements. The etch damage of In0.83Ga0.17As samples was characterized quantitatively by the Transmission Line Model (TLM), current-voltage (IV) measurement, signal and noise testing, and Fourier Transform Infrared Spectroscopy (FTIR). The results showed that the Cl2/CH4 etching process led to better detector performance than Cl2/N2, such as a larger sheet resistance, a lower dark current, a lower noise voltage and a higher peak detectivity. The lower PL signal intensity and lower dark current could be attributed to the hydrogen released by decomposition of CH4 in the plasma etching process. These hydrogen species generated non-radiative recombination centers in the inner material, weakening the PL intensity, and passivated dangling bonds at the surface, reducing the dark current. The larger sheet resistance resulted from the lower etch damage. The lower dark current means that the detectors have fewer dangling bonds and leakage channels.

  2. Full wafer size investigation of N+ and P+ co-implanted layers in 4H-SiC

    NASA Astrophysics Data System (ADS)

    Blanqué, S.; Lyonnet, J.; Pérez, R.; Terziyska, P.; Contreras, S.; Godignon, P.; Mestres, N.; Pascual, J.; Camassel, J.

    2005-03-01

    We report a full-wafer-size investigation of the homogeneity of electrical properties in the case of co-implanted nitrogen and phosphorus ions in 4H-SiC semi-insulating wafers. To match standard industrial requirements, implantation was done at room temperature. To achieve detailed electrical knowledge, we worked on a 35 mm wafer on which 77 different reticules were processed. Every reticule includes one Hall cross, one Van der Pauw test structure and different TLM patterns. Hall measurements were made on all 77 reticules, using an Accent HL5500 Hall System® from BioRad fitted with a home-made support to collect data from room temperature down to about 150 K. At room temperature, we find that the sheet carrier concentration is only 1/4 of the total implanted dose while the average mobility is 80.6 cm²/Vs. The standard deviation is, typically, 1.5 cm²/Vs.
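
    The sheet carrier concentration and mobility quoted above follow from standard Hall analysis: ns = I·B/(q·VH) and μ = 1/(q·ns·Rsh), with the activation ratio given by ns over the implanted dose. A minimal sketch with assumed instrument readings chosen only to illustrate the arithmetic:

    ```python
    # Minimal sketch of the Hall analysis behind the quoted numbers: the
    # sheet carrier density follows from the Hall voltage, the mobility from
    # the sheet resistance, and the activation ratio compares the measured
    # carriers with the implanted dose.  All inputs are illustrative.
    q = 1.602e-19          # elementary charge, C
    I = 100e-6             # drive current, A (assumed)
    B = 0.5                # magnetic field, T (assumed)
    V_H = 0.48e-3          # measured Hall voltage, V (assumed)
    R_sh = 1200.0          # sheet resistance, ohm/sq (assumed)
    dose = 2.6e14          # implanted dose, cm^-2 (assumed)

    n_s = I * B / (q * abs(V_H))          # sheet carrier density, m^-2
    n_s_cm2 = n_s * 1e-4                  # convert to cm^-2
    mu = 1.0 / (q * n_s * R_sh)           # Hall mobility, m^2/Vs
    print(f"n_s = {n_s_cm2:.2e} cm^-2 ({n_s_cm2/dose:.0%} of dose), "
          f"mu = {mu*1e4:.1f} cm^2/Vs")
    ```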

  3. Back-gated Nb-doped MoS2 junctionless field-effect-transistors

    NASA Astrophysics Data System (ADS)

    Mirabelli, Gioele; Schmidt, Michael; Sheehan, Brendan; Cherkaoui, Karim; Monaghan, Scott; Povey, Ian; McCarthy, Melissa; Bell, Alan P.; Nagle, Roger; Crupi, Felice; Hurley, Paul K.; Duffy, Ray

    2016-02-01

    Electrical measurements were carried out to evaluate the performance and characteristics of MoS2 flakes doped with niobium (Nb). The flakes were obtained by mechanical exfoliation and transferred onto an 85 nm thick SiO2 oxide on a highly doped Si handle wafer. Ti/Au (5/45 nm) deposited on top of the flake allowed the realization of a back-gate structure, which was analyzed structurally through Scanning Electron Microscopy (SEM) and Transmission Electron Microscopy (TEM). To the best of our knowledge, this is the first cross-sectional TEM study of exfoliated Nb-doped MoS2 flakes; in fact, to date, TEM of transition-metal-dichalcogenide flakes is extremely rare in the literature, considering the recent body of work. The devices were then electrically characterized by temperature-dependent Ids versus Vds and Ids versus Vbg curves. The temperature dependence of the device shows semiconductor behavior, and the doping effect of the Nb atoms introduces acceptors in the structure, with a p-type concentration of 4.3 × 10¹⁹ cm⁻³ measured by Hall effect. The p-type doping is confirmed by all the electrical measurements, making the structure a junctionless transistor. In addition, parameters regarding the contact resistance between the top metal and MoS2 were extracted using a simple Transfer Length Method (TLM) structure, showing a promising contact resistivity of 1.05 × 10⁻⁷ Ω·cm² and a sheet resistance of 2.36 × 10² Ω/sq.

  4. Proposal for a Specific Aerobic Test for Football Players: The “Footeval”

    PubMed Central

    Manouvrier, Christophe; Cassirame, Johan; Ahmaidi, Saïd

    2016-01-01

    The aim of this study was to evaluate the reproducibility and validity of the “Footeval” test, which evaluates football players’ aerobic level in conditions close to those of football practice (intermittent, including technical skills). Twenty-four highly trained subjects from an elite football academy (17.8 ± 1.4 years, 5 training sessions per week) performed two Footeval sessions within a period of 7 days. Physiological variables measured during these sessions (VO2max 58.1 ± 5.6 and 58.7 ± 6.2 ml·min-1·kg-1; RER 1.18 ± 0.06 and 1.19 ± 0.05; LaMax 11.0 ± 1.4 and 10.8 ± 1.1 µmol·L-1; HRmax 194 ± 6 and 190 ± 7 b·min-1; final step 10.71 ± 1.2 and 10.83 ± 1.13; RPE = 10) highlighted maximal intensity and confirmed that players reached physiological exhaustion. Comparison of values measured in both sessions showed large to very large correlations (final level: 0.92; VO2max: 0.79; HRmax: 0.88; LaMax: 0.87) and high ICC (final level: 0.93; VO2max: 0.87; HRmax: 0.90; LaMax: 0.85) except for RER (r = 0.22, ICC = 0.21). In addition, all subjects performed a time limit (Tlm) exercise with intensity set at maximal aerobic specific speed + 1 km·h-1, in order to check the maximal values obtained during the Footeval test. Statistical analysis comparing VO2max, HRmax and RER from the Footeval and Tlm exercises showed that values from Footeval can be considered maximal values (r for VO2max: 0.82, HRmax: 0.77; ICC for VO2max: 0.92, HRmax: 0.91). This study showed that Footeval is a reproducible test that allows maximal aerobic specific speed to be obtained at physiological exhaustion. Moreover, the test meets the physiological exhaustion criteria defined in the literature (RER ≥ 1.1; LaMax ≥ 8 µmol·L-1; HR = HRmax; no increase of VO2 despite the increase in speed; RPE = 10). Key points: “Footeval” is a new football-specific test able to evaluate aerobic capacity in football-specific conditions; this study evaluates the reproducibility and validity of the “Footeval” test in elite football players. PMID:27928213

  5. Study of the Ohmic Contact Mechanism of Oxidized Ni/Au Contact to p-GaN

    NASA Astrophysics Data System (ADS)

    Roesler, Erika; Chengyu, Hu; Zhang, Guoyi

    2004-03-01

    In the semiconductor industry, GaN is important for blue laser diodes (LDs) and light emitting diodes (LEDs). In order to maximize efficiency for optoelectronic devices that utilize GaN, a low contact resistance and an ohmic contact are needed. Previously, the contact resistance has been found to be as low as 10⁻⁴ Ω·cm². The aim of this research project was to investigate the influence of different annealing conditions on the contact resistance; analyze the microstructure of the electrodes; find the relationship between the microstructure, annealing conditions, and contact resistance; and then explain the mechanism. The sample was grown in a MOCVD system and had a mesa structure. It was activated at 800 C for 20 minutes (dissociating Mg-H complexes) to become p-type GaN. The sample underwent four different annealing conditions: the first varied the temperature in a constant oxygen ambient; the second varied the temperature in air; the third varied the percentage of oxygen mixed with nitrogen at constant temperature; and the fourth varied the annealing time under an oxygen ambient. The third condition had never previously been tested. We found definite minima of the contact resistivity (using the TLM method) in the first and second conditions at 500 C. The third condition gave the best results with a mix of 50% oxygen and 50% nitrogen, and the fourth condition gave the best results at 5 minutes. Once the effects of the microstructure are analyzed for the sample at each condition, a better understanding of the physical mechanisms that yield the contact resistance will be obtained.

  6. Chemistry and superconductivity in thallium-based cuprates. Technical report No. 56, 1 June 1989-30 May 1990

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenblatt, M.; Li, S.; McMills, L.E.

    1990-06-01

    Following the discoveries of high temperature superconductivity in the rare-earth copper oxide systems at 40 K by Bednorz and Müller in 1986 and at 90 K by other researchers in 1987, Sheng and Hermann, in 1988, discovered superconductivity in the thallium-alkaline-earth copper oxide systems with critical temperatures as high as 120 K. All of the Tl-based compounds can be described by the general formula TlmA2Can-1CunO2n+m+2, where m = 1 or 2, n up to 5, and A = Ba or Sr. For convenience, the names of these compounds are abbreviated, e.g., as 2223 for Tl2Ba2Ca2Cu3O10, where each number denotes the number of Tl, Ba(Sr), Ca and Cu ions per formula unit, respectively. The compounds with m = 1 and m = 2 are usually referred to as single and double Tl-O layered compounds, respectively. The highest superconducting transition temperature known so far was found in Tl2Ba2Ca2Cu3O10 at 125 K.
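
    The general formula above maps a pair (m, n) to a stoichiometry and to the four-digit shorthand. A minimal sketch of that naming convention (the label simply concatenates the Tl, Ba(Sr), Ca and Cu counts):

    ```python
    # Minimal sketch of the Tl-cuprate naming convention: a (m, n) pair maps
    # to the stoichiometry Tl_m A_2 Ca_(n-1) Cu_n O_(2n+m+2) and to the
    # shorthand label "m 2 (n-1) n" (e.g. 1212, 2223).  A is Ba or Sr.
    def tl_cuprate(m, n, A="Ba"):
        label = f"{m}2{n-1}{n}"
        formula = f"Tl{m}{A}2Ca{n-1}Cu{n}O{2*n + m + 2}"
        return label, formula

    for m in (1, 2):
        for n in (2, 3):
            print(*tl_cuprate(m, n))   # e.g. "2223 Tl2Ba2Ca2Cu3O10"
    ```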

  7. Drug resistance patterns in Mycobacterium leprae isolates from relapsed leprosy patients attending The Leprosy Mission (TLM) Hospitals in India.

    PubMed

    Lavania, Mallika; Jadhav, Rupendra S; Chaitanya, Vedithi Sundeep; Turankar, Ravindra; Selvasekhar, Abraham; Das, Loretta; Darlong, Famkima; Hambroom, Ujjwal K; Kumar, Sandip; Sengupta, Utpal

    2014-09-01

    Implementation of multidrug therapy (MDT) in leprosy control programmes has significantly reduced the global prevalence of the disease in the last two decades. After many years of use of MDT, it is expected that drug resistance in Mycobacterium leprae may emerge. This is a major concern, especially during the stage of elimination. In the present study, slit-skin smears were collected from 140 leprosy relapse cases from different Leprosy Mission hospitals across India. DNA extracted from 111 (79%) of these samples was analysed for the genes associated with drug resistance in M. leprae. More than 90% of the patients relapsed as multibacillary (MB) cases. In our study, four (3.6%) of the DNA samples analysed showed mutations associated with rifampicin resistance. We also observed that mutations associated with resistance to dapsone and ofloxacin were observed in 9 (8.1%) of the DNA samples each; two samples had both dapsone and ofloxacin resistance. Further surveillance and appropriate interventions are needed to ensure the continued success of chemotherapy for leprosy.

  8. Optimization of chemical structure of Schottky-type selection diode for crossbar resistive memory.

    PubMed

    Kim, Gun Hwan; Lee, Jong Ho; Jeon, Woojin; Song, Seul Ji; Seok, Jun Yeong; Yoon, Jung Ho; Yoon, Kyung Jean; Park, Tae Joo; Hwang, Cheol Seong

    2012-10-24

    The electrical performance of the Pt/TiO2/Ti/Pt stacked Schottky-type diode (SD) was systematically examined and found to depend on the chemical structure of each layer and their interfaces. Ti layers containing a tolerable amount of oxygen showed metallic electrical conduction, which was confirmed by sheet resistance measurement as a function of temperature, transmission line measurement (TLM), and Auger electron spectroscopy (AES) analysis. However, the chemical structure of the SD stack and the resulting electrical properties were crucially affected by the dissolved oxygen concentration in the Ti layer. The lower oxidation potential of a Ti layer with an initially higher oxygen concentration suppressed the oxygen deficiency of the overlying TiO2 layer induced by consumption of oxygen from the TiO2 layer. This structure resulted in a lower reverse current of the SDs without significant degradation of the forward-state current. Conductive atomic force microscopy (CAFM) analysis showed current conduction through local conduction paths in the presented SDs, which guarantees a sufficient forward-current density for a selection device in highly integrated crossbar-array resistive memory.

  9. Development of an Extreme High Temperature n-type Ohmic Contact to Silicon Carbide

    NASA Technical Reports Server (NTRS)

    Evans, Laura J.; Okojie, Robert S.; Lukco, Dorothy

    2011-01-01

    We report on the initial demonstration of a tungsten-nickel (75:25 at. %) ohmic contact to silicon carbide (SiC) that performed for up to fifteen hours of heat treatment in argon at 1000 °C. The transfer length method (TLM) test structure was used to evaluate the contacts. Samples showed consistent ohmic behavior with specific contact resistance values averaging 5 × 10-4 Ω·cm2. The development of this contact metallization should allow silicon carbide devices to operate more reliably at the present maximum operating temperature of 600 °C while potentially extending operation to 1000 °C. Silicon carbide is widely recognized as one of the materials of choice for high-temperature, harsh-environment sensors and electronics due to its ability to survive and continue normal operation in such environments [1]. Sensors and electronics in SiC have been developed that are capable of operating at temperatures of 600 °C. However, operating these devices at the upper reliability temperature threshold increases the potential for early degradation. Therefore, it is important to raise the reliability temperature ceiling, which would assure increased device reliability when operated at nominal temperature. There are also instances that require devices to operate and survive for prolonged periods above 600 °C [2, 3]. This is specifically needed in the area of hypersonic flight, where robust sensors are needed to monitor vehicle performance at temperatures greater than 1000 °C, as well as in the thermomechanical characterization of high-temperature materials (e.g., ceramic matrix composites). While SiC alone can withstand these temperatures, a major challenge is to develop reliable electrical contacts to the device itself in order to facilitate signal extraction.
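
    As context for how such TLM data reduce to a specific contact resistance, here is a minimal sketch with made-up numbers; the geometry and resistances below are assumptions, not the paper's measurements:

```python
# Sketch of transfer length method (TLM) extraction: total resistance between
# pads R(d) = 2*Rc + Rsh*d/W is fit linearly in pad gap d; the intercept gives
# the contact resistance Rc, the slope the sheet resistance Rsh, and the
# specific contact resistance follows as rho_c = Rsh * LT^2.
import numpy as np

W = 100e-4                                          # pad width, cm (100 um)
gaps = np.array([5, 10, 20, 40, 80]) * 1e-4         # pad spacings, cm
R_meas = np.array([12.1, 14.0, 18.2, 26.1, 42.3])   # measured resistances, ohm (synthetic)

slope, intercept = np.polyfit(gaps, R_meas, 1)
Rsh = slope * W            # sheet resistance, ohm/sq
Rc = intercept / 2         # contact resistance per pad, ohm
LT = Rc * W / Rsh          # transfer length, cm
rho_c = Rsh * LT**2        # specific contact resistance, ohm*cm^2
print(f"Rsh = {Rsh:.1f} ohm/sq, Rc = {Rc:.2f} ohm, rho_c = {rho_c:.2e} ohm*cm^2")
```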

  10. Effects of rapid thermal annealing on the contact of tungsten/p-diamond

    NASA Astrophysics Data System (ADS)

    Zhao, D.; Li, F. N.; Liu, Z. C.; Chen, X. D.; Wang, Y. F.; Shao, G. Q.; Zhu, T. F.; Zhang, M. H.; Zhang, J. W.; Wang, J. J.; Wang, W.; Wang, H. X.

    2018-06-01

    The electrical properties, surface morphology and interface characteristics of the W/p-diamond contact before and after annealing have been investigated. The as-fabricated W/p-diamond contact exhibited non-linear behavior. After annealing at temperatures higher than 400 °C, the W/p-diamond contact showed ohmic behaviour. The specific contact resistance of W/p-diamond was 8.2 × 10-4 Ω·cm2 after annealing at 500 °C for 3 min in a N2 ambient, extracted by fitting the I-V relationships of TLM structures. The RMS roughness increases with annealing temperature, which could be ascribed to the formation of WOx by the reaction of W with oxygen at high temperature. XPS measurements showed that the barrier height of the W/p-diamond contact is 0.45 ± 0.12 eV after annealing at 500 °C. Furthermore, the formation of defects at the W/p-diamond interface, probably created by the formation of tungsten carbide during rapid thermal annealing, is likely responsible for the ohmic behavior of W/p-diamond after high-temperature annealing.

  11. Multisensor interoperability for persistent surveillance and FOB protection with multiple technologies during the TNT exercise at Camp Roberts, California

    NASA Astrophysics Data System (ADS)

    Murarka, Naveen; Chambers, Jon

    2012-06-01

    Multiple sensors, providing actionable intelligence to the war fighter, often have difficulty interoperating with each other. Northrop Grumman (NG) is dedicated to solving these problems and providing complete solutions for persistent surveillance. In August 2011, NG was invited to participate in the Tactical Network Topology (TNT) Capabilities Based Experimentation at Camp Roberts, CA to demonstrate integrated system capabilities providing Forward Operating Base (FOB) protection. This experiment was an opportunity to leverage previous efforts from NG's Rotorcraft Avionics Innovation Laboratory (RAIL) to integrate five prime systems with widely different capabilities: a Hostile Fire and Missile Warning Sensor System, the SCORPION II Unattended Ground Sensor system, the Smart Integrated Vehicle Area Network (SiVAN), the STARLite Synthetic Aperture Radar (SAR)/Ground Moving Target Indication (GMTI) radar system, and a vehicle with a Target Location Module (TLM) and Laser Designation Module (LDM). These systems were integrated with each other and with a Tactical Operations Center (TOC) equipped with RaptorX and FalconView providing a Common Operational Picture (COP) via Cursor on Target (CoT) messages. This paper discusses the exercise and the lessons learned from integrating these five prime systems for persistent surveillance and FOB protection.

  12. Wearable and flexible thermoelectric generator with enhanced package

    NASA Astrophysics Data System (ADS)

    Francioso, L.; De Pascali, C.; Taurino, A.; Siciliano, P.; De Risi, A.

    2013-05-01

    This work presents recent progress on a thin-film-based flexible and wearable thermoelectric generator (TEG), intended to support energy scavenging and local storage for low-consumption electronics in Ambient Assisted Living (AAL) applications and building integration. The proposed TEG recovers energy from heat dispersed into the environment, converting a thermal gradient into electrical energy available to power ultra-low-consumption devices. A low-cost fabrication process based on planar thin-film technology was optimized to scale the TEG dimensions down to the micrometer range. The prototype integrates 2778 thermocouples of sputtered Sb2Te3 and Bi2Te3 thin films (1 μm thick) on an area of 25 cm2. The electrical properties of the thermoelectric materials were investigated by Van der Pauw measurements. Transfer length method (TLM) analysis was performed on three different multi-layer contact schemes in order to select the best solution for the contact pads realized on each section of the thermoelectric array, allowing electrical testing of single production areas. Kapton polyimide film was used as the flexible substrate in order to add lightweight comfort and better wearability to the device. The realized TEG autonomously recovers the thermal gradient needed for thermoelectric generation thanks to an appropriate package designed and optimized by thermal analysis based on the finite element method (FEM). The proposed package couples the module realized on Kapton foil to a PDMS layer molded to thermally insulate the TEG cold junctions and enhance the thermal gradient available for energy scavenging. Simulation results were compared to experimental tests performed with a thermal infrared camera in order to evaluate the real performance of the designed package. First tests conducted on the realized TEG indicate that the prototype maintains about a 5 °C difference between the hot and cold thermocouple junctions from a thermal difference of 17 °C initially available between body skin and environment, generating about 2 V of open-circuit output voltage.
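
    A quick back-of-envelope check of the reported output (my own arithmetic; the per-couple Seebeck coefficient is inferred, not given in the abstract):

```python
# Open-circuit voltage of a thermocouple array is V_oc = N * S_couple * dT,
# so 2778 couples delivering ~2 V across ~5 degC imply an effective Seebeck
# coefficient per Sb2Te3/Bi2Te3 couple of roughly:
N, V_oc, dT = 2778, 2.0, 5.0
S_couple = V_oc / (N * dT)
print(f"S_couple ~= {S_couple * 1e6:.0f} uV/K")  # ~144 uV/K, plausible for these tellurides
```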

  14. Indium tin oxide thin film strain gages for use at elevated temperatures

    NASA Astrophysics Data System (ADS)

    Luo, Qing

    A robust ceramic thin-film strain gage based on indium tin oxide (ITO) has been developed for static and dynamic strain measurements in advanced propulsion systems at temperatures up to 1400°C. These thin-film sensors are ideally suited for in-situ strain measurement in harsh environments such as those encountered in the hot sections of gas turbine engines. A novel self-compensation scheme was developed using thin-film platinum resistors placed in series with the active strain element (ITO) to minimize the thermal effect, or apparent strain. A mathematical model as well as design rules were developed for the self-compensated circuitry using this approach, and close agreement between the model and actual static strain results has been achieved. High-frequency dynamic strain tests were performed at temperatures up to 500°C and at frequencies up to 2000 Hz to simulate conditions that would be encountered during engine vibration fatigue. The results indicated that the sensors could survive extreme test conditions while maintaining sensitivity. A reversible change in sign of the piezoresistive response from -G to +G was observed in the vicinity of 950°C, suggesting that the charge carriers responsible for conduction in the ITO gage had converted from net "n-carrier" to net "p-carrier" semiconductor behavior. Electron spectroscopy for chemical analysis (ESCA) of the ITO films suggested that they experienced an interfacial reaction with the Al2O3 substrate at 1400°C. It is likely that oxygen uptake from the substrate is responsible for stabilizing the ITO films at elevated temperatures through this interfacial reaction. Thermogravimetric analysis of ITO films on alumina showed no sublimation of the films at temperatures up to 1400°C. The surface morphology of ITO films heated to 800, 1200 and 1400°C was also evaluated by atomic force microscopy (AFM). A linear current-voltage (I-V) characteristic indicated that the contact interface between the ITO and platinum was ohmic in nature. Specific contact resistivities in the range of 10-3 to 10-1 Ω·cm2 were determined from room temperature up to 1400°C using a transmission line model (TLM).
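
    The series self-compensation idea can be sized with a one-line balance condition. This is a hedged sketch: the resistor values and temperature coefficients below are illustrative assumptions, not the dissertation's numbers:

```python
# A platinum resistor in series with the ITO gage nulls the first-order
# thermal drift when R_Pt * alpha_Pt = -R_ITO * alpha_ITO, since
# dR_total = (R_ITO * alpha_ITO + R_Pt * alpha_Pt) * dT.
alpha_ito = -6e-4   # 1/K, assumed (negative) TCR of the ITO strain element
alpha_pt = 3.9e-3   # 1/K, typical TCR of thin-film platinum
R_ito = 1000.0      # ohm, assumed gage resistance

R_pt = -alpha_ito * R_ito / alpha_pt
print(f"Series Pt resistor for first-order compensation: {R_pt:.0f} ohm")
# A +100 K excursion then changes R_ITO by -60 ohm and R_Pt by +60 ohm,
# so the net resistance (and hence the apparent strain) stays ~constant.
```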

  15. Low-cost plasma immersion ion implantation doping for Interdigitated back passivated contact (IBPC) solar cells

    DOE PAGES

    Young, David L.; Nemeth, William; LaSalvia, Vincenzo; ...

    2016-06-01

    Here, we present progress to develop low-cost interdigitated back contact solar cells with pc-Si/SiO2/c-Si passivated contacts formed by plasma immersion ion implantation (PIII). PIII is a lower-cost implantation technique than traditional beam-line implantation due to its simpler design, lower operating costs, and ability to run high doses (1E14-1E18 cm-2) at low ion energies (20 eV-10 keV). These benefits make PIII ideal for high-throughput production of patterned passivated contacts, where high-dose, low-energy implantations are made into thin (20-200 nm) a-Si layers instead of into the wafer itself. For this work, symmetric passivated contact test structures (~100 nm thick) grown on n-Cz wafers with PH3 PIII doping gave implied open-circuit voltage (iVoc) values of 730 mV with J0 values of 2 fA/cm2. Samples doped with B2H6 gave iVoc values of 690 mV and J0 values of 24 fA/cm2, outperforming BF3 doping, which gave iVoc values in the 660-680 mV range. Samples were further characterized by SIMS, photoluminescence, TEM, EELS, and post-metallization TLM to reveal micro- and macroscopic structural, chemical and electrical information.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrokhzadeh, Abdolkarim; Modarresi-Alam, Ali Reza, E-mail: modaresi@chem.usb.ac.ir

    Poly[(±)-2-(sec-butyl)aniline]/silica-supported perchloric acid composites were synthesized by combining poly[(±)-2-sec-butylaniline] base (PSBA) and silica-supported perchloric acid (SSPA) as a solid-acid dopant in the solid state. X-ray photoelectron spectroscopy (XPS) and CHNS results confirm the nigraniline oxidation state, with complete doping for the composites (about 75%) and incomplete doping for the PSBA·HCl salt (about 49%). The conductivity of the samples (≈0.07 S/cm) was in agreement with the doping level obtained from the XPS analysis. Contact resistance was determined by circular-TLM measurement. The morphology of the samples was investigated by scanning electron microscopy (SEM), and their coating was examined by XPS, SEM mapping and energy-dispersive X-ray spectroscopy (EDX). The key benefits of this work are the preparation of a conductive chiral composite with a delocalized polaron structure under green-chemistry, solid-state conditions, the improvement of processability by inclusion of the 2-sec-butyl group, and the use of the solid acid SSPA as dopant. - Highlights: • The solid-state synthesis of novel chiral composites of poly[(±)-2-(sec-butyl)aniline] (PSBA) and silica-supported perchloric acid (SSPA). • Complete deprotonation of the PSBA·HCl salt takes 120 h. • Use of SSPA as a solid-acid dopant for the first time to attain complete doping of PSBA. • The coating of the silica surface with PSBA.

  17. Carrier Selective, Passivated Contacts for High Efficiency Silicon Solar Cells based on Transparent Conducting Oxides

    DOE PAGES

    Young, David L.; Nemeth, William; Grover, Sachit; ...

    2014-01-01

    We describe the design, fabrication and results of passivated contacts to n-type silicon utilizing thin SiO2 and transparent conducting oxide layers. High-temperature silicon dioxide is grown on both surfaces of an n-type wafer to a thickness <50 Å, followed by deposition of tin-doped indium oxide (ITO) and a patterned metal contacting layer. As deposited, the thin-film stack has a very high J0,contact and a non-ohmic, high contact resistance. However, after a forming gas anneal, the passivation quality and the contact resistivity improve significantly. The contacts are characterized by measuring the recombination parameter of the contact (J0,contact) and the specific contact resistivity (ρcontact) using a TLM pattern. The best ITO/SiO2 passivated contact in this study has J0,contact = 92.5 fA/cm2 and ρcontact = 11.5 mΩ·cm2. These values are placed in context with other passivating contacts using an analysis that determines the ultimate efficiency and the optimal area fraction for contacts for a given set of (J0,contact, ρcontact) values. The ITO/SiO2 contacts are found to have a higher J0,contact but a similar ρcontact compared to the best reported passivated contacts.
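
    The trade-off analysis mentioned at the end can be illustrated with a toy one-diode model. Only the J0,contact and ρcontact values below come from the abstract; the short-circuit current, passivated-area J0, and fill-factor approximation are my own assumptions, not the paper's analysis:

```python
# Toy model of the (J0,contact, rho_contact) trade-off: a larger metallized
# area fraction f raises recombination (f*J0c + (1-f)*J0pass) but lowers the
# lumped contact series resistance (rho_c / f), so efficiency has an interior
# optimum in f.
import numpy as np

kT_q = 0.02585                     # thermal voltage at ~300 K, V
Jsc = 40e-3                        # assumed short-circuit current, A/cm^2
J0c, J0pass = 92.5e-15, 5e-15      # A/cm^2: contact value from the paper; passivated value assumed
rho_c = 11.5e-3                    # ohm*cm^2, from the paper

f = np.linspace(0.002, 1.0, 500)               # metallized area fraction
J0 = f * J0c + (1 - f) * J0pass                # area-weighted recombination
Voc = kT_q * np.log(Jsc / J0 + 1)
voc = Voc / kT_q
FF0 = (voc - np.log(voc + 0.72)) / (voc + 1)   # Green's empirical fill factor
FF = FF0 * (1 - (rho_c / f) * Jsc / Voc)       # first-order series-resistance penalty
eta = Voc * FF * Jsc / 0.100                   # 100 mW/cm^2 illumination
i = int(np.argmax(eta))
print(f"optimum contact fraction ~ {f[i]:.2f}, efficiency ~ {100 * eta[i]:.1f}%")
```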

  18. A new communication protocol family for a distributed spacecraft control system

    NASA Technical Reports Server (NTRS)

    Baldi, Andrea; Pace, Marco

    1994-01-01

    In this paper we describe the concepts behind and architecture of a communication protocol family designed to fulfill the communication requirements of ESOC's new distributed spacecraft control system SCOS 2. A distributed spacecraft control system needs a data delivery subsystem for telemetry (TLM) distribution, telecommand (TLC) dispatch and inter-application communication, characterized by the following properties: reliability, so that any operational workstation is guaranteed to receive the data it needs to accomplish its role; efficiency, so that telemetry distribution, even for missions with high telemetry rates, does not degrade the overall control system performance; scalability, so that the network is not the bottleneck in terms of either bandwidth or reconfiguration; and flexibility, so that it can be used efficiently in many different situations. The new protocol family that satisfies the above requirements is built on top of widely used communication protocols (UDP and TCP), provides reliable point-to-point and broadcast communication (UDP+), and is implemented in C++. Reliability is achieved using a retransmission mechanism based on a sequence-numbering scheme. Such a scheme achieves cost-effective performance compared to traditional protocols because retransmission is triggered only by applications that explicitly need reliability. This flexibility enables applications with different profiles to take advantage of the available protocols, so that the best trade-off between speed and reliability can be achieved case by case.
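
    The sequence-numbering idea is easy to sketch. The following is a hypothetical illustration, not the SCOS 2 code (which was C++): the sender tags each datagram and keeps a bounded replay buffer, and a receiver that sees a gap asks only for the missing sequence numbers, so the retransmission cost is paid only by applications that request reliability:

```python
import json
import socket

class UdpPlusSender:
    def __init__(self, sock: socket.socket, history: int = 1024):
        self.sock, self.seq, self.buffer = sock, 0, {}
        self.history = history

    def send(self, payload: str, addr) -> None:
        pkt = json.dumps({"seq": self.seq, "data": payload}).encode()
        self.buffer[self.seq] = pkt                      # keep for possible resend
        self.buffer.pop(self.seq - self.history, None)   # bounded replay window
        self.sock.sendto(pkt, addr)
        self.seq += 1

    def retransmit(self, missing_seq: int, addr) -> None:
        if missing_seq in self.buffer:                   # NAK-driven resend
            self.sock.sendto(self.buffer[missing_seq], addr)

class UdpPlusReceiver:
    def __init__(self):
        self.expected = 0

    def on_packet(self, pkt: bytes) -> list[int]:
        """Accept a datagram; return any sequence numbers to request again."""
        seq = json.loads(pkt)["seq"]
        gap = list(range(self.expected, seq))            # holes before this packet
        self.expected = max(self.expected, seq + 1)
        return gap
```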

  19. Crystal Structure of the Zorbamycin-Binding Protein ZbmA, the Primary Self-Resistance Element in Streptomyces flavoviridis ATCC21892

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rudolf, Jeffrey D.; Bigelow, Lance; Chang, Changsoo

    The bleomycins (BLMs), tallysomycins (TLMs), phleomycin, and zorbamycin (ZBM) are members of the BLM family of glycopeptide-derived antitumor antibiotics. The BLM-producing Streptomyces verticillus ATCC15003 and the TLM-producing Streptoalloteichus hindustanus E465-94 ATCC31158 both possess at least two self-resistance elements, an N-acetyltransferase and a binding protein. The N-acetyltransferase provides resistance by disrupting the metal-binding domain of the antibiotic that is required for activity, while the binding protein confers resistance by sequestering the metal-bound antibiotic and preventing drug activation via molecular oxygen. We recently established that the ZBM producer, Streptomyces flavoviridis ATCC21892, lacks the N-acetyltransferase resistance gene and that the ZBM-binding protein, ZbmA, is sufficient to confer resistance in the producing strain. To investigate the resistance mechanism attributed to ZbmA, we determined the crystal structures of apo and Cu(II)-ZBM-bound ZbmA at high resolutions of 1.90 and 1.65 Å, respectively. A comparison and contrast with other structurally characterized members of the BLM-binding protein family revealed key differences in the protein-ligand binding environment that fine-tune the ability of ZbmA to sequester metal-bound ZBM and support drug sequestration as the primary resistance mechanism in the producing organisms of the BLM family of antitumor antibiotics.

  20. Process-dependent morphology and resulting physical properties of TPU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frick, Achim, E-mail: achim.frick@hs-aalen.de; Spadaro, Marcel, E-mail: marcel.spadaro@hs-aalen.de

    2015-12-17

    Thermoplastic polyurethane (TPU) is a rubber-like material with outstanding properties, e.g. for seal applications. TPU basically provides high strength, low frictional behavior and excellent wear resistance. However, due to the segmented structure of TPU, which is composed of hard segments (HSs) and soft segments (SSs), its physical properties depend strongly on the morphological arrangement of the phase-separated HSs at a given ratio of HSs to SSs. The TPU accordingly deforms differently depending on its bulk morphology. Basically, the morphology can either consist of HSs segregated into small domains that are well dispersed in the SS matrix, or of a few strongly phase-separated, large HS domains embedded in the SS matrix. The morphology development is largely governed by the melt-processing conditions of the TPU. Depending on the morphology, TPU exhibits quite different physical properties with respect to strength, deformation behavior, thermal stability, creep resistance and tribological performance. The paper deals with the influence of important melt-processing parameters, such as temperature, pressure and shear conditions, on the resulting physical properties tested by tensile and relaxation experiments. Furthermore, the morphology is studied employing differential scanning calorimetry (DSC), transmission light microscopy (TLM), scanning electron microscopy (SEM) and transmission electron microscopy (TEM). Correlations between processing conditions and resulting TPU material properties are elaborated. Flow and shear simulations contribute to the understanding of thermally and flow-induced morphology development.

  1. Surgery in the HPV era: the role of robotics and microsurgical techniques.

    PubMed

    Ridge, John A

    2014-01-01

    Retrospective studies suggested that head and neck cancers associated with human papilloma virus (HPV) are more frequently cured than those caused by substance use. The Eastern Cooperative Oncology Group (ECOG) subsequently confirmed the observation in a prospective trial. Most HPV-initiated cancers arise in the oropharynx. Survival differences between patients with cancers caused by HPV and those caused by alcohol and tobacco use persist despite modern treatment. The impression that treatment intensification has resulted in improved survivorship may well be attributable to an increasing proportion of patients with cancers caused by HPV infection. Unsatisfactory results for cancers attributable to substance use, and encouraging improvements in tumor control for patients with HPV-initiated cancers, have led to dissatisfaction with the current nonsurgical management paradigm. Ongoing advances in surgical techniques permit transoral resection of oropharyngeal cancers, thus limiting exposure-related morbidity and permitting ready recovery of speech and swallowing. Transoral laser microsurgery (TLM) is increasingly employed, and transoral robotic surgery (TORS) has dramatically popularized surgical treatment of oropharyngeal cancers. Resection affords the opportunity to increase local control at the primary site, and surgical management of the neck allows risk-based stratification of postoperative radiation therapy. Case series from several institutions show encouraging results. Transoral surgical resection is safe, can be undertaken with acceptable morbidity, and provides locoregional control comparable to that achieved with chemoradiation. Prospective trials for patients with HPV-initiated cancers, as well as those referable to substance use, are underway.

  2. Microstructure, electrical properties, and thermal stability of Au-based ohmic contacts to p-GaN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, L.L.; Davis, R.F.; Kim, M.J.

    1997-09-01

    The work described in this paper is part of a systematic study of ohmic contact strategies for GaN-based semiconductors. Au contacts exhibited ohmic behavior on p-GaN when annealed at high temperature. The specific contact resistivity (ρc) calculated from TLM measurements on Au/p-GaN contacts was 53 Ω·cm2 after annealing at 800 °C. Multilayer Au/Mg/Au/p-GaN contacts exhibited linear, ohmic current-voltage (I-V) behavior in the as-deposited condition with ρc = 214 Ω·cm2. The specific contact resistivity of the multilayer contact increased significantly after rapid thermal annealing (RTA) through 725 °C. Cross-sectional microstructural characterization of the Au/p-GaN contact system via high-resolution electron microscopy (HREM) revealed that interfacial secondary phase formation occurred during high-temperature treatments, which coincided with the improvement of contact performance. In the as-deposited multilayer Au/Mg/Au/p-GaN contact, the initial 32 nm Au layer was found to be continuous. However, Mg metal was found in direct contact with the GaN in many places after annealing at 725 °C for 15 s. The resultant increase in contact resistance is believed to be due to the increased barrier caused by the presence of the low-work-function Mg metal. © 1997 Materials Research Society.

  3. A Study of Contacts and Back-Surface Reflectors for 0.6eV Ga0.32In0.68As/InAs0.32P0.68 Thermophotovoltaic Monolithically Interconnected Modules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, X.; Duda, A.; Carapella, J. J.

    1998-12-23

    Thermophotovoltaic (TPV) systems have recently rekindled a high level of interest for a number of applications. In order to meet the requirements of low-temperature (~1000 °C) TPV systems, 0.6-eV Ga0.32In0.68As/InAs0.32P0.68 TPV monolithically interconnected modules (MIMs) have been developed at the National Renewable Energy Laboratory (NREL) [1]. The successful fabrication of Ga0.32In0.68As/InAs0.32P0.68 MIMs depends on the development and optimization of several key processes. Some results regarding the chemical vapor deposition (CVD) SiO2 insulating layer, selective chemical etch via sidewall profiles, double-layer antireflection coatings, and metallization via interconnects have previously been given elsewhere [2]. In this paper, we report on the study of contacts and back-surface reflectors. In the first part of this paper, Ti/Pd/Ag and Cr/Pd/Ag contacts to n-InAs0.32P0.68 and p-Ga0.32In0.68As are investigated. The transfer length method (TLM) was used to measure the specific contact resistance Rc. The dependence of Rc on doping level and on pre-treatment of the two semiconductors will be reported, as will the adhesion and thermal stability of the Ti/Pd/Ag and Cr/Pd/Ag contacts. In the second part of this paper, we discuss an optimum back-surface reflector (BSR) developed for 0.6-eV Ga0.32In0.68As/InAs0.32P0.68 TPV MIM devices. The optimum BSR consists of three layers: a ~1300 Å MgF2 (or ~1300 Å CVD SiO2) dielectric layer, a ~25 Å Ti adhesion layer, and a ~1500 Å Au reflection layer. This BSR has high reflectance, good adhesion, and excellent thermal stability.

  4. Strain effects in low-dimensional silicon MOS and AlGaN/GaN HEMT devices

    NASA Astrophysics Data System (ADS)

    Baykan, Mehmet Onur

    Strained silicon technology is a well-established method to enhance sub-100 nm MOSFET performance. With the scalability of process-induced strain, strained silicon channels have been used in every advanced CMOS technology since the 90 nm node. At the 22 nm node, due to detrimental short-channel effects, non-planar silicon CMOS has emerged as a viable solution to sustain transistor scaling without compromising device performance. Therefore, it is necessary to conduct a physics-based investigation of the effects of mechanical strain on silicon MOS device performance enhancement as the transverse and longitudinal device dimensions scale down for future technology nodes. While silicon is widely used as the material basis for logic transistors, AlGaN/GaN HEMTs promise a superior device platform over silicon-based power MOSFETs for high-frequency and high-power applications. In contrast to the mature Si crystal growth technology, the abundance of defects in the GaN material system creates obstacles for the realization of a reliable AlGaN/GaN HEMT device technology. Due to the high levels of internal mechanical strain present in AlGaN/GaN HEMTs, it is of utmost importance to understand the impact of mechanical stress on AlGaN/GaN trap generation. First, we investigated the underlying physics of the comparable electron mobility observed in (100) and (110) sidewall silicon double-gate FinFETs, which differs from the observed planar (100) and (110) electron mobility. A systematic experimental study showed that undoped-body, metal-gate-induced stress, and volume-inversion effects do not explain the comparable electron mobility. Using a self-consistent double-gate FinFET simulator, we showed that for (110) FinFETs an increased population of electrons is obtained in the Delta2 valley due to the heavy nonparabolic confinement mass, leading to a comparable average electron transport effective mass for both orientations. The width-dependent strain response of tri-gate p-type FinFETs is experimentally extracted using a 4-point bending jig. It is found that the low-field piezoresistance coefficient of p-type FinFETs can be modeled by using a weighted conductance average of the top and sidewall bulk piezoresistance coefficients (see the sketch after this entry). Next, the strain enhancement of p-type ballistic silicon nanowire MOSFETs is studied using sp3d5s* basis nearest-neighbor tight-binding simulations coupled with a semiclassical top-of-the-barrier transport model. Size- and orientation-dependent strain enhancement of ballistic hole transport is explained by the strain-induced modification of the 1D nanowire valence-band density-of-states. Further insights are provided for future p-type high-performance silicon nanowire logic devices. A physics-based investigation is conducted to understand strain effects on surface-roughness-limited electron mobility in silicon inversion layers. Based on evidence from electrical and material characterization, a strain-induced surface morphology change is hypothesized. To model the observed electrical characteristics, we employed a self-consistent MOSFET mobility simulator coupled with an ad hoc strain-induced roughness modification. The strain-induced surface morphology change is found to be consistent across electrical and materials characterization as well as transport simulations.
    In order to bridge the gap between drift-diffusion-based models for long-channel devices and quasi-ballistic models for nanoscale channels, a unified carrier transport model is developed using an updated one-flux theory. Including high-field and carrier-confinement effects, a surface-potential-based analytical transmission expression is obtained for the entire MOSFET operation range. With the new channel transmission equation and average carrier drift velocity, a new expression for channel ballisticity is defined. The impact of mechanical strain on carrier transport for both nMOSFETs and pMOSFETs, in both linear and saturation regimes, is explained using the new channel transmission definitions. To understand the impact of mechanical strain on AlGaN/GaN HEMT trap generation, we devised an experimental method to obtain the photon-flux-normalized relative areal trap density distribution using the photoionization spectroscopy technique. The details of the trap extraction method and the experimental setup are given. Using this setup, trap characteristics are extracted for both ungated transmission line method (TLM) structures and gated HEMT devices on both Si and SiC substrates. The changes in the device trap characteristics before and after electrical stressing are emphasized. Step-voltage stressing of the AlGaN/GaN HEMT gate stack shows that device degradation is due to near-bandgap trap generation, which is shown to be related to structural defects in GaN.
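
    A sketch of the weighted-conductance-average piezoresistance model referenced above; the geometry and coefficient values are illustrative assumptions, not the dissertation's extractions:

```python
# The FinFET's effective low-field piezoresistance coefficient is taken as the
# conductance-weighted mean of the top-surface and sidewall bulk coefficients;
# with uniform inversion density, conductance scales with conducting width.
def pi_eff(w_top_nm: float, h_side_nm: float, pi_top: float, pi_side: float) -> float:
    g_top, g_side = w_top_nm, 2 * h_side_nm       # top face plus two sidewalls
    return (g_top * pi_top + g_side * pi_side) / (g_top + g_side)

# Assumed p-type longitudinal coefficients, 1/Pa: top (100) vs sidewall (110)
print(pi_eff(w_top_nm=20, h_side_nm=40, pi_top=72e-11, pi_side=103e-11))
```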

  5. Response of surface and groundwater on meteorological drought in Topla River catchment, Slovakia

    NASA Astrophysics Data System (ADS)

    Fendekova, Miriam; Fendek, Marian; Vrablikova, Dana; Blaskovicova, Lotta; Slivova, Valeria; Horvat, Oliver

    2016-04-01

    The continuously increasing number of drought studies published in scientific journals reflects the attention the scientific community pays to drought. The fundamental works, among many others, were published by Yevjevich (1967), Zelenhasic and Salvai (1987), and later by Tallaksen and van Lanen, Eds. (2004). The aim of the paper was to analyze the response of surface water and groundwater to meteorological drought occurrence in the upper and middle part of the Topla River Basin, Slovakia. This catchment has unfavourable hydrogeological conditions, being built of rocks with quite low permeability. The basin is located in the north-eastern part of Slovakia, covering an area of 1050.05 km2. The response was analyzed using precipitation data from the Bardejov station (long-term annual average of 662 mm in 1981-2012) and discharge data from two gauging stations, Bardejov and Hanusovce nad Toplou. Data on groundwater head from eight observation wells located in the catchment, covering the same observation period, were also used. Meteorological drought was estimated using the humidity characterisation of the year and the SPI index. Hydrological drought was evaluated using the threshold level method and the sequent peak algorithm method, both with fixed and variable thresholds. The centroid method of cluster analysis with the squared Euclidean distance was used for clustering data according to the occurrence of drought periods lasting 100 days or more. Results of the SPI index showed very good applicability for identifying drought periods in the basin. The most pronounced dry periods occurred in 1982-1983, 1984, 1998 and 2012, classified as moderately dry, and in 1993-1994, 2003-2004 and 2007, evolving from moderately to severely dry years. Short-term drought prevailed in discharges; only three drought periods longer than 100 days occurred during the evaluated period, in 1986-1987, 1997 and 2003-2004. Discharge drought at the upstream Bardejov gauging station usually lasts longer than at the Hanusovce nad Toplou station located downstream. A higher number of short-term droughts was estimated for the groundwater head in the monitoring well with the smallest depth to groundwater; in this case, the influence of evapotranspiration could be the reason. More long-term droughts were estimated by the TLM method for groundwater heads in the other seven monitoring wells. Those droughts lasted for tens of weeks, from summer until the spring of the following year. No regularity in temporal groundwater-head drought propagation downstream along the Topla River was discovered. However, results of the cluster analysis showed some common features of long-term drought periods (more than 100 days) for two groups of wells. Different hydrogeological conditions in two evaluated wells were also reflected in the number and severity of drought periods. The research was financially supported by the APVV-0089-12 project (principal investigator Miriam Fendekova).
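
    The threshold level method (TLM) used here is simple to sketch. The following illustration runs on synthetic data (the discharge series, threshold choice, and 100-day cutoff are assumptions mirroring the study's setup, not its data):

```python
# Days with discharge below a fixed threshold (e.g. Q90) form deficit events;
# events lasting 100 days or more correspond to the long-term droughts
# discussed in the study.
import numpy as np

rng = np.random.default_rng(0)
q = rng.lognormal(mean=0.0, sigma=0.6, size=3650)   # 10 years of daily discharge, m^3/s
threshold = np.quantile(q, 0.10)                    # fixed Q90 threshold

below = q < threshold
events, start = [], None
for day, b in enumerate(below):
    if b and start is None:
        start = day                                 # drought onset
    elif not b and start is not None:
        events.append((start, day - start))         # (onset, duration in days)
        start = None
if start is not None:
    events.append((start, len(q) - start))          # drought still running at series end

long_droughts = [e for e in events if e[1] >= 100]
print(f"{len(events)} deficit events, {len(long_droughts)} lasting >= 100 days")
```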

  6. Laser assisted zona hatching does not lead to immediate impairment in human embryo quality and metabolism.

    PubMed

    Uppangala, Shubhashree; D'Souza, Fiona; Pudakalakatti, Shivanand; Atreya, Hanudatta S; Raval, Keyur; Kalthur, Guruprasad; Adiga, Satish Kumar

    2016-12-01

    Laser assisted zona hatching (LAH) is a routinely used therapeutic intervention in assisted reproductive technology for patients with poor prognosis. However, results are not conclusive in demonstrating the benefits of zona hatching in improving the pregnancy rate. Recent observations of LAH-induced genetic instability in animal embryos prompted us to examine the effects of laser assisted zona hatching on human preimplantation embryo quality and metabolic uptake using high-resolution nuclear magnetic resonance (NMR) technology. This experimental prospective study included fifty embryos from twenty-five patients undergoing intracytoplasmic sperm injection. Embryo quality assessment, followed by profiling of spent media for the non-invasive evaluation of metabolites, was performed using NMR spectroscopy 24 hours after laser treatment and compared with non-treated sibling embryos. Neither cell number nor embryo quality on day 3 of development varied significantly between the two groups at 24 hours after laser treatment. Time-lapse monitoring of the embryos for 24 hours did not reveal blastomere fragmentation adjacent to the point of laser treatment. Similarly, principal component analysis of metabolites did not demonstrate any variation across the groups. These results suggest that laser assisted zona hatching does not affect human preimplantation embryo morphology and metabolism, at least until 24 hours after the procedure. However, studies are required to elucidate laser-induced metabolic and developmental changes at extended time periods. AH: assisted hatching; ART: assisted reproductive technology; DNA: deoxyribonucleic acid; LAH: laser assisted hatching; MHz: megahertz; NMR: nuclear magnetic resonance; PCA: principal component analysis; PGD: preimplantation genetic diagnosis; TLM: time lapse monitoring.

  7. Genotyping of Mycobacterium leprae strains from a region of high endemic leprosy prevalence in India.

    PubMed

    Lavania, Mallika; Jadhav, Rupendra; Turankar, Ravindra P; Singh, Itu; Nigam, Astha; Sengupta, U

    2015-12-01

    Leprosy is still a major health problem in India, which has the highest number of cases. Multiple-locus variable number of tandem repeat analysis (MLVA) and single nucleotide polymorphism (SNP) typing have been proposed as tools for strain typing to track the transmission of leprosy. However, empirical data for a defined population, of adequate scale and duration, were lacking for studying the transmission chain of leprosy. Seventy slit-skin scrapings were collected from the Purulia (West Bengal), Miraj (Maharashtra), Shahdara (Delhi), and Naini (UP) hospitals of The Leprosy Mission (TLM). SNP subtyping and MLVA on 10 VNTR loci were applied for strain typing of Mycobacterium leprae. Along with the strain typing, a conventional epidemiological investigation was performed to trace the transmission chain. In addition, phylogenetic analysis was done on the variable number of tandem repeat (VNTR) data sets using the sequence type analysis and recombinational tests (START) software. START performs analyses that aid the investigation of bacterial population structure using multilocus sequence data, including data summary, lineage assignment, and tests for recombination and selection. Diversity was observed in the cross-sectional survey of isolates obtained from 70 patients. Similarity in fingerprinting profiles observed in specimens from cases in the same family or neighborhood indicated a possible common source of infection. The data suggest that these VNTRs, together with SNP subtyping, can be used to study the sources and transmission chain of leprosy, which could be very important in monitoring disease dynamics in highly endemic foci. The present study strongly indicates that multi-case families might constitute epidemic foci and the main source of M. leprae in villages, causing the predominant strain or cluster infection leading to the spread of leprosy in the community. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Planetary Protection Bioburden Analysis Program

    NASA Technical Reports Server (NTRS)

    Beaudet, Robert A.

    2013-01-01

    This program is a Microsoft Access program that performs statistical analysis of the colony counts from assays performed on the Mars Science Laboratory (MSL) spacecraft to determine the bioburden density, 3-sigma biodensity, and the total bioburdens required for the MSL prelaunch reports. It also contains numerous tools that report the data in various ways to simplify the required reports. The program performs all the calculations directly in the MS Access program. Prior to this development, the data were exported to large Excel files that had to be cut and pasted to provide the desired results. The program contains a main menu and a number of submenus. Analyses can be performed using either all the assays or only the accountable assays that will be used in the final analysis. There are three options on the first menu: calculate using (1) the old MER (Mars Exploration Rover) statistics, (2) the MSL statistics for all the assays, or (3) the MSL statistics for only the accountable assays. Other options on the main menu include a data editing form, utility programs that produce various reports requested by the microbiologists and the project, and tools to generate the groupings for the final analyses.
    The analyses can be carried out in three ways: each assay can be treated separately, the assays can be treated collectively for the whole zone as a group, or the assays can be collected in groups designated by the JPL Planetary Protection Manager. The latter approach was used to generate the final report, because assays on the same or similar equipment can be assumed to have been exposed to the same environment and cleaning. Thus, the statistics are improved by having a larger population, thereby reducing the standard deviation by the square root of N (see the sketch at the end of this entry). For each method mentioned above, three reports are available. The first is a detailed report including all the data; this version was very useful in verifying the calculations. The second is a brief report that is similar to the full detailed report but does not print out the data. The third is a grand total and summary report in which each assay requires only one line. For the first and second reports, most of the calculations are performed in the report section itself; for the third, all the calculations are performed directly in the query bound to the report. All the numerical results were verified by comparing them with Excel templates, after exporting the data from the Planetary Protection Analysis program to Excel.
    Micrometeoroid and Orbital Debris (MMOD) Shield Ballistic Limit Analysis Program, Lyndon B. Johnson Space Center, Houston, Texas: This software implements penetration limit equations for common micrometeoroid and orbital debris (MMOD) shield configurations, windows, and thermal protection systems. Allowable MMOD risk is formulated in terms of the probability of no penetration (PNP) of the spacecraft pressure hull. For calculating the risk, spacecraft geometry models, mission profiles, debris environment models, and penetration limit equations for installed shielding configurations are required. Risk assessment software such as NASA's BUMPERII is used to calculate mission PNP; however, it is unsuitable for use in shield design and preliminary analysis studies. The software defines a single equation for the design and performance evaluation of common MMOD shielding configurations, windows, and thermal protection systems, along with a description of their validity range and guidelines for their application. Recommendations are based on preliminary reviews of fundamental assumptions and accuracy in predicting experimental impact test results. The software is programmed in Visual Basic for Applications for installation as a simple add-in for Microsoft Excel. The user is directed to a graphical user interface (GUI) that requires user inputs and provides solutions directly in Microsoft Excel workbooks. This work was done by Shannon Ryan of the USRA Lunar and Planetary Institute for Johnson Space Center. Further information is contained in a TSP (see page 1). MSC-24582-1
    Commercially, because it is so generic, Enigma can be used for almost any project that requires engineering visualization, model building, or animation. Models in Enigma can be exported to many other formats for use in other applications as well. Educationally, Enigma is being used to allow university students to visualize robotic algorithms in a simulation mode before using them with actual hardware. This work was done by David Shores and Sharon P. Goza of Johnson Space Center; Cheyenne McKeegan, Rick Easley, Janet Way, and Shonn Everett of MEI Technologies; Mark Manning of PTI; and Mark Guerra, Ray Kraesig, and William Leu of Tietronix Software, Inc. For further information, contact the JSC Innovation Partnerships Office at (281) 483-3809. MSC-24211-1
    Spitzer Telemetry Processing System, NASA's Jet Propulsion Laboratory, Pasadena, California: The Spitzer Telemetry Processing System (SirtfTlmProc) was designed to address objectives of JPL's Multi-mission Image Processing Lab (MIPL) in processing spacecraft telemetry and distributing the resulting data to the science community. To minimize costs and maximize operability, the software design focused on automated error recovery, performance, and information management. The system processes telemetry from the Spitzer spacecraft and delivers Level 0 products to the Spitzer Science Center. SirtfTlmProc is a unique system with automated error notification and recovery, with a real-time continuous service that can go quiescent after periods of inactivity. The software can process 2 GB of telemetry and deliver Level 0 science products to the end user in four hours. It provides analysis tools so the operator can manage the system and troubleshoot problems. It automates telemetry processing in order to reduce staffing costs. This work was done by Alice Stanboli, Elmain M. Martinez, and James M. McAuley of Caltech for NASA's Jet Propulsion Laboratory. For more information, contact iaoffice@jpl.nasa.gov. This software is available for commercial licensing. Please contact Dan Broderick at Daniel.F.Broderick@jpl.nasa.gov. Refer to NPO-47803.
    Wing Leading Edge RCC Rapid Response Damage Prediction Tool (IMPACT2), Lyndon B. Johnson Space Center, Houston, Texas: This rapid response computer program predicts Orbiter Wing Leading Edge (WLE) damage caused by ice or foam impact during a Space Shuttle launch (Program "IMPACT2"). The program was developed after the Columbia accident in order to quickly assess WLE damage due to ice, foam, or metal impact (if any) during a Shuttle launch. IMPACT2 simulates an impact event in a few minutes for foam impactors, and in seconds for ice and metal impactors. The damage criterion is derived from results obtained from a sophisticated commercial program, which requires hours to carry out simulations of the same impact events. The program was designed to run much faster than the commercial program, with prediction of projectile threshold velocities within 10 to 15% of commercial-program values. The mathematical model involves coupling of Orbiter wing normal modes of vibration to nonlinear or linear spring-mass models. IMPACT2 solves nonlinear or linear impact problems using classical normal modes of vibration of a target, and nonlinear/linear time-domain equations for the projectile. Impact loads and stresses developed in the target are computed as functions of time. This model is novel because of its speed of execution. A typical model of foam, or another projectile characterized by material nonlinearities, impacting an RCC panel is executed in minutes instead of the hours needed by the commercial programs. Target damage due to impact can be assessed quickly, provided that target vibration modes and allowable stress are known. This work was done by Robert Clark, Jr., Paul Cotter, and Constantine Michalopoulos of The Boeing Company for Johnson Space Center. For further information, contact the JSC Innovation Partnerships Office at (281) 483-3809. MSC-24988-1
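
    The square-root-of-N benefit from grouping assays in the bioburden program above can be sketched numerically. This is a minimal illustration with formulas assumed from the description, not NASA's actual code:

```python
# Pooling N comparable assays leaves the mean bioburden density unchanged
# while the standard error, and hence the 3-sigma upper bound, shrinks
# roughly as 1/sqrt(N).
import numpy as np

def bioburden(colony_counts, sampled_area_m2: float):
    """Return (mean density, mean + 3-sigma upper bound), spores per m^2."""
    densities = np.asarray(colony_counts, dtype=float) / sampled_area_m2
    mean = densities.mean()
    sem = densities.std(ddof=1) / np.sqrt(len(densities))  # standard error of the mean
    return mean, mean + 3 * sem

counts = [4, 7, 2, 5, 6, 3, 5, 4]        # colonies per assay (illustrative)
print(bioburden(counts, sampled_area_m2=0.0025))
```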

  9. Palladium-germanium ohmic contact onto gallium arsenide formed by the solid phase epitaxy of germanium: A microstructure study

    NASA Astrophysics Data System (ADS)

    Radulescu, Fabian

    2000-12-01

    Driven by the remarkable growth in the telecommunication market, the demand for more complex GaAs circuitry continued to increase in the last decade. As a result, the GaAs industry is faced with new challenges in its efforts to fabricate devices with smaller dimensions that would permit higher integration levels. One of the limiting factors is the ohmic contact metallurgy of the metal-semiconductor field effect transistor (MESFET), which, during annealing, induces a high degree of lateral diffusion into the substrate. Because of its limited reaction with the substrate, the Pd-Ge contact seems to be the most promising candidate for the next generation of MESFETs. The Pd-Ge system belongs to a new class of ohmic contacts to compound semiconductors, part of an alloying strategy developed only recently, which relies on solid phase epitaxy (SPE) and solid phase regrowth to "un-pin" the Fermi level at the surface of the compound semiconductor. However, implementing this alloy into an integrated process flow proved difficult due to our incomplete understanding of the microstructure evolution during annealing and its implications for the electrical properties of the contact. The microstructure evolution and the corresponding solid-state reactions that take place during annealing of Pd-Ge thin films on GaAs were studied in connection with their effects on the electrical properties of the ohmic contact. The phase transformation sequence, transition temperatures and activation energies were determined by combining differential scanning calorimetry (DSC) for thermal analysis with transmission electron microscopy (TEM) for microstructure identification. In-situ TEM annealing experiments on the Pd/Ge/Pd/GaAs ohmic contact system permitted real-time determination of the evolution of the contact microstructure. The kinetics of the solid-state reactions that occur during ohmic contact formation were determined by measuring the grain growth rates associated with each phase from the videotape recordings. With the exception of the Pd-GaAs interactions, it was found that four phase transformations occur during annealing of the Pd:Ge thin films on GaAs. The microstructural information was correlated with specific ohmic contact resistivity measurements performed with the transmission line method (TLM), and these results demonstrated that Ge SPE growth on GaAs yields the optimal electrical properties for the contact. By using the focused ion beam (FIB) method to produce microcantilever beams, the residual stress present in the thin-film system was studied in connection with the microstructure. Although the PdGe/epi-Ge/GaAs stack appeared to be the optimal microstructural configuration, the presence of PdGe at the interface with GaAs did not degrade the contact resistivity significantly. These results made it difficult to establish a charge transport mechanism across the interface, but they explained the wide processing window associated with this contact.

  10. Multi-scale structural and chemical analysis of sugarcane bagasse in the process of sequential acid–base pretreatment and ethanol production by Scheffersomyces shehatae and Saccharomyces cerevisiae

    PubMed Central

    2014-01-01

    Background: Heavy usage of gasoline, burgeoning fuel prices, and environmental issues have paved the way for the exploration of cellulosic ethanol. Cellulosic ethanol production technologies are emerging and require continued technological advancements. One of the most challenging issues is the pretreatment of lignocellulosic biomass for the desired sugar yields after enzymatic hydrolysis. We hypothesized that consecutive dilute sulfuric acid-dilute sodium hydroxide pretreatment would overcome the native recalcitrance of sugarcane bagasse (SB) by enhancing cellulase accessibility of the embedded cellulosic microfibrils. Results: SB hemicellulosic hydrolysate, after concentration by vacuum evaporation and detoxification, showed 30.89 g/l xylose along with other products (0.32 g/l glucose, 2.31 g/l arabinose, and 1.26 g/l acetic acid). The recovered cellulignin was subsequently delignified by sodium hydroxide-mediated pretreatment. The acid-base pretreated material released 48.50 g/l total reducing sugars (0.91 g sugars/g cellulose in SB) after enzymatic hydrolysis. Ultra-structural mapping of acid-base pretreated and enzyme-hydrolyzed SB by microscopic analysis (scanning electron microscopy (SEM) and transmitted light microscopy (TLM)) and spectroscopic analysis (X-ray diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy, Fourier transform near-infrared (FT-NIR) spectroscopy, and nuclear magnetic resonance (NMR) spectroscopy) elucidated the molecular changes in the hemicellulose, cellulose, and lignin components of bagasse. The detoxified hemicellulosic hydrolysate was fermented by Scheffersomyces shehatae (syn. Candida shehatae UFMG HM 52.2), resulting in 9.11 g/l ethanol (yield 0.38 g/g) after 48 hours of fermentation. The enzymatic hydrolysate, when fermented by Saccharomyces cerevisiae 174, yielded 8.13 g/l ethanol (yield 0.22 g/g) after 72 hours of fermentation. Conclusions: Multi-scale structural studies of SB after sequential acid-base pretreatment and enzymatic hydrolysis showed marked changes in hemicellulose and lignin removal at the molecular level. The cellulosic material showed high saccharification efficiency after enzymatic hydrolysis. The hemicellulosic and cellulosic hydrolysates gave moderate ethanol production by S. shehatae and S. cerevisiae under batch fermentation conditions. PMID:24739736

  11. Internalization of titanium dioxide nanoparticles by glial cells is given at short times and is mainly mediated by actin reorganization-dependent endocytosis.

    PubMed

    Huerta-García, Elizabeth; Márquez-Ramírez, Sandra Gissela; Ramos-Godinez, María Del Pilar; López-Saavedra, Alejandro; Herrera, Luis Alonso; Parra, Alberto; Alfaro-Moreno, Ernesto; Gómez, Erika Olivia; López-Marure, Rebeca

    2015-12-01

    Many nanoparticles (NPs) have toxic effects on multiple cell lines. This toxicity is assumed to be related to their accumulation within cells. However, the process of internalization of NPs has not yet been fully characterized. In this study, the cellular uptake, accumulation, and localization of titanium dioxide nanoparticles (TiO2 NPs) in rat (C6) and human (U373) glial cells were analyzed using time-lapse microscopy (TLM) and transmission electron microscopy (TEM). Cytochalasin D (Cyt-D) was used to evaluate whether the internalization process depends on actin reorganization. To determine whether NP uptake is mediated by phagocytosis or macropinocytosis, nitroblue tetrazolium (NBT) reduction was measured and 5-(N-ethyl-N-isopropyl)amiloride was used. Expression of proteins involved in endocytosis and exocytosis, such as caveolin-1 (Cav-1) and cysteine string proteins (CSPs), was also determined using flow cytometry. TiO2 NPs were taken up by both cell types, bound to cellular membranes, and internalized at very short times after exposure (C6, 30 min; U373, 2 h). During the uptake process, the formation of pseudopodia and intracellular vesicles was observed, indicating that this process was mediated by endocytosis. No specific localization of TiO2 NPs in particular organelles was found; instead, they were primarily localized in large vesicles in the cytoplasm. Internalization of TiO2 NPs was strongly inhibited by Cyt-D in both cell types and by amiloride in U373 cells; moreover, the observed endocytosis was not associated with NBT reduction in either cell type, indicating that macropinocytosis is the main process of internalization in U373 cells. In addition, increases in the expression of Cav-1 protein and CSPs were observed. In conclusion, glial cells are able to internalize TiO2 NPs by a constitutive endocytic mechanism that may be associated with the strong cytotoxic effect of these NPs; therefore, TiO2 NP internalization and accumulation in brain cells could be dangerous to human health. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Growth of non-polar and semi-polar gallium nitride with plasma-assisted molecular beam epitaxy: Relationships between film microstructure, reciprocal lattice and transport properties

    NASA Astrophysics Data System (ADS)

    McLaurin, Melvin Barker

    2007-12-01

    The group-III nitrides exhibit significant spontaneous and piezoelectric polarization parallel to the [0001] direction, which is manifested as sheet charges at heterointerfaces. While polarization can be used to engineer the band structure of a device, internal electric fields generated by polarization discontinuities can also have a number of negative consequences for the performance and design of structures utilizing heterojunctions. The most direct route to polarization-free group-III nitride devices is growth on one of the "non-polar" prismatic faces of the crystal (m-plane (10-10) or a-plane (11-20)), where the [0001] direction lies in the plane of any heterointerfaces. This dissertation focuses on the growth of non-polar and semi-polar GaN by MBE and on how the dominant feature of the defect structure of non-polar and semi-polar films, basal-plane stacking faults, determines the properties of the reciprocal lattice and the electrical transport of the films. The first part is a survey of the MBE growth of the two non-polar planes, (10-10) and (11-20), and three semi-polar planes, (10-11), (10-13) and {11-22}, investigated in this work. The relationship between basal-plane stacking faults and broadening of the reciprocal lattice is discussed and measured with X-ray diffraction using a lateral variant of the Williamson-Hall analysis (a generic sketch of this decomposition follows this entry). The electrical properties of m-plane films are investigated using Hall-effect and TLM measurements. Anisotropic mobilities were observed for both electrons and holes, along with record p-type conductivities and hole concentrations. By comparison to both inversion-domain-free c-plane films and stacking-fault-free free-standing m-plane GaN wafers, it was determined that basal-plane stacking faults were the source of both the enhanced p-type conductivity and the anisotropic carrier mobilities. Finally, a possible source of the anisotropic mobilities and enhanced p-type conduction in faulted films is proposed. Basal-plane stacking faults are treated as heterostructures of the wurtzite and zincblende polytypes of GaN. The band-parameter and polarization differences between the polytypes result in large offsets in both the conduction and valence band edges at the stacking faults. Anisotropy results from scattering from the band-edge offsets, and enhanced mobility from screening due to charge accumulation at these band-edge offsets.
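
    A generic sketch of the Williamson-Hall-style separation of size and microstrain broadening (synthetic peak widths and reflections; the dissertation's lateral variant works on in-plane broadening, while this shows only the basic decomposition):

```python
# Plotting beta*cos(theta) against sin(theta) gives an intercept ~ K*lambda/D
# (coherence length D) and a slope ~ 4*strain, separating the two broadening
# contributions from a set of diffraction peaks.
import numpy as np

lam, K = 1.5406e-10, 0.9                                  # Cu K-alpha (m), shape factor
two_theta = np.radians([32.4, 34.6, 36.9, 57.8, 63.5])    # assumed reflections, rad
beta = np.radians([0.20, 0.21, 0.22, 0.28, 0.30])         # FWHM of each peak, rad

theta = two_theta / 2
slope, intercept = np.polyfit(np.sin(theta), beta * np.cos(theta), 1)
D = K * lam / intercept          # effective coherence length, m
strain = slope / 4
print(f"coherence length ~ {D * 1e9:.0f} nm, microstrain ~ {strain:.2e}")
```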

  13. [Analyzing and modeling methods of near infrared spectroscopy for in-situ prediction of oil yield from oil shale].

    PubMed

    Liu, Jie; Zhang, Fu-Dong; Teng, Fei; Li, Jun; Wang, Zhi-Hong

    2014-10-01

    In order to detect the oil yield of oil shale in situ, based on portable near infrared spectroscopy and using 66 rock core samples from well No. 2 of the Fuyu oil shale base in Jilin, modeling and analysis methods for in-situ detection were researched. With the developed portable spectrometer, spectra in 3 data formats (reflectance, absorbance and K-M function) were acquired. With 4 modeling-data optimization methods: principal component analysis-Mahalanobis distance (PCA-MD) for eliminating abnormal samples, uninformative variables elimination (UVE) for wavelength selection, and their combinations PCA-MD + UVE and UVE + PCA-MD; 2 modeling methods: partial least squares (PLS) and back-propagation artificial neural network (BPANN); and the same data pre-processing, modeling and analysis experiments were performed to determine the optimum analysis model and method. The results show that the data format, the data optimization method and the modeling method all affect the analysis precision of the model. Whether or not an optimization method is used, reflectance or K-M function is the proper spectrum format of the modeling database for both modeling methods. With the two modeling methods and the four data optimization methods, the precisions of models built from the same database differ. For the PLS modeling method, the PCA-MD and UVE + PCA-MD optimization methods can improve the modeling precision of a database using the K-M function spectrum format. For the BPANN modeling method, the UVE, UVE + PCA-MD and PCA-MD + UVE optimization methods can improve the modeling precision of a database using any of the 3 spectrum formats. Except when using reflectance spectra with the PCA-MD optimization method, the modeling precision of the BPANN method is better than that of the PLS method. Modeling with reflectance spectra, the UVE optimization method and the BPANN modeling method gives the highest analysis precision: the correlation coefficient (Rp) is 0.92 and the standard error of prediction (SEP) is 0.69%.
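
    The record includes no code; as a rough sketch of the PLS side of such a calibration, the following uses scikit-learn's PLSRegression on synthetic stand-in spectra (the wavelength count, informative band, and noise level are all invented) and reports Rp and SEP in the same spirit as the abstract. The actual preprocessing, UVE wavelength selection, and BPANN comparison are not reproduced.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic stand-in for 66 core samples x 256 NIR wavelengths (reflectance).
n_samples, n_wavelengths = 66, 256
X = rng.normal(size=(n_samples, n_wavelengths)).cumsum(axis=1)  # smooth-ish spectra
true_coef = np.zeros(n_wavelengths)
true_coef[80:90] = 0.05  # one "informative" band, as UVE would try to isolate
y = X @ true_coef + rng.normal(scale=0.2, size=n_samples)       # oil yield (%)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

pls = PLSRegression(n_components=6)
pls.fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()

rp = np.corrcoef(y_te, y_hat)[0, 1]              # correlation coefficient Rp
sep = np.sqrt(mean_squared_error(y_te, y_hat))   # standard error of prediction
print(f"Rp = {rp:.2f}, SEP = {sep:.2f}")
```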

  14. System and method of designing models in a feedback loop

    DOEpatents

    Gosink, Luke C.; Pulsipher, Trenton C.; Sego, Landon H.

    2017-02-14

    A method and system for designing models is disclosed. The method includes selecting a plurality of models for modeling a common event of interest. The method further includes aggregating the results of the models and analyzing each model compared to the aggregate result to obtain comparative information. The method also includes providing the information back to the plurality of models to design more accurate models through a feedback loop.
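
    A minimal sketch of the aggregate-compare-feedback idea described above, assuming a simple weighting scheme (inverse mean-squared deviation from the ensemble aggregate) that the patent text does not specify; the "models" here are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)
truth = np.sin(np.linspace(0, 6, 200))  # the common event of interest
models = [truth + rng.normal(0, s, 200) for s in (0.1, 0.3, 0.6)]  # 3 imperfect models

weights = np.ones(3) / 3
for _ in range(20):  # feedback iterations
    aggregate = np.average(models, axis=0, weights=weights)
    # comparative information: each model's deviation from the aggregate result
    dev = np.array([np.mean((m - aggregate) ** 2) for m in models])
    weights = (1.0 / dev) / np.sum(1.0 / dev)  # feed back: favor concordant models

print("final weights:", np.round(weights, 3))
```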

  15. Accuracy evaluation of dental models manufactured by CAD/CAM milling method and 3D printing method.

    PubMed

    Jeong, Yoo-Geum; Lee, Wan-Sun; Lee, Kyu-Bok

    2018-06-01

    To evaluate the accuracy of models made using the computer-aided design/computer-aided manufacturing (CAD/CAM) milling method and the 3D printing method, and to confirm their applicability as working models for dental prosthesis production. First, a natural tooth model (ANA-4, Frasaco, Germany) was scanned using an oral scanner. The obtained scan data were then used as a CAD reference model (CRM) to produce 10 models each by the milling method and the 3D printing method. The 20 models were then scanned using a desktop scanner to form the CAD test models (CTMs). The accuracy of the two groups was compared using dedicated software to calculate the root mean square (RMS) value after superimposing the CRM and each CTM. The RMS value of the models manufactured by the milling method (152±52 µm) was significantly higher than that of the models produced by the 3D printing method (52±9 µm). The accuracy of the 3D printing method is therefore superior to that of the milling method, but at present both methods are limited in their application as working models for prosthesis manufacture.
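
    As a rough illustration of the RMS-after-superimposition comparison (the study itself used dedicated inspection software), the sketch below aligns two synthetic point clouds with the Kabsch algorithm and reports the residual RMS; the point clouds, misalignment, and noise level are invented.

```python
import numpy as np

def kabsch_rms(P, Q):
    """RMS deviation between point sets P and Q (n x 3) after optimal
    rigid-body superimposition of Q onto P (Kabsch algorithm)."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Qc.T @ Pc)
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, d])  # guard against reflections
    R = U @ D @ Vt
    diff = Qc @ R - Pc
    return np.sqrt((diff ** 2).sum() / len(P))

rng = np.random.default_rng(3)
crm = rng.uniform(size=(500, 3)) * 10.0  # reference scan (mm)
theta = 0.1
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
# test scan: rotated, translated, and noisy copy of the reference
ctm = crm @ rot.T + 1.5 + rng.normal(0, 0.03, (500, 3))

print(f"RMS = {kabsch_rms(crm, ctm) * 1000:.0f} um")  # roughly 50 um here
```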

  16. Practical Use of Computationally Frugal Model Analysis Methods

    DOE PAGES

    Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; ...

    2015-03-21

    Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require tens of parallelizable model runs; their convenience allows for other uses of the computational effort. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that the greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.

  17. Ontology method for 3DGIS modeling

    NASA Astrophysics Data System (ADS)

    Sun, Min; Chen, Jun

    2006-10-01

    Data modeling is a difficult problem in 3DGIS for which no satisfactory solution has yet been provided, for reasons coming from various sides. In this paper, a new solution named the "Ontology method" is proposed. Traditional GIS modeling methods mainly focus on geometric modeling, i.e., they try to abstract geometric primitives for object representation, and this kind of modeling has proven awkward in the 3DGIS modeling process. The Ontology method instead begins modeling by establishing a set of ontologies at different levels. The essential difference of this method is that it swaps the positions that 'spatial data' and 'attribute data' hold in the 2DGIS modeling process when modeling for 3DGIS. The Ontology method has advantages in many respects: a system based on ontologies readily supports interoperation for communication and data mining for knowledge deduction, among other benefits.

  18. How Qualitative Methods Can be Used to Inform Model Development.

    PubMed

    Husbands, Samantha; Jowett, Susan; Barton, Pelham; Coast, Joanna

    2017-06-01

    Decision-analytic models play a key role in informing healthcare resource allocation decisions. However, there are ongoing concerns with the credibility of models. Modelling methods guidance can encourage good practice within model development, but its value is dependent on its ability to address the areas that modellers find most challenging. Further, it is important that modelling methods and related guidance are continually updated in light of any new approaches that could potentially enhance model credibility. The objective of this article was to highlight the ways in which qualitative methods have been used and recommended to inform decision-analytic model development and enhance modelling practices. With reference to the literature, the article discusses two key ways in which qualitative methods can be, and have been, applied. The first approach involves using qualitative methods to understand and inform general and future processes of model development, and the second, using qualitative techniques to directly inform the development of individual models. The literature suggests that qualitative methods can improve the validity and credibility of modelling processes by providing a means to understand existing modelling approaches that identifies where problems are occurring and further guidance is needed. It can also be applied within model development to facilitate the input of experts to structural development. We recommend that current and future model development would benefit from the greater integration of qualitative methods, specifically by studying 'real' modelling processes, and by developing recommendations around how qualitative methods can be adopted within everyday modelling practice.

  19. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models.

    PubMed

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A

    2012-03-15

    To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.
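
    A minimal sketch of LASSO-style variable selection for a binary complication endpoint, using scikit-learn's L1-penalized logistic regression on synthetic data; the predictor count, effect sizes, and sample size are invented, and the study's actual NTCP pipeline and repeated cross-validation scheme are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(4)

# Synthetic stand-in: 200 patients, 12 candidate predictors (e.g. dose metrics),
# only 3 of which truly drive the complication probability.
X = rng.normal(size=(200, 12))
logit = 1.2 * X[:, 0] - 0.8 * X[:, 3] + 0.6 * X[:, 7] - 0.5
y = rng.uniform(size=200) < 1.0 / (1.0 + np.exp(-logit))

# The L1 penalty performs LASSO-style variable selection inside the fit;
# cross-validation picks the penalty strength.
model = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=10, cv=5)
model.fit(X, y)

selected = np.flatnonzero(model.coef_.ravel())
print("selected predictors:", selected)  # ideally close to {0, 3, 7}
```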

  20. Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method

    NASA Astrophysics Data System (ADS)

    Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang

    2017-06-01

    Seismic wavefield modeling is important for improving seismic data processing and interpretation. Wavefield calculations can become unstable when forward modeling uses large time steps over long simulation times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling that applies the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method, called the symplectic Fourier finite-difference (symplectic FFD) method, offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used in seismic modeling of strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method eliminates the residual qSV wave encountered in seismic modeling of anisotropic media and maintains stable wavefield propagation for large time steps.
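
    A one-dimensional toy version of the idea: a Fourier (spectral) spatial derivative combined with a symplectic Stormer-Verlet time integrator for the acoustic wave equation in Hamiltonian form. Grid size, pulse width, and step count are arbitrary, and the paper's 3D anisotropic FFD formulation is not attempted.

```python
import numpy as np

# Toy 1D acoustic wave u_tt = c^2 u_xx: Fourier derivatives in space,
# symplectic (kick-drift-kick Verlet) stepping in time.
n, L, c, dt, steps = 256, 1.0, 1.0, 5e-4, 2000
x = np.linspace(0, L, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

u = np.exp(-((x - 0.5) ** 2) / 0.001)  # initial pressure pulse
p = np.zeros_like(x)                   # conjugate momentum (u_t)

def laplacian(f):
    return np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(f)))

for _ in range(steps):
    p += 0.5 * dt * c**2 * laplacian(u)   # half kick
    u += dt * p                           # drift
    p += 0.5 * dt * c**2 * laplacian(u)   # half kick

# Under symplectic stepping the discrete energy stays bounded (no secular drift).
ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
energy = 0.5 * np.sum(p**2) + 0.5 * c**2 * np.sum(ux**2)
print(f"energy after {steps} steps: {energy:.6f}")
```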

  1. An Exact Model-Based Method for Near-Field Sources Localization with Bistatic MIMO System.

    PubMed

    Singh, Parth Raj; Wang, Yide; Chargé, Pascal

    2017-03-30

    In this paper, we propose an exact model-based method for near-field source localization with a bistatic multiple-input, multiple-output (MIMO) radar system, and compare it with an approximated model-based method. The aim of this paper is to propose an efficient way to use the exact model of the received signals of near-field sources in order to eliminate the systematic error introduced by the use of an approximated model in most existing near-field source localization techniques. The proposed method uses parallel factor (PARAFAC) decomposition to deal with the exact model. Thanks to the exact model, the proposed method has better precision and resolution than the compared approximated model-based method. The simulation results show the performance of the proposed method.
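
    PARAFAC (canonical polyadic) decomposition is the algebraic core of the method. The sketch below implements only a rank-1 CP fit by alternating least squares on a synthetic three-way tensor; the actual localization problem uses higher ranks and the exact near-field signal model, neither of which is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic rank-1 three-way tensor T_ijk = a_i * b_j * c_k plus noise.
a0, b0, c0 = rng.normal(size=8), rng.normal(size=10), rng.normal(size=12)
T = np.einsum("i,j,k->ijk", a0, b0, c0) + 0.01 * rng.normal(size=(8, 10, 12))

# Rank-1 PARAFAC/CP by alternating least squares: each factor update is a
# closed-form contraction of the tensor with the other two factors.
a = rng.normal(size=8)
b = rng.normal(size=10)
c = rng.normal(size=12)
for _ in range(50):
    a = np.einsum("ijk,j,k->i", T, b, c) / ((b @ b) * (c @ c))
    b = np.einsum("ijk,i,k->j", T, a, c) / ((a @ a) * (c @ c))
    c = np.einsum("ijk,i,j->k", T, a, b) / ((a @ a) * (b @ b))

T_hat = np.einsum("i,j,k->ijk", a, b, c)
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
print(f"relative reconstruction error: {rel_err:.4f}")
```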

  2. Model-based economic evaluation in Alzheimer's disease: a review of the methods available to model Alzheimer's disease progression.

    PubMed

    Green, Colin; Shearer, James; Ritchie, Craig W; Zajicek, John P

    2011-01-01

    To consider the methods available to model Alzheimer's disease (AD) progression over time, to inform the structure and development of model-based evaluations, and the future direction of modelling methods in AD. A systematic search of the health care literature was undertaken to identify methods to model disease progression in AD. Modelling methods are presented in a descriptive review. The literature search identified 42 studies presenting methods or applications of methods to model AD progression over time. The review identified 10 general modelling frameworks available to empirically model the progression of AD as part of a model-based evaluation. Seven of these general models are statistical models predicting progression of AD using a measure of cognitive function. The main concerns with the models are on model structure, around the limited characterization of disease progression, and on the use of a limited number of health states to capture events related to disease progression over time. None of the available models have been able to present a comprehensive model of the natural history of AD. Although helpful, there are serious limitations in the methods available to model progression of AD over time. Advances are needed to better model the progression of AD and the effects of the disease on people's lives. Recent evidence supports the need for a multivariable approach to the modelling of AD progression, and indicates that a latent variable analytic approach to characterising AD progression is a promising avenue for advances in the statistical development of modelling methods. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  3. Sorting protein decoys by machine-learning-to-rank

    PubMed Central

    Jing, Xiaoyang; Wang, Kai; Lu, Ruqian; Dong, Qiwen

    2016-01-01

    Much progress has been made in protein structure prediction during the last few decades. As predicted models can span a broad accuracy spectrum, the accuracy of quality estimation becomes one of the key elements of successful protein structure prediction. Over the past years, a number of methods have been developed to address this issue, and these methods can be roughly divided into three categories: single-model methods, clustering-based methods and quasi-single-model methods. In this study, we first develop a single-model method, MQAPRank, based on a learning-to-rank algorithm, and then implement a quasi-single-model method, Quasi-MQAPRank. The proposed methods are benchmarked on the 3DRobot and CASP11 datasets. Five-fold cross-validation on the 3DRobot dataset shows that the proposed single-model method outperforms the other methods whose outputs are taken as its features, and that the quasi-single-model method can further enhance performance. On the CASP11 dataset, the proposed methods also perform well compared with other leading methods in the corresponding categories. In particular, the Quasi-MQAPRank method achieves considerable performance on the CASP11 Best150 dataset. PMID:27530967

  4. Sorting protein decoys by machine-learning-to-rank.

    PubMed

    Jing, Xiaoyang; Wang, Kai; Lu, Ruqian; Dong, Qiwen

    2016-08-17

    Much progress has been made in protein structure prediction during the last few decades. As predicted models can span a broad accuracy spectrum, the accuracy of quality estimation becomes one of the key elements of successful protein structure prediction. Over the past years, a number of methods have been developed to address this issue, and these methods can be roughly divided into three categories: single-model methods, clustering-based methods and quasi-single-model methods. In this study, we first develop a single-model method, MQAPRank, based on a learning-to-rank algorithm, and then implement a quasi-single-model method, Quasi-MQAPRank. The proposed methods are benchmarked on the 3DRobot and CASP11 datasets. Five-fold cross-validation on the 3DRobot dataset shows that the proposed single-model method outperforms the other methods whose outputs are taken as its features, and that the quasi-single-model method can further enhance performance. On the CASP11 dataset, the proposed methods also perform well compared with other leading methods in the corresponding categories. In particular, the Quasi-MQAPRank method achieves considerable performance on the CASP11 Best150 dataset.
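
    A minimal sketch of the pairwise learning-to-rank idea (RankSVM-style) behind such methods: quality differences between decoy pairs become a binary classification problem, and the learned weights then score individual models. The features and quality signal are synthetic; MQAPRank's actual features and ranking formulation are not reproduced.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(6)

# Synthetic decoys: 100 models x 5 quality features; true quality (think GDT-TS)
# depends on a weighted feature combination.
X = rng.normal(size=(100, 5))
quality = X @ np.array([0.9, -0.4, 0.2, 0.0, 0.5]) + 0.1 * rng.normal(size=100)

# Pairwise transform: classify the sign of quality differences between pairs.
idx_i, idx_j = np.triu_indices(100, k=1)
X_pair = X[idx_i] - X[idx_j]
y_pair = (quality[idx_i] > quality[idx_j]).astype(int)

ranker = LinearSVC(C=1.0, max_iter=20000)
ranker.fit(X_pair, y_pair)

# Rank decoys by the learned linear scoring function w . x
scores = X @ ranker.coef_.ravel()
spearman = np.corrcoef(np.argsort(np.argsort(scores)),
                       np.argsort(np.argsort(quality)))[0, 1]
print(f"rank correlation with true quality: {spearman:.2f}")
```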

  5. [Analysis of the stability and adaptability of near infrared spectra qualitative analysis model].

    PubMed

    Cao, Wu; Li, Wei-jun; Wang, Ping; Zhang, Li-ping

    2014-06-01

    The stability and adaptability of models for near infrared spectra qualitative analysis were studied. Separate modeling can significantly improve the stability and adaptability of a model, but its ability to improve adaptability is limited. Joint modeling can improve not only the adaptability of the model but also its stability; at the same time, compared with separate modeling, it shortens the modeling time, reduces the modeling workload, extends the term of validity of the model, and improves modeling efficiency. The model adaptability experiment shows that the correct recognition rate of the separate modeling method is relatively low and cannot meet application requirements, whereas the joint modeling method reaches a correct recognition rate of 90% and significantly enhances the recognition effect. The model stability experiment shows that the identification results of the jointly built model are better than those of the separately built model, demonstrating good application value.

  6. Estimating Tree Height-Diameter Models with the Bayesian Method

    PubMed Central

    Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei

    2014-01-01

    Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over the classical methods in that the parameters to be estimated are treated as random variables. In this study, both the classical and Bayesian methods were used to estimate the six height-diameter models. Both approaches showed that the Weibull model was the “best” model for data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands of the predicted values in comparison with the classical method, and the credible bands of parameters with informative priors were also narrower than those with uninformative priors and the classical method. The estimated posterior distributions for the parameters can be set as new priors when estimating the parameters using data2. PMID:24711733

  7. Estimating tree height-diameter models with the Bayesian method.

    PubMed

    Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei

    2014-01-01

    Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over the classical methods in that the parameters to be estimated are treated as random variables. In this study, both the classical and Bayesian methods were used to estimate the six height-diameter models. Both approaches showed that the Weibull model was the "best" model for data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands of the predicted values in comparison with the classical method, and the credible bands of parameters with informative priors were also narrower than those with uninformative priors and the classical method. The estimated posterior distributions for the parameters can be set as new priors when estimating the parameters using data2.
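
    A minimal sketch of the Bayesian view on such a fit, assuming a common Weibull-type height-diameter form H = 1.3 + a(1 - exp(-b D^c)) and a hand-rolled random-walk Metropolis sampler on synthetic tree data; the paper's actual priors, candidate models, and sampler are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic tree data from a Weibull height-diameter curve plus noise.
a_t, b_t, c_t, sigma = 22.0, 0.03, 1.2, 1.0
D = rng.uniform(5, 50, 150)                      # diameters (cm)
H = 1.3 + a_t * (1 - np.exp(-b_t * D ** c_t)) + rng.normal(0, sigma, 150)

def log_post(theta):
    a, b, c = theta
    if a <= 0 or b <= 0 or c <= 0:               # vague positivity priors
        return -np.inf
    mu = 1.3 + a * (1 - np.exp(-b * D ** c))
    return -0.5 * np.sum((H - mu) ** 2) / sigma ** 2

# Random-walk Metropolis: the parameters are treated as random variables.
theta, lp = np.array([15.0, 0.05, 1.0]), -np.inf
samples = []
for it in range(20000):
    prop = theta + rng.normal(0, [0.5, 0.002, 0.02])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if it > 5000:                                # discard burn-in
        samples.append(theta.copy())

samples = np.array(samples)
print("posterior means:", samples.mean(axis=0).round(3))
print("95% credible interval for a:",
      np.percentile(samples[:, 0], [2.5, 97.5]).round(2))
```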

  8. Comparisons of Four Methods for Estimating a Dynamic Factor Model

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Hamaker, Ellen L.; Nesselroade, John R.

    2008-01-01

    Four methods for estimating a dynamic factor model, the direct autoregressive factor score (DAFS) model, are evaluated and compared. The first method estimates the DAFS model using a Kalman filter algorithm based on its state space model representation. The second one employs the maximum likelihood estimation method based on the construction of a…

  9. Comparing model-based adaptive LMS filters and a model-free hysteresis loop analysis method for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Zhou, Cong; Chase, J. Geoffrey; Rodgers, Geoffrey W.; Xu, Chao

    2017-02-01

    The model-free hysteresis loop analysis (HLA) method for structural health monitoring (SHM) has significant advantages over traditional model-based SHM methods, which require a suitable baseline model to represent the actual system response. This paper provides a unique validation against both an experimental reinforced concrete (RC) building and a calibrated numerical model to delineate the capability of the model-free HLA method and the adaptive least mean squares (LMS) model-based method in detecting, localizing and quantifying damage that may not be visible or observable in the overall structural response. Results clearly show the model-free HLA method is capable of adapting to changes in how structures transfer load or demand across structural elements over time and over multiple events of different size. However, the adaptive LMS model-based method presented an image of a greater spread of lesser damage over time and story when the baseline model was not well defined. Finally, the two algorithms are tested on a steel structure with simpler, typical hysteretic behaviour to quantify the impact of mismatch between the baseline model used for identification and the actual response. The overall results highlight the need for model-based methods to have an appropriate model that can capture the observed response in order to yield accurate results, even in small events where the structure remains linear.
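
    The least-mean-squares update at the core of adaptive LMS identification, shown as a minimal FIR system-identification sketch on synthetic data; the filter length, adaptation gain, and noise level are invented, and the paper's structural identification setup is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(8)

# Identify an unknown FIR system with the LMS update: w <- w + mu * e * x,
# where e is the prediction error at each step.
true_w = np.array([0.6, -0.3, 0.1])
x = rng.normal(size=2000)                         # excitation signal
d = np.convolve(x, true_w)[: len(x)] + 0.01 * rng.normal(size=len(x))

w = np.zeros(3)
mu = 0.05                                         # adaptation gain
for n in range(2, len(x)):
    x_vec = x[n - 2: n + 1][::-1]                 # [x[n], x[n-1], x[n-2]]
    e = d[n] - w @ x_vec                          # prediction error
    w += mu * e * x_vec                           # LMS weight update

print("true w:", true_w)
print("LMS  w:", w.round(3))
```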

  10. Parameter Estimation of Partial Differential Equation Models.

    PubMed

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab

    2013-01-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solution and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving the PDE numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled by the PDE is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for the data and the PDE, and a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques for posterior inference. Simulation studies show that the Bayesian method and the parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.
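
    A minimal sketch of the conventional nested approach the article seeks to avoid: estimating a diffusion coefficient by re-solving the heat equation inside every residual evaluation of a least-squares fit. The grid, noise level, and true parameter are invented, and neither parameter cascading nor the Bayesian hierarchy is implemented here.

```python
import numpy as np
from scipy.optimize import least_squares

# Forward model: explicit finite-difference solution of u_t = kappa * u_xx
# on [0, 1] with zero boundary values.
nx = 51
dx = 1.0 / (nx - 1)
dt = 0.4 * dx ** 2        # keeps kappa * dt / dx^2 <= 0.5 for kappa <= 1.25
nt = 400
x = np.linspace(0, 1, nx)

def solve_heat(kappa):
    u = np.sin(np.pi * x)  # initial condition
    for _ in range(nt):
        u[1:-1] += kappa * dt / dx ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

# Synthetic noisy measurements generated with a "true" kappa.
rng = np.random.default_rng(9)
kappa_true = 0.7
data = solve_heat(kappa_true) + 0.002 * rng.normal(size=nx)

# Each residual evaluation re-solves the PDE: this nested structure is what
# makes classical PDE parameter estimation computationally heavy.
result = least_squares(lambda k: solve_heat(k[0]) - data,
                       x0=[0.3], bounds=(0.01, 1.2))
print(f"estimated kappa = {result.x[0]:.3f} (true {kappa_true})")
```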

  11. Model-based sensor-less wavefront aberration correction in optical coherence tomography.

    PubMed

    Verstraete, Hans R G W; Wahls, Sander; Kalkman, Jeroen; Verhaegen, Michel

    2015-12-15

    Several sensor-less wavefront aberration correction methods that correct nonlinear wavefront aberrations by maximizing the optical coherence tomography (OCT) signal are tested on an OCT setup. A conventional coordinate search method is compared to two model-based optimization methods. The first model-based method takes advantage of the well-known NEWUOA optimization algorithm and utilizes a quadratic model. The second model-based method (DONE) is new and utilizes a random multidimensional Fourier-basis expansion. The model-based algorithms achieve lower wavefront errors with up to ten times fewer measurements. Furthermore, the newly proposed DONE method significantly outperforms the NEWUOA method. The DONE algorithm is tested on OCT images and shows a significantly improved image quality.

  12. A longitudinal multilevel CFA-MTMM model for interchangeable and structurally different methods

    PubMed Central

    Koch, Tobias; Schultze, Martin; Eid, Michael; Geiser, Christian

    2014-01-01

    One of the key interests in the social sciences is the investigation of change and stability of a given attribute. Although numerous models have been proposed in the past for analyzing longitudinal data including multilevel and/or latent variable modeling approaches, only few modeling approaches have been developed for studying the construct validity in longitudinal multitrait-multimethod (MTMM) measurement designs. The aim of the present study was to extend the spectrum of current longitudinal modeling approaches for MTMM analysis. Specifically, a new longitudinal multilevel CFA-MTMM model for measurement designs with structurally different and interchangeable methods (called Latent-State-Combination-Of-Methods model, LS-COM) is presented. Interchangeable methods are methods that are randomly sampled from a set of equivalent methods (e.g., multiple student ratings for teaching quality), whereas structurally different methods are methods that cannot be easily replaced by one another (e.g., teacher, self-ratings, principle ratings). Results of a simulation study indicate that the parameters and standard errors in the LS-COM model are well recovered even in conditions with only five observations per estimated model parameter. The advantages and limitations of the LS-COM model relative to other longitudinal MTMM modeling approaches are discussed. PMID:24860515

  13. Testing for measurement invariance and latent mean differences across methods: interesting incremental information from multitrait-multimethod studies

    PubMed Central

    Geiser, Christian; Burns, G. Leonard; Servera, Mateu

    2014-01-01

    Models of confirmatory factor analysis (CFA) are frequently applied to examine the convergent validity of scores obtained from multiple raters or methods in so-called multitrait-multimethod (MTMM) investigations. We show that interesting incremental information about method effects can be gained from including mean structures and tests of measurement invariance (MI) across methods in MTMM models. We present a modeling framework for testing MI in the first step of a CFA-MTMM analysis. We also discuss the relevance of MI in the context of four more complex CFA-MTMM models with method factors. We focus on three recently developed multiple-indicator CFA-MTMM models for structurally different methods [the correlated traits-correlated (methods – 1), latent difference, and latent means models; Geiser et al., 2014a; Pohl and Steyer, 2010; Pohl et al., 2008] and one model for interchangeable methods (Eid et al., 2008). We demonstrate that some of these models require or imply MI by definition for a proper interpretation of trait or method factors, whereas others do not, and explain why MI may or may not be required in each model. We show that in the model for interchangeable methods, testing for MI is critical for determining whether methods can truly be seen as interchangeable. We illustrate the theoretical issues in an empirical application to an MTMM study of attention deficit and hyperactivity disorder (ADHD) with mother, father, and teacher ratings as methods. PMID:25400603

  14. Can the super model (SUMO) method improve hydrological simulations? Exploratory tests with the GR hydrological models

    NASA Astrophysics Data System (ADS)

    Santos, Léonard; Thirel, Guillaume; Perrin, Charles

    2017-04-01

    Errors made by hydrological models may come from problems in parameter estimation, uncertainty in observed measurements, numerical problems, and from the model conceptualization that simplifies reality. Here we focus on this last issue of hydrological modeling. One solution for reducing structural uncertainty is to use a multimodel method, taking advantage of the great number and variability of existing hydrological models. In particular, because different models are not equally good in all situations, multimodel approaches can improve the robustness of modeled outputs. Traditionally, multimodel methods in hydrology are based on model outputs (the simulated flow series). The aim of this poster is to introduce a different approach based on the internal variables of the models. The method is inspired by the SUper MOdel (SUMO, van den Berge et al., 2011) developed for climatology. The idea of the SUMO method is to correct the internal variables of a model taking into account the values of the internal variables of (an)other model(s). This correction is made bilaterally between the different models. The ensemble of the different models constitutes a super model in which all the models exchange information on their internal variables with each other at each time step. Owing to this continuity in the exchanges, this multimodel algorithm is more dynamic than traditional multimodel methods. The method will first be tested using two GR4J models (in a state-space representation) with different parameterizations. The results will be presented and compared to traditional multimodel methods that will serve as benchmarks. In the future, other rainfall-runoff models will be used in the super model. References: van den Berge, L. A., Selten, F. M., Wiegerinck, W., and Duane, G. S. (2011). A multi-model ensemble method that combines imperfect models through learning. Earth System Dynamics, 2(1):161-177.
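
    A minimal sketch of the SUMO coupling idea, using two linear-reservoir toy models whose internal states are nudged toward each other at every time step; the reservoir equations, coupling strength, and forcing are invented stand-ins for the GR4J state exchange described above.

```python
import numpy as np

rng = np.random.default_rng(10)

# "Truth": a linear reservoir S' = P - k*S with k_true; two imperfect models
# use wrong k values. The super-model couples their internal states each step.
k_true, k1, k2 = 0.30, 0.20, 0.45
coupling = 0.15                                   # inter-model exchange strength
P = np.maximum(rng.normal(2.0, 1.5, 300), 0.0)    # rainfall forcing

S_true, S1, S2 = 10.0, 10.0, 10.0
q_super, q_true = [], []
for p in P:
    S_true += p - k_true * S_true
    # each model is nudged toward the other model's internal state (SUMO idea)
    S1_new = S1 + p - k1 * S1 + coupling * (S2 - S1)
    S2_new = S2 + p - k2 * S2 + coupling * (S1 - S2)
    S1, S2 = S1_new, S2_new
    q_super.append(0.5 * (k1 * S1 + k2 * S2))     # ensemble discharge
    q_true.append(k_true * S_true)

rmse = np.sqrt(np.mean((np.array(q_super) - np.array(q_true)) ** 2))
print(f"super-model discharge RMSE: {rmse:.3f}")
```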

  15. Designing and evaluating the MULTICOM protein local and global model quality prediction methods in the CASP10 experiment

    PubMed Central

    2014-01-01

    Background Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. Results MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Conclusions Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy. PMID:24731387

  16. Designing and evaluating the MULTICOM protein local and global model quality prediction methods in the CASP10 experiment.

    PubMed

    Cao, Renzhi; Wang, Zheng; Cheng, Jianlin

    2014-04-15

    Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy.
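
    A minimal sketch of the clustering-style scoring used by such methods: global quality as the average pairwise similarity to the rest of the pool, plus a weighted variant in the spirit of the weighted pairwise comparison approach. The similarity matrix here is a random placeholder standing in for structural scores such as GDT-TS or TM-score.

```python
import numpy as np

rng = np.random.default_rng(11)

# Placeholder pairwise similarity matrix between 30 models in a pool.
n = 30
sim = rng.uniform(0.2, 0.9, (n, n))
sim = (sim + sim.T) / 2
np.fill_diagonal(sim, 1.0)

# Clustering-style global quality: mean similarity to all other models.
global_q = (sim.sum(axis=1) - 1.0) / (n - 1)

# Weighted variant: weight each comparison by a single-model QA score.
single_scores = rng.uniform(size=n)
w = single_scores / single_scores.sum()
weighted_q = (sim @ w - w) / (1.0 - w)   # exclude each model's self-similarity

print("top model (plain):   ", int(np.argmax(global_q)))
print("top model (weighted):", int(np.argmax(weighted_q)))
```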

  17. A Comparison of Two Balance Calibration Model Building Methods

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard; Ulbrich, Norbert

    2007-01-01

    Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.

  18. Selecting a dynamic simulation modeling method for health care delivery research-part 2: report of the ISPOR Dynamic Simulation Modeling Emerging Good Practices Task Force.

    PubMed

    Marshall, Deborah A; Burgos-Liz, Lina; IJzerman, Maarten J; Crown, William; Padula, William V; Wong, Peter K; Pasupathy, Kalyan S; Higashi, Mitchell K; Osgood, Nathaniel D

    2015-03-01

    In a previous report, the ISPOR Task Force on Dynamic Simulation Modeling Applications in Health Care Delivery Research Emerging Good Practices introduced the fundamentals of dynamic simulation modeling and identified the types of health care delivery problems for which dynamic simulation modeling can be used more effectively than other modeling methods. The hierarchical relationship between the health care delivery system, providers, patients, and other stakeholders exhibits a level of complexity that ought to be captured using dynamic simulation modeling methods. As a tool to help researchers decide whether dynamic simulation modeling is an appropriate method for modeling the effects of an intervention on a health care system, we presented the System, Interactions, Multilevel, Understanding, Loops, Agents, Time, Emergence (SIMULATE) checklist consisting of eight elements. This report builds on the previous work, systematically comparing each of the three most commonly used dynamic simulation modeling methods: system dynamics, discrete-event simulation, and agent-based modeling. We review criteria for selecting the most suitable method depending on 1) the purpose (the type of problem and research questions being investigated), 2) the object (the scope of the model), and 3) the method used to model the object to achieve the purpose. Finally, we provide guidance for emerging good practices for dynamic simulation modeling in the health sector, covering all aspects, from the engagement of decision makers in the model design through model maintenance and upkeep. We conclude by providing some recommendations about the application of these methods to add value to informed decision making, with an emphasis on stakeholder engagement, starting with the problem definition. Finally, we identify areas in which further methodological development will likely occur given the growing "volume, velocity and variety" and availability of "big data" to provide empirical evidence and techniques such as machine learning for parameter estimation in dynamic simulation models. Upon reviewing this report in addition to using the SIMULATE checklist, readers should be able to identify whether dynamic simulation modeling methods are appropriate to address the problem at hand and to recognize the differences of these methods from those of other, more traditional modeling approaches such as Markov models and decision trees. This report provides an overview of these modeling methods and examples of health care system problems in which such methods have been useful. The primary aim of the report was to aid decisions as to whether these simulation methods are appropriate to address specific health systems problems. The report directs readers to other resources for further education on these individual modeling methods for system interventions in the emerging field of health care delivery science and implementation. Copyright © 2015. Published by Elsevier Inc.
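
    To make the contrast with time-stepped approaches concrete, below is a minimal discrete-event simulation of a single-server (M/M/1) clinic queue using a priority queue of timestamped events. The rates and horizon are arbitrary, and this sketch is not drawn from the task force report itself.

```python
import heapq
import random

random.seed(12)

# Minimal discrete-event simulation: events are (time, kind) tuples popped
# from a priority queue; the clock jumps from event to event.
ARRIVAL_RATE, SERVICE_RATE, HORIZON = 0.8, 1.0, 10000.0

events = [(random.expovariate(ARRIVAL_RATE), "arrival")]
queue_len, busy, t_last, t = 0, False, 0.0, 0.0
area = 0.0                                    # time-integral of queue length
while events and t < HORIZON:
    t, kind = heapq.heappop(events)
    area += queue_len * (t - t_last)
    t_last = t
    if kind == "arrival":
        heapq.heappush(events, (t + random.expovariate(ARRIVAL_RATE), "arrival"))
        if busy:
            queue_len += 1
        else:
            busy = True
            heapq.heappush(events, (t + random.expovariate(SERVICE_RATE), "departure"))
    else:  # departure
        if queue_len > 0:
            queue_len -= 1
            heapq.heappush(events, (t + random.expovariate(SERVICE_RATE), "departure"))
        else:
            busy = False

print(f"mean waiting-room occupancy: {area / t_last:.2f}")  # M/M/1 theory: ~3.2
```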

  19. Chemometrics-assisted spectrophotometry method for the determination of chemical oxygen demand in pulping effluent.

    PubMed

    Chen, Honglei; Chen, Yuancai; Zhan, Huaiyu; Fu, Shiyu

    2011-04-01

    A new method has been developed for the determination of chemical oxygen demand (COD) in pulping effluent using chemometrics-assisted spectrophotometry. Two calibration models were established using UV-visible spectroscopy (model 1) and derivative spectroscopy (model 2), combined with the chemometrics software Simca-P. The correlation coefficients of the two models are 0.9954 (model 1) and 0.9963 (model 2) when the COD of samples is in the range of 0 to 405 mg/L. The sensitivities of the two models are 0.0061 (model 1) and 0.0056 (model 2), and the method detection limits are 2.02-2.45 mg/L (model 1) and 2.13-2.51 mg/L (model 2). A validation experiment showed that the average standard deviation of model 2 was 1.11 and that of model 1 was 1.54. Similarly, the average relative error of model 2 (4.25%) was lower than that of model 1 (5.00%), indicating that the predictability of model 2 was better. The chemometrics-assisted spectrophotometry method needs neither the chemical reagents nor the digestion required by conventional methods, and the testing time of the new method is significantly shorter than that of the conventional ones. The proposed method can be used to measure COD in pulping effluent as an environmentally friendly approach with satisfactory results.

  20. Massive integration of diverse protein quality assessment methods to improve template based modeling in CASP11

    PubMed Central

    Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin

    2015-01-01

    Model evaluation and selection is an important step and a big challenge in template-based protein structure prediction. Individual model quality assessment methods designed to recognize some specific properties of protein structures often fail to consistently select good models from a model pool because of their limitations. Therefore, combining multiple complementary quality assessment methods is useful for improving model ranking and, consequently, tertiary structure prediction. Here, we report the performance and analysis of our human tertiary structure predictor (MULTICOM) based on the massive integration of 14 diverse complementary quality assessment methods that was successfully benchmarked in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11). The predictions of MULTICOM for 39 template-based domains were rigorously assessed by six scoring metrics covering global topology of the Cα trace, local all-atom fitness, side-chain quality, and physical reasonableness of the model. The results show that the massive integration of complementary, diverse single-model and multi-model quality assessment methods can effectively leverage the strength of single-model methods in distinguishing quality variation among similar good models and the advantage of multi-model quality assessment methods in identifying reasonable average-quality models. The overall excellent performance of the MULTICOM predictor demonstrates that integrating a large number of model quality assessment methods in conjunction with model clustering is a useful approach to improve the accuracy, diversity, and consequently robustness of template-based protein structure prediction. PMID:26369671

  1. Sensitivity analysis of infectious disease models: methods, advances and their application

    PubMed Central

    Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.

    2013-01-01

    Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insight beyond traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol’ methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and a macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, which is especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
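
    A minimal sketch of one of the surveyed techniques, Latin hypercube sampling with partial rank correlation coefficients (LHS-PRCC), applied to a toy model output; the toy model, parameter bounds, and sample size are invented and do not come from the paper.

```python
import numpy as np
from scipy.stats import qmc, rankdata

rng = np.random.default_rng(13)

# Toy "model": an epidemic-like output that rises with transmission beta,
# falls with recovery gamma, and ignores theta. PRCC should recover this.
def model(beta, gamma, theta):
    return beta / (beta + gamma) + 0.0 * theta

# Latin hypercube sample of the 3 parameters.
n = 500
sampler = qmc.LatinHypercube(d=3, seed=13)
U = sampler.random(n)
params = qmc.scale(U, [0.1, 0.05, 0.0], [1.0, 0.5, 1.0])  # lower, upper bounds
Y = model(params[:, 0], params[:, 1], params[:, 2])
Y += 0.01 * rng.normal(size=n)                            # observation noise

def prcc(X, y, j):
    """Partial rank correlation of parameter j with output y."""
    R = np.column_stack([rankdata(X[:, i]) for i in range(X.shape[1])])
    ry = rankdata(y)
    A = np.column_stack([np.ones(len(y)), np.delete(R, j, axis=1)])
    res_x = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
    res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
    return np.corrcoef(res_x, res_y)[0, 1]

for j, name in enumerate(["beta", "gamma", "theta"]):
    print(f"PRCC({name}) = {prcc(params, Y, j):+.2f}")
```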

  2. Comparison of the Various Methodologies Used in Studying Runoff and Sediment Load in the Yellow River Basin

    NASA Astrophysics Data System (ADS)

    Xu, M., III; Liu, X.

    2017-12-01

    In the past 60 years, both the runoff and the sediment load in the Yellow River Basin have shown significant decreasing trends owing to the influences of human activities and climate change. Quantifying the impact of each factor (e.g. precipitation, sediment trapping dams, pasture, terraces, etc.) on the runoff and sediment load is among the key issues for guiding the implementation of water and soil conservation measures and for predicting future trends. Hundreds of methods have been developed for studying the runoff and sediment load in the Yellow River Basin. Generally, these methods can be classified into empirical methods and physically based models. The empirical methods, including the hydrological method, the soil and water conservation method, etc., are widely used in Yellow River management engineering. These methods generally apply statistical analyses such as regression to build empirical relationships between the main characteristic variables in a river basin. The elasticity method used extensively in hydrological research can also be classified as an empirical method, as it is mathematically equivalent to the hydrological method. Physically based models mainly include conceptual models and distributed models. The conceptual models are usually lumped models (e.g. the SYMHD model) and can be regarded as a transition between empirical models and distributed models. The literature shows that fewer studies have applied distributed models than empirical models, as the runoff and sediment load simulations based on distributed models (e.g. the Digital Yellow Integrated Model, the Geomorphology-Based Hydrological Model, etc.) were usually unsatisfactory owing to the intensive human activities in the Yellow River Basin. Therefore, this study primarily summarizes the empirical models applied in the Yellow River Basin and theoretically analyzes the main causes of the significantly different results obtained with different empirical research methods. In addition, we put forward an assessment framework for the research methods on runoff and sediment load variations in the Yellow River Basin from the point of view of input data, model structure and output results. The assessment framework was then applied to the Huangfuchuan River.

  3. Surrogate Based Uni/Multi-Objective Optimization and Distribution Estimation Methods

    NASA Astrophysics Data System (ADS)

    Gong, W.; Duan, Q.; Huo, X.

    2017-12-01

    Parameter calibration has been demonstrated to be an effective way to improve the performance of dynamic models, such as hydrological models, land surface models, and weather and climate models. Traditional optimization algorithms usually require a huge number of model evaluations, making dynamic model calibration very difficult, or even computationally prohibitive. With the help of a series of recently developed adaptive surrogate-modelling based optimization methods (the uni-objective optimization method ASMO, the multi-objective optimization method MO-ASMO, and the probability distribution estimation method ASMO-PODE), the number of model evaluations can be significantly reduced to several hundred, making it possible to calibrate very expensive dynamic models, such as regional high-resolution land surface models, weather forecast models such as WRF, and intermediate-complexity earth system models such as LOVECLIM. This presentation provides a brief introduction to the common framework of the adaptive surrogate-based optimization algorithms ASMO, MO-ASMO and ASMO-PODE, a case study of Common Land Model (CoLM) calibration in the Heihe river basin in Northwest China, and an outlook on potential applications of surrogate-based optimization methods.
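
    A generic sketch of the adaptive surrogate-based optimization loop (not the actual ASMO code): fit a cheap surrogate to all evaluated points, minimize the surrogate to propose the next point, evaluate the expensive model there once, and refit. The RBF surrogate, random candidate scheme, and test function are placeholder choices.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(14)

# Expensive "model" to calibrate (a cheap stand-in for a land-surface model run).
def expensive_model(x):
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10 * x).sum()

dim, n_init, n_iter = 2, 10, 30
X = rng.uniform(size=(n_init, dim))               # initial design
y = np.array([expensive_model(x) for x in X])

for _ in range(n_iter):
    surrogate = RBFInterpolator(X, y)             # refit surrogate to all data
    cand = rng.uniform(size=(2000, dim))          # dense random candidates
    x_new = cand[np.argmin(surrogate(cand))]      # minimize the surrogate
    X = np.vstack([X, x_new])                     # one true model evaluation
    y = np.append(y, expensive_model(x_new))

best = X[np.argmin(y)]
print(f"best point after {n_init + n_iter} true evaluations: {best.round(3)}, "
      f"f = {y.min():.4f}")
```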

  4. Inverse modeling methods for indoor airborne pollutant tracking: literature review and fundamentals.

    PubMed

    Liu, X; Zhai, Z

    2007-12-01

    Reduction in indoor environment quality calls for effective control and improvement measures. Accurate and prompt identification of contaminant sources ensures that they can be quickly removed and that contaminated spaces can be isolated and cleaned. This paper discusses the use of inverse modeling to identify potential indoor pollutant sources with limited pollutant sensor data. The study reviews various inverse modeling methods for advection-dispersion problems and groups them into three major categories: forward, backward, and probability inverse modeling methods. The adjoint probability inverse modeling method is identified as an appropriate approach for indoor air pollutant tracking because it can quickly find the source location, strength and release time without prior information. The paper introduces the principles of the adjoint probability method and establishes the corresponding adjoint equations for both multi-zone airflow models and computational fluid dynamics (CFD) models. The study proposes a two-stage inverse modeling approach integrating both multi-zone and CFD models, which can provide a rapid estimate of indoor pollution status and history for a whole building. Preliminary case study results indicate that the adjoint probability method is feasible for indoor pollutant inverse modeling. The proposed method can help identify contaminant source characteristics (location and release time) with limited sensor outputs. This will ensure effective and prompt execution of building management strategies and thus achieve a healthy and safe indoor environment. The method can also help design optimal sensor networks.
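
    The sketch below poses the same inverse question on a toy problem, though with a brute-force grid posterior rather than the paper's adjoint formulation: given sparse sensor readings of a 1D diffusing release, infer the source location and release time. The diffusivity, sensor layout, and noise level are invented.

```python
import numpy as np

D, SIGMA = 0.05, 0.02  # diffusivity; sensor noise standard deviation

def conc(x, t, x0, t0, M=1.0):
    """Concentration at (x, t) from an instantaneous point release of mass M
    at (x0, t0) in an unbounded 1D domain."""
    tau = np.maximum(t - t0, 1e-9)
    c = M / np.sqrt(4 * np.pi * D * tau) * np.exp(-(x - x0) ** 2 / (4 * D * tau))
    return np.where(t > t0, c, 0.0)

# Sparse sensor data from a hidden source at x0 = 0.4, t0 = 1.0.
rng = np.random.default_rng(15)
xs = np.array([0.2, 0.5, 0.8])   # sensor positions
ts = np.array([2.0, 3.0, 4.0])   # sampling times
X, T = np.meshgrid(xs, ts)
obs = conc(X, T, 0.4, 1.0) + rng.normal(0, SIGMA, X.shape)

# Grid posterior over source location and release time (flat prior).
x0_grid = np.linspace(0.0, 1.0, 101)
t0_grid = np.linspace(0.0, 1.9, 96)
logL = np.zeros((len(x0_grid), len(t0_grid)))
for i, x0 in enumerate(x0_grid):
    for j, t0 in enumerate(t0_grid):
        r = obs - conc(X, T, x0, t0)
        logL[i, j] = -0.5 * np.sum(r ** 2) / SIGMA ** 2

post = np.exp(logL - logL.max())
post /= post.sum()
i, j = np.unravel_index(np.argmax(post), post.shape)
print(f"MAP source estimate: x0 = {x0_grid[i]:.2f}, t0 = {t0_grid[j]:.2f}")
```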

  5. A catalog of automated analysis methods for enterprise models.

    PubMed

    Florez, Hector; Sánchez, Mario; Villalobos, Jorge

    2016-01-01

    Enterprise models are created for documenting and communicating the structure and state of the Business and Information Technology elements of an enterprise. After models are completed, they are mainly used to support analysis. Model analysis is an activity typically based on human skills, and due to the size and complexity of the models, this process can be complicated, making omissions or miscalculations very likely. This situation has fostered research into automated analysis methods for supporting analysts in enterprise analysis processes. By reviewing the literature, we found several analysis methods; nevertheless, they are based on specific situations and different metamodels, so some analysis methods might not be applicable to all enterprise models. This paper presents a compilation (literature review), classification, structuring, and characterization of automated analysis methods for enterprise models, expressing them in a standardized modeling language. In addition, we have implemented the analysis methods in our modeling tool.

  6. What can formal methods offer to digital flight control systems design

    NASA Technical Reports Server (NTRS)

    Good, Donald I.

    1990-01-01

    Formal methods research is beginning to produce methods that will enable mathematical modeling of the physical behavior of digital hardware and software systems. The development of these methods directly supports the NASA mission of increasing the scope and effectiveness of flight system modeling capabilities. The conventional, continuous mathematics that is used extensively in modeling flight systems is not adequate for accurate modeling of digital systems. Therefore, the current practice of digital flight control system design has not had the benefits of the extensive mathematical modeling that is common in other parts of flight system engineering. Formal methods research shows that, by using discrete mathematics, very accurate modeling of digital systems is possible. These discrete modeling methods will bring the traditional benefits of modeling to digital hardware and software design. Sound reasoning about accurate mathematical models of flight control systems can be an important part of reducing the risk of unsafe flight control.

  7. Prediction of global and local model quality in CASP8 using the ModFOLD server.

    PubMed

    McGuffin, Liam J

    2009-01-01

    The development of effective methods for predicting the quality of three-dimensional (3D) models is fundamentally important for the success of tertiary structure (TS) prediction strategies. Since CASP7, the Quality Assessment (QA) category has existed to gauge the ability of various model quality assessment programs (MQAPs) to predict the relative quality of individual 3D models. For the CASP8 experiment, automated predictions were submitted in the QA category using two methods from the ModFOLD server: ModFOLD version 1.1 and ModFOLDclust. ModFOLD version 1.1 is a single-model machine-learning based method, which was used for automated predictions of global model quality (QMODE1). ModFOLDclust is a simple clustering-based method, which was used for automated predictions of both global and local quality (QMODE2). In addition, manual predictions of model quality were made using ModFOLD version 2.0, an experimental method that combines the scores from ModFOLDclust and ModFOLD v1.1. Predictions from the ModFOLDclust method were the most successful of the three in terms of global model quality, whilst the ModFOLD v1.1 method was comparable in performance to other single-model based methods. In addition, the ModFOLDclust method performed well at predicting the per-residue, or local, model quality scores. Predictions of the per-residue errors in our own 3D models, selected using the ModFOLD v2.0 method, were also the most accurate compared with those from other methods. All of the MQAPs described are publicly accessible via the ModFOLD server at: http://www.reading.ac.uk/bioinf/ModFOLD/. The methods are also freely available to download from: http://www.reading.ac.uk/bioinf/downloads/. Copyright 2009 Wiley-Liss, Inc.

  8. Three novel approaches to structural identifiability analysis in mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2016-05-06

    Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the system's transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have been presented in this paper. As the development of structural identifiability techniques for mixed-effects models has received very little attention, despite mixed-effects models being widely used, the methods presented in this paper provide a way of handling structural identifiability in mixed-effects models that was previously not possible. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  9. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Tolson, Bryan

    2017-04-01

    The increasing complexity and runtime of environmental models lead to the current situation in which calibrating all model parameters, or estimating all of their uncertainty, is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters or model processes. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While examining the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If it is checked at all, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, can itself become computationally expensive for large model outputs and a high number of bootstrap samples. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards; the latter case enables the checking of already processed sensitivity indexes. To demonstrate the method independency of the convergence testing method, we applied it to three widely used, global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991, Campolongo et al., 2000), the variance-based Sobol' method (Sobol' 1993, Saltelli et al. 2010) and a derivative-based method known as the Parameter Importance index (Goehler et al. 2013). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) for which the true indexes of the aforementioned three methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. Subsequently, we focus on the model independency by testing the frugal method using the hydrologic model mHM (www.ufz.de/mhm) with about 50 model parameters. The results show that the new frugal method is able to test the convergence, and therefore the reliability, of SA results in an efficient way. The appealing feature of this new technique is that it requires no further model evaluations and therefore enables the checking of already processed (and published) sensitivity results. This is one step towards reliable and transferable published sensitivity results.
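
    As a point of reference for the bootstrapping baseline that MVA is designed to avoid, the sketch below estimates first-order Sobol' indices for the Ishigami test function and bootstraps their confidence intervals by resampling the already-computed model runs; the test function, estimator, and all settings are our own illustrative choices, not the paper's.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def ishigami(x, a=7.0, b=0.1):
        return (np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2
                + b * x[:, 2]**4 * np.sin(x[:, 0]))

    n, d = 10_000, 3
    A = rng.uniform(-np.pi, np.pi, (n, d))
    B = rng.uniform(-np.pi, np.pi, (n, d))
    fA, fB = ishigami(A), ishigami(B)
    fAB = []
    for i in range(d):                      # radial samples: A with column i from B
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        fAB.append(ishigami(ABi))

    def s1(idx):                            # Saltelli-style first-order estimator
        var = np.var(np.r_[fA[idx], fB[idx]])
        return [np.mean(fB[idx] * (fAB[i][idx] - fA[idx])) / var for i in range(d)]

    est = np.array(s1(np.arange(n)))
    # bootstrap reuses existing runs, but the resampling itself has a cost
    boot = np.array([s1(rng.integers(0, n, n)) for _ in range(200)])
    ci = np.percentile(boot, [2.5, 97.5], axis=0)
    print("S1:", est.round(3))              # analytic values ~ [0.314, 0.442, 0.0]
    print("95% CI width:", (ci[1] - ci[0]).round(3))
    ```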

  10. Self-calibrating models for dynamic monitoring and diagnosis

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin

    1994-01-01

    The present goal in qualitative reasoning is to develop methods for automatically building qualitative and semiquantitative models of dynamic systems and to use them for monitoring and fault diagnosis. The qualitative approach to modeling provides a guarantee of coverage while our semiquantitative methods support convergence toward a numerical model as observations are accumulated. We have developed and applied methods for automatic creation of qualitative models, developed two methods for obtaining tractable results on problems that were previously intractable for qualitative simulation, and developed more powerful methods for learning semiquantitative models from observations and deriving semiquantitative predictions from them. With these advances, qualitative reasoning comes significantly closer to realizing its aims as a practical engineering method.

  11. Probability of Detection (POD) as a statistical model for the validation of qualitative methods.

    PubMed

    Wehling, Paul; LaBudde, Robert A; Brunelle, Sharon L; Nelson, Maria T

    2011-01-01

    A statistical model is presented for use in validation of qualitative methods. This model, termed Probability of Detection (POD), harmonizes the statistical concepts and parameters between quantitative and qualitative method validation. POD characterizes method response with respect to concentration as a continuous variable. The POD model provides a tool for graphical representation of response curves for qualitative methods. In addition, the model allows comparisons between candidate and reference methods, and provides calculations of repeatability, reproducibility, and laboratory effects from collaborative study data. Single laboratory study and collaborative study examples are given.
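
    Since the abstract describes POD as a continuous response-versus-concentration curve, a natural minimal illustration is a logistic fit to detection/non-detection data. The data below are hypothetical and the logistic link is one common choice; the paper's exact POD formulation may differ.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    conc = np.array([0.5, 1, 2, 4, 8, 16, 32], dtype=float)   # CFU/g, hypothetical
    trials = np.full(7, 20)                                    # 20 test portions each
    detects = np.array([1, 3, 7, 12, 17, 19, 20])              # positives observed

    # expand to one row per test portion, on a log-concentration scale
    X = np.log10(np.repeat(conc, trials)).reshape(-1, 1)
    y = np.concatenate([np.r_[np.ones(k), np.zeros(t - k)]
                        for k, t in zip(detects, trials)])

    pod = LogisticRegression().fit(X, y)
    grid = np.log10(np.linspace(0.5, 32, 100)).reshape(-1, 1)
    curve = pod.predict_proba(grid)[:, 1]       # POD(c) = P(detection | concentration)
    lod50 = 10 ** (-pod.intercept_[0] / pod.coef_[0, 0])  # concentration with POD = 0.5
    print(f"estimated LOD50 ~ {lod50:.2f}")
    ```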

  12. Modeling method of time sequence model based grey system theory and application proceedings

    NASA Astrophysics Data System (ADS)

    Wei, Xuexia; Luo, Yaling; Zhang, Shiqiang

    2015-12-01

    This article presents a modeling method for the grey-system GM(1,1) model based on information reuse and grey system theory. The method not only substantially improves the fitting and prediction accuracy of the GM(1,1) model, but also retains the computational simplicity of the conventional approach. On this basis, we present a syphilis trend forecasting method built on information reuse and the grey-system GM(1,1) model.
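
    For readers unfamiliar with GM(1,1), the sketch below implements the standard grey-model construction (accumulated series, background values, least-squares estimation of the development and control coefficients). The paper's information-reuse refinement is not reproduced, and the case counts are hypothetical.

    ```python
    import numpy as np

    def gm11(x0, horizon=3):
        """Standard GM(1,1): fit the series x0 and forecast `horizon` steps."""
        x0 = np.asarray(x0, float)
        x1 = np.cumsum(x0)                                # accumulated series
        z1 = 0.5 * (x1[1:] + x1[:-1])                     # background values
        B = np.column_stack([-z1, np.ones_like(z1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # development/control coeffs
        k = np.arange(len(x0) + horizon)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a # whitened-equation solution
        return np.r_[x0[0], np.diff(x1_hat)]              # restore to original scale

    cases = [12, 14, 15, 18, 20, 23]                       # hypothetical yearly counts
    print(gm11(cases))                                     # fitted values + 3-step forecast
    ```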

  13. A comparison of viscoelastic damping models

    NASA Technical Reports Server (NTRS)

    Slater, Joseph C.; Belvin, W. Keith; Inman, Daniel J.

    1993-01-01

    Modern finite element methods (FEMs) enable the precise modeling of mass and stiffness properties in what were in the past overwhelmingly large and complex structures. These models allow the accurate determination of natural frequencies and mode shapes. However, adequate methods for modeling highly damped and highly frequency-dependent structures did not exist until recently. The most commonly used method, Modal Strain Energy, does not correctly predict complex mode shapes since it is based on the assumption that the mode shapes of a structure are real. Recently, many techniques have been developed which allow the modeling of frequency-dependent damping properties of materials in a finite element compatible form. Two of these methods, the Golla-Hughes-McTavish method and the Lesieutre-Mingori method, model the frequency-dependent effects by adding coordinates to the existing system, thus maintaining the linearity of the model. The third model, proposed by Bagley and Torvik, is based on the Fractional Calculus method and requires fewer empirical parameters to model the frequency dependence, at the expense of the linearity of the governing equations. This work examines the Modal Strain Energy, Golla-Hughes-McTavish, and Bagley and Torvik models and compares them to determine the plausibility of using them for modeling viscoelastic damping in large structures.

  14. A Model-Driven Development Method for Management Information Systems

    NASA Astrophysics Data System (ADS)

    Mizuno, Tomoki; Matsumoto, Keinosuke; Mori, Naoki

    Traditionally, a Management Information System (MIS) has been developed without using formal methods. With such informal methods, the MIS is developed over its lifecycle without any models, which causes many problems, such as unreliable system design specifications. In order to overcome these problems, a model theory approach was proposed, based on the idea that a system can be modeled by automata and set theory. However, it is very difficult to generate automata of the system to be developed right from the start. On the other hand, there is a model-driven development method that can flexibly accommodate changes in business logic or implementation technologies. In model-driven development, a system is modeled using a modeling language such as UML. This paper proposes a new development method for management information systems that applies the model-driven development method to a component of the model theory approach. The experiment showed a reduction of more than 30% of the total development effort.

  15. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...

    2016-02-05

    Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood, or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, thermodynamic integration, which has not previously been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. In conclusion, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
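
    The path-sampling construction lends itself to a compact demonstration: ln Z = ∫ from 0 to 1 of E_beta[ln L] d(beta), where the expectation is taken under the power posterior p_beta proportional to L^beta times the prior. The sketch below runs a Metropolis-Hastings chain at each beta for a toy conjugate-normal model whose marginal likelihood is known analytically; all tuning choices are our own illustrative assumptions, not the paper's.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    y = rng.normal(1.0, 1.0, size=20)   # data with known unit variance
    tau = 2.0                           # prior: theta ~ N(0, tau^2)

    def loglik(th):
        return -0.5 * np.sum((y - th)**2) - 0.5 * y.size * np.log(2 * np.pi)

    def logprior(th):
        return -0.5 * th**2 / tau**2 - 0.5 * np.log(2 * np.pi * tau**2)

    def mean_loglik(beta, n=20_000):
        th, lls = 0.0, []
        for _ in range(n):              # Metropolis-Hastings at this power coefficient
            prop = th + rng.normal(0, 0.5)
            logr = beta * (loglik(prop) - loglik(th)) + logprior(prop) - logprior(th)
            if np.log(rng.uniform()) < logr:
                th = prop
            lls.append(loglik(th))
        return np.mean(lls[n // 5:])    # drop burn-in

    betas = np.linspace(0, 1, 11)**3    # denser near beta = 0, a common choice
    Eln = np.array([mean_loglik(b) for b in betas])
    lnZ_ti = np.sum(0.5 * (Eln[1:] + Eln[:-1]) * np.diff(betas))  # trapezoid rule

    # exact evidence for the conjugate model: y ~ N(0, I + tau^2 * 11^T)
    n, S1, S2 = y.size, y.sum(), (y**2).sum()
    lnZ_exact = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(1 + n * tau**2)
                 - 0.5 * (S2 - tau**2 * S1**2 / (1 + n * tau**2)))
    print(lnZ_ti, lnZ_exact)            # the two values should closely agree
    ```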

  16. A Fast Method for Measuring the Similarity Between 3D Model and 3D Point Cloud

    NASA Astrophysics Data System (ADS)

    Zhang, Zongliang; Li, Jonathan; Li, Xin; Lin, Yangbin; Zhang, Shanxin; Wang, Cheng

    2016-06-01

    This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). Measuring SimMC is crucial for many point cloud-related applications, such as 3D object retrieval and inverse procedural modelling. In the proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. In order to avoid the huge computational burden of calculating DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to traditional SimMC measures that can only capture global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method on both synthetic data and laser scanning data.
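
    The core computation, nearest-neighbour distances from model samples to the cloud, is cheap with a k-d tree. The sketch below uses uniform weights and a placeholder surface area, so it only illustrates the shape of the computation; the paper's weighting scheme and area terms are not reproduced.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    cloud = rng.uniform(0, 1, (50_000, 3))          # stand-in for a scanned point cloud
    model_samples = rng.uniform(0, 1, (2_000, 3))   # points sampled from the model surface

    d, _ = cKDTree(cloud).query(model_samples, k=1) # nearest cloud point per model sample
    dist_mc = d.mean()                              # DistMC (paper uses weighted distances)
    area = 1.0                                      # placeholder model surface area
    sim_mc = area / dist_mc                         # SimMC as an area-to-distance ratio
    print(f"DistMC = {dist_mc:.4f}, SimMC = {sim_mc:.1f}")
    ```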

  17. Analytical Solutions for Rumor Spreading Dynamical Model in a Social Network

    NASA Astrophysics Data System (ADS)

    Fallahpour, R.; Chakouvari, S.; Askari, H.

    2015-03-01

    In this paper, the Laplace Adomian decomposition method (LADM) is used to solve a rumor-spreading model. First, a succinct review is given of the use of analytical methods such as the Adomian decomposition method, the variational iteration method and the homotopy analysis method for epidemic models and biomathematics. A rumor-spreading model with a forgetting mechanism is then considered, and LADM is applied to solve it. By means of this method, a general solution is obtained that can readily be employed to assess the rumor model without any computer program. The results are discussed for different cases and parameters. Furthermore, the method is shown to be straightforward and fruitful for analyzing equations with complicated terms, such as the rumor model. Comparison with numerical methods reveals that LADM is powerful and accurate for obtaining solutions of this model. It is concluded that the method is well suited to this problem and can provide researchers with a very powerful vehicle for scrutinizing rumor models in diverse kinds of social networks such as Facebook, YouTube, Flickr, LinkedIn and Twitter.

  18. Massive integration of diverse protein quality assessment methods to improve template based modeling in CASP11.

    PubMed

    Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin

    2016-09-01

    Model evaluation and selection is an important step and a big challenge in template-based protein structure prediction. Individual model quality assessment methods designed for recognizing some specific properties of protein structures often fail to consistently select good models from a model pool because of their limitations. Therefore, combining multiple complementary quality assessment methods is useful for improving model ranking and consequently tertiary structure prediction. Here, we report the performance and analysis of our human tertiary structure predictor (MULTICOM) based on the massive integration of 14 diverse complementary quality assessment methods that was successfully benchmarked in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11). The predictions of MULTICOM for 39 template-based domains were rigorously assessed by six scoring metrics covering global topology of the Cα trace, local all-atom fitness, side chain quality, and physical reasonableness of the model. The results show that the massive integration of complementary, diverse single-model and multi-model quality assessment methods can effectively leverage the strength of single-model methods in distinguishing quality variation among similar good models and the advantage of multi-model quality assessment methods in identifying reasonable average-quality models. The overall excellent performance of the MULTICOM predictor demonstrates that integrating a large number of model quality assessment methods in conjunction with model clustering is a useful approach to improve the accuracy, diversity, and consequently robustness of template-based protein structure prediction. Proteins 2016; 84(Suppl 1):247-259. © 2015 Wiley Periodicals, Inc.

  19. A mesoscopic bridging scale method for fluids and coupling dissipative particle dynamics with continuum finite element method

    PubMed Central

    Kojic, Milos; Filipovic, Nenad; Tsuda, Akira

    2012-01-01

    A multiscale procedure to couple a mesoscale discrete particle model and a macroscale continuum model of incompressible fluid flow is proposed in this study. We call this procedure the mesoscopic bridging scale (MBS) method since it is developed on the basis of the bridging scale method for coupling molecular dynamics and finite element models [G.J. Wagner, W.K. Liu, Coupling of atomistic and continuum simulations using a bridging scale decomposition, J. Comput. Phys. 190 (2003) 249–274]. We derive the governing equations of the MBS method and show that the differential equations of motion of the mesoscale discrete particle model and finite element (FE) model are only coupled through the force terms. Based on this coupling, we express the finite element equations, which rely on the Navier–Stokes and continuity equations, in a way that the internal nodal FE forces are evaluated using viscous stresses from the mesoscale model. The dissipative particle dynamics (DPD) method for the discrete particle mesoscale model is employed. The entire fluid domain is divided into a local domain and a global domain. Fluid flow in the local domain is modeled with both DPD and the FE method, while fluid flow in the global domain is modeled by the FE method only. The MBS method is suitable for modeling complex (colloidal) fluid flows, where continuum methods are sufficiently accurate only in the large fluid domain, while small, local regions of particular interest require detailed modeling by mesoscopic discrete particles. Solved examples (simple Poiseuille and driven cavity flows) illustrate the applicability of the proposed MBS method. PMID:23814322

  20. Development and comparison in uncertainty assessment based Bayesian modularization method in hydrological modeling

    NASA Astrophysics Data System (ADS)

    Li, Lu; Xu, Chong-Yu; Engeland, Kolbjørn

    2013-04-01

    With respect to model calibration, parameter estimation and analysis of uncertainty sources, various regression and probabilistic approaches are used in hydrological modeling. A family of Bayesian methods, which incorporates different sources of information into a single analysis through Bayes' theorem, is widely used for uncertainty assessment. However, none of these approaches treats the impact of high flows in hydrological modeling well. This study proposes a Bayesian modularization uncertainty assessment approach in which the highest streamflow observations are treated as suspect information that should not influence the inference of the main bulk of the model parameters. This study includes a comprehensive comparison and evaluation of uncertainty assessments by our new Bayesian modularization method and standard Bayesian methods using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions were used in combination with the standard Bayesian method: the AR(1) plus Normal model independent of time (Model 1), the AR(1) plus Normal model dependent on time (Model 2) and the AR(1) plus Multi-normal model (Model 3). The results reveal that the Bayesian modularization method provides the most accurate streamflow estimates, measured by the Nash-Sutcliffe efficiency, and the best uncertainty estimates for low, medium and entire flows compared to the standard Bayesian methods. The study thus provides a new approach for reducing the impact of high flows on the discharge uncertainty assessment of hydrological models via Bayesian methods.
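
    To make the modularization idea concrete, the sketch below calibrates a single toy rainfall-runoff parameter by Metropolis-Hastings while holding out the top 5% of flows, so the suspect extremes do not drive the inference. WASMOD and the AR(1) error models are deliberately replaced by a made-up linear stand-in; everything here is our own illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    rain = rng.gamma(2.0, 5.0, 300)
    q_obs = 0.6 * rain + rng.normal(0, 2.0, rain.size)   # synthetic "observed" flow

    bulk = q_obs < np.quantile(q_obs, 0.95)              # treat the top 5% as suspect

    def logpost(theta):
        if not (0 < theta < 2):
            return -np.inf                                # flat prior on (0, 2)
        r = q_obs[bulk] - theta * rain[bulk]              # likelihood uses bulk flows only
        return -0.5 * np.sum(r**2) / 2.0**2

    theta, lp, chain = 0.5, -np.inf, []
    for _ in range(20_000):                               # Metropolis-Hastings
        prop = theta + rng.normal(0, 0.02)
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    print(np.mean(chain[5000:]), np.std(chain[5000:]))    # posterior mean and spread
    ```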

  1. Development and comparison of Bayesian modularization method in uncertainty assessment of hydrological models

    NASA Astrophysics Data System (ADS)

    Li, L.; Xu, C.-Y.; Engeland, K.

    2012-04-01

    With respect to model calibration, parameter estimation and analysis of uncertainty sources, different approaches have been used in hydrological models. The Bayesian method, which incorporates different sources of information into a single analysis through Bayes' theorem, is one of the most widely used methods for uncertainty assessment of hydrological models. However, none of these applications treats the uncertainty in the extreme flows of hydrological simulations well. This study proposes a Bayesian modularization approach for uncertainty assessment of conceptual hydrological models that takes the extreme flows into account. It includes a comprehensive comparison and evaluation of uncertainty assessments by the new Bayesian modularization approach and traditional Bayesian models using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions are used in combination with the traditional Bayesian models: the AR(1) plus Normal, time-period-independent model (Model 1), the AR(1) plus Normal, time-period-dependent model (Model 2) and the AR(1) plus multi-normal model (Model 3). The results reveal that (1) the simulations derived from the Bayesian modularization method are more accurate, with the highest Nash-Sutcliffe efficiency value, and (2) the Bayesian modularization method performs best in uncertainty estimates of entire flows and in terms of application and computational efficiency. The study thus introduces a new approach for reducing the effect of extreme flows on the discharge uncertainty assessment of hydrological models via Bayesian methods. Keywords: extreme flow, uncertainty assessment, Bayesian modularization, hydrological model, WASMOD

  2. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive for practical problems. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
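
    The central trick, solving the damped least-squares subproblem of each Levenberg-Marquardt step in a Krylov subspace rather than by direct factorization, can be sketched with SciPy's LSQR, whose `damp` argument adds the Tikhonov term without forming J^T J. Subspace recycling across damping values is not reproduced here; the toy problem and all settings are our own assumptions.

    ```python
    import numpy as np
    from scipy.sparse.linalg import lsqr

    def lm_krylov(residual, jacobian, x0, n_iter=20):
        """Levenberg-Marquardt with each step solved by the LSQR Krylov method."""
        x, lam = np.asarray(x0, float), 1e-2
        for _ in range(n_iter):
            r, J = residual(x), jacobian(x)
            # min ||J p + r||^2 + lam ||p||^2 without forming the normal equations
            p = lsqr(J, -r, damp=np.sqrt(lam))[0]
            if np.linalg.norm(residual(x + p)) < np.linalg.norm(r):
                x, lam = x + p, lam * 0.5          # accept step, relax damping
            else:
                lam *= 10.0                        # reject step, increase damping
        return x

    # toy nonlinear least squares: fit y = a * exp(b * t)
    t = np.linspace(0, 1, 50)
    y = 2.0 * np.exp(1.5 * t)
    res = lambda x: x[0] * np.exp(x[1] * t) - y
    jac = lambda x: np.column_stack([np.exp(x[1] * t), x[0] * t * np.exp(x[1] * t)])
    print(lm_krylov(res, jac, [1.0, 1.0]))         # converges toward [2.0, 1.5]
    ```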

  3. The review of dynamic monitoring technology for crop growth

    NASA Astrophysics Data System (ADS)

    Zhang, Hong-wei; Chen, Huai-liang; Zou, Chun-hui; Yu, Wei-dong

    2010-10-01

    In this paper, crop growth monitoring methods are described in detail. The crop growth models (the Netherlands Wageningen model system, the United States GOSSYM and CERES models, the Australian APSIM model, and the CCSODS model system in China) are introduced with a focus on their underlying theory and applications. Remote sensing monitoring methods based on leaf area index (LAI) and biomass, proposed by various researchers in China and abroad, are strongly emphasized. Monitoring methods that couple remote sensing with crop growth models are discussed at length, including the "forcing" approach, in which state parameters retrieved from remote sensing are used as inputs to the crop growth model to enhance the accuracy of its dynamic simulation, and the "assimilation" approach, in which the difference between remotely sensed retrievals and the simulated values of the crop growth model is reduced in order to estimate initial values or parameter values and thereby increase simulation accuracy. Finally, development trends for monitoring methods are proposed based on the advantages and shortcomings of previous studies; the combination of remote sensing (with moderate-resolution data from FY-3A, MODIS, etc.), crop growth models, "3S" systems and in situ observation is expected to be the main route to refined dynamic monitoring and quantitative assessment of crop growth in the future.

  4. Comparison between two statistically based methods, and two physically based models developed to compute daily mean streamflow at ungaged locations in the Cedar River Basin, Iowa

    USGS Publications Warehouse

    Linhart, S. Mike; Nania, Jon F.; Christiansen, Daniel E.; Hutchinson, Kasey J.; Sanders, Curtis L.; Archfield, Stacey A.

    2013-01-01

    A variety of individuals, from water resource managers to recreational users, need streamflow information for planning and decision-making at locations where there are no streamgages. To address this problem, two statistically based methods, the Flow Duration Curve Transfer method and the Flow Anywhere method, were developed for statewide application, and two physically based models, the Precipitation-Runoff Modeling System and the Soil and Water Assessment Tool, were developed for application to the Cedar River Basin only. Observed and estimated streamflows for the two methods and two models were compared for goodness of fit at 13 streamgages modeled in the Cedar River Basin by using the Nash-Sutcliffe and percent-bias efficiency values. Based on median and mean Nash-Sutcliffe values for the 13 streamgages, the Precipitation-Runoff Modeling System and Soil and Water Assessment Tool models appear to have performed similarly and better than the Flow Duration Curve Transfer and Flow Anywhere methods. Based on median and mean percent-bias values, the Soil and Water Assessment Tool model appears to have generally overestimated daily mean streamflows, whereas the Precipitation-Runoff Modeling System model and the statistical methods appear to have underestimated daily mean streamflows. The Flow Duration Curve Transfer method produced the lowest median and mean percent-bias values and appears to perform better than the other models.
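
    For reference, the two goodness-of-fit measures used in the comparison have standard definitions, sketched below with made-up observed and simulated series.

    ```python
    import numpy as np

    def nash_sutcliffe(obs, sim):
        """NSE = 1 - SSE / variance of observations (1 is perfect, <0 worse than mean)."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

    def percent_bias(obs, sim):
        """PBIAS > 0 means overestimation, < 0 underestimation, in percent."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 100.0 * np.sum(sim - obs) / np.sum(obs)

    obs = np.array([10., 12., 30., 8., 5.])
    sim = np.array([11., 10., 26., 9., 6.])
    print(nash_sutcliffe(obs, sim), percent_bias(obs, sim))
    ```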

  5. Automatic liver segmentation in computed tomography using general-purpose shape modeling methods.

    PubMed

    Spinczyk, Dominik; Krasoń, Agata

    2018-05-29

    Liver segmentation in computed tomography is required in many clinical applications. The segmentation methods used can be classified according to a number of criteria; one important criterion for method selection is the shape representation of the segmented organ. The aim of this work is automatic liver segmentation using general-purpose shape modeling methods. Methods based on shape information at various levels of sophistication were used. Single-atlas-based segmentation was used as the simplest shape-based method; it deforms a single atlas using free-form deformation of control-point curves. Subsequently, the classic and a modified Active Shape Model (ASM), using mean shape models, were applied. As the most advanced and principal method, generalized statistical shape models (Gaussian Process Morphable Models), which are based on multi-dimensional Gaussian distributions of the shape deformation field, were used. Mutual information and the sum of squared distances were used as similarity measures. The poorest results were obtained for the single-atlas method. For the ASM method, in the 10 analyzed cases the Dice coefficient was above 55% for seven test images, of which three were over 70%, which placed the method in second place. The best results were obtained for the method based on the generalized statistical distribution of the deformation field, for which the Dice coefficient was 88.5%. This value can be explained by the use of general-purpose shape modeling methods in the face of the large shape variance of the modeled organ (the liver) and the limited size of our training data set, which comprised 10 cases. The results obtained with the presented fully automatic method are comparable with those of dedicated liver segmentation methods. In addition, the deformation properties of the model can be described mathematically by using various kernel functions, which allows the liver to be segmented at a comparable level using a smaller training set.
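
    The Dice coefficient reported above is the standard overlap score between a segmentation and its reference; a minimal sketch on synthetic masks:

    ```python
    import numpy as np

    def dice(a, b):
        """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    seg = np.zeros((64, 64), bool); seg[10:40, 10:40] = True   # predicted mask
    ref = np.zeros((64, 64), bool); ref[12:42, 12:42] = True   # reference mask
    print(f"Dice = {dice(seg, ref):.3f}")
    ```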

  6. Method and apparatus for modeling interactions

    DOEpatents

    Xavier, Patrick G.

    2002-01-01

    The present invention provides a method and apparatus for modeling interactions that overcomes drawbacks of prior approaches. The method comprises representing two bodies undergoing translations by two swept-volume representations. Interactions such as nearest approach and collision can then be modeled from the swept-body representations. The present invention is more robust and allows faster modeling than previous methods.

  7. Twitter's tweet method modelling and simulation

    NASA Astrophysics Data System (ADS)

    Sarlis, Apostolos S.; Sakas, Damianos P.; Vlachos, D. S.

    2015-02-01

    This paper proposes a set of Twitter marketing methods. The tools that Twitter provides are modelled and simulated using iThink in the context of a Twitter media-marketing agency. The paper leverages the system dynamics paradigm to model Twitter marketing tools and methods, using the iThink™ system to implement them, and follows the design science research methodology for the proof of concept of the models and modelling processes. The models were developed for a Twitter marketing agent/company, tested in real circumstances with real numbers, and finalized through a number of revisions and iterations of design, development, simulation, testing and evaluation. The paper also addresses the methods best suited to organized, targeted promotion on the Twitter social media service. The validity and usefulness of these Twitter marketing method models for day-to-day decision-making are authenticated by the management of the company organization. The paper implements system dynamics modelling of Twitter marketing methods and produces models of various Twitter marketing situations. The tweet method that Twitter provides can be adjusted, depending on the situation, to maximize the profit of the company/agent.

  8. [RESEARCH PROGRESS OF EXPERIMENTAL ANIMAL MODELS OF AVASCULAR NECROSIS OF FEMORAL HEAD].

    PubMed

    Yu, Kaifu; Tan, Hongbo; Xu, Yongqing

    2015-12-01

    To summarize current research and progress on experimental animal models of avascular necrosis of the femoral head, domestic and international literature concerning such models was reviewed and analyzed. The methods used to prepare experimental animal models of avascular necrosis of the femoral head can be broadly classified as traumatic (including surgical, physical, and chemical insult) and non-traumatic (including steroid, lipopolysaccharide, steroid combined with lipopolysaccharide, steroid combined with horse serum, etc.). Each method has both merits and demerits, and no ideal method has yet been developed. There are many methods to prepare experimental animal models of avascular necrosis of the femoral head, but the proper model should be selected based on the aim of the research. The establishment of ideal experimental animal models requires further research.

  9. Auto Regressive Moving Average (ARMA) Modeling Method for Gyro Random Noise Using a Robust Kalman Filter

    PubMed Central

    Huang, Lei

    2015-01-01

    To solve the problem that conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state variables. Unknown, time-varying estimators of the observation noise are used to obtain the estimated mean and variance of the observation noise. Using robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy; thus, the required sample size is reduced. It can be applied in modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required. PMID:26437409
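
    The paper's robust-Kalman estimator is not available in standard libraries; as a baseline, the sketch below shows the conventional maximum-likelihood ARMA fit that such methods aim to improve on, applied to synthetic gyro-like noise (all orders and coefficients are our own illustrative choices).

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(3)
    e = rng.normal(0, 1, 2000)
    y = np.zeros(2000)
    for k in range(2, 2000):                   # simulate ARMA(2,1) "gyro noise"
        y[k] = 0.6 * y[k-1] - 0.2 * y[k-2] + e[k] + 0.3 * e[k-1]

    fit = ARIMA(y, order=(2, 0, 1), trend="n").fit()   # conventional ML estimation
    print(fit.params)                          # AR, MA and innovation-variance estimates
    ```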

  10. Core Professionalism Education in Surgery: A Systematic Review.

    PubMed

    Sarıoğlu Büke, Akile; Karabilgin Öztürkçü, Özlem Sürel; Yılmaz, Yusuf; Sayek, İskender

    2018-03-15

    Background: Professionalism education is one of the major elements of surgical residency education. Aims: To evaluate the studies on core professionalism education programs in surgical professionalism education. Study Design: Systematic review. Methods: This systematic literature review was performed to analyze core professionalism programs for surgical residency education published in English with at least three of the following features: program developmental model/instructional design method, aims and competencies, methods of teaching, methods of assessment, and program evaluation model or method. A total of 27083 articles were retrieved using EBSCOHOST, PubMed, Science Direct, Web of Science, and manual search. Results: Eight articles met the selection criteria. The instructional design method was presented in only one article, which described the Analysis, Design, Development, Implementation, and Evaluation model. Six articles were based on the Accreditation Council for Graduate Medical Education criterion, although there was significant variability in content. The most common teaching method was role modeling with scenario- and case-based learning. A wide range of assessment methods for evaluating professionalism education were reported. The Kirkpatrick model was reported in one article as a method for program evaluation. Conclusion: It is suggested that for a core surgical professionalism education program, developmental/instructional design model, aims and competencies, content, teaching methods, assessment methods, and program evaluation methods/models should be well defined, and the content should be comparable.

  11. Modeling of Continuum Manipulators Using Pythagorean Hodograph Curves.

    PubMed

    Singh, Inderjeet; Amara, Yacine; Melingui, Achille; Mani Pathak, Pushparaj; Merzouki, Rochdi

    2018-05-10

    Research on continuum manipulators is increasingly developing in the context of bionic robotics because of their many advantages over conventional rigid manipulators. Due to their soft structure, they have inherent flexibility, which makes controlling them with high performance a considerable challenge. Before elaborating a control strategy for such robots, it is essential first to reconstruct the behavior of the robot through the development of an approximate behavioral model, which can be kinematic or dynamic depending on the operating conditions of the robot. Kinematically, two types of modeling methods exist to describe robot behavior: quantitative methods, which are model-based, and qualitative methods, which are learning-based. In kinematic modeling of continuum manipulators, the assumption of constant curvature is often made to simplify the model formulation. In this work, a quantitative modeling method is proposed, based on Pythagorean hodograph (PH) curves. The aim is to obtain a three-dimensional reconstruction of the shape of the continuum manipulator with variable curvature, allowing the calculation of its inverse kinematic model (IKM). The PH-based kinematic modeling of continuum manipulators performs considerably well in terms of position accuracy, shape reconstruction, and time/cost of model calculation compared with other kinematic modeling methods, in two cases: free-load manipulation and variable-load manipulation. The modeling method is applied to the Compact Bionic Handling Assistant (CBHA) manipulator for validation, and the results are compared with other IKMs developed for the CBHA manipulator.

  12. A point cloud modeling method based on geometric constraints mixing the robust least squares method

    NASA Astrophysics Data System (ADS)

    Yue, JIanping; Pan, Yi; Yue, Shun; Liu, Dapeng; Liu, Bin; Huang, Nan

    2016-10-01

    The advent of 3D laser scanning technology has provided a new method for acquiring spatial 3D information, and, being automatic and highly precise, it has been widely used in surveying and mapping engineering. The processing of 3D laser scanning data mainly includes field data acquisition, in-office registration (splicing) of the laser data, and subsequent 3D modeling and data integration. Point cloud modeling has been studied extensively, both in China and abroad. Surface reconstruction techniques mainly include point-based models, triangle meshes, triangular Bezier surface models and rectangular surface models; neural networks and alpha shapes have also been used in surface reconstruction. These methods, however, often focus on fitting single surfaces, automatically or manually and block by block, which ignores the integrity of the model. This leads to a serious problem in the stitched model: surfaces fitted separately often fail to satisfy well-known geometric constraints, such as parallelism, perpendicularity, a fixed angle, or a fixed distance. Modeling theory that accounts for such dimensional and positional constraints is, however, not widely used. A traditional modeling method that adds geometric constraints combines the penalty function method with the Levenberg-Marquardt algorithm (L-M algorithm) and is quite stable, but our investigation found that it is strongly influenced by the initial value. In this paper, we propose an improved point cloud modeling method that takes geometric constraints into account. We first apply robust least squares to enhance the accuracy of the initial value, then use the penalty function method to transform the constrained optimization problem into an unconstrained one, and finally solve the problem using the L-M algorithm. The experimental results show that the internal accuracy is improved and that the improved point cloud modeling method proposed in this paper outperforms traditional point cloud modeling methods.
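
    The pipeline (ordinary least squares for the initial value, a penalty term for the geometric constraint, Levenberg-Marquardt for the solve) can be sketched on a 2-D analogue: two fitted lines that should be parallel. The robust-LS initialization and full 3-D surfaces are simplified away, and all data and weights are our own assumptions.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(7)
    x1, x2 = rng.uniform(0, 10, 50), rng.uniform(0, 10, 50)
    y1 = 1.00 * x1 + 2.0 + rng.normal(0, 0.2, 50)
    y2 = 1.05 * x2 + 6.0 + rng.normal(0, 0.2, 50)   # nearly parallel data

    def residuals(p, mu=10.0):
        a1, b1, a2, b2 = p
        return np.concatenate([
            a1 * x1 + b1 - y1,                       # fit of surface/line 1
            a2 * x2 + b2 - y2,                       # fit of surface/line 2
            [np.sqrt(mu) * (a1 - a2)],               # penalty: slopes must agree
        ])

    p0 = np.r_[np.polyfit(x1, y1, 1), np.polyfit(x2, y2, 1)]   # LS initial value
    sol = least_squares(residuals, p0, method="lm")            # Levenberg-Marquardt
    print(sol.x)    # the two slope estimates are pulled together by the penalty
    ```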

  13. Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials.

    PubMed

    Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A; Burgueño, Juan; Bandeira E Sousa, Massaine; Crossa, José

    2018-03-28

    In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe), where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these 6 models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interaction models with unstructured variance-covariances (MUC) using the GB and GK kernels (4 model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close-to-zero phenotypic correlations among environments. The two models MDs and MDe with the random intercept of the lines and the GK method were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the model-method combinations with G×E (MDs and MDe) including the random intercepts of the lines with the GK method gave important savings in computing time compared with the G×E interaction multi-environment models with unstructured variance-covariances, but with lower genomic prediction accuracy. Copyright © 2018 Cuevas et al.
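
    The two kernels being compared have standard constructions from a marker matrix, sketched below. The bandwidth of 1 and the median scaling of the distances are common conventions and our own assumptions, not necessarily the paper's exact choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    X = rng.integers(0, 3, (100, 500)).astype(float)   # marker matrix coded 0/1/2
    Xc = X - X.mean(axis=0)                            # center marker columns

    G = Xc @ Xc.T / X.shape[1]                         # linear (GB/GBLUP) kernel

    d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)  # squared Euclidean distances
    K = np.exp(-1.0 * d2 / np.median(d2[d2 > 0]))      # Gaussian (GK) kernel
    print(G.shape, K.shape)                            # both are lines-by-lines matrices
    ```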

  14. Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials

    PubMed Central

    Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A.; Burgueño, Juan; Bandeira e Sousa, Massaine; Crossa, José

    2018-01-01

    In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe), where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these 6 models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interaction models with unstructured variance-covariances (MUC) using the GB and GK kernels (4 model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close-to-zero phenotypic correlations among environments. The two models MDs and MDe with the random intercept of the lines and the GK method were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the model-method combinations with G×E (MDs and MDe) including the random intercepts of the lines with the GK method gave important savings in computing time compared with the G×E interaction multi-environment models with unstructured variance-covariances, but with lower genomic prediction accuracy. PMID:29476023

  15. Uncertainty analysis of an inflow forecasting model: extension of the UNEEC machine learning-based method

    NASA Astrophysics Data System (ADS)

    Pianosi, Francesca; Lal Shrestha, Durga; Solomatine, Dimitri

    2010-05-01

    This research presents an extension of the UNEEC (Uncertainty Estimation based on local Errors and Clustering; Shrestha and Solomatine, 2006, 2008; Solomatine and Shrestha, 2009) method that explicitly includes parameter uncertainty. The UNEEC method assumes that there is an optimal model and that the residuals of this model can be used to assess the uncertainty of the model prediction; all sources of uncertainty, including input, parameter and model structure uncertainty, are assumed to be manifested in the model residuals. In this research, these assumptions are relaxed and the UNEEC method is extended to also consider parameter uncertainty (abbreviated UNEEC-P). In UNEEC-P, we first use Monte Carlo (MC) sampling in parameter space to generate N model realizations (each of which is a time series), estimate the prediction quantiles from the empirical distribution functions of the model residuals over all residual realizations, and only then apply the standard UNEEC method, which encapsulates the uncertainty of a hydrologic model (expressed by quantiles of the error distribution) in a machine learning model (e.g., an ANN). UNEEC-P is applied first to a linear regression model of synthetic data, and then to a real case study of forecasting inflow to Lake Lugano in northern Italy. The inflow forecasting model is a stochastic heteroscedastic model (Pianosi and Soncini-Sessa, 2009). The preliminary results show that the UNEEC-P method produces wider uncertainty bounds, which is consistent with the fact that the method also considers the parameter uncertainty of the optimal model. In the future, the UNEEC method will be further extended to consider input and structure uncertainty, which will provide a more realistic estimation of model predictions.
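
    The first stage of UNEEC-P (Monte Carlo sampling in parameter space, pooling residual realizations, reading off prediction quantiles) can be sketched on the kind of synthetic linear-regression case the abstract mentions. The crude parameter-perturbation scheme and all numbers below are our own assumptions, and the later machine-learning/clustering stage of UNEEC is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    x = np.linspace(0, 10, 200)
    y = 2.0 * x + 1.0 + rng.normal(0, 1.5, x.size)      # synthetic data

    # crude stand-in for parameter uncertainty: perturb the least-squares fit
    a_hat, b_hat = np.polyfit(x, y, 1)
    thetas = rng.multivariate_normal([a_hat, b_hat], np.diag([0.01, 0.1]), 500)

    # pool residual realizations over all N parameter samples
    residuals = np.concatenate([y - (a * x + b) for a, b in thetas])
    lo, hi = np.percentile(residuals, [5, 95])          # pooled 90% error band
    print(f"90% residual band: [{lo:.2f}, {hi:.2f}]")
    ```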

  16. The determination of third order linear models from a seventh order nonlinear jet engine model

    NASA Technical Reports Server (NTRS)

    Lalonde, Rick J.; Hartley, Tom T.; De Abreu-Garcia, J. Alex

    1989-01-01

    Results are presented that demonstrate how good reduced-order models can be obtained directly by recursive parameter identification using input/output (I/O) data of high-order nonlinear systems. Three different methods of obtaining a third-order linear model from a seventh-order nonlinear turbojet engine model are compared. The first method is to obtain a linear model from the original model and then reduce the linear model by standard reduction techniques such as residualization and balancing. The second method is to identify directly a third-order linear model by recursive least-squares parameter estimation using I/O data of the original model. The third method is to obtain a reduced-order model from the original model and then linearize the reduced model. Frequency responses are used as the performance measure to evaluate the reduced models. The reduced-order models along with their Bode plots are presented for comparison purposes.

  17. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean estimator suffers from a numerically low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used that conducts multiple MCMC runs with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a case of groundwater modeling with four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.

  18. Experimental Validation of Model Updating and Damage Detection via Eigenvalue Sensitivity Methods with Artificial Boundary Conditions

    DTIC Science & Technology

    2017-09-01

    Bouwense, Matthew D. [Only report documentation page text was captured for this record; no abstract is available.]

  19. Continuous Human Action Recognition Using Depth-MHI-HOG and a Spotter Model

    PubMed Central

    Eum, Hyukmin; Yoon, Changyong; Lee, Heejin; Park, Mignon

    2015-01-01

    In this paper, we propose a new method for spotting and recognizing continuous human actions using a vision sensor. The method comprises depth-MHI-HOG (DMH) feature extraction, action modeling, action spotting, and recognition. First, to effectively separate the foreground from the background, we propose a method called DMH; it provides a standard structure for segmenting images and extracting features using depth information, MHI, and HOG. Second, action modeling is performed to model various actions using the extracted features; sequences of actions are created through k-means clustering, and these sequences constitute the HMM input. Third, an action spotting method is proposed to filter meaningless actions from continuous actions and to identify the precise start and end points of actions. By employing the spotter model, the proposed method improves action recognition performance. Finally, the proposed method recognizes actions based on the start and end points. We evaluate recognition performance by using the proposed method to obtain and compare the probabilities of input sequences under the action models and the spotter model. Through various experiments, we demonstrate that the proposed method is efficient for recognizing continuous human actions in real environments. PMID:25742172

  20. The rank correlated FSK model for prediction of gas radiation in non-uniform media, and its relationship to the rank correlated SLW model

    NASA Astrophysics Data System (ADS)

    Solovjov, Vladimir P.; Webb, Brent W.; Andre, Frederic

    2018-07-01

    Following previous theoretical development based on the assumption of a rank correlated spectrum, the Rank Correlated Full Spectrum k-distribution (RC-FSK) method is proposed. The method proves advantageous for modeling radiation transfer in high-temperature gases in non-uniform media in two important ways. First, and perhaps most importantly, the method requires no specification of a reference gas thermodynamic state. Second, the spectral construction of the RC-FSK model is simpler than that of the original correlated FSK models, requiring only two cumulative k-distributions. Further, although not exhaustive, the example problems presented here suggest that the method may also yield improved accuracy relative to prior methods, and may exhibit less sensitivity to the blackbody source temperature used in the model predictions. This paper outlines the theoretical development of the RC-FSK method, comparing its spectral construction with prior correlated-spectrum FSK formulations, and defines the RC-FSK model's relationship to the Rank Correlated Spectral Line Weighted-sum-of-gray-gases (RC-SLW) model. Predictions using the Rank Correlated FSK method and previous FSK methods are presented for three different example problems, with line-by-line benchmark predictions used to assess accuracy.
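
    The building block behind all FSK-type methods is the Planck-weighted cumulative k-distribution g(k): the blackbody-weighted fraction of the spectrum whose absorption coefficient lies below k. The sketch below constructs g(k) for a synthetic toy spectrum; the RC-FSK assembly of two such distributions and the radiative transfer solve are not reproduced.

    ```python
    import numpy as np

    def planck(eta_cm, T):
        """Spectral blackbody intensity in wavenumber form (SI constants)."""
        h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
        eta = eta_cm * 100.0                       # 1/cm -> 1/m
        return 2 * h * c**2 * eta**3 / (np.exp(h * c * eta / (kB * T)) - 1)

    eta = np.linspace(200, 4000, 20_000)           # wavenumbers, 1/cm
    # toy absorption spectrum with band-like structure (illustrative only)
    kappa = 0.1 + np.abs(np.sin(eta / 25.0)) * np.exp(-((eta - 2300) / 600)**2)

    w = planck(eta, 1500.0)
    w /= w.sum()                                   # normalized Planck weights
    order = np.argsort(kappa)                      # reorder the spectrum by k
    k_sorted, g = kappa[order], np.cumsum(w[order])  # g(k): cumulative Planck weight
    print(k_sorted[np.searchsorted(g, 0.5)])       # median-k of the distribution
    ```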

  1. Core Professionalism Education in Surgery: A Systematic Review

    PubMed Central

    Sarıoğlu Büke, Akile; Karabilgin Öztürkçü, Özlem Sürel; Yılmaz, Yusuf; Sayek, İskender

    2018-01-01

    Background: Professionalism education is one of the major elements of surgical residency education. Aims: To evaluate the studies on core professionalism education programs in surgical professionalism education. Study Design: Systematic review. Methods: This systematic literature review was performed to analyze core professionalism programs for surgical residency education published in English with at least three of the following features: program developmental model/instructional design method, aims and competencies, methods of teaching, methods of assessment, and program evaluation model or method. A total of 27083 articles were retrieved using EBSCOHOST, PubMed, Science Direct, Web of Science, and manual search. Results: Eight articles met the selection criteria. The instructional design method was presented in only one article, which described the Analysis, Design, Development, Implementation, and Evaluation model. Six articles were based on the Accreditation Council for Graduate Medical Education criterion, although there was significant variability in content. The most common teaching method was role modeling with scenario- and case-based learning. A wide range of assessment methods for evaluating professionalism education were reported. The Kirkpatrick model was reported in one article as a method for program evaluation. Conclusion: It is suggested that for a core surgical professionalism education program, developmental/instructional design model, aims and competencies, content, teaching methods, assessment methods, and program evaluation methods/models should be well defined, and the content should be comparable. PMID:29553464

  2. Uncertainty quantification for environmental models

    USGS Publications Warehouse

    Hill, Mary C.; Lu, Dan; Kavetski, Dmitri; Clark, Martyn P.; Ye, Ming

    2012-01-01

    Environmental models are used to evaluate the fate of fertilizers in agricultural settings (including soil denitrification), the degradation of hydrocarbons at spill sites, and water supply for people and ecosystems in small to large basins and cities—to mention but a few applications of these models. They also play a role in understanding and diagnosing potential environmental impacts of global climate change. The models are typically mildly to extremely nonlinear. The persistent demand for enhanced dynamics and resolution to improve model realism [17] means that lengthy individual model execution times will remain common, notwithstanding continued enhancements in computer power. In addition, high-dimensional parameter spaces are often defined, which increases the number of model runs required to quantify uncertainty [2]. Some environmental modeling projects have access to extensive funding and computational resources; many do not. The many recent studies of uncertainty quantification in environmental model predictions have focused on uncertainties related to data error and sparsity of data, expert judgment expressed mathematically through prior information, poorly known parameter values, and model structure (see, for example, [1,7,9,10,13,18]). Approaches for quantifying uncertainty include frequentist (potentially with prior information [7,9]), Bayesian [13,18,19], and likelihood-based. A few of the numerous methods, including some sensitivity and inverse methods with consequences for understanding and quantifying uncertainty, are as follows: Bayesian hierarchical modeling and Bayesian model averaging; single-objective optimization with error-based weighting [7] and multi-objective optimization [3]; methods based on local derivatives [2,7,10]; screening methods like OAT (one at a time) and the method of Morris [14]; FAST (Fourier amplitude sensitivity testing) [14]; the Sobol' method [14]; randomized maximum likelihood [10]; Markov chain Monte Carlo (MCMC) [10]. There are also bootstrapping and cross-validation approaches. Sometimes analyses are conducted using surrogate models [12]. The availability of so many options can be confusing. Categorizing methods based on fundamental questions assists in communicating the essential results of uncertainty analyses to stakeholders. Such questions can focus on model adequacy (e.g., How well does the model reproduce observed system characteristics and dynamics?) and sensitivity analysis (e.g., What parameters can be estimated with available data? What observations are important to parameters and predictions? What parameters are important to predictions?), as well as on the uncertainty quantification (e.g., How accurate and precise are the predictions?). The methods can also be classified by the number of model runs required: few (10s to 1000s) or many (10,000s to 1,000,000s). Of the methods listed above, the most computationally frugal are generally those based on local derivatives; MCMC methods tend to be among the most computationally demanding. Surrogate models (emulators) do not necessarily produce computational frugality because many runs of the full model are generally needed to create a meaningful surrogate model. With this categorization, we can, in general, address all the fundamental questions mentioned above using either computationally frugal or demanding methods.
Model development and analysis can thus be conducted consistently using either computationally frugal or demanding methods; alternatively, different fundamental questions can be addressed using methods that require different levels of effort. Based on this perspective, we pose the question: Can computationally frugal methods be useful companions to computationally demanding methods? The reliability of computationally frugal methods generally depends on the model being reasonably linear, which usually means smooth nonlinearities and the assumption of Gaussian errors; both tend to be more valid with more linear models.
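
    As a concrete illustration of the computationally frugal end of this spectrum, the Python sketch below (illustrative only; the toy model, step size, and weights are hypothetical, not from the paper) estimates local-derivative sensitivities by forward differences and summarizes them as composite scaled sensitivities, using one base run plus one run per parameter:

      import numpy as np

      def scaled_sensitivities(model, params, obs_weights, rel_step=0.01):
          # Forward-difference local sensitivities of simulated observations
          # to parameters: one base run plus one run per parameter.
          p0 = np.asarray(params, dtype=float)
          y0 = model(p0)
          J = np.empty((y0.size, p0.size))
          for j in range(p0.size):
              dp = rel_step * max(abs(p0[j]), 1e-12)
              p = p0.copy()
              p[j] += dp
              J[:, j] = (model(p) - y0) / dp
          # Dimensionless scaled sensitivities: dy_i/dp_j * p_j * sqrt(w_i)
          ss = J * p0[np.newaxis, :] * np.sqrt(np.asarray(obs_weights))[:, np.newaxis]
          # Composite scaled sensitivity per parameter (RMS over observations)
          css = np.sqrt(np.mean(ss ** 2, axis=0))
          return ss, css

      # Toy model: two observations of a two-parameter exponential decay
      model = lambda p: p[0] * np.exp(-p[1] * np.array([1.0, 2.0]))
      ss, css = scaled_sensitivities(model, [3.0, 0.5], obs_weights=[1.0, 1.0])
      print(css)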

  3. A comparative study on different methods of automatic mesh generation of human femurs.

    PubMed

    Viceconti, M; Bellingeri, L; Cristofolini, L; Toni, A

    1998-01-01

    The aim of this study was to evaluate comparatively five methods for automating mesh generation (AMG) when used to mesh a human femur. The AMG methods considered were: mapped mesh, which provides hexahedral elements through a direct mapping of the element onto the geometry; tetra mesh, which generates tetrahedral elements from a solid model of the object geometry; voxel mesh, which builds cubic 8-node elements directly from CT images; and hexa mesh, which automatically generates hexahedral elements from a surface definition of the femur geometry. The various methods were tested against two reference models: a simplified geometric model and a proximal femur model. The first model was useful to assess the inherent accuracy of the meshes created by the AMG methods, since an analytical solution was available for the elastic problem of the simplified geometric model. The femur model was used to test the AMG methods in a more realistic condition. The femoral geometry was derived from a reference model (the "standardized femur") and the finite element analysis predictions were compared to experimental measurements. All methods were evaluated in terms of the human and computer effort needed to carry out the complete analysis, and in terms of accuracy. The comparison demonstrated that each tested method deserves attention and may be the best for specific situations. The mapped AMG method requires a significant human effort but is very accurate and it allows tight control of the mesh structure. The tetra AMG method requires a solid model of the object to be analysed but is widely available and accurate. The hexa AMG method requires a significant computer effort but can also be used on polygonal models and is very accurate. The voxel AMG method requires a huge number of elements to reach an accuracy comparable to that of the other methods, but it does not require any pre-processing of the CT dataset to extract the geometry and in some cases may be the only viable solution.

  4. A probabilistic and continuous model of protein conformational space for template-free modeling.

    PubMed

    Zhao, Feng; Peng, Jian; Debartolo, Joe; Freed, Karl F; Sosnick, Tobin R; Xu, Jinbo

    2010-06-01

    One of the major challenges with protein template-free modeling is an efficient sampling algorithm that can explore a huge conformation space quickly. The popular fragment assembly method constructs a conformation by stringing together short fragments extracted from the Protein Data Base (PDB). The discrete nature of this method may limit generated conformations to a subspace in which the native fold does not belong. Another worry is that a protein with really new fold may contain some fragments not in the PDB. This article presents a probabilistic model of protein conformational space to overcome the above two limitations. This probabilistic model employs directional statistics to model the distribution of backbone angles and 2(nd)-order Conditional Random Fields (CRFs) to describe sequence-angle relationship. Using this probabilistic model, we can sample protein conformations in a continuous space, as opposed to the widely used fragment assembly and lattice model methods that work in a discrete space. We show that when coupled with a simple energy function, this probabilistic method compares favorably with the fragment assembly method in the blind CASP8 evaluation, especially on alpha or small beta proteins. To our knowledge, this is the first probabilistic method that can search conformations in a continuous space and achieves favorable performance. Our method also generated three-dimensional (3D) models better than template-based methods for a couple of CASP8 hard targets. The method described in this article can also be applied to protein loop modeling, model refinement, and even RNA tertiary structure prediction.
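
    The directional-statistics ingredient of such a model can be sketched briefly. The fragment below is a hypothetical illustration only: the per-residue means and concentrations are invented, and the paper's CRF conditioning on sequence is not reproduced. It samples backbone (phi, psi) angles from von Mises distributions, i.e., in a continuous angular space rather than from a discrete fragment library:

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical per-residue von Mises parameters (mean angle mu in radians,
      # concentration kappa) for a short helical stretch; a real model would
      # condition these on sequence, e.g., via the paper's 2nd-order CRFs.
      phi_params = [(-1.1, 8.0)] * 5   # phi near -63 degrees
      psi_params = [(-0.7, 8.0)] * 5   # psi near -41 degrees

      def sample_angles(params):
          # numpy's von Mises sampler returns angles in radians on (-pi, pi]
          return np.array([rng.vonmises(mu, kappa) for mu, kappa in params])

      phi, psi = sample_angles(phi_params), sample_angles(psi_params)
      print(np.degrees(phi).round(1), np.degrees(psi).round(1))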

  5. Efficient parameter estimation in longitudinal data analysis using a hybrid GEE method.

    PubMed

    Leung, Denis H Y; Wang, You-Gan; Zhu, Min

    2009-07-01

    The method of generalized estimating equations (GEEs) provides consistent estimates of the regression parameters in a marginal regression model for longitudinal data, even when the working correlation model is misspecified (Liang and Zeger, 1986). However, the efficiency of a GEE estimate can be seriously affected by the choice of the working correlation model. This study addresses this problem by proposing a hybrid method that combines multiple GEEs based on different working correlation models, using the empirical likelihood method (Qin and Lawless, 1994). Analyses show that this hybrid method is more efficient than a GEE using a misspecified working correlation model. Furthermore, if one of the working correlation structures correctly models the within-subject correlations, then this hybrid method provides the most efficient parameter estimates. In simulations, the hybrid method's finite-sample performance is superior to a GEE under any of the commonly used working correlation models and is almost fully efficient in all scenarios studied. The hybrid method is illustrated using data from a longitudinal study of the respiratory infection rates in 275 Indonesian children.
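
    A minimal sketch of the underlying issue, using statsmodels on simulated data (the data and model are hypothetical; the hybrid empirical-likelihood combination itself is not implemented here), fits the same marginal model under two working correlation structures and compares estimates and standard errors:

      import numpy as np
      import statsmodels.api as sm

      # Simulated longitudinal data: 50 subjects, 4 visits each (hypothetical)
      rng = np.random.default_rng(1)
      n_subj, n_visits = 50, 4
      groups = np.repeat(np.arange(n_subj), n_visits)
      x = rng.normal(size=n_subj * n_visits)
      subject_effect = np.repeat(rng.normal(scale=0.7, size=n_subj), n_visits)
      y = 1.0 + 0.5 * x + subject_effect + rng.normal(size=n_subj * n_visits)
      exog = sm.add_constant(x)

      # Fit the same marginal model under different working correlations.
      # GEE point estimates are consistent either way; efficiency (standard
      # errors) depends on how close the working structure is to the truth.
      for cov in (sm.cov_struct.Independence(), sm.cov_struct.Exchangeable()):
          res = sm.GEE(y, exog, groups=groups, cov_struct=cov).fit()
          print(type(cov).__name__, res.params.round(3), res.bse.round(3))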

  6. The Blended Finite Element Method for Multi-fluid Plasma Modeling

    DTIC Science & Technology

    2016-07-01

    Briefing Charts 3. DATES COVERED (From - To) 07 June 2016 - 01 July 2016 4. TITLE AND SUBTITLE The Blended Finite Element Method for Multi-fluid Plasma...BLENDED FINITE ELEMENT METHOD FOR MULTI-FLUID PLASMA MODELING Éder M. Sousa1, Uri Shumlak2 1ERC INC., IN-SPACE PROPULSION BRANCH (RQRS) AIR FORCE RESEARCH...MULTI-FLUID PLASMA MODEL 2 BLENDED FINITE ELEMENT METHOD Blended Finite Element Method Nodal Continuous Galerkin Modal Discontinuous Galerkin Model

  7. Global sensitivity analysis for urban water quality modelling: Terminology, convergence and comparison of different methods

    NASA Astrophysics Data System (ADS)

    Vanrolleghem, Peter A.; Mannina, Giorgio; Cosenza, Alida; Neumann, Marc B.

    2015-03-01

    Sensitivity analysis represents an important step in improving the understanding and use of environmental models. Indeed, by means of global sensitivity analysis (GSA), modellers may identify both important (factor prioritisation) and non-influential (factor fixing) model factors. No general rule has yet been defined for verifying the convergence of the GSA methods. In order to fill this gap this paper presents a convergence analysis of three widely used GSA methods (SRC, Extended FAST and Morris screening) for an urban drainage stormwater quality-quantity model. After the convergence was achieved the results of each method were compared. In particular, a discussion on peculiarities, applicability, and reliability of the three methods is presented. Moreover, a graphical Venn-diagram-based classification scheme and a precise terminology for better identifying important, interacting and non-influential factors for each method are proposed. In terms of convergence, it was shown that sensitivity indices related to factors of the quantity model achieve convergence faster. Results for the Morris screening method deviated considerably from the other methods. Factors related to the quality model require a much higher number of simulations than the number suggested in literature for achieving convergence with this method. In fact, the results have shown that the term "screening" is improperly used as the method may exclude important factors from further analysis. Moreover, for the presented application the convergence analysis shows more stable sensitivity coefficients for the Extended FAST method compared to SRC and Morris screening. Substantial agreement in terms of factor fixing was found between the Morris screening and Extended FAST methods. In general, the water quality related factors exhibited more important interactions than factors related to water quantity. Furthermore, in contrast to water quantity model outputs, water quality model outputs were found to be characterised by high non-linearity.
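
    For readers unfamiliar with the Morris method discussed above, the following self-contained sketch (a minimal textbook-style implementation on a toy function, not the study's code) computes elementary effects over random one-at-a-time trajectories and reports the usual mu* and sigma summaries:

      import numpy as np

      def morris_screening(f, k, r=20, levels=4, seed=0):
          # One-at-a-time Morris trajectories on the unit hypercube [0, 1]^k.
          # Returns mu* (mean |elementary effect|: overall importance) and
          # sigma (std of effects: a proxy for interaction/nonlinearity).
          rng = np.random.default_rng(seed)
          delta = levels / (2.0 * (levels - 1))
          grid = np.arange(levels) / (levels - 1)
          starts = grid[grid + delta <= 1.0 + 1e-12]   # keep x + delta feasible
          ee = np.empty((r, k))
          for traj in range(r):
              x = rng.choice(starts, size=k)
              fx = f(x)
              for i in rng.permutation(k):   # move each factor once, random order
                  xp = x.copy()
                  xp[i] += delta
                  fp = f(xp)
                  ee[traj, i] = (fp - fx) / delta
                  x, fx = xp, fp             # k + 1 model runs per trajectory
          return np.abs(ee).mean(axis=0), ee.std(axis=0, ddof=1)

      # Toy function: factor 0 linear, factor 1 interacting, factor 2 inert
      f = lambda x: x[0] + 2.0 * x[0] * x[1]
      mu_star, sigma = morris_screening(f, k=3)
      print(mu_star.round(3), sigma.round(3))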

  8. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  9. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGES

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
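
    The core idea of solving the Levenberg-Marquardt step with a Krylov method can be sketched compactly: the damped normal equations (J^T J + lambda I) delta = -J^T r are equivalent to a damped least-squares problem that SciPy's LSQR solves matrix-free. The sketch below is illustrative only; it omits the paper's subspace recycling across damping parameters and uses a toy curve-fit:

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, lsqr

      def lm_step(jac_mv, jac_rmv, residual, lam, n):
          # One Levenberg-Marquardt step via the Krylov solver LSQR.
          # jac_mv(v) = J @ v and jac_rmv(u) = J.T @ u are matrix-free
          # products, so J need never be formed; damp=sqrt(lam) adds the
          # LM damping term to the least-squares objective.
          J = LinearOperator((residual.size, n), matvec=jac_mv, rmatvec=jac_rmv)
          return lsqr(J, -residual, damp=np.sqrt(lam))[0]

      # Toy problem: fit y = a * exp(b * t) starting from (a, b) = (1.5, -0.5)
      t = np.linspace(0.0, 2.0, 30)
      y = 2.0 * np.exp(-1.0 * t)
      p = np.array([1.5, -0.5])
      r = p[0] * np.exp(p[1] * t) - y
      J = np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
      p = p + lm_step(lambda v: J @ v, lambda u: J.T @ u, r, lam=1e-2, n=2)
      print(p)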

  10. An introduction to using Bayesian linear regression with clinical data.

    PubMed

    Baldwin, Scott A; Larson, Michael J

    2017-11-01

    Statistical training in psychology focuses on frequentist methods. Bayesian methods are an alternative to standard frequentist methods. This article provides researchers with an introduction to fundamental ideas in Bayesian modeling. We use data from an electroencephalogram (EEG) and anxiety study to illustrate Bayesian models. Specifically, the models examine the relationship between error-related negativity (ERN), a particular event-related potential, and trait anxiety. Methodological topics covered include: how to set up a regression model in a Bayesian framework, specifying priors, examining convergence of the model, visualizing and interpreting posterior distributions, interval estimates, expected and predicted values, and model comparison tools. We also discuss situations where Bayesian methods can outperform frequentist methods as well as how to specify more complicated regression models. Finally, we conclude with recommendations about reporting guidelines for those using Bayesian methods in their own research. We provide data and R code for replicating our analyses. Copyright © 2017 Elsevier Ltd. All rights reserved.
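
    A minimal worked example of the kind of model the article introduces, using a conjugate Gaussian prior so the posterior is available in closed form (the data here are simulated stand-ins for the ERN/anxiety measurements, and the known noise variance is a simplifying assumption; the article itself uses MCMC-based R tooling):

      import numpy as np

      rng = np.random.default_rng(42)

      # Hypothetical stand-in: y = ERN amplitude, x = trait anxiety score
      n = 60
      x = rng.normal(size=n)
      y = 0.8 * x + rng.normal(scale=1.0, size=n)
      X = np.column_stack([np.ones(n), x])

      sigma2 = 1.0   # assumed known noise variance (simplifying assumption)
      tau2 = 10.0    # weakly informative N(0, tau2) prior on each coefficient

      # Conjugate Gaussian posterior: Sigma = (X'X / sigma2 + I / tau2)^-1
      Sigma = np.linalg.inv(X.T @ X / sigma2 + np.eye(2) / tau2)
      mean = Sigma @ X.T @ y / sigma2

      # 95% credible intervals from the posterior marginals
      sd = np.sqrt(np.diag(Sigma))
      print("posterior mean:", mean.round(3))
      print("95% CI:", np.column_stack([mean - 1.96 * sd, mean + 1.96 * sd]))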

  11. Adaptation to Climate Change: A Comparative Analysis of Modeling Methods for Heat-Related Mortality

    PubMed Central

    Hondula, David M.; Bunker, Aditi; Ibarreta, Dolores; Liu, Junguo; Zhang, Xinxin; Sauerborn, Rainer

    2017-01-01

    Background: Multiple methods are employed for modeling adaptation when projecting the impact of climate change on heat-related mortality. The sensitivity of impacts to each is unknown because they have never been systematically compared. In addition, little is known about the relative sensitivity of impacts to “adaptation uncertainty” (i.e., the inclusion/exclusion of adaptation modeling) relative to using multiple climate models and emissions scenarios. Objectives: This study had three aims: a) Compare the range in projected impacts that arises from using different adaptation modeling methods; b) compare the range in impacts that arises from adaptation uncertainty with ranges from using multiple climate models and emissions scenarios; c) recommend modeling method(s) to use in future impact assessments. Methods: We estimated impacts for 2070–2099 for 14 European cities, applying six different methods for modeling adaptation; we also estimated impacts with five climate models run under two emissions scenarios to explore the relative effects of climate modeling and emissions uncertainty. Results: The range of the difference (percent) in impacts between including and excluding adaptation, irrespective of climate modeling and emissions uncertainty, can be as low as 28% with one method and up to 103% with another (mean across 14 cities). In 13 of 14 cities, the ranges in projected impacts due to adaptation uncertainty are larger than those associated with climate modeling and emissions uncertainty. Conclusions: Researchers should carefully consider how to model adaptation because it is a source of uncertainty that can be greater than the uncertainty in emissions and climate modeling. We recommend absolute threshold shifts and reductions in slope. https://doi.org/10.1289/EHP634 PMID:28885979

  12. Two-Point Turbulence Closure Applied to Variable Resolution Modeling

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.; Rubinstein, Robert

    2011-01-01

    Variable resolution methods have become frontline CFD tools, but in order to take full advantage of this promising new technology, more formal theoretical development is desirable. Two general classes of variable resolution methods can be identified: hybrid or zonal methods in which RANS and LES models are solved in different flow regions, and bridging or seamless models which interpolate smoothly between RANS and LES. This paper considers the formulation of bridging methods using methods of two-point closure theory. The fundamental problem is to derive a subgrid two-equation model. We compare and reconcile two different approaches to this goal: the Partially Integrated Transport Model, and the Partially Averaged Navier-Stokes method.

  13. Model reduction methods for control design

    NASA Technical Reports Server (NTRS)

    Dunipace, K. R.

    1988-01-01

    Several different model reduction methods are developed and detailed implementation information is provided for those methods. Command files to implement the model reduction methods in a proprietary control law analysis and design package are presented. A comparison and discussion of the various reduction techniques is included.
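
    As a concrete example of one classical model reduction method (balanced truncation; the report's specific methods are not identified in this abstract), the sketch below reduces a stable linear system by discarding states with small Hankel singular values:

      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

      def balanced_truncation(A, B, C, r):
          # Square-root balanced truncation of a stable LTI system (A, B, C):
          # keep the r states with the largest Hankel singular values.
          Wc = solve_continuous_lyapunov(A, -B @ B.T)     # controllability Gramian
          Wo = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Gramian
          Lc = cholesky(Wc, lower=True)
          Lo = cholesky(Wo, lower=True)
          U, s, Vt = svd(Lo.T @ Lc)
          S = np.diag(s ** -0.5)
          T = Lc @ Vt.T @ S              # balancing transformation
          Tinv = S @ U.T @ Lo.T
          Ab, Bb, Cb = Tinv @ A @ T, Tinv @ B, C @ T
          return Ab[:r, :r], Bb[:r, :], Cb[:, :r], s     # reduced model + HSVs

      # Toy stable system: 4 states reduced to 2
      A = np.diag([-1.0, -2.0, -50.0, -80.0])
      B = np.ones((4, 1))
      C = np.ones((1, 4))
      Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
      print(hsv)   # Hankel singular values guide the choice of reduced order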

  14. A robust quantitative near infrared modeling approach for blend monitoring.

    PubMed

    Mohan, Shikhar; Momose, Wataru; Katz, Jeffrey M; Hossain, Md Nayeem; Velez, Natasha; Drennen, James K; Anderson, Carl A

    2018-01-30

    This study demonstrates a material-sparing Near-Infrared modeling approach for powder blend monitoring. In this new approach, gram-scale powder mixtures are subjected to compression loads to simulate the effect of scale using an Instron universal testing system. Models prepared by the new method development approach (small-scale method) and by a traditional method development (blender-scale method) were compared by simultaneously monitoring a 1 kg batch-size blend run. Both models demonstrated similar model performance. The small-scale method strategy significantly reduces the total resources expended to develop Near-Infrared calibration models for on-line blend monitoring. Further, this development approach does not require the actual equipment (i.e., blender) to which the method will be applied, only a similar optical interface. Thus, a robust on-line blend monitoring method can be fully developed before any large-scale blending experiment is viable, allowing the blend method to be used during scale-up and blend development trials. Copyright © 2017. Published by Elsevier B.V.
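
    NIR calibration models for blend monitoring are commonly built with partial least squares (PLS) regression; the abstract does not name the algorithm, so the following is a generic hypothetical sketch with simulated spectra rather than the study's data:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_score

      # Hypothetical stand-in for NIR calibration: X = spectra (samples x
      # wavelengths), y = API concentration (%w/w). A real model would use
      # measured spectra from the compressed small-scale mixtures.
      rng = np.random.default_rng(7)
      n_samples, n_wavelengths = 40, 200
      conc = rng.uniform(5.0, 15.0, n_samples)
      band = np.exp(-0.5 * ((np.arange(n_wavelengths) - 100) / 15.0) ** 2)
      X = np.outer(conc, band) + rng.normal(scale=0.05, size=(n_samples, n_wavelengths))

      pls = PLSRegression(n_components=3)
      rmse = -cross_val_score(pls, X, conc, cv=5,
                              scoring="neg_root_mean_squared_error").mean()
      print("cross-validated RMSE (%w/w):", round(rmse, 3))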

  15. Large-scale model quality assessment for improving protein tertiary structure prediction.

    PubMed

    Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin

    2015-06-15

    Sampling structural models and ranking them are the two major challenges of protein structure prediction. Traditional protein structure prediction methods generally use one or a few quality assessment (QA) methods to select the best-predicted models, which cannot consistently select relatively better models and rank a large number of models well. Here, we develop a novel large-scale model QA method in conjunction with model clustering to rank and select protein structural models. It applied an unprecedented 14 model QA methods to generate consensus model rankings, followed by model refinement based on model combination (i.e. averaging). Our experiment demonstrates that the large-scale model QA approach is more consistent and robust in selecting models of better quality than any individual QA method. Our method was blindly tested during the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM group. It was officially ranked third out of all 143 human and server predictors according to the total scores of the first models predicted for 78 CASP11 protein domains and second according to the total scores of the best of the five models predicted for these domains. MULTICOM's outstanding performance in the extremely competitive 2014 CASP11 experiment proves that our large-scale QA approach together with model clustering is a promising solution to one of the two major problems in protein structure modeling. The web server is available at: http://sysbio.rnet.missouri.edu/multicom_cluster/human/. © The Author 2015. Published by Oxford University Press.

  16. A Lattice Boltzmann Fictitious Domain Method for Modeling Red Blood Cell Deformation and Multiple-Cell Hydrodynamic Interactions in Flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Xing; Lin, Guang; Zou, Jianfeng

    To model red blood cell (RBC) deformation in flow, the recently developed LBM-DLM/FD method (Shi and Lim, 2007) [29], derived from the lattice Boltzmann method and the distributed Lagrange multiplier/fictitious domain method, is extended to employ the mesoscopic network model for simulations of red blood cell deformation. The flow is simulated by the lattice Boltzmann method with an external force, while the network model is used for modeling red blood cell deformation; the fluid-RBC interaction is enforced by the Lagrange multiplier. Stretching numerical tests on both coarse and fine meshes are performed and compared with the corresponding experimental data to validate the parameters of the RBC network model. In addition, RBC deformation in pipe flow and in shear flow is simulated, revealing the capacity of the current method for modeling RBC deformation in various flows.

  17. Exploring Several Methods of Groundwater Model Selection

    NASA Astrophysics Data System (ADS)

    Samani, Saeideh; Ye, Ming; Asghari Moghaddam, Asghar

    2017-04-01

    Selecting reliable models for simulating groundwater flow and solute transport is essential to groundwater resources management and protection. This work explores several model selection methods for avoiding over-complex and/or over-parameterized groundwater models. We consider six groundwater flow models with different numbers (6, 10, 10, 13, 13 and 15) of model parameters. These models represent alternative geological interpretations, recharge estimates, and boundary conditions at a study site in Iran. The models were developed with Model Muse, and calibrated against observations of hydraulic head using UCODE. Model selection was conducted by using the following four approaches: (1) rank the models using their root mean square error (RMSE) obtained after UCODE-based model calibration, (2) calculate model probability using the GLUE method, (3) evaluate model probability using model selection criteria (AIC, AICc, BIC, and KIC), and (4) evaluate model weights using the Fuzzy Multi-Criteria-Decision-Making (MCDM) approach. MCDM is based on the fuzzy analytical hierarchy process (AHP) and the fuzzy technique for order performance, which identifies the ideal solution by a gradual expansion from the local to the global scale of model parameters. The KIC and MCDM methods are superior to the other methods, as they consider not only the fit between observed and simulated data and the number of parameters, but also uncertainty in model parameters. Considering these factors can prevent over-complexity and over-parameterization when selecting the appropriate groundwater flow model. These methods selected, as the best model, one with average complexity (10 parameters) and the best parameter estimation (model 3).

  18. Modeling the reflectance of the lunar regolith by a new method combining Monte Carlo Ray tracing and Hapke's model with application to Chang'E-1 IIM data.

    PubMed

    Wong, Un-Hong; Wu, Yunzhao; Wong, Hon-Cheng; Liang, Yanyan; Tang, Zesheng

    2014-01-01

    In this paper, we model the reflectance of the lunar regolith by a new method combining Monte Carlo ray tracing and Hapke's model. The existing modeling methods exploit either a radiative transfer model or a geometric optical model. However, the measured data from an Interference Imaging spectrometer (IIM) on an orbiter were affected not only by the composition of minerals but also by environmental factors. These factors cannot be well addressed by a single model alone. Our method implemented Monte Carlo ray tracing for simulating large-scale effects, such as reflection from the topography of the lunar soil, and Hapke's model for calculating the reflection intensity from the internal scattering of lunar soil particles. Therefore, both the large-scale and microscale effects are considered in our method, providing a more accurate modeling of the reflectance of the lunar regolith. Simulation results using the Lunar Soil Characterization Consortium (LSCC) data and the Chang'E-1 elevation map show that our method is effective and useful. We have also applied our method to Chang'E-1 IIM data for removing the influence of lunar topography on the reflectance of the lunar soil and generating more realistic visualizations of the lunar surface.

  19. Comparison of temporal and spectral scattering methods using acoustically large breast models derived from magnetic resonance images.

    PubMed

    Hesford, Andrew J; Tillett, Jason C; Astheimer, Jeffrey P; Waag, Robert C

    2014-08-01

    Accurate and efficient modeling of ultrasound propagation through realistic tissue models is important to many aspects of clinical ultrasound imaging. Simplified problems with known solutions are often used to study and validate numerical methods. Greater confidence in a time-domain k-space method and a frequency-domain fast multipole method is established in this paper by analyzing results for realistic models of the human breast. Models of breast tissue were produced by segmenting magnetic resonance images of ex vivo specimens into seven distinct tissue types. After confirming with histologic analysis by pathologists that the model structures mimicked in vivo breast, the tissue types were mapped to variations in sound speed and acoustic absorption. Calculations of acoustic scattering by the resulting model were performed on massively parallel supercomputer clusters using parallel implementations of the k-space method and the fast multipole method. The efficient use of these resources was confirmed by parallel efficiency and scalability studies using large-scale, realistic tissue models. Comparisons between the temporal and spectral results were performed in representative planes by Fourier transforming the temporal results. An RMS field error less than 3% throughout the model volume confirms the accuracy of the methods for modeling ultrasound propagation through human breast.

  20. MQAPRank: improved global protein model quality assessment by learning-to-rank.

    PubMed

    Jing, Xiaoyang; Dong, Qiwen

    2017-05-25

    Protein structure prediction has achieved much progress during the last few decades, and an increasing number of models can be predicted for a given sequence. Consequently, assessing the quality of predicted protein models is one of the key components of successful protein structure prediction. Over the past years, a number of methods have been developed to address this issue, which could be roughly divided into three categories: single methods, quasi-single methods and clustering (or consensus) methods. Although these methods achieve much success at different levels, accurate protein model quality assessment is still an open problem. Here, we present MQAPRank, a global protein model quality assessment program based on learning-to-rank. MQAPRank first sorts the decoy models by using a single-model method based on a learning-to-rank algorithm to indicate their relative qualities for the target protein. It then takes the first five models as references and predicts the quality of the remaining models from their average GDT_TS scores against the reference models. Benchmarked on the CASP11 and 3DRobot datasets, MQAPRank achieved better performance than other leading protein model quality assessment methods. Recently, MQAPRank participated in CASP12 under the group name FDUBio and achieved state-of-the-art performance. MQAPRank provides a convenient and powerful tool for protein model quality assessment with state-of-the-art performance, and is useful for protein structure prediction and model quality assessment.
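
    The reference-averaging step described above is easy to sketch. In the fragment below (illustrative numpy only; the similarity matrix is random rather than real GDT_TS scores, and the learning-to-rank scorer is replaced by a simple stand-in), models are re-scored by their mean similarity to the top five initially ranked references:

      import numpy as np

      def reference_average_scores(similarity, ranked_ids, k=5):
          # Score each model by its average similarity to the top-k references
          # from an initial ranking (GDT_TS and a learning-to-rank sort in
          # MQAPRank; here any precomputed matrix and any initial ranking).
          refs = ranked_ids[:k]
          return similarity[:, refs].mean(axis=1)

      # Toy example: a random symmetric 'GDT_TS'-like matrix for 10 decoys
      rng = np.random.default_rng(3)
      M = rng.uniform(0.2, 0.9, size=(10, 10))
      sim = (M + M.T) / 2
      np.fill_diagonal(sim, 1.0)
      initial_ranking = np.argsort(-sim.mean(axis=1))   # stand-in for the LTR sort
      final_scores = reference_average_scores(sim, initial_ranking)
      print(np.argsort(-final_scores))                  # consensus re-ranking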

  1. Accurate, efficient, and (iso)geometrically flexible collocation methods for phase-field models

    NASA Astrophysics Data System (ADS)

    Gomez, Hector; Reali, Alessandro; Sangalli, Giancarlo

    2014-04-01

    We propose new collocation methods for phase-field models. Our algorithms are based on isogeometric analysis, a new technology that makes use of functions from computational geometry, such as, for example, Non-Uniform Rational B-Splines (NURBS). NURBS exhibit excellent approximability and controllable global smoothness, and can represent exactly most geometries encapsulated in Computer Aided Design (CAD) models. These attributes permitted us to derive accurate, efficient, and geometrically flexible collocation methods for phase-field models. The performance of our method is demonstrated by several numerical examples of phase separation modeled by the Cahn-Hilliard equation. We feel that our method successfully combines the geometrical flexibility of finite elements with the accuracy and simplicity of pseudo-spectral collocation methods, and is a viable alternative to classical collocation methods.

  2. Transfer Function Identification Using Orthogonal Fourier Transform Modeling Functions

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2013-01-01

    A method for transfer function identification, including both model structure determination and parameter estimation, was developed and demonstrated. The approach uses orthogonal modeling functions generated from frequency domain data obtained by Fourier transformation of time series data. The method was applied to simulation data to identify continuous-time transfer function models and unsteady aerodynamic models. Model fit error, estimated model parameters, and the associated uncertainties were used to show the effectiveness of the method for identifying accurate transfer function models from noisy data.
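
    A closely related (though simpler) frequency-domain identification step can be illustrated as follows; this is a standard cross-spectral H1 estimate on simulated data, not the paper's orthogonal modeling-function algorithm:

      import numpy as np
      from scipy import signal

      # Simulate input-output data through a known discrete-time system, then
      # recover its frequency response from cross/auto spectra (H1 estimate).
      rng = np.random.default_rng(5)
      fs = 100.0
      x = rng.normal(size=20000)                      # broadband input
      b, a = signal.butter(2, 10.0, fs=fs)            # "unknown" plant: low-pass
      y = signal.lfilter(b, a, x) + 0.05 * rng.normal(size=x.size)

      f, Pxx = signal.welch(x, fs=fs, nperseg=1024)   # input auto-spectrum
      _, Pxy = signal.csd(x, y, fs=fs, nperseg=1024)  # input-output cross-spectrum
      H = Pxy / Pxx                                   # empirical transfer function

      # Compare against the true response of the simulated plant
      _, H_true = signal.freqz(b, a, worN=f, fs=fs)
      print(np.max(np.abs(np.abs(H) - np.abs(H_true))))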

  3. Microscopic vision modeling method by direct mapping analysis for micro-gripping system with stereo light microscope.

    PubMed

    Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai

    2016-04-01

    We present a novel and high-precision microscopic vision modeling method, which can be used for 3D data reconstruction in a micro-gripping system with a stereo light microscope. This method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model and a residual compensation model. First, the method of image distortion correction is proposed. The image data required for image distortion correction come from stereo images of a calibration sample. The geometric features of image distortions can be predicted through the shape deformation of lines constructed by grid points in stereo images. Linear and polynomial fitting methods are applied to correct image distortions. Second, shape deformation features of the disparity distribution are discussed, and the method of disparity distortion correction is proposed. A polynomial fitting method is applied to correct disparity distortion. Third, a microscopic vision model is derived, which consists of two models, i.e., an initial vision model and a residual compensation model. We derive the initial vision model by analyzing the direct mapping relationship between object and image points. The residual compensation model is derived based on the residual analysis of the initial vision model. The results show that with a maximum reconstruction distance of 4.1 mm in the X direction, 2.9 mm in the Y direction and 2.25 mm in the Z direction, our model achieves a precision of 0.01 mm in the X and Y directions and 0.015 mm in the Z direction. Comparison of our model with the traditional pinhole camera model shows that the two models have similar reconstruction precision for X coordinates. However, the traditional pinhole camera model has lower precision for Y and Z coordinates than our model. The method proposed in this paper is very helpful for micro-gripping systems based on SLM microscopic vision. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Advanced Computational Methods for High-accuracy Refinement of Protein Low-quality Models

    NASA Astrophysics Data System (ADS)

    Zang, Tianwu

    Predicting the 3-dimensional structure of a protein has been a major interest in modern computational biology. While many successful methods can generate models with 3-5 Å root-mean-square deviation (RMSD) from the solution, progress in refining these models is quite slow. It is therefore urgently needed to develop effective methods to bring low-quality models into higher-accuracy ranges (e.g., less than 2 Å RMSD). In this thesis, I present several novel computational methods to address the high-accuracy refinement problem. First, an enhanced sampling method, named parallel continuous simulated tempering (PCST), is developed to accelerate molecular dynamics (MD) simulation. Second, two energy biasing methods, Structure-Based Model (SBM) and Ensemble-Based Model (EBM), are introduced to perform targeted sampling around important conformations. Third, a three-step method is developed to blindly select high-quality models along the MD simulation. These methods work together to make significant refinement of low-quality models without any knowledge of the solution. The effectiveness of these methods is examined in different applications. Using the PCST-SBM method, models with higher global distance test scores (GDT_TS) are generated and selected in the MD simulation of 18 targets from the refinement category of the 10th Critical Assessment of Structure Prediction (CASP10). In addition, in the refinement test of two CASP10 targets using the PCST-EBM method, it is indicated that EBM may bring the initial model to even higher-quality levels. Furthermore, a multi-round refinement protocol of PCST-SBM improves the model quality of a protein to a level that is sufficiently high for molecular replacement in X-ray crystallography. Our results justify the crucial position of enhanced sampling in protein structure prediction and demonstrate that a considerable improvement of low-accuracy structures is still achievable with current force fields.

  5. Method of sound synthesis

    DOEpatents

    Miner, Nadine E.; Caudell, Thomas P.

    2004-06-08

    A sound synthesis method for modeling and synthesizing dynamic, parameterized sounds. The sound synthesis method yields perceptually convincing sounds and provides flexibility through model parameterization. By manipulating model parameters, a variety of related, but perceptually different sounds can be generated. The result is subtle changes in sounds, in addition to synthesis of a variety of sounds, all from a small set of models. The sound models can change dynamically according to changes in the simulation environment. The method is applicable to both stochastic (impulse-based) and non-stochastic (pitched) sounds.

  6. Review of Statistical Methods for Analysing Healthcare Resources and Costs

    PubMed Central

    Mihaylova, Borislava; Briggs, Andrew; O'Hagan, Anthony; Thompson, Simon G

    2011-01-01

    We review statistical methods for analysing healthcare resource use and costs, their ability to address skewness, excess zeros, multimodality and heavy right tails, and their ease for general use. We aim to provide guidance on analysing resource use and costs focusing on randomised trials, although methods often have wider applicability. Twelve broad categories of methods were identified: (I) methods based on the normal distribution, (II) methods following transformation of data, (III) single-distribution generalized linear models (GLMs), (IV) parametric models based on skewed distributions outside the GLM family, (V) models based on mixtures of parametric distributions, (VI) two (or multi)-part and Tobit models, (VII) survival methods, (VIII) non-parametric methods, (IX) methods based on truncation or trimming of data, (X) data components models, (XI) methods based on averaging across models, and (XII) Markov chain methods. Based on this review, our recommendations are that, first, simple methods are preferred in large samples where the near-normality of sample means is assured. Second, in somewhat smaller samples, relatively simple methods, able to deal with one or two of above data characteristics, may be preferable but checking sensitivity to assumptions is necessary. Finally, some more complex methods hold promise, but are relatively untried; their implementation requires substantial expertise and they are not currently recommended for wider applied work. Copyright © 2010 John Wiley & Sons, Ltd. PMID:20799344
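
    As a small illustration of one recommended family, the sketch below (hypothetical simulated costs; the category labels refer to the review's taxonomy) fits a two-part model: logistic regression for the probability of any cost, and a log-scale linear model for the positive costs:

      import numpy as np
      import statsmodels.api as sm

      # Two-part model for costs with excess zeros (category VI): part 1 models
      # P(any cost) with logistic regression; part 2 models the positive costs
      # on the log scale (a category II transformation approach).
      rng = np.random.default_rng(9)
      n = 500
      x = rng.normal(size=n)
      X = sm.add_constant(x)
      any_cost = rng.random(n) < 1.0 / (1.0 + np.exp(-(0.3 + 0.8 * x)))
      cost = np.where(any_cost,
                      np.exp(6.0 + 0.5 * x + rng.normal(scale=0.8, size=n)),
                      0.0)

      part1 = sm.Logit(any_cost.astype(float), X).fit(disp=0)
      pos = cost > 0
      part2 = sm.OLS(np.log(cost[pos]), X[pos]).fit()

      # Expected cost combines both parts (lognormal retransformation factor)
      expected = part1.predict(X) * np.exp(part2.predict(X) + 0.5 * part2.scale)
      print(expected[:5].round(1))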

  7. A new ghost-node method for linking different models and initial investigations of heterogeneity and nonmatching grids

    USGS Publications Warehouse

    Dickinson, J.E.; James, S.C.; Mehl, S.; Hill, M.C.; Leake, S.A.; Zyvoloski, G.A.; Faunt, C.C.; Eddebbarh, A.-A.

    2007-01-01

    A flexible, robust method for linking parent (regional-scale) and child (local-scale) grids of locally refined models that use different numerical methods is developed based on a new, iterative ghost-node method. Tests are presented for two-dimensional and three-dimensional pumped systems that are homogeneous or that have simple heterogeneity. The parent and child grids are simulated using the block-centered finite-difference MODFLOW and control-volume finite-element FEHM models, respectively. The models are solved iteratively through head-dependent (child model) and specified-flow (parent model) boundary conditions. Boundary conditions for models with nonmatching grids or zones of different hydraulic conductivity are derived and tested against heads and flows from analytical or globally-refined models. Results indicate that for homogeneous two- and three-dimensional models with matched grids (integer number of child cells per parent cell), the new method is nearly as accurate as the coupling of two MODFLOW models using the shared-node method and, surprisingly, errors are slightly lower for nonmatching grids (noninteger number of child cells per parent cell). For heterogeneous three-dimensional systems, this paper compares two methods for each of the two sets of boundary conditions: external heads at head-dependent boundary conditions for the child model are calculated using bilinear interpolation or a Darcy-weighted interpolation; specified-flow boundary conditions for the parent model are calculated using model-grid or hydrogeologic-unit hydraulic conductivities. Results suggest that significantly more accurate heads and flows are produced when both Darcy-weighted interpolation and hydrogeologic-unit hydraulic conductivities are used, while the other methods produce larger errors at the boundary between the regional and local models. The tests suggest that, if posed correctly, the ghost-node method performs well. Additional testing is needed for highly heterogeneous systems. © 2007 Elsevier Ltd. All rights reserved.

  8. Use of system identification techniques for improving airframe finite element models using test data

    NASA Technical Reports Server (NTRS)

    Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.

    1991-01-01

    A method for using system identification techniques to improve airframe finite element models was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.

  9. Gear fatigue crack prognosis using embedded model, gear dynamic model and fracture mechanics

    NASA Astrophysics Data System (ADS)

    Li, C. James; Lee, Hyungdae

    2005-07-01

    This paper presents a model-based method that predicts the remaining useful life of a gear with a fatigue crack. The method consists of an embedded model to identify gear meshing stiffness from measured gear torsional vibration; an inverse method to estimate crack size from the estimated meshing stiffness; a gear dynamic model to simulate gear meshing dynamics and determine the dynamic load on the cracked tooth; and a fast crack propagation model to forecast the remaining useful life based on the estimated crack size and dynamic load. The fast crack propagation model was established to avoid repeated finite element (FEM) calculations and facilitate field deployment of the proposed method. Experimental studies were conducted to validate and demonstrate the feasibility of the proposed method for prognosis of a cracked gear.
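
    The fast crack propagation model stands in for repeated fracture-mechanics simulation; a common closed-form ingredient for such estimates is Paris' law. The sketch below (with illustrative, steel-like constants, not values identified from the paper) integrates it to estimate remaining life:

      import numpy as np
      from scipy.integrate import trapezoid

      def remaining_cycles(a0, ac, dsigma, C=3e-12, m=3.0, Y=1.0, n=10000):
          # Integrate Paris' law da/dN = C * (dK)^m from initial crack size a0
          # to critical size ac, with dK = Y * dsigma * sqrt(pi * a).
          # Units: a in metres, dsigma in MPa, so dK is in MPa*sqrt(m);
          # C, m, Y are placeholder constants, not values from the paper.
          a = np.linspace(a0, ac, n)
          dK = Y * dsigma * np.sqrt(np.pi * a)
          cycles_per_metre = 1.0 / (C * dK ** m)
          return trapezoid(cycles_per_metre, a)

      # Example: a 0.1 mm tooth-root crack growing to 2 mm under a 200 MPa range
      print(f"{remaining_cycles(a0=0.1e-3, ac=2.0e-3, dsigma=200.0):.3g} cycles")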

  10. PconsD: ultra rapid, accurate model quality assessment for protein structure prediction.

    PubMed

    Skwark, Marcin J; Elofsson, Arne

    2013-07-15

    Clustering methods are often needed for accurately assessing the quality of modeled protein structures. Recent blind evaluation of quality assessment methods in CASP10 showed that there is little difference between many different methods as far as ranking models and selecting the best model are concerned. When comparing many models, the computational cost of the model comparison can become significant. Here, we present PconsD, a fast, stream-computing method for distance-driven model quality assessment that runs on consumer hardware. PconsD is at least one order of magnitude faster than other methods of comparable accuracy. The source code for PconsD is freely available at http://d.pcons.net/. Supplementary benchmarking data are also available there. arne@bioinfo.se Supplementary data are available at Bioinformatics online.

  11. Development of modelling method selection tool for health services management: from problem structuring methods to modelling and simulation methods.

    PubMed

    Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P

    2011-05-19

    There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, the current use is limited and answers to questions such as what methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and four input resources required (time, money, knowledge and data). The characterisation is presented in matrix forms to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills are scarce, let alone money and time. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular, it addresses the issue of which method is most appropriate to which specific health services management problem, what the user might expect to be obtained from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on methods comparison and selection.

  12. Designing Class Methods from Dataflow Diagrams

    NASA Astrophysics Data System (ADS)

    Shoval, Peretz; Kabeli-Shani, Judith

    A method for designing the class methods of an information system is described. The method is part of FOOM - Functional and Object-Oriented Methodology. In the analysis phase of FOOM, two models defining the users' requirements are created: a conceptual data model - an initial class diagram; and a functional model - hierarchical OO-DFDs (object-oriented dataflow diagrams). Based on these models, a well-defined process of methods design is applied. First, the OO-DFDs are converted into transactions, i.e., system processes that support user tasks. The components and the process logic of each transaction are described in detail, using pseudocode. Then, each transaction is decomposed, according to well-defined rules, into class methods of various types: basic methods, application-specific methods and main transaction (control) methods. Each method is attached to a proper class; messages between methods express the process logic of each transaction. The methods are defined using pseudocode or message charts.

  13. Comparison of Multiscale Method of Cells-Based Models for Predicting Elastic Properties of Filament Wound C/C-SiC

    NASA Technical Reports Server (NTRS)

    Pineda, Evan J.; Fassin, Marek; Bednarcyk, Brett A.; Reese, Stefanie; Simon, Jaan-Willem

    2017-01-01

    Three different multiscale models, based on the method of cells (generalized and high-fidelity) micromechanics models, were developed and used to predict the elastic properties of C/C-SiC composites. In particular, the following multiscale modeling strategies were employed: concurrent multiscale modeling of all phases using the generalized method of cells, synergistic (two-way coupling in space) multiscale modeling with the generalized method of cells, and hierarchical (one-way coupling in space) multiscale modeling with the high-fidelity generalized method of cells. The three models are validated against data from a hierarchical multiscale finite element model in the literature for a repeating unit cell of C/C-SiC. Furthermore, the multiscale models are used in conjunction with classical lamination theory to predict the stiffness of C/C-SiC plates manufactured via a wet filament winding and liquid silicon infiltration process recently developed by the German Aerospace Institute.

  14. 3D anisotropic modeling and identification for airborne EM systems based on the spectral-element method

    NASA Astrophysics Data System (ADS)

    Huang, Xin; Yin, Chang-Chun; Cao, Xiao-Yue; Liu, Yun-He; Zhang, Bo; Cai, Jing

    2017-09-01

    The airborne electromagnetic (AEM) method has a high sampling rate and survey flexibility. However, traditional numerical modeling approaches must use high-resolution physical grids to guarantee modeling accuracy, especially for complex geological structures such as anisotropic earth. This can lead to huge computational costs. To solve this problem, we propose a spectral-element (SE) method for 3D AEM anisotropic modeling, which combines the advantages of spectral and finite-element methods. Thus, the SE method has accuracy as high as that of the spectral method and the ability to model complex geology inherited from the finite-element method. The SE method can improve the modeling accuracy within discrete grids and reduce the dependence of modeling results on the grids. This helps achieve high-accuracy anisotropic AEM modeling. We first introduced a rotating tensor of anisotropic conductivity to Maxwell's equations and described the electric field via SE basis functions based on GLL interpolation polynomials. We used the Galerkin weighted residual method to establish the linear equation system for the SE method, and we took a vertical magnetic dipole as the transmission source for our AEM modeling. We then applied fourth-order SE calculations with coarse physical grids to check the accuracy of our modeling results against a 1D semi-analytical solution for an anisotropic half-space model and verified the high accuracy of the SE method. Moreover, we conducted AEM modeling for different anisotropic 3D abnormal bodies using two physical grid scales and three orders of SE to obtain the convergence conditions for different anisotropic abnormal bodies. Finally, we studied the identification of anisotropy for single anisotropic abnormal bodies, anisotropic surrounding rock, and a single anisotropic abnormal body embedded in an anisotropic surrounding rock. This approach will play a key role in the inversion and interpretation of AEM data collected in regions with anisotropic geology.
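
    The GLL interpolation points underpinning the SE basis are straightforward to compute; the sketch below (a generic utility, not the authors' code) builds the nodes and quadrature weights for one element:

      import numpy as np
      from numpy.polynomial import legendre

      def gll_nodes_weights(n):
          # Gauss-Lobatto-Legendre nodes and quadrature weights on [-1, 1].
          # Nodes are the endpoints plus the roots of P_n'(x); weights follow
          # w_i = 2 / (n (n + 1) P_n(x_i)^2). SE basis functions are the
          # Lagrange interpolants through these nodes.
          Pn = legendre.Legendre.basis(n)
          interior = Pn.deriv().roots().real
          x = np.concatenate(([-1.0], np.sort(interior), [1.0]))
          w = 2.0 / (n * (n + 1) * Pn(x) ** 2)
          return x, w

      x, w = gll_nodes_weights(4)   # fourth-order element: 5 nodes
      print(x)                      # includes 0 and +/- sqrt(3/7)
      print(w.sum())                # weights integrate constants exactly: 2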

  15. On the selection of ordinary differential equation models with application to predator-prey dynamical models.

    PubMed

    Zhang, Xinyu; Cao, Jiguo; Carroll, Raymond J

    2015-03-01

    We consider model selection and estimation in a context where there are competing ordinary differential equation (ODE) models, and all the models are special cases of a "full" model. We propose a computationally inexpensive approach that employs statistical estimation of the full model, followed by a combination of a least squares approximation (LSA) and the adaptive Lasso. We show the resulting method, here called the LSA method, to be an (asymptotically) oracle model selection method. The finite sample performance of the proposed LSA method is investigated with Monte Carlo simulations, in which we examine the percentage of selecting true ODE models, the efficiency of the parameter estimation compared to simply using the full and true models, and coverage probabilities of the estimated confidence intervals for ODE parameters, all of which show satisfactory performance. Our method is also demonstrated by selecting the best predator-prey ODE to model a lynx and hare population dynamical system among some well-known and biologically interpretable ODE models. © 2014, The International Biometric Society.
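
    The LSA-plus-adaptive-Lasso step can be illustrated on a plain linear model (the ODE-specific LSA linearization is omitted; the data and penalty level are hypothetical):

      import numpy as np
      from sklearn.linear_model import Lasso, LinearRegression

      # Adaptive Lasso: fit the full model once, then shrink with data-driven
      # penalty weights, mimicking the LSA idea of one full-model fit.
      rng = np.random.default_rng(11)
      n, p = 200, 6
      X = rng.normal(size=(n, p))
      beta_true = np.array([1.5, 0.0, -2.0, 0.0, 0.0, 0.8])
      y = X @ beta_true + rng.normal(scale=0.5, size=n)

      beta_full = LinearRegression(fit_intercept=False).fit(X, y).coef_
      w = 1.0 / np.abs(beta_full)        # adaptive weights, gamma = 1
      Xs = X / w                         # column rescaling absorbs the weights
      lasso = Lasso(alpha=0.05, fit_intercept=False).fit(Xs, y)
      beta_adaptive = lasso.coef_ / w    # map back to the original scale
      print(np.round(beta_adaptive, 3))  # zeros select the sparse model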

  16. Intercomparison of methods of coupling between convection and large-scale circulation: 2. Comparison over nonuniform surface conditions

    DOE PAGES

    Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; ...

    2016-03-18

    As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. Lastly, these large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.

  17. Intercomparison of methods of coupling between convection and large‐scale circulation: 2. Comparison over nonuniform surface conditions

    PubMed Central

    Plant, R. S.; Woolnough, S. J.; Sessions, S.; Herman, M. J.; Sobel, A.; Wang, S.; Kim, D.; Cheng, A.; Bellon, G.; Peyrille, P.; Ferry, F.; Siebesma, P.; van Ulft, L.

    2016-01-01

    As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large‐scale dynamics in a set of cloud‐resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative‐convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large‐scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column‐relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large‐scale velocity profiles which are smoother and less top‐heavy compared to those produced by the WTG simulations. These large‐scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two‐way feedback between convection and the large‐scale circulation. PMID:27642501

  18. Calculation methods study on hot spot stress of new girder structure detail

    NASA Astrophysics Data System (ADS)

    Liao, Ping; Zhao, Renda; Jia, Yi; Wei, Xing

    2017-10-01

    To study modeling and calculation methods for the hot spot stress of a new girder structure detail, several finite element models of this welded detail were established in the finite element software ANSYS, based on the surface extrapolation variant of the hot spot stress method. The influence of element type, mesh density, local weld-toe modeling method, and extrapolation method on the calculated hot spot stress at the weld toe was analyzed. The results show that the difference in the normal stress between the thickness direction and the surface direction among different models is larger when the distance from the weld toe is smaller. When the distance from the toe is greater than 0.5t, the normal stress of solid models, shell models with welds, and shell models without welds tends to be consistent along the surface direction. Therefore, it is recommended that the extrapolation points be selected beyond 0.5t for this new girder welded detail. According to the calculation and analysis results, shell models have good grid stability, and the extrapolated hot spot stress of solid models is smaller than that of shell models, so it is suggested that formula 2 and the solid45 element be used in the hot spot stress extrapolation calculation for this welded detail. For each finite element model under the different shell modeling methods, the results calculated by formula 2 are smaller than those of the other two methods, and the results of shell models with welds are the largest. Under the same local mesh density, the extrapolated hot spot stress decreases gradually as the number of layers in the thickness direction of the main plate increases, and the variation range is within 7.5%.
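
    The surface extrapolation at the heart of the hot spot stress method is a one-line formula. The sketch below uses the common 0.4t/1.0t read-out points as an assumption, since the abstract's "formula 2" is not defined here, and extrapolates FE surface stresses linearly to the weld toe:

      import numpy as np

      def hot_spot_stress(distances, stresses, points=(0.4, 1.0), t=10.0):
          # Linear surface extrapolation of stress to the weld toe (distance 0).
          # Surface stresses are read at the two extrapolation distances
          # (0.4t and 1.0t here, a common convention; the paper argues for
          # read-out points beyond 0.5t for this detail).
          d1, d2 = points[0] * t, points[1] * t
          s1, s2 = np.interp([d1, d2], distances, stresses)
          return s1 + (s1 - s2) * d1 / (d2 - d1)   # value of the line at d = 0

      # Hypothetical FE surface stresses (MPa) vs distance from the toe (mm)
      dist = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
      sigma = np.array([182.0, 160.0, 148.0, 140.0, 134.0, 130.0])
      print(hot_spot_stress(dist, sigma, t=10.0))   # 1.67*s(0.4t) - 0.67*s(1.0t)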

  19. Tensor renormalization group methods for spin and gauge models

    NASA Astrophysics Data System (ADS)

    Zou, Haiyuan

    The analysis of the error of a perturbative series, by comparing it to the exact solution, is an important tool for understanding the non-perturbative physics of statistical models. For some toy models, a new method can be used to calculate higher-order weak coupling expansions and to construct modified perturbation theory. However, it is nontrivial to generalize this method to understand the critical behavior of high-dimensional spin and gauge models; indeed, developing accurate and efficient numerical algorithms for these problems is a major challenge in both high energy physics and condensed matter physics. In this thesis, one systematic approach, the tensor renormalization group method, is discussed, and its applications to several spin and gauge models on a lattice are investigated. Theoretically, the method allows one to write an exact representation of the partition function of models with local interactions, e.g., O(N) models, Z2 gauge models, and U(1) gauge models. Practically, by using controllable approximations, results can be obtained both in finite volume and in the thermodynamic limit. Another advantage of the method is that it is insensitive to sign problems for models with complex couplings and chemical potentials. Through the new approach, the Fisher zeros of the 2D O(2) model in the complex coupling plane can be calculated, and the finite-size scaling of the results agrees well with the Kosterlitz-Thouless assumption. Applying the method to the O(2) model with a chemical potential yields a new phase diagram for the model. The structure of the tensor language may provide a new tool to understand phase transition properties in general.
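
    As a concrete illustration of the "exact representation of the partition function" mentioned above, the sketch below builds the standard rank-4 local tensor for the 2D Ising model and checks the tensor-network contraction against brute-force enumeration on a tiny periodic lattice; it shows the starting point of TRG, not the coarse-graining itself, and all values are illustrative:

```python
import numpy as np
from itertools import product

beta = 0.4
c, s = np.cosh(beta), np.sinh(beta)
W = np.array([[np.sqrt(c),  np.sqrt(s)],
              [np.sqrt(c), -np.sqrt(s)]])       # exp(beta*s*s') = W @ W.T
T = np.einsum('au,ar,ad,al->urdl', W, W, W, W)  # rank-4 site tensor (up,right,down,left)

# Contract the tensor network on a 2x2 torus; letters a..h are the 8 bond indices.
Z_tn = np.einsum('ceaf,dfbe,agch,bhdg->', T, T, T, T)

# Brute force: each site couples to its right and down neighbour (periodic).
Z_bf = 0.0
for spins in product([1, -1], repeat=4):
    g = np.array(spins).reshape(2, 2)
    E = sum(g[r, k] * (g[r, (k + 1) % 2] + g[(r + 1) % 2, k])
            for r in range(2) for k in range(2))
    Z_bf += np.exp(beta * E)

print(Z_tn, Z_bf)   # the two values agree
```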

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daleu, C. L.; Plant, R. S.; Woolnough, S. J.

    As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. Lastly, these large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.

  1. Model-free simulations of turbulent reactive flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman

    1989-01-01

    The current computational methods for solving transport equations of turbulent reacting single-phase flows are critically reviewed, with primary attention given to those methods that lead to model-free simulations. In particular, consideration is given to direct numerical simulations using spectral (Galerkin) and pseudospectral (collocation) methods, spectral element methods, and Lagrangian methods. The discussion also covers large eddy simulations and turbulence modeling.

  2. Three-dimensional forward modeling of DC resistivity using the aggregation-based algebraic multigrid method

    NASA Astrophysics Data System (ADS)

    Chen, Hui; Deng, Ju-Zhi; Yin, Min; Yin, Chang-Chun; Tang, Wen-Wu

    2017-03-01

    To speed up three-dimensional (3D) DC resistivity modeling, we present a new multigrid method, the aggregation-based algebraic multigrid method (AGMG). We first discretize the differential equation of the secondary potential field with mixed boundary conditions by using a seven-point finite-difference method to obtain a large sparse system of linear equations. Then, we introduce the theory behind the pairwise aggregation algorithms for AGMG and use the conjugate-gradient method with the V-cycle AGMG preconditioner (AGMG-CG) to solve the linear equations. We use typical geoelectrical models to test the proposed AGMG-CG method and compare the results with analytical solutions and with the 3DDCXH algorithm for 3D DC modeling. In addition, we apply the AGMG-CG method to different grid sizes and geoelectrical models and compare it to different iterative methods, such as ILU-BICGSTAB, ILU-GCR, and SSOR-CG. The AGMG-CG method yields nearly linearly decreasing errors, whereas the number of iterations increases only slowly with increasing grid size. The AGMG-CG method is precise and converges fast, and thus can improve the computational efficiency of three-dimensional DC resistivity forward modeling.
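
    The following sketch shows the general pattern of AMG-preconditioned CG in Python using the pyamg package; note that pyamg provides smoothed aggregation (the same algorithmic family, not Notay's pairwise-aggregation AGMG used in the paper), and the Poisson matrix merely stands in for the actual seven-point geoelectrical system:

```python
import numpy as np
import pyamg                          # algebraic multigrid (smoothed aggregation)
from scipy.sparse.linalg import cg

# A 2-D Poisson matrix stands in for the seven-point finite-difference system
# of the secondary potential; a real run would assemble the matrix from the
# conductivity model instead.
A = pyamg.gallery.poisson((200, 200), format='csr')
b = np.random.default_rng(0).standard_normal(A.shape[0])

ml = pyamg.smoothed_aggregation_solver(A)   # build the AMG hierarchy
M = ml.aspreconditioner(cycle='V')          # V-cycle used as a CG preconditioner

iters = 0
def count(_):
    global iters
    iters += 1

x, info = cg(A, b, M=M, callback=count)
print(info, iters, np.linalg.norm(A @ x - b))   # few iterations, small residual
```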

  3. Efficient simulation and likelihood methods for non-neutral multi-allele models.

    PubMed

    Joyce, Paul; Genz, Alan; Buzbas, Erkan Ozge

    2012-06-01

    Throughout the 1980s, Simon Tavaré made numerous significant contributions to population genetics theory. As genetic data, in particular DNA sequence, became more readily available, a need to connect population-genetic models to data became the central issue. The seminal work of Griffiths and Tavaré (1994a, 1994b, 1994c) was among the first to develop a likelihood method to estimate the population-genetic parameters using full DNA sequences. Now, we are in the genomics era where methods need to scale up to handle massive data sets, and Tavaré has led the way to new approaches. However, performing statistical inference under non-neutral models has proved elusive. In tribute to Simon Tavaré, we present an article in the spirit of his work that provides a computationally tractable method for simulating and analyzing data under a class of non-neutral population-genetic models. Computational methods for approximating likelihood functions and generating samples under a class of allele-frequency based non-neutral parent-independent mutation models were proposed by Donnelly, Nordborg, and Joyce (DNJ) (Donnelly et al., 2001). DNJ (2001) simulated samples of allele frequencies from non-neutral models using neutral models as the auxiliary distribution in a rejection algorithm. However, patterns of allele frequencies produced by neutral models are dissimilar to patterns produced by non-neutral models, making the rejection method inefficient; in some cases the methods in DNJ (2001) require 10^9 rejections before a sample from the non-neutral model is accepted. Our method simulates samples directly from the distribution of non-neutral models, making simulation methods a practical tool to study the behavior of the likelihood and to perform inference on the strength of selection.
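
    A toy version of the rejection idea described above, assuming a target density proportional to a neutral Beta density times exp(s*x) for selection strength s (an illustrative stand-in, not the DNJ model itself), shows how the rejection count blows up with s:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_selected_freq(a, b, s, n=1000):
    """Rejection-sample allele frequencies with density ~ Beta(a,b) * exp(s*x),
    using the neutral Beta(a,b) as the auxiliary distribution (s >= 0).
    Returns the samples and the number of rejections."""
    out, rejections = [], 0
    while len(out) < n:
        x = rng.beta(a, b)
        # Envelope: exp(s*x) <= exp(s) on (0,1), so accept with prob exp(s*(x-1)).
        if rng.random() < np.exp(s * (x - 1.0)):
            out.append(x)
        else:
            rejections += 1
    return np.array(out), rejections

for s in (1.0, 10.0, 50.0):
    _, rej = sample_selected_freq(0.5, 0.5, s)
    print(s, rej)   # rejections grow rapidly with the strength of selection
```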

  4. Adaptation to Climate Change: A Comparative Analysis of Modeling Methods for Heat-Related Mortality.

    PubMed

    Gosling, Simon N; Hondula, David M; Bunker, Aditi; Ibarreta, Dolores; Liu, Junguo; Zhang, Xinxin; Sauerborn, Rainer

    2017-08-16

    Multiple methods are employed for modeling adaptation when projecting the impact of climate change on heat-related mortality. The sensitivity of impacts to each is unknown because they have never been systematically compared. In addition, little is known about the relative sensitivity of impacts to "adaptation uncertainty" (i.e., the inclusion/exclusion of adaptation modeling) relative to using multiple climate models and emissions scenarios. This study had three aims: (a) compare the range in projected impacts that arises from using different adaptation modeling methods; (b) compare the range in impacts that arises from adaptation uncertainty with ranges from using multiple climate models and emissions scenarios; (c) recommend modeling method(s) to use in future impact assessments. We estimated impacts for 2070-2099 for 14 European cities, applying six different methods for modeling adaptation; we also estimated impacts with five climate models run under two emissions scenarios to explore the relative effects of climate modeling and emissions uncertainty. The range of the difference (percent) in impacts between including and excluding adaptation, irrespective of climate modeling and emissions uncertainty, can be as low as 28% with one method and up to 103% with another (mean across 14 cities). In 13 of 14 cities, the ranges in projected impacts due to adaptation uncertainty are larger than those associated with climate modeling and emissions uncertainty. Researchers should carefully consider how to model adaptation because it is a source of uncertainty that can be greater than the uncertainty in emissions and climate modeling. We recommend absolute threshold shifts and reductions in slope. https://doi.org/10.1289/EHP634.

  5. Numerical bifurcation analysis of immunological models with time delays

    NASA Astrophysics Data System (ADS)

    Luzyanina, Tatyana; Roose, Dirk; Bocharov, Gennady

    2005-12-01

    In recent years, a large number of mathematical models that are described by delay differential equations (DDEs) have appeared in the life sciences. To analyze the models' dynamics, numerical methods are necessary, since analytical studies can only give limited results. In turn, the availability of efficient numerical methods and software packages encourages the use of time delays in mathematical modelling, which may lead to more realistic models. We outline recently developed numerical methods for bifurcation analysis of DDEs and illustrate the use of these methods in the analysis of a mathematical model of human hepatitis B virus infection.

  6. Inattentive Drivers: Making the Solution Method the Model

    ERIC Educational Resources Information Center

    McCartney, Mark

    2003-01-01

    A simple car following model based on the solution of coupled ordinary differential equations is considered. The model is solved using Euler's method and this method of solution is itself interpreted as a mathematical model for car following. Examples of possible classroom use are given.
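
    A minimal sketch of the idea, assuming a generic linear follow-the-leader law (not necessarily the article's exact model), integrated with Euler's method:

```python
import numpy as np

# Euler integration of a simple linear car-following law:
#   dv/dt = k * (v_leader(t) - v(t)),   dx/dt = v
k, dt, T = 0.5, 0.1, 40.0
v_leader = lambda t: 20.0 + 5.0 * np.sin(0.2 * t)   # leader speed profile (m/s)

t, v, x = 0.0, 10.0, 0.0
while t < T:
    v += dt * k * (v_leader(t) - v)   # Euler step for the follower's speed
    x += dt * v                       # Euler step for the follower's position
    t += dt

print(v, x)   # the follower's speed tracks the leader's profile
```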

  7. Estimation Methods for One-Parameter Testlet Models

    ERIC Educational Resources Information Center

    Jiao, Hong; Wang, Shudong; He, Wei

    2013-01-01

    This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…

  8. Comparison of Transmission Line Methods for Surface Acoustic Wave Modeling

    NASA Technical Reports Server (NTRS)

    Wilson, William; Atkinson, Gary

    2009-01-01

    Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, extremely low power and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on the comparison of three methods of modeling SAWs. The three models are the Impulse Response Method (a first-order model) and two second-order matrix methods: the conventional matrix approach and a modified matrix approach extended to include internal finger reflections. The second-order models are based upon matrices originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented along with measured data from devices. Keywords: Surface Acoustic Wave, SAW, transmission line models, Impulse Response Method.
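
    The core of such second-order matrix methods is that cascaded two-port sections multiply: the sketch below, with purely illustrative section matrices and coefficient values, shows the cascading step only, not a full SAW device model:

```python
import numpy as np
from functools import reduce

def delay_section(theta):
    """Lossless propagation section with phase delay theta (illustrative form)."""
    return np.array([[np.exp(1j * theta), 0.0],
                     [0.0, np.exp(-1j * theta)]])

def reflector(r):
    """Weak reflector with reflection coefficient r, |r| < 1 (illustrative form)."""
    t = np.sqrt(1.0 - r**2)
    return np.array([[1.0 / t, r / t],
                     [r / t, 1.0 / t]])

# A chain of sections (delays and weak reflectors); the overall two-port
# response is the ordered matrix product of the per-section matrices.
sections = [delay_section(0.8), reflector(0.05), delay_section(1.3), reflector(0.05)]
total = reduce(np.matmul, sections)
print(total)
```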

  9. Effective Biot theory and its generalization to poroviscoelastic models

    NASA Astrophysics Data System (ADS)

    Liu, Xu; Greenhalgh, Stewart; Zhou, Bing; Greenhalgh, Mark

    2018-02-01

    A method is suggested to express the effective bulk modulus of the solid frame of a poroelastic material as a function of the saturated bulk modulus. This method enables effective Biot theory to be described through the use of seismic dispersion measurements or other models developed for the effective saturated bulk modulus. The effective Biot theory is generalized to a poroviscoelastic model whose moduli are represented by the relaxation functions of the generalized fractional Zener model. The latter covers the general Zener and the Cole-Cole models as special cases. A global search method is described to determine the parameters of the relaxation functions, and a simple deterministic method is also developed to find the defining parameters of the single Cole-Cole model. These methods enable poroviscoelastic models to be constructed which are based on measured seismic attenuation functions, and ensure that the model dispersion characteristics match the observations.
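
    For reference, one common form of the single Cole-Cole (fractional Zener) relaxation modulus, given here as a plausible rendering of the relaxation functions mentioned above rather than the paper's exact notation:

```latex
M(\omega) = M_R \,
  \frac{1 + (\mathrm{i}\omega\tau_{\varepsilon})^{\alpha}}
       {1 + (\mathrm{i}\omega\tau_{\sigma})^{\alpha}},
\qquad 0 < \alpha \le 1,
```

    where M_R is the relaxed modulus, tau_sigma and tau_epsilon are stress and strain relaxation times, and alpha = 1 recovers the standard Zener model.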

  10. Directions for computational mechanics in automotive crashworthiness

    NASA Technical Reports Server (NTRS)

    Bennett, James A.; Khalil, T. B.

    1993-01-01

    The automotive industry has used computational methods for crashworthiness since the early 1970's. These methods have ranged from simple lumped parameter models to full finite element models. The emergence of the full finite element models in the mid 1980's has significantly altered the research direction. However, there remains a need for both simple, rapid modeling methods and complex detailed methods. Some directions for continuing research are discussed.

  11. Directions for computational mechanics in automotive crashworthiness

    NASA Astrophysics Data System (ADS)

    Bennett, James A.; Khalil, T. B.

    1993-08-01

    The automotive industry has used computational methods for crashworthiness since the early 1970's. These methods have ranged from simple lumped parameter models to full finite element models. The emergence of the full finite element models in the mid 1980's has significantly altered the research direction. However, there remains a need for both simple, rapid modeling methods and complex detailed methods. Some directions for continuing research are discussed.

  12. Development of Aeroservoelastic Analytical Models and Gust Load Alleviation Control Laws of a SensorCraft Wind-Tunnel Model Using Measured Data

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Vartio, Eric; Shimko, Anthony; Kvaternik, Raymond G.; Eure, Kenneth W.; Scott, Robert C.

    2007-01-01

    Aeroservoelastic (ASE) analytical models of a SensorCraft wind-tunnel model are generated using measured data. The data was acquired during the ASE wind-tunnel test of the HiLDA (High Lift-to-Drag Active) Wing model, tested in the NASA Langley Transonic Dynamics Tunnel (TDT) in late 2004. Two time-domain system identification techniques are applied to the development of the ASE analytical models: the impulse response (IR) method and the Generalized Predictive Control (GPC) method. Using measured control surface inputs (frequency sweeps) and associated sensor responses, the IR method is used to extract corresponding input/output impulse response pairs. These impulse responses are then transformed into state-space models for use in ASE analyses. Similarly, the GPC method transforms measured random control surface inputs and associated sensor responses into an AutoRegressive with eXogenous input (ARX) model. The ARX model is then used to develop the gust load alleviation (GLA) control law. For the IR method, comparisons of measured and simulated responses are presented to investigate the accuracy of the ASE analytical models developed. For the GPC method, comparisons of simulated open-loop and closed-loop (GLA) time histories are presented.

  13. Development of Aeroservoelastic Analytical Models and Gust Load Alleviation Control Laws of a SensorCraft Wind-Tunnel Model Using Measured Data

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Shimko, Anthony; Kvaternik, Raymond G.; Eure, Kenneth W.; Scott, Robert C.

    2006-01-01

    Aeroservoelastic (ASE) analytical models of a SensorCraft wind-tunnel model are generated using measured data. The data was acquired during the ASE wind-tunnel test of the HiLDA (High Lift-to-Drag Active) Wing model, tested in the NASA Langley Transonic Dynamics Tunnel (TDT) in late 2004. Two time-domain system identification techniques are applied to the development of the ASE analytical models: the impulse response (IR) method and the Generalized Predictive Control (GPC) method. Using measured control surface inputs (frequency sweeps) and associated sensor responses, the IR method is used to extract corresponding input/output impulse response pairs. These impulse responses are then transformed into state-space models for use in ASE analyses. Similarly, the GPC method transforms measured random control surface inputs and associated sensor responses into an AutoRegressive with eXogenous input (ARX) model. The ARX model is then used to develop the gust load alleviation (GLA) control law. For the IR method, comparisons of measured and simulated responses are presented to investigate the accuracy of the ASE analytical models developed. For the GPC method, comparisons of simulated open-loop and closed-loop (GLA) time histories are presented.

  14. On-line algorithms for forecasting hourly loads of an electric utility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vemuri, S.; Huang, W.L.; Nelson, D.J.

    A method that lends itself to on-line forecasting of hourly electric loads is presented, and the results of its use are compared to models developed using the Box-Jenkins method. The method consists of processing the historical hourly loads with a sequential least-squares estimator to identify a finite-order autoregressive model which, in turn, is used to obtain a parsimonious autoregressive-moving average model. The method presented has several advantages over the Box-Jenkins method, including much less human intervention, improved model identification, and better results. The method is also more robust, in that greater confidence can be placed in the accuracy of models based upon the various measures available at the identification stage.
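
    A sketch of the sequential least-squares identification stage, assuming a plain recursive least-squares (RLS) update for an AR(p) model on a simulated load series (illustrative only; the full procedure also converts the AR fit to a parsimonious ARMA model):

```python
import numpy as np

def rls_ar(y, p, lam=1.0):
    """Sequential (recursive) least-squares fit of an AR(p) model
    y[t] = a1*y[t-1] + ... + ap*y[t-p] + e[t]; lam is a forgetting factor."""
    theta = np.zeros(p)
    P = 1e6 * np.eye(p)                   # large initial covariance
    for t in range(p, len(y)):
        x = y[t - p:t][::-1]              # regressor: p most recent values
        k = P @ x / (lam + x @ P @ x)     # gain
        theta = theta + k * (y[t] - x @ theta)
        P = (P - np.outer(k, x) @ P) / lam
    return theta

rng = np.random.default_rng(0)
y = np.zeros(2000)
e = rng.standard_normal(2000)
for t in range(2, 2000):                  # simulate a known AR(2) "load" series
    y[t] = 1.2 * y[t - 1] - 0.5 * y[t - 2] + e[t]
print(rls_ar(y, 2))                       # close to the true [1.2, -0.5]
```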

  15. A method for assigning species into groups based on generalized Mahalanobis distance between habitat model coefficients

    USGS Publications Warehouse

    Williams, C.J.; Heglund, P.J.

    2009-01-01

    Habitat association models are commonly developed for individual animal species using generalized linear modeling methods such as logistic regression. We considered the issue of grouping species based on their habitat use so that management decisions can be based on sets of species rather than individual species. This research was motivated by a study of western landbirds in northern Idaho forests. The method we examined was to separately fit models to each species and to use a generalized Mahalanobis distance between coefficient vectors to create a distance matrix among species. Clustering methods were used to group species from the distance matrix, and multidimensional scaling methods were used to visualize the relations among species groups. Methods were also discussed for evaluating the sensitivity of the conclusions because of outliers or influential data points. We illustrate these methods with data from the landbird study conducted in northern Idaho. Simulation results are presented to compare the success of this method to alternative methods using Euclidean distance between coefficient vectors and to methods that do not use habitat association models. These simulations demonstrate that our Mahalanobis-distance-based method was nearly always better than Euclidean-distance-based methods or methods not based on habitat association models. The methods used to develop candidate species groups are easily explained to other scientists and resource managers since they mainly rely on classical multivariate statistical methods. © 2008 Springer Science+Business Media, LLC.
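
    A minimal sketch of the distance-matrix step, assuming hypothetical coefficient vectors and a pooled sample covariance in place of the paper's generalized (model-based) covariance:

```python
import numpy as np
from scipy.spatial.distance import mahalanobis, squareform
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical per-species habitat-model coefficient vectors (e.g. logistic
# regression slopes): 6 species x 3 habitat covariates.
rng = np.random.default_rng(0)
coefs = rng.standard_normal((6, 3))
VI = np.linalg.inv(np.cov(coefs, rowvar=False))   # inverse covariance for the metric

n = len(coefs)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = mahalanobis(coefs[i], coefs[j], VI)

# Group species from the distance matrix with hierarchical clustering:
groups = fcluster(linkage(squareform(D), method='average'), t=2, criterion='maxclust')
print(groups)
```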

  16. Generalized Ordinary Differential Equation Models

    PubMed Central

    Miao, Hongyu; Wu, Hulin; Xue, Hongqi

    2014-01-01

    Existing estimation methods for ordinary differential equation (ODE) models are not applicable to discrete data. The generalized ODE (GODE) model is therefore proposed and investigated for the first time. We develop the likelihood-based parameter estimation and inference methods for GODE models. We propose robust computing algorithms and rigorously investigate the asymptotic properties of the proposed estimator by considering both measurement errors and numerical errors in solving ODEs. The simulation study and application of our methods to an influenza viral dynamics study suggest that the proposed methods have a superior performance in terms of accuracy over the existing ODE model estimation approach and the extended smoothing-based (ESB) method. PMID:25544787

  17. Generalized Ordinary Differential Equation Models.

    PubMed

    Miao, Hongyu; Wu, Hulin; Xue, Hongqi

    2014-10-01

    Existing estimation methods for ordinary differential equation (ODE) models are not applicable to discrete data. The generalized ODE (GODE) model is therefore proposed and investigated for the first time. We develop the likelihood-based parameter estimation and inference methods for GODE models. We propose robust computing algorithms and rigorously investigate the asymptotic properties of the proposed estimator by considering both measurement errors and numerical errors in solving ODEs. The simulation study and application of our methods to an influenza viral dynamics study suggest that the proposed methods have a superior performance in terms of accuracy over the existing ODE model estimation approach and the extended smoothing-based (ESB) method.

  18. Moving target detection method based on improved Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Ma, J. Y.; Jie, F. R.; Hu, Y. J.

    2017-07-01

    The Gaussian Mixture Model is often employed to build the background model in background-difference methods for moving target detection. This paper puts forward an adaptive moving target detection algorithm based on an improved Gaussian Mixture Model. According to the gray-level convergence of each pixel, the number of Gaussian distributions is chosen adaptively to learn and update the background model, and morphological reconstruction is adopted to eliminate shadows. Experiments show that the proposed method not only has good robustness and detection performance but also good adaptability; even in special cases, such as large grayscale changes, it performs well.
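
    A hedged stand-in using OpenCV's stock adaptive Gaussian-mixture subtractor (MOG2) rather than the authors' improved model, with morphological opening in place of their morphological reconstruction; the file name is hypothetical:

```python
import cv2

cap = cv2.VideoCapture('traffic.mp4')          # hypothetical input video
sub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                         detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = sub.apply(frame)                      # 0 = background, 127 = shadow, 255 = foreground
    fg[fg == 127] = 0                          # drop detected shadows
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)   # clean small speckles
    cv2.imshow('moving targets', fg)
    if cv2.waitKey(30) == 27:                  # Esc to quit
        break
cap.release()
```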

  19. Comparison of Numerical Modeling Methods for Soil Vibration Cutting

    NASA Astrophysics Data System (ADS)

    Jiang, Jiandong; Zhang, Enguang

    2018-01-01

    In this paper, we study appropriate numerical simulation methods for vibration soil cutting. Three numerical simulation methods commonly used for uniform-speed soil cutting (Lagrange, ALE, and DEM) are analyzed, and three vibration soil cutting simulation models are established using LS-DYNA. The applicability of the three methods to this problem is assessed from the model mechanisms and the simulation results. Both the Lagrange and DEM methods can capture the oscillation of the tool force and the large deformation of the soil during vibration cutting, and the Lagrange method better reproduces the breaking of soil debris. Because of its poor stability, the ALE method is not suitable for the soil vibration cutting problem.

  20. Balancing precision and risk: should multiple detection methods be analyzed separately in N-mixture models?

    USGS Publications Warehouse

    Graves, Tabitha A.; Royle, J. Andrew; Kendall, Katherine C.; Beier, Paul; Stetz, Jeffrey B.; Macleod, Amy C.

    2012-01-01

    Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase accuracy and precision and reduce cost of population abundance estimates. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of 2 detection methods versus either method alone should (3) yield more support for variables identified in single method analyses (i.e. fewer variables and models with greater weight), and (4) improve precision of covariate estimates for variables selected in both separate and combined analyses because sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to risk of failing to identify variables important to a subset of the population. The benefits of increased precision should be weighed against those risks. The analysis framework presented here will be useful for other species exhibiting heterogeneity by detection method.
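
    A minimal single-method N-mixture log-likelihood, to fix ideas (constant lambda and p, no covariates or second detection component, so far simpler than the grizzly bear model described above):

```python
import numpy as np
from scipy.stats import poisson, binom

def nmix_loglik(lam, p, counts, K=200):
    """Log-likelihood of a basic N-mixture model: latent abundance
    N_i ~ Poisson(lam), observed counts y_ij ~ Binomial(N_i, p),
    marginalising the latent N_i up to a truncation bound K.
    `counts` is a (sites x visits) array."""
    Ns = np.arange(K + 1)
    prior = poisson.pmf(Ns, lam)                    # P(N = n)
    ll = 0.0
    for y in counts:
        # P(y | N = n) across all candidate n, for every visit j:
        like_N = prior * np.prod(binom.pmf(y[:, None], Ns[None, :], p), axis=0)
        ll += np.log(like_N.sum())
    return ll

counts = np.array([[3, 2, 4], [0, 1, 0], [5, 6, 4]])   # 3 sites, 3 visits
print(nmix_loglik(lam=5.0, p=0.6, counts=counts))
```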

  1. On the feasibility of a transient dynamic design analysis

    NASA Astrophysics Data System (ADS)

    Cunniff, Patrick F.; Pohland, Robert D.

    1993-05-01

    The Dynamic Design Analysis Method has been used for the past 30 years as part of the Navy's efforts to shock-harden heavy shipboard equipment. This method, which has been validated several times, employs normal mode theory and design shock values. This report examines the degree of success that may be achieved by using simple equipment-vehicle models that produce time history responses equivalent to the responses that would be obtained using the spectral design values employed by the Dynamic Design Analysis Method. These transient models are constructed by attaching the equipment's modal oscillators to a vehicle composed of rigid masses and elastic springs. Two methods have been developed for constructing these transient models; each generates the parameters of the vehicle so as to approximate the required damaging effects, with the transient model excited by an idealized impulse applied to the vehicle mass to which the equipment modal oscillators are attached. The first method, called the Direct Modeling Method, is limited to equipment with at most three degrees of freedom, and its vehicle consists of a single lumped mass and spring. The Optimization Modeling Method, which is based on the simplex method for optimization, has been used successfully with a variety of vehicle models and equipment sizes.

  2. CAD-based Automatic Modeling Method for Geant4 geometry model Through MCAM

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Nie, Fanzhi; Wang, Guozhong; Long, Pengcheng; LV, Zhongliang

    2014-06-01

    Geant4 is a widely used Monte Carlo transport simulation package. Before a Geant4 calculation, the geometry model must be established, described either in the Geometry Description Markup Language (GDML) or in C++. However, it is time-consuming and error-prone to describe models manually in GDML. Automatic modeling methods have been developed recently, but problems remain in most existing modeling programs; in particular, some are not accurate or are tied to a specific CAD format. To convert complex CAD geometry models into GDML geometry models accurately, a CAD-based modeling method for Geant4 was developed. The essence of this method is mediating between CAD models represented with boundary representation (B-REP) and GDML models represented with constructive solid geometry (CSG). First, the CAD model is decomposed into simple solids, each having only one closed shell; each simple solid is then decomposed into a set of convex shells. Corresponding GDML convex basic solids are generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After the generation of these solids, the GDML model is completed with a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics & Radiation Transport (MCAM) and tested with several models, including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately and can be used for Geant4 automatic modeling.

  3. ICF target 2D modeling using Monte Carlo SNB electron thermal transport in DRACO

    NASA Astrophysics Data System (ADS)

    Chenhall, Jeffrey; Cao, Duc; Moses, Gregory

    2016-10-01

    The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup diffusion electron thermal transport method is adapted into a Monte Carlo (MC) transport method to better model angular and long mean free path non-local effects. The MC model was first implemented in the 1D LILAC code to verify consistency with the iSNB model. Implementation of the MC SNB model in the 2D DRACO code enables higher fidelity non-local thermal transport modeling in 2D implosions such as polar drive experiments on NIF. The final step is to optimize the MC model by hybridizing it with an MC version of the iSNB diffusion method. The hybrid method will combine the efficiency of a diffusion method in intermediate mean free path regions with the accuracy of a transport method in long mean free path regions, allowing for improved computational efficiency while maintaining accuracy. Work to date on the method will be presented. This work was supported by Sandia National Laboratories and the Univ. of Rochester Laboratory for Laser Energetics.

  4. Progressive Failure of a Unidirectional Fiber-Reinforced Composite Using the Method of Cells: Discretization Objective Computational Results

    NASA Technical Reports Server (NTRS)

    Pineda, Evan J.; Bednarcyk, Brett A.; Waas, Anthony M.; Arnold, Steven M.

    2012-01-01

    The smeared crack band theory is implemented within the generalized method of cells and high-fidelity generalized method of cells micromechanics models to capture progressive failure within the constituents of a composite material while retaining objectivity with respect to the size of the discretization elements used in the model. A repeating unit cell containing 13 randomly arranged fibers is modeled and subjected to a combination of transverse tension/compression and transverse shear loading. The implementation is verified against experimental data (where available) and against an equivalent finite element model utilizing the same implementation of the crack band theory. To evaluate the performance of the crack band theory within a repeating unit cell that is more amenable to a multiscale implementation, a single fiber is modeled with the generalized method of cells and high-fidelity generalized method of cells using a relatively coarse subcell mesh, subjected to the same loading scenarios as the multiple-fiber repeating unit cell. The generalized method of cells and high-fidelity generalized method of cells models are validated against a very refined finite element model.

  5. Combined proportional and additive residual error models in population pharmacokinetic modelling.

    PubMed

    Proost, Johannes H

    2017-11-15

    In pharmacokinetic modelling, a combined proportional and additive residual error model is often preferred over a purely proportional or additive residual error model. Different approaches have been proposed, but a comparison between approaches is still lacking. The theoretical background of the methods is described. Method VAR assumes that the variance of the residual error is the sum of the statistically independent proportional and additive components; this method can be coded in three ways. Method SD assumes that the standard deviation of the residual error is the sum of the proportional and additive components. Using datasets from the literature and simulations based on these datasets, the methods are compared using NONMEM. The different codings of method VAR yield identical results. Using method SD, the values of the parameters describing the residual error are lower than for method VAR, but the values of the structural parameters and their inter-individual variability are hardly affected by the choice of method. Both methods are valid approaches to combined proportional and additive residual error modelling, and selection may be based on the objective function value (OFV). When the result of an analysis is used for simulation purposes, it is essential that the simulation tool uses the same method as was used during the analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
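
    The two parameterisations written out explicitly, with illustrative parameter values:

```python
import numpy as np

def sd_method_var(f, add, prop):
    """Method VAR: the variance is the sum of independent components,
    so sd = sqrt(add^2 + (prop*f)^2)."""
    return np.sqrt(add**2 + (prop * f)**2)

def sd_method_sd(f, add, prop):
    """Method SD: the standard deviation itself is the sum of the
    components, sd = add + prop*f."""
    return add + prop * f

f = np.array([0.1, 1.0, 10.0, 100.0])        # hypothetical model predictions
print(sd_method_var(f, add=0.05, prop=0.15))
print(sd_method_sd(f, add=0.05, prop=0.15))  # always >= the VAR method's sd
```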

  6. Forward problem solution as the operator of filtered and back projection matrix to reconstruct the various method of data collection and the object element model in electrical impedance tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ain, Khusnul; Kurniadi, Deddy

    2015-04-16

    Back projection reconstruction has been implemented to obtain dynamical images in electrical impedance tomography. However, implementations have been limited to the adjacent data collection method and circular object element models. This study aims to develop back projection into a reconstruction method with high speed, accuracy, and flexibility, usable with various data collection methods and object element models. The proposed method uses the forward problem solution as the operator of the filtered back projection matrix. This is done through a simulation study on several data collection methods and various object element models. The results indicate that the developed method is capable of producing images quickly and accurately for reconstruction with the various data collection methods and object element models.

  7. Stochastic differential equation (SDE) model of opening gold share price of bursa saham malaysia

    NASA Astrophysics Data System (ADS)

    Hussin, F. N.; Rahman, H. A.; Bahar, A.

    2017-09-01

    The Black-Scholes option pricing model is one of the most recognized stochastic differential equation models in mathematical finance. Two parameter estimation methods have been utilized for the geometric Brownian motion (GBM) model: the historical and discrete methods. The historical method is a statistical method that uses the independence and normality of logarithmic returns, giving the simplest parameter estimates. The discrete method, in contrast, uses the transition density of the lognormal diffusion process, as derived from the maximum likelihood method. These two methods are used to estimate parameters from samples of Malaysian gold share price data, namely the Financial Times Stock Exchange (FTSE) Bursa Malaysia Emas and FTSE Bursa Malaysia Emas Shariah indices. Modeling gold share prices is important because fluctuations in gold affect the worldwide economy, including Malaysia. It is found that the discrete method gives better parameter estimates than the historical method, owing to the smaller Root Mean Square Error (RMSE) value.
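
    A sketch of the historical method under the stated GBM assumptions, checked on a simulated path rather than the actual Bursa Malaysia series:

```python
import numpy as np

def gbm_historical_fit(prices, dt=1/252):
    """Historical-method estimates for dS = mu*S dt + sigma*S dW, using the
    i.i.d. normal log-returns property of GBM."""
    r = np.diff(np.log(prices))              # log returns
    sigma = r.std(ddof=1) / np.sqrt(dt)
    mu = r.mean() / dt + 0.5 * sigma**2      # log-return mean is (mu - sigma^2/2)*dt
    return mu, sigma

# Check on a simulated path with known parameters (stand-in for gold prices):
rng = np.random.default_rng(0)
mu_true, sigma_true, dt, n = 0.08, 0.2, 1/252, 50_000
steps = ((mu_true - 0.5 * sigma_true**2) * dt
         + sigma_true * np.sqrt(dt) * rng.standard_normal(n))
prices = 100.0 * np.exp(np.cumsum(steps))
print(gbm_historical_fit(prices, dt))        # close to (0.08, 0.2)
```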

  8. Automatic load forecasting. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, D.J.; Vemuri, S.

    A method which lends itself to on-line forecasting of hourly electric loads is presented and the results of its use are compared to models developed using the Box-Jenkins method. The method consists of processing the historical hourly loads with a sequential least-squares estimator to identify a finite order autoregressive model which in turn is used to obtain a parsimonious autoregressive-moving average model. A procedure is also defined for incorporating temperature as a variable to improve forecasts where loads are temperature dependent. The method presented has several advantages in comparison to the Box-Jenkins method including much less human intervention and improved model identification. The method has been tested using three-hourly data from the Lincoln Electric System, Lincoln, Nebraska. In the exhaustive analyses performed on this data base this method produced significantly better results than the Box-Jenkins method. The method also proved to be more robust in that greater confidence could be placed in the accuracy of models based upon the various measures available at the identification stage.

  9. Use of system identification techniques for improving airframe finite element models using test data

    NASA Technical Reports Server (NTRS)

    Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.

    1993-01-01

    A method for using system identification techniques to improve airframe finite element models using test data was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in the total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all of the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.
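
    A minimal sketch of one sensitivity-based update step, assuming the simplest unconstrained variant (the actual method adds parameter constraints to the optimization); all matrices here are random placeholders:

```python
import numpy as np

def update_parameters(S, residual, rcond=1e-6):
    """Solve S @ dp = residual for the physical-parameter changes dp using an
    SVD pseudoinverse, discarding near-singular directions."""
    return np.linalg.pinv(S, rcond=rcond) @ residual

# Hypothetical numbers: 5 measured modal quantities, 3 physical parameters.
rng = np.random.default_rng(0)
S = rng.standard_normal((5, 3))      # linear sensitivity matrix d(output)/d(parameter)
residual = rng.standard_normal(5)    # measured minus predicted outputs
print(update_parameters(S, residual))
```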

  10. Efficient model checking of network authentication protocol based on SPIN

    NASA Astrophysics Data System (ADS)

    Tan, Zhi-hua; Zhang, Da-fang; Miao, Li; Zhao, Dan

    2013-03-01

    Model checking is a very useful technique for verifying network authentication protocols. In order to improve the efficiency of modeling and verifying such protocols with model checking technology, this paper first proposes a universal formal description method for the protocol. Combined with the model checker SPIN, the method can conveniently verify the properties of the protocol. With some model simplification strategies, several protocols can be modeled efficiently and the state space of the model reduced. Compared with the previous literature, this work achieves a higher degree of automation and better verification efficiency. Finally, based on the described method, we model and verify the Privacy and Key Management (PKM) authentication protocol. The experimental results show that the model checking method is effective and useful for other authentication protocols.

  11. An improved swarm optimization for parameter estimation and biological model selection.

    PubMed

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by Chemical Reaction Optimization into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data, and is hoped to provide new insight into developing more accurate and reliable biological models from limited and low-quality experimental data.

  12. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique

    PubMed Central

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan

    2014-01-01

    Objective This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Methods Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. Results The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. Conclusions The accuracy and precision of PUT dental models for evaluating the performance of oral scanner and subtractive RP technology was acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models. PMID:24696823
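
    The Bland-Altman statistics used above reduce to a few lines; the measurement values below are hypothetical:

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bland-Altman agreement statistics for paired measurements a, b:
    mean difference (bias) and 95% limits of agreement."""
    d = np.asarray(a) - np.asarray(b)
    bias = d.mean()
    loa = 1.96 * d.std(ddof=1)
    return bias, bias - loa, bias + loa

plaster = np.array([35.12, 42.80, 28.45, 51.02, 33.90])   # hypothetical mm
put_rp = np.array([35.30, 42.95, 28.60, 51.40, 34.05])
print(bland_altman_limits(put_rp, plaster))
```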

  13. [Progression on finite element modeling method in scoliosis].

    PubMed

    Fan, Ning; Zang, Lei; Hai, Yong; Du, Peng; Yuan, Shuo

    2018-04-25

    Scoliosis is a complex three-dimensional spinal malformation with a complicated pathogenesis, often associated with complications such as thoracic deformity and shoulder imbalance. Because the acquisition of specimens or animal models is difficult, the biomechanical study of scoliosis has been limited. In recent years, along with the development of computer technology, software, and imaging, the technology for establishing a finite element model of the human spine has matured, providing strong support for research into the pathogenesis of scoliosis, the design and application of braces, and the selection of surgical methods. The finite element model is gradually becoming an important tool in the biomechanical study of scoliosis, and establishing a high-quality finite element model is the basis of analysis and future study. However, the finite element modeling process can be complex and modeling methods vary greatly, so choosing the appropriate modeling method according to the research objectives has become the researcher's primary task. In this paper, the authors review the national and international literature of recent years and summarize finite element modeling methods in scoliosis, including data acquisition, establishment of the geometric model, material properties, parameter settings, and validation of the finite element model. Copyright© 2018 by the China Journal of Orthopaedics and Traumatology Press.

  14. Metamodel-based inverse method for parameter identification: elastic-plastic damage model

    NASA Astrophysics Data System (ADS)

    Huang, Changwu; El Hami, Abdelkhalak; Radi, Bouchaïb

    2017-04-01

    This article proposes a metamodel-based inverse method for material parameter identification and applies it to elastic-plastic damage model parameter identification. An elastic-plastic damage model is presented and implemented in numerical simulation. The metamodel-based inverse method is proposed in order to overcome the computational cost disadvantage of the conventional inverse method. In the metamodel-based inverse method, a Kriging metamodel is constructed from an experimental design in order to model the relationship between the material parameters and the objective function values of the inverse problem, and the optimization procedure is then executed on the metamodel. Application of the presented material model and the proposed parameter identification method to a standard A 2017-T4 tensile test shows that the elastic-plastic damage model adequately describes the material's mechanical behaviour and that the metamodel-based inverse method not only enhances the efficiency of parameter identification but also gives reliable results.
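
    A sketch of the metamodel-based loop under stated assumptions: scikit-learn's Gaussian process regressor plays the role of the Kriging metamodel, and a cheap analytic function stands in for the expensive finite element objective:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Expensive objective (stand-in for "run the FE damage simulation and compare
# with the tensile-test curve"); here a cheap analytic function:
def objective(p):
    return (p[0] - 1.5)**2 + 10 * (p[1] - 0.3)**2

# 1) Design of experiments: sample the parameter space, evaluate the objective.
rng = np.random.default_rng(0)
X = rng.uniform([0, 0], [3, 1], size=(40, 2))
y = np.array([objective(p) for p in X])

# 2) Fit the Kriging (Gaussian-process) metamodel.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([1.0, 1.0]),
                              normalize_y=True).fit(X, y)

# 3) Optimise the cheap surrogate instead of the expensive simulation.
res = minimize(lambda p: gp.predict(p.reshape(1, -1))[0],
               x0=[1.0, 0.5], bounds=[(0, 3), (0, 1)])
print(res.x)   # close to the true minimiser (1.5, 0.3)
```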

  15. Hybrid pathwise sensitivity methods for discrete stochastic models of chemical reaction systems.

    PubMed

    Wolf, Elizabeth Skubak; Anderson, David F

    2015-01-21

    Stochastic models are often used to help understand the behavior of intracellular biochemical processes. The most common such models are continuous time Markov chains (CTMCs). Parametric sensitivities, which are derivatives of expectations of model output quantities with respect to model parameters, are useful in this setting for a variety of applications. In this paper, we introduce a class of hybrid pathwise differentiation methods for the numerical estimation of parametric sensitivities. The new hybrid methods combine elements from the three main classes of procedures for sensitivity estimation and have a number of desirable qualities. First, the new methods are unbiased for a broad class of problems. Second, the methods are applicable to nearly any physically relevant biochemical CTMC model. Third, and as we demonstrate on several numerical examples, the new methods are quite efficient, particularly if one wishes to estimate the full gradient of parametric sensitivities. The methods are rather intuitive and utilize the multilevel Monte Carlo philosophy of splitting an expectation into separate parts and handling each in an efficient manner.
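
    For orientation, the sketch below implements the simplest of the estimator classes referred to above, a finite-difference estimate with common random numbers on a Gillespie-simulated birth-death chain; it is not the authors' hybrid pathwise method:

```python
import numpy as np

def ssa_birth_death(b, d, T, seed):
    """Gillespie simulation of a birth-death chain: 0 -> X at rate b,
    X -> 0 at rate d*x. Returns X(T), starting from X(0) = 0."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, 0
    while True:
        rate = b + d * x
        t += rng.exponential(1.0 / rate)
        if t > T:
            return x
        x += 1 if rng.random() < b / rate else -1

# Finite-difference sensitivity d E[X(T)] / db with common random numbers
# (same seed for the plus and minus runs reduces the estimator variance):
b, d, T, h, n = 2.0, 0.1, 10.0, 0.1, 2000
est = np.mean([(ssa_birth_death(b + h, d, T, s)
                - ssa_birth_death(b - h, d, T, s)) / (2 * h) for s in range(n)])
print(est)   # exact value is (1 - exp(-d*T))/d, about 6.32 here
```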

  16. Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends.

    PubMed

    Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J

    2017-07-01

    Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.

  17. Modeling the Reflectance of the Lunar Regolith by a New Method Combining Monte Carlo Ray Tracing and Hapke's Model with Application to Chang'E-1 IIM Data

    PubMed Central

    Wu, Yunzhao; Tang, Zesheng

    2014-01-01

    In this paper, we model the reflectance of the lunar regolith by a new method combining Monte Carlo ray tracing and Hapke's model. Existing modeling methods exploit either a radiative transfer model or a geometric optical model. However, the measured data from the Interference Imaging spectrometer (IIM) on an orbiter are affected not only by the mineral composition but also by environmental factors, and these factors cannot be well addressed by a single model alone. Our method implements Monte Carlo ray tracing to simulate large-scale effects, such as reflection from the topography of the lunar soil, and Hapke's model to calculate the reflection intensity from the internal scattering effects of the particles of the lunar soil. Therefore, both large-scale and microscale effects are considered in our method, providing a more accurate modeling of the reflectance of the lunar regolith. Simulation results using the Lunar Soil Characterization Consortium (LSCC) data and the Chang'E-1 elevation map show that our method is effective and useful. We have also applied our method to Chang'E-1 IIM data to remove the influence of lunar topography on the reflectance of the lunar soil and to generate more realistic visualizations of the lunar surface. PMID:24526892

  18. Mechanical modeling for magnetorheological elastomer isolators based on constitutive equations and electromagnetic analysis

    NASA Astrophysics Data System (ADS)

    Wang, Qi; Dong, Xufeng; Li, Luyu; Ou, Jinping

    2018-06-01

    Because constitutive models are too complicated and existing mechanical models lack universality, neither is satisfactory for magnetorheological elastomer (MRE) devices. In this article, a novel universal method is proposed to build concise mechanical models. Constitutive modeling and electromagnetic analysis are applied in this method to ensure universality, while a series of derivations and simplifications are carried out to obtain a concise formulation. To illustrate the proposed modeling method, a conical MRE isolator is introduced. Its basic mechanical equations are built from equilibrium, deformation compatibility, constitutive equations, and electromagnetic analysis. An iterative model and a highly efficient differential-equation-editor-based model are then derived to solve the basic mechanical equations. The final simplified mechanical equations are obtained by re-fitting the simulations with a novel optimization algorithm. Finally, a verification test of the isolator confirms the accuracy of the derived mechanical model and the modeling method.

  19. Evaluating the impact of field-scale management strategies on sediment transport to the watershed outlet.

    PubMed

    Sommerlot, Andrew R; Pouyan Nejadhashemi, A; Woznicki, Sean A; Prohaska, Michael D

    2013-10-15

    Non-point source pollution from agricultural lands is a significant contributor of sediment pollution in United States lakes and streams. Therefore, quantifying the impact of individual field management strategies at the watershed scale provides valuable information to watershed managers and conservation agencies to enhance decision-making. In this study, four methods employing some of the most cited models in field- and watershed-scale analysis were compared to find a practical yet accurate method for evaluating field management strategies at the watershed outlet. The models used in this study include a field-scale model (the Revised Universal Soil Loss Equation 2, RUSLE2), a spatially explicit overland sediment delivery model (SEDMOD), and a watershed-scale model (the Soil and Water Assessment Tool, SWAT). These models were used to develop four modeling strategies (methods) for the River Raisin watershed: Method 1) predefined field-scale subbasin and reach layers were used in the SWAT model; Method 2) a subbasin-scale sediment delivery ratio was employed; Method 3) results obtained from the field-scale RUSLE2 model were incorporated as point-source inputs to the SWAT watershed model; and Method 4) a hybrid solution combining analyses from the RUSLE2, SEDMOD, and SWAT models. Method 4 was selected as the most accurate among the studied methods. In addition, the effectiveness of six best management practices (BMPs) in terms of water quality improvement and associated cost was assessed. Economic analysis was performed using Method 4, and producer-requested prices for BMPs were compared with prices defined by the Environmental Quality Incentives Program (EQIP). On a per-unit-area basis, producers requested higher prices than EQIP in four of six BMP categories. Meanwhile, the true cost of sediment reduction at the field and watershed scales was greater than EQIP prices in five of six BMP categories according to producer-requested prices. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Noninvasive and fast measurement of blood glucose in vivo by near infrared (NIR) spectroscopy

    NASA Astrophysics Data System (ADS)

    Jintao, Xue; Liming, Ye; Yufei, Liu; Chunyan, Li; Han, Chen

    2017-05-01

    The aim of this research was to develop a method for noninvasive and fast blood glucose assay in vivo. Near-infrared (NIR) spectroscopy, a promising technique compared with other methods, was investigated in rats with diabetes and in normal rats. Calibration models were generated by two different multivariate strategies: partial least squares (PLS) as a linear regression method and artificial neural networks (ANN) as a non-linear regression method. The PLS model was optimized by considering the spectral range, spectral pretreatment methods, and number of model factors, while the ANN model was tuned by selecting spectral pretreatment methods, network topology parameters, the number of hidden neurons, and the number of training epochs. The validation results showed that both models were robust, accurate, and repeatable. Compared with the ANN model, the performance of the PLS model was much better, with a lower root mean square error of prediction (RMSEP) of 0.419 and a higher correlation coefficient (R) of 96.22%.
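
    A sketch of the PLS factor-selection step using scikit-learn on synthetic stand-in spectra (real work would use the measured NIR spectra and reference glucose values):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in data: 80 samples x 400 wavelengths, glucose depending
# linearly on two wavelength channels plus noise.
rng = np.random.default_rng(0)
spectra = rng.standard_normal((80, 400))
glucose = spectra[:, 50] - 0.5 * spectra[:, 200] + 0.1 * rng.standard_normal(80)

# Choose the number of latent variables (model factors) by cross-validated RMSE:
for n_comp in (2, 4, 8):
    pred = cross_val_predict(PLSRegression(n_components=n_comp),
                             spectra, glucose, cv=5)
    rmse = np.sqrt(np.mean((pred.ravel() - glucose) ** 2))
    print(n_comp, round(rmse, 4))
```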

  1. Space-time variation of respiratory cancers in South Carolina: a flexible multivariate mixture modeling approach to risk estimation.

    PubMed

    Carroll, Rachel; Lawson, Andrew B; Kirby, Russell S; Faes, Christel; Aregay, Mehreteab; Watjou, Kevin

    2017-01-01

    Many types of cancer have an underlying spatiotemporal distribution. Spatiotemporal mixture modeling can offer a flexible approach to risk estimation via the inclusion of latent variables. In this article, we examine the application and benefits of using four different spatiotemporal mixture modeling methods in the modeling of cancer of the lung and bronchus as well as "other" respiratory cancer incidences in the state of South Carolina. Of the methods tested, no single method outperforms the other methods; which method is best depends on the cancer under consideration. The lung and bronchus cancer incidence outcome is best described by the univariate modeling formulation, whereas the "other" respiratory cancer incidence outcome is best described by the multivariate modeling formulation. Spatiotemporal multivariate mixture methods can aid in the modeling of cancers with small and sparse incidences when including information from a related, more common type of cancer. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. A study of finite mixture model: Bayesian approach on financial time series data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-07-01

    Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model is a mixture of distributions used to model a statistical distribution, while the Bayesian method is a statistical approach used to fit the mixture model. The Bayesian method is widely used because its asymptotic properties provide remarkable results; it also shows consistency, meaning that the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is chosen using the Bayesian Information Criterion. Identifying the number of components is important because a wrong choice may lead to invalid results. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia. The results showed a negative relationship between rubber price and stock market price for all selected countries.
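
    The component-selection step can be illustrated with a small sketch; note that scikit-learn's GaussianMixture is fitted by EM rather than by full Bayesian sampling, so this shows only the BIC-based choice of k, on synthetic data:

        # Hedged sketch: choosing the number of mixture components by BIC.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)
        # Synthetic two-component data standing in for price series features.
        x = np.concatenate([rng.normal(0.0, 1.0, 300),
                            rng.normal(4.0, 0.5, 200)]).reshape(-1, 1)

        bics = {k: GaussianMixture(n_components=k, random_state=0).fit(x).bic(x)
                for k in range(1, 6)}
        best_k = min(bics, key=bics.get)   # the lowest BIC identifies k
        print(bics)
        print("chosen number of components:", best_k)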

  3. Using the QUAIT Model to Effectively Teach Research Methods Curriculum to Master's-Level Students

    ERIC Educational Resources Information Center

    Hamilton, Nancy J.; Gitchel, Dent

    2017-01-01

    Purpose: To apply Slavin's model of effective instruction to teaching research methods to master's-level students. Methods: Barriers to the scientist-practitioner model (student research experience, confidence, and utility value pertaining to research methods as well as faculty research and pedagogical incompetencies) are discussed. Results: The…

  4. A Proposed Model of Retransformed Qualitative Data within a Mixed Methods Research Design

    ERIC Educational Resources Information Center

    Palladino, John M.

    2009-01-01

    Most models of mixed methods research design provide equal emphasis of qualitative and quantitative data analyses and interpretation. Other models stress one method more than the other. The present article is a discourse about the investigator's decision to employ a mixed method design to examine special education teachers' advocacy and…

  5. An optimization model for metabolic pathways.

    PubMed

    Planes, F J; Beasley, J E

    2009-10-15

    Different mathematical methods have emerged in the post-genomic era to determine metabolic pathways. These methods can be divided into stoichiometric methods and path finding methods. In this paper we detail a novel optimization model, based upon integer linear programming, to determine metabolic pathways. Our model links reaction stoichiometry with path finding in a single approach. We test the ability of our model to determine 40 annotated Escherichia coli metabolic pathways. We show that our model is able to determine 36 of these 40 pathways in a computationally effective manner.

  6. A Novel Method for Modeling Neumann and Robin Boundary Conditions in Smoothed Particle Hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, Emily M.; Tartakovsky, Alexandre M.; Amon, Cristina

    2010-08-26

    In this paper we present an improved method for handling Neumann or Robin boundary conditions in smoothed particle hydrodynamics. The Neumann and Robin boundary conditions are common to many physical problems (such as heat/mass transfer), and can prove challenging to model in volumetric modeling techniques such as smoothed particle hydrodynamics (SPH). A new SPH method for diffusion type equations subject to Neumann or Robin boundary conditions is proposed. The new method is based on the continuum surface force model [1] and allows an efficient implementation of the Neumann and Robin boundary conditions in the SPH method for geometrically complex boundaries. The paper discusses the details of the method and the criteria needed to apply the model. The model is used to simulate diffusion and surface reactions and its accuracy is demonstrated through test cases for boundary conditions describing different surface reactions.

  7. The development rainfall forecasting using kalman filter

    NASA Astrophysics Data System (ADS)

    Zulfi, Mohammad; Hasan, Moh.; Dwidja Purnomo, Kosala

    2018-04-01

    Rainfall forecasting is of great interest for agricultural planning. Rainfall information is useful for making decisions about when to plant certain commodities. In this study, rainfall is forecast by the ARIMA and Kalman filter methods. The Kalman filter method expresses a time series model in linear state-space form to produce future forecasts, using a recursive solution to minimize error. The rainfall data in this research were clustered by K-means clustering, and the Kalman filter method was implemented for modelling and forecasting rainfall in each cluster. We used ARIMA(p,d,q) to construct a state space for the Kalman filter model, so there are four groups of data and one model for each group. In conclusion, the Kalman filter method is better than the ARIMA model for rainfall forecasting in each group, as the error of the Kalman filter method is smaller than that of the ARIMA model.
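
    A minimal sketch of the filtering idea, for a one-dimensional local-level state-space model on synthetic data (the actual study builds the state space from fitted ARIMA(p,d,q) models, which is omitted here):

        # Hedged sketch: recursive Kalman filter for a local-level model.
        import numpy as np

        def kalman_filter(y, q=1.0, r=4.0, x0=0.0, p0=10.0):
            """x_t = x_{t-1} + w_t (var q);  y_t = x_t + v_t (var r)."""
            x, p, est = x0, p0, []
            for obs in y:
                p = p + q                  # predict step
                k = p / (p + r)            # Kalman gain
                x = x + k * (obs - x)      # update with the innovation
                p = (1.0 - k) * p
                est.append(x)
            return np.array(est)

        rng = np.random.default_rng(2)
        truth = np.cumsum(rng.normal(size=100))       # hypothetical rainfall signal
        y = truth + rng.normal(scale=2.0, size=100)   # noisy observations
        print(kalman_filter(y)[:5])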

  8. The Equivalence of Two Methods of Parameter Estimation for the Rasch Model.

    ERIC Educational Resources Information Center

    Blackwood, Larry G.; Bradley, Edwin L.

    1989-01-01

    Two methods of estimating parameters in the Rasch model are compared. The equivalence of likelihood estimations from the model of G. J. Mellenbergh and P. Vijn (1981) and from usual unconditional maximum likelihood (UML) estimation is demonstrated. Mellenbergh and Vijn's model is a convenient method of calculating UML estimates. (SLD)

  9. A Method For Modeling Discontinuities In A Microwave Coaxial Transmission Line

    NASA Technical Reports Server (NTRS)

    Otoshi, Tom Y.

    1994-01-01

    A methodology for modeling discontinuities in a coaxial transmission line is presented. The method uses a non-linear least squares fit program to optimize the fit between a theoretical model and experimental data. When the method was applied to modeling discontinuities in a damaged S-band antenna cable, excellent agreement was obtained.

  10. New methods to characterize site amplification

    USGS Publications Warehouse

    Safak, Erdal

    1993-01-01

    Methods alternative to spectral ratios are introduced to characterize site amplification. The methods are developed by using a range of models, from the simple constant amplification model to the time-varying filter model. Examples are given for each model by using a pair of rock- and soil-site recordings from the Loma Prieta earthquake.

  11. Modelling of nanoscale quantum tunnelling structures using algebraic topology method

    NASA Astrophysics Data System (ADS)

    Sankaran, Krishnaswamy; Sairam, B.

    2018-05-01

    We have modelled nanoscale quantum tunnelling structures using Algebraic Topology Method (ATM). The accuracy of ATM is compared to the analytical solution derived based on the wave nature of tunnelling electrons. ATM provides a versatile, fast, and simple model to simulate complex structures. We are currently expanding the method for modelling electrodynamic systems.

  12. Implementation of a Smeared Crack Band Model in a Micromechanics Framework

    NASA Technical Reports Server (NTRS)

    Pineda, Evan J.; Bednarcyk, Brett A.; Waas, Anthony M.; Arnold, Steven M.

    2012-01-01

    The smeared crack band theory is implemented within the generalized method of cells and high-fidelity generalized method of cells micromechanics models to capture progressive failure within the constituents of a composite material while retaining objectivity with respect to the size of the discretization elements used in the model. A repeating unit cell containing 13 randomly arranged fibers is modeled and subjected to a combination of transverse tension/compression and transverse shear loading. The implementation is verified against experimental data (where available) and against an equivalent finite element model utilizing the same implementation of the crack band theory. To evaluate the performance of the crack band theory within a repeating unit cell that is more amenable to a multiscale implementation, a single fiber is modeled with the generalized method of cells and high-fidelity generalized method of cells using a relatively coarse subcell mesh subjected to the same loading scenarios as the multiple-fiber repeating unit cell. The generalized method of cells and high-fidelity generalized method of cells models are validated against a very refined finite element model.

  13. An automatic and effective parameter optimization method for model tuning

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.

    2015-11-01

    Physical parameterizations in general circulation models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Different from traditional optimization methods, two extra steps, one determining the model's sensitivity to the parameters and the other choosing the optimum initial values for those sensitive parameters, are introduced before the downhill simplex method. This new method reduces the number of parameters to be tuned and accelerates the convergence of the downhill simplex method. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
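
    The third step, the downhill simplex search itself, can be sketched with SciPy's Nelder-Mead implementation on a toy objective (the objective function and starting point are placeholders for the paper's evaluation metric and pre-selected initial values):

        # Hedged sketch: downhill simplex (Nelder-Mead) over two tunable parameters.
        import numpy as np
        from scipy.optimize import minimize

        def skill_score(params):
            # Toy stand-in for the comprehensive objective evaluation metric.
            a, b = params
            return (a - 0.7) ** 2 + (b - 1.3) ** 2

        x0 = np.array([0.5, 1.0])   # "optimum initial value" from the earlier steps
        res = minimize(skill_score, x0, method="Nelder-Mead")
        print("optimum parameters:", res.x, "score:", res.fun)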

  14. A fast mass spring model solver for high-resolution elastic objects

    NASA Astrophysics Data System (ADS)

    Zheng, Mianlun; Yuan, Zhiyong; Zhu, Weixu; Zhang, Guian

    2017-03-01

    Real-time simulation of elastic objects is of great importance for computer graphics and virtual reality applications. The fast mass spring model solver can achieve visually realistic simulation in an efficient way. Unfortunately, this method suffers from resolution limitations and a lack of mechanical realism for a surface geometry model, which greatly restricts its application. To tackle these problems, in this paper we propose a fast mass spring model solver for high-resolution elastic objects. First, we project the complex surface geometry model into a set of uniform grid cells, used as cages, through the mean value coordinates method to reflect its internal structure and mechanical properties. Then, we replace the original Cholesky decomposition method in the fast mass spring model solver with a conjugate gradient method, which makes the fast mass spring model solver more efficient for detailed surface geometry models. Finally, we propose a graphics processing unit accelerated parallel algorithm for the conjugate gradient method. Experimental results show that our method can realize efficient deformation simulation of 3D elastic objects with visual reality and physical fidelity, and it has great potential for applications in computer animation.
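
    The substitution of a conjugate gradient solve for the Cholesky factorization can be sketched as follows; the tridiagonal matrix is a toy stand-in for the symmetric positive definite system matrix of the implicit mass spring solver:

        # Hedged sketch: solving A x = b with conjugate gradients (A is SPD).
        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import cg

        n = 1000
        A = diags([-1.0, 2.5, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
        b = np.ones(n)

        x, info = cg(A, b)          # info == 0 signals convergence
        print(info, x[:3])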

  15. Spectral-element Method for 3D Marine Controlled-source EM Modeling

    NASA Astrophysics Data System (ADS)

    Liu, L.; Yin, C.; Zhang, B., Sr.; Liu, Y.; Qiu, C.; Huang, X.; Zhu, J.

    2017-12-01

    As one of the predrill reservoir appraisal methods, marine controlled-source EM (MCSEM) has been widely used in mapping oil reservoirs to reduce the risk of deep water exploration. With the technical development of MCSEM, the need for improved forward modeling tools has become evident. We introduce in this paper the spectral element method (SEM) for 3D MCSEM modeling. It combines the flexibility of the finite-element method with the high accuracy of the spectral method. We use the Galerkin weighted residual method to discretize the vector Helmholtz equation, where curl-conforming Gauss-Lobatto-Chebyshev (GLC) polynomials are chosen as vector basis functions. As a kind of high-order complete orthogonal polynomial, the GLC have the characteristic of exponential convergence. This helps derive the matrix elements analytically and improves the modeling accuracy. Numerical 1D models using SEM with different orders show that the SEM delivers accurate results, and with increasing SEM order the modeling accuracy improves markedly. Further, we compare our SEM with the finite-difference (FD) method for a 3D reservoir model (Figure 1). The results show that the SEM is more effective than the FD method: only when the mesh is fine enough can FD achieve the same accuracy as SEM. Therefore, to obtain the same precision, SEM greatly reduces the degrees of freedom and cost. Numerical experiments with different models (not shown here) demonstrate that SEM is an efficient and effective tool for MCSEM modeling that has significant advantages over traditional numerical methods. This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900).

  16. Global Sensitivity Analysis for Process Identification under Model Uncertainty

    NASA Astrophysics Data System (ADS)

    Ye, M.; Dai, H.; Walker, A. P.; Shi, L.; Yang, J.

    2015-12-01

    An environmental system consists of various physical, chemical, and biological processes, and environmental models are built to simulate these processes and their interactions. For model building, improvement, and validation, it is necessary to identify important processes so that limited resources can be used to better characterize them. While global sensitivity analysis has been widely used to identify important processes, process identification has always been based on deterministic process conceptualization that uses a single model to represent a process. However, environmental systems are complex, and it often happens that a single process may be simulated by multiple alternative models. Ignoring model uncertainty in process identification may lead to biased identification, in that processes identified as important may not be so in the real world. This study addresses this problem by developing a new method of global sensitivity analysis for process identification. The new method is based on the concept of Sobol sensitivity analysis and model averaging. Similar to the Sobol sensitivity analysis used to identify important parameters, our new method evaluates the variance change when a process is fixed at each of its different conceptualizations. The variance considers both parametric and model uncertainty using the method of model averaging. The method is demonstrated using a synthetic study of groundwater modeling that considers a recharge process and a parameterization process, each with two alternative models. Important processes of groundwater flow and transport are evaluated using our new method. The method is mathematically general and can be applied to a wide range of environmental problems.
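
    The variance-based machinery underneath such a method can be illustrated with the classic pick-freeze Monte Carlo estimator of first-order Sobol indices (the toy model and sample sizes are assumptions; the paper's extension to process models and model averaging is not reproduced here):

        # Hedged sketch: first-order Sobol indices by pick-freeze Monte Carlo.
        import numpy as np

        def model(x):               # toy stand-in for a groundwater model output
            return x[:, 0] + 2.0 * x[:, 1] ** 2

        rng = np.random.default_rng(3)
        n, d = 100_000, 2
        A = rng.uniform(size=(n, d))
        B = rng.uniform(size=(n, d))
        fA, fB = model(A), model(B)
        var = np.var(np.concatenate([fA, fB]))

        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]     # "freeze" every input except the i-th
            S_i = np.mean(fB * (model(ABi) - fA)) / var   # Saltelli-type estimator
            print(f"S_{i} ~ {S_i:.3f}")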

  17. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty and ignore the model uncertainty in process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models with different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.

  18. Dynamic characteristics of oxygen consumption.

    PubMed

    Ye, Lin; Argha, Ahmadreza; Yu, Hairong; Celler, Branko G; Nguyen, Hung T; Su, Steven

    2018-04-23

    Previous studies have indicated that oxygen uptake ([Formula: see text]) is one of the most accurate indices for assessing the cardiorespiratory response to exercise. In most existing studies, the response of [Formula: see text] is roughly modelled as a first-order system due to inadequate stimulation and a low signal to noise ratio. To overcome this difficulty, this paper proposes a novel nonparametric kernel-based method for the dynamic modelling of the [Formula: see text] response to provide a more robust estimation. Twenty healthy non-athlete participants conducted treadmill exercises with monotonous stimulation (e.g., a single step function as input). During the exercise, [Formula: see text] was measured and recorded by a popular portable gas analyser ([Formula: see text], COSMED). Based on the recorded data, a kernel-based estimation method was proposed to perform the nonparametric modelling of [Formula: see text]. In the proposed method, a properly selected kernel can represent prior modelling information and reduce the dependence on comprehensive stimulations. Furthermore, due to the special elastic net formed by the [Formula: see text] norm and the kernelised [Formula: see text] norm, the estimates are smooth and concise. Additionally, the finite impulse response based nonparametric model estimated by the proposed method optimally selects the order and fits better, in terms of goodness-of-fit, than classical methods. Several kernels were introduced for the kernel-based [Formula: see text] modelling method, and the results clearly indicated that the stable spline (SS) kernel has the best performance for [Formula: see text] modelling. In particular, based on the experimental data from 20 participants, the estimated response from the proposed method with the SS kernel was significantly better than the results from the benchmark method [i.e., the prediction error method (PEM)] ([Formula: see text] vs [Formula: see text]). The proposed nonparametric modelling method is an effective method for estimating the impulse response of the VO2-speed system. Furthermore, the identified average nonparametric model can dynamically predict the [Formula: see text] response with acceptable accuracy during treadmill exercise.

  19. Nonlinear structural joint model updating based on instantaneous characteristics of dynamic responses

    NASA Astrophysics Data System (ADS)

    Wang, Zuo-Cai; Xin, Yu; Ren, Wei-Xin

    2016-08-01

    This paper proposes a new nonlinear joint model updating method for shear type structures based on the instantaneous characteristics of the decomposed structural dynamic responses. To obtain an accurate representation of a nonlinear system's dynamics, the nonlinear joint model is described as the nonlinear spring element with bilinear stiffness. The instantaneous frequencies and amplitudes of the decomposed mono-component are first extracted by the analytical mode decomposition (AMD) method. Then, an objective function based on the residuals of the instantaneous frequencies and amplitudes between the experimental structure and the nonlinear model is created for the nonlinear joint model updating. The optimal values of the nonlinear joint model parameters are obtained by minimizing the objective function using the simulated annealing global optimization method. To validate the effectiveness of the proposed method, a single-story shear type structure subjected to earthquake and harmonic excitations is simulated as a numerical example. Then, a beam structure with multiple local nonlinear elements subjected to earthquake excitation is also simulated. The nonlinear beam structure is updated based on the global and local model using the proposed method. The results show that the proposed local nonlinear model updating method is more effective for structures with multiple local nonlinear elements. Finally, the proposed method is verified by the shake table test of a real high voltage switch structure. The accuracy of the proposed method is quantified both in numerical and experimental applications using the defined error indices. Both the numerical and experimental results have shown that the proposed method can effectively update the nonlinear joint model.

  20. Parameter estimation of Monod model by the Least-Squares method for microalgae Botryococcus Braunii sp

    NASA Astrophysics Data System (ADS)

    See, J. J.; Jamaian, S. S.; Salleh, R. M.; Nor, M. E.; Aman, F.

    2018-04-01

    This research aims to estimate the parameters of the Monod model of microalgae Botryococcus braunii sp. growth by the least-squares method. The Monod equation is a non-linear equation that can be transformed into linear form and solved by the least-squares linear regression method. Meanwhile, the Gauss-Newton method is an alternative for solving the non-linear least-squares problem, obtaining the parameter values of the Monod model by minimizing the sum of squared errors (SSE). As a result, the parameters of the Monod model for microalgae Botryococcus braunii sp. can be estimated by the least-squares method. However, the parameter values obtained by the non-linear least-squares method are more accurate than those from the linear least-squares method, since the SSE of the non-linear least-squares method is smaller.
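
    Both estimation routes can be sketched briefly: a linearized least-squares fit of the reciprocal form, and a non-linear fit via SciPy's curve_fit (which uses a Levenberg-Marquardt-type iteration closely related to Gauss-Newton). The synthetic data and parameter values are illustrative:

        # Hedged sketch: Monod model mu(S) = mu_max * S / (Ks + S), fitted two ways.
        import numpy as np
        from scipy.optimize import curve_fit

        def monod(S, mu_max, Ks):
            return mu_max * S / (Ks + S)

        rng = np.random.default_rng(4)
        S = np.linspace(0.1, 10.0, 30)
        mu = monod(S, 1.2, 2.5) + rng.normal(scale=0.02, size=S.size)

        # Non-linear least squares.
        (mu_max_nl, Ks_nl), _ = curve_fit(monod, S, mu, p0=[1.0, 1.0])

        # Linearized form: 1/mu = (Ks/mu_max) * (1/S) + 1/mu_max.
        slope, intercept = np.polyfit(1.0 / S, 1.0 / mu, 1)
        mu_max_lin, Ks_lin = 1.0 / intercept, slope / intercept

        print("non-linear:", mu_max_nl, Ks_nl)
        print("linearized:", mu_max_lin, Ks_lin)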

  1. On Multifunctional Collaborative Methods in Engineering Science

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.

    2001-01-01

    Multifunctional methodologies and analysis procedures are formulated for interfacing diverse subdomain idealizations including multi-fidelity modeling methods and multi-discipline analysis methods. These methods, based on the method of weighted residuals, ensure accurate compatibility of primary and secondary variables across the subdomain interfaces. Methods are developed using diverse mathematical modeling (i.e., finite difference and finite element methods) and multi-fidelity modeling among the subdomains. Several benchmark scalar-field and vector-field problems in engineering science are presented with extensions to multidisciplinary problems. Results for all problems presented are in overall good agreement with the exact analytical solution or the reference numerical solution. Based on the results, the integrated modeling approach using the finite element method for multi-fidelity discretization among the subdomains is identified as most robust. The multiple-method approach is advantageous when interfacing diverse disciplines, as each method's strengths are utilized.

  2. The Use of Object-Oriented Analysis Methods in Surety Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craft, Richard L.; Funkhouser, Donald R.; Wyss, Gregory D.

    1999-05-01

    Object-oriented analysis methods have been used in the computer science arena for a number of years to model the behavior of computer-based systems. This report documents how such methods can be applied to surety analysis. By embodying the causality and behavior of a system in a common object-oriented analysis model, surety analysts can make the assumptions that underlie their models explicit and thus better communicate with system designers. Furthermore, given minor extensions to traditional object-oriented analysis methods, it is possible to automatically derive a wide variety of traditional risk and reliability analysis methods from a single common object model. Automatic model extraction helps ensure consistency among analyses and enables the surety analyst to examine a system from a wider variety of viewpoints in a shorter period of time. Thus it provides a deeper understanding of a system's behaviors and surety requirements. This report documents the underlying philosophy behind the common object model representation, the methods by which such common object models can be constructed, and the rules required to interrogate the common object model for derivation of traditional risk and reliability analysis models. The methodology is demonstrated in an extensive example problem.

  3. Beyond Corroboration: Strengthening Model Validation by Looking for Unexpected Patterns

    PubMed Central

    Chérel, Guillaume; Cottineau, Clémentine; Reuillon, Romain

    2015-01-01

    Models of emergent phenomena are designed to provide an explanation to global-scale phenomena from local-scale processes. Model validation is commonly done by verifying that the model is able to reproduce the patterns to be explained. We argue that robust validation must not only be based on corroboration, but also on attempting to falsify the model, i.e. making sure that the model behaves soundly for any reasonable input and parameter values. We propose an open-ended evolutionary method based on Novelty Search to look for the diverse patterns a model can produce. The Pattern Space Exploration method was tested on a model of collective motion and compared to three common a priori sampling experiment designs. The method successfully discovered all known qualitatively different kinds of collective motion, and performed much better than the a priori sampling methods. The method was then applied to a case study of city system dynamics to explore the model’s predicted values of city hierarchisation and population growth. This case study showed that the method can provide insights on potential predictive scenarios as well as falsifiers of the model when the simulated dynamics are highly unrealistic. PMID:26368917

  4. Locally refined block-centred finite-difference groundwater models: Evaluation of parameter sensitivity and the consequences for inverse modelling

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and the performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are: (a) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed, and (b) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes, and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for sensitivities are compared for the three methods, and the effect of the accuracy of the sensitivity calculations is evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.

  5. Locally refined block-centered finite-difference groundwater models: Evaluation of parameter sensitivity and the consequences for inverse modelling and predictions

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and the performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are (1) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed and (2) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes, and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for sensitivities are compared for the three methods, and the effect of the accuracy of the sensitivity calculations is evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.

  6. Application Study of Comprehensive Forecasting Model Based on Entropy Weighting Method on Trend of PM2.5 Concentration in Guangzhou, China

    PubMed Central

    Liu, Dong-jun; Li, Li

    2015-01-01

    For the issue of haze-fog, PM2.5 is the main influence factor of haze-fog pollution in China. In this study, the trend of PM2.5 concentration was analyzed from a qualitative point of view based on mathematical models and simulation. The comprehensive forecasting model (CFM) was developed based on combination forecasting ideas. The Autoregressive Integrated Moving Average (ARIMA) model, Artificial Neural Networks (ANNs) model and Exponential Smoothing Method (ESM) were used to predict the time series data of PM2.5 concentration. The results of the comprehensive forecasting model were obtained by combining the results of the three methods using weights from the Entropy Weighting Method. The trend of PM2.5 concentration in Guangzhou, China was quantitatively forecasted based on the comprehensive forecasting model. The results were compared with those of the three single models, and PM2.5 concentration values in the next ten days were predicted. The comprehensive forecasting model balanced the deviation of each single prediction method and had better applicability, providing a new prediction method for the air quality forecasting field. PMID:26110332

  7. A simplified fractional order impedance model and parameter identification method for lithium-ion batteries

    PubMed Central

    Yang, Qingxia; Xu, Jun; Cao, Binggang; Li, Xiuqing

    2017-01-01

    Identification of internal parameters of lithium-ion batteries is a useful tool to evaluate battery performance, and requires an effective model and algorithm. Based on the least square genetic algorithm, a simplified fractional order impedance model for lithium-ion batteries and the corresponding parameter identification method were developed. The simplified model was derived from the analysis of the electrochemical impedance spectroscopy data and the transient response of lithium-ion batteries with different states of charge. In order to identify the parameters of the model, an equivalent tracking system was established, and the method of least square genetic algorithm was applied using the time-domain test data. Experiments and computer simulations were carried out to verify the effectiveness and accuracy of the proposed model and parameter identification method. Compared with a second-order resistance-capacitance (2-RC) model and recursive least squares method, small tracing voltage fluctuations were observed. The maximum battery voltage tracing error for the proposed model and parameter identification method is within 0.5%; this demonstrates the good performance of the model and the efficiency of the least square genetic algorithm to estimate the internal parameters of lithium-ion batteries. PMID:28212405

  8. Application Study of Comprehensive Forecasting Model Based on Entropy Weighting Method on Trend of PM2.5 Concentration in Guangzhou, China.

    PubMed

    Liu, Dong-jun; Li, Li

    2015-06-23

    For the issue of haze-fog, PM2.5 is the main influence factor of haze-fog pollution in China. In this study, the trend of PM2.5 concentration was analyzed from a qualitative point of view based on mathematical models and simulation. The comprehensive forecasting model (CFM) was developed based on combination forecasting ideas. The Autoregressive Integrated Moving Average (ARIMA) model, Artificial Neural Networks (ANNs) model and Exponential Smoothing Method (ESM) were used to predict the time series data of PM2.5 concentration. The results of the comprehensive forecasting model were obtained by combining the results of the three methods using weights from the Entropy Weighting Method. The trend of PM2.5 concentration in Guangzhou, China was quantitatively forecasted based on the comprehensive forecasting model. The results were compared with those of the three single models, and PM2.5 concentration values in the next ten days were predicted. The comprehensive forecasting model balanced the deviation of each single prediction method and had better applicability, providing a new prediction method for the air quality forecasting field.
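
    The weighting step can be sketched as follows; here the entropy weights are computed from a small matrix of hypothetical absolute forecast errors for the three component models (the paper's exact indicator construction may differ):

        # Hedged sketch: entropy weights for combining ARIMA, ANN and ESM forecasts.
        import numpy as np

        # Rows: time points; columns: |error| of ARIMA, ANN, ESM (made-up values).
        err = np.array([[1.0, 0.8, 1.2],
                        [0.6, 0.9, 0.7],
                        [1.1, 0.7, 0.9],
                        [0.5, 0.6, 0.8]])

        p = err / err.sum(axis=0)                             # column-wise shares
        entropy = -(p * np.log(p)).sum(axis=0) / np.log(len(err))
        weights = (1.0 - entropy) / (1.0 - entropy).sum()     # degree of divergence
        combined = weights @ np.array([10.2, 9.8, 10.5])      # weighted forecast
        print(weights, combined)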

  9. An adaptive sampling method for variable-fidelity surrogate models using improved hierarchical kriging

    NASA Astrophysics Data System (ADS)

    Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli

    2018-01-01

    Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of the high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient for an aircraft are provided to demonstrate the approximation capability of the proposed approach, as well as three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.

  10. A method for modeling oxygen diffusion in an agent-based model with application to host-pathogen infection

    DOE PAGES

    Plimpton, Steven J.; Sershen, Cheryl L.; May, Elebeoba E.

    2015-01-01

    This paper describes a method for incorporating a diffusion field modeling oxygen usage and dispersion in a multi-scale model of Mycobacterium tuberculosis (Mtb) infection mediated granuloma formation. We implemented this method over a floating-point field to model oxygen dynamics in host tissue during chronic phase response and Mtb persistence. The method avoids the requirement of satisfying the Courant-Friedrichs-Lewy (CFL) condition, which is necessary in implementing the explicit version of the finite-difference method, but imposes an impractical bound on the time step. Instead, diffusion is modeled by a matrix-based, steady state approximate solution to the diffusion equation. Moreover, presented in figure 1 is the evolution of the diffusion profiles of a containment granuloma over time.
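
    The core idea, replacing CFL-limited explicit stepping with a steady-state linear solve, can be sketched in one dimension (grid size, diffusivity, and source placement are arbitrary toy choices):

        # Hedged sketch: steady-state diffusion -D * c'' = s solved as L c = s.
        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import spsolve

        n, dx, D = 50, 1.0, 1.0
        s = np.zeros(n)
        s[n // 2] = 1.0                       # hypothetical point source of oxygen

        # Discrete Laplacian with implicit zero (Dirichlet) boundaries.
        L = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") * D / dx**2
        c = spsolve(L, s)                     # steady-state concentration profile
        print(c.min(), c.max())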

  11. Protein model quality assessment prediction by combining fragment comparisons and a consensus Cα contact potential

    PubMed Central

    Zhou, Hongyi; Skolnick, Jeffrey

    2009-01-01

    In this work, we develop a fully automated method for the quality assessment prediction of protein structural models generated by structure prediction approaches such as fold recognition servers, or ab initio methods. The approach is based on fragment comparisons and a consensus Cα contact potential derived from the set of models to be assessed and was tested on CASP7 server models. The average Pearson linear correlation coefficient between predicted quality and model GDT-score per target is 0.83 for the 98 targets which is better than those of other quality assessment methods that participated in CASP7. Our method also outperforms the other methods by about 3% as assessed by the total GDT-score of the selected top models. PMID:18004783

  12. Kinetic analysis of non-isothermal solid-state reactions: multi-stage modeling without assumptions in the reaction mechanism.

    PubMed

    Pomerantsev, Alexey L; Kutsenova, Alla V; Rodionova, Oxana Ye

    2017-02-01

    A novel non-linear regression method for modeling non-isothermal thermogravimetric data is proposed. Experiments for several heating rates are analyzed simultaneously. The method is applicable to complex multi-stage processes when the number of stages is unknown. Prior knowledge of the type of kinetics is not required. The main idea is a consequent estimation of parameters when the overall model is successively changed from one level of modeling to another. At the first level, the Avrami-Erofeev functions are used. At the second level, the Sestak-Berggren functions are employed with the goal to broaden the overall model. The method is tested using both simulated and real-world data. A comparison of the proposed method with a recently published 'model-free' deconvolution method is presented.

  13. Multi-parametric centrality method for graph network models

    NASA Astrophysics Data System (ADS)

    Ivanov, Sergei Evgenievich; Gorlushkina, Natalia Nikolaevna; Ivanova, Lubov Nikolaevna

    2018-04-01

    Graph network models are investigated to determine the centrality, weights and significance of vertices. Typical centrality analysis uses only one property of the graph vertices at a time. In graph theory, centrality is commonly analyzed in terms of degree, closeness, betweenness, radiality, eccentricity, page-rank, status, Katz centrality and eigenvector centrality. We propose a new multi-parametric centrality method that includes a number of basic properties of a network member simultaneously. The mathematical model of the multi-parametric centrality method is developed, and its results are compared with those of the established centrality methods. To evaluate the results of the multi-parametric centrality method, a graph model with hundreds of vertices is analyzed. The comparative analysis shows the accuracy of the presented method, which simultaneously accounts for a number of basic properties of the vertices.

  14. Piecewise multivariate modelling of sequential metabolic profiling data.

    PubMed

    Rantalainen, Mattias; Cloarec, Olivier; Ebbels, Timothy M D; Lundstedt, Torbjörn; Nicholson, Jeremy K; Holmes, Elaine; Trygg, Johan

    2008-02-19

    Modelling the time-related behaviour of biological systems is essential for understanding their dynamic responses to perturbations. In metabolic profiling studies, the sampling rate and number of sampling points are often restricted due to experimental and biological constraints. A supervised multivariate modelling approach with the objective to model the time-related variation in the data for short and sparsely sampled time-series is described. A set of piecewise Orthogonal Projections to Latent Structures (OPLS) models are estimated, describing changes between successive time points. The individual OPLS models are linear, but the piecewise combination of several models accommodates modelling and prediction of changes which are non-linear with respect to the time course. We demonstrate the method on both simulated and metabolic profiling data, illustrating how time related changes are successfully modelled and predicted. The proposed method is effective for modelling and prediction of short and multivariate time series data. A key advantage of the method is model transparency, allowing easy interpretation of time-related variation in the data. The method provides a competitive complement to commonly applied multivariate methods such as OPLS and Principal Component Analysis (PCA) for modelling and analysis of short time-series data.

  15. Integral equation methods for computing likelihoods and their derivatives in the stochastic integrate-and-fire model.

    PubMed

    Paninski, Liam; Haith, Adrian; Szirtes, Gabor

    2008-02-01

    We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.

  16. A new adaptive estimation method of spacecraft thermal mathematical model with an ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    Akita, T.; Takaki, R.; Shima, E.

    2012-04-01

    An adaptive estimation method of spacecraft thermal mathematical model is presented. The method is based on the ensemble Kalman filter, which can effectively handle the nonlinearities contained in the thermal model. The state space equations of the thermal mathematical model are derived, where both the temperature and the uncertain thermal characteristic parameters are considered as state variables. In the method, the thermal characteristic parameters are automatically estimated as outputs of the filtered state variables, whereas in the usual thermal model correlation they are manually identified by experienced engineers using a trial-and-error approach. A numerical experiment on a simple small satellite is provided to verify the effectiveness of the presented method.
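
    A minimal sketch of a single ensemble Kalman filter analysis step with an augmented state (a temperature plus one uncertain thermal parameter); the numbers and the perturbed-observation variant are illustrative assumptions:

        # Hedged sketch: EnKF analysis step on an augmented [temperature, parameter] state.
        import numpy as np

        rng = np.random.default_rng(5)
        N = 100                                              # ensemble size
        ens = np.column_stack([rng.normal(300.0, 5.0, N),    # temperature [K]
                               rng.normal(0.8, 0.2, N)])     # uncertain parameter

        H = np.array([[1.0, 0.0]])    # only temperature is observed
        R = np.array([[4.0]])         # observation error variance
        y_obs = 305.0

        X = ens - ens.mean(axis=0)
        P = X.T @ X / (N - 1)                                # ensemble covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)         # Kalman gain

        for j in range(N):            # perturbed-observation update
            innov = y_obs + rng.normal(0.0, 2.0) - H @ ens[j]
            ens[j] = ens[j] + (K @ innov).ravel()

        # The parameter is corrected through its sampled correlation with temperature.
        print(ens.mean(axis=0))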

  17. Generalized multiscale finite-element method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai; Fu, Shubin; Gibson, Richard L.

    It is important to develop fast yet accurate numerical methods for seismic wave propagation to characterize complex geological structures and oil and gas reservoirs. However, the computational cost of conventional numerical modeling methods, such as finite-difference method and finite-element method, becomes prohibitively expensive when applied to very large models. We propose a Generalized Multiscale Finite-Element Method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media, where we construct basis functions from multiple local problems for both the boundaries and interior of a coarse node support or coarse element. The application of multiscale basis functions can capture the fine scale medium property variations, and allows us to greatly reduce the degrees of freedom that are required to implement the modeling compared with conventional finite-element method for wave equation, while restricting the error to low values. We formulate the continuous Galerkin and discontinuous Galerkin formulation of the multiscale method, both of which have pros and cons. Applications of the multiscale method to three heterogeneous models show that our multiscale method can effectively model the elastic wave propagation in anisotropic media with a significant reduction in the degrees of freedom in the modeling system.

  18. Generalized Multiscale Finite-Element Method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai, E-mail: kaigao87@gmail.com; Fu, Shubin, E-mail: shubinfu89@gmail.com; Gibson, Richard L., E-mail: gibson@tamu.edu

    It is important to develop fast yet accurate numerical methods for seismic wave propagation to characterize complex geological structures and oil and gas reservoirs. However, the computational cost of conventional numerical modeling methods, such as finite-difference method and finite-element method, becomes prohibitively expensive when applied to very large models. We propose a Generalized Multiscale Finite-Element Method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media, where we construct basis functions from multiple local problems for both the boundaries and interior of a coarse node support or coarse element. The application of multiscale basis functions can capture the fine scale medium property variations, and allows us to greatly reduce the degrees of freedom that are required to implement the modeling compared with conventional finite-element method for wave equation, while restricting the error to low values. We formulate the continuous Galerkin and discontinuous Galerkin formulation of the multiscale method, both of which have pros and cons. Applications of the multiscale method to three heterogeneous models show that our multiscale method can effectively model the elastic wave propagation in anisotropic media with a significant reduction in the degrees of freedom in the modeling system.

  19. Generalized multiscale finite-element method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media

    DOE PAGES

    Gao, Kai; Fu, Shubin; Gibson, Richard L.; ...

    2015-04-14

    It is important to develop fast yet accurate numerical methods for seismic wave propagation to characterize complex geological structures and oil and gas reservoirs. However, the computational cost of conventional numerical modeling methods, such as finite-difference method and finite-element method, becomes prohibitively expensive when applied to very large models. We propose a Generalized Multiscale Finite-Element Method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media, where we construct basis functions from multiple local problems for both the boundaries and interior of a coarse node support or coarse element. The application of multiscale basis functions can capture the fine scale medium property variations, and allows us to greatly reduce the degrees of freedom that are required to implement the modeling compared with conventional finite-element method for wave equation, while restricting the error to low values. We formulate the continuous Galerkin and discontinuous Galerkin formulation of the multiscale method, both of which have pros and cons. Applications of the multiscale method to three heterogeneous models show that our multiscale method can effectively model the elastic wave propagation in anisotropic media with a significant reduction in the degrees of freedom in the modeling system.

  20. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Mai, J.; Tolson, B.

    2017-12-01

    The increasing complexity and runtime of environmental models lead to the current situation in which the calibration of all model parameters, or the estimation of all of their uncertainties, is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While examining the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. At best, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes; bootstrapping, however, can itself become computationally expensive in the case of large model outputs and a high number of bootstraps. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards; the latter case enables the checking of already processed sensitivity indexes. To demonstrate the method's independence of the convergence testing method, we applied it to two widely used global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. The results show that the new frugal method is able to test the convergence, and therefore the reliability, of SA results in an efficient way. The appealing feature of this new technique is that no further model evaluation is necessary, which enables checking of already processed sensitivity results. This is one step towards reliable and transferable published sensitivity results.

  1. Numerical Modelling of Foundation Slabs with use of Schur Complement Method

    NASA Astrophysics Data System (ADS)

    Koktan, Jiří; Brožovský, Jiří

    2017-10-01

    The paper discusses numerical modelling of foundation slabs using advanced numerical approaches that are suitable for parallel processing. The solution is based on the Finite Element Method with slab-type elements. The subsoil is modelled with a Winkler-type contact model (as an alternative, a multi-parameter model can be used). The proposed modelling approach uses the Schur complement method to speed up the computations. The method is based on a special division of the analyzed model into several substructures. It adds some complexity to the numerical procedures, especially when subsoil models are used inside the finite element solution. On the other hand, this method makes possible a fast solution of large models, though it introduces further complications to the process. Thus, the main aim of this paper is to verify that such a method can be successfully used for this type of problem. The most suitable finite elements are discussed, along with the finite element mesh and the limitations on its construction for such a problem. The core approaches of the implementation of the Schur complement method for this type of problem are also presented. The proposed approach was implemented in the form of a computer program, which is briefly introduced. Results of example computations are presented that prove the speed-up of the solution; an important speed-up is shown even in the case of non-parallel processing, together with the ability to bypass size limitations of numerical models with the discussed approach.
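
    The algebra behind the substructuring can be sketched on a dense 2x2 block system, where the interior unknowns are eliminated onto the interface via the Schur complement (block sizes and matrices are toy values):

        # Hedged sketch: block elimination with the Schur complement.
        import numpy as np

        rng = np.random.default_rng(6)
        n1, n2 = 6, 4
        A = rng.normal(size=(n1, n1)); A = A @ A.T + n1 * np.eye(n1)  # interior block (SPD)
        B = rng.normal(size=(n1, n2))                                 # coupling block
        C = rng.normal(size=(n2, n2)); C = C @ C.T + n2 * np.eye(n2)  # interface block (SPD)
        f, g = rng.normal(size=n1), rng.normal(size=n2)

        S = C - B.T @ np.linalg.solve(A, B)          # Schur complement on the interface
        x2 = np.linalg.solve(S, g - B.T @ np.linalg.solve(A, f))
        x1 = np.linalg.solve(A, f - B @ x2)          # back-substitute for the interior

        K = np.block([[A, B], [B.T, C]])             # verify against the full solve
        x_full = np.linalg.solve(K, np.concatenate([f, g]))
        print(np.allclose(np.concatenate([x1, x2]), x_full))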

  2. Evaluating the utility of mid-infrared spectral subspaces for predicting soil properties.

    PubMed

    Sila, Andrew M; Shepherd, Keith D; Pokhariyal, Ganesh P

    2016-04-15

    We propose four methods for finding local subspaces in large spectral libraries: (a) cosine angle spectral matching; (b) hit quality index spectral matching; (c) self-organizing maps; and (d) archetypal analysis. We then evaluate prediction accuracies for global and subspace calibration models. These methods were tested on a mid-infrared spectral library containing 1907 soil samples collected from 19 different countries under the Africa Soil Information Service project. Calibration models for pH, Mehlich-3 Ca, Mehlich-3 Al, total carbon and clay soil properties were developed for the whole library and for each subspace. The root mean square error of prediction, computed on a one-third-holdout validation set, was used to evaluate the predictive performance of subspace and global models. The effect of pretreating the spectra was tested for 1st and 2nd derivative Savitzky-Golay algorithms, multiplicative scatter correction, standard normal variate, and standard normal variate followed by detrending. In summary, the results show that the global models outperformed the subspace models; we therefore conclude that global models are more accurate than local models except in a few cases. For instance, sand and clay root mean square error values from local models built with the archetypal analysis method were 50% poorer than the global models, except for subspace models obtained using multiplicative scatter-corrected spectra, which were 12% better. However, the subspace approach provides novel methods for discovering data patterns that may exist in large spectral libraries.
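
    Method (a), cosine angle spectral matching, reduces to a nearest-neighbour search in the library; a sketch on random stand-in spectra (the library size matches the abstract, but the cutoff of 200 neighbours is an arbitrary assumption):

        # Hedged sketch: cosine-angle matching to select a local calibration subspace.
        import numpy as np

        rng = np.random.default_rng(8)
        library = rng.normal(size=(1907, 1700))   # stand-in for the MIR library
        target = rng.normal(size=1700)            # spectrum of the new sample

        cos = library @ target / (np.linalg.norm(library, axis=1)
                                  * np.linalg.norm(target))
        subspace = np.argsort(cos)[-200:]         # indices of most similar spectra
        print(subspace.shape, cos[subspace].min())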

  3. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    PubMed

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank - up to 26mm; this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.

  4. Parameters estimation using the first passage times method in a jump-diffusion model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khaldi, K., E-mail: kkhaldi@umbb.dz; LIMOSE Laboratory, Boumerdes University, 35000; Meddahi, S., E-mail: samia.meddahi@gmail.com

    2016-06-02

    This paper makes two contributions: (1) it presents a new method, a generalization of the first passage time (FPT) method to all passage times (the GPT method), for estimating the parameters of a stochastic jump-diffusion process; and (2) using a time series of gold share prices, it compares the empirical estimates and forecasts obtained with the GPT method against those obtained with the method of moments and the FPT method applied to the Merton jump-diffusion (MJD) model.
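
    A minimal sketch of the Merton jump-diffusion dynamics whose parameters are being estimated, assuming log-normal jump sizes; parameter names are illustrative:

    ```python
    import numpy as np

    def simulate_mjd(s0, mu, sigma, lam, mu_j, sigma_j, t, n_steps, seed=None):
        """One path of the Merton jump-diffusion model
        dS/S = mu dt + sigma dW + (J - 1) dN, with log-normal jump sizes J."""
        rng = np.random.default_rng(seed)
        dt = t / n_steps
        log_s = np.empty(n_steps + 1)
        log_s[0] = np.log(s0)
        for i in range(n_steps):
            drift = (mu - 0.5 * sigma ** 2) * dt
            diffusion = sigma * np.sqrt(dt) * rng.standard_normal()
            n_jumps = rng.poisson(lam * dt)          # jumps arriving in dt
            jump = (rng.normal(n_jumps * mu_j, np.sqrt(n_jumps) * sigma_j)
                    if n_jumps > 0 else 0.0)         # summed log jump sizes
            log_s[i + 1] = log_s[i] + drift + diffusion + jump
        return np.exp(log_s)

    # path = simulate_mjd(100.0, 0.05, 0.2, 1.0, -0.05, 0.1, 1.0, 252, seed=42)
    ```

    Simulated paths like these are typically used to verify that an estimator (moments, FPT, or GPT) recovers known parameters before applying it to market data.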

  5. Latent Growth and Dynamic Structural Equation Models.

    PubMed

    Grimm, Kevin J; Ram, Nilam

    2018-05-07

    Latent growth models make up a class of methods to study within-person change: how it progresses, how it differs across individuals, what its determinants are, and what its consequences are. Latent growth methods have been applied in many domains to examine average and differential responses to interventions and treatments. In this review, we introduce the growth modeling approach to studying change by presenting different models of change and interpretations of their model parameters. We then apply these methods to examining sex differences in the development of binge drinking behavior through adolescence and into adulthood. Advances in growth modeling methods are then discussed, including inherently nonlinear growth models, derivative specification of growth models, and latent change score models to study stochastic change processes. We conclude with relevant design issues of longitudinal studies and considerations for the analysis of longitudinal data.

  6. Testing prediction methods: Earthquake clustering versus the Poisson model

    USGS Publications Warehouse

    Michael, A.J.

    1997-01-01

    Testing earthquake prediction methods requires statistical techniques that compare observed success to random chance. One technique is to produce simulated earthquake catalogs and measure the relative success of predicting real and simulated earthquakes. The accuracy of these tests depends on the validity of the statistical model used to simulate the earthquakes. This study tests the effect of clustering in the statistical earthquake model on the results. Three simulation models were used to produce significance levels for a VLF earthquake prediction method. As the degree of simulated clustering increases, the statistical significance drops. Hence, the use of a seismicity model with insufficient clustering can lead to overly optimistic results. A successful method must pass the statistical tests with a model that fully replicates the observed clustering. However, a method can be rejected based on tests with a model that contains insufficient clustering. U.S. copyright. Published in 1997 by the American Geophysical Union.
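
    A sketch of the kind of significance test described, using only the Poisson (unclustered) seismicity model; as the abstract cautions, a simulator with realistic clustering would be needed before declaring a prediction method successful. All names are illustrative:

    ```python
    import numpy as np

    def chance_probability(n_hits_real, alarm_windows, rate, t_total,
                           n_sims=10000, seed=None):
        """Fraction of random Poisson catalogs in which the alarms capture
        at least as many earthquakes as they captured in the real catalog.

        alarm_windows : list of (start, end) alarm times
        rate          : mean earthquakes per unit time
        """
        rng = np.random.default_rng(seed)
        hits = np.zeros(n_sims, dtype=int)
        for i in range(n_sims):
            times = rng.uniform(0.0, t_total, rng.poisson(rate * t_total))
            for start, end in alarm_windows:
                hits[i] += np.count_nonzero((times >= start) & (times < end))
        return np.mean(hits >= n_hits_real)
    ```

    Replacing the uniform times with a clustered simulator (e.g., one that adds aftershock sequences) raises the chance success rate and therefore weakens the apparent significance, which is the paper's central point.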

  7. Modeling hemoglobin at optical frequency using the unconditionally stable fundamental ADI-FDTD method.

    PubMed

    Heh, Ding Yu; Tan, Eng Leong

    2011-04-12

    This paper presents the modeling of hemoglobin at optical frequency (250 nm - 1000 nm) using the unconditionally stable fundamental alternating-direction-implicit finite-difference time-domain (FADI-FDTD) method. An accurate model based on complex conjugate pole-residue pairs is proposed to model the complex permittivity of hemoglobin at optical frequency. Two hemoglobin concentrations at 15 g/dL and 33 g/dL are considered. The model is then incorporated into the FADI-FDTD method for solving electromagnetic problems involving interaction of light with hemoglobin. The computation of transmission and reflection coefficients of a half space hemoglobin medium using the FADI-FDTD validates the accuracy of our model and method. The specific absorption rate (SAR) distribution of human capillary at optical frequency is also shown. While maintaining accuracy, the unconditionally stable FADI-FDTD method exhibits high efficiency in modeling hemoglobin.
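
    The complex conjugate pole-residue form being fitted is compact enough to sketch; the actual pole and residue values for hemoglobin are given in the paper and are not reproduced here:

    ```python
    import numpy as np

    def permittivity_ccpr(omega, eps_inf, poles, residues):
        """Complex permittivity from complex-conjugate pole-residue pairs:
        eps(w) = eps_inf + sum_k [ c_k/(jw - a_k) + conj(c_k)/(jw - conj(a_k)) ]
        """
        jw = 1j * np.asarray(omega, dtype=float)
        eps = np.full(jw.shape, eps_inf, dtype=complex)
        for a, c in zip(poles, residues):
            eps += c / (jw - a) + np.conj(c) / (jw - np.conj(a))
        return eps
    ```

    This rational form is convenient because each pole-residue pair maps onto a simple auxiliary differential equation inside an FDTD update.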

  8. Modeling hemoglobin at optical frequency using the unconditionally stable fundamental ADI-FDTD method

    PubMed Central

    Heh, Ding Yu; Tan, Eng Leong

    2011-01-01

    This paper presents the modeling of hemoglobin at optical frequency (250 nm – 1000 nm) using the unconditionally stable fundamental alternating-direction-implicit finite-difference time-domain (FADI-FDTD) method. An accurate model based on complex conjugate pole-residue pairs is proposed to model the complex permittivity of hemoglobin at optical frequency. Two hemoglobin concentrations at 15 g/dL and 33 g/dL are considered. The model is then incorporated into the FADI-FDTD method for solving electromagnetic problems involving interaction of light with hemoglobin. The computation of transmission and reflection coefficients of a half space hemoglobin medium using the FADI-FDTD validates the accuracy of our model and method. The specific absorption rate (SAR) distribution of human capillary at optical frequency is also shown. While maintaining accuracy, the unconditionally stable FADI-FDTD method exhibits high efficiency in modeling hemoglobin. PMID:21559129

  9. Spatial modelling of disease using data- and knowledge-driven approaches.

    PubMed

    Stevens, Kim B; Pfeiffer, Dirk U

    2011-09-01

    The purpose of spatial modelling in animal and public health is three-fold: describing existing spatial patterns of risk, attempting to understand the biological mechanisms that lead to disease occurrence and predicting what will happen in the medium to long-term future (temporal prediction) or in different geographical areas (spatial prediction). Traditional methods for temporal and spatial predictions include general and generalized linear models (GLM), generalized additive models (GAM) and Bayesian estimation methods. However, such models require both disease presence and absence data which are not always easy to obtain. Novel spatial modelling methods such as maximum entropy (MAXENT) and the genetic algorithm for rule set production (GARP) require only disease presence data and have been used extensively in the fields of ecology and conservation, to model species distribution and habitat suitability. Other methods, such as multicriteria decision analysis (MCDA), use knowledge of the causal factors of disease occurrence to identify areas potentially suitable for disease. In addition to their less restrictive data requirements, some of these novel methods have been shown to outperform traditional statistical methods in predictive ability (Elith et al., 2006). This review paper provides details of some of these novel methods for mapping disease distribution, highlights their advantages and limitations, and identifies studies which have used the methods to model various aspects of disease distribution. Copyright © 2011. Published by Elsevier Ltd.

  10. Estimating habitat volume of living resources using three-dimensional circulation and biogeochemical models

    NASA Astrophysics Data System (ADS)

    Smith, Katharine A.; Schlag, Zachary; North, Elizabeth W.

    2018-07-01

    Coupled three-dimensional circulation and biogeochemical models predict changes in water properties that can be used to define fish habitat, including physiologically important parameters such as temperature, salinity, and dissolved oxygen. However, methods for calculating the volume of habitat defined by the intersection of multiple water properties are not well established for coupled three-dimensional models. The objectives of this research were to examine multiple methods for calculating habitat volume from three-dimensional model predictions, select the most robust approach, and provide an example application of the technique. Three methods were assessed: the "Step", "Ruled Surface", and "Pentahedron" methods, the last of which was developed as part of this research. Results indicate that the analytical Pentahedron method is exact, computationally efficient, and preserves continuity in water properties between adjacent grid cells. As an example application, the Pentahedron method was implemented within the Habitat Volume Model (HabVol) using output from a circulation model with an Arakawa C-grid and physiological tolerances of juvenile striped bass (Morone saxatilis). This application demonstrates that the analytical Pentahedron method can be successfully applied to calculate habitat volume using output from coupled three-dimensional circulation and biogeochemical models, and it indicates that the Pentahedron method has wide application to aquatic and marine systems for which these models exist and physiological tolerances of organisms are known.
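
    For orientation, here is a sketch of the simplest of the three approaches, the "Step" method, which counts whole grid cells whose properties all fall inside the tolerance limits; the Pentahedron method improves on this by locating the property surfaces analytically within cells. Field names and the three-property choice are illustrative:

    ```python
    import numpy as np

    def habitat_volume_step(temp, sal, oxy, cell_volume, limits):
        """Sum the volume of grid cells meeting every tolerance limit.

        temp, sal, oxy : 3-D arrays of water properties on the model grid
        cell_volume    : 3-D array of grid-cell volumes (m^3)
        limits         : {'temp': (lo, hi), 'sal': (lo, hi), 'oxy': (lo, None)}
        """
        ok = np.ones(temp.shape, dtype=bool)
        for field, (lo, hi) in zip((temp, sal, oxy),
                                   (limits['temp'], limits['sal'], limits['oxy'])):
            if lo is not None:
                ok &= field >= lo
            if hi is not None:
                ok &= field <= hi
        return float(np.sum(cell_volume[ok]))
    ```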

  11. Three-dimensional full waveform inversion of short-period teleseismic wavefields based upon the SEM-DSM hybrid method

    NASA Astrophysics Data System (ADS)

    Monteiller, Vadim; Chevrot, Sébastien; Komatitsch, Dimitri; Wang, Yi

    2015-08-01

    We present a method for high-resolution imaging of lithospheric structures based on full waveform inversion of teleseismic waveforms. We model the propagation of seismic waves using our recently developed direct solution method/spectral-element method hybrid technique, which allows us to simulate the propagation of short-period teleseismic waves through a regional 3-D model. We implement an iterative quasi-Newton method based upon the L-BFGS algorithm, where the gradient of the misfit function is computed using the adjoint-state method. Compared to gradient or conjugate-gradient methods, the L-BFGS algorithm has a much faster convergence rate. We illustrate the potential of this method on a synthetic test case that consists of a crustal model with a crustal discontinuity at 25 km depth and a sharp Moho jump. This model contains short- and long-wavelength heterogeneities along the lateral and vertical directions. The iterative inversion starts from a smooth 1-D model derived from the IASP91 reference Earth model. We invert both radial and vertical component waveforms, starting from long-period signals filtered at 10 s and gradually decreasing the cut-off period down to 1.25 s. This multiscale algorithm quickly converges towards a model that is very close to the true model, in contrast to inversions involving short-period waveforms only, which always get trapped into a local minimum of the cost function.
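
    The inversion loop reduces to gradient-based minimization of a waveform misfit, with the gradient supplied by the adjoint-state method. A toy sketch using SciPy's L-BFGS driver, with a linear forward operator standing in for the SEM-DSM wave simulation and its adjoint:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def misfit_and_gradient(m, d_obs, forward, adjoint):
        """0.5 * ||forward(m) - d_obs||^2 and its gradient via the adjoint."""
        r = forward(m) - d_obs           # residual: synthetic minus observed
        return 0.5 * np.dot(r, r), adjoint(r)

    # Linear toy problem: forward(m) = A m, so the adjoint is A^T r.
    A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    m_true = np.array([1.0, -0.5])
    d_obs = A @ m_true

    res = minimize(misfit_and_gradient, np.zeros(2),
                   args=(d_obs, lambda m: A @ m, lambda r: A.T @ r),
                   jac=True, method='L-BFGS-B')
    print(res.x)   # recovers m_true
    ```

    In the full problem, the multiscale strategy simply repeats this minimization while progressively admitting shorter-period data into the misfit.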

  12. Computer support for physiological cell modelling using an ontology on cell physiology.

    PubMed

    Takao, Shimayoshi; Kazuhiro, Komurasaki; Akira, Amano; Takeshi, Iwashita; Masanori, Kanazawa; Tetsuya, Matsuda

    2006-01-01

    The development of electrophysiological whole cell models to support the understanding of biological mechanisms is increasing rapidly. Due to the complexity of biological systems, comprehensive cell models, which are composed of many imported sub-models of functional elements, can become quite complicated as well, making modification by computer difficult. Here, we propose computer support for making structural changes to cell models, employing the markup languages CellML and our original PMSML (physiological model structure markup language), in addition to a new ontology for cell physiological modelling. In particular, a method to make references from CellML files to the ontology and a method to assist manipulation of model structures using markup languages together with the ontology are reported. Using these methods, three software utilities, including a graphical model editor, are implemented. Experimental results showed that these methods are effective for the modification of electrophysiological models.

  13. Determining Reduced Order Models for Optimal Stochastic Reduced Order Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonney, Matthew S.; Brake, Matthew R.W.

    2015-08-01

    The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against each other and against the truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite difference, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses hyper-dual numbers to determine the sensitivities, and a meta-model method that uses the hyper-dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction being the least accurate. The models are also compared based on the time required to evaluate each model, where the meta-model requires the least computation time by a significant amount. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use depends on the availability of the high-fidelity model and how many evaluations can be performed. The output distribution is examined using a large Monte Carlo simulation along with a reduced simulation using Latin hypercube sampling and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to an exhaustive sampling for the majority of methods.
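
    The hyper-dual sensitivity idea is easiest to see with ordinary (first-order) dual numbers, which deliver exact first derivatives with no step-size error; hyper-dual numbers extend the same trick to second derivatives. A minimal sketch, not the report's implementation:

    ```python
    class Dual:
        """Dual number a + b*eps with eps**2 == 0; the eps coefficient
        carries an exact first derivative through arithmetic."""
        def __init__(self, value, deriv=0.0):
            self.value, self.deriv = value, deriv
        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.value + other.value, self.deriv + other.deriv)
        __radd__ = __add__
        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.value * other.value,
                        self.value * other.deriv + self.deriv * other.value)
        __rmul__ = __mul__

    # d/dx of f(x) = x*x + 3*x at x = 2 is 2*x + 3 = 7:
    x = Dual(2.0, 1.0)            # seed the derivative direction
    f = x * x + 3 * x
    print(f.value, f.deriv)       # 10.0 7.0
    ```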

  14. Assessment of parametric uncertainty for groundwater reactive transport modeling

    USGS Publications Warehouse

    Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun

    2014-01-01

    The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, the predictive performance of the formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(ZS)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and the Morris- and DREAM(ZS)-based global sensitivity analyses yield almost identical rankings of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.

  15. Beauty and the beast: Some perspectives on efficient model analysis, surrogate models, and the future of modeling

    NASA Astrophysics Data System (ADS)

    Hill, M. C.; Jakeman, J.; Razavi, S.; Tolson, B.

    2015-12-01

    For many environmental systems, model runtimes have remained very long as more capable computers have been used to add more processes and more time and space discretization. Scientists have also added more parameters and kinds of observations, and many model runs are needed to explore the models. Computational demand equals run time multiplied by the number of model runs divided by parallelization opportunities. Model exploration is conducted using sensitivity analysis, optimization, and uncertainty quantification. Sensitivity analysis is used to reveal the consequences of what may be very complex simulated relations, optimization is used to identify parameter values that fit the data best, or at least better, and uncertainty quantification is used to evaluate the precision of simulated results. The long execution times make such analyses a challenge. Methods for addressing this challenge include computationally frugal analysis of the demanding original model and a number of ingenious surrogate modeling methods. Both commonly use about 50-100 runs of the demanding original model. In this talk we consider the tradeoffs between (1) original model development decisions, (2) computationally frugal analysis of the original model, and (3) using many model runs of the fast surrogate model. Some questions of interest are as follows. If the added processes and discretization invested in (1) are compared with the restrictions and approximations in model analysis produced by long model execution times, is there a net benefit relative to the goals of the model? Are there changes to the numerical methods that could reduce the computational demands while giving up less fidelity than is compromised by using computationally frugal methods or surrogate models for model analysis? Both the computationally frugal methods and surrogate models require that the solution of interest be a smooth function of the parameters of interest. How does the information obtained from the local methods typical of (2) compare with that from the globally averaged methods typical of (3) for typical systems? The discussion will use examples of the response of the Greenland glacier to global warming and of surface water and groundwater modeling.

  16. Methods of the aerodynamical experiments with simulation of massflow-traction ratio of the power unit

    NASA Astrophysics Data System (ADS)

    Lokotko, A. V.

    2016-10-01

    Modeling the massflow-traction characteristics of the power unit (PU) is of interest both in the study of the aerodynamic characteristics (ADC) of aircraft models with full dynamic similarity and in the study of PU interference effects. These studies require a number of processing methods. These include: 1) a method for delivering the high-pressure gas for the model-engine jets to the sensitive part of the aerodynamic balance; 2) a method for estimating the accuracy and reliability of the thrust measurement generated by the jet device; 3) a method for implementing the PU simulator while modeling the external contours of the nacelle and the conditions at the inlet and outlet; 4) a method for determining the thrust of the PU simulator; 5) a method for determining the interference effect of the operating power unit on the ADC of the model; 6) a method for producing the hot jets of jet engines. The paper examines the methodology implemented at ITAM as applied to testing in the T-313 supersonic wind tunnel.

  17. Hybrid surrogate-model-based multi-fidelity efficient global optimization applied to helicopter blade design

    NASA Astrophysics Data System (ADS)

    Ariyarit, Atthaphon; Sugiura, Masahiko; Tanabe, Yasutada; Kanazaki, Masahiro

    2018-06-01

    A multi-fidelity optimization technique based on an efficient global optimization process with a hybrid surrogate model is investigated for solving real-world design problems. The model constructs the local deviation using the kriging method and the global model using a radial basis function. The expected improvement is computed to decide on additional samples that can improve the model. The approach was first investigated by solving mathematical test problems. The results were compared with optimization results from an ordinary kriging method and a co-kriging method, and the proposed method produced the best solution. The proposed method was also applied to aerodynamic design optimization of helicopter blades to obtain the maximum blade efficiency. The optimal shape obtained by the proposed method achieved performance almost equivalent to that obtained by single-fidelity optimization based on high-fidelity evaluations. Comparing all three methods, the proposed method required the lowest total number of high-fidelity evaluation runs to obtain a converged solution.
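
    The sampling loop hinges on the expected-improvement criterion computed from the surrogate's predictive mean and variance; a minimal sketch for a minimization problem, assuming a Gaussian predictive distribution:

    ```python
    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, sigma, f_best):
        """Expected improvement at candidate points, for minimization.

        mu, sigma : surrogate predictive mean and standard deviation
        f_best    : best (lowest) objective value sampled so far
        """
        sigma = np.maximum(sigma, 1e-12)      # guard against zero variance
        z = (f_best - mu) / sigma
        return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    ```

    The next high-fidelity sample is taken where this quantity is largest, balancing exploitation (low mu) against exploration (high sigma).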

  18. The Mediated MIMIC Model for Understanding the Underlying Mechanism of DIF

    ERIC Educational Resources Information Center

    Cheng, Ying; Shao, Can; Lathrop, Quinn N.

    2016-01-01

    Due to its flexibility, the multiple-indicator, multiple-causes (MIMIC) model has become an increasingly popular method for the detection of differential item functioning (DIF). In this article, we propose the mediated MIMIC model method to uncover the underlying mechanism of DIF. This method extends the usual MIMIC model by including one variable…

  19. Thermal lattice BGK models for fluid dynamics

    NASA Astrophysics Data System (ADS)

    Huang, Jian

    1998-11-01

    As an alternative in modeling fluid dynamics, the Lattice Boltzmann method has attracted considerable attention. In this thesis, we shall present a general form of thermal Lattice BGK. This form can handle large differences in density and temperature, and high Mach numbers. This generalized method can easily model gases with different adiabatic index values. The numerical transport coefficients of this model are estimated both theoretically and numerically. Their dependency on the sizes of the integration steps in time and space, and on the flow velocity and temperature, are studied and compared with other established CFD methods. This study shows that the numerical viscosity of the Lattice Boltzmann method depends linearly on the space interval, and on the flow velocity as well for supersonic flow. This indicates the method's limitation in modeling high Reynolds number compressible thermal flow. On the other hand, the Lattice Boltzmann method shows promise in modeling micro-flows, i.e., gas flows in micron-sized devices. A two-dimensional code has been developed based on the conventional thermal lattice BGK model, with some modifications and extensions for micro-flows and wall-fluid interactions. Pressure-driven micro-channel flow has been simulated. Results are compared with experiments and simulations using other methods, such as a spectral element code using slip boundary conditions with the Navier-Stokes equations and a Direct Simulation Monte Carlo (DSMC) method.

  20. Comparison of PDF and Moment Closure Methods in the Modeling of Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Norris, Andrew T.; Hsu, Andrew T.

    1994-01-01

    In modeling turbulent reactive flows, Probability Density Function (PDF) methods have an advantage over the more traditional moment closure schemes in that the PDF formulation treats the chemical reaction source terms exactly, while moment closure methods are required to model the mean reaction rate. The common model used is the laminar chemistry approximation, where the effects of turbulence on the reaction are assumed negligible. For flows with low turbulence levels and fast chemistry, the difference between the two methods can be expected to be small. However for flows with finite rate chemistry and high turbulence levels, significant errors can be expected in the moment closure method. In this paper, the ability of the PDF method and the moment closure scheme to accurately model a turbulent reacting flow is tested. To accomplish this, both schemes were used to model a CO/H2/N2-air piloted diffusion flame near extinction. Identical thermochemistry, turbulence models, initial conditions and boundary conditions are employed to ensure a consistent comparison can be made. The results of the two methods are compared to experimental data as well as to each other. The comparison reveals that the PDF method provides good agreement with the experimental data, while the moment closure scheme incorrectly shows a broad, laminar-like flame structure.

  1. Three methods to construct predictive models using logistic regression and likelihood ratios to facilitate adjustment for pretest probability give similar results.

    PubMed

    Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les

    2008-01-01

    To compare three predictive models based on logistic regression to estimate adjusted likelihood ratios allowing for interdependency between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models, a statistical extension of the methods, and an application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. The Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods because it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes' approach when it is important to adjust for dependency of test errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.
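
    The machinery common to all three models is the conversion between probabilities and odds; a minimal sketch of the unadjusted (independence Bayes') calculation that the adjusted-likelihood-ratio methods refine:

    ```python
    import numpy as np

    def posttest_probability(pretest_prob, likelihood_ratios):
        """Combine a pretest probability with test likelihood ratios on the
        odds scale; adjusted LRs from a logistic model can be substituted
        to allow for correlated tests."""
        odds = pretest_prob / (1.0 - pretest_prob)
        odds *= np.prod(likelihood_ratios)
        return odds / (1.0 + odds)

    # Pretest probability 0.20 and two positive findings with LRs 3 and 1.8:
    print(posttest_probability(0.20, [3.0, 1.8]))   # about 0.57
    ```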

  2. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique.

    PubMed

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan; Kim, Hae-Young

    2014-03-01

    This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. The accuracy and precision of PUT dental models for evaluating the performance of oral scanner and subtractive RP technology was acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models.

  3. An efficient temporal database design method based on EER

    NASA Astrophysics Data System (ADS)

    Liu, Zhi; Huang, Jiping; Miao, Hua

    2007-12-01

    Many existing methods of modeling temporal information are based on the logical model, which makes relational schema optimization more difficult and more complicated. In this paper, based on the conventional EER model, the authors attempt to analyse and abstract temporal information in the conceptual modelling phase according to the concrete requirements for history information. A temporal data model named BTEER is then presented. BTEER not only retains all the design ideas and methods of EER, which gives BTEER good upward compatibility, but also effectively supports the modelling of valid time and transaction time. In addition, BTEER can be transformed to EER easily and automatically. Practice shows that this method models temporal information well.

  4. Does rational selection of training and test sets improve the outcome of QSAR modeling?

    PubMed

    Martin, Todd M; Harten, Paul; Young, Douglas M; Muratov, Eugene N; Golbraikh, Alexander; Zhu, Hao; Tropsha, Alexander

    2012-10-22

    Prior to using a quantitative structure activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external data set, the best way to validate the predictive ability of a model is to perform its statistical external validation. In statistical external validation, the overall data set is divided into training and test sets. Commonly, this splitting is performed using random division. Rational splitting methods can divide data sets into training and test sets in an intelligent fashion. The purpose of this study was to determine whether rational division methods lead to more predictive models compared to random division. A special data splitting procedure was used to facilitate the comparison between random and rational division methods. For each toxicity end point, the overall data set was divided into a modeling set (80% of the overall set) and an external evaluation set (20% of the overall set) using random division. The modeling set was then subdivided into a training set (80% of the modeling set) and a test set (20% of the modeling set) using rational division methods and by using random division. The Kennard-Stone, minimal test set dissimilarity, and sphere exclusion algorithms were used as the rational division methods. The hierarchical clustering, random forest, and k-nearest neighbor (kNN) methods were used to develop QSAR models based on the training sets. For kNN QSAR, multiple training and test sets were generated, and multiple QSAR models were built. The results of this study indicate that models based on rational division methods generate better statistical results for the test sets than models based on random division, but the predictive power of both types of models is comparable.
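
    Of the rational division methods compared, Kennard-Stone is the most widely known: it greedily selects samples that maximize the minimum distance to everything already selected. A minimal sketch (O(n^2) memory, so suitable for modest data sets):

    ```python
    import numpy as np

    def kennard_stone(X, n_select):
        """Return indices of a training set chosen by the Kennard-Stone
        algorithm from descriptor matrix X of shape (n_samples, n_features)."""
        dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        i, j = np.unravel_index(np.argmax(dist), dist.shape)
        selected = [int(i), int(j)]          # start from the two farthest points
        while len(selected) < n_select:
            min_d = dist[:, selected].min(axis=1)
            min_d[selected] = -1.0           # never re-pick a sample
            selected.append(int(np.argmax(min_d)))
        return np.array(selected)
    ```

    The remaining samples form the test set, which by construction falls inside the descriptor space spanned by the training set.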

  5. Discrete-time modelling of musical instruments

    NASA Astrophysics Data System (ADS)

    Välimäki, Vesa; Pakarinen, Jyri; Erkut, Cumhur; Karjalainen, Matti

    2006-01-01

    This article describes physical modelling techniques that can be used for simulating musical instruments. The methods are closely related to digital signal processing. They discretize the system with respect to time, because the aim is to run the simulation using a computer. The physics-based modelling methods can be classified as mass-spring, modal, wave digital, finite difference, digital waveguide and source-filter models. We present the basic theory and a discussion on possible extensions for each modelling technique. For some methods, a simple model example is chosen from the existing literature demonstrating a typical use of the method. For instance, in the case of the digital waveguide modelling technique a vibrating string model is discussed, and in the case of the wave digital filter technique we present a classical piano hammer model. We tackle some nonlinear and time-varying models and include new results on the digital waveguide modelling of a nonlinear string. Current trends and future directions in physical modelling of musical instruments are discussed.
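
    As a taste of the digital waveguide family mentioned above, here is the classic Karplus-Strong plucked-string algorithm: a delay line (whose length sets the pitch) closed by a one-pole averaging loss filter. Parameter values are illustrative:

    ```python
    import numpy as np

    def plucked_string(freq, duration, sample_rate=44100, decay=0.996):
        """Karplus-Strong synthesis of a plucked string."""
        n_delay = int(sample_rate / freq)        # delay length sets the pitch
        buf = np.random.uniform(-1, 1, n_delay)  # noise burst is the 'pluck'
        out = np.empty(int(sample_rate * duration))
        for i in range(out.size):
            out[i] = buf[i % n_delay]
            # loop filter: average two adjacent samples with a slight loss
            buf[i % n_delay] = decay * 0.5 * (buf[i % n_delay]
                                              + buf[(i + 1) % n_delay])
        return out

    samples = plucked_string(440.0, 1.0)   # one second of an A4 string
    ```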

  6. Maximum parsimony, substitution model, and probability phylogenetic trees.

    PubMed

    Weng, J F; Thomas, D A; Mareels, I

    2011-01-01

    The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM), and Maximum Likelihood (ML), of which the MP method is the most well-studied and popular. In the MP method the optimization criterion is the number of substitutions of the nucleotides, computed from the differences in the investigated nucleotide sequences. However, the MP method is often criticized because it only counts the substitutions observable at the current time, omitting all the unobservable substitutions that really occurred in the evolutionary history. In order to take the unobservable substitutions into account, substitution models have been established and are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees, and the trees reconstructed in this model are called probability phylogenetic trees. One of the advantages of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.

  7. Full-Potential Modeling of Blade-Vortex Interactions. Degree awarded by George Washington Univ., Feb. 1987

    NASA Technical Reports Server (NTRS)

    Jones, Henry E.

    1997-01-01

    A study of the full-potential modeling of a blade-vortex interaction was made. A primary goal of this study was to investigate the effectiveness of the various methods of modeling the vortex. The model problem restricts the interaction to that of an infinite wing with an infinite line vortex moving parallel to its leading edge. This problem provides a convenient testing ground for the various methods of modeling the vortex while retaining the essential physics of the full three-dimensional interaction. A full-potential algorithm specifically tailored to solve the blade-vortex interaction (BVI) was developed to solve this problem. The basic algorithm was modified to include the effect of a vortex passing near the airfoil. Four different methods of modeling the vortex were used: (1) the angle-of-attack method, (2) the lifting-surface method, (3) the branch-cut method, and (4) the split-potential method. A side-by-side comparison of the four models was conducted. These comparisons included comparing generated velocity fields, a subcritical interaction, and a critical interaction. The subcritical and critical interactions are compared with experimentally generated results. The split-potential model was used to make a survey of some of the more critical parameters which affect the BVI.

  8. Multiscale methods for gore curvature calculations from FSI modeling of spacecraft parachutes

    NASA Astrophysics Data System (ADS)

    Takizawa, Kenji; Tezduyar, Tayfun E.; Kolesar, Ryan; Boswell, Cody; Kanai, Taro; Montel, Kenneth

    2014-12-01

    There are now some sophisticated and powerful methods for computer modeling of parachutes. These methods are capable of addressing some of the most formidable computational challenges encountered in parachute modeling, including fluid-structure interaction (FSI) between the parachute and air flow, design complexities such as those seen in spacecraft parachutes, and operational complexities such as use in clusters and disreefing. One should be able to extract from a reliable full-scale parachute modeling any data or analysis needed. In some cases, however, the parachute engineers may want to perform quickly an extended or repetitive analysis with methods based on simplified models. Some of the data needed by a simplified model can very effectively be extracted from a full-scale computer modeling that serves as a pilot. A good example of such data is the circumferential curvature of a parachute gore, where a gore is the slice of the parachute canopy between two radial reinforcement cables running from the parachute vent to the skirt. We present the multiscale methods we devised for gore curvature calculation from FSI modeling of spacecraft parachutes. The methods include those based on the multiscale sequentially-coupled FSI technique and using NURBS meshes. We show how the methods work for the fully-open and two reefed stages of the Orion spacecraft main and drogue parachutes.

  9. An Investigation of Two Finite Element Modeling Solutions for Biomechanical Simulation Using a Case Study of a Mandibular Bone.

    PubMed

    Liu, Yun-Feng; Fan, Ying-Ying; Dong, Hui-Yue; Zhang, Jian-Xing

    2017-12-01

    The method used in biomechanical modeling for finite element method (FEM) analysis needs to deliver accurate results. There are currently two solutions used in FEM modeling of biomedical models of human bone from computerized tomography (CT) images: one is based on a triangular mesh and the other is based on the parametric surface model, which is more popular in practice. The outline and modeling procedures of the two solutions are compared and analyzed. Using a mandibular bone as an example, several key modeling steps are then discussed in detail, and the FEM calculation was conducted. Numerical calculation results based on the models derived from the two methods, including stress, strain, and displacement, are compared and evaluated in relation to accuracy and validity. Moreover, a comprehensive comparison of the two solutions is listed. The parametric surface based method is more helpful when using powerful design tools in computer-aided design (CAD) software, but the triangular mesh based method is more robust and efficient.

  10. A forward model-based validation of cardiovascular system identification

    NASA Technical Reports Server (NTRS)

    Mukkamala, R.; Cohen, R. J.

    2001-01-01

    We present a theoretical evaluation of a cardiovascular system identification method that we previously developed for the analysis of beat-to-beat fluctuations in noninvasively measured heart rate, arterial blood pressure, and instantaneous lung volume. The method provides a dynamical characterization of the important autonomic and mechanical mechanisms responsible for coupling the fluctuations (inverse modeling). To carry out the evaluation, we developed a computational model of the cardiovascular system capable of generating realistic beat-to-beat variability (forward modeling). We applied the method to data generated from the forward model and compared the resulting estimated dynamics with the actual dynamics of the forward model, which were either precisely known or easily determined. We found that the estimated dynamics corresponded to the actual dynamics and that this correspondence was robust to forward model uncertainty. We also demonstrated the sensitivity of the method in detecting small changes in parameters characterizing autonomic function in the forward model. These results provide confidence in the performance of the cardiovascular system identification method when applied to experimental data.

  11. A Spectral Method for Spatial Downscaling

    PubMed Central

    Reich, Brian J.; Chang, Howard H.; Foley, Kristen M.

    2014-01-01

    Complex computer models play a crucial role in air quality research. These models are used to evaluate potential regulatory impacts of emission control strategies and to estimate air quality in areas without monitoring data. For both of these purposes, it is important to calibrate model output with monitoring data to adjust for model biases and improve spatial prediction. In this article, we propose a new spectral method to study and exploit complex relationships between model output and monitoring data. Spectral methods allow us to estimate the relationship between model output and monitoring data separately at different spatial scales, and to use model output for prediction only at the appropriate scales. The proposed method is computationally efficient and can be implemented using standard software. We apply the method to compare Community Multiscale Air Quality (CMAQ) model output with ozone measurements in the United States in July 2005. We find that CMAQ captures large-scale spatial trends, but has low correlation with the monitoring data at small spatial scales. PMID:24965037

  12. Multi-Model Ensemble Wake Vortex Prediction

    NASA Technical Reports Server (NTRS)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.
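
    A minimal sketch of the multi-model ensemble idea in the spirit of Reliability Ensemble Averaging, weighting each wake model by the inverse of its historical error; the actual REA, Bayesian Model Averaging, and Monte Carlo schemes evaluated in the study are considerably richer:

    ```python
    import numpy as np

    def ensemble_predict(predictions, rmse_history):
        """Skill-weighted ensemble mean across wake-vortex models.

        predictions  : (n_models,) current predictions, one per model
        rmse_history : (n_models,) historical RMSE of each model
        """
        weights = 1.0 / np.asarray(rmse_history)
        weights /= weights.sum()
        return float(np.dot(weights, predictions))

    # Three models predicting, say, vortex altitude (values illustrative):
    print(ensemble_predict([310.0, 295.0, 305.0], [12.0, 8.0, 10.0]))
    ```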

  13. Methods of the working processes modelling of an internal combustion engine by an ANSYS IC Engine module

    NASA Astrophysics Data System (ADS)

    Kurchatkin, I. V.; Gorshkalev, A. A.; Blagin, E. V.

    2017-01-01

    This article describes the methods developed for modelling the working processes in the combustion chamber of an internal combustion engine (ICE). The methods cover preparation of a three-dimensional model of the combustion chamber, generation of the finite-element mesh, setting of the boundary conditions, and customization of the solution. The M-14 aircraft radial engine was selected for modelling. A cold-blowdown cycle was simulated in the ANSYS IC Engine software. The obtained data were compared with the results of established calculation methods. A way of improving the engine's induction port was also suggested.

  14. Conditioning of Model Identification Task in Immune Inspired Optimizer SILO

    NASA Astrophysics Data System (ADS)

    Wojdan, K.; Swirski, K.; Warchol, M.; Maciorowski, M.

    2009-10-01

    Methods which provide good conditioning of the model identification task in the immune-inspired, steady-state controller SILO (Stochastic Immune Layer Optimizer) are presented in this paper. These methods are implemented in a model-based optimization algorithm. The first method uses a safe model to ensure that the gains of the process model can be estimated. The second method is responsible for eliminating potential linear dependences between columns of the observation matrix. Moreover, new results from a SILO implementation in a Polish power plant are presented. They confirm the high efficiency of the presented solution in solving technical problems.

  15. Modeling vibration response and damping of cables and cabled structures

    NASA Astrophysics Data System (ADS)

    Spak, Kaitlin S.; Agnes, Gregory S.; Inman, Daniel J.

    2015-02-01

    In an effort to model the vibration response of cabled structures, the distributed transfer function method is developed to model cables and a simple cabled structure. The model includes shear effects, tension, and hysteretic damping for modeling of helical stranded cables, and includes a method for modeling cable attachment points using both linear and rotational damping and stiffness. The damped cable model shows agreement with experimental data for four types of stranded cables, and the damped cabled beam model shows agreement with experimental data for the cables attached to a beam structure, as well as improvement over the distributed mass method for cabled structure modeling.

  16. EIT image reconstruction based on a hybrid FE-EFG forward method and the complete-electrode model.

    PubMed

    Hadinia, M; Jafari, R; Soleimani, M

    2016-06-01

    This paper presents the application of the hybrid finite element-element-free Galerkin (FE-EFG) method for the forward and inverse problems of electrical impedance tomography (EIT). The proposed method is based on the complete electrode model. The finite element (FE) and element-free Galerkin (EFG) methods are accurate numerical techniques. However, the FE technique has meshing task problems and the EFG method is computationally expensive. In this paper, the hybrid FE-EFG method is applied to take advantage of both the FE and EFG methods; the complete electrode model of the forward problem is solved, and an iterative regularized Gauss-Newton method is adopted to solve the inverse problem. The proposed method is applied to compute the Jacobian in the inverse problem. Utilizing 2D circular homogeneous models, the numerical results are validated with analytical and experimental results, and the performance of the hybrid FE-EFG method compared with the FE method is illustrated. Results of image reconstruction are presented for a human chest experimental phantom.

  17. Jennifer van Rij | NREL

    Science.gov Websites

    Jennifer.Vanrij@nrel.gov | 303-384-7180. Jennifer's expertise is in developing computational modeling methods to simulate hydrodynamic, structural-dynamic, and power-elastic interactions. Her other work experience includes developing numerical modeling methods.

  18. Sparse High Dimensional Models in Economics

    PubMed Central

    Fan, Jianqing; Lv, Jinchi; Qi, Lei

    2010-01-01

    This paper reviews the literature on sparse high dimensional models and discusses some applications in economics and finance. Recent developments of theory, methods, and implementations in penalized least squares and penalized likelihood methods are highlighted. These variable selection methods are proved to be effective in high dimensional sparse modeling. The limits of dimensionality that regularization methods can handle, the role of penalty functions, and their statistical properties are detailed. Some recent advances in ultra-high dimensional sparse modeling are also briefly discussed. PMID:22022635
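
    The penalized least squares workhorse reviewed here is the lasso; a minimal cyclic coordinate-descent sketch (assuming centered data and non-zero columns) showing how soft-thresholding drives coefficients exactly to zero, which is what makes variable selection possible in high dimensions:

    ```python
    import numpy as np

    def lasso_cd(X, y, lam, n_iter=200):
        """Minimize 0.5*||y - X b||^2 + lam*||b||_1 by coordinate descent."""
        n, p = X.shape
        b = np.zeros(p)
        col_sq = (X ** 2).sum(axis=0)        # assumes no all-zero columns
        for _ in range(n_iter):
            for j in range(p):
                r_j = y - X @ b + X[:, j] * b[j]   # partial residual
                rho = X[:, j] @ r_j
                b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
        return b
    ```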

  19. A Mathematical Model for Railway Control Systems

    NASA Technical Reports Server (NTRS)

    Hoover, D. N.

    1996-01-01

    We present a general method for modeling safety aspects of railway control systems. Using our modeling method, one can progressively refine an abstract railway safety model, successively adding layers of detail about how a real system actually operates, while maintaining a safety property that refines the original abstract safety property. This method supports a top-down approach to specification of railway control systems and to proof of a variety of safety-related properties. We demonstrate our method by proving safety of the classical block control system.

  20. A Review of Methods for Missing Data.

    ERIC Educational Resources Information Center

    Pigott, Therese D.

    2001-01-01

    Reviews methods for handling missing data in a research study. Model-based methods, such as maximum likelihood using the EM algorithm and multiple imputation, hold more promise than ad hoc methods. Although model-based methods require more specialized computer programs and assumptions about the nature of missing data, these methods are appropriate…

  1. Controller design via structural reduced modeling by FETM

    NASA Technical Reports Server (NTRS)

    Yousuff, Ajmal

    1987-01-01

    The Finite Element-Transfer Matrix (FETM) method has been developed to reduce the computations involved in analysis of structures. This widely accepted method, however, has certain limitations, and does not address the issues of control design. To overcome these, a modification of the FETM method has been developed. The new method easily produces reduced models tailored toward subsequent control design. Other features of this method are its ability to: (1) extract open loop frequencies and mode shapes with less computations, (2) overcome limitations of the original FETM method, and (3) simplify the design procedures for output feedback, constrained compensation, and decentralized control. This report presents the development of the new method, generation of reduced models by this method, their properties, and the role of these reduced models in control design. Examples are included to illustrate the methodology.

  2. Comparison of survival between radiation therapy and trans-oral laser microsurgery for early glottic cancer patients; a retrospective cohort study.

    PubMed

    De Santis, R J; Poon, I; Lee, J; Karam, I; Enepekides, D J; Higgins, K M

    2016-08-02

    The literature reports various treatment methodologies, such as trans-oral laser microsurgery (TLM), radiation therapy (RT), total/partial laryngectomy, and concurrent chemoradiation, for patients with early larynx cancer. However, at the forefront of early glottic treatment are TLM and RT, likely due to better functional and survival outcomes. Here we conduct the largest Canadian head-to-head comparison of consecutive patients treated with either RT or TLM, and compare the 5-year survival rates of the two treatments to add to the existing literature. Charts of patients who were diagnosed with early glottic cancer between 2006 and 2013 were reviewed. Seventy-five patients were identified and split into two groups based on their primary treatment, TLM or RT. Kaplan-Meier survival curves, life tables, and the log-rank statistic were used to determine whether the two treatment groups differed in disease-specific survival, disease-free survival, and total laryngectomy-free survival. Additionally, each survival analysis was stratified by potential confounding variables, to help conclude which treatment is more efficacious in this population. The 5-year disease-specific survival rate was 93.3% (σ = 0.063) for patients treated with TLM and 90.8% (σ = 0.056) for those treated with RT (χ2 < 0.001, p = 0.983). The disease-free survival rate was 60.0% (σ = 0.121) for patients treated with TLM and 67.2% (σ = 0.074) for those who received RT (χ2 = 0.19, p = 0.663). Additionally, the total laryngectomy-free survival rate was 84.1% (σ = 0.1) and 79.1% (σ = 0.072) for patients with early glottic cancer treated by TLM and RT, respectively (χ2 = 0.235, p = 0.628). Chi-square analysis of age group versus treatment group (χ2 = 6.455, p = 0.04) and T-stage versus treatment group (χ2 = 11.3, p = 0.001) revealed statistically significant relationships, suggesting survival analysis should be stratified by these variables. However, after stratification, there was no statistically significant difference between the TLM and RT groups in any of the survival analyses. No difference was demonstrated in the 5-year disease-specific survival, disease-free survival, or total laryngectomy-free survival between the RT and TLM treatment groups. Additionally, both groups showed similar 5-year survival after stratifying by confounding variables.

  3. Alteration of Box-Jenkins methodology by implementing genetic algorithm method

    NASA Astrophysics Data System (ADS)

    Ismail, Zuhaimy; Maarof, Mohd Zulariffin Md; Fadzli, Mohammad

    2015-02-01

    A time series is a set of values sequentially observed through time. The Box-Jenkins methodology is a systematic method of identifying, fitting, checking, and using integrated autoregressive moving average time series models for forecasting. The Box-Jenkins method is appropriate for medium-to-long time series (at least 50 observations). When modeling such series, the difficulty lies in choosing the correct order of the model at the identification stage and in finding the right parameter estimates. This paper presents the development of a genetic algorithm heuristic for solving the identification and estimation problems in Box-Jenkins modeling. Data on international tourist arrivals to Malaysia were used to illustrate the effectiveness of the proposed method. The forecasts generated from the proposed model outperformed the single traditional Box-Jenkins model.
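
    A hedged sketch of how a genetic algorithm can automate the Box-Jenkins identification step by searching ARIMA(p, d, q) orders against AIC; this simplified version uses selection and mutation only (no crossover) and is not the authors' implementation:

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    def ga_arima_order(y, d=1, max_pq=5, pop_size=10, n_gen=15, seed=None):
        """Search ARIMA(p, d, q) orders for series y with a toy GA."""
        rng = np.random.default_rng(seed)

        def fitness(p, q):
            try:
                return ARIMA(y, order=(int(p), d, int(q))).fit().aic
            except Exception:
                return np.inf                        # infeasible chromosome

        pop = rng.integers(0, max_pq + 1, size=(pop_size, 2))  # (p, q) pairs
        for _ in range(n_gen):
            scores = np.array([fitness(p, q) for p, q in pop])
            parents = pop[np.argsort(scores)][:pop_size // 2]  # keep fittest
            children = np.clip(parents + rng.integers(-1, 2, parents.shape),
                               0, max_pq)                      # mutate
            pop = np.vstack([parents, children])
        scores = np.array([fitness(p, q) for p, q in pop])
        p, q = pop[np.argmin(scores)]
        return int(p), int(q)
    ```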

  4. Comparison of modeling methods to predict the spatial distribution of deep-sea coral and sponge in the Gulf of Alaska

    NASA Astrophysics Data System (ADS)

    Rooper, Christopher N.; Zimmermann, Mark; Prescott, Megan M.

    2017-08-01

    Deep-sea coral and sponge ecosystems are widespread throughout most of Alaska's marine waters, and are associated with many different species of fishes and invertebrates. These ecosystems are vulnerable to the effects of commercial fishing activities and climate change. We compared four commonly used species distribution models (general linear models, generalized additive models, boosted regression trees and random forest models) and an ensemble model to predict the presence or absence and abundance of six groups of benthic invertebrate taxa in the Gulf of Alaska. All four model types performed adequately on training data for predicting presence and absence, with random forest models having the best overall performance measured by the area under the receiver-operating-curve (AUC). The models also performed well on the test data for presence and absence, with average AUCs ranging from 0.66 to 0.82. For the test data, ensemble models performed the best. For abundance data, there was an obvious demarcation in performance between the two regression-based methods (general linear models and generalized additive models) and the tree-based models. The boosted regression tree and random forest models outperformed the other models by a wide margin on both the training and testing data. However, there was a significant drop-off in performance for all models of invertebrate abundance (about 50%) when moving from the training data to the testing data. Ensemble model performance was between the tree-based and regression-based methods. The maps of predictions from the models for both presence and abundance agreed very well across model types, with an increase in variability in predictions for the abundance data. We conclude that where data conform well to the modeled distribution (such as the presence-absence data and binomial distribution in this study), the four types of models will provide similar results, although the regression-type models may be more consistent with biological theory. For data with highly zero-inflated and non-normal distributions, such as the abundance data from this study, the tree-based methods performed better. Ensemble models that averaged predictions across the four model types performed better than the GLM or GAM models but slightly poorer than the tree-based methods, suggesting ensemble models might be more robust to overfitting than tree methods, while mitigating some of the disadvantages in predictive performance of regression methods.

  5. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.

  6. Monte Carlo based statistical power analysis for mediation models: methods and software.

    PubMed

    Zhang, Zhiyong

    2014-12-01

    The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
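
    The proposed procedure is implemented in the R package bmem; an analogous Python sketch for the simple mediation model x -> m -> y is given below: simulate datasets from assumed parameter values, run a percentile-bootstrap test of the indirect effect a*b in each, and report the rejection rate as power. Sample sizes, effect sizes and replication counts are illustrative, not the paper's.

        import numpy as np

        rng = np.random.default_rng(1)

        def simulate(n, a=0.39, b=0.39):
            x = rng.normal(size=n)
            m = a * x + rng.normal(size=n)
            y = b * m + rng.normal(size=n)
            return x, m, y

        def ab_hat(x, m, y):
            # a: slope of m on x; b: partial slope of y on m controlling for x.
            a = np.linalg.lstsq(np.column_stack([x, np.ones_like(x)]), m, rcond=None)[0][0]
            b = np.linalg.lstsq(np.column_stack([m, x, np.ones_like(x)]), y, rcond=None)[0][0]
            return a * b

        def power(n, reps=200, boot=500, alpha=0.05):
            hits = 0
            for _ in range(reps):
                x, m, y = simulate(n)
                idx = rng.integers(0, n, size=(boot, n))
                stats = [ab_hat(x[i], m[i], y[i]) for i in idx]
                lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
                hits += (lo > 0) or (hi < 0)     # CI excludes zero -> mediation detected
            return hits / reps

        print("estimated power at n=100:", power(100))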

  7. Effective groundwater model calibration: With analysis of data, sensitivities, predictions, and uncertainty

    USGS Publications Warehouse

    Hill, Mary C.; Tiedeman, Claire

    2007-01-01

    Methods and guidelines for developing and using mathematical models. Turn to Effective Groundwater Model Calibration for a set of methods and guidelines that can help produce more accurate and transparent mathematical models. The models can represent groundwater flow and transport and other natural and engineered systems. Use this book and its extensive exercises to learn methods to fully exploit the data on hand, maximize the model's potential, and troubleshoot any problems that arise. Use the methods to perform: sensitivity analysis to evaluate the information content of data; data assessment to identify (a) existing measurements that dominate model development and predictions and (b) potential measurements likely to improve the reliability of predictions; calibration to develop models that are consistent with the data in an optimal manner; and uncertainty evaluation to quantify and communicate errors in simulated results that are often used to make important societal decisions. Most of the methods are based on linear and nonlinear regression theory. Fourteen guidelines show the reader how to use the methods advantageously in practical situations. Exercises focus on a groundwater flow system and management problem, enabling readers to apply all the methods presented in the text. The exercises can be completed using the material provided in the book, or as hands-on computer exercises using instructions and files available on the text's accompanying Web site. Throughout the book, the authors stress the need for valid statistical concepts and easily understood presentation methods required to achieve well-tested, transparent models. Most of the examples and all of the exercises focus on simulating groundwater systems; other examples come from surface-water hydrology and geophysics. The methods and guidelines in the text are broadly applicable and can be used by students, researchers, and engineers to simulate many kinds of systems.

  8. Vibration modelling and verifications for whole aero-engine

    NASA Astrophysics Data System (ADS)

    Chen, G.

    2015-08-01

    In this study, a new rotor-ball-bearing-casing coupling dynamic model for a practical aero-engine is established. In the coupling system, the rotor and casing systems are modelled using the finite element method, support systems are modelled as lumped parameter models, nonlinear factors of ball bearings and faults are included, and four types of support and connection models are defined to model the complex rotor-support-casing coupling system of the aero-engine. A new numerical integral method that combines the Newmark-β method and the improved Newmark-β method (Zhai method) is used to obtain the system responses. Finally, the new model is verified in three ways: (1) a modal experiment based on a rotor-ball-bearing rig, (2) a modal experiment based on a rotor-ball-bearing-casing rig, and (3) fault simulations for the vibration of a certain type of missile turbofan aero-engine. The results show that the proposed model can not only simulate the natural vibration characteristics of the whole aero-engine but also effectively perform nonlinear dynamic simulations of a whole aero-engine with faults.
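
    For reference, a minimal sketch of a single Newmark-β step (average-acceleration variant, gamma = 1/2, beta = 1/4) for the linear system M u'' + C u' + K u = f(t) is given below. This is only the standard building block; the combined Newmark/Zhai integrator used in the paper is not reproduced, and the example system is hypothetical.

        import numpy as np

        def newmark_step(M, C, K, f_next, u, v, a, dt, beta=0.25, gamma=0.5):
            """One Newmark-beta step for M u'' + C u' + K u = f (linear case)."""
            lhs = M / (beta * dt**2) + gamma * C / (beta * dt) + K
            rhs = (f_next
                   + M @ (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
                   + C @ (gamma * u / (beta * dt)
                          + (gamma / beta - 1.0) * v
                          + dt * (gamma / (2 * beta) - 1.0) * a))
            u_new = np.linalg.solve(lhs, rhs)
            a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
            v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
            return u_new, v_new, a_new

        # Example: single degree of freedom, free vibration from u0 = 1 (illustrative values).
        M, C, K = np.eye(1), 0.1 * np.eye(1), 4.0 * np.eye(1)
        u, v = np.array([1.0]), np.array([0.0])
        a = np.linalg.solve(M, -C @ v - K @ u)       # consistent initial acceleration
        for _ in range(100):
            u, v, a = newmark_step(M, C, K, np.zeros(1), u, v, a, dt=0.05)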

  9. An Automated Method for Landmark Identification and Finite-Element Modeling of the Lumbar Spine.

    PubMed

    Campbell, Julius Quinn; Petrella, Anthony J

    2015-11-01

    The purpose of this study was to develop a method for the automated creation of finite-element models of the lumbar spine. Custom scripts were written to extract bone landmarks of lumbar vertebrae and assemble L1-L5 finite-element models. End-plate borders, ligament attachment points, and facet surfaces were identified. Landmarks were identified so as to maintain mesh correspondence between meshes for later use in statistical shape modeling. Ninety lumbar vertebrae were processed, creating 18 subject-specific finite-element models. Finite-element model surfaces and ligament attachment points were reproduced within 1e-5 mm of the bone surface, including the critical contact surfaces of the facets. Element quality exceeded specifications in 97% of elements for the 18 models created. The current method is capable of producing subject-specific finite-element models of the lumbar spine with good accuracy, quality, and robustness. The automated methods developed represent an advance in the state of the art of subject-specific lumbar spine modeling to a scale not possible with prior manual and semiautomated methods.

  10. A Fatigue Crack Size Evaluation Method Based on Lamb Wave Simulation and Limited Experimental Data

    PubMed Central

    He, Jingjing; Ran, Yunmeng; Liu, Bin; Yang, Jinsong; Guan, Xuefei

    2017-01-01

    This paper presents a systematic and general method for Lamb wave-based crack size quantification using finite element simulations and Bayesian updating. The method consists of the construction of a baseline quantification model using finite element simulation data and Bayesian updating with limited Lamb wave data from the target structure. The baseline model correlates two proposed damage-sensitive features, namely the normalized amplitude and phase change, with the crack length through a response surface model. The two damage-sensitive features are extracted from the first received S0 mode wave packet. The model parameters of the baseline model are estimated using finite element simulation data. To account for uncertainties from numerical modeling, geometry, material and manufacturing between the baseline model and the target model, a Bayesian method is employed to update the baseline model with a few measurements acquired from the actual target structure. A rigorous validation is made using in-situ fatigue testing and Lamb wave data from coupon specimens and realistic lap-joint components. The effectiveness and accuracy of the proposed method are demonstrated under different loading and damage conditions. PMID:28902148
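
    The Bayesian-updating step can be sketched generically: given a baseline response-surface model relating damage features to crack length, a random-walk Metropolis sampler updates the surface coefficients with a few target-structure measurements. The linear surface, prior width and noise level below are placeholders, not the paper's model.

        import numpy as np

        rng = np.random.default_rng(0)

        def surface(theta, x):
            """Placeholder response surface: crack length vs. two damage features."""
            return theta[0] + theta[1] * x[:, 0] + theta[2] * x[:, 1]

        def log_post(theta, x, y, theta0, sigma=0.1, tau=1.0):
            # Gaussian likelihood around the surface, Gaussian prior centered on the baseline fit.
            resid = y - surface(theta, x)
            return -0.5 * np.sum(resid**2) / sigma**2 - 0.5 * np.sum((theta - theta0)**2) / tau**2

        theta0 = np.array([0.2, 1.1, -0.4])            # baseline fit from FE simulations (illustrative)
        x_meas = rng.normal(size=(5, 2))               # a few Lamb-wave features from the target
        y_meas = surface(theta0 + 0.1, x_meas) + 0.05 * rng.normal(size=5)

        theta, samples = theta0.copy(), []
        lp = log_post(theta, x_meas, y_meas, theta0)
        for _ in range(5000):                          # random-walk Metropolis
            prop = theta + 0.05 * rng.normal(size=3)
            lp_prop = log_post(prop, x_meas, y_meas, theta0)
            if np.log(rng.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            samples.append(theta)
        print("posterior mean:", np.mean(samples, axis=0))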

  11. Nonhuman primate models of focal cerebral ischemia

    PubMed Central

    Fan, Jingjing; Li, Yi; Fu, Xinyu; Li, Lijuan; Hao, Xiaoting; Li, Shasha

    2017-01-01

    Rodents have been widely used to produce models of cerebral ischemia. However, therapies that proved successful in experimental rodent stroke models have often failed to be effective when tested clinically. Therefore, nonhuman primates have been recommended as ideal alternatives, owing to their similarities with the human cerebrovascular system, brain metabolism, grey-to-white matter ratio and even their rich behavioral repertoire. The present review is a thorough summary of ten methods for establishing nonhuman primate models of focal cerebral ischemia: electrocoagulation, endothelin-1-induced occlusion, microvascular clip occlusion, autologous blood clot embolization, balloon inflation, microcatheter embolization, coil embolization, surgical suture embolization, suture, and photochemical induction methods. This review addresses the advantages and disadvantages of each method, as well as precautions for each model, and compares nonhuman primates with rodents, different species of nonhuman primates, and different modeling methods. Finally, it discusses various factors that need to be considered when modelling, and methods of evaluation after modelling. These are critical for understanding the respective strengths and weaknesses of the models and underlie the selection of the optimum model. PMID:28400817

  12. Multi-body modeling method for rollover using MADYMO

    NASA Astrophysics Data System (ADS)

    Liu, Changye; Lin, Zhigui; Lv, Juncheng; Luo, Qinyue; Qin, Zhenyao; Zhang, Pu; Chen, Tao

    2017-04-01

    Rollovers are complex road accidents that cause a large number of fatalities. A finite-element model for rollover study costs too much computation time because of the event's long duration. A new multi-body modeling method is proposed in this paper which saves substantial time while retaining high fidelity. The following work was carried out to validate this new method. First, a small van was tested following the FMVSS 208 protocol for the validation of the proposed modeling method. Second, a MADYMO model of this small van was reconstructed. The vehicle body was divided into two main parts, the deformable upper body and the rigid lower body, modeled in different ways based on an FE model. The specific modeling method is presented in this paper. Finally, the trajectories of the vehicle from the test and the simulation were compared, and the match was very good. The acceleration of the left B-pillar was also considered, and it fit the test result well over the duration of the event. The final deformation of the vehicle in the test and the simulation showed a similar trend. This validated model provides a reliable basis for further research on occupant injuries during rollovers.

  13. Efficient Numerical Methods for Nonlinear-Facilitated Transport and Exchange in a Blood-Tissue Exchange Unit

    PubMed Central

    Poulain, Christophe A.; Finlayson, Bruce A.; Bassingthwaighte, James B.

    2010-01-01

    The analysis of experimental data obtained by the multiple-indicator method requires complex mathematical models for which capillary blood-tissue exchange (BTEX) units are the building blocks. This study presents a new, nonlinear, two-region, axially distributed, single-capillary BTEX model. A facilitated transporter model is used to describe mass transfer between plasma and intracellular spaces. To provide fast and accurate solutions, numerical techniques suited to nonlinear convection-dominated problems are implemented. These techniques are the random choice method, an explicit Euler-Lagrange scheme, and the MacCormack method with and without flux correction. The accuracy of the numerical techniques is demonstrated, and their efficiencies are compared. The random choice, Euler-Lagrange and plain MacCormack methods are the best numerical techniques for BTEX modeling. However, the random choice and Euler-Lagrange methods are preferred over the MacCormack method because they allow for the derivation of a heuristic criterion that makes the numerical methods stable without degrading their efficiency. Numerical solutions are also used to illustrate some nonlinear behaviors of the model and to show how the new BTEX model can be used to estimate parameters from experimental data. PMID:9146808
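
    Of the convection-suited schemes named above, the MacCormack predictor-corrector is the simplest to illustrate. A sketch for the 1D linear advection equation u_t + v u_x = 0 with periodic boundaries follows; it is not the full two-region BTEX model, and the grid and pulse are hypothetical.

        import numpy as np

        def maccormack_advect(u, v, dx, dt, steps):
            """MacCormack scheme for u_t + v*u_x = 0 with periodic boundaries."""
            c = v * dt / dx                                # Courant number; |c| <= 1 for stability
            for _ in range(steps):
                pred = u - c * (np.roll(u, -1) - u)        # predictor: forward difference
                u = 0.5 * (u + pred - c * (pred - np.roll(pred, 1)))  # corrector: backward difference
            return u

        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        u0 = np.exp(-200 * (x - 0.3) ** 2)                 # Gaussian concentration pulse
        u1 = maccormack_advect(u0.copy(), v=1.0, dx=x[1] - x[0], dt=0.004, steps=100)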

  14. A method for analyzing clustered interval-censored data based on Cox's model.

    PubMed

    Kor, Chew-Teng; Cheng, Kuang-Fu; Chen, Yi-Hau

    2013-02-28

    Methods for analyzing interval-censored data are well established. Unfortunately, these methods are inappropriate for studies with correlated data. In this paper, we focus on developing a method for analyzing clustered interval-censored data. Our method is based on Cox's proportional hazards model with a piecewise-constant baseline hazard function. The correlation structure of the data can be modeled by using Clayton's copula or an independence model with proper adjustment in the covariance estimation. We establish estimating equations for the regression parameters and baseline hazards (and a parameter in the copula) simultaneously. Simulation results confirm that the point estimators follow a multivariate normal distribution, and our proposed variance estimations are reliable. In particular, we found that the approach with the independence model worked well even when the true correlation model was derived from Clayton's copula. We applied our method to a family-based cohort study of pandemic H1N1 influenza in Taiwan during 2009-2010. Using the proposed method, we investigated the impact of vaccination and family contacts on the incidence of pH1N1 influenza. Copyright © 2012 John Wiley & Sons, Ltd.
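
    Clayton's copula, used above to model within-cluster correlation, is easy to simulate from via the conditional-distribution method; a sketch generating correlated uniform pairs (which could then be transformed to correlated event times) is given below. This illustrates the dependence structure only, not the paper's estimating equations.

        import numpy as np

        def clayton_pairs(n, theta, rng):
            """Sample (u1, u2) from a Clayton copula via the conditional-distribution method."""
            u1 = rng.random(n)
            v = rng.random(n)
            u2 = ((v ** (-theta / (theta + 1.0)) - 1.0) * u1 ** (-theta) + 1.0) ** (-1.0 / theta)
            return u1, u2

        rng = np.random.default_rng(7)
        u1, u2 = clayton_pairs(10000, theta=2.0, rng=rng)   # theta > 0: positive dependence
        print("theoretical Kendall's tau = theta/(theta+2) =", 2.0 / 4.0)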

  15. Hybrid pathwise sensitivity methods for discrete stochastic models of chemical reaction systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolf, Elizabeth Skubak, E-mail: ewolf@saintmarys.edu; Anderson, David F., E-mail: anderson@math.wisc.edu

    2015-01-21

    Stochastic models are often used to help understand the behavior of intracellular biochemical processes. The most common such models are continuous time Markov chains (CTMCs). Parametric sensitivities, which are derivatives of expectations of model output quantities with respect to model parameters, are useful in this setting for a variety of applications. In this paper, we introduce a class of hybrid pathwise differentiation methods for the numerical estimation of parametric sensitivities. The new hybrid methods combine elements from the three main classes of procedures for sensitivity estimation and have a number of desirable qualities. First, the new methods are unbiased for a broad class of problems. Second, the methods are applicable to nearly any physically relevant biochemical CTMC model. Third, and as we demonstrate on several numerical examples, the new methods are quite efficient, particularly if one wishes to estimate the full gradient of parametric sensitivities. The methods are rather intuitive and utilize the multilevel Monte Carlo philosophy of splitting an expectation into separate parts and handling each in an efficient manner.
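
    As a baseline against which such estimators are usually compared (not the hybrid pathwise method itself), a finite-difference sensitivity estimate with common random numbers over exact Gillespie (SSA) paths of a birth-death process can be sketched as below; the rates, horizon and perturbation size are illustrative.

        import numpy as np

        def ssa_final_count(k_birth, k_death, x0, t_end, seed):
            """Gillespie SSA for birth-death: 0 -> X at rate k_birth, X -> 0 at rate k_death*x."""
            rng = np.random.default_rng(seed)
            t, x = 0.0, x0
            while True:
                rates = np.array([k_birth, k_death * x])
                total = rates.sum()
                t += rng.exponential(1.0 / total)
                if t > t_end:
                    return x
                x += 1 if rng.random() < rates[0] / total else -1

        # d E[X(T)] / d k_birth by centered finite differences with common random numbers (shared seeds).
        h, N = 0.1, 2000
        est = np.mean([(ssa_final_count(10.0 + h, 1.0, 0, 5.0, s)
                        - ssa_final_count(10.0 - h, 1.0, 0, 5.0, s)) / (2 * h)
                       for s in range(N)])
        print("sensitivity estimate:", est, "(exact: (1 - exp(-T))/k_death =", 1 - np.exp(-5.0), ")")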

  16. Tracer kinetics of forearm endothelial function: comparison of an empirical method and a quantitative modeling technique.

    PubMed

    Zhao, Xueli; Arsenault, Andre; Lavoie, Kim L; Meloche, Bernard; Bacon, Simon L

    2007-01-01

    Forearm endothelial function (FEF) is a marker that has been shown to discriminate patients with cardiovascular disease (CVD). FEF has been assessed using several parameters: the rate of uptake ratio (RUR), the elbow-to-wrist uptake ratio (EWUR) and the elbow-to-wrist relative uptake ratio (EWRUR). However, the modeling of FEF requires more robust models. The present study was designed to compare an empirical method with quantitative modeling techniques to better estimate the physiological parameters and understand the complex dynamic processes. The fitted time-activity curves of the forearms, estimating blood and muscle components, were assessed using both an empirical method and a two-compartment model. Although correlational analyses suggested a good correlation between the methods for RUR (r=.90) and EWUR (r=.79), but not EWRUR (r=.34), Bland-Altman plots found poor agreement between the methods for all three parameters. These results indicate that there is a large discrepancy between the empirical and computational methods for FEF. Further work is needed to establish the physiological and mathematical validity of the two modeling methods.

  17. Physiological motion modeling for organ-mounted robots.

    PubMed

    Wood, Nathan A; Schwartzman, David; Zenati, Marco A; Riviere, Cameron N

    2017-12-01

    Organ-mounted robots passively compensate for heartbeat and respiratory motion. In model-guided procedures, this motion can be a significant source of information that can be used to aid in localization or to add dynamic information to static preoperative maps. Models for estimating periodic motion are proposed for both position and orientation. These models are then tested on animal data and optimal model orders are identified. Finally, methods for online identification are demonstrated. Models using exponential coordinates and Euler-angle parameterizations are as accurate as models using quaternion representations, yet require a quarter fewer parameters. Models that incorporate more than four cardiac or three respiration harmonics are no more accurate. Finally, online methods estimate model parameters as accurately as offline methods within three respiration cycles. These methods provide a complete framework for accurately modelling the periodic deformation of points anywhere on the surface of the heart in a closed chest. Copyright © 2017 John Wiley & Sons, Ltd.
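
    Periodic-motion models of this kind are truncated Fourier series in the cardiac and respiratory phases, and fitting one to a position trace is a linear least-squares problem once the fundamental frequencies are known. A one-axis sketch with assumed frequencies (1.2 Hz cardiac, 0.25 Hz respiratory) and synthetic data follows; the harmonic orders match those found sufficient above.

        import numpy as np

        def harmonic_design(t, freq, n_harm):
            """Design matrix of sines/cosines for one fundamental frequency (plus intercept)."""
            cols = [np.ones_like(t)]
            for k in range(1, n_harm + 1):
                cols += [np.sin(2 * np.pi * k * freq * t), np.cos(2 * np.pi * k * freq * t)]
            return np.column_stack(cols)

        rng = np.random.default_rng(3)
        t = np.linspace(0, 10, 2000)
        pos = (2.0 * np.sin(2 * np.pi * 1.2 * t)          # synthetic cardiac component
               + 0.8 * np.sin(2 * np.pi * 0.25 * t)       # synthetic respiratory component
               + 0.05 * rng.normal(size=t.size))

        # Four cardiac and three respiratory harmonics; drop the duplicate intercept column.
        A = np.column_stack([harmonic_design(t, 1.2, 4), harmonic_design(t, 0.25, 3)[:, 1:]])
        coef, *_ = np.linalg.lstsq(A, pos, rcond=None)
        print("residual RMS:", np.sqrt(np.mean((pos - A @ coef) ** 2)))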

  18. A time dependent mixing model to close PDF equations for transport in heterogeneous aquifers

    NASA Astrophysics Data System (ADS)

    Schüler, L.; Suciu, N.; Knabner, P.; Attinger, S.

    2016-10-01

    Probability density function (PDF) methods are a promising alternative for predicting the transport of solutes in groundwater under uncertainty. They make it possible to derive the evolution equations of the mean concentration and the concentration variance, which are used in moment methods. The mixing model, describing the transport of the PDF in concentration space, is essential for both methods. Finding a satisfactory mixing model is still an open question and, owing to the rather elaborate nature of PDF methods, a difficult undertaking. Both the PDF equation and the concentration variance equation depend on the same mixing model. This connection is used to find and test an improved mixing model for the much easier to handle concentration variance. Subsequently, this mixing model is transferred to the PDF equation and tested. The newly proposed mixing model yields significantly improved results for both variance modelling and PDF modelling.

  19. Tensor-GMRES method for large sparse systems of nonlinear equations

    NASA Technical Reports Server (NTRS)

    Feng, Dan; Pulliam, Thomas H.

    1994-01-01

    This paper introduces a tensor-Krylov method, the tensor-GMRES method, for large sparse systems of nonlinear equations. This method is a coupling of tensor model formation and solution techniques for nonlinear equations with Krylov subspace projection techniques for unsymmetric systems of linear equations. Traditional tensor methods for nonlinear equations are based on a quadratic model of the nonlinear function, a standard linear model augmented by a simple second order term. These methods are shown to be significantly more efficient than standard methods both on nonsingular problems and on problems where the Jacobian matrix at the solution is singular. A major disadvantage of the traditional tensor methods is that the solution of the tensor model requires the factorization of the Jacobian matrix, which may not be suitable for problems where the Jacobian matrix is large and has a 'bad' sparsity structure for an efficient factorization. We overcome this difficulty by forming and solving the tensor model using an extension of a Newton-GMRES scheme. Like traditional tensor methods, we show that the new tensor method has significant computational advantages over the analogous Newton counterpart. Consistent with Krylov subspace based methods, the new tensor method does not depend on the factorization of the Jacobian matrix. As a matter of fact, the Jacobian matrix is never needed explicitly.
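
    The Newton-GMRES building block that the tensor-GMRES method extends is available in SciPy as a Jacobian-free Newton-Krylov solver; a sketch on a small discretized boundary-value problem is shown below. The quadratic tensor-model augmentation itself is not implemented here, and the test problem is an assumption for illustration.

        import numpy as np
        from scipy.optimize import newton_krylov

        def F(u):
            """Residual of a discretized nonlinear BVP: u'' = exp(u), u(0) = u(1) = 0."""
            n = u.size
            h = 1.0 / (n + 1)
            upad = np.concatenate(([0.0], u, [0.0]))
            return (upad[:-2] - 2 * upad[1:-1] + upad[2:]) / h**2 - np.exp(u)

        u0 = np.zeros(50)
        sol = newton_krylov(F, u0, method="gmres")   # Jacobian-free Newton-Krylov (GMRES inner solves)
        print("residual norm:", np.linalg.norm(F(sol)))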

  20. Contact angle adjustment in equation-of-state-based pseudopotential model.

    PubMed

    Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong

    2016-05-01

    The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.

  2. Quantitative comparison of alternative methods for coarse-graining biological networks

    PubMed Central

    Bowman, Gregory R.; Meng, Luming; Huang, Xuhui

    2013-01-01

    Markov models and master equations are a powerful means of modeling dynamic processes like protein conformational changes. However, these models are often difficult to understand because of the enormous number of components and connections between them. Therefore, a variety of methods have been developed to facilitate understanding by coarse-graining these complex models. Here, we employ Bayesian model comparison to determine which of these coarse-graining methods provides the models that are most faithful to the original set of states. We find that the Bayesian agglomerative clustering engine and the hierarchical Nyström expansion graph (HNEG) typically provide the best performance. Surprisingly, the original Perron cluster cluster analysis (PCCA) method often provides the next best results, outperforming the newer PCCA+ method and the most probable paths algorithm. We also show that the differences between the models are qualitatively significant, rather than being minor shifts in the boundaries between states. The performance of the methods correlates well with the entropy of the resulting coarse-grainings, suggesting that finding states with more similar populations (i.e., avoiding low population states that may just be noise) gives better results. PMID:24089717

  3. A Hierarchical Multivariate Bayesian Approach to Ensemble Model output Statistics in Atmospheric Prediction

    DTIC Science & Technology

    2017-09-01

    This dissertation explores the efficacy of statistical post-processing methods applied downstream of the dynamical components of an atmospheric prediction system, using a hierarchical multivariate Bayesian approach to ensemble model output statistics. Keywords: Bayesian hierarchical modeling, Markov chain Monte Carlo methods, Metropolis algorithm, machine learning, atmospheric prediction.

  4. Exploration of Uncertainty in Glacier Modelling

    NASA Technical Reports Server (NTRS)

    Thompson, David E.

    1999-01-01

    There are procedures and methods for verification of coding algebra and for validations of models and calculations that are in use in the aerospace computational fluid dynamics (CFD) community. These methods would be efficacious if used by the glacier dynamics modelling community. This paper is a presentation of some of those methods, and how they might be applied to uncertainty management supporting code verification and model validation for glacier dynamics. The similarities and differences between their use in CFD analysis and the proposed application of these methods to glacier modelling are discussed. After establishing sources of uncertainty and methods for code verification, the paper looks at a representative sampling of verification and validation efforts that are underway in the glacier modelling community, and establishes a context for these within overall solution quality assessment. Finally, an information architecture and interactive interface is introduced and advocated. This Integrated Cryospheric Exploration (ICE) Environment is proposed for exploring and managing sources of uncertainty in glacier modelling codes and methods, and for supporting scientific numerical exploration and verification. The details and functionality of this Environment are described based on modifications of a system already developed for CFD modelling and analysis.

  5. Bias correction of temperature produced by the Community Climate System Model using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Moghim, S.; Hsu, K.; Bras, R. L.

    2013-12-01

    General circulation models (GCMs) are used to predict circulation and energy transfers between the atmosphere and the land. It is known that these models produce biased results that have an impact on their uses. This work proposes a new method for bias correction: the equidistant cumulative distribution function-artificial neural network (EDCDFANN) procedure. The method uses artificial neural networks (ANNs) as a surrogate model to estimate bias-corrected temperature, given an identification of the system derived from GCM output variables. A two-layer feed-forward neural network is trained with observations during a historical period, and the adjusted network can then be used to predict bias-corrected temperature for future periods. To capture extreme values, this method is combined with the equidistant CDF matching method (EDCDF, Li et al. 2010). The proposed method is tested with the Community Climate System Model (CCSM3) outputs using air and skin temperature, specific humidity, and shortwave and longwave radiation as inputs to the ANN. The method decreases the mean square error and increases the spatial correlation between the modeled temperature and the observed one. The results indicate that EDCDFANN has the potential to remove the biases of the model outputs.
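
    The EDCDF component of the procedure is a quantile-mapping correction; a simplified empirical sketch (without the ANN surrogate) is given below, following the equidistant-CDF idea of shifting each future value by the observed-minus-model quantile difference at that value's model-CDF position. The arrays are hypothetical daily temperatures.

        import numpy as np

        def edcdf_correct(model_future, model_hist, obs_hist):
            """Equidistant CDF matching: x + F_obs^-1(q) - F_modhist^-1(q), q = F_modfut(x)."""
            q = np.searchsorted(np.sort(model_future), model_future) / len(model_future)
            q = np.clip(q, 0.01, 0.99)                 # avoid extrapolating the empirical tails
            return model_future + np.quantile(obs_hist, q) - np.quantile(model_hist, q)

        rng = np.random.default_rng(5)
        obs = rng.normal(15.0, 5.0, 3000)              # observed historical temperature
        hist = rng.normal(17.0, 6.0, 3000)             # biased GCM, historical run
        fut = rng.normal(19.0, 6.0, 3000)              # biased GCM, future run
        print("corrected future mean:", edcdf_correct(fut, hist, obs).mean())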

  6. Using conceptual work products of health care to design health IT.

    PubMed

    Berry, Andrew B L; Butler, Keith A; Harrington, Craig; Braxton, Melissa O; Walker, Amy J; Pete, Nikki; Johnson, Trevor; Oberle, Mark W; Haselkorn, Jodie; Paul Nichol, W; Haselkorn, Mark

    2016-02-01

    This paper introduces a new, model-based design method for interactive health information technology (IT) systems. This method extends workflow models with models of conceptual work products. When the health care work being modeled is substantially cognitive, tacit, and complex in nature, graphical workflow models can become too complex to be useful to designers. Conceptual models complement and simplify workflows by providing an explicit specification for the information product they must produce. We illustrate how conceptual work products can be modeled using standard software modeling language, which allows them to provide fundamental requirements for what the workflow must accomplish and the information that a new system should provide. Developers can use these specifications to envision how health IT could enable an effective cognitive strategy as a workflow with precise information requirements. We illustrate the new method with a study conducted in an outpatient multiple sclerosis (MS) clinic. This study shows specifically how the different phases of the method can be carried out, how the method allows for iteration across phases, and how the method generated a health IT design for case management of MS that is efficient and easy to use. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Nakagami-based total variation method for speckle reduction in thyroid ultrasound images.

    PubMed

    Koundal, Deepika; Gupta, Savita; Singh, Sukhwinder

    2016-02-01

    A good statistical model is necessary for the reduction of speckle noise. The Nakagami model is more general than the Rayleigh distribution for the statistical modeling of speckle in ultrasound images. In this article, a Nakagami-based noise removal method is presented to enhance thyroid ultrasound images and to improve clinical diagnosis. The statistics of the log-compressed image are derived from the Nakagami distribution following a maximum a posteriori estimation framework. The minimization problem is solved using an augmented Lagrangian approach and Chambolle's projection method. The proposed method is evaluated on both artificial speckle-simulated and real ultrasound images. The experimental findings reveal the superiority of the proposed method both quantitatively and qualitatively in comparison with other speckle reduction methods reported in the literature. The proposed method yields an average signal-to-noise ratio gain of more than 2.16 dB over the non-convex regularizer-based speckle noise removal method, 3.83 dB over the Aubert-Aujol model, 1.71 dB over the Shi-Osher model and 3.21 dB over the Rudin-Lions-Osher model on speckle-simulated synthetic images. Furthermore, visual evaluation of the despeckled images shows that the proposed method suppresses speckle noise well while preserving textures and fine details. © IMechE 2015.

  8. The improved business valuation model for RFID company based on the community mining method.

    PubMed

    Li, Shugang; Yu, Zhaoxu

    2017-01-01

    Nowadays, the appetite for investment and mergers and acquisitions (M&A) activity in RFID companies is growing rapidly. Although a huge number of papers have addressed the topic of business valuation models based on statistical or neural network methods, only a few are dedicated to constructing a general framework for business valuation that improves performance with a network graph (NG) and the corresponding community mining (CM) method. In this study, an NG-based business valuation model is proposed, where a real options approach (ROA) integrating the CM method is designed to predict the company's net profit as well as estimate the company value. Three improvements are made in the proposed valuation model. Firstly, the model figures out the credibility of each node's membership in each community and clusters the network according to the evolutionary Bayesian method. Secondly, the improved bacterial foraging optimization algorithm (IBFOA) is adopted to calculate the optimized Bayesian posterior probability function. Finally, in IBFOA, a bi-objective method is used to assess the accuracy of prediction, and these two objectives are combined into one objective function using a new Pareto boundary method. The proposed method returns lower forecasting error than 10 well-known forecasting models on 3 different time-interval valuing tasks for the real-life simulation of RFID companies.

  10. United3D: a protein model quality assessment program that uses two consensus based methods.

    PubMed

    Terashi, Genki; Oosawa, Makoto; Nakamura, Yuuki; Kanou, Kazuhiko; Takeda-Shitaka, Mayuko

    2012-01-01

    In protein structure prediction, such as template-based modeling and free modeling (ab initio modeling), the step that assesses the quality of protein models is very important. We have developed a model quality assessment (QA) program, United3D, that uses an optimized clustering method and a simple Cα atom contact-based potential. United3D automatically estimates the quality scores (Qscore) of predicted protein models, which are highly correlated with the actual quality (GDT_TS). The performance of United3D was tested in the ninth Critical Assessment of protein Structure Prediction (CASP9) experiment, where it showed the lowest average loss of GDT_TS (5.3) among the participating QA methods. This result indicates that the performance of United3D in identifying high-quality models from among the models predicted by CASP9 servers on 116 targets was the best among the QA methods tested in CASP9. United3D also produced high average Pearson correlation coefficients (0.93) and acceptable Kendall rank correlation coefficients (0.68) between the Qscore and GDT_TS. This performance was competitive with the other top-ranked QA methods tested in CASP9. These results indicate that United3D is a useful tool for selecting high-quality models from many candidate model structures provided by various modeling methods. United3D will improve the accuracy of protein structure prediction.

  11. An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.

    2017-01-01

    The simulation-optimization approach entails a large number of model simulations, which is computationally intensive or even prohibitive if each model simulation is extremely time-consuming. Statistical models have been examined as surrogates of the high-fidelity physical model during the simulation-optimization process to tackle this problem. Among them, multivariate adaptive regression splines (MARS), a non-parametric adaptive regression method, is superior in overcoming problems of high dimensionality and discontinuities in the data. Furthermore, the stability and accuracy of the MARS model can be improved by bootstrap aggregating, namely bagging. In this paper, the bagging MARS (BMARS) method is integrated into a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model, which is developed to simulate groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is surrogated by the statistical model developed using the BMARS algorithm. The surrogate model, which is fitted and validated using a training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate global sensitivities of head outputs to input parameters, which are used to analyze their importance for the model outputs spatiotemporally. Only sensitive parameters are included in the calibration process to further improve computational efficiency. The normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between the simulated and observed heads demonstrates the feasibility of this highly efficient calibration framework.
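
    The surrogate-based calibration loop can be sketched generically: train a bagged regression surrogate on (parameter, NRMSE) pairs from a limited number of physical-model runs, then optimize over the cheap surrogate. In the sketch below a bagged tree ensemble stands in for BMARS (scikit-learn ships no MARS implementation), and the "expensive model" is a toy function, not MODFLOW.

        import numpy as np
        from scipy.optimize import differential_evolution
        from sklearn.ensemble import BaggingRegressor

        def expensive_nrmse(p):
            """Stand-in for running the physical model and computing NRMSE at observation wells."""
            return np.sqrt((p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2) + 0.01 * np.sin(10 * p[0])

        rng = np.random.default_rng(2)
        bounds = [(-5.0, 5.0), (-5.0, 5.0)]
        P = rng.uniform([b[0] for b in bounds], [b[1] for b in bounds], size=(200, 2))
        y = np.array([expensive_nrmse(p) for p in P])   # training runs of the "physical" model

        surrogate = BaggingRegressor(n_estimators=50, random_state=0).fit(P, y)

        # Optimize the cheap surrogate instead of the expensive model.
        res = differential_evolution(lambda p: surrogate.predict(p.reshape(1, -1))[0],
                                     bounds, maxiter=50, seed=0)
        print("surrogate optimum:", res.x, "true NRMSE there:", expensive_nrmse(res.x))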

  12. Further evidence for the increased power of LOD scores compared with nonparametric methods.

    PubMed

    Durner, M; Vieland, V J; Greenberg, D A

    1999-01-01

    In genetic analysis of diseases in which the underlying model is unknown, "model free" methods, such as affected sib pair (ASP) tests, are often preferred over LOD-score methods, although LOD-score methods under the correct or even approximately correct model are more powerful than ASP tests. However, there might be circumstances in which nonparametric methods outperform LOD-score methods. Recently, Dizier et al. reported that, in some complex two-locus (2L) models, LOD-score methods with segregation-analysis-derived parameters had less power to detect linkage than ASP tests. We investigated whether these particular models in fact represent a situation in which ASP tests are more powerful than LOD scores. We simulated data according to the parameters specified by Dizier et al. and analyzed the data by using (a) single-locus (SL) LOD-score analysis performed twice, under a simple dominant and a recessive mode of inheritance (MOI), (b) ASP methods, and (c) nonparametric linkage (NPL) analysis. We show that SL analysis performed twice and corrected for the type I error increase due to multiple testing yields almost as much linkage information as does an analysis under the correct 2L model and is more powerful than either the ASP method or the NPL method. We demonstrate that, even for complex genetic models, the most important condition for linkage analysis is that the assumed MOI at the disease locus being tested is approximately correct, not that the inheritance of the disease per se is correctly specified. In the analysis by Dizier et al., segregation analysis led to estimates of dominance parameters that were grossly misspecified for the locus tested in those models in which ASP tests appeared to be more powerful than LOD-score analyses.

  13. Regional models of the gravity field from terrestrial gravity data of heterogeneous quality and density

    NASA Astrophysics Data System (ADS)

    Talvik, Silja; Oja, Tõnis; Ellmann, Artu; Jürgenson, Harli

    2014-05-01

    Gravity field models on a regional scale are needed for a number of applications, for example national geoid computation, processing of precise levelling data and geological modelling. Thus the methods applied for modelling the gravity field from surveyed gravimetric information need to be considered carefully. The influence of using different gridding methods, the inclusion of unit or realistic weights, and indirect gridding of free-air anomalies (FAA) are investigated in this study. Known gridding methods such as kriging (KRIG), least squares collocation (LSCO), continuous curvature (CCUR) and optimal Delaunay triangulation (ODET) are used for the production of gridded gravity field surfaces. As the quality of the collected data varies considerably depending on the methods and instruments available or used in surveying, it is important to weight the input data. This puts additional demands on data maintenance, as accuracy information needs to be available for each data point participating in the modelling; this is complicated by older gravity datasets, where the uncertainties of not only gravity values but also supplementary information such as survey point position are not always known very accurately. A number of gravity field applications (e.g. geoid computation) demand an FAA model, the acquisition of which is also investigated. Instead of direct gridding, it could be more appropriate to proceed with indirect FAA modelling using a Bouguer anomaly grid to reduce the effect of topography on the resulting FAA model (e.g. near terraced landforms). The inclusion of different gridding methods, weights and indirect FAA modelling helps to improve gravity field modelling methods. It becomes possible to estimate the impact of varying methodological approaches on gravity field modelling as statistical output is compared. Such knowledge helps assess the accuracy of gravity field models and their effect on the aforementioned applications.

  14. Multiview road sign detection via self-adaptive color model and shape context matching

    NASA Astrophysics Data System (ADS)

    Liu, Chunsheng; Chang, Faliang; Liu, Chengyun

    2016-09-01

    The multiview appearance of road signs in uncontrolled environments has made road sign detection a challenging problem in computer vision. We propose a road sign detection method to detect multiview road signs. This method is based on several algorithms, including the classical cascaded detector, the self-adaptive weighted Gaussian color model (SW-Gaussian model), and a shape context matching method. The classical cascaded detector is used to detect frontal road signs in video sequences and obtain the parameters for the SW-Gaussian model. The proposed SW-Gaussian model combines a two-dimensional Gaussian model and the normalized red channel, which can greatly enhance the contrast between red signs and the background. The proposed shape context matching method can match shapes under heavy noise and is used to detect road signs viewed from different directions. The experimental results show that, compared with previous detection methods, the proposed multiview detection method achieves a higher detection rate on signs viewed from different directions.

  15. Semi-automated extraction of longitudinal subglacial bedforms from digital terrain models - Two new methods

    NASA Astrophysics Data System (ADS)

    Jorge, Marco G.; Brennand, Tracy A.

    2017-07-01

    Relict drumlin and mega-scale glacial lineation (positive relief, longitudinal subglacial bedforms - LSBs) morphometry has been used as a proxy for paleo ice-sheet dynamics. LSB morphometric inventories have relied on manual mapping, which is slow and subjective and thus potentially difficult to reproduce. Automated methods are faster and reproducible, but previous methods for LSB semi-automated mapping have not been highly successful. Here, two new object-based methods for the semi-automated extraction of LSBs (footprints) from digital terrain models are compared in a test area in the Puget Lowland, Washington, USA. As segmentation procedures to create LSB-candidate objects, the normalized closed contour method relies on the contouring of a normalized local relief model addressing LSBs on slopes, and the landform elements mask method relies on the classification of landform elements derived from the digital terrain model. For identifying which LSB-candidate objects correspond to LSBs, both methods use the same LSB operational definition: a ruleset encapsulating expert knowledge, published morphometric data, and the morphometric range of LSBs in the study area. The normalized closed contour method was separately applied to four different local relief models, two computed in moving windows and two hydrology-based. Overall, the normalized closed contour method outperformed the landform elements mask method, and it performed best when applied to a hydrology-based relief model derived from a multiple-direction flow-routing algorithm. For an assessment of its transferability, the normalized closed contour method was evaluated on a second area, the Chautauqua drumlin field, Pennsylvania and New York, USA, where it performed better than in the Puget Lowland. A broad comparison to previous methods suggests that the normalized closed contour method may be the most capable method to date, but more development is required.

  16. Word sense disambiguation in the clinical domain: a comparison of knowledge-rich and knowledge-poor unsupervised methods

    PubMed Central

    Chasin, Rachel; Rumshisky, Anna; Uzuner, Ozlem; Szolovits, Peter

    2014-01-01

    Objective To evaluate state-of-the-art unsupervised methods on the word sense disambiguation (WSD) task in the clinical domain. In particular, to compare graph-based approaches relying on a clinical knowledge base with bottom-up topic-modeling-based approaches. We investigate several enhancements to the topic-modeling techniques that use domain-specific knowledge sources. Materials and methods The graph-based methods use variations of PageRank and distance-based similarity metrics, operating over the Unified Medical Language System (UMLS). Topic-modeling methods use unlabeled data from the Multiparameter Intelligent Monitoring in Intensive Care (MIMIC II) database to derive models for each ambiguous word. We investigate the impact of using different linguistic features for topic models, including UMLS-based and syntactic features. We use a sense-tagged clinical dataset from the Mayo Clinic for evaluation. Results The topic-modeling methods achieve 66.9% accuracy on a subset of the Mayo Clinic's data, while the graph-based methods only reach the 40–50% range, with a most-frequent-sense baseline of 56.5%. Features derived from the UMLS semantic type and concept hierarchies do not produce a gain over bag-of-words features in the topic models, but identifying phrases from UMLS and using syntax does help. Discussion Although topic models outperform graph-based methods, semantic features derived from the UMLS prove too noisy to improve performance beyond bag-of-words. Conclusions Topic modeling for WSD provides superior results in the clinical domain; however, integration of knowledge remains to be effectively exploited. PMID:24441986

  17. Intercomparison of Multiscale Modeling Approaches in Simulating Subsurface Flow and Transport

    NASA Astrophysics Data System (ADS)

    Yang, X.; Mehmani, Y.; Barajas-Solano, D. A.; Song, H. S.; Balhoff, M.; Tartakovsky, A. M.; Scheibe, T. D.

    2016-12-01

    Hybrid multiscale simulations that couple models across scales are critical for advancing predictions of larger-system behavior using an understanding of fundamental processes. In the current study, three hybrid multiscale methods are intercompared: the multiscale loose-coupling method, the multiscale finite volume (MsFV) method and the multiscale mortar method. The loose-coupling method enables a parallel workflow structure based on the Swift scripting environment that manages the complex process of executing coupled micro- and macro-scale models without being intrusive to the at-scale simulators. The MsFV method applies microscale and macroscale models over overlapping subdomains of the modeling domain and enforces continuity of concentration and transport fluxes between models via restriction and prolongation operators. The mortar method is a non-overlapping domain decomposition approach capable of coupling all permutations of pore- and continuum-scale models with each other. In doing so, Lagrange multipliers are used at interfaces shared between the subdomains so as to establish continuity of species/fluid mass flux. Subdomain computations can be performed either concurrently or non-concurrently depending on the algorithm used. All the above methods have been shown to be accurate and efficient in studying flow and transport in porous media. However, there have been no field-scale applications of, or benchmarking among, the various hybrid multiscale approaches. To address this challenge, we apply all three hybrid multiscale methods to simulate water flow and transport in a conceptualized 2D modeling domain of the hyporheic zone, where strong interactions between groundwater and surface water exist across multiple scales. In all three multiscale methods, fine-scale simulations are applied to a thin layer of riverbed alluvial sediments, while the macroscopic simulations are used for the larger subsurface aquifer domain. Different numerical coupling methods are then applied between scales and intercompared. Comparisons are drawn in terms of velocity distributions, solute transport behavior, algorithm-induced numerical error and computing cost. The intercomparison provides support for confidence in a variety of hybrid multiscale methods and motivates further development and applications.

  18. Determining geometric error model parameters of a terrestrial laser scanner through Two-face, Length-consistency, and Network methods

    PubMed Central

    Wang, Ling; Muralikrishnan, Bala; Rachakonda, Prem; Sawyer, Daniel

    2017-01-01

    Terrestrial laser scanners (TLS) are increasingly used in large-scale manufacturing and assembly where required measurement uncertainties are on the order of few tenths of a millimeter or smaller. In order to meet these stringent requirements, systematic errors within a TLS are compensated in-situ through self-calibration. In the Network method of self-calibration, numerous targets distributed in the work-volume are measured from multiple locations with the TLS to determine parameters of the TLS error model. In this paper, we propose two new self-calibration methods, the Two-face method and the Length-consistency method. The Length-consistency method is proposed as a more efficient way of realizing the Network method where the length between any pair of targets from multiple TLS positions are compared to determine TLS model parameters. The Two-face method is a two-step process. In the first step, many model parameters are determined directly from the difference between front-face and back-face measurements of targets distributed in the work volume. In the second step, all remaining model parameters are determined through the Length-consistency method. We compare the Two-face method, the Length-consistency method, and the Network method in terms of the uncertainties in the model parameters, and demonstrate the validity of our techniques using a calibrated scale bar and front-face back-face target measurements. The clear advantage of these self-calibration methods is that a reference instrument or calibrated artifacts are not required, thus significantly lowering the cost involved in the calibration process. PMID:28890607

  19. Comparing model-based and model-free analysis methods for QUASAR arterial spin labeling perfusion quantification.

    PubMed

    Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J

    2013-05-01

    Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. Major sources of discrepancies between model-free and model-based analysis were attributed to the effects of dispersion and the degree to which the two methods can separate macrovascular and tissue signal. Copyright © 2012 Wiley Periodicals, Inc.
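
    The "model-free" analysis referred to above rests on numerical deconvolution of the tissue curve with a local arterial input function (AIF). A minimal truncated-SVD deconvolution sketch follows, with synthetic gamma-variate curves as placeholders for QUASAR data; the time grid, threshold and curve shapes are assumptions.

        import numpy as np

        def svd_deconvolve(aif, tissue, dt, thresh=0.1):
            """Recover the residue function R(t) from tissue = AIF (*) R via truncated SVD."""
            n = len(aif)
            A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)])
            U, s, Vt = np.linalg.svd(A)
            s_inv = np.where(s > thresh * s.max(), 1.0 / s, 0.0)  # drop small singular values (regularization)
            return Vt.T @ (s_inv * (U.T @ tissue))

        dt = 0.3
        t = np.arange(0, 20, dt)
        aif = (t / 2.0) ** 2 * np.exp(-t / 1.5)        # gamma-variate arterial input
        r_true = np.exp(-t / 4.0)                      # exponential residue function
        tissue = dt * np.convolve(aif, r_true)[: len(t)]
        r_est = svd_deconvolve(aif, tissue, dt)        # perfusion scales with the peak of r_est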

  20. On Inertial Body Tracking in the Presence of Model Calibration Errors

    PubMed Central

    Miezal, Markus; Taetz, Bertram; Bleser, Gabriele

    2016-01-01

    In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments—the IMU-to-segment calibrations, subsequently called I2S calibrations—to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. I2S position and segment length errors in the tested ranges. Errors in the I2S orientations were, however, linearly propagated into the estimated segment orientations. In the absence of magnetic disturbances, severe model calibration errors and fast motion changes, the newly developed IMU centered EKF-based method yielded comparable results with lower computational complexity. PMID:27455266

  1. Impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Li, Chao; Brissette, François P.; Chen, Hua; Wang, Mingna; Essou, Gilles R. C.

    2018-05-01

    Bias correction is usually implemented prior to using climate model outputs for impact studies. However, bias correction methods that are commonly used treat climate variables independently and often ignore inter-variable dependencies. The effects of ignoring such dependencies on impact studies need to be investigated. This study aims to assess the impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling. To this end, a joint bias correction (JBC) method which corrects the joint distribution of two variables as a whole is compared with an independent bias correction (IBC) method; this is considered in terms of correcting simulations of precipitation and temperature from 26 climate models for hydrological modeling over 12 watersheds located in various climate regimes. The results show that the simulated precipitation and temperature are considerably biased not only in the individual distributions, but also in their correlations, which in turn result in biased hydrological simulations. In addition to reducing the biases of the individual characteristics of precipitation and temperature, the JBC method can also reduce the bias in precipitation-temperature (P-T) correlations. In terms of hydrological modeling, the JBC method performs significantly better than the IBC method for 11 out of the 12 watersheds over the calibration period. For the validation period, the advantages of the JBC method are greatly reduced as the performance becomes dependent on the watershed, GCM and hydrological metric considered. For arid/tropical and snowfall-rainfall-mixed watersheds, JBC performs better than IBC. For snowfall- or rainfall-dominated watersheds, however, the two methods behave similarly, with IBC performing somewhat better than JBC. Overall, the results emphasize the advantages of correcting the P-T correlation when using climate model-simulated precipitation and temperature to assess the impact of climate change on watershed hydrology. However, a thorough validation and a comparison with other methods are recommended before using the JBC method, since it may perform worse than the IBC method for some cases due to bias nonstationarity of climate model outputs.

  2. Python package for model STructure ANalysis (pySTAN)

    NASA Astrophysics Data System (ADS)

    Van Hoey, Stijn; van der Kwast, Johannes; Nopens, Ingmar; Seuntjens, Piet

    2013-04-01

    The selection and identification of a suitable hydrological model structure is more than fitting parameters of a model structure to reproduce a measured hydrograph. The procedure is highly dependent on various criteria, i.e. the modelling objective, the characteristics and the scale of the system under investigation as well as the available data. Rigorous analysis of the candidate model structures is needed to support and objectify the selection of the most appropriate structure for a specific case (or eventually justify the use of a proposed ensemble of structures). This holds both in the situation of choosing between a limited set of different structures as well as in the framework of flexible model structures with interchangeable components. Many different methods to evaluate and analyse model structures exist. This leads to a sprawl of available methods, all characterized by different assumptions, changing conditions of application and various code implementations. Methods typically focus on optimization, sensitivity analysis or uncertainty analysis, with backgrounds from optimization, machine-learning or statistics amongst others. These methods also need an evaluation metric (objective function) to compare the model outcome with some observed data. However, for current methods described in the literature, implementations are not always transparent and reproducible (if available at all). No standard procedures exist to share code, and the popularity (and number of applications) of a method is sometimes more dependent on its availability than on its merits. Moreover, new implementations of existing methods are difficult to verify, and the different theoretical backgrounds make it difficult for environmental scientists to decide about the usefulness of a specific method. A common and open framework with a large set of methods can support users in deciding about the most appropriate method. Hence, it enables users to simultaneously apply and compare different methods on a fair basis. We developed and present pySTAN (python framework for STructure Analysis), a python package containing a set of functions for model structure evaluation to provide the analysis of (hydrological) model structures. A selected set of algorithms for optimization, uncertainty and sensitivity analysis is currently available, together with a set of evaluation (objective) functions and input distributions to sample from. The methods are implemented in a model-independent way, and the python language provides the wrapper functions to administer external model codes. Different objective functions can be considered simultaneously, with both statistical metrics and more hydrology-specific metrics. By using so-called reStructuredText (sphinx documentation generator) and Python documentation strings (docstrings), the generation of manual pages is semi-automated and a specific environment is available to enhance both the readability and transparency of the code. It thereby enables a larger group of users to apply and compare these methods and to extend the functionalities.
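
    The record does not spell out pySTAN's API, so the sketch below only illustrates the underlying idea with hypothetical names: a model-independent wrapper that runs an external model code and scores it with interchangeable objective functions.

    ```python
    # Hypothetical model-independent evaluation wrapper (names are illustrative,
    # not pySTAN's actual API).
    import subprocess
    import numpy as np

    def nash_sutcliffe(obs, sim):
        """Nash-Sutcliffe efficiency, a hydrology-specific objective function."""
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def run_external_model(params, exe="./hydro_model"):
        """Write parameters, call the external model code, read its output."""
        np.savetxt("params.txt", params)
        subprocess.run([exe, "params.txt", "out.txt"], check=True)
        return np.loadtxt("out.txt")

    def evaluate(params, obs, objectives=(nash_sutcliffe,)):
        sim = run_external_model(params)
        return {f.__name__: f(obs, sim) for f in objectives}
    ```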

  3. Extension of local front reconstruction method with controlled coalescence model

    NASA Astrophysics Data System (ADS)

    Rajkotwala, A. H.; Mirsandi, H.; Peters, E. A. J. F.; Baltussen, M. W.; van der Geld, C. W. M.; Kuerten, J. G. M.; Kuipers, J. A. M.

    2018-02-01

    The physics of droplet collisions involves a wide range of length scales. This poses a challenge to accurately simulate such flows with standard fixed grid methods due to their inability to resolve all relevant scales with an affordable number of computational grid cells. A solution is to couple a fixed grid method with subgrid models that account for microscale effects. In this paper, we improved and extended the Local Front Reconstruction Method (LFRM) with the film drainage model of Zhang and Law [Phys. Fluids 23, 042102 (2011)]. The new framework is first validated by (near) head-on collision of two equal tetradecane droplets using experimental film drainage times. When the experimental film drainage times are used, LFRM is better at predicting the droplet collisions, especially at high velocity, in comparison with other fixed grid methods (i.e., the front tracking method and the coupled level set and volume of fluid method). When the film drainage model is invoked, the method shows a good qualitative match with experiments, but a quantitative correspondence of the predicted film drainage time with the experimental drainage time is not obtained, indicating that further development of the film drainage model is required. However, it can be safely concluded that LFRM coupled with film drainage models is much better at predicting the collision dynamics than the traditional methods.

  4. Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering

    NASA Technical Reports Server (NTRS)

    Bolton, Matthew L.; Bass, Ellen J.

    2009-01-01

    Both the human factors engineering (HFE) and formal methods communities are concerned with finding and eliminating problems with safety-critical systems. This work discusses a modeling effort that leveraged methods from both fields to use model checking with HFE practices to perform formal verification of a human-interactive system. Despite the use of a seemingly simple target system, a patient controlled analgesia pump, the initial model proved to be difficult for the model checker to verify in a reasonable amount of time. This resulted in a number of model revisions that affected the HFE architectural, representativeness, and understandability goals of the effort. If formal methods are to meet the needs of the HFE community, additional modeling tools and technological developments are necessary.

  5. Novel SHM method to locate damages in substructures based on VARX models

    NASA Astrophysics Data System (ADS)

    Ugalde, U.; Anduaga, J.; Martínez, F.; Iturrospe, A.

    2015-07-01

    A novel damage localization method is proposed, which is based on a substructuring approach and makes use of Vector Auto-Regressive with eXogenous input (VARX) models. The substructuring approach divides the monitored structure into several multi-DOF isolated substructures. Each individual substructure is then modelled as a VARX model, and the health of each substructure is determined by analyzing the variation of its VARX model. The method can detect whether an isolated substructure is damaged, and can also locate and quantify the damage within the substructure. It is not necessary to have a theoretical model of the structure; only the measured displacement data are required to estimate the isolated substructure's VARX model. The proposed method is validated by simulations of a two-dimensional lattice structure.
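
    A minimal sketch of the ingredients named above, under simplifying assumptions (a VARX(1) model, ordinary least squares, a plain coefficient-change damage index); the authors' exact estimator is not specified in this record.

    ```python
    # Fit y[t] = A y[t-1] + B u[t] + e[t] by least squares and compare
    # coefficients against a healthy-state reference.
    import numpy as np

    def fit_varx1(y, u):
        """y: (T, n) endogenous displacements; u: (T, m) exogenous inputs.
        Returns (A, B) for the model y[t] = A y[t-1] + B u[t] + e[t]."""
        Y = y[1:]                                   # regression targets
        X = np.hstack([y[:-1], u[1:]])              # lagged states + inputs
        coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
        n = y.shape[1]
        return coef[:n].T, coef[n:].T               # A (n x n), B (n x m)

    def damage_index(A_ref, A_now):
        """Relative change of the VARX coefficients w.r.t. the healthy reference."""
        return np.linalg.norm(A_now - A_ref) / np.linalg.norm(A_ref)
    ```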

  6. An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection

    PubMed Central

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by Chemical Reaction Optimization into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. It is hoped that this study will provide new insight into developing more accurate and reliable biological models based on limited and low-quality experimental data. PMID:23593445

  7. Statistically qualified neuro-analytic failure detection method and system

    DOEpatents

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    2002-03-02

    An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic model modification of the deterministic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.

  8. Adaptive Modeling Procedure Selection by Data Perturbation.

    PubMed

    Zhang, Yongli; Shen, Xiaotong

    2015-10-01

    Many procedures have been developed to deal with the high-dimensional problems that are emerging in various business and economics areas. To evaluate and compare these procedures, the modeling uncertainty caused by model selection and parameter estimation has to be assessed and integrated into the modeling process. To do this, a data perturbation method estimates the modeling uncertainty inherent in a selection process by perturbing the data. Critical to data perturbation is the size of the perturbation, as the perturbed data should resemble the original dataset. To account for the modeling uncertainty, we derive the optimal size of perturbation, which adapts to the data, the model space, and other relevant factors in the context of linear regression. On this basis, we develop an adaptive data-perturbation method that, unlike its nonadaptive counterpart, performs well in different situations. This leads to a data-adaptive model selection method. Both theoretical and numerical analysis suggest that the data-adaptive model selection method adapts to distinct situations in that it yields consistent model selection and optimal prediction, without knowing which situation exists a priori. The proposed method is applied to real data from the commodity market and outperforms its competitors in terms of price forecasting accuracy.

  9. A modified precise integration method for transient dynamic analysis in structural systems with multiple damping models

    NASA Astrophysics Data System (ADS)

    Ding, Zhe; Li, Li; Hu, Yujin

    2018-01-01

    Sophisticated engineering systems are usually assembled from subcomponents with significantly different levels of energy dissipation. Such systems therefore often contain multiple damping models, which leads to great difficulties in analysis. This paper aims at developing a time integration method for structural systems with multiple damping models. The dynamical system is first represented by a generally damped model. Based on this, a new extended state-space method for the damped system is derived. A modified precise integration method with Gauss-Legendre quadrature is then proposed. The numerical stability and accuracy of the proposed integration method are discussed in detail. It is verified that the method is conditionally stable and has inherent algorithmic damping, period error and amplitude decay. Numerical examples are provided to assess the performance of the proposed method compared with other methods. It is demonstrated that the method is more accurate than other methods while retaining good efficiency, and that the stability condition is easily satisfied in practice.

  10. Deployment Simulation Methods for Ultra-Lightweight Inflatable Structures

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Johnson, Arthur R.

    2003-01-01

    Two dynamic inflation simulation methods are employed for modeling the deployment of folded thin-membrane tubes. The simulations are necessary because ground tests include gravity effects and may poorly represent deployment in space. The two simulation methods are referred to as the Control Volume (CV) method and the Arbitrary Lagrangian Eulerian (ALE) method. They are available in the LS-DYNA nonlinear dynamic finite element code. Both methods are suitable for modeling the interactions between the inflation gas and the thin-membrane tube structures. The CV method only considers the pressure induced by the inflation gas in the simulation, while the ALE method models the actual flow of the inflation gas. Thus, the transient fluid properties at any location within the tube can be predicted by the ALE method. Deployment simulations of three packaged tube models, namely coiled, Z-folded, and telescopically folded configurations, are performed. Results predicted by both methods for the telescopically folded configuration are correlated and computational efficiency issues are discussed.

  11. Forecasting hotspots in East Kutai, Kutai Kartanegara, and West Kutai as early warning information

    NASA Astrophysics Data System (ADS)

    Wahyuningsih, S.; Goejantoro, R.; Rizki, N. A.

    2018-04-01

    The aims of this research are to model hotspots and forecast hotspots for 2017 in East Kutai, Kutai Kartanegara and West Kutai. The methods used in this research were Holt's exponential smoothing, Holt's additive damped trend method, Holt-Winters' additive method, the additive decomposition method, the multiplicative decomposition method, the Loess decomposition method and the Box-Jenkins method. Among the smoothing techniques, additive decomposition performed better than Holt's exponential smoothing. The hotspot models obtained using the Box-Jenkins method were the Autoregressive Integrated Moving Average models ARIMA(1,1,0), ARIMA(0,2,1), and ARIMA(0,1,0). Comparing the results from all methods used in this research on the basis of the Root Mean Squared Error (RMSE) shows that the Loess decomposition method is the best time series model, because it has the smallest RMSE. Thus the Loess decomposition model was used to forecast the number of hotspots. The forecasting results indicate that hotspot numbers tend to increase at the end of 2017 in Kutai Kartanegara and West Kutai, but remain stationary in East Kutai.
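
    A hedged sketch of two of the smoothing methods named above using statsmodels; the actual hotspot series is not reproduced in this record, so a synthetic monthly series stands in for it.

    ```python
    # Compare Holt's damped trend and Holt-Winters' additive method by in-sample RMSE.
    import numpy as np
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    rng = np.random.default_rng(1)
    y = 50 + np.arange(48) * 0.5 + 10 * np.sin(np.arange(48) * 2 * np.pi / 12) \
        + rng.normal(0, 3, 48)                     # stand-in hotspot counts

    damped = ExponentialSmoothing(y, trend="add", damped_trend=True).fit()
    hw_add = ExponentialSmoothing(y, trend="add", seasonal="add",
                                  seasonal_periods=12).fit()

    for name, model in [("Holt damped", damped), ("Holt-Winters additive", hw_add)]:
        rmse = np.sqrt(np.mean((model.fittedvalues - y) ** 2))
        print(name, "RMSE:", round(rmse, 2), "next 3:", model.forecast(3).round(1))
    ```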

  12. Comparing and improving proper orthogonal decomposition (POD) to reduce the complexity of groundwater models

    NASA Astrophysics Data System (ADS)

    Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas

    2017-04-01

    Physically-based modeling is a widespread tool in the understanding and management of natural systems. With the high complexity of many such models and the huge number of model runs necessary for parameter estimation and uncertainty analysis, overall run times can be prohibitively long even on modern computer systems. An encouraging strategy to tackle this problem is model reduction. In this contribution, we compare different proper orthogonal decomposition (POD, Siade et al. (2010)) methods and their potential applications to groundwater models. The POD method performs a singular value decomposition on system states as simulated by the complex (e.g., PDE-based) groundwater model taken at several time-steps, so-called snapshots. The singular vectors with the highest information content resulting from this decomposition are then used as a basis for projection of the system of model equations onto a subspace of much lower dimensionality than the original complex model, thereby greatly reducing complexity and accelerating run times. In its original form, this method is only applicable to linear problems. Many real-world groundwater models are non-linear, though. These non-linearities are introduced either through model structure (unconfined aquifers) or boundary conditions (certain Cauchy boundaries, like rivers with variable connection to the groundwater table). To date, applications of POD have focused on groundwater models simulating pumping tests in confined aquifers with constant head boundaries. In contrast, POD model reduction either greatly loses accuracy or does not significantly reduce model run time if the above-mentioned non-linearities are introduced. We have also found that variable Dirichlet boundaries are problematic for POD model reduction. An extension to the POD method, called POD-DEIM, has been developed for non-linear groundwater models by Stanko et al. (2016). This method uses spatial interpolation points to build the equation system in the reduced model space, thereby allowing the recalculation of system matrices at every time-step, as necessary for non-linear models, while retaining the speed of the reduced model. This makes POD-DEIM applicable to groundwater models simulating unconfined aquifers. However, in our analysis, the method struggled to reproduce variable river boundaries accurately and gave no advantage for variable Dirichlet boundaries compared to the original POD method. We have developed another extension for POD that aims to address these remaining problems by performing a second POD operation on the model matrix on the left-hand side of the equation. The method aims to at least reproduce the accuracy of the other methods where they are applicable while outperforming them for setups with changing river boundaries or variable Dirichlet boundaries. We compared the new extension with original POD and POD-DEIM for different combinations of model structures and boundary conditions. The new method shows the potential of POD extensions for applications to non-linear groundwater systems and complex boundary conditions that go beyond the current, relatively limited range of applications. References: Siade, A. J., Putti, M., and Yeh, W. W.-G. (2010). Snapshot selection for groundwater model reduction using proper orthogonal decomposition. Water Resour. Res., 46(8):W08539. Stanko, Z. P., Boyce, S. E., and Yeh, W. W.-G. (2016). Nonlinear model reduction of unconfined groundwater flow using POD and DEIM. Advances in Water Resources, 97:130-143.
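
    A minimal sketch of the basic POD projection described above: snapshots of the full model state are decomposed by SVD, and the leading singular vectors form the reduced basis onto which a linear system is projected. The non-linear extensions (POD-DEIM and the authors' new method) are beyond this toy example.

    ```python
    # Build a POD basis from snapshots and project a linear system onto it.
    import numpy as np

    def pod_basis(snapshots, energy=0.999):
        """snapshots: (n_nodes, n_snapshots). Returns basis Phi (n_nodes, r)."""
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        cum = np.cumsum(s ** 2) / np.sum(s ** 2)   # captured "energy" fraction
        r = int(np.searchsorted(cum, energy)) + 1  # modes needed for the target
        return U[:, :r]

    def solve_reduced(A, b, Phi):
        """Solve A h = b via the reduced system, with h approximated as Phi a."""
        Ar = Phi.T @ A @ Phi                       # r x r instead of n x n
        br = Phi.T @ b
        return Phi @ np.linalg.solve(Ar, br)       # lift back to full space
    ```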

  13. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved the predictive performance of the individual models and of model averaging in both the synthetic and experimental studies.
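
    For concreteness, a short sketch of how averaging weights follow from a selection criterion: with Delta_k = IC_k - min(IC), the weight of model k is exp(-Delta_k/2), normalized over models (the criteria above differ only in how IC_k is computed). It also shows why a few likelihood points of separation already push one model toward 100% weight, the problem the study addresses.

    ```python
    # Information-criterion-based model averaging weights.
    import numpy as np

    def averaging_weights(ic_values):
        delta = np.asarray(ic_values, dtype=float)
        delta -= delta.min()                # differences to the best model
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    print(averaging_weights([100.0, 104.0, 112.0]))   # ~[0.88, 0.12, 0.002]
    ```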

  14. Equivalent Electromagnetic Constants for Microwave Application to Composite Materials for the Multi-Scale Problem

    PubMed Central

    Fujisaki, Keisuke; Ikeda, Tomoyuki

    2013-01-01

    To connect models at different scales in the multi-scale problem of microwave use, equivalent material constants were investigated numerically by three-dimensional electromagnetic field analysis, taking into account eddy current and displacement current. A volume averaged method and a standing wave method were used to derive the equivalent material constants; water particles and aluminum particles were used as composite materials. Consumed electrical power is used for the evaluation. Water particles have the same equivalent material constants for both methods; the same electrical power is obtained for both the precise model (micro-model) and the homogeneous model (macro-model). However, aluminum particles have dissimilar equivalent material constants for the two methods; different electric power is obtained for the two models. The differing electromagnetic phenomena derive from the expression of the eddy current. For small electrical conductivity, such as water, the macro-current which flows in the macro-model and the micro-current which flows in the micro-model express the same electromagnetic phenomena. However, for large electrical conductivity, such as aluminum, the macro-current and micro-current express different electromagnetic phenomena. The eddy current which is observed in the micro-model is not expressed by the macro-model. Therefore, the equivalent material constants derived from the volume averaged method and the standing wave method are applicable to water, with its small electrical conductivity, but not to aluminum, with its large electrical conductivity. PMID:28788395

  15. A fragmentation and reassembly method for ab initio phasing.

    PubMed

    Shrestha, Rojan; Zhang, Kam Y J

    2015-02-01

    Ab initio phasing with de novo models has become a viable approach for structural solution from protein crystallographic diffraction data. This approach takes advantage of the known protein sequence information, predicts de novo models and uses them for structure determination by molecular replacement. However, even the current state-of-the-art de novo modelling method has a limit as to the accuracy of the model predicted, which is sometimes insufficient to be used as a template for successful molecular replacement. A fragment-assembly phasing method has been developed that starts from an ensemble of low-accuracy de novo models, disassembles them into fragments, places them independently in the crystallographic unit cell by molecular replacement and then reassembles them into a whole structure that can provide sufficient phase information to enable complete structure determination by automated model building. Tests on ten protein targets showed that the method could solve structures for eight of these targets, although the predicted de novo models cannot be used as templates for successful molecular replacement since the best model for each target is on average more than 4.0 Å away from the native structure. The method has extended the applicability of the ab initio phasing by de novo models approach. The method can be used to solve structures when the best de novo models are still of low accuracy.

  16. Analysis about modeling MEC7000 excitation system of nuclear power unit

    NASA Astrophysics Data System (ADS)

    Liu, Guangshi; Sun, Zhiyuan; Dou, Qian; Liu, Mosi; Zhang, Yihui; Wang, Xiaoming

    2018-02-01

    Given the importance of accurately modeling the excitation system in stability calculations for inland nuclear power plants, and the lack of research on modeling the MEC7000 excitation system, this paper summarizes a general method for modeling and simulating the MEC7000 excitation system. The method also solves the key issues of computing the I/O interface parameters and of converting the measured excitation system model into a BPA simulation model. On this basis, the simulation modeling of the MEC7000 excitation system was completed for the first time domestically. A no-load small-disturbance check demonstrates that the proposed model and algorithm are correct and efficient.

  17. Construction of mathematical model for measuring material concentration by colorimetric method

    NASA Astrophysics Data System (ADS)

    Liu, Bing; Gao, Lingceng; Yu, Kairong; Tan, Xianghua

    2018-06-01

    This paper uses the method of multiple linear regression to analyze the data of problem C of the 2017 mathematical modeling contest. First, we established regression models for the concentrations of five substances, but only the regression model for the concentration of urea in milk passed the significance test. The regression model established from the second set of data passed the significance test, but suffered from serious multicollinearity. We improved the model by principal component analysis. The improved model is used to control the system so that the concentration of material can be measured by the direct colorimetric method.
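
    A hedged sketch of the remedy described above, principal component regression for a multicollinear design; the data are synthetic stand-ins, since the contest data are not part of this record.

    ```python
    # Principal component regression: PCA removes the near-collinear direction
    # before the linear fit.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(2)
    x1 = rng.normal(size=200)
    X = np.column_stack([x1,
                         x1 + rng.normal(0, 0.01, 200),   # nearly collinear column
                         rng.normal(size=200)])
    y = 2 * x1 + 0.5 * X[:, 2] + rng.normal(0, 0.1, 200)  # stand-in "concentration"

    pcr = make_pipeline(PCA(n_components=2), LinearRegression()).fit(X, y)
    print("R^2:", round(pcr.score(X, y), 3))
    ```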

  18. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    EPA Science Inventory

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...

  19. Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models

    NASA Astrophysics Data System (ADS)

    Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.

    2017-12-01

    Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicate improved prediction accuracies (median of 10-50%), but primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream measurements.

  20. Tip-tilt disturbance model identification based on non-linear least squares fitting for Linear Quadratic Gaussian control

    NASA Astrophysics Data System (ADS)

    Yang, Kangjian; Yang, Ping; Wang, Shuai; Dong, Lizhi; Xu, Bing

    2018-05-01

    We propose a method to identify the tip-tilt disturbance model for Linear Quadratic Gaussian control. This identification method, based on the Levenberg-Marquardt algorithm, requires little prior information and no auxiliary system, and is convenient for identifying the tip-tilt disturbance model on-line for real-time control. It thereby makes it easy for Linear Quadratic Gaussian control to run efficiently in different adaptive optics systems for vibration mitigation. The validity of the Linear Quadratic Gaussian control associated with this tip-tilt disturbance model identification method is verified on experimental data, replayed in simulation.
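
    A rough sketch of Levenberg-Marquardt identification with SciPy; the damped-sinusoid disturbance model below is an illustrative stand-in, not the paper's tip-tilt parameterization, and all numbers are made up.

    ```python
    # Fit a vibration-like model y(t) = a * exp(-d t) * sin(2 pi f t) with LM.
    import numpy as np
    from scipy.optimize import least_squares

    t = np.linspace(0, 1, 500)
    true = (0.8, 25.0, 1.2)                        # amplitude, freq (Hz), decay
    rng = np.random.default_rng(3)
    y = true[0] * np.exp(-true[2] * t) * np.sin(2 * np.pi * true[1] * t) \
        + rng.normal(0, 0.02, t.size)

    def residuals(p):
        a, f, d = p
        return a * np.exp(-d * t) * np.sin(2 * np.pi * f * t) - y

    # LM needs a reasonable initial guess, since the frequency term is non-convex.
    fit = least_squares(residuals, x0=[1.0, 24.0, 1.0], method="lm")
    print(fit.x)   # should recover roughly (0.8, 25.0, 1.2)
    ```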

  1. A Hybrid Acoustic and Pronunciation Model Adaptation Approach for Non-native Speech Recognition

    NASA Astrophysics Data System (ADS)

    Oh, Yoo Rhee; Kim, Hong Kook

    In this paper, we propose a hybrid model adaptation approach in which pronunciation and acoustic models are adapted by incorporating the pronunciation and acoustic variabilities of non-native speech in order to improve the performance of non-native automatic speech recognition (ASR). Specifically, the proposed hybrid model adaptation can be performed at either the state-tying or triphone-modeling level, depending on the level at which acoustic model adaptation is performed. In both methods, we first analyze the pronunciation variant rules of non-native speakers and then classify each rule as either a pronunciation variant or an acoustic variant. The state-tying level hybrid method then adapts pronunciation models and acoustic models by accommodating the pronunciation variants in the pronunciation dictionary and by clustering the states of triphone acoustic models using the acoustic variants, respectively. On the other hand, the triphone-modeling level hybrid method initially adapts pronunciation models in the same way as in the state-tying level hybrid method; however, for the acoustic model adaptation, the triphone acoustic models are then re-estimated based on the adapted pronunciation models and the states of the re-estimated triphone acoustic models are clustered using the acoustic variants. The Korean-spoken English speech recognition experiments show that ASR systems employing the state-tying and triphone-modeling level adaptation methods reduce the average word error rate (WER) for non-native speech by a relative 17.1% and 22.1%, respectively, when compared to a baseline ASR system.

  2. Hybrid ODE/SSA methods and the cell cycle model

    NASA Astrophysics Data System (ADS)

    Wang, S.; Chen, M.; Cao, Y.

    2017-07-01

    Stochastic effects in cellular systems have been an important topic in systems biology, and stochastic modeling and simulation methods are important tools to study such effects. Given the low efficiency of stochastic simulation algorithms, the hybrid method, which combines an ordinary differential equation (ODE) system with a stochastic chemically reacting system, shows unique advantages in the modeling and simulation of biochemical systems. The efficiency of the hybrid method is usually limited by reactions in the stochastic subsystem, which are modeled and simulated using Gillespie's framework and frequently interrupt the integration of the ODE subsystem. In this paper we develop an efficient implementation approach for the hybrid method coupled with traditional ODE solvers. We also compare the efficiency of hybrid methods with three widely used ODE solvers: RADAU5, DASSL, and DLSODAR. Numerical experiments with three biochemical models are presented, and the performances of the three ODE solvers are discussed in detail.
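
    A minimal sketch of the stochastic half of such a hybrid scheme: Gillespie's direct method on a toy birth-death species. In the hybrid setting described above, these jumps would interleave with an ODE integrator for the deterministic subsystem; that coupling is omitted here.

    ```python
    # Gillespie's direct method for a birth-death process.
    import numpy as np

    def ssa_birth_death(x0=10, k_birth=5.0, k_death=0.3, t_end=20.0, seed=4):
        rng = np.random.default_rng(seed)
        t, x, traj = 0.0, x0, [(0.0, x0)]
        while t < t_end:
            a = np.array([k_birth, k_death * x])    # reaction propensities
            a0 = a.sum()
            if a0 == 0.0:
                break
            t += rng.exponential(1.0 / a0)          # time to the next reaction
            x += 1 if rng.random() < a[0] / a0 else -1
            traj.append((t, x))
        return traj

    print(ssa_birth_death()[-1])    # state hovers near k_birth / k_death
    ```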

  3. A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2015-01-01

    A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters, needing adjustment by the analyst, are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model, then applied to flight data from the subscale T-2 jet-engine transport aircraft. Modeling results in different levels of turbulence were compared with results from time-domain output-error and frequency-domain equation-error methods to demonstrate the effectiveness of the approach.

  4. Atomistic insight into the catalytic mechanism of glycosyltransferases by combined quantum mechanics/molecular mechanics (QM/MM) methods.

    PubMed

    Tvaroška, Igor

    2015-02-11

    Glycosyltransferases catalyze the formation of glycosidic bonds by assisting the transfer of a sugar residue from donors to specific acceptor molecules. Although structural and kinetic data have provided insight into mechanistic strategies employed by these enzymes, molecular modeling studies are essential for the understanding of glycosyltransferase catalyzed reactions at the atomistic level. For such modeling, combined quantum mechanics/molecular mechanics (QM/MM) methods have emerged as crucial. These methods allow the modeling of enzymatic reactions by using quantum mechanical methods for the calculation of the electronic structure of the active site models and treating the remaining enzyme environment by faster molecular mechanics methods. Herein, the application of QM/MM methods to glycosyltransferase catalyzed reactions is reviewed, and the insight from modeling of glycosyl transfer into the mechanisms and transition states structures of both inverting and retaining glycosyltransferases are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 1. Theory

    USGS Publications Warehouse

    Yen, Chung-Cheng; Guymon, Gary L.

    1990-01-01

    An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which only requires prior knowledge of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is only generally valid for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method provided the number of uncertain variables is less than eight.

  6. An Efficient Deterministic-Probabilistic Approach to Modeling Regional Groundwater Flow: 1. Theory

    NASA Astrophysics Data System (ADS)

    Yen, Chung-Cheng; Guymon, Gary L.

    1990-07-01

    An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which only requires prior knowledge of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is only generally valid for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method provided the number of uncertain variables is less than eight.
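
    Both records above describe the same two-point (Rosenblueth-type) estimate; a minimal sketch follows, assuming symmetric, uncorrelated parameters: the model is evaluated at the mean plus or minus one standard deviation of each uncertain variable (2^n runs) and output statistics are taken over those runs. The toy "head" model is hypothetical.

    ```python
    # Two-point estimate: 2^n model runs instead of thousands of Monte Carlo draws.
    import itertools
    import numpy as np

    def two_point_estimate(model, means, cvs):
        sigmas = np.asarray(means) * np.asarray(cvs)   # std from coeff. of variation
        outs = [model(np.asarray(means) + np.array(signs) * sigmas)
                for signs in itertools.product([-1.0, 1.0], repeat=len(means))]
        outs = np.asarray(outs)
        return outs.mean(), outs.std()

    # Toy "head" model: h = recharge / (storage * conductivity), for illustration.
    model = lambda p: p[0] / (p[1] * p[2])
    mean_h, std_h = two_point_estimate(model, means=[1.0, 0.2, 5.0],
                                       cvs=[0.1, 0.1, 0.1])
    print(mean_h, std_h)
    ```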

  7. A Two-Stage Estimation Method for Random Coefficient Differential Equation Models with Application to Longitudinal HIV Dynamic Data.

    PubMed

    Fang, Yun; Wu, Hulin; Zhu, Li-Xing

    2011-07-01

    We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs, and is computationally efficient although it does pay a price with the loss of some estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.

  8. The development of advanced manufacturing systems

    NASA Astrophysics Data System (ADS)

    Doumeingts, Guy; Vallespir, Bruno; Darricau, Didier; Roboam, Michel

    Various methods for the design of advanced manufacturing systems (AMSs) are reviewed. The specifications for AMSs and problems inherent in their development are first discussed. Three models, the Computer Aided Manufacturing-International model, the National Bureau of Standards model, and the GRAI model, are considered in detail. Hierarchical modeling tools such as structured analysis and design techniques, Petri nets, and the ICAM definition (IDEF) method are used in the development of integrated manufacturing models. Finally, the GRAI method is demonstrated in the design of specifications for the production management system of the Snecma AMS.

  9. Dynamical downscaling inter-comparison for high resolution climate reconstruction

    NASA Astrophysics Data System (ADS)

    Ferreira, J.; Rocha, A.; Castanheira, J. M.; Carvalho, A. C.

    2012-04-01

    In the scope of the project "High-resolution Rainfall EroSivity analysis and fORecasTing - RESORT", an evaluation of various methods of dynamical downscaling is presented. The methods evaluated range from the classic method of nesting a regional model in the results of a global model, in this case the ECMWF reanalysis, to more recently proposed methods, which consist in using Newtonian relaxation to nudge the results of the regional model toward the reanalysis. The method with the best results involves using a variational data assimilation system to incorporate observational data into the results from the regional model. The climatology of a 5-year simulation using this method is tested against observations over mainland Portugal and the ocean in the area of the Portuguese Continental Shelf, which shows that the method developed is suitable for the reconstruction of high-resolution climate over continental Portugal.

  10. Global/local stress analysis of composite panels

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.; Knight, Norman F., Jr.

    1989-01-01

    A method for performing a global/local stress analysis is described, and its capabilities are demonstrated. The method employs spline interpolation functions which satisfy the linear plate bending equation to determine displacements and rotations from a global model which are used as boundary conditions for the local model. Then, the local model is analyzed independent of the global model of the structure. This approach can be used to determine local, detailed stress states for specific structural regions using independent, refined local models which exploit information from less-refined global models. The method presented is not restricted to having a priori knowledge of the location of the regions requiring local detailed stress analysis. This approach also reduces the computational effort necessary to obtain the detailed stress state. Criteria for applying the method are developed. The effectiveness of the method is demonstrated using a classical stress concentration problem and a graphite-epoxy blade-stiffened panel with a discontinuous stiffener.

  11. On the Numerical Formulation of Parametric Linear Fractional Transformation (LFT) Uncertainty Models for Multivariate Matrix Polynomial Problems

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine M.

    1998-01-01

    Robust control system analysis and design is based on an uncertainty description, called a linear fractional transformation (LFT), which separates the uncertain (or varying) part of the system from the nominal system. These models are also useful in the design of gain-scheduled control systems based on Linear Parameter Varying (LPV) methods. Low-order LFT models are difficult to form for problems involving nonlinear parameter variations. This paper presents a numerical computational method for constructing an LFT model for a given LPV model. The method is developed for multivariate polynomial problems, and uses simple matrix computations to obtain an exact low-order LFT representation of the given LPV system without the use of model reduction. Although the method is developed for multivariate polynomial problems, multivariate rational problems can also be solved using this method by reformulating the rational problem into a polynomial form.

  12. Global/local stress analysis of composite structures. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.

    1989-01-01

    A method for performing a global/local stress analysis is described and its capabilities are demonstrated. The method employs spline interpolation functions which satisfy the linear plate bending equation to determine displacements and rotations from a global model which are used as boundary conditions for the local model. Then, the local model is analyzed independent of the global model of the structure. This approach can be used to determine local, detailed stress states for specific structural regions using independent, refined local models which exploit information from less-refined global models. The method presented is not restricted to having a priori knowledge of the location of the regions requiring local detailed stress analysis. This approach also reduces the computational effort necessary to obtain the detailed stress state. Criteria for applying the method are developed. The effectiveness of the method is demonstrated using a classical stress concentration problem and a graphite-epoxy blade-stiffened panel with a discontinuous stiffener.

  13. Terminology model discovery using natural language processing and visualization techniques.

    PubMed

    Zhou, Li; Tao, Ying; Cimino, James J; Chen, Elizabeth S; Liu, Hongfang; Lussier, Yves A; Hripcsak, George; Friedman, Carol

    2006-12-01

    Medical terminologies are important for unambiguous encoding and exchange of clinical information. The traditional manual method of developing terminology models is time-consuming and limited in the number of phrases that a human developer can examine. In this paper, we present an automated method for developing medical terminology models based on natural language processing (NLP) and information visualization techniques. Surgical pathology reports were selected as the testing corpus for developing a pathology procedure terminology model. The use of a general NLP processor for the medical domain, MedLEE, provides an automated method for acquiring semantic structures from a free text corpus and sheds light on a new high-throughput method of medical terminology model development. The use of an information visualization technique supports the summarization and visualization of the large quantity of semantic structures generated from medical documents. We believe that a general method based on NLP and information visualization will facilitate the modeling of medical terminologies.

  14. Visual attention based bag-of-words model for image classification

    NASA Astrophysics Data System (ADS)

    Wang, Qiwei; Wan, Shouhong; Yue, Lihua; Wang, Che

    2014-04-01

    Bag-of-words is a classical method for image classification. The core problems are how to count the frequency of the visual words and which visual words to select. In this paper, we propose a visual attention based bag-of-words model (VABOW model) for the image classification task. The VABOW model utilizes a visual attention method to generate a saliency map, and uses the saliency map as a weighting matrix to guide the counting of visual word frequencies. In addition, the VABOW model combines shape, color and texture cues and uses an L1-regularized logistic regression method to select the most relevant and most efficient features. We compare our approach with the traditional bag-of-words based method on two datasets, and the results show that our VABOW model outperforms the state-of-the-art method for image classification.

  15. Parameter estimation methods for gene circuit modeling from time-series mRNA data: a comparative study.

    PubMed

    Fan, Ming; Kuwahara, Hiroyuki; Wang, Xiaolei; Wang, Suojin; Gao, Xin

    2015-11-01

    Parameter estimation is a challenging computational problem in the reverse engineering of biological systems. Because advances in biotechnology have facilitated wide availability of time-series gene expression data, systematic parameter estimation of gene circuit models from such time-series mRNA data has become an important method for quantitatively dissecting the regulation of gene expression. By focusing on the modeling of gene circuits, we examine here the performance of three types of state-of-the-art parameter estimation methods: population-based methods, online methods and model-decomposition-based methods. Our results show that certain population-based methods are able to generate high-quality parameter solutions. The performance of these methods, however, is heavily dependent on the size of the parameter search space, and their computational requirements substantially increase as the size of the search space increases. In comparison, online methods and model-decomposition-based methods are computationally faster alternatives and are less dependent on the size of the search space. Among other things, our results show that a hybrid approach that augments computationally fast methods with local search as a subsequent refinement procedure can substantially increase the quality of their parameter estimates, to a level on par with the best solutions obtained from the population-based methods, while maintaining high computational speed. These results suggest that such hybrid methods can be a promising alternative to the more commonly used population-based methods for parameter estimation of gene circuit models when limited prior knowledge about the underlying regulatory mechanisms makes the parameter search space very large. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  16. A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress

    PubMed Central

    2018-01-01

    Financial distress prediction is an important and challenging research topic in the financial field. Currently, there are many methods for predicting firm bankruptcy and financial crisis, including artificial intelligence and traditional statistical methods, and past studies have shown that artificial intelligence methods achieve better prediction results than traditional statistical methods. Financial statements are quarterly reports; hence, the financial crisis of companies is seasonal time-series data, and the attribute data affecting the financial distress of companies is nonlinear and nonstationary time-series data with fluctuations. Therefore, this study employed a nonlinear attribute selection method to build a nonlinear financial distress prediction model: that is, this paper proposed a novel seasonal time-series gene expression programming model for predicting the financial distress of companies. The proposed model has several advantages, including the following: (i) the proposed model differs from previous models, which lacked the concept of time series; (ii) the proposed integrated attribute selection method can find the core attributes and reduce high-dimensional data; and (iii) the proposed model can generate the rules and mathematical formulas of financial distress, providing references to investors and decision makers. The results show that the proposed method is better than the listed classifiers under three criteria; hence, the proposed model has competitive advantages in predicting the financial distress of companies. PMID:29765399

  17. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples.

    PubMed

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-05

    Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because the measurements of the spectra may be performed on different instruments and the difference between the instruments must be corrected. For most calibration transfer methods, standard samples are necessary to construct the transfer model using the spectra of the samples measured on two instruments, named the master and slave instrument, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. This fact makes the coefficients of the linear models constructed from the spectra measured on different instruments similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary in the method, it may be more useful in practical applications. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress.

    PubMed

    Cheng, Ching-Hsue; Chan, Chia-Pang; Yang, Jun-He

    2018-01-01

    Financial distress prediction is an important and challenging research topic in the financial field. Currently, there are many methods for predicting firm bankruptcy and financial crisis, including artificial intelligence and traditional statistical methods, and past studies have shown that artificial intelligence methods achieve better prediction results than traditional statistical methods. Financial statements are quarterly reports; hence, the financial crisis of companies is seasonal time-series data, and the attribute data affecting the financial distress of companies is nonlinear and nonstationary time-series data with fluctuations. Therefore, this study employed a nonlinear attribute selection method to build a nonlinear financial distress prediction model: that is, this paper proposed a novel seasonal time-series gene expression programming model for predicting the financial distress of companies. The proposed model has several advantages, including the following: (i) the proposed model differs from previous models, which lacked the concept of time series; (ii) the proposed integrated attribute selection method can find the core attributes and reduce high-dimensional data; and (iii) the proposed model can generate the rules and mathematical formulas of financial distress, providing references to investors and decision makers. The results show that the proposed method is better than the listed classifiers under three criteria; hence, the proposed model has competitive advantages in predicting the financial distress of companies.

  19. Complexity reduction of biochemical rate expressions.

    PubMed

    Schmidt, Henning; Madsen, Mads F; Danø, Sune; Cedersund, Gunnar

    2008-03-15

    The current trend in dynamical modelling of biochemical systems is to construct more and more mechanistically detailed and thus complex models. The complexity is reflected in the number of dynamic state variables and parameters, as well as in the complexity of the kinetic rate expressions. However, a greater level of complexity, or level of detail, does not necessarily imply better models, or a better understanding of the underlying processes. Data often does not contain enough information to discriminate between different model hypotheses, and such overparameterization makes it hard to establish the validity of the various parts of the model. Consequently, there is an increasing demand for model reduction methods. We present a new reduction method that reduces complex rational rate expressions, such as those often used to describe enzymatic reactions. The method is a novel term-based identifiability analysis, which is easy to use and allows for user-specified reductions of individual rate expressions in complete models. The method is one of the first methods to meet the classical engineering objective of improved parameter identifiability without losing the systems biology demand of preserved biochemical interpretation. The method has been implemented in the Systems Biology Toolbox 2 for MATLAB, which is freely available from http://www.sbtoolbox2.org. The Supplementary Material contains scripts that show how to use it by applying the method to the example models, discussed in this article.

  20. The construction of a two-dimensional reproducing kernel function and its application in a biomedical model.

    PubMed

    Guo, Qi; Shen, Shu-Ting

    2016-04-29

    There are two major classes of cardiac tissue models: the ionic model and the FitzHugh-Nagumo model. During computer simulation, each model entails solving a system of complex ordinary differential equations and a partial differential equation with no-flux boundary conditions. The reproducing kernel method has significant applications in solving partial differential equations. The derivative of the reproducing kernel function is a wavelet function, which has local properties and sensitivity to singularities; studying the application of the reproducing kernel is therefore advantageous. The objective is to apply new mathematical theory to the numerical solution of the ventricular muscle model so as to improve its precision compared with current methods. A two-dimensional reproducing kernel function is constructed in a reproducing kernel space and applied to compute the solution of the two-dimensional cardiac tissue model, using the difference method through time and the reproducing kernel method through space. Compared with other methods, this method holds several advantages, such as high accuracy in computing solutions, insensitivity to different time steps and a slow propagation speed of error. It is suitable for disorderly scattered node systems without meshing, and can arbitrarily change the location and density of the solution on different time layers. The reproducing kernel method has higher solution accuracy and stability in the solutions of the two-dimensional cardiac tissue model.

  1. Numerical implementation, verification and validation of two-phase flow four-equation drift flux model with Jacobian-free Newton–Krylov method

    DOE PAGES

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-08-24

    This study presents a numerical investigation on using the Jacobian-free Newton–Krylov (JFNK) method to solve the two-phase flow four-equation drift flux model with realistic constitutive correlations (‘closure models’). The drift flux model is based on the work of Ishii and his collaborators. Additional constitutive correlations for vertical channel flow, such as two-phase flow pressure drop, flow regime map, wall boiling and interfacial heat transfer models, were taken from the RELAP5-3D Code Manual and included to complete the model. The staggered grid finite volume method and the fully implicit backward Euler method were used as the spatial discretization and time integration schemes, respectively. The Jacobian-free Newton–Krylov method shows no difficulty in solving the two-phase flow drift flux model with a discrete flow regime map. In addition to the Jacobian-free approach, the preconditioning matrix is obtained by using the default finite differencing method provided in the PETSc package, so the labor-intensive implementation of a complex analytical Jacobian matrix is avoided. Extensive and successful numerical verification and validation have been performed to prove the correct implementation of the models and methods. Code-to-code comparison with RELAP5-3D has further demonstrated the successful implementation of the drift flux model.
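
    The core JFNK idea, Newton iterations with Krylov linear solves and no explicit Jacobian, can be illustrated with SciPy's newton_krylov; the residual below is a toy stand-in, not the drift flux equations:

        import numpy as np
        from scipy.optimize import newton_krylov

        def residual(u):
            # Toy nonlinear residual F(u) = 0, standing in for a
            # discretised two-phase flow system.
            return u**3 - 1.0 + 0.1 * np.roll(u, 1)

        u0 = 0.5 * np.ones(50)                    # initial guess
        u = newton_krylov(residual, u0, method='lgmres', f_tol=1e-10)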

  2. Soft sensor modelling by time difference, recursive partial least squares and adaptive model updating

    NASA Astrophysics Data System (ADS)

    Fu, Y.; Yang, W.; Xu, O.; Zhou, L.; Wang, J.

    2017-04-01

    To handle time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time difference, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time difference values of the input and output variables are used as training samples to construct the model, which reduces the effect of nonlinear characteristics on modelling accuracy while retaining the advantages of the recursive PLS algorithm. To avoid an excessively high model updating frequency, a confidence value is introduced and updated adaptively according to the results of model performance assessment; the model is updated only when the confidence value is updated. The proposed method has been used to predict the 4-carboxy-benz-aldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method reduces computation effectively, improves prediction accuracy by making use of process information and reflects the process characteristics accurately.
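
    A compact sketch of the time-difference step with a PLS core, using scikit-learn; the arrays X and y are placeholders for the process and quality variables:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        X = np.random.rand(200, 5)        # placeholder process variables
        y = np.random.rand(200)           # placeholder quality variable

        # Train on first differences to suppress slow drift (time-variance).
        dX, dy = np.diff(X, axis=0), np.diff(y)
        pls = PLSRegression(n_components=3).fit(dX, dy)

        # Predict the output difference and add it to the last known output.
        dy_new = pls.predict(dX[-1:])
        y_pred = y[-1] + dy_new.ravel()[0]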

  3. Numerical modeling of local scour around hydraulic structure in sandy beds by dynamic mesh method

    NASA Astrophysics Data System (ADS)

    Fan, Fei; Liang, Bingchen; Bai, Yuchuan; Zhu, Zhixia; Zhu, Yanjun

    2017-10-01

    Local scour, a non-negligible factor in hydraulic engineering, endangers the safety of hydraulic structures. In this work, a numerical model for simulating local scour was constructed based on the open-source computational fluid dynamics code OpenFOAM. We consider both bedload and suspended-load sediment transport in the scour model and adopt the dynamic mesh method to simulate the evolution of the bed elevation. We use the finite area method to project data between the three-dimensional flow model and the two-dimensional (2D) scour model. We also improved the 2D sand-slide method and added it to the scour model to correct the bed bathymetry when the bed slope angle exceeds the angle of repose. To validate the scour model, we conducted three experiments and compared their results with those of the developed model. The validation results show that the developed model can reliably simulate local scour.

  4. Particle-Size-Grouping Model of Precipitation Kinetics in Microalloyed Steels

    NASA Astrophysics Data System (ADS)

    Xu, Kun; Thomas, Brian G.

    2012-03-01

    The formation, growth, and size distribution of precipitates greatly affect the microstructure and properties of microalloyed steels. Computational particle-size-grouping (PSG) kinetic models based on population balances are developed to simulate precipitate particle growth resulting from collision and diffusion mechanisms. First, the generalized PSG method for collision is explained clearly and verified. Then, a new PSG method is proposed to model diffusion-controlled precipitate nucleation, growth, and coarsening with complete mass conservation and no fitting parameters. Compared with the original population-balance models, this PSG method saves significant computation while preserving enough accuracy to model a realistic range of particle sizes. Finally, the new PSG method is combined with an equilibrium phase fraction model for plain carbon steels and applied to simulate the precipitated fraction of aluminum nitride and the size distribution of niobium carbide during isothermal aging. Good matches are found with experimental measurements, suggesting that the new PSG method offers a promising framework for the future development of realistic precipitation models.
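
    For orientation, the ungrouped collision population balance that PSG methods accelerate can be written down directly; a constant collision kernel BETA and a small size cap KMAX are assumed here, so this is a baseline sketch rather than the paper's grouped scheme:

        import numpy as np
        from scipy.integrate import solve_ivp

        KMAX, BETA = 50, 1e-3    # largest tracked size, collision frequency

        def rhs(t, n):
            # n[k] = number density of particles containing k+1 monomers.
            dn = np.zeros_like(n)
            for k in range(KMAX):
                # Birth: collisions of sizes i+1 and j+1 with (i+1)+(j+1) = k+1.
                birth = 0.5 * sum(n[i] * n[k - 1 - i] for i in range(k))
                # Death: collisions of size k+1 with any particle.
                death = n[k] * n.sum()
                dn[k] = BETA * (birth - death)
            return dn    # note: sizes above KMAX are simply truncated

        n0 = np.zeros(KMAX); n0[0] = 1.0     # start with monomers only
        sol = solve_ivp(rhs, (0.0, 100.0), n0, method='LSODA')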

  5. Design and Analysis Tools for Supersonic Inlets

    NASA Technical Reports Server (NTRS)

    Slater, John W.; Folk, Thomas C.

    2009-01-01

    Computational tools are being developed for the design and analysis of supersonic inlets. The objective is to update existing tools and provide design and low-order aerodynamic analysis capability for advanced inlet concepts. The Inlet Tools effort includes creating an electronic database of inlet design information, a document describing inlet design and analysis methods, a geometry model for describing the shape of inlets, and computer tools that implement the geometry model and methods. The geometry model has a set of basic inlet shapes that include pitot, two-dimensional, axisymmetric, and stream-traced inlet shapes. The inlet model divides the inlet flow field into parts that facilitate the design and analysis methods. The inlet geometry model constructs the inlet surfaces through the generation and transformation of planar entities based on key inlet design factors. Future efforts will focus on developing the inlet geometry model, the inlet design and analysis methods, and a Fortran 95 code to implement the model and methods. Other computational platforms, such as Java, will also be explored.

  6. A 3D model retrieval approach based on Bayesian networks lightfield descriptor

    NASA Astrophysics Data System (ADS)

    Xiao, Qinhan; Li, Yanjun

    2009-12-01

    A new 3D model retrieval methodology is proposed by exploiting a novel Bayesian networks lightfield descriptor (BNLD). There are two key novelties in our approach: (1) a BN-based method for building the lightfield descriptor; and (2) a 3D model retrieval scheme based on the proposed BNLD. To overcome the disadvantages of existing 3D model retrieval methods, we explore BNs for building a new lightfield descriptor. Firstly, the 3D model is placed into a lightfield and about 300 binary views are obtained along a sphere; Fourier descriptors and Zernike moment descriptors are then calculated from the binary views, and the shape feature sequence is learned into a BN model by a BN learning algorithm. Secondly, we propose a new 3D model retrieval method that calculates the Kullback-Leibler divergence (KLD) between BNLDs. Benefiting from statistical learning, our BNLD is robust to noise compared with existing methods. A comparison between our method and the lightfield descriptor-based approach is conducted to demonstrate the effectiveness of the proposed methodology.
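
    Retrieval then reduces to comparing descriptor distributions; a minimal discrete KLD, assuming the descriptors have been normalised to probability vectors:

        import numpy as np

        def kld(p, q, eps=1e-12):
            # KLD(P||Q) = sum_i p_i * log(p_i / q_i); eps guards against
            # zero entries before renormalisation.
            p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
            p, q = p / p.sum(), q / q.sum()
            return float(np.sum(p * np.log(p / q)))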

  7. [Application of three risk assessment models in occupational health risk assessment of dimethylformamide].

    PubMed

    Wu, Z J; Xu, B; Jiang, H; Zheng, M; Zhang, M; Zhao, W J; Cheng, J

    2016-08-20

    Objective: To investigate the application of the United States Environmental Protection Agency (EPA) inhalation risk assessment model, the Singapore semi-quantitative risk assessment model, and the occupational hazards risk assessment index method to occupational health risk in enterprises using dimethylformamide (DMF) in an area of Jiangsu, China, and to put forward related risk control measures. Methods: Industries involving DMF exposure in Jiangsu province were chosen as the evaluation objects in 2013 and the three risk assessment models were applied. EPA inhalation risk assessment model: HQ = EC/RfC; Singapore semi-quantitative risk assessment model: Risk = (HR × ER)^(1/2); occupational hazards risk assessment index = 2^(health effect level) × 2^(exposure ratio) × operation condition level. Results: The hazard quotients (HQ > 1) from the EPA inhalation risk assessment model suggested that all workshops (dry method, wet method and printing) and work positions (pasting, burdening, unreeling, rolling, assisting) were high risk. The Singapore semi-quantitative risk assessment model indicated workshop risk levels of 3.5 (high), 3.5 (high) and 2.8 (general) for the dry method, wet method and printing workshops, and position risk levels of 4 (high), 4 (high), 2.8 (general), 2.8 (general) and 2.8 (general) for pasting, burdening, unreeling, rolling and assisting. The occupational hazards risk assessment index method gave position risk indices of 42 (high), 33 (high), 23 (middle), 21 (middle) and 22 (middle) for pasting, burdening, unreeling, rolling and assisting. The results of the Singapore semi-quantitative risk assessment model and the occupational hazards risk assessment index method were similar, while the EPA inhalation risk assessment model rated all workshops and positions as high risk. Conclusion: The occupational hazards risk assessment index method fully considers health effects, exposure and operating conditions, and can comprehensively and accurately evaluate the occupational health risk caused by DMF.
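
    The three indices are simple to compute once the ratings are fixed; the input values below are hypothetical, chosen only to illustrate the formulas:

        import math

        EC, RfC = 12.0, 3.0          # exposure and reference conc. (mg/m3)
        HQ = EC / RfC                # EPA model: HQ > 1 flags high risk

        HR, ER = 4, 3                # hazard and exposure ratings
        risk = math.sqrt(HR * ER)    # Singapore semi-quantitative model

        health_level, exposure_ratio, op_level = 3, 2, 4
        index = 2**health_level * 2**exposure_ratio * op_level
        # occupational hazards risk assessment index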

  8. Modeling influenza-like illnesses through composite compartmental models

    NASA Astrophysics Data System (ADS)

    Levy, Nir; Michael, Iv; Yom-Tov, Elad

    2018-03-01

    Epidemiological models for the spread of pathogens in a population are usually only able to describe a single pathogen. This makes their application unrealistic in cases where multiple pathogens with similar symptoms are spreading concurrently within the same population. Here we describe a method which makes possible the application of multiple single-strain models under minimal conditions. As such, our method provides a bridge between theoretical models of epidemiology and data-driven approaches for modeling influenza and other similar viruses. Our model extends the Susceptible-Infected-Recovered (SIR) model to higher dimensions, allowing the modeling of a population infected by multiple viruses. We further provide a method, based on an overcomplete dictionary of feasible realizations of SIR solutions, to blindly partition the time series representing the number of infected people in a population into individual components, each representing the effect of a single pathogen. We demonstrate the applicability of our proposed method on five years of seasonal influenza-like illness (ILI) rates, estimated from Twitter data. Our method describes, on average, 44% of the variance in the ILI time series. The individual infectious components derived from our model are matched to known viral profiles in the population, and we demonstrate that they agree with independently collected epidemiological data. We further show that the basic reproductive numbers (R0) of the matched components are in the range known for these pathogens. Our results suggest that the proposed method can be applied to other pathogens and geographies, providing a simple method for estimating the parameters of epidemics in a population.
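
    The single-pathogen SIR building block that the composite model stacks into higher dimensions can be integrated with SciPy; beta and gamma below are illustrative values, not fitted ones:

        import numpy as np
        from scipy.integrate import odeint

        def sir(y, t, beta, gamma):
            # Classic SIR dynamics with fractions S + I + R = 1.
            S, I, R = y
            return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

        t = np.linspace(0, 120, 500)                      # days
        S, I, R = odeint(sir, [0.99, 0.01, 0.0], t,
                         args=(0.35, 0.1)).T              # R0 = beta/gamma = 3.5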

  9. Improved model quality assessment using ProQ2.

    PubMed

    Ray, Arjun; Lindahl, Erik; Wallner, Björn

    2012-09-10

    Employing methods to assess the quality of modeled protein structures is now standard practice in bioinformatics. In a broad sense, the techniques can be divided into methods relying on consensus prediction on the one hand, and single-model methods on the other. Consensus methods frequently perform very well when there is a clear consensus, but this is not always the case. In particular, they frequently fail to select the best possible model in the hard cases (lacking consensus) or in the easy cases where models are very similar. In contrast, single-model methods do not suffer from these drawbacks and could potentially be applied to any protein of interest to assess quality or as a scoring function for sampling-based refinement. Here, we present a new single-model method, ProQ2, based on ideas from its predecessor, ProQ. ProQ2 is a model quality assessment algorithm that uses support vector machines to predict local as well as global quality of protein models. Improved performance is obtained by combining previously used features with updated structural and predicted features. The most important contribution can be attributed to the profile weighting of the residue-specific features and the use of features averaged over the whole model, even though the prediction is still local. ProQ2 is significantly better than its predecessors at detecting high-quality models, improving the sum of Z-scores for the selected first-ranked models by 20% and 32% compared to the second-best single-model method in CASP8 and CASP9, respectively. The absolute quality assessment of the models at both the local and global level is also improved. The Pearson correlation between the correct and predicted local score improves from 0.59 to 0.70 on CASP8 and from 0.62 to 0.68 on CASP9; for the global score against the correct GDT_TS, it improves from 0.75 to 0.80 and from 0.77 to 0.80, again compared to the second-best single-model methods in CASP8 and CASP9, respectively. ProQ2 is available at http://proq2.wallnerlab.org.

  10. A global/local analysis method for treating details in structural design

    NASA Technical Reports Server (NTRS)

    Aminpour, Mohammad A.; Mccleary, Susan L.; Ransom, Jonathan B.

    1993-01-01

    A method for analyzing the global/local behavior of plate and shell structures is described. In this approach, a detailed finite element model of the local region is incorporated within a coarser global finite element model. The local model need not be nodally compatible (i.e., need not have a one-to-one nodal correspondence) with the global model at their common boundary; therefore, the two models may be constructed independently. The nodal incompatibility of the models is accounted for by introducing appropriate constraint conditions into the potential energy in a hybrid variational formulation. The primary advantage of this method is that the need for transition modeling between the global and local models is eliminated. Eliminating transition modeling has two benefits. First, modeling effort is reduced, since tedious and complex transitioning need not be performed. Second, errors due to mesh distortion, often unavoidable in mesh transitioning, are minimized by avoiding distorted elements beyond what is needed to represent the geometry of the component. The method is applied to a plate loaded in tension and transverse bending. The plate has a central hole, and various hole sizes and shapes are studied. The method is also applied to a composite laminated fuselage panel with a crack emanating from a window in the panel. While this method is applied herein to global/local problems, it is also applicable to the coupled analysis of independently modeled components as well as to adaptive refinement.

  11. FHSA-SED: Two-Locus Model Detection for Genome-Wide Association Study with Harmony Search Algorithm

    PubMed Central

    Tuo, Shouheng; Zhang, Junying; Yuan, Xiguo; Zhang, Yuanyuan; Liu, Zhaowen

    2016-01-01

    Motivation: The two-locus model is a typical significant disease model to be identified in genome-wide association studies (GWAS). Due to the intensive computational burden and the diversity of disease models, existing methods suffer from low detection power, high computation cost, and preference for certain types of disease models. Method: In this study, two scoring functions (the Bayesian-network-based K2-score and the Gini-score) are used to characterize a pair of SNP loci as a candidate model; the two criteria are adopted simultaneously to improve identification power and to tackle the preference problem. The harmony search algorithm (HSA) is improved to quickly find the most likely candidate models among all two-locus models, with a local search algorithm using a two-dimensional tabu table to avoid repeatedly evaluating disease models that have strong marginal effects. Finally, the G-test statistic is used to further test the candidate models. Results: We investigate our method, named FHSA-SED, on 82 simulated datasets and a real AMD dataset, and compare it with two typical methods (MACOED and CSE) that have been developed recently based on swarm intelligence search algorithms. The simulation experiments indicate that our method outperforms the two compared algorithms in terms of detection power, computation time, evaluation times, sensitivity (TPR), specificity (SPC), positive predictive value (PPV) and accuracy (ACC). Our method identified two SNPs (rs3775652 and rs10511467) that may also be associated with disease in the AMD dataset. PMID:27014873

  12. An Integrated Fuselage-Sting Balance for a Sonic-Boom Wind-Tunnel Model

    NASA Technical Reports Server (NTRS)

    Mack, Robert J.

    2004-01-01

    Measured and predicted pressure signatures from a lifting wind-tunnel model can be compared when the lift on the model is accurately known. The model's lift can be set by bending the support sting to a desired angle of attack. This method is simple in practice, but difficult to accurately apply. A second method is to build a normal force/pitching moment balance into the aft end of the sting, and use an angle-of-attack mechanism to set model attitude. In this report, a method for designing a sting/balance into the aft fuselage/sting of a sonic-boom model is described. A computer code is given, and a sample sting design is outlined to demonstrate the method.

  13. A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

    PubMed

    Mattfeldt, Torsten

    2011-04-01

    Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
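
    As a worked example of resampling with replacement, a percentile bootstrap confidence interval for a sample mean takes only a few lines:

        import numpy as np

        rng = np.random.default_rng(0)
        data = rng.exponential(scale=2.0, size=100)   # example sample

        # Resample with replacement and recompute the statistic each time.
        boot = np.array([rng.choice(data, size=data.size, replace=True).mean()
                         for _ in range(10_000)])
        ci_low, ci_high = np.percentile(boot, [2.5, 97.5])   # 95% CI for the mean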

  14. MetaMQAP: a meta-server for the quality assessment of protein models.

    PubMed

    Pawlowski, Marcin; Gajda, Michal J; Matlak, Ryszard; Bujnicki, Janusz M

    2008-09-29

    Computational models of protein structure are usually inaccurate and exhibit significant deviations from the true structure. The utility of models depends on the degree of these deviations. A number of predictive methods have been developed to discriminate between globally incorrect and approximately correct models. However, only a few methods predict the correctness of different parts of computational models. Several Model Quality Assessment Programs (MQAPs) have been developed to detect local inaccuracies in unrefined crystallographic models, but it is not known if they are useful for computational models, which usually exhibit different and much more severe errors. The ability to identify local errors in models was tested for eight MQAPs: VERIFY3D, PROSA, BALA, ANOLEA, PROVE, TUNE, REFINER and PROQRES, on 8251 models from the CASP-5 and CASP-6 experiments, by calculating the Spearman's rank correlation coefficients between per-residue scores of these methods and local deviations between C-alpha atoms in the models vs. experimental structures. As a reference, we calculated the correlation between the local deviations and trivial features that can be calculated for each residue directly from the models, i.e. solvent accessibility, depth in the structure, and the number of local and non-local neighbours. We found that absolute correlations of scores returned by the MQAPs and local deviations were poor for all methods. In addition, scores of PROQRES and several other MQAPs strongly correlate with 'trivial' features. Therefore, we developed MetaMQAP, a meta-predictor based on a multivariate regression model, which uses scores of the above-mentioned methods, but in which trivial parameters are controlled. MetaMQAP predicts the absolute deviation (in ångströms) of individual C-alpha atoms between the model and the unknown true structure as well as global deviations (expressed as root mean square deviation and GDT_TS scores). Local model accuracy predicted by MetaMQAP shows an impressive correlation coefficient of 0.7 with true deviations from native structures, a significant improvement over all constituent primary MQAP scores. The global MetaMQAP score is correlated with model GDT_TS at the level of 0.89. Finally, we compared our method with the MQAPs that scored best in the 7th edition of CASP, using CASP7 server models (not included in the MetaMQAP training set) as the test data. In our benchmark, MetaMQAP is outperformed only by PCONS6 and the method QA_556, methods that require comparison of multiple alternative models and score each of them depending on its similarity to the others. MetaMQAP is, however, the best among methods capable of evaluating single models. We implemented MetaMQAP as a web server available for free use by all academic users at the URL https://genesilico.pl/toolkit/

  15. Multi-fidelity stochastic collocation method for computation of statistical moments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Xueyu, E-mail: xueyu-zhu@uiowa.edu; Linebarger, Erin M., E-mail: aerinline@sci.utah.edu; Xiu, Dongbin, E-mail: xiu.16@osu.edu

    We present an efficient numerical algorithm to approximate the statistical moments of stochastic problems in the presence of models with different fidelities. The method extends a previously developed multi-fidelity approximation method. By combining the efficiency of low-fidelity models and the accuracy of high-fidelity models, our method exhibits fast convergence with a limited number of high-fidelity simulations. We establish an error bound for the method and present several numerical examples to demonstrate the efficiency and applicability of the multi-fidelity algorithm.

  16. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    PubMed

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome the disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.

  17. Model-based RSA of a femoral hip stem using surface and geometrical shape models.

    PubMed

    Kaptein, Bart L; Valstar, Edward R; Spoor, Cees W; Stoel, Berend C; Rozing, Piet M

    2006-07-01

    Roentgen stereophotogrammetry (RSA) is a highly accurate three-dimensional measuring technique for assessing micromotion of orthopaedic implants. A drawback is that markers have to be attached to the implant. Model-based techniques have been developed to avoid the use of specially marked implants. We compared two model-based RSA methods with standard marker-based RSA techniques. The first model-based RSA method used surface models, and the second used elementary geometrical shape (EGS) models. We used a commercially available stem to perform experiments with a phantom as well as a reanalysis of patient RSA radiographs. The data from the phantom experiment indicated that the accuracy and precision of the EGS model-based RSA method are equal to those of marker-based RSA. For model-based RSA using surface models, the accuracy is equal to that of marker-based RSA, but its precision is worse. We found no difference in accuracy and precision between the two model-based RSA techniques in clinical data. For this particular hip stem, EGS model-based RSA is a good alternative to marker-based RSA.

  18. Efficient calibration for imperfect computer models

    DOE PAGES

    Tuo, Rui; Wu, C. F. Jeff

    2015-12-01

    Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend this line of work to calibration problems with stochastic physical data. We propose a novel method, called L2 calibration, and show its semiparametric efficiency. The conventional ordinary least squares method is also studied; theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
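
    The L2-calibration idea in miniature: choose the parameter value minimising the L2 distance between the computer model output and the physical data. The model and data below are synthetic stand-ins, not the paper's examples:

        import numpy as np
        from scipy.optimize import minimize_scalar

        x = np.linspace(0, 1, 50)
        y_phys = np.sin(2 * x) + np.random.normal(0, 0.05, x.size)  # physical data

        def model(x, theta):
            # Deliberately imperfect computer model of the physical process.
            return theta * x

        def l2_loss(theta):
            return np.mean((y_phys - model(x, theta))**2)

        theta_hat = minimize_scalar(l2_loss, bounds=(0, 5), method='bounded').x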

  19. Note: Model-based identification method of a cable-driven wearable device for arm rehabilitation

    NASA Astrophysics Data System (ADS)

    Cui, Xiang; Chen, Weihai; Zhang, Jianbin; Wang, Jianhua

    2015-09-01

    Cable-driven exoskeletons use active cables to actuate the system and are worn by subjects to provide motion assistance. However, this kind of wearable device usually contains uncertain kinematic parameters. In this paper, a model-based identification method is proposed for a cable-driven arm exoskeleton to estimate these uncertainties. The identification method is based on the linearized error model derived from the kinematics of the exoskeleton. An experiment has been conducted to demonstrate the feasibility of the proposed model-based method in practical application.
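
    In outline, identification from a linearized error model stacks measured pose errors and solves for parameter corrections by least squares; the Jacobian J and the true parameter errors below are synthetic placeholders:

        import numpy as np

        # J: identification Jacobian (d pose error / d kinematic parameters),
        # here 10 measured poses x 3 coordinates, 6 uncertain parameters.
        J = np.random.rand(30, 6)
        true_dtheta = np.array([0.01, -0.02, 0.005, 0.0, 0.03, -0.01])
        e = J @ true_dtheta          # synthetic stacked pose errors

        # Least-squares estimate of the parameter corrections.
        dtheta, *_ = np.linalg.lstsq(J, e, rcond=None)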

  20. Evapotranspiration Calculations for an Alpine Marsh Meadow Site in Three-river Headwater Region

    NASA Astrophysics Data System (ADS)

    Zhou, B.; Xiao, H.

    2016-12-01

    Daily radiation and meteorological data were collected at an alpine marsh meadow site in the Three-River Headwater Region (THR). These data were used to assess radiation models, comparing the performance of the Zuo model with the model recommended by FAO56 P-M. Four methods, the FAO56 P-M, Priestley-Taylor, Hargreaves and Makkink methods, were applied to determine daily reference evapotranspiration (ETr) for the growing season, and empirical models for estimating daily actual evapotranspiration (ETa) were built relating ETr derived from the four methods to evapotranspiration derived from the Bowen ratio method at this alpine marsh meadow site. Comparison of the four empirical models by RMSE, MAE and AI showed that all of them estimate daily ETa reasonably well in this region, with the FAO56 P-M and Makkink empirical models performing better than the Priestley-Taylor and Hargreaves models.
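
    Of the four ETr methods, Hargreaves is the simplest to state; a sketch of the Hargreaves-Samani (1985) form, with extraterrestrial radiation Ra expressed in mm/day equivalents and illustrative temperatures:

        def hargreaves_et0(t_mean, t_max, t_min, ra):
            # Reference evapotranspiration in mm/day (Hargreaves-Samani, 1985):
            # ET0 = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin)
            return 0.0023 * ra * (t_mean + 17.8) * (t_max - t_min) ** 0.5

        et0 = hargreaves_et0(t_mean=12.0, t_max=20.0, t_min=4.0, ra=35.0)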
