Sample records for simplified analysis method

  1. Study on Collision of Ship Side Structure by Simplified Plastic Analysis Method

    NASA Astrophysics Data System (ADS)

    Sun, C. J.; Zhou, J. H.; Wu, W.

    2017-10-01

    During its lifetime, a ship may encounter collision or grounding and sustain permanent damage from these types of accidents. Crashworthiness assessment has been based on two main methods: simplified plastic analysis and numerical simulation. A simplified plastic analysis method is presented in this paper, and numerical simulations using the non-linear finite-element software LS-DYNA are conducted to validate it. The results show that the simplified plastic analysis is in good agreement with the finite-element simulation, which indicates that the method can quickly and accurately estimate the crashworthiness of the side structure during the collision process and can be used as a reliable risk-assessment method.

  2. A simplified method in comparison with comprehensive interaction incremental dynamic analysis to assess seismic performance of jacket-type offshore platforms

    NASA Astrophysics Data System (ADS)

    Zolfaghari, M. R.; Ajamy, A.; Asgarian, B.

    2015-12-01

    The primary goal of seismic reassessment procedures in oil-platform codes is to determine the reliability of a platform under extreme earthquake loading. Therefore, in this paper, a simplified method is proposed to assess the seismic performance of existing jacket-type offshore platforms (JTOPs) in regimes ranging from near-elastic response to global collapse. The simplified method exploits the close agreement between the static pushover (SPO) curve and the summarized comprehensive interaction incremental dynamic analysis (CI-IDA) curve of the platform. Although the CI-IDA method offers better understanding and better modelling of the phenomenon, it is a time-consuming and challenging task. To overcome these challenges, the simplified procedure, a fast and accurate approach, is introduced based on SPO analysis. An existing JTOP in the Persian Gulf is then presented to illustrate the procedure, and finally a comparison is made between the simplified method and the CI-IDA results. The simplified method is very informative and practical for current engineering purposes: it can predict seismic performance from elasticity to global dynamic instability with reasonable accuracy and little computational effort.

  3. A novel implementation of homodyne time interval analysis method for primary vibration calibration

    NASA Astrophysics Data System (ADS)

    Sun, Qiao; Zhou, Ling; Cai, Chenguang; Hu, Hongbo

    2011-12-01

    In this paper, the shortcomings of the conventional homodyne time interval analysis (TIA) method, and their causes, are described with respect to its software algorithm and hardware implementation; on this basis, a simplified TIA method is proposed with the help of virtual-instrument technology. Equipped with an ordinary Michelson interferometer and a dual-channel synchronous data-acquisition card, a primary vibration calibration system using the simplified method can measure the complex sensitivity of accelerometers accurately, meeting the uncertainty requirements laid down in the pertinent ISO standard. The validity and accuracy of the simplified TIA method are verified by simulation and by comparison experiments, and its performance is analyzed. Because of its simplified algorithm and low hardware requirements, this method is recommended for national metrology institutes of developing countries and for industrial primary vibration calibration laboratories.

  4. Development, verification, and application of a simplified method to estimate total-streambed scour at bridge sites in Illinois

    USGS Publications Warehouse

    Holmes, Robert R.; Dunn, Chad J.

    1996-01-01

    A simplified method to estimate total-streambed scour was developed for application to bridges in the State of Illinois. Scour envelope curves, developed as empirical relations between calculated total scour and bridge-site characteristics for 213 State highway bridges in Illinois, are used in the method to estimate the 500-year flood scour. These 213 bridges, geographically distributed throughout Illinois, had been previously evaluated for streambed scour with the application of conventional hydraulic and scour-analysis methods recommended by the Federal Highway Administration. The bridge characteristics necessary for application of the simplified bridge scour-analysis method can be obtained from an office review of bridge plans, examination of topographic maps, and a reconnaissance-level site inspection. The estimates computed with the simplified method generally resulted in a larger value of 500-year flood total-streambed scour than with the more detailed conventional method. The simplified method was successfully verified with a separate data set of 106 State highway bridges, which are geographically distributed throughout Illinois, and 15 county highway bridges.

  5. Multidisciplinary Optimization Methods for Aircraft Preliminary Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian

    1994-01-01

    This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture that involves a tight coupling between optimization and analysis is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.

  6. A simplified method for elastic-plastic-creep structural analysis

    NASA Technical Reports Server (NTRS)

    Kaufman, A.

    1984-01-01

    A simplified inelastic analysis computer program (ANSYPM) was developed for predicting the stress-strain history at the critical location of a thermomechanically cycled structure from an elastic solution. The program uses an iterative and incremental procedure to estimate the plastic strains from the material stress-strain properties and a plasticity hardening model. Creep effects are calculated on the basis of stress relaxation at constant strain, creep at constant stress or a combination of stress relaxation and creep accumulation. The simplified method was exercised on a number of problems involving uniaxial and multiaxial loading, isothermal and nonisothermal conditions, dwell times at various points in the cycles, different materials and kinematic hardening. Good agreement was found between these analytical results and nonlinear finite element solutions for these problems. The simplified analysis program used less than 1 percent of the CPU time required for a nonlinear finite element analysis.

  7. A simplified method for elastic-plastic-creep structural analysis

    NASA Technical Reports Server (NTRS)

    Kaufman, A.

    1985-01-01

    A simplified inelastic analysis computer program (ANSYPM) was developed for predicting the stress-strain history at the critical location of a thermomechanically cycled structure from an elastic solution. The program uses an iterative and incremental procedure to estimate the plastic strains from the material stress-strain properties and a plasticity hardening model. Creep effects are calculated on the basis of stress relaxation at constant strain, creep at constant stress or a combination of stress relaxation and creep accumulation. The simplified method was exercised on a number of problems involving uniaxial and multiaxial loading, isothermal and nonisothermal conditions, dwell times at various points in the cycles, different materials and kinematic hardening. Good agreement was found between these analytical results and nonlinear finite element solutions for these problems. The simplified analysis program used less than 1 percent of the CPU time required for a nonlinear finite element analysis.

  8. Formative Research on the Simplifying Conditions Method (SCM) for Task Analysis and Sequencing.

    ERIC Educational Resources Information Center

    Kim, YoungHwan; Reigeluth, Charles M.

    The Simplifying Conditions Method (SCM) is a set of guidelines for task analysis and sequencing of instructional content under the Elaboration Theory (ET). This article introduces the fundamentals of SCM and presents the findings from a formative research study on SCM. It was conducted in two distinct phases: design and instruction. In the first…

  9. A simplified and efficient method for the analysis of fatty acid methyl esters suitable for large clinical studies.

    PubMed

    Masood, Athar; Stark, Ken D; Salem, Norman

    2005-10-01

    Conventional sample preparation for fatty acid analysis is a complicated, multiple-step process, and gas chromatography (GC) analysis alone can require >1 h per sample to resolve fatty acid methyl esters (FAMEs). Fast GC analysis was adapted to human plasma FAME analysis using a modified polyethylene glycol column with a smaller internal diameter, a thinner stationary-phase film, increased carrier gas linear velocity, and faster temperature ramping. Our results indicated that fast GC analyses were comparable to conventional GC in peak resolution. A conventional transesterification method based on Lepage and Roy was simplified to a one-step method by eliminating the neutralization and centrifugation steps. A robotics-amenable method was also developed, with lower methylation temperatures and an open-tube format using multiple reagent additions. The simplified methods produced quantitatively similar results, with coefficients of variation similar to those of the original Lepage and Roy method. The present streamlined methodology is suitable for direct fatty acid analysis of human plasma, is appropriate for research studies, and will facilitate large clinical trials and make population studies possible.

  10. Simplified method for the transverse bending analysis of twin celled concrete box girder bridges

    NASA Astrophysics Data System (ADS)

    Chithra, J.; Nagarajan, Praveen; S, Sajith A.

    2018-03-01

    Box girder bridges are one of the best options for bridges with spans greater than 25 m. For the study of these bridges, three-dimensional finite-element analysis is the best-suited method. However, performing three-dimensional analysis for routine design is difficult as well as time consuming, and the software used for it is very expensive. Hence designers resort to simplified analyses for predicting longitudinal and transverse bending moments. Among the many analytical methods used to find the transverse bending moments, simplified frame analysis (SFA) is the simplest and the most widely used in design offices. Results from SFA can be used for the preliminary analysis of concrete box girder bridges. From a review of the literature, it is found that the majority of the work done using SFA is restricted to the analysis of single-cell box girder bridges; not much work has been done on the analysis of multi-cell concrete box girder bridges. In the present study, a double-cell concrete box girder bridge is chosen. The bridge is modelled using three-dimensional finite-element software and the results are then compared with the simplified frame analysis. The study mainly focuses on establishing correction factors for the transverse bending moment values obtained from SFA.

  11. Simplified half-life methods for the analysis of kinetic data

    NASA Technical Reports Server (NTRS)

    Eberhart, J. G.; Levin, E.

    1988-01-01

    The analysis of reaction-rate data has as its goal the determination of the order and rate constant that characterize the data. Chemical reactions with one reactant are considered, and simplified methods for accomplishing this goal are presented. The approaches presented involve the use of half-lives or other fractional lives. These methods are particularly useful for the more elementary discussions of kinetics found in general and physical chemistry courses.
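
    For an nth-order reaction in a single reactant, the half-life scales as t1/2 ∝ [A]0^(1-n), so half-lives measured at two initial concentrations give the order, and a single half-life then gives the rate constant. The following is a minimal sketch of that fractional-life procedure (illustrative only, not the authors' worked examples):

```python
import math

def reaction_order_from_half_lives(c0_1, t_half_1, c0_2, t_half_2):
    """Estimate the reaction order n from half-lives measured at two
    initial concentrations, using t_half proportional to [A]0**(1 - n):
    n = 1 + ln(t_half_1 / t_half_2) / ln(c0_2 / c0_1)."""
    return 1.0 + math.log(t_half_1 / t_half_2) / math.log(c0_2 / c0_1)

def rate_constant(n, c0, t_half):
    """Rate constant from one half-life for order n.  For n != 1 the
    integrated rate law gives t_half = (2**(n-1) - 1) / ((n-1) k c0**(n-1));
    for n == 1 it is the familiar k = ln 2 / t_half."""
    if abs(n - 1.0) < 1e-9:
        return math.log(2.0) / t_half
    return (2.0 ** (n - 1.0) - 1.0) / ((n - 1.0) * t_half * c0 ** (n - 1.0))
```

    For a second-order reaction, doubling the initial concentration halves the half-life, and the two functions above recover n = 2 and the correct k from that pair of measurements.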

  12. Failure mode and effects analysis: a comparison of two common risk prioritisation methods.

    PubMed

    McElroy, Lisa M; Khorzad, Rebeca; Nannicelli, Anna P; Brown, Alexandra R; Ladner, Daniela P; Holl, Jane L

    2016-05-01

    Failure mode and effects analysis (FMEA) is a method of risk assessment increasingly used in healthcare over the past decade. The traditional method, however, can require substantial time and training resources. The goal of this study was to compare a simplified scoring method with the traditional scoring method to determine the degree of congruence in identifying high-risk failures. An FMEA of the operating room (OR) to intensive care unit (ICU) handoff was conducted. Failures were scored and ranked using both the traditional risk priority number (RPN) and criticality-based method and a simplified method that designates failures as 'high', 'medium' or 'low' risk. The degree of congruence was determined by first identifying the failures deemed critical by the traditional method (RPN≥300) and then calculating the per cent congruence with the failures designated critical ('high' risk) by the simplified method. In total, 79 process failures among 37 individual steps in the OR to ICU handoff process were identified. The traditional method yielded Criticality Indices (CIs) ranging from 18 to 72 and RPNs ranging from 80 to 504. The simplified method ranked 11 failures as 'low' risk, 30 as 'medium' risk and 22 as 'high' risk. The traditional method yielded 24 failures with an RPN≥300, of which 22 were identified as 'high' risk by the simplified method (92% agreement). The top 20% of CIs (≥60) included 12 failures, of which six were designated 'high' risk by the simplified method (50% agreement). These results suggest that the simplified method of scoring and ranking failures identified by an FMEA can be a useful tool for healthcare organisations with limited access to FMEA expertise. However, the simplified method does not provide the same degree of discrimination in the ranking of failures as the traditional method.
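
    The traditional FMEA score multiplies three 1-10 ratings into a risk priority number. A minimal sketch of the two scorings (the RPN-threshold banding shown here is hypothetical and for illustration only; in the study the simplified method rated failures 'high'/'medium'/'low' directly rather than via RPN):

```python
def rpn(severity, occurrence, detection):
    """Traditional FMEA risk priority number: the product of three
    1-10 ratings (severity x occurrence x detection), so 1..1000."""
    return severity * occurrence * detection

def simplified_band(score, high=300, medium=100):
    """Illustrative three-level banding of an RPN score.  The cutoffs
    are hypothetical; 300 echoes the study's criticality threshold."""
    if score >= high:
        return "high"
    if score >= medium:
        return "medium"
    return "low"
```

    A failure rated 7/8/9, for example, scores 504 and lands in the 'high' band, matching the upper end of the RPN range reported in the study.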

  13. A Manual of Simplified Laboratory Methods for Operators of Wastewater Treatment Facilities.

    ERIC Educational Resources Information Center

    Westerhold, Arnold F., Ed.; Bennett, Ernest C., Ed.

    This manual is designed to provide the small wastewater treatment plant operator, as well as the new or inexperienced operator, with simplified methods for laboratory analysis of water and wastewater. It is emphasized that this manual is not a replacement for standard methods but a guide for plants with insufficient equipment to perform analyses…

  14. Evaluation of a simplified gross thrust calculation method for a J85-21 afterburning turbojet engine in an altitude facility

    NASA Technical Reports Server (NTRS)

    Baer-Riedhart, J. L.

    1982-01-01

    A simplified gross thrust calculation method was evaluated for its ability to predict the gross thrust of a modified J85-21 engine. The method uses tailpipe pressure data and ambient pressure data to predict gross thrust; its algorithm is based on a one-dimensional analysis of the flow in the afterburner and nozzle. The test results showed that the method was notably accurate over the engine operating envelope, using the altitude-facility measured thrust for comparison. A summary of these results, the simplified gross thrust method and its requirements, and the test techniques used are discussed in this paper.
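
    The one-dimensional analysis underlying such methods reduces, at the nozzle exit, to gross thrust as momentum flux plus pressure thrust. As an illustration only (not the J85-21 algorithm itself, whose inputs are tailpipe and ambient pressures; the numbers in the test are hypothetical), a minimal sketch:

```python
def gross_thrust(mdot, v_exit, p_exit, p_amb, a_exit):
    """One-dimensional gross thrust at the nozzle exit plane:
    momentum flux (mdot * v_exit) plus pressure thrust
    ((p_exit - p_amb) * a_exit).  SI units: kg/s, m/s, Pa, m^2 -> N."""
    return mdot * v_exit + (p_exit - p_amb) * a_exit
```

    When the nozzle exit pressure equals ambient (perfectly expanded flow), the pressure-thrust term vanishes and gross thrust is the momentum flux alone.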

  15. Simple design of slanted grating with simplified modal method.

    PubMed

    Li, Shubin; Zhou, Changhe; Cao, Hongchao; Wu, Jun

    2014-02-15

    A simplified modal method (SMM) is presented that offers a clear physical image for subwavelength slanted grating. The diffraction characteristic of the slanted grating under Littrow configuration is revealed by the SMM as an equivalent rectangular grating, which is in good agreement with rigorous coupled-wave analysis. Based on the equivalence, we obtained an effective analytic solution for simplifying the design and optimization of a slanted grating. It offers a new approach for design of the slanted grating, e.g., a 1×2 beam splitter can be easily designed. This method should be helpful for designing various new slanted grating devices.

  16. Dynamic characteristics and simplified numerical methods of an all-vertical-piled wharf in offshore deep water

    NASA Astrophysics Data System (ADS)

    Zhang, Hua-qing; Sun, Xi-ping; Wang, Yuan-zhan; Yin, Ji-long; Wang, Chao-yang

    2015-10-01

    There has been a growing trend in the development of offshore deep-water ports in China. For such deep sea projects, all-vertical-piled wharves are suitable structures and generally located in open waters, greatly affected by wave action. Currently, no systematic studies or simplified numerical methods are available for deriving the dynamic characteristics and dynamic responses of all-vertical-piled wharves under wave cyclic loads. In this article, we compare the dynamic characteristics of an all-vertical-piled wharf with those of a traditional inshore high-piled wharf through numerical analysis; our research reveals that the vibration period of an all-vertical-piled wharf under cyclic loading is longer than that of an inshore high-piled wharf and is much closer to the period of the loading wave. Therefore, dynamic calculation and analysis should be conducted when designing and calculating the characteristics of an all-vertical-piled wharf. We establish a dynamic finite element model to examine the dynamic response of an all-vertical-piled wharf under wave cyclic loads and compare the results with those under wave equivalent static load; the comparison indicates that dynamic amplification of the structure is evident when the wave dynamic load effect is taken into account. Furthermore, a simplified dynamic numerical method for calculating the dynamic response of an all-vertical-piled wharf is established based on the P-Y curve. Compared with finite element analysis, the simplified method is more convenient to use and applicable to large structural deformation while considering the soil non-linearity. We confirmed that the simplified method has acceptable accuracy and can be used in engineering applications.

  17. A Fast Method for Embattling Optimization of Ground-Based Radar Surveillance Network

    NASA Astrophysics Data System (ADS)

    Jiang, H.; Cheng, H.; Zhang, Y.; Liu, J.

    A growing number of space activities have created an orbital-debris environment that poses increasing impact risks to existing space systems and human space flight. To keep in-orbit spacecraft safe, many observation facilities are needed to catalog space objects, especially in low Earth orbit (LEO). Surveillance of LEO objects relies mainly on ground-based radar; because of the limited capability of existing radar facilities, a large number of ground-based radars will need to be built in the next few years to meet current space-surveillance demands. How to optimize the deployment of a ground-based radar surveillance network is therefore a problem that needs to be solved. The traditional approach simulates detection for every candidate station against cataloged data, compares the various simulation results combinatorially, and selects the optimal result as the station-layout scheme. Each simulation is time consuming, the combinatorial analysis is computationally complex, and as the number of stations increases the complexity of the optimization problem grows exponentially, so the traditional method becomes intractable; no better way to solve this problem has been available until now. In this paper, the target-detection procedure is simplified. First, the space coverage of a ground-based radar is simplified by building a projection model of radar coverage at different orbit altitudes; then a simplified model of objects crossing the radar coverage is established according to the characteristics of orbital motion. These two simplifications greatly reduce the computational complexity of target detection, and simulation results confirm their correctness. In addition, the detection areas of the radar network can be computed easily with the simplified model, and the deployment of the network can then be optimized with an artificial-intelligence algorithm, which further reduces the computational complexity. Compared with the traditional method, the proposed method greatly improves computational efficiency.
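
    As an illustration of this kind of geometric simplification (a hypothetical sketch, not the authors' model), the ground-projected radius of the coverage cap that a radar with a given elevation mask cuts from an orbital shell follows directly from spherical geometry:

```python
import math

RE = 6371.0  # mean Earth radius, km

def coverage_radius_km(alt_km, min_elev_deg):
    """Ground-projected radius (km) of the spherical cap, on an orbital
    shell at altitude alt_km, visible to a radar whose elevation mask is
    min_elev_deg.  The Earth-central angle to the edge of coverage is
    lam = acos(RE * cos(el) / (RE + h)) - el."""
    el = math.radians(min_elev_deg)
    lam = math.acos(RE * math.cos(el) / (RE + alt_km)) - el
    return RE * lam
```

    Looking straight up (a 90-degree mask) the footprint collapses to a point, while a zero-degree mask at an 800 km shell covers a cap roughly 3000 km in radius, which is why relatively few well-placed radars can fence a LEO shell.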

  18. Improved Simplified Methods for Effective Seismic Analysis and Design of Isolated and Damped Bridges in Western and Eastern North America

    NASA Astrophysics Data System (ADS)

    Koval, Viacheslav

    The seismic design provisions of the CSA-S6 Canadian Highway Bridge Design Code and the AASHTO LRFD Seismic Bridge Design Specifications have been developed primarily based on historical earthquake events that have occurred along the west coast of North America. For the design of seismic isolation systems, these codes include simplified analysis and design methods. The appropriateness and range of application of these methods are investigated through extensive parametric nonlinear time history analyses in this thesis. It was found that there is a need to adjust existing design guidelines to better capture the expected nonlinear response of isolated bridges. For isolated bridges located in eastern North America, new damping coefficients are proposed. The applicability limits of the code-based simplified methods have been redefined to ensure that the modified method will lead to conservative results and that a wider range of seismically isolated bridges can be covered by this method. The possibility of further improving current simplified code methods was also examined. By transforming the quantity of allocated energy into a displacement contribution, an idealized analytical solution is proposed as a new simplified design method. This method realistically reflects the effects of ground-motion and system design parameters, including the effects of a drifted oscillation center. The proposed method is therefore more appropriate than current existing simplified methods and can be applicable to isolation systems exhibiting a wider range of properties. A multi-level-hazard performance matrix has been adopted by different seismic provisions worldwide and will be incorporated into the new edition of the Canadian CSA-S6-14 Bridge Design code. However, the combined effect and optimal use of isolation and supplemental damping devices in bridges have not been fully exploited yet to achieve enhanced performance under different levels of seismic hazard. 
    A novel Dual-Level Seismic Protection (DLSP) concept is proposed and developed in this thesis, which makes it possible to achieve optimum seismic performance with combined isolation and supplemental damping devices in bridges. This concept is shown to be an attractive design approach both for upgrading existing seismically deficient bridges and for designing new isolated bridges.

  19. Simplified Least Squares Shadowing sensitivity analysis for chaotic ODEs and PDEs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chater, Mario, E-mail: chaterm@mit.edu; Ni, Angxiu, E-mail: niangxiu@mit.edu; Wang, Qiqi, E-mail: qiqi@mit.edu

    This paper develops a variant of the Least Squares Shadowing (LSS) method, which has successfully computed the derivative for several chaotic ODEs and PDEs. The development in this paper aims to simplify Least Squares Shadowing method by improving how time dilation is treated. Instead of adding an explicit time dilation term as in the original method, the new variant uses windowing, which can be more efficient and simpler to implement, especially for PDEs.

  20. Simplified welding distortion analysis for fillet welding using composite shell elements

    NASA Astrophysics Data System (ADS)

    Kim, Mingyu; Kang, Minseok; Chung, Hyun

    2015-09-01

    This paper presents a simplified welding distortion analysis method to predict the welding deformation of both the plate and the stiffener in fillet welds. Methods based on equivalent thermal strain, such as Strain as Direct Boundary (SDB), have been widely used because they predict welding deformation effectively. For fillet welding, however, those methods cannot represent the deformation of both members at once, since the temperature degree of freedom is shared at the intersection nodes of the two members. In this paper, we propose a new approach to simulate the deformation of both members. The method simulates fillet weld deformations by employing composite shell elements and using different thermal expansion coefficients through the thickness, with a fixed temperature at the intersection nodes. For verification, we compare results from experiments, 3D thermo-elastic-plastic analysis, the SDB method, and the proposed method. Compared with the experimental results, the proposed method effectively predicts welding deformation for fillet welds.

  1. Java Programs for Using Newmark's Method and Simplified Decoupled Analysis to Model Slope Performance During Earthquakes

    USGS Publications Warehouse

    Jibson, Randall W.; Jibson, Matthew W.

    2003-01-01

    Landslides typically cause a large proportion of earthquake damage, and the ability to predict slope performance during earthquakes is important for many types of seismic-hazard analysis and for the design of engineered slopes. Newmark's method for modeling a landslide as a rigid-plastic block sliding on an inclined plane provides a useful method for predicting approximate landslide displacements. Newmark's method estimates the displacement of a potential landslide block as it is subjected to earthquake shaking from a specific strong-motion record (earthquake acceleration-time history). A modification of Newmark's method, decoupled analysis, allows modeling landslides that are not assumed to be rigid blocks. This open-file report is available on CD-ROM and contains Java programs intended to facilitate performing both rigorous and simplified Newmark sliding-block analysis and a simplified model of decoupled analysis. For rigorous analysis, 2160 strong-motion records from 29 earthquakes are included along with a search interface for selecting records based on a wide variety of record properties. Utilities are available that allow users to add their own records to the program and use them for conducting Newmark analyses. Also included is a document containing detailed information about how to use Newmark's method to model dynamic slope performance. This program will run on any platform that supports the Java Runtime Environment (JRE) version 1.3, including Windows, Mac OSX, Linux, Solaris, etc. A minimum of 64 MB of available RAM is needed, and the fully installed program requires 400 MB of disk space.
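
    Newmark's rigid-block analysis integrates the ground acceleration in excess of the block's critical (yield) acceleration to accumulate permanent displacement. The following is a minimal one-directional forward-Euler sketch of that idea, not the USGS Java programs themselves; the rectangular pulse in the test is a hypothetical input:

```python
def newmark_displacement(accel_g, dt, ac_g, g=9.81):
    """Rigid-block Newmark sliding displacement (one-way sliding sketch).

    accel_g : ground acceleration time history, in g
    dt      : time step, s
    ac_g    : critical (yield) acceleration of the block, in g
    Returns the accumulated permanent displacement in metres.
    """
    v = 0.0  # velocity of the block relative to the ground, m/s
    d = 0.0  # accumulated permanent displacement, m
    for a in accel_g:
        if v > 0.0 or a > ac_g:
            # While sliding, the relative acceleration is (a - ac) * g;
            # sliding stops when the relative velocity returns to zero.
            v = max(0.0, v + (a - ac_g) * g * dt)
            d += v * dt
    return d
```

    If the record never exceeds the critical acceleration the block never slides and the displacement is exactly zero, which is the screening behaviour that makes the method useful for regional hazard analysis.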

  2. Comparison of the Calculations Results of Heat Exchange Between a Single-Family Building and the Ground Obtained with the Quasi-Stationary and 3-D Transient Models. Part 2: Intermittent and Reduced Heating Mode

    NASA Astrophysics Data System (ADS)

    Staszczuk, Anna

    2017-03-01

    The paper provides comparative results of calculations of heat exchange between the ground and typical residential buildings using simplified (quasi-stationary) and more accurate (transient, three-dimensional) methods. Characteristics such as the building's geometry, the basement hollow, and the construction of ground-touching assemblies were considered, including the intermittent and reduced heating modes. The calculations with the simplified methods were conducted in accordance with the currently valid standard PN-EN ISO 13370:2008 (Thermal performance of buildings. Heat transfer via the ground. Calculation methods). Comparative estimates of the transient, 3-D heat flow were performed with the computer software WUFI®plus. The analysis quantifies the differences in heat exchange obtained with the more exact and the simplified methods.

  3. Simplified Dynamic Analysis of Grinders Spindle Node

    NASA Astrophysics Data System (ADS)

    Demec, Peter

    2014-12-01

    The contribution deals with the simplified dynamic analysis of a surface grinding machine spindle node. The dynamic analysis is based on the transfer matrix method, which is essentially a matrix form of the method of initial parameters. The advantage of the described method, despite the seemingly complex mathematical apparatus, is primarily that it does not require costly commercial finite-element software to solve the problem. All calculations can be made, for example, in MS Excel, which is advantageous especially in the initial stages of designing a spindle node, for rapid assessment of the suitability of its design. After the entire structure of the spindle node has been detailed, it is still necessary to perform a refined dynamic analysis in an FEM environment, which requires particular skills and experience and is therefore economically demanding. This work was developed within grant project KEGA No. 023TUKE-4/2012 Creation of a comprehensive educational - teaching material for the article Production technique using a combination of traditional and modern information technology and e-learning.
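
    The transfer matrix method propagates a state vector (deflection, slope, moment, shear) across the shaft segments by matrix multiplication, which is why it fits in a spreadsheet. A minimal static sketch using massless Euler-Bernoulli field matrices, checked against the classic cantilever tip deflection P·L³/3EI (illustrative only; the paper's dynamic analysis uses frequency-dependent matrices):

```python
def field_matrix(L, EI):
    """Field (transfer) matrix of a massless Euler-Bernoulli beam segment,
    acting on the state vector [deflection, slope, moment, shear]."""
    return [
        [1.0, L,   L * L / (2.0 * EI), L ** 3 / (6.0 * EI)],
        [0.0, 1.0, L / EI,             L * L / (2.0 * EI)],
        [0.0, 0.0, 1.0,                L],
        [0.0, 0.0, 0.0,                1.0],
    ]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def cantilever_tip_deflection(P, EI, segment_lengths):
    """Chain the segment matrices from the fixed end to the tip of a
    cantilever carrying a tip load P, and return the tip deflection
    (its sign depends on the sign convention chosen here)."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for L in segment_lengths:
        T = matmul(field_matrix(L, EI), T)
    total = sum(segment_lengths)
    # Fixed-end state: zero deflection and slope; the shear is constant
    # (V0 = P) and M0 = -total * P makes the moment vanish at the tip.
    state0 = [0.0, 0.0, -total * P, P]
    return sum(T[0][k] * state0[k] for k in range(4))
```

    Because the field matrices of massless segments compose exactly, splitting the beam into more segments changes nothing for this static case; the payoff comes when point masses and supports are inserted between segments.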

  4. Simplified analytical model and balanced design approach for light-weight wood-based structural panel in bending

    Treesearch

    Jinghao Li; John F. Hunt; Shaoqin Gong; Zhiyong Cai

    2016-01-01

    This paper presents a simplified analytical model and balanced design approach for modeling lightweight wood-based structural panels in bending. Because the finite element analysis (FEA) model requires many design parameters as input during the preliminary design process and optimization, an equivalent method was developed to analyze the mechanical...

  5. Determination of the Shear Stress Distribution in a Laminate from the Applied Shear Resultant--A Simplified Shear Solution

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Aboudi, Jacob; Yarrington, Phillip W.

    2007-01-01

    The simplified shear solution method is presented for approximating the through-thickness shear stress distribution within a composite laminate based on laminated beam theory. The method does not consider the solution of a particular boundary value problem, rather it requires only knowledge of the global shear loading, geometry, and material properties of the laminate or panel. It is thus analogous to lamination theory in that ply level stresses can be efficiently determined from global load resultants (as determined, for instance, by finite element analysis) at a given location in a structure and used to evaluate the margin of safety on a ply by ply basis. The simplified shear solution stress distribution is zero at free surfaces, continuous at ply boundaries, and integrates to the applied shear load. Comparisons to existing theories are made for a variety of laminates, and design examples are provided illustrating the use of the method for determining through-thickness shear stress margins in several types of composite panels and in the context of a finite element structural analysis.
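
    In the homogeneous single-ply limit, the stated requirements (zero stress at the free surfaces, continuity, integration to the applied shear resultant) reduce to the familiar parabolic through-thickness profile. A minimal sketch of that degenerate case, with a numerical check of the integral condition (not the multi-ply laminate formulation itself):

```python
def shear_stress(z, V, b, h):
    """Through-thickness shear stress for a homogeneous rectangular
    section of width b and thickness h under shear resultant V:
    parabolic, zero at z = +/- h/2, peaking at 1.5 * V / (b * h)."""
    A = b * h
    return 1.5 * V / A * (1.0 - (2.0 * z / h) ** 2)

def integrated_shear(V, b, h, n=2000):
    """Trapezoidal integration of tau * b across the thickness; should
    recover the applied shear resultant V."""
    dz = h / n
    total = 0.0
    for i in range(n):
        z0 = -h / 2.0 + i * dz
        z1 = z0 + dz
        total += 0.5 * (shear_stress(z0, V, b, h)
                        + shear_stress(z1, V, b, h)) * b * dz
    return total
```

    The laminate version replaces this single parabola with a piecewise profile that remains continuous at ply boundaries while still satisfying the same surface and integral conditions.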

  6. Iron Analysis by Redox Titration. A General Chemistry Experiment.

    ERIC Educational Resources Information Center

    Kaufman, Samuel; DeVoe, Howard

    1988-01-01

    Describes a simplified redox method for total iron analysis suitable for execution in a three-hour laboratory period by general chemistry students. Discusses materials, procedures, analyses, and student performance. (CW)

  7. Simplified and refined finite element approaches for determining stresses and internal forces in geometrically nonlinear structural analysis

    NASA Technical Reports Server (NTRS)

    Robinson, J. C.

    1979-01-01

    Two methods for determining stresses and internal forces in geometrically nonlinear structural analysis are presented. The simplified approach uses the mid-deformed structural position to evaluate strains when rigid body rotation is present. The important feature of this approach is that it can easily be used with a general-purpose finite-element computer program. The refined approach uses element intrinsic or corotational coordinates and a geometric transformation to determine element strains from joint displacements. Results are presented which demonstrate the capabilities of these potentially useful approaches for geometrically nonlinear structural analysis.

  8. 77 FR 54482 - Allocation of Costs Under the Simplified Methods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-05

    ... Allocation of Costs Under the Simplified Methods AGENCY: Internal Revenue Service (IRS), Treasury. ACTION... certain costs to the property and that allocate costs under the simplified production method or the simplified resale method. The proposed regulations provide rules for the treatment of negative additional...

  9. Correction for the Hematocrit Bias in Dried Blood Spot Analysis Using a Nondestructive, Single-Wavelength Reflectance-Based Hematocrit Prediction Method.

    PubMed

    Capiau, Sara; Wilk, Leah S; De Kesel, Pieter M M; Aalders, Maurice C G; Stove, Christophe P

    2018-02-06

    The hematocrit (Hct) effect is one of the most important hurdles currently preventing more widespread implementation of quantitative dried blood spot (DBS) analysis in a routine context. Indeed, the Hct may affect both the accuracy of DBS methods and the interpretation of DBS-based results. We previously developed a method to determine the Hct of a DBS based on its hemoglobin content using noncontact diffuse reflectance spectroscopy. Despite the ease with which the analysis can be performed (i.e., mere scanning of the DBS) and the good results that were obtained, the method did require a complicated algorithm to derive the total hemoglobin content from the DBS's reflectance spectrum. As the total hemoglobin was calculated as the sum of oxyhemoglobin, methemoglobin, and hemichrome, the three main hemoglobin derivatives formed in DBS upon aging, the reflectance spectrum needed to be unmixed to determine the quantity of each of these derivatives. We have now simplified the method by using the reflectance at only a single wavelength, located at a quasi-isosbestic point in the reflectance curve. At this wavelength, assuming 1-to-1 stoichiometry of the aging reaction, the reflectance is insensitive to hemoglobin degradation and scales only with the total amount of hemoglobin and, hence, the Hct. This simplified method was successfully validated. At each quality control level, as well as at the limits of quantitation (i.e., 0.20 and 0.67), bias and intra- and interday imprecision were within 10%. Method reproducibility was excellent based on incurred sample reanalysis and surpassed the reproducibility of the original method. Furthermore, the influence of the volume spotted, the measurement location within the spot, and storage time and temperature were evaluated, showing no relevant impact of these parameters. Application to 233 patient samples revealed a good correlation between the Hct determined on whole blood and the predicted Hct determined on venous DBS. The bias obtained with Bland and Altman analysis was -0.015 and the limits of agreement were -0.061 and 0.031, indicating that the simplified, noncontact Hct prediction method even outperforms the original method. In addition, using caffeine as a model compound, it was demonstrated that this simplified Hct prediction method can effectively be used to apply a Hct-dependent correction factor to DBS-based results to alleviate the Hct bias.
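    The Bland-Altman quantities quoted in the abstract (bias and limits of agreement) are straightforward to reproduce; a minimal sketch with hypothetical paired Hct values, not the study's data:

    ```python
    import numpy as np

    # Hypothetical paired measurements: Hct on whole blood vs. predicted from DBS
    hct_whole_blood = np.array([0.35, 0.42, 0.29, 0.47, 0.38, 0.33, 0.45, 0.40])
    hct_predicted   = np.array([0.36, 0.40, 0.31, 0.45, 0.37, 0.35, 0.43, 0.41])

    diff = hct_predicted - hct_whole_blood
    bias = diff.mean()                  # mean difference between the two methods
    sd = diff.std(ddof=1)               # sample SD of the differences
    loa_lower = bias - 1.96 * sd        # 95% limits of agreement
    loa_upper = bias + 1.96 * sd
    ```

    A method agrees well when the bias is near zero and the limits of agreement are narrow relative to the clinically relevant Hct range.
    
    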

  10. Study on a pattern classification method of soil quality based on simplified learning sample dataset

    USGS Publications Warehouse

    Zhang, Jiahua; Liu, S.; Hu, Y.; Tian, Y.

    2011-01-01

    Given the massive amount of soil information involved in current soil quality grade evaluation, this paper constructs an intelligent classification approach for soil quality grades based on classical sampling techniques and an unordered multiclass logistic regression model. As a case study, the learning sample capacity was determined for a given confidence level and estimation accuracy, and the c-means algorithm was used to automatically extract a simplified learning sample dataset from the cultivated soil quality grade evaluation database for the study area, Longchuan County in Guangdong Province. An unordered logistic classifier model was then built, and the calculation and analysis steps of intelligent soil quality grade classification were given. The results indicate that soil quality grades can be effectively learned and predicted from the extracted simplified dataset using this method, which changes the traditional approach to soil quality grade evaluation. © 2011 IEEE.

  11. A simplified dynamic model of the T700 turboshaft engine

    NASA Technical Reports Server (NTRS)

    Duyar, Ahmet; Gu, Zhen; Litt, Jonathan S.

    1992-01-01

    A simplified open-loop dynamic model of the T700 turboshaft engine, valid within the normal operating range of the engine, is developed. This model is obtained by linking linear state space models obtained at different engine operating points. Each linear model is developed from a detailed nonlinear engine simulation using a multivariable system identification and realization method. The simplified model may be used with a model-based real time diagnostic scheme for fault detection and diagnostics, as well as for open loop engine dynamics studies and closed loop control analysis utilizing a user generated control law.

  12. Analysis of temperature distribution in liquid-cooled turbine blades

    NASA Technical Reports Server (NTRS)

    Livingood, John N B; Brown, W Byron

    1952-01-01

    The temperature distribution in liquid-cooled turbine blades determines the amount of cooling required to reduce the blade temperature to permissible values at specified locations. This report presents analytical methods for computing temperature distributions in liquid-cooled turbine blades, or in simplified shapes used to approximate sections of the blade. The individual analyses are first presented in terms of their mathematical development. By means of numerical examples, comparisons are made between simplified and more complete solutions and the effects of several variables are examined. Nondimensional charts to simplify some temperature-distribution calculations are also given.

  13. [The principal components analysis--method to classify the statistical variables with applications in medicine].

    PubMed

    Dascălu, Cristina Gena; Antohe, Magda Ecaterina

    2009-01-01

    Based on eigenvalue and eigenvector analysis, principal component analysis identifies the subspace of principal components from a set of parameters that is sufficient to characterize the whole set. Interpreting the data under analysis as a cloud of points, we find, through geometrical transformations, the directions along which the cloud's dispersion is maximal: the lines that pass through the cloud's center of weight and have a maximal density of points around them (obtained by defining an appropriate criterion function and minimizing it). This method can be used successfully to simplify the statistical analysis of questionnaires, because it helps select from a set of items only the most relevant ones, which cover the variation of the whole dataset. For instance, in the presented sample we started from a questionnaire with 28 items and, by applying principal component analysis, identified 7 principal components, or main items, which significantly simplifies the further statistical analysis of the data.
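    The eigendecomposition route described above can be sketched directly; the data here are synthetic stand-ins for the 28-item questionnaire (200 respondents driven by 7 assumed latent factors), not the study's data:

    ```python
    import numpy as np

    # Synthetic questionnaire: 28 items driven by 7 latent factors plus noise
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(200, 7))
    loadings = rng.normal(size=(7, 28))
    X = latent @ loadings + 0.1 * rng.normal(size=(200, 28))

    Xc = X - X.mean(axis=0)                  # center each item
    C = np.cov(Xc, rowvar=False)             # 28 x 28 covariance matrix
    evals, evecs = np.linalg.eigh(C)         # eigenvalues and eigenvectors
    order = np.argsort(evals)[::-1]          # sort by decreasing variance
    evals, evecs = evals[order], evecs[:, order]

    explained = np.cumsum(evals) / evals.sum()
    k = int(np.searchsorted(explained, 0.95)) + 1  # components for 95% variance
    scores = Xc @ evecs[:, :k]               # respondents in the reduced space
    ```

    With 7 true factors and small noise, the 95% threshold recovers roughly 7 components, mirroring the 28-to-7 reduction reported in the abstract.
    
    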

  14. Viewpoint on ISA TR84.0.02--simplified methods and fault tree analysis.

    PubMed

    Summers, A E

    2000-01-01

    ANSI/ISA-S84.01-1996 and IEC 61508 require the establishment of a safety integrity level for any safety instrumented system or safety related system used to mitigate risk. Each stage of design, operation, maintenance, and testing is judged against this safety integrity level. Quantitative techniques can be used to verify whether the safety integrity level is met. ISA-dTR84.0.02 is a technical report under development by ISA, which discusses how to apply quantitative analysis techniques to safety instrumented systems. This paper discusses two of those techniques: (1) Simplified equations and (2) Fault tree analysis.
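    As an illustration of the "simplified equations" route, here is the generic textbook single-channel (1oo1) formula with assumed failure data; the numbers and the formula are illustrative, not quoted from the technical report:

    ```python
    # Average probability of failure on demand for a single-channel safety
    # function, assuming a constant dangerous undetected failure rate and
    # periodic proof testing (assumed values).
    lambda_du = 2.0e-6            # dangerous undetected failures per hour
    proof_test_interval = 8760.0  # hours: annual proof test

    pfd_avg = lambda_du * proof_test_interval / 2.0

    # IEC 61508 low-demand bands: SIL 2 means 1e-3 <= PFDavg < 1e-2
    sil = 2 if 1e-3 <= pfd_avg < 1e-2 else None
    ```

    Fault tree analysis reaches the same PFDavg for simple architectures but scales better to shared components and common-cause failures.
    
    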

  15. GoPros™ as an underwater photogrammetry tool for citizen science

    PubMed Central

    David, Peter A.; Dupont, Sally F.; Mathewson, Ciaran P.; O’Neill, Samuel J.; Powell, Nicholas N.; Williamson, Jane E.

    2016-01-01

    Citizen science can increase the scope of research in the marine environment; however, it suffers from necessitating specialized training and simplified methodologies that reduce research output. This paper presents a simplified, novel survey methodology for citizen scientists, which combines GoPro imagery and structure from motion to construct an ortho-corrected 3D model of habitats for analysis. Results using a coral reef habitat were compared to surveys conducted with traditional snorkelling methods for benthic cover, holothurian counts, and coral health. Results were comparable between the two methods, and structure from motion allows the results to be analysed off-site for any chosen visual analysis. The GoPro method outlined in this study is thus an effective tool for citizen science in the marine environment, especially for comparing changes in coral cover or volume over time. PMID:27168973

  16. GoPros™ as an underwater photogrammetry tool for citizen science.

    PubMed

    Raoult, Vincent; David, Peter A; Dupont, Sally F; Mathewson, Ciaran P; O'Neill, Samuel J; Powell, Nicholas N; Williamson, Jane E

    2016-01-01

    Citizen science can increase the scope of research in the marine environment; however, it suffers from necessitating specialized training and simplified methodologies that reduce research output. This paper presents a simplified, novel survey methodology for citizen scientists, which combines GoPro imagery and structure from motion to construct an ortho-corrected 3D model of habitats for analysis. Results using a coral reef habitat were compared to surveys conducted with traditional snorkelling methods for benthic cover, holothurian counts, and coral health. Results were comparable between the two methods, and structure from motion allows the results to be analysed off-site for any chosen visual analysis. The GoPro method outlined in this study is thus an effective tool for citizen science in the marine environment, especially for comparing changes in coral cover or volume over time.

  17. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film.

    PubMed

    Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were -32.336 and -33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range.
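    The calibration step itself reduces to a straight-line fit of film density against absorbed dose; a sketch with made-up numbers chosen to have a gradient of the same magnitude as those reported:

    ```python
    import numpy as np

    # Hypothetical calibration points (not the paper's measurements):
    # absorbed dose versus net film density, idealized as exactly linear
    dose = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
    density = -33.5 * dose + 120.0

    gradient, intercept = np.polyfit(dose, density, 1)

    # inverting the fit turns a measured density into an absorbed dose estimate
    dose_from_density = (density - intercept) / gradient
    ```

    Comparing the fitted gradients from the simplified and standard exposure procedures, as the paper does, is then a one-line check.
    
    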

  18. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film

    PubMed Central

    Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were −32.336 and −33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range. PMID:28144120

  19. Can matrix solid phase dispersion (MSPD) be more simplified? Application of solventless MSPD sample preparation method for GC-MS and GC-FID analysis of plant essential oil components.

    PubMed

    Wianowska, Dorota; Dawidowicz, Andrzej L

    2016-05-01

    This paper proposes and shows the analytical capabilities of a new variant of matrix solid phase dispersion (MSPD) with the solventless blending step in the chromatographic analysis of plant volatiles. The obtained results prove that the use of a solvent is redundant as the sorption ability of the octadecyl brush is sufficient for quantitative retention of volatiles from 9 plants differing in their essential oil composition. The extraction efficiency of the proposed simplified MSPD method is equivalent to the efficiency of the commonly applied variant of MSPD with the organic dispersing liquid and pressurized liquid extraction, which is a much more complex, technically advanced and highly efficient technique of plant extraction. The equivalency of these methods is confirmed by the variance analysis. The proposed solventless MSPD method is precise, accurate, and reproducible. The recovery of essential oil components estimated by the MSPD method exceeds 98%, which is satisfactory for analytical purposes. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. A simplified competition data analysis for radioligand specific activity determination.

    PubMed

    Venturino, A; Rivera, E S; Bergoc, R M; Caro, R A

    1990-01-01

    Non-linear regression and two-step linear fit methods were developed to determine the actual specific activity of 125I-ovine prolactin by radioreceptor self-displacement analysis. The experimental results obtained by the different methods are superposable. The non-linear regression method is considered to be the most adequate procedure to calculate the specific activity, but if its software is not available, the other described methods are also suitable.

  1. Simplified methods for evaluating road prism stability

    Treesearch

    William J. Elliot; Mark Ballerini; David Hall

    2003-01-01

    Mass failure is one of the most common failures of low-volume roads in mountainous terrain. Current methods for evaluating stability of these roads require a geotechnical specialist. A stability analysis program, XSTABL, was used to estimate the stability of 3,696 combinations of road geometry, soil, and groundwater conditions. A sensitivity analysis was carried out to...

  2. Simplified web-based decision support method for traffic management and work zone analysis.

    DOT National Transportation Integrated Search

    2017-01-01

    Traffic congestion mitigation is one of the key challenges that transportation planners and operations engineers face when planning for construction and maintenance activities. There is a wide variety of approaches and methods that address work zone ...

  3. Simplified web-based decision support method for traffic management and work zone analysis.

    DOT National Transportation Integrated Search

    2015-06-01

    Traffic congestion mitigation is one of the key challenges that transportation planners and operations engineers face when planning for construction and maintenance activities. There is a wide variety of approaches and methods that address work z...

  4. Simplification of the DPPH assay for estimating the antioxidant activity of wine and wine by-products.

    PubMed

    Carmona-Jiménez, Yolanda; García-Moreno, M Valme; Igartuburu, Jose M; Garcia Barroso, Carmelo

    2014-12-15

    The DPPH assay is one of the most commonly employed methods for measuring antioxidant activity. Even though this method is considered very simple and efficient, it does present various limitations which make it complicated to perform. The range of linearity between the DPPH inhibition percentage and sample concentration has been studied with a view to simplifying the method for characterising samples of wine origin. It has been concluded that all the samples are linear in a range of inhibition below 40%, which allows the analysis to be simplified. A new parameter more appropriate for the simplification, the EC20, has been proposed to express the assay results. Additionally, the reaction time was analysed with the object of avoiding the need for kinetic studies in the method. The simplifications considered offer a more functional method, without significant errors, which could be used for routine analysis. Copyright © 2014 Elsevier Ltd. All rights reserved.
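    The proposed simplification can be sketched as follows: restrict the fit to the linear region (inhibition below 40%) and read the EC20 off the fitted line. The dilution-series numbers here are hypothetical, not the paper's measurements:

    ```python
    import numpy as np

    # Hypothetical dilution series of a wine sample:
    # concentration (mg/mL) vs. percentage of DPPH inhibition
    conc  = np.array([0.05, 0.10, 0.15, 0.20, 0.25])
    inhib = np.array([7.9, 16.1, 24.0, 31.8, 40.2])

    linear = inhib < 40.0                      # keep only the linear region
    slope, intercept = np.polyfit(conc[linear], inhib[linear], 1)

    ec20 = (20.0 - intercept) / slope          # concentration giving 20% inhibition
    ```

    A lower EC20 indicates stronger antioxidant activity, since less sample is needed to quench 20% of the radical.
    
    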

  5. An IMU-to-Body Alignment Method Applied to Human Gait Analysis.

    PubMed

    Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo

    2016-12-10

    This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.

  6. A Study of Water Pollution

    ERIC Educational Resources Information Center

    Sarkis, Vahak D.

    1974-01-01

    Describes a method (involving a Hach Colorimeter and simplified procedures) that can be used for the analysis of up to 56 different chemical constituents of water. Presents the results of student analysis of waters of Fulton and Montgomery counties in New York. (GS)

  7. Simplified and refined structural modeling for economical flutter analysis and design

    NASA Technical Reports Server (NTRS)

    Ricketts, R. H.; Sobieszczanski, J.

    1977-01-01

    A coordinated use of two finite-element models of different levels of refinement is presented to reduce the computer cost of the repetitive flutter analysis commonly encountered in structural resizing to meet flutter requirements. One model, termed a refined model (RM), represents a high degree of detail needed for strength-sizing and flutter analysis of an airframe. The other model, called a simplified model (SM), has a relatively much smaller number of elements and degrees-of-freedom. A systematic method of deriving an SM from a given RM is described. The method consists of judgmental and numerical operations to make the stiffness and mass of the SM elements equivalent to the corresponding substructures of RM. The structural data are automatically transferred between the two models. The bulk of analysis is performed on the SM with periodical verifications carried out by analysis of the RM. In a numerical example of a supersonic cruise aircraft with an arrow wing, this approach permitted substantial savings in computer costs and acceleration of the job turn-around.

  8. Airflow and Particle Transport Through Human Airways: A Systematic Review

    NASA Astrophysics Data System (ADS)

    Kharat, S. B.; Deoghare, A. B.; Pandey, K. M.

    2017-08-01

    This paper reviews the relevant literature on two-phase analysis of air and particle flow through human airways. The review emphasizes the steps involved in two-phase analysis: geometric modelling methods and mathematical models. The first parts describe the various approaches followed for constructing an airway model upon which analyses are conducted. Two broad categories of geometric modelling, simplified modelling and accurate modelling using medical scans, are discussed briefly, covering the ease and limitations of simplified models and examples of CT-based models. The later part of the review outlines the different mathematical models implemented by researchers, with the models used for the air and particle phases elaborated separately.

  9. On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.

    PubMed

    Yamazaki, Keisuke

    2012-07-01

    Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as required in model selection, is still time-consuming even though there are effective algorithms based on dynamic programming. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique seen in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map under which the estimated parameters have an asymptotically equivalent convergence point; such a map is referred to as a vicarious map. As a demonstration of finding vicarious maps, we consider a feature space that limits the length of data and derive the length necessary for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Simplified Model to Predict Deflection and Natural Frequency of Steel Pole Structures

    NASA Astrophysics Data System (ADS)

    Balagopal, R.; Prasad Rao, N.; Rokade, R. P.

    2018-04-01

    Steel pole structures are a suitable alternative to lattice transmission towers, given the difficulty of finding land for the new rights of way that lattice towers require. Steel poles have a tapered cross section and are generally used for communication, power transmission, and lighting purposes. Determining the deflection of a steel pole is important for assessing its functional requirements: excessive deflection may cause signal attenuation and short-circuit problems in communication and transmission poles. In this paper, a simplified method is proposed to determine both primary and secondary deflection based on the dummy unit load/moment method. The deflection predicted by the proposed method is validated against full-scale experimental investigations conducted on 8 m and 30 m high lighting masts and on 132 kV and 400 kV transmission poles, and is found to be in close agreement with the measurements. Determining the natural frequency is an important criterion for examining dynamic sensitivity, so a simplified semi-empirical method using the static deflection from the proposed method is formulated to determine the natural frequency. The natural frequency predicted by the proposed method is validated against FE analysis results, and the predictions are further validated against experimental results available in the literature.
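    The dummy unit load idea can be checked on a prismatic cantilever (my own illustrative numbers, not the tapered poles of the paper): the virtual-work integral of the actual moment times the unit-load moment over EI reproduces the closed-form tip deflection P*L^3/(3*E*I). For a tapered pole, EI(x) would simply vary along the height inside the same integral.

    ```python
    import numpy as np

    # Prismatic cantilever check case: tip load P, length L, constant EI (assumed)
    E, I, L, P = 2.0e11, 1.0e-4, 10.0, 1000.0

    x = np.linspace(0.0, L, 2001)
    M_actual = P * (L - x)        # bending moment from the actual tip load
    m_dummy = 1.0 * (L - x)       # moment from a dummy unit load at the tip

    # virtual-work integral: delta = integral of M*m/(EI) dx (trapezoidal rule)
    integrand = M_actual * m_dummy / (E * I)
    delta = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))
    ```

    The numerical quadrature is what makes the method practical for tapered sections, where no simple closed form exists.
    
    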

  11. A simplified fractional order impedance model and parameter identification method for lithium-ion batteries

    PubMed Central

    Yang, Qingxia; Xu, Jun; Cao, Binggang; Li, Xiuqing

    2017-01-01

    Identification of internal parameters of lithium-ion batteries is a useful tool to evaluate battery performance, and requires an effective model and algorithm. Based on the least square genetic algorithm, a simplified fractional order impedance model for lithium-ion batteries and the corresponding parameter identification method were developed. The simplified model was derived from the analysis of the electrochemical impedance spectroscopy data and the transient response of lithium-ion batteries with different states of charge. In order to identify the parameters of the model, an equivalent tracking system was established, and the method of least square genetic algorithm was applied using the time-domain test data. Experiments and computer simulations were carried out to verify the effectiveness and accuracy of the proposed model and parameter identification method. Compared with a second-order resistance-capacitance (2-RC) model and recursive least squares method, small tracing voltage fluctuations were observed. The maximum battery voltage tracing error for the proposed model and parameter identification method is within 0.5%; this demonstrates the good performance of the model and the efficiency of the least square genetic algorithm to estimate the internal parameters of lithium-ion batteries. PMID:28212405

  12. The limitations of simple gene set enrichment analysis assuming gene independence.

    PubMed

    Tamayo, Pablo; Steinhardt, George; Liberzon, Arthur; Mesirov, Jill P

    2016-02-01

    Since its first publication in 2003, the Gene Set Enrichment Analysis method, based on the Kolmogorov-Smirnov statistic, has been heavily used, modified, and also questioned. Recently, a simplified approach that uses a one-sample t-test score to assess enrichment and ignores gene-gene correlations was proposed by Irizarry et al. 2009 as a serious contender. The argument criticizes Gene Set Enrichment Analysis's nonparametric nature and its use of an empirical null distribution as unnecessary and hard to compute. We refute these claims by careful consideration of the assumptions of the simplified method and its results, including a comparison with Gene Set Enrichment Analysis on a large benchmark set of 50 datasets. Our results provide strong empirical evidence that gene-gene correlations cannot be ignored, due to the significant variance inflation they produce on the enrichment scores, and should be taken into account when estimating gene set enrichment significance. In addition, we discuss the challenges that the complex correlation structure and multi-modality of gene sets pose more generally for gene set enrichment methods. © The Author(s) 2012.
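    The variance-inflation point can be made concrete with the standard equicorrelation case: for n genes with unit variance and common pairwise correlation rho, the variance of the gene-set mean statistic is inflated by a factor 1 + (n-1)*rho relative to the independence assumption. A minimal sketch with illustrative values, not the paper's data:

    ```python
    import numpy as np

    # n gene-level statistics with unit variance and common pairwise correlation
    n, rho = 50, 0.3
    C = rho * np.ones((n, n)) + (1.0 - rho) * np.eye(n)  # covariance matrix

    w = np.ones(n) / n               # the gene-set score is the mean: w @ stats
    var_mean = w @ C @ w             # its true variance under correlation
    var_indep = 1.0 / n              # variance assumed under gene independence

    inflation = var_mean / var_indep # equals 1 + (n - 1) * rho
    ```

    Even a modest rho of 0.3 inflates the variance more than fifteen-fold for a 50-gene set, so a t-test that assumes independence badly understates the null variability of the enrichment score.
    
    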

  13. Efficient calculation of the polarizability: a simplified effective-energy technique

    NASA Astrophysics Data System (ADS)

    Berger, J. A.; Reining, L.; Sottile, F.

    2012-09-01

    In a recent publication [J.A. Berger, L. Reining, F. Sottile, Phys. Rev. B 82, 041103(R) (2010)] we introduced the effective-energy technique to calculate in an accurate and numerically efficient manner the GW self-energy as well as the polarizability, which is required to evaluate the screened Coulomb interaction W. In this work we show that the effective-energy technique can be used to further simplify the expression for the polarizability without a significant loss of accuracy. In contrast to standard sum-over-state methods where huge summations over empty states are required, our approach only requires summations over occupied states. The three simplest approximations we obtain for the polarizability are explicit functionals of an independent- or quasi-particle one-body reduced density matrix. We provide evidence of the numerical accuracy of this simplified effective-energy technique as well as an analysis of our method.

  14. A simple method for plasma total vitamin C analysis suitable for routine clinical laboratory use.

    PubMed

    Robitaille, Line; Hoffer, L John

    2016-04-21

    In-hospital hypovitaminosis C is highly prevalent but almost completely unrecognized. Medical awareness of this potentially important disorder is hindered by the inability of most hospital laboratories to determine plasma vitamin C concentrations. The availability of a simple, reliable method for analyzing plasma vitamin C could increase opportunities for routine plasma vitamin C analysis in clinical medicine. Plasma vitamin C can be analyzed by high performance liquid chromatography (HPLC) with electrochemical (EC) or ultraviolet (UV) light detection. We modified existing UV-HPLC methods for plasma total vitamin C analysis (the sum of ascorbic and dehydroascorbic acid) to develop a simple, constant-low-pH sample reduction procedure followed by isocratic reverse-phase HPLC separation using a purely aqueous low-pH non-buffered mobile phase. Although EC-HPLC is widely recommended over UV-HPLC for plasma total vitamin C analysis, the two methods have never been directly compared. We formally compared the simplified UV-HPLC method with EC-HPLC in 80 consecutive clinical samples. The simplified UV-HPLC method was less expensive, easier to set up, required fewer reagents and no pH adjustments, and demonstrated greater sample stability than many existing methods for plasma vitamin C analysis. When compared with the gold-standard EC-HPLC method in 80 consecutive clinical samples exhibiting a wide range of plasma vitamin C concentrations, it performed equivalently. The easy setup, simplicity, and sensitivity of the plasma vitamin C analysis method described here could make it practical in a normally equipped hospital laboratory. Unlike any prior UV-HPLC method for plasma total vitamin C analysis, it was rigorously compared with the gold-standard EC-HPLC method and performed equivalently. Adoption of this method could increase the availability of plasma vitamin C analysis in clinical medicine.

  15. Simplified Processing Method for Meter Data Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fowler, Kimberly M.; Colotelo, Alison H. A.; Downs, Janelle L.

    2015-11-01

A simple, quick metered-data processing method that can be used for Army Metered Data Management System (MDMS) and Logistics Innovation Agency data, and that may also be useful for other large data sets. It is intended for large data sets where the analyst has little information about the buildings.

  16. A new method to identify the foot of continental slope based on an integrated profile analysis

    NASA Astrophysics Data System (ADS)

    Wu, Ziyin; Li, Jiabiao; Li, Shoujun; Shang, Jihong; Jin, Xiaobin

    2017-06-01

A new method is proposed to automatically identify the foot of the continental slope (FOS) based on the integrated analysis of topographic profiles. Using the extremum points of the second derivative and the Douglas-Peucker algorithm, it simplifies the topographic profiles and then calculates the second derivative of both the original profiles and the D-P profiles. Seven steps are proposed to simplify the original profiles. In addition, multiple identification criteria are proposed to determine the FOS points, including the gradient, water depth and second-derivative values of data points, as well as the concavity or convexity, continuity and segmentation of the topographic profiles. The method can comprehensively and intelligently analyze the topographic profiles and their derived slopes, second derivatives and D-P profiles, and on that basis it can analyze the essential properties of every data point in a profile. It further removes the concave points of the curve and implements six FOS judgment criteria.
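The profile-simplification step relies on the Douglas-Peucker algorithm. As an illustrative sketch (the abstract gives no implementation details, so the function below is our own minimal version), the recursive D-P reduction of a depth profile can be written as:

```python
import math

def douglas_peucker(points, epsilon):
    """Simplify a polyline (e.g. a depth profile) with the Douglas-Peucker algorithm.

    points: list of (x, y) tuples; epsilon: maximum allowed perpendicular deviation.
    """
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    norm = math.hypot(x2 - x1, y2 - y1)
    # Find the interior point farthest from the chord joining the endpoints.
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        x0, y0 = points[i]
        d = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1) / norm
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:
        # Keep the farthest point and recurse on the two halves.
        left = douglas_peucker(points[: index + 1], epsilon)
        right = douglas_peucker(points[index:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]
```

Points that deviate from the chord by less than epsilon are discarded, which is what yields the simplified D-P profile whose second derivative is then compared with the original's.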

  17. A simplified analysis of propulsion installation losses for computerized aircraft design

    NASA Technical Reports Server (NTRS)

    Morris, S. J., Jr.; Nelms, W. P., Jr.; Bailey, R. O.

    1976-01-01

    A simplified method is presented for computing the installation losses of aircraft gas turbine propulsion systems. The method has been programmed for use in computer aided conceptual aircraft design studies that cover a broad range of Mach numbers and altitudes. The items computed are: inlet size, pressure recovery, additive drag, subsonic spillage drag, bleed and bypass drags, auxiliary air systems drag, boundary-layer diverter drag, nozzle boattail drag, and the interference drag on the region adjacent to multiple nozzle installations. The methods for computing each of these installation effects are described and computer codes for the calculation of these effects are furnished. The results of these methods are compared with selected data for the F-5A and other aircraft. The computer program can be used with uninstalled engine performance information which is currently supplied by a cycle analysis program. The program, including comments, is about 600 FORTRAN statements long, and uses both theoretical and empirical techniques.

  18. Insights into Fourier Synthesis and Analysis: Part 2--A Simplified Mathematics.

    ERIC Educational Resources Information Center

    Moore, Guy S. M.

    1988-01-01

    Introduced is an analysis of a waveform into its Fourier components. Topics included are simplified analysis of a square waveform, a triangular waveform, half-wave rectified alternating current (AC), and impulses. Provides the mathematical expression and simplified analysis diagram of each waveform. (YP)
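For the square waveform, the simplified analysis corresponds to the standard Fourier result f(t) = (4/pi) * sum over odd n of sin(n t)/n. A minimal sketch of the partial sum (our illustration, not the article's own material):

```python
import math

def square_wave_partial_sum(t, n_terms):
    """Partial Fourier sum of a unit square wave (period 2*pi, levels +/-1):
    f(t) ~ (4/pi) * sum over odd n of sin(n*t)/n."""
    total = 0.0
    for k in range(n_terms):
        n = 2 * k + 1  # only odd harmonics contribute
        total += math.sin(n * t) / n
    return 4.0 / math.pi * total
```

At the midpoint of the positive half-cycle (t = pi/2) the partial sum converges toward +1 as more harmonics are added.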

  19. The role of interest and inflation rates in life-cycle cost analysis

    NASA Technical Reports Server (NTRS)

    Eisenberger, I.; Remer, D. S.; Lorden, G.

    1978-01-01

The effect of projected interest and inflation rates on life-cycle cost calculations is discussed, and a method is proposed for making such calculations that replaces these rates with a single parameter. Besides simplifying the analysis, the method clarifies the roles of these rates. An analysis of historical interest and inflation rates from 1950 to 1976 shows that the proposed method can be expected to yield very good projections of life-cycle cost even if the rates themselves fluctuate considerably.
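The abstract does not state the exact form of the single parameter; a common construction, assumed here purely for illustration, is the inflation-adjusted (real) discount rate r = (1 + i)/(1 + f) - 1, under which discounting an inflation-escalated annual cost at the nominal rate i is equivalent to discounting the constant base cost at r:

```python
def combined_rate(interest, inflation):
    """Single parameter replacing the interest rate i and inflation rate f:
    the inflation-adjusted (real) discount rate (an assumed construction)."""
    return (1 + interest) / (1 + inflation) - 1

def present_value_nominal(cost, interest, inflation, years):
    """Discount an annual cost that escalates with inflation, year by year."""
    return sum(cost * (1 + inflation) ** n / (1 + interest) ** n
               for n in range(1, years + 1))

def present_value_real(cost, real_rate, years):
    """Same present value using only the single combined parameter."""
    return sum(cost / (1 + real_rate) ** n for n in range(1, years + 1))
```

The two present values agree exactly, which is why a single parameter suffices and why its role is clearer than the two separate rates.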

  20. A simplified method for monomeric carbohydrate analysis of corn stover biomass

    USDA-ARS?s Scientific Manuscript database

    Constituent determination of biomass for theoretical ethanol yield (TEY) estimation requires the removal of non-structural carbohydrates prior to analysis to prevent interference with the analytical procedure. According to the accepted U.S. Dept. of Energy-National Renewable Energy Laboratory (NREL)...

  1. Stoichiometric network analysis and associated dimensionless kinetic equations. Application to a model of the Bray-Liebhafsky reaction.

    PubMed

    Schmitz, Guy; Kolar-Anić, Ljiljana Z; Anić, Slobodan R; Cupić, Zeljko D

    2008-12-25

The stoichiometric network analysis (SNA) introduced by B. L. Clarke is applied to a simplified model of the complex oscillating Bray-Liebhafsky reaction under batch conditions, which had not previously been examined by this method. This powerful method for analyzing the stability of steady states is also used to transform the classical differential equations into dimensionless equations. The transformation is straightforward and leads to a form of the equations that combines the advantages of classical dimensionless equations with those of the SNA. The dimensionless parameters used have orders of magnitude given by the experimental information about concentrations and currents. This greatly simplifies the study of the slow manifold and shows which parameters are essential for controlling its shape and consequently have an important influence on the trajectories. The effectiveness of these equations is illustrated with two examples: a study of the bifurcation points and a simple sensitivity analysis, different from the classical one in that it is based more directly on the chemistry of the studied system.

  2. Regional and longitudinal estimation of product lifespan distribution: a case study for automobiles and a simplified estimation method.

    PubMed

    Oguchi, Masahiro; Fuse, Masaaki

    2015-02-03

Product lifespan estimates are important information for understanding progress toward sustainable consumption and for estimating the stocks and end-of-life flows of products. Previous publications have reported actual product lifespans; however, quantitative data are still limited for many countries and years. This study presents a regional and longitudinal estimation of the lifespan distribution of consumer durables, taking passenger cars as an example, and proposes a simplified method for estimating product lifespan distribution. We estimated lifespan distribution parameters for 17 countries based on the age profile of in-use cars. Sensitivity analysis demonstrated that the shape parameter of the lifespan distribution can be replaced by a constant value for all the countries and years. This enabled a simplified estimation that does not require detailed data on the age profile. Applying the simplified method, we estimated the trend in average lifespans of passenger cars from 2000 to 2009 for 20 countries. Average lifespan differed greatly between countries (9-23 years) and was increasing in many countries. This suggests that consumer behavior differs greatly among countries and has changed over time, even in developed countries. The results suggest that inappropriate assumptions about average lifespan may cause significant inaccuracy in estimating the stocks and end-of-life flows of products.
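Lifespan distributions of durables are commonly modeled as Weibull distributions; fixing the shape parameter, as the abstract describes, means a single statistic such as the average lifespan determines the whole distribution. A minimal sketch of that simplification (the shape value below is illustrative, not the paper's):

```python
import math

FIXED_SHAPE = 2.7  # illustrative constant shape parameter (assumption, not the paper's value)

def weibull_mean(scale, shape=FIXED_SHAPE):
    """Mean lifespan of a Weibull(shape, scale) distribution:
    mean = scale * Gamma(1 + 1/shape)."""
    return scale * math.gamma(1 + 1 / shape)

def scale_from_mean(mean_lifespan, shape=FIXED_SHAPE):
    """Invert the mean formula: with the shape fixed, the average lifespan
    alone pins down the full distribution, so no age-profile data is needed."""
    return mean_lifespan / math.gamma(1 + 1 / shape)
```

The roundtrip mean -> scale -> mean is exact, which is precisely what makes the detailed age-profile data dispensable once the shape is held constant.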

  3. Using Language Sample Analysis to Assess Spoken Language Production in Adolescents

    ERIC Educational Resources Information Center

    Miller, Jon F.; Andriacchi, Karen; Nockerts, Ann

    2016-01-01

    Purpose: This tutorial discusses the importance of language sample analysis and how Systematic Analysis of Language Transcripts (SALT) software can be used to simplify the process and effectively assess the spoken language production of adolescents. Method: Over the past 30 years, thousands of language samples have been collected from typical…

  4. A Simplified Mesh Deformation Method Using Commercial Structural Analysis Software

    NASA Technical Reports Server (NTRS)

    Hsu, Su-Yuen; Chang, Chau-Lyan; Samareh, Jamshid

    2004-01-01

Mesh deformation in response to redefined or moving aerodynamic surface geometries is a frequently encountered task in many applications. Most existing methods are either mathematically too complex or computationally too expensive for use in practical design and optimization. We propose a simplified mesh deformation method based on linear elastic finite element analyses that can be easily implemented by using commercially available structural analysis software. Using a prescribed displacement at the mesh boundaries, a simple structural analysis is constructed based on a spatially varying Young's modulus to move the entire mesh in accordance with the surface geometry redefinitions. A variety of surface movements, such as translation, rotation, or the incremental surface reshaping that often takes place in an optimization procedure, may be handled by the present method. We describe the numerical formulation and implementation using the NASTRAN software in this paper. The use of commercial software bypasses tedious reimplementation and takes advantage of the computational efficiency offered by the vendor. A two-dimensional airfoil mesh and a three-dimensional aircraft mesh were used as test cases to demonstrate the effectiveness of the proposed method. Euler and Navier-Stokes calculations were performed for the deformed two-dimensional meshes.
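A one-dimensional analogue illustrates the idea: interior nodes of a spring chain follow a prescribed boundary displacement, with a spatially varying stiffness playing the role of the Young's modulus field (our sketch, not the paper's NASTRAN implementation):

```python
import numpy as np

def deform_1d_mesh(x, boundary_disp, stiffness):
    """1-D analogue of elasticity-based mesh deformation.

    x: node coordinates; boundary_disp: prescribed displacement of the last
    node (the moving surface); the first node is fixed. stiffness(i) gives
    the spring constant of element i, a spatially varying 'Young's modulus'.
    """
    n = len(x)
    k = np.array([stiffness(i) for i in range(n - 1)])
    # Assemble the spring-chain stiffness matrix for K u = 0 with
    # Dirichlet conditions u[0] = 0 and u[-1] = boundary_disp.
    K = np.zeros((n, n))
    for i in range(n - 1):
        K[i, i] += k[i]
        K[i + 1, i + 1] += k[i]
        K[i, i + 1] -= k[i]
        K[i + 1, i] -= k[i]
    u = np.zeros(n)
    u[-1] = boundary_disp
    free = np.arange(1, n - 1)
    # Move the prescribed-displacement terms to the right-hand side.
    rhs = -K[np.ix_(free, [0, n - 1])] @ u[[0, n - 1]]
    u[free] = np.linalg.solve(K[np.ix_(free, free)], rhs)
    return x + u
```

With uniform stiffness the interior displacements interpolate linearly; making elements near the surface stiffer (larger modulus) propagates the motion more rigidly there, which is the purpose of the spatial variation.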

  5. Simplified method to solve sound transmission through structures lined with elastic porous material.

    PubMed

    Lee, J H; Kim, J

    2001-11-01

An approximate analysis method is developed to calculate sound transmission through structures lined with porous material. Because the porous material has both a solid phase and a fluid phase, three wave components exist in the material, which makes the related analysis very complicated. The main idea of the approximate method is very simple: model the porous material using only the strongest of the three waves, which in effect idealizes the material as an equivalent fluid. The analysis procedure is conducted in two steps. In the first step, sound transmission through a flat double panel with a porous liner of infinite extent, which has the same cross-sectional construction as the actual structure, is solved based on the full theory, and the strongest wave component is identified. In the second step, sound transmission through the actual structure is solved, modeling the porous material as an equivalent fluid while using the actual geometry of the structure. The development and validation of the method are discussed in detail. As an application example, the transmission loss through double-walled cylindrical shells with a porous core is calculated using the simplified method.

  6. Motion video analysis using planar parallax

    NASA Astrophysics Data System (ADS)

    Sawhney, Harpreet S.

    1994-04-01

Motion and structure analysis in video sequences can lead to efficient descriptions of objects and their motions. Interesting events in videos can be detected using such an analysis: for instance, independent object motion when the camera itself is moving, or figure-ground segregation based on the saliency of a structure compared to its surroundings. In this paper we present a method for 3D motion and structure analysis that uses a planar surface in the environment as a reference coordinate system to describe a video sequence. The motion in the video sequence is described as the motion of the reference plane, and the parallax motion of all the non-planar components of the scene. It is shown how this method simplifies the otherwise hard general 3D motion analysis problem. In addition, a natural coordinate system in the environment is used to describe the scene, which can simplify motion-based segmentation. This work is part of an ongoing effort in our group towards video annotation and analysis for indexing and retrieval. Results from a demonstration system being developed are presented.

  7. Verification of rain-flow reconstructions of a variable amplitude load history. M.S. Thesis, 1990 Final Report

    NASA Technical Reports Server (NTRS)

    Clothiaux, John D.; Dowling, Norman E.

    1992-01-01

    The suitability of using rain-flow reconstructions as an alternative to an original loading spectrum for component fatigue life testing is investigated. A modified helicopter maneuver history is used for the rain-flow cycle counting and history regenerations. Experimental testing on a notched test specimen over a wide range of loads produces similar lives for the original history and the reconstructions. The test lives also agree with a simplified local strain analysis performed on the specimen utilizing the rain-flow cycle count. The rain-flow reconstruction technique is shown to be a viable test spectrum alternative to storing the complete original load history, especially in saving computer storage space and processing time. A description of the regeneration method, the simplified life prediction analysis, and the experimental methods are included in the investigation.

  8. Simplified criteria for diagnosing superficial esophageal squamous neoplasms using Narrow Band Imaging magnifying endoscopy

    PubMed Central

    Dobashi, Akira; Goda, Kenichi; Yoshimura, Noboru; Ohya, Tomohiko R; Kato, Masayuki; Sumiyama, Kazuki; Matsushima, Masato; Hirooka, Shinichi; Ikegami, Masahiro; Tajiri, Hisao

    2016-01-01

    AIM To simplify the diagnostic criteria for superficial esophageal squamous cell carcinoma (SESCC) on Narrow Band Imaging combined with magnifying endoscopy (NBI-ME). METHODS This study was based on the post-hoc analysis of a randomized controlled trial. We performed NBI-ME for 147 patients with present or a history of squamous cell carcinoma in the head and neck, or esophagus between January 2009 and June 2011. Two expert endoscopists detected 89 lesions that were suspicious for SESCC lesions, which had been prospectively evaluated for the following 6 NBI-ME findings in real time: “intervascular background coloration”; “proliferation of intrapapillary capillary loops (IPCL)”; and “dilation”, “tortuosity”, “change in caliber”, and “various shapes (VS)” of IPCLs (i.e., Inoue’s tetrad criteria). The histologic examination of specimens was defined as the gold standard for diagnosis. A stepwise logistic regression analysis was used to identify candidates for the simplified criteria from among the 6 NBI-ME findings for diagnosing SESCCs. We evaluated diagnostic performance of the simplified criteria compared with that of Inoue’s criteria. RESULTS Fifty-four lesions (65%) were histologically diagnosed as SESCCs and the others as low-grade intraepithelial neoplasia or inflammation. In the univariate analysis, proliferation, tortuosity, change in caliber, and VS were significantly associated with SESCC (P < 0.01). The combination of VS and proliferation was statistically extracted from the 6 NBI-ME findings by using the stepwise logistic regression model. We defined the combination of VS and proliferation as simplified dyad criteria for SESCC. The areas under the curve of the simplified dyad criteria and Inoue’s tetrad criteria were 0.70 and 0.73, respectively. No significant difference was shown between them. 
The sensitivity, specificity, and accuracy of diagnosis for SESCC were 77.8%, 57.1%, 69.7% and 51.9%, 80.0%, 62.9% for the simplified dyad criteria and Inoue’s tetrad criteria, respectively. CONCLUSION The combination of proliferation and VS may serve as simplified criteria for the diagnosis of SESCC using NBI-ME. PMID:27895406
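The reported performance figures follow from a 2x2 confusion table. With 54 SESCC and 35 non-SESCC lesions, the simplified-criteria numbers correspond to roughly 42 true positives and 20 true negatives (our back-calculation; the abstract does not give the raw counts):

```python
def diagnostic_performance(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from a 2x2 confusion table."""
    sens = tp / (tp + fn)          # true positives / all diseased
    spec = tn / (tn + fp)          # true negatives / all non-diseased
    acc = (tp + tn) / (tp + fn + tn + fp)
    return sens, spec, acc
```

Using the assumed counts (42 TP, 12 FN, 20 TN, 15 FP) reproduces the abstract's 77.8% / 57.1% / 69.7% for the simplified dyad criteria.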

  9. The experimental determination of the moments of inertia of airplanes by a simplified compound-pendulum method

    NASA Technical Reports Server (NTRS)

    Gracey, William

    1948-01-01

A simplified compound-pendulum method for the experimental determination of the moments of inertia of airplanes about the x and y axes is described. The method is developed as a modification of the standard pendulum method reported previously in NACA Report NACA-467. A brief review of the older method is included to form a basis for discussion of the simplified method. (author)
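The underlying relation is the compound-pendulum period formula combined with the parallel-axis theorem. A minimal sketch of the standard formulas (our illustration, not the NACA procedure itself):

```python
import math

def moment_of_inertia_cg(mass, period, pivot_to_cg, g=9.80665):
    """Compound-pendulum estimate of the moment of inertia about the c.g.

    From T = 2*pi*sqrt(I_pivot / (m*g*d)) we get
    I_pivot = m*g*d*T**2 / (4*pi**2),
    and the parallel-axis theorem gives I_cg = I_pivot - m*d**2.
    """
    i_pivot = mass * g * pivot_to_cg * period ** 2 / (4 * math.pi ** 2)
    return i_pivot - mass * pivot_to_cg ** 2
```

As a sanity check, a point mass swung at distance d has period 2*pi*sqrt(d/g) and zero moment of inertia about its own c.g., which the formula recovers.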

  10. An IMU-to-Body Alignment Method Applied to Human Gait Analysis

    PubMed Central

    Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo

    2016-01-01

    This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis. PMID:27973406

  11. Use of fluorinated polybrominated diphenyl ethers and simplified cleanup for the analysis of polybrominated diphenyl ethers in house dust

    EPA Science Inventory

    A simple, cost-effective method is described for the analysis of polybrominated diphenyl ethers (PBDEs) in house dust using pressurized fluid extraction, cleanup with modified silica solid phase extraction tubes, and fluorinated internal standards. There are 14 PBDE congeners inc...

  12. A framework for performing workplace hazard and risk analysis: a participative ergonomics approach.

    PubMed

    Morag, Ido; Luria, Gil

    2013-01-01

Despite the unanimity among researchers about the centrality of workplace analysis based on participatory ergonomics (PE) as a basis for preventive interventions, there is still little agreement about the necessity of a theoretical framework for providing practical guidance. In an effort to develop a conceptual PE framework, the authors, focusing on 20 studies, found five primary dimensions for characterising an analytical structure: (1) extent of workforce involvement; (2) analysis duration; (3) diversity of reporter role types; (4) scope of analysis and (5) supportive information system for analysis management. An ergonomics analysis carried out in a chemical manufacturing plant serves as a case study for evaluating the proposed framework. The study simultaneously demonstrates the five dimensions and evaluates their feasibility. It showed that managerial leadership was fundamental to the successful implementation of the analysis, that all job holders should participate in analysing their own workplace, and that simplified reporting methods contributed to a desirable outcome. This paper seeks to clarify the scope of workplace ergonomics analysis by offering a theoretical and structured framework for providing practical advice and guidance. Essential to successfully implementing the analytical framework are managerial involvement, participation of all job holders and simplified reporting methods.

  13. A Simplified Digestion Protocol for the Analysis of Hg in Fish by Cold Vapor Atomic Absorption Spectroscopy

    ERIC Educational Resources Information Center

    Kristian, Kathleen E.; Friedbauer, Scott; Kabashi, Donika; Ferencz, Kristen M.; Barajas, Jennifer C.; O'Brien, Kelly

    2015-01-01

    Analysis of mercury in fish is an interesting problem with the potential to motivate students in chemistry laboratory courses. The recommended method for mercury analysis in fish is cold vapor atomic absorption spectroscopy (CVAAS), which requires homogeneous analyte solutions, typically prepared by acid digestion. Previously published digestion…

  14. Reduction method with system analysis for multiobjective optimization-based design

    NASA Technical Reports Server (NTRS)

    Azarm, S.; Sobieszczanski-Sobieski, J.

    1993-01-01

    An approach for reducing the number of variables and constraints, which is combined with System Analysis Equations (SAE), for multiobjective optimization-based design is presented. In order to develop a simplified analysis model, the SAE is computed outside an optimization loop and then approximated for use by an operator. Two examples are presented to demonstrate the approach.

  15. A simplified computer solution for the flexibility matrix of contacting teeth for spiral bevel gears

    NASA Technical Reports Server (NTRS)

    Hsu, C. Y.; Cheng, H. S.

    1987-01-01

A computer code, FLEXM, was developed to calculate the flexibility matrices of contacting teeth for spiral bevel gears using a simplified analysis based on elementary beam theory for the deformation of the gear and shaft. The simplified theory requires computing time at least one order of magnitude less than that needed for the complete finite element analysis reported earlier by H. Chao, and it is much easier to apply to different gear and shaft geometries. Results were obtained for a set of spiral bevel gears. The tooth deflections due to torsion, bending moment, shearing strain and axial force were found to be on the order of 10^-5, 10^-6, 10^-7, and 10^-8, respectively. Thus, the torsional deformation was the most predominant factor. In the analysis of dynamic load, response frequencies were found to be larger when the mass or moment of inertia was smaller or the stiffness was larger. The change in damping coefficient had little influence on the resonance frequency, but had a marked influence on the dynamic load at the resonant frequencies.

  16. A Simplified Approach to Risk Assessment Based on System Dynamics: An Industrial Case Study.

    PubMed

    Garbolino, Emmanuel; Chery, Jean-Pierre; Guarnieri, Franck

    2016-01-01

    Seveso plants are complex sociotechnical systems, which makes it appropriate to support any risk assessment with a model of the system. However, more often than not, this step is only partially addressed, simplified, or avoided in safety reports. At the same time, investigations have shown that the complexity of industrial systems is frequently a factor in accidents, due to interactions between their technical, human, and organizational dimensions. In order to handle both this complexity and changes in the system over time, this article proposes an original and simplified qualitative risk evaluation method based on the system dynamics theory developed by Forrester in the early 1960s. The methodology supports the development of a dynamic risk assessment framework dedicated to industrial activities. It consists of 10 complementary steps grouped into two main activities: system dynamics modeling of the sociotechnical system and risk analysis. This system dynamics risk analysis is applied to a case study of a chemical plant and provides a way to assess the technological and organizational components of safety. © 2016 Society for Risk Analysis.
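The modeling half of the methodology rests on Forrester-style stock-and-flow equations. A minimal sketch of such an integration (variable names are illustrative, not taken from the case study):

```python
def simulate_stock(inflow, outflow_rate, initial, dt, steps):
    """Minimal Forrester-style stock-and-flow integration (Euler method).

    stock'(t) = inflow - outflow_rate * stock(t)
    Here the 'stock' could stand for an accumulating hazard level in the
    sociotechnical system; the names are illustrative, not from the paper.
    """
    stock = initial
    history = [stock]
    for _ in range(steps):
        stock += dt * (inflow - outflow_rate * stock)
        history.append(stock)
    return history
```

With a constant inflow of 2 and an outflow rate of 0.5, the stock settles at the equilibrium inflow/outflow_rate = 4, the kind of dynamic behavior a system dynamics risk model tracks over time.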

  17. SF-FDTD analysis of a predictive physical model for parallel aligned liquid crystal devices

    NASA Astrophysics Data System (ADS)

Márquez, Andrés; Francés, Jorge; Martínez, Francisco J.; Gallego, Sergi; Alvarez, Mariela L.; Calzado, Eva M.; Pascual, Inmaculada; Beléndez, Augusto

    2017-08-01

Recently we demonstrated a novel and simplified model enabling calculation of the voltage-dependent retardance provided by parallel-aligned liquid crystal devices (PA-LCoS) for a very wide range of incidence angles and any wavelength in the visible. To our knowledge it represents the most simplified approach still showing predictive capability. Deeper insight into the physics behind the simplified model is necessary to understand whether the parameters in the model are physically meaningful. Since the PA-LCoS is a black box for which we have no information about the physical parameters of the device, we cannot perform this kind of analysis using the experimental retardance measurements. In this work we develop realistic simulations of the non-linear tilt of the liquid crystal director across the thickness of the liquid crystal layer in PA devices. We consider these profiles to have a sine-like shape, which is a good approximation for typical ranges of applied voltage in commercial PA-LCoS microdisplays. For these simulations we develop a rigorous method based on the split-field finite-difference time-domain (SF-FDTD) technique, which provides realistic retardance values. These values are used as the experimental measurements to which the simplified model is fitted. From this analysis we learn that the simplified model is very robust, providing unambiguous solutions when fitting its parameters. We also learn that two of the parameters in the model are physically meaningful, providing a useful reverse-engineering approach, with predictive capability, to probe the internal characteristics of the PA-LCoS device.

  18. Chemical profiling approach to evaluate the influence of traditional and simplified decoction methods on the holistic quality of Da-Huang-Xiao-Shi decoction using high-performance liquid chromatography coupled with diode-array detection and time-of-flight mass spectrometry.

    PubMed

    Yan, Xuemei; Zhang, Qianying; Feng, Fang

    2016-04-01

Da-Huang-Xiao-Shi decoction, consisting of Rheum officinale Baill, Mirabilitum, Phellodendron amurense Rupr. and Gardenia jasminoides Ellis, is a traditional Chinese medicine used for the treatment of jaundice. As described in "Jin Kui Yao Lue", a traditional multistep decoction of Da-Huang-Xiao-Shi decoction was required, while a simplified one-step decoction has been used in recent reports. To investigate the chemical difference between the decoctions obtained by the traditional and simplified preparations, a sensitive and reliable approach of high-performance liquid chromatography coupled with diode-array detection and electrospray ionization time-of-flight mass spectrometry was established. As a result, a total of 105 compounds were detected and identified. Analysis of the chromatogram profiles of the two decoctions showed that many compounds in the decoction of simplified preparation had changed markedly compared with those in the traditional preparation. These changes in constituents are bound to cause differences in the therapeutic effects of the two decoctions. The present study demonstrated that the preparation method significantly affects the holistic quality of traditional Chinese medicines and that use of a suitable preparation method is crucial for these medicines to produce their intended clinical curative effect. These results elucidate the scientific basis of traditional preparation methods in Chinese medicines. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Two tradeoffs between economy and reliability in loss of load probability constrained unit commitment

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Wang, Mingqiang; Ning, Xingyao

    2018-02-01

Spinning reserve (SR) should be scheduled considering the balance between economy and reliability. To address the computational intractability caused by the computation of loss of load probability (LOLP), many probabilistic methods use simplified formulations of LOLP to improve computational efficiency. However, two tradeoffs embedded in the SR optimization model are not explicitly analyzed in these methods. In this paper, these two tradeoffs, a primary and a secondary tradeoff between economy and reliability in the maximum-LOLP-constrained unit commitment (UC) model, are explored and analyzed in a small system and in the IEEE-RTS System. The analysis of the two tradeoffs can help in establishing new efficient simplified LOLP formulations and new SR optimization models.
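The intractability the abstract refers to arises because exact LOLP evaluation enumerates unit outage states. A brute-force sketch (our illustration; practical methods replace this exponential enumeration with capacity outage tables or the simplified formulations discussed in the paper):

```python
from itertools import product

def lolp(units, load):
    """Loss-of-load probability by exact enumeration of unit outage states.

    units: list of (capacity, forced_outage_rate) pairs; load: system demand.
    Exhaustive enumeration is exponential in the number of units, which is
    exactly the cost that motivates simplified LOLP formulations.
    """
    prob = 0.0
    for state in product([0, 1], repeat=len(units)):  # 1 = unit available
        p = 1.0
        cap = 0.0
        for (c, q), s in zip(units, state):
            p *= (1 - q) if s else q
            cap += c if s else 0.0
        if cap < load:  # available capacity cannot cover the load
            prob += p
    return prob
```

For two 100 MW units with a forced outage rate of 0.1 and a 150 MW load, loss of load occurs whenever either unit is out, giving LOLP = 1 - 0.9**2 = 0.19.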

  20. An Analysis of Once-per-revolution Oscillating Aerodynamic Thrust Loads on Single-Rotation Propellers on Tractor Airplanes at Zero Yaw

    NASA Technical Reports Server (NTRS)

Rogallo, Vernon L.; Yaggy, Paul F.; McCloud, John L., III

    1956-01-01

    A simplified procedure is shown for calculating the once-per-revolution oscillating aerodynamic thrust loads on propellers of tractor airplanes at zero yaw. The only flow field information required for the application of the procedure is a knowledge of the upflow angles at the horizontal center line of the propeller disk. Methods are presented whereby these angles may be computed without recourse to experimental survey of the flow field. The loads computed by the simplified procedure are compared with those computed by a more rigorous method and the procedure is applied to several airplane configurations which are believed typical of current designs. The results are generally satisfactory.

  1. Detection and electrical characterization of hidden layers using time-domain analysis of terahertz reflections

    NASA Astrophysics Data System (ADS)

    Geltner, I.; Hashimshony, D.; Zigler, A.

    2002-07-01

    We use a time-domain analysis method to characterize the outer layer of a multilayer structure regardless of the inner ones, thus simplifying the characterization of all the layers. We combine this method with THz reflection spectroscopy to detect nondestructively a hidden aluminum oxide layer under opaque paint and to measure its conductivity and high-frequency dielectric constant in the THz range.

  2. Using recurrence plot analysis for software execution interpretation and fault detection

    NASA Astrophysics Data System (ADS)

    Mosdorf, M.

    2015-09-01

This paper shows a method targeted at software execution interpretation and fault detection using recurrence plot analysis. In the proposed approach, recurrence plot analysis is applied to a software execution trace that contains the executed assembly instructions. Results of this analysis are then processed with the PCA (Principal Component Analysis) method, which reduces the number of coefficients used for software execution classification. The method was used for the analysis of five algorithms: Bubble Sort, Quick Sort, Median Filter, FIR, SHA-1. Results show that some of the collected traces could be easily assigned to particular algorithms (logs from the Bubble Sort and FIR algorithms) while others are more difficult to distinguish.
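A minimal sketch of the two stages, a thresholded recurrence matrix followed by an SVD-based PCA projection, might look like this (our illustration; the paper's feature extraction details are not given in the abstract):

```python
import numpy as np

def recurrence_matrix(series, eps):
    """Binary recurrence plot: R[i, j] = 1 when |x_i - x_j| < eps.
    For an execution trace, the series could be numeric instruction codes."""
    x = np.asarray(series, dtype=float)
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(float)

def pca_features(matrices, n_components=2):
    """Flatten recurrence plots and project onto the leading principal
    components (computed via SVD), giving a compact signature per trace."""
    X = np.stack([m.ravel() for m in matrices])
    X = X - X.mean(axis=0)  # center before extracting principal axes
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:n_components].T
```

Each trace is thus reduced to a handful of PCA coordinates, and classification amounts to comparing those coordinates across algorithms.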

  3. [Evaluation of a simplified index (spectral entropy) about sleep state of electrocardiogram recorded by a simplified polygraph, MemCalc-Makin2].

    PubMed

    Ohisa, Noriko; Ogawa, Hiromasa; Murayama, Nobuki; Yoshida, Katsumi

    2010-02-01

Polysomnography (PSG) is the gold standard for the diagnosis of sleep apnea hypopnea syndrome (SAHS), but PSG analysis takes time, and PSG cannot be performed repeatedly because of the effort and cost involved. Simplified sleep respiratory disorder indices that reflect the PSG results are therefore needed. The MemCalc method, a combination of the maximum entropy method for spectral analysis and the non-linear least squares method for fitting analysis (Makin2, Suwa Trust, Tokyo, Japan), has recently been developed. Spectral entropy derived by the MemCalc method may be useful for expressing the trend of time-series behavior. Spectral entropy of the ECG calculated with the MemCalc method was evaluated by comparison with PSG results. In obstructive SAHS patients (n = 79) and control volunteers (n = 7), the ECG was recorded using MemCalc-Makin2 (GMS) together with PSG recording using Alice IV (Respironics) from 20:00 to 6:00. Spectral entropy of the ECG, calculated every 2 seconds using the MemCalc method, was compared with the sleep stages analyzed manually from the PSG recordings. Spectral entropy values were significantly increased in the OSAHS patients compared with the controls (-0.473 vs. -0.418, p < 0.05). For an entropy cutoff level of -0.423, sensitivity and specificity for OSAHS were 86.1% and 71.4%, respectively, yielding a receiver operating characteristic curve with an area under the curve of 0.837. The absolute value of entropy was inversely correlated with stage 3 sleep. Spectral entropy calculated with the MemCalc method may thus be a possible index for evaluating the quality of sleep.
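Spectral entropy itself can be illustrated with a simple normalized Shannon entropy over a power spectrum (our sketch; the MemCalc-derived index may differ in sign and scaling, as the paper's negative values suggest):

```python
import math

def spectral_entropy(power_spectrum):
    """Normalized Shannon entropy of a power spectrum.

    The spectrum is normalized to a probability distribution; the entropy is
    then scaled by log(N), so a flat (noise-like) spectrum gives 1 and a
    single pure tone gives 0. (Illustrative definition only.)
    """
    total = sum(power_spectrum)
    p = [s / total for s in power_spectrum if s > 0]  # drop empty bins
    h = -sum(pi * math.log(pi) for pi in p)
    return h / math.log(len(power_spectrum))
```

Intuitively, disordered breathing perturbs heart-rate variability, spreading spectral power across frequencies and shifting the entropy of the ECG spectrum, which is what makes it a candidate screening index.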

  4. Analysis of simplified heat transfer models for thermal property determination of nano-film by TDTR method

    NASA Astrophysics Data System (ADS)

    Wang, Xinwei; Chen, Zhe; Sun, Fangyuan; Zhang, Hang; Jiang, Yuyan; Tang, Dawei

    2018-03-01

    Heat transfer in nanostructures is of critical importance for a wide range of applications such as functional materials and thermal management of electronics. Time-domain thermoreflectance (TDTR) has proved to be a reliable measurement technique for determining the thermal properties of nanoscale structures. However, it is difficult to determine more than three thermal properties at the same time. Simplifying the heat transfer model reduces the number of fitting variables and provides an alternative route to thermal property determination. In this paper, two simplified models are investigated and analyzed by the transform matrix method and simulations. TDTR measurements are performed on Al-SiO2-Si samples with different SiO2 thicknesses. Both theoretical and experimental results show that the simplified tri-layer model (STM) is reliable and suitable for thin-film samples over a wide range of thicknesses. Furthermore, the STM can also extract the intrinsic thermal conductivity and interfacial thermal resistance from serial samples with different thicknesses.

  5. Generalized fictitious methods for fluid-structure interactions: Analysis and simulations

    NASA Astrophysics Data System (ADS)

    Yu, Yue; Baek, Hyoungsu; Karniadakis, George Em

    2013-07-01

    We present a new fictitious pressure method for fluid-structure interaction (FSI) problems in incompressible flow by generalizing the fictitious mass and damping methods we published previously in [1]. The fictitious pressure method involves modification of the fluid solver whereas the fictitious mass and damping methods modify the structure solver. We analyze all fictitious methods for simplified problems and obtain explicit expressions for the optimal reduction factor (convergence rate index) at the FSI interface [2]. This analysis also demonstrates an apparent similarity of fictitious methods to the FSI approach based on Robin boundary conditions, which have been found to be very effective in FSI problems. We implement all methods, including the semi-implicit Robin based coupling method, in the context of spectral element discretization, which is more sensitive to temporal instabilities than low-order methods. However, the methods we present here are simple and general, and hence applicable to FSI based on any other spatial discretization. In numerical tests, we verify the selection of optimal values for the fictitious parameters for simplified problems and for vortex-induced vibrations (VIV) even at zero mass ratio ("for-ever-resonance"). We also develop an empirical a posteriori analysis for complex geometries and apply it to 3D patient-specific flexible brain arteries with aneurysms for very large deformations. We demonstrate that the fictitious pressure method enhances stability and convergence, and is comparable or better in most cases to the Robin approach or the other fictitious methods.

  6. A simplified and powerful image processing method to separate Thai jasmine rice and sticky rice varieties

    NASA Astrophysics Data System (ADS)

    Khondok, Piyoros; Sakulkalavek, Aparporn; Suwansukho, Kajpanya

    2018-03-01

    A simplified and powerful image processing procedure to separate paddy of KHAW DOK MALI 105 (Thai jasmine rice) from paddy of the sticky rice variety RD6 is proposed. The procedure consists of image thresholding, image chain coding, and curve fitting with a polynomial function. From the fitting, three parameters were calculated for each variety: perimeter, area, and eccentricity. Finally, the overall parameters were combined using principal component analysis. The results show that the procedure can effectively separate the two varieties.
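    A sketch of the three shape parameters (area, perimeter, eccentricity) computed from a thresholded grain mask. The toy masks and the moment-based eccentricity are illustrative assumptions, not the authors' chain-code implementation:

    ```python
    import numpy as np

    def shape_features(mask):
        """Area, perimeter, and eccentricity of a binary grain mask."""
        area = mask.sum()
        # Perimeter: foreground pixels with at least one 4-connected background neighbour.
        padded = np.pad(mask, 1)
        interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                    padded[1:-1, :-2] & padded[1:-1, 2:])
        perimeter = (mask & ~interior).sum()
        # Eccentricity from second central moments of the pixel coordinates.
        ys, xs = np.nonzero(mask)
        evals = np.sort(np.linalg.eigvalsh(np.cov(np.vstack([ys, xs]))))
        ecc = np.sqrt(1 - evals[0] / evals[1]) if evals[1] > 0 else 0.0
        return np.array([area, perimeter, ecc])

    # Two hypothetical grain masks: a long thin grain vs. a rounder one.
    long_grain = np.zeros((20, 20), dtype=bool)
    long_grain[9:11, 2:18] = True
    round_grain = np.zeros((20, 20), dtype=bool)
    round_grain[6:14, 6:14] = True
    print(shape_features(long_grain), shape_features(round_grain))
    ```

    Feature vectors like these, one per grain, are what a PCA projection would then separate into variety clusters.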

  7. Analysis of time-of-flight spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gibson, E.M.; Foxon, C.T.; Zhang, J.

    1990-07-01

    A simplified method of data analysis for time-of-flight measurements of the velocity of molecular beam sources is described. This method does not require the complex data fitting previously used in such studies. The method is applied to the study of Pb molecular beams from a true Knudsen source and has been used to show that a VG Quadrupoles SXP300H mass spectrometer, when fitted with an open cross-beam ionizer, acts as an ideal density detector over a wide range of operating conditions.

  8. Simplified MPN method for enumeration of soil naphthalene degraders using gaseous substrate.

    PubMed

    Wallenius, Kaisa; Lappi, Kaisa; Mikkonen, Anu; Wickström, Annika; Vaalama, Anu; Lehtinen, Taru; Suominen, Leena

    2012-02-01

    We describe a simplified microplate most-probable-number (MPN) procedure to quantify the bacterial naphthalene-degrader population in soil samples. In this method, the sole substrate, naphthalene, is dosed passively via the gaseous phase to the liquid medium, and detection of growth is based on automated measurement of turbidity using an absorbance reader. The performance of the new method was evaluated by comparison with a recently introduced method in which the substrate is dissolved in inert silicone oil and added individually to each well, and the results are scored visually using a respiration indicator dye. Oil-contaminated industrial soil showed a slightly but significantly higher MPN estimate with our method than with the reference method. This suggests that gaseous naphthalene was dissolved at an adequate concentration to support the growth of naphthalene degraders without being too toxic. Dosing the substrate via the gaseous phase notably reduced the workload and the risk of contamination. Result scoring by absorbance measurement was objective and more reliable than scoring with the indicator dye, and it also enabled further analysis of the cultures. Several bacterial genera were identified by cloning and sequencing of 16S rRNA genes from the MPN wells incubated in the presence of gaseous naphthalene. In addition, the applicability of the simplified MPN method was demonstrated by a significant positive correlation between the level of oil contamination and the number of naphthalene degraders detected in soil.
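    The MPN estimate behind such a microplate assay is the maximum-likelihood solution for the organism density given the positive-well counts, under the usual Poisson assumption. A sketch; the plate layout and counts below are hypothetical, not the paper's data:

    ```python
    import math

    def mpn_mle(volumes, n_wells, n_positive, lo=1e-6, hi=1e4):
        """Most-probable-number (organisms per mL) by maximum likelihood.

        volumes[i]: sample volume (mL) per well at dilution i. Assumes at least
        one well somewhere is negative; otherwise the estimate runs to `hi`."""
        def score(lam):
            # Derivative of the log-likelihood in lambda; decreasing in lambda.
            s = 0.0
            for v, n, p in zip(volumes, n_wells, n_positive):
                if p:
                    s += p * v * math.exp(-lam * v) / (1.0 - math.exp(-lam * v))
                s -= (n - p) * v
            return s
        for _ in range(200):                 # geometric bisection over decades
            mid = math.sqrt(lo * hi)
            if score(mid) > 0:
                lo = mid
            else:
                hi = mid
        return math.sqrt(lo * hi)

    # Hypothetical 3-dilution, 8-well plate read by turbidity.
    est = mpn_mle(volumes=[0.1, 0.01, 0.001], n_wells=[8, 8, 8], n_positive=[8, 5, 1])
    print(round(est))
    ```

    Classical MPN tables are precomputed values of exactly this estimator for standard plate layouts.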

  9. Simplified Models for the Study of Postbuckled Hat-Stiffened Composite Panels

    NASA Technical Reports Server (NTRS)

    Vescovini, Riccardo; Davila, Carlos G.; Bisagni, Chiara

    2012-01-01

    The postbuckling response and failure of multistringer stiffened panels is analyzed using models with three levels of approximation. The first model uses a relatively coarse mesh to capture the global postbuckling response of a five-stringer panel. The second model can predict the nonlinear response as well as the debonding and crippling failure mechanisms in a single stringer compression specimen (SSCS). The third model consists of a simplified version of the SSCS that is designed to minimize the computational effort. The simplified model is well-suited to perform sensitivity analyses for studying the phenomena that lead to structural collapse. In particular, the simplified model is used to obtain a deeper understanding of the role played by geometric and material modeling parameters such as mesh size, inter-laminar strength, fracture toughness, and fracture mode mixity. Finally, a global/local damage analysis method is proposed in which a detailed local model is used to scan the global model to identify the locations that are most critical for damage tolerance.

  10. Simplified Discontinuous Galerkin Methods for Systems of Conservation Laws with Convex Extension

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    1999-01-01

    Simplified forms of the space-time discontinuous Galerkin (DG) and discontinuous Galerkin least-squares (DGLS) finite element method are developed and analyzed. The new formulations exploit simplifying properties of entropy endowed conservation law systems while retaining the favorable energy properties associated with symmetric variable formulations.

  11. Numerical Approximation of Elasticity Tensor Associated With Green-Naghdi Rate.

    PubMed

    Liu, Haofei; Sun, Wei

    2017-08-01

    Objective stress rates are often used in commercial finite element (FE) programs. However, deriving a consistent tangent modulus tensor (also known as the elasticity tensor or material Jacobian) associated with the objective stress rates is challenging when complex material models are utilized. In this paper, an approximation method for the tangent modulus tensor associated with the Green-Naghdi rate of the Kirchhoff stress is employed to simplify the evaluation process. The effectiveness of the approach is demonstrated through the implementation of two user-defined fiber-reinforced hyperelastic material models. Comparisons between the approximation method and the closed-form analytical method demonstrate that the former can simplify the material Jacobian evaluation with satisfactory accuracy while retaining its computational efficiency. Moreover, since the approximation method is independent of the material model, it can facilitate the implementation of complex material models in FE analysis using shell/membrane elements in Abaqus.
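    The central-difference idea behind such numerical tangent approximations can be sketched as follows. The compressible neo-Hookean model, its constants, and the plain perturbation of F are illustrative stand-ins; the paper's method builds the Jacobian specifically for the Green-Naghdi rate, which this sketch does not reproduce:

    ```python
    import numpy as np

    def kirchhoff_stress(F, mu=1.0, lam=1.0):
        """Kirchhoff stress for a compressible neo-Hookean model (illustrative constants)."""
        J = np.linalg.det(F)
        b = F @ F.T                            # left Cauchy-Green tensor
        return mu * (b - np.eye(3)) + lam * np.log(J) * np.eye(3)

    def numerical_tangent(F, eps=1e-6):
        """Approximate d(tau)/dF component-wise by central differences."""
        C = np.zeros((3, 3, 3, 3))
        for k in range(3):
            for l in range(3):
                dF = np.zeros((3, 3))
                dF[k, l] = eps
                C[:, :, k, l] = (kirchhoff_stress(F + dF)
                                 - kirchhoff_stress(F - dF)) / (2 * eps)
        return C

    F = np.eye(3)
    F[0, 0] = 1.1                              # uniaxial stretch along x
    C = numerical_tangent(F)
    # For this diagonal F, d(tau_00)/dF_00 = 2*mu*F_00 + lam/F_00 analytically.
    print(C[0, 0, 0, 0])
    ```

    The appeal, as in the paper, is that only stress evaluations are needed: no closed-form derivatives of the material model.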

  12. An approximate methods approach to probabilistic structural analysis

    NASA Technical Reports Server (NTRS)

    Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.

    1989-01-01

    A probabilistic structural analysis method (PSAM) is described which makes an approximate calculation of the structural response of a system, including the associated probabilistic distributions, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The method employs the fast probability integration (FPI) algorithm of Wu and Wirsching. Typical solution strategies are illustrated by formulations for a representative critical component chosen from the Space Shuttle Main Engine (SSME) as part of a major NASA-sponsored program on PSAM. Typical results are presented to demonstrate the role of the methodology in engineering design and analysis.

  13. The analysis of Stability reliability of Qian Tang River seawall

    NASA Astrophysics Data System (ADS)

    Wu, Xue-Xiong

    2017-11-01

    The Qiantang River seawall, weakened by high-water soaking and foreshore scour, is prone to overall slope instability during low tide. Considering the random variation of beach scour in front of the seawall, the simplified Bishop method is used, combined with the variability of the soil mechanics parameters, to calculate and analyze the overall stability of the Xiasha seawall segments of the Qiantang River.
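    The simplified Bishop method referenced above solves a fixed-point equation for the factor of safety, since the slice correction factor itself depends on the answer. A sketch with a made-up slice geometry; all soil parameters below are illustrative:

    ```python
    import math

    def bishop_fs(slices, c, phi_deg, tol=1e-6):
        """Factor of safety by the simplified Bishop method (fixed-point iteration).

        slices: list of (W, alpha_deg, b) = slice weight per unit length,
        base inclination in degrees, and slice width."""
        tan_phi = math.tan(math.radians(phi_deg))
        fs = 1.0
        for _ in range(100):
            num = den = 0.0
            for W, alpha_deg, b in slices:
                a = math.radians(alpha_deg)
                m = math.cos(a) * (1.0 + math.tan(a) * tan_phi / fs)
                num += (c * b + W * tan_phi) / m     # available resisting moment
                den += W * math.sin(a)               # driving moment
            fs_new = num / den
            if abs(fs_new - fs) < tol:
                return fs_new
            fs = fs_new
        return fs

    # Hypothetical five-slice cross-section (kN/m, degrees, m).
    slices = [(50, -10, 2), (120, 5, 2), (160, 20, 2), (130, 35, 2), (60, 50, 2)]
    print(round(bishop_fs(slices, c=10.0, phi_deg=25.0), 3))
    ```

    A reliability analysis like the one in this record would sample c and phi from their distributions and look at the probability that the resulting factor of safety falls below 1.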

  14. Technology Overview for Advanced Aircraft Armament System Program.

    DTIC Science & Technology

    1981-05-01

    availability of methods or systems for improving stores and armament safety. Of particular importance are aspects of safety involving hazards analysis ...flutter virtually insensitive to inertia and center-of-gravity location of store - Simplifies and reduces analysis and testing required to flutter-clear...status. Nearly every existing reliability analysis and discipline that promised a positive return on reliability performance was drawn out, dusted

  15. Multiscale methods for gore curvature calculations from FSI modeling of spacecraft parachutes

    NASA Astrophysics Data System (ADS)

    Takizawa, Kenji; Tezduyar, Tayfun E.; Kolesar, Ryan; Boswell, Cody; Kanai, Taro; Montel, Kenneth

    2014-12-01

    There are now some sophisticated and powerful methods for computer modeling of parachutes. These methods are capable of addressing some of the most formidable computational challenges encountered in parachute modeling, including fluid-structure interaction (FSI) between the parachute and air flow, design complexities such as those seen in spacecraft parachutes, and operational complexities such as use in clusters and disreefing. One should be able to extract from a reliable full-scale parachute modeling any data or analysis needed. In some cases, however, the parachute engineers may want to perform quickly an extended or repetitive analysis with methods based on simplified models. Some of the data needed by a simplified model can very effectively be extracted from a full-scale computer modeling that serves as a pilot. A good example of such data is the circumferential curvature of a parachute gore, where a gore is the slice of the parachute canopy between two radial reinforcement cables running from the parachute vent to the skirt. We present the multiscale methods we devised for gore curvature calculation from FSI modeling of spacecraft parachutes. The methods include those based on the multiscale sequentially-coupled FSI technique and using NURBS meshes. We show how the methods work for the fully-open and two reefed stages of the Orion spacecraft main and drogue parachutes.

  16. Determination of post-shakedown quantities of a pipe bend via the simplified theory of plastic zones compared with load history dependent incremental analysis

    NASA Astrophysics Data System (ADS)

    Vollrath, Bastian; Hübel, Hartwig

    2018-01-01

    The Simplified Theory of Plastic Zones (STPZ) may be used to determine post-shakedown quantities such as strain ranges and accumulated strains at plastic or elastic shakedown. The principles of the method are summarized. Its practical applicability is shown by the example of a pipe bend subjected to constant internal pressure along with cyclic in-plane bending or/and cyclic radial temperature gradient. The results are compared with incremental analyses performed step-by-step throughout the entire load history until the state of plastic shakedown is achieved.

  17. Simplified method for calculating shear deflections of beams.

    Treesearch

    I. Orosz

    1970-01-01

    When one designs with wood, shear deflections can become substantial compared to deflections due to moments, because the modulus of elasticity in bending differs from that in shear by a large amount. This report presents a simplified energy method to calculate shear deflections in bending members. This simplified approach should help designers decide whether or not...

  18. 48 CFR 13.305-4 - Procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Section 13.305-4 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisition Methods 13.305-4... purchase requisition, contracting officer verification statement, or other agency approved method of...

  19. 3DHZETRN: Inhomogeneous Geometry Issues

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.

    2017-01-01

    Historical methods for assessing radiation exposure inside complicated geometries for space applications were limited by computational constraints and lack of knowledge associated with nuclear processes occurring over a broad range of particles and energies. Various methods were developed and utilized to simplify geometric representations and enable coupling with simplified but efficient particle transport codes. Recent transport code development efforts, leading to 3DHZETRN, now enable such approximate methods to be carefully assessed to determine if past exposure analyses and validation efforts based on those approximate methods need to be revisited. In this work, historical methods of representing inhomogeneous spacecraft geometry for radiation protection analysis are first reviewed. Two inhomogeneous geometry cases, previously studied with 3DHZETRN and Monte Carlo codes, are considered with various levels of geometric approximation. Fluence, dose, and dose equivalent values are computed in all cases and compared. It is found that although these historical geometry approximations can induce large errors in neutron fluences up to 100 MeV, errors on dose and dose equivalent are modest (<10%) for the cases studied here.

  20. A case study by life cycle assessment

    NASA Astrophysics Data System (ADS)

    Li, Shuyun

    2017-05-01

    This article assesses the potential environmental impact of an electrical grinder over its life cycle. The Life Cycle Inventory (LCI) analysis was conducted based on Simplified Life Cycle Assessment (SLCA) drivers calculated from the Valuation of Social Cost and Simplified Life Cycle Assessment Model (VSSM); the detailed LCI results can be found in Appendix II. The Life Cycle Impact Assessment was performed using the Eco-indicator 99 method. Hotspot analysis indicated that the major contributor accounts for over 60% of the overall SLCA output, of which 60% of the emissions resulted from the logistics required for the maintenance activities. Sensitivity analysis showed that changing the fuel type results in a significant decrease in the environmental footprint. The environmental benefit of recycling can also be seen in the negative output values of the recycling activities. Through this Life Cycle Assessment, the potential environmental impact of the electrical grinder was investigated.

  1. PARTIAL RESTRAINING FORCE INTRODUCTION METHOD FOR DESIGNING CONSTRUCTION COUNTERMEASURE ON ΔB METHOD

    NASA Astrophysics Data System (ADS)

    Nishiyama, Taku; Imanishi, Hajime; Chiba, Noriyuki; Ito, Takao

    Landslide or slope failure is a three-dimensional movement phenomenon, so a three-dimensional treatment makes its stability easier to understand. The ΔB method (a simplified three-dimensional slope stability analysis method) is based on the limit equilibrium method and amounts to an approximate three-dimensional slope stability analysis that extends two-dimensional cross-section stability analysis results to assess stability. The analysis can be conducted using conventional spreadsheets or two-dimensional slope stability software. This paper describes the concept of the partial restraining force introduction method for designing construction countermeasures using the distribution of the restraining force along survey lines, which is based on the distribution of survey-line safety factors derived from the above analysis. The paper also presents the transverse distributive method of restraining force used for planning ground stabilization, on the basis of an example analysis.

  2. A simplified method for numerical simulation of gas grilling of non-intact beef steaks to eliminate Escherichia coli O157:H7

    USDA-ARS?s Scientific Manuscript database

    The objective of this work was to develop a numerical simulation method to study gas grilling of non-intact beef steaks (NIBS) and evaluate the effectiveness of grilling on inactivation of Escherichia coli O157:H7. A numerical analysis program was developed to determine the effective heat transfer ...

  3. Improved Design of Tunnel Supports : Volume 1 : Simplified Analysis for Ground-Structure Interaction in Tunneling

    DOT National Transportation Integrated Search

    1980-06-01

    The purpose of this report is to provide the tunneling profession with improved practical tools in the technical or design area, which provide more accurate representations of the ground-structure interaction in tunneling. The design methods range fr...

  4. FDTD modeling of thin impedance sheets

    NASA Technical Reports Server (NTRS)

    Luebbers, Raymond J.; Kunz, Karl S.

    1991-01-01

    Thin sheets of resistive or dielectric material are commonly encountered in radar cross section calculations. Analysis of such sheets is simplified by using sheet impedances. In this paper it is shown that sheet impedances can be modeled easily and accurately using Finite Difference Time Domain (FDTD) methods.
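    A common sanity check used to validate such FDTD sheet models is the textbook normal-incidence result for a resistive sheet in free space, obtained by treating the sheet as a shunt impedance on a transmission line. This is a standard formula, not the paper's code:

    ```python
    # Normal-incidence reflection/transmission of a thin resistive sheet in free space.
    ETA0 = 376.73  # free-space wave impedance, ohms

    def sheet_coefficients(zs):
        """Reflection and transmission coefficients for surface impedance zs (ohms/sq)."""
        gamma = -ETA0 / (ETA0 + 2.0 * zs)   # sheet as shunt impedance across the line
        tau = 1.0 + gamma                   # field continuity across the thin sheet
        return gamma, tau

    g, t = sheet_coefficients(ETA0 / 2.0)
    absorbed = 1.0 - g**2 - t**2
    print(g, t, absorbed)   # a matched (eta0/2) sheet absorbs half the incident power
    ```

    An FDTD run with the sheet-impedance update can be checked against these coefficients over frequency before tackling full radar cross section geometries.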

  5. BRST quantization of cosmological perturbations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armendariz-Picon, Cristian; Şengör, Gizem

    2016-11-08

    BRST quantization is an elegant and powerful method to quantize theories with local symmetries. In this article we study the Hamiltonian BRST quantization of cosmological perturbations in a universe dominated by a scalar field, along with the closely related quantization method of Dirac. We describe how both formalisms apply to perturbations in a time-dependent background, and how expectation values of gauge-invariant operators can be calculated in the in-in formalism. Our analysis focuses mostly on the free theory. By appropriate canonical transformations we simplify and diagonalize the free Hamiltonian. BRST quantization in derivative gauges allows us to dramatically simplify the structure of the propagators, whereas Dirac quantization, which amounts to quantization in synchronous gauge, dispenses with the need to introduce ghosts and preserves the locality of the gauge-fixed action.

  6. NECAP 4.1: NASA's energy-cost analysis program user's manual

    NASA Technical Reports Server (NTRS)

    Jensen, R. N.; Henninger, R. H.; Miner, D. L.

    1983-01-01

    The Energy Cost Analysis Program (NECAP) is a powerful computerized method to determine and to minimize building energy consumption. The program calculates hourly heat gains or losses, taking into account the building's thermal resistance and mass, using hourly weather data and a "response factor" method. Internal temperatures are allowed to vary in accordance with thermostat settings and equipment capacity. A simplified input procedure and numerous other technical improvements are presented. This User's Manual describes the program and provides examples.

  7. Design and Strength check of Large Blow Molding Machine Rack

    NASA Astrophysics Data System (ADS)

    Fei-fei, GU; Zhi-song, ZHU; Xiao-zhao, YAN; Yi-min, ZHU

    The design procedure for a large blow moulding machine rack is discussed in this article, and a strength-checking method is presented. Finite element analysis is conducted within the design procedure using ANSYS software. The actual load-bearing situation of the rack is fully considered, and the necessary model simplifications are made. The three-dimensional linear beam element BEAM188 is used for the analysis, and MESH200 is used for meshing, which simplifies the analysis process and improves computational efficiency. The maximum deformation of the rack is 8.037 mm, occurring at the position of the accumulator head. This meets the national standard that the deflection be no greater than 0.3% of the total channel length, and it also meets the strength requirement, the maximum stress being 54.112 MPa.

  8. Probabilistic Structures Analysis Methods (PSAM) for select space propulsion system components

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The basic formulation for probabilistic finite element analysis is described and demonstrated on a few sample problems. This formulation is based on iterative perturbation that uses the factorized stiffness of the unperturbed system as the iteration preconditioner for obtaining the solution to the perturbed problem. This approach eliminates the need to compute, store, and manipulate explicit partial derivatives of the element matrices and force vector, which not only reduces memory usage considerably, but also greatly simplifies the coding and validation tasks. All aspects of the proposed formulation were combined in a demonstration problem using a simplified model of a curved turbine blade discretized with 48 shell elements, and having random pressure and temperature fields with partial correlation, random uniform thickness, and random stiffness at the root.
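    The iterative perturbation scheme described above can be sketched on a toy system. The random stiffness perturbation stands in for element-level randomness, and the explicit inverse stands in for the stored factorization that would be reused across samples:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 6

    # Unperturbed stiffness K0 (symmetric positive definite) and a small random
    # perturbation dK, standing in for one sample of the random fields.
    A = rng.normal(size=(n, n))
    K0 = A @ A.T + n * np.eye(n)
    B = 0.05 * rng.normal(size=(n, n))
    dK = B + B.T
    f = rng.normal(size=n)

    # "Factorize" K0 once; a precomputed inverse stands in for the stored factors.
    K0_inv = np.linalg.inv(K0)

    # Iterative perturbation: correct x using K0 as preconditioner. No derivatives
    # of the element matrices are ever formed.
    x = K0_inv @ f
    for _ in range(50):
        r = f - (K0 + dK) @ x
        x = x + K0_inv @ r
        if np.linalg.norm(r) < 1e-12:
            break

    print(np.allclose(x, np.linalg.solve(K0 + dK, f)))
    ```

    The iteration converges whenever the perturbation is small relative to K0, which is exactly the regime the perturbation formulation targets; each new random sample reuses the same factorization.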

  9. Identification and characterization of unrecognized viruses in stool samples of non-polio acute flaccid paralysis children by simplified VIDISCA.

    PubMed

    Shaukat, Shahzad; Angez, Mehar; Alam, Muhammad Masroor; Jebbink, Maarten F; Deijs, Martin; Canuti, Marta; Sharif, Salmaan; de Vries, Michel; Khurshid, Adnan; Mahmood, Tariq; van der Hoek, Lia; Zaidi, Syed Sohail Zahoor

    2014-08-12

    The use of sequence-independent methods combined with next generation sequencing for identification purposes in clinical samples appears promising, and exciting results have been achieved in understanding unexplained infections. One sequence-independent method, Virus Discovery based on cDNA Amplified Fragment Length Polymorphism (VIDISCA), is capable of identifying viruses that would remain unidentified in standard diagnostics or cell cultures. VIDISCA is normally combined with next generation sequencing; however, we set up a simplified VIDISCA that can be used when next generation sequencing is not possible. Stool samples of 10 patients with unexplained acute flaccid paralysis showing cytopathic effect in rhabdomyosarcoma and/or mouse cells were used to test the efficiency of the method. To further characterize the viruses, VIDISCA-positive samples were amplified and sequenced with gene-specific primers. Simplified VIDISCA detected seven viruses (70%), and the proportion of eukaryotic viral sequences in each sample ranged from 8.3 to 45.8%. Human enterovirus EV-B97, EV-B100, echovirus-9 and echovirus-21, human parechovirus type-3, human astrovirus (probably a type-3/5 recombinant), and tetnovirus-1 were identified. Phylogenetic analysis based on the VP1 region demonstrated that the human enteroviruses are divergent isolates circulating in the community. Our data support that a simplified VIDISCA protocol can efficiently identify unrecognized viruses grown in cell culture at low cost and in limited time, without the need for advanced technical expertise. Complex data interpretation is also avoided, so the method can be used as a powerful diagnostic tool in resource-limited settings. Redesigning routine diagnostics along these lines might lead to additional detection of previously undiagnosed viruses in clinical samples.

  10. Simplified Model and Response Analysis for Crankshaft of Air Compressor

    NASA Astrophysics Data System (ADS)

    Chao-bo, Li; Jing-jun, Lou; Zhen-hai, Zhang

    2017-11-01

    The original crankshaft model is simplified to an appropriate degree to balance calculation precision and speed, and the finite element method is then used to analyze the vibration response of the structure. To study the simplification and the stress concentration in the air compressor crankshaft, this paper compares the calculated and experimental modal frequencies of the crankshaft before and after simplification, calculates the vibration response at a reference point under the constraint conditions using the simplified model, and calculates the stress distribution of the original model. The results show that the error between the calculated and experimental modal frequencies is kept below 7%, that the constraints change the modal density of the system, and that stress concentration appears at the junction between the crank arm and the shaft, so this part of the crankshaft should be treated carefully during manufacture.

  11. RAPID AND SIMPLIFIED HPLC METHOD WITH UV DETECTION, PH CONTROL AND SELECTIVE DECHLORINATOR FOR CYANURIC ACID ANALYSIS IN WATER

    EPA Science Inventory

    Cyanuric acid (CA) and chloroisocyanurates are commonly used as standard ingredients in formulations for household bleaches, industrial cleansers, dishwasher compounds, general sanitizers, and chlorine stabilizers. They are very well known for preventing the photolytic decomposi...

  12. A VLSI architecture for simplified arithmetic Fourier transform algorithm

    NASA Technical Reports Server (NTRS)

    Reed, Irving S.; Shih, Ming-Tang; Truong, T. K.; Hendon, E.; Tufts, D. W.

    1992-01-01

    The arithmetic Fourier transform (AFT) is a number-theoretic approach to Fourier analysis which has been shown to perform competitively with the classical FFT in terms of accuracy, complexity, and speed. Theorems developed in a previous paper for the AFT algorithm are used here to derive the original AFT algorithm which Bruns found in 1903. This is shown to yield an algorithm of less complexity and of improved performance over certain recent AFT algorithms. A VLSI architecture is suggested for this simplified AFT algorithm. This architecture uses a butterfly structure which reduces the number of additions by 25 percent relative to the direct method.

  13. Preliminary phenomena identification and ranking tables for simplified boiling water reactor Loss-of-Coolant Accident scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroeger, P.G.; Rohatgi, U.S.; Jo, J.H.

    1998-04-01

    For three potential Loss-of-Coolant Accident (LOCA) scenarios in the General Electric Simplified Boiling Water Reactors (SBWR) a set of Phenomena Identification and Ranking Tables (PIRT) is presented. The selected LOCA scenarios are typical for the class of small and large breaks generally considered in Safety Analysis Reports. The method used to develop the PIRTs is described. Following is a discussion of the transient scenarios, the PIRTs are presented and discussed in detailed and in summarized form. A procedure for future validation of the PIRTs, to enhance their value, is outlined. 26 refs., 25 figs., 44 tabs.

  14. Analysis of low-marbled Hanwoo cow meat aged with different dry-aging methods

    PubMed Central

    Lee, Hyun Jung; Choe, Juhui; Kim, Kwan Tae; Oh, Jungmin; Lee, Da Gyeom; Kwon, Ki Moon; Choi, Yang Il; Jo, Cheorun

    2017-01-01

    Objective Different dry-aging methods [traditional dry-aging (TD), simplified dry-aging (SD), and SD in an aging bag (SDB)] were compared to investigate the possible use of SD and/or SDB in practical situations. Methods Sirloins from 48 Hanwoo cows were frozen (Control, 2 days postmortem) or dry-aged for 28 days using the different aging methods and analyzed for chemical composition, total aerobic bacterial count, shear force, inosine 5′-monophosphate (IMP) and free amino acid content, and sensory properties. Results The difference in chemical composition, total aerobic bacterial count, shear force, IMP, and total free amino acid content were negligible among the 3 dry-aged groups. The SD and SDB showed statistically similar tenderness, flavor, and overall acceptability relative to TD. However, SDB had a relatively higher saleable yield. Conclusion Both SD and SDB can successfully substitute for TD. However, SDB would be the best option for simplified dry-aging of low-marbled beef with a relatively high saleable yield. PMID:28728384

  15. Software Defined Network Monitoring Scheme Using Spectral Graph Theory and Phantom Nodes

    DTIC Science & Technology

    2014-09-01

    networks is the emergence of software - defined networking ( SDN ) [1]. SDN has existed for the...Chapter III for network monitoring. A. SOFTWARE DEFINED NETWORKS SDNs provide a new and innovative method to simplify network hardware by logically...and R. Giladi, “Performance analysis of software - defined networking ( SDN ),” in Proc. of IEEE 21st International Symposium on Modeling, Analysis

  16. Simplified Microarray Technique for Identifying mRNA in Rare Samples

    NASA Technical Reports Server (NTRS)

    Almeida, Eduardo; Kadambi, Geeta

    2007-01-01

    Two simplified methods of identifying messenger ribonucleic acid (mRNA), and compact, low-power apparatuses to implement the methods, are at the proof-of-concept stage of development. These methods are related to traditional methods based on nucleic acid hybridization, but whereas the traditional methods must be practiced in laboratory settings, these methods could be practiced in field settings. Hybridization of nucleic acid is a powerful technique for detecting specific complementary nucleic acid sequences, and is increasingly being used to detect changes in gene expression in microarrays containing thousands of gene probes. A traditional microarray study entails at least the following six steps: 1. Purification of cellular RNA; 2. Amplification of complementary deoxyribonucleic acid (cDNA) by polymerase chain reaction (PCR); 3. Labeling of cDNA with the fluorophores Cy3 (a green cyanine dye) and Cy5 (a red cyanine dye); 4. Hybridization to a microarray chip; 5. Fluorescence scanning of the array(s) with dual excitation wavelengths; and 6. Analysis of the resulting images. This six-step procedure must be performed in a laboratory because it requires bulky equipment.

  17. A Simplified Method of Eliciting Information from Novices.

    ERIC Educational Resources Information Center

    Brandt, D. Scott; Uden, Lorna

    2002-01-01

    Discusses the use of applied cognitive task analysis (ACTA) to interview novices and gain insight into their cognitive skills and processes. Focuses particularly on novice Internet searchers at the University of Staffordshire (United Kingdom) and reviews attempts to modify ACTA, which is intended to gather information from experts as part of…

  18. A control-volume method for analysis of unsteady thrust augmenting ejector flows

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.

    1988-01-01

    A method for predicting transient thrust augmenting ejector characteristics is presented. The analysis blends classic self-similar turbulent jet descriptions with a control volume mixing region discretization to capture transient effects in a new way. Division of the ejector into an inlet, diffuser, and mixing region corresponds with the assumption of viscous-dominated phenomena in the latter. Inlet and diffuser analyses are simplified by a quasi-steady analysis, justified by the assumption that pressure is the forcing function in those regions. Details of the theoretical foundation, the solution algorithm, and sample calculations are given.

  19. PROOF OF CONCEPT FOR A HUMAN RELIABILITY ANALYSIS METHOD FOR HEURISTIC USABILITY EVALUATION OF SOFTWARE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring; David I. Gertman; Jeffrey C. Joe

    2005-09-01

    An ongoing issue within human-computer interaction (HCI) is the need for simplified or “discount” methods. The current economic slowdown has necessitated innovative methods that are results driven and cost effective. The myriad methods of design and usability are currently being cost-justified, and new techniques are actively being explored that meet current budgets and needs. Recent efforts in human reliability analysis (HRA) are highlighted by the ten-year development of the Standardized Plant Analysis Risk HRA (SPAR-H) method. The SPAR-H method has been used primarily for determining human-centered risk at nuclear power plants. The SPAR-H method, however, shares task analysis underpinnings with HCI. Despite this methodological overlap, there is currently no HRA approach deployed in heuristic usability evaluation. This paper presents an extension of the existing SPAR-H method to be used as part of heuristic usability evaluation in HCI.

  20. Economic impact of simplified de Gramont regimen in first-line therapy in metastatic colorectal cancer.

    PubMed

    Limat, Samuel; Bracco-Nolin, Claire-Hélène; Legat-Fagnoni, Christine; Chaigneau, Loic; Stein, Ulrich; Huchet, Bernard; Pivot, Xavier; Woronoff-Lemsi, Marie-Christine

    2006-06-01

    The cost of chemotherapy has dramatically increased in advanced colorectal cancer patients, and the schedule of fluorouracil administration appears to be a determining factor. This retrospective study compared direct medical costs related to two different de Gramont schedules (standard vs. simplified) given in first-line chemotherapy with oxaliplatin or irinotecan. This cost-minimization analysis was performed from the French Health System perspective. Consecutive unselected patients treated in first-line therapy by LV5FU2 de Gramont with oxaliplatin (Folfox regimen) or with irinotecan (Folfiri regimen) were enrolled. Hospital and outpatient resources related to chemotherapy and adverse events were collected from 1999 to 2004 in 87 patients. Overall cost was reduced in the simplified regimen. The major factor explaining the cost saving was the lower need for admissions for chemotherapy. The amount of cost saving depended on the method for assessing hospital stay. In patients treated by the Folfox regimen, the per diem and DRG methods found cost savings of Euro 1,997 and Euro 5,982 according to the studied schedules; in patients treated by the Folfiri regimen, cost savings of Euro 4,773 and Euro 7,274 were observed, respectively. In addition, travel costs were also reduced by simplified regimens. The robustness of our results was shown by one-way sensitivity analyses. These findings demonstrate that the simplified de Gramont schedule reduces costs of current first-line chemotherapy in advanced colorectal cancer. Interestingly, our study showed several differences in costs between the two costing approaches to hospital stay: average per diem and DRG costs. These results suggested that the standard regimen may be considered a profitable strategy from the hospital perspective. The opposition between the health system perspective and the hospital perspective is worth examining and may affect daily practice. In conclusion, our study shows that the simplified de Gramont schedule in combination with oxaliplatin or irinotecan is an attractive option from the French Health System perspective. This safe and less costly regimen must be compared with alternative options such as oral fluoropyrimidines.

  1. [The subject matters concerned with use of simplified analytical systems from the perspective of the Japanese Association of Medical Technologists].

    PubMed

    Morishita, Y

    2001-05-01

    The following points concerning the use of so-called simplified analytical systems are raised from the perspective of a laboratory technician, with a view to their effective utilization. 1. Data from simplified analytical systems should agree with those of designated reference methods, so that discrepancies do not arise between results from different laboratories. 2. The accuracy of results measured with simplified analytical systems is difficult to verify thoroughly and correctly using quality-control surveillance procedures based on stored pooled serum or partly processed blood. 3. Guidelines are needed on the content of the evaluations required to guarantee the quality of simplified analytical systems. 4. Maintenance and manual operation of simplified analytical systems should be standardized between laboratory technicians and vendor technicians. 5. Attention is further drawn to the fact that simplified analytical systems cost considerably more than routine methods using liquid reagents. 6. It is also hoped that various substances in human serum, such as cytokines, hormones, tumor markers, and vitamins, will become measurable with simplified analytical systems.

  2. Notification: Methods for Procuring Supplies and Services Under Simplified Acquisition Procedures

    EPA Pesticide Factsheets

    Project #OA-FY15-0193, June 18, 2015. The EPA OIG plans to begin the preliminary research phase of auditing the methods used in procuring supplies and services under simplified acquisition procedures.

  3. Comparison of matrix effects in HPLC-MS/MS and UPLC-MS/MS analysis of nine basic pharmaceuticals in surface waters.

    PubMed

    Van De Steene, Jet C; Lambert, Willy E

    2008-05-01

    When developing an LC-MS/MS method, matrix effects are a major issue. The effect of co-eluting compounds arising from the matrix can result in signal enhancement or suppression. During method development much attention should be paid to diminishing matrix effects as much as possible. The present work evaluates matrix effects from aqueous environmental samples in the simultaneous analysis of a group of 9 specific pharmaceuticals with HPLC-ESI/MS/MS and UPLC-ESI/MS/MS: flubendazole, propiconazole, pipamperone, cinnarizine, ketoconazole, miconazole, rabeprazole, itraconazole and domperidone. When HPLC-MS/MS is used, matrix effects are substantial and cannot be compensated for with analogue internal standards. For different surface water samples different matrix effects are found. For accurate quantification the standard addition approach is necessary. Due to the better resolution and narrower peaks in UPLC, analytes co-elute less with interferences during ionisation, so matrix effects could be lower, or even eliminated. If matrix effects are eliminated with this technique, the standard addition method for quantification can be omitted and the overall method will be simplified. Results show that matrix effects are almost eliminated if internal standards (structural analogues) are used. Instead of the time-consuming and labour-intensive standard addition method, with UPLC internal standardization can be used for quantification and the overall method is substantially simplified.
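Matrix effects of the kind discussed above are commonly quantified by comparing the analyte response in matrix against the response in pure solvent, and an internal standard compensates only when its matrix effect tracks the analyte's. A minimal sketch of that bookkeeping; all peak areas and the tolerance are illustrative assumptions, not values from the study:

```python
def matrix_effect_pct(area_in_matrix, area_in_solvent):
    """Matrix effect as a percentage of the solvent response:
    100% = no effect, <100% = ion suppression, >100% = enhancement."""
    return 100.0 * area_in_matrix / area_in_solvent

def is_compensated(analyte_me, internal_std_me, tolerance=10.0):
    """The internal standard compensates when its matrix effect tracks
    the analyte's within a tolerance (percentage points, assumed here)."""
    return abs(analyte_me - internal_std_me) <= tolerance

# Hypothetical peak areas for one surface-water sample
me_analyte = matrix_effect_pct(7.2e5, 9.0e5)  # ~80% -> suppression
me_istd = matrix_effect_pct(3.9e5, 5.0e5)     # ~78% -> similar suppression
compensated = is_compensated(me_analyte, me_istd)
```

With a structural-analogue internal standard the two percentages stay close, which is the condition under which internal standardization can replace standard addition.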

  4. Determination of the succinonitrile-benzene and succinonitrile-cyclohexanol phase diagrams by thermal and UV spectroscopic analysis

    NASA Technical Reports Server (NTRS)

    Kaukler, W. F.; Frazier, D. O.; Facemire, B.

    1984-01-01

    Equilibrium temperature-composition diagrams were determined for the two organic systems, succinonitrile-benzene and succinonitrile-cyclohexanol. Measurements were made using the common thermal analysis methods and UV spectrophotometry. Succinonitrile-benzene monotectic was chosen for its low affinity for water and because UV analysis would be simplified. Succinonitrile-cyclohexanol was chosen because both components are transparent models for metallic solidification, as opposed to the other known succinonitrile-based monotectics.

  5. 48 CFR 13.304 - [Reserved]

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false [Reserved] 13.304 Section 13.304 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisition Methods 13.304 [Reserved] ...

  6. Structural analysis for preliminary design of High Speed Civil Transport (HSCT)

    NASA Technical Reports Server (NTRS)

    Bhatia, Kumar G.

    1992-01-01

    In the preliminary design environment, there is a need for quick evaluation of configuration and material concepts. The simplified beam representations used in the subsonic, high aspect ratio wing platform are not applicable for low aspect ratio configurations typical of supersonic transports. There is a requirement to develop methods for efficient generation of structural arrangement and finite element representation to support multidisciplinary analysis and optimization. In addition, empirical data bases required to validate prediction methods need to be improved for high speed civil transport (HSCT) type configurations.

  7. Efficient Adaptive FIR and IIR Filters.

    DTIC Science & Technology

    1979-12-01

    Squared) algorithm. An analysis of the simplified gradient approach is presented and confirmed experimentally for the specific example of an adaptive line…surface. The generation of the reference signal is a key consideration in adaptive filter implementation. There are various practical methods as

  8. Concept for a fast analysis method of the energy dissipation at mechanical joints

    NASA Astrophysics Data System (ADS)

    Wolf, Alexander; Brosius, Alexander

    2017-10-01

    When designing hybrid parts and structures one major challenge is the design, production and quality assessment of the joining points. While the polymeric composites themselves have excellent material properties, the necessary joints are often the weak link in assembled structures. This paper presents a method of measuring and analysing the energy dissipation at mechanical joining points of hybrid parts. A simplified model is applied based on the characteristic response to different excitation frequencies and amplitudes. The dissipation from damage is the result of relative movements between joining partners and damaged fibres within the composite, whereas the visco-elastic material behaviour causes the intrinsic dissipation. The ambition is to transfer these research findings to the characterisation of mechanical joints in order to quickly assess the general quality of the joint with this non-destructive testing method. The inherent challenge for realising this method is the correct interpretation of the measured energy dissipation and its attribution to either a bad joining point or intrinsic material properties. In this paper the authors present the concept for energy dissipation measurements at different joining points. By inverse analysis a simplified fast semi-analytical model will be developed that allows for a quick basic quality assessment of a given joining point.

  9. 26 CFR 1.199-4 - Costs allocable to domestic production gross receipts.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... using the simplified deduction method. Paragraph (f) of this section provides a small business... taxpayer for internal management or other business purposes; whether the method is used for other Federal... than a taxpayer that uses the small business simplified overall method of paragraph (f) of this section...

  10. Accuracy of a simplified method for shielded gamma-ray skyshine sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bassett, M.S.; Shultis, J.K.

    1989-11-01

    Rigorous transport or Monte Carlo methods for estimating far-field gamma-ray skyshine doses generally are computationally intensive. Consequently, several simplified techniques such as point-kernel methods and methods based on beam response functions have been proposed. For unshielded skyshine sources, these simplified methods have been shown to be quite accurate from comparisons to benchmark problems and to benchmark experimental results. For shielded sources, the simplified methods typically use exponential attenuation and photon buildup factors to describe the effect of the shield. However, the energy and directional redistribution of photons scattered in the shield is usually ignored, i.e., scattered photons are assumed to emerge from the shield with the same energy and direction as the uncollided photons. The accuracy of this shield treatment is largely unknown due to the paucity of benchmark results for shielded sources. In this paper, the validity of such a shield treatment is assessed by comparison to a composite method, which accurately calculates the energy and angular distribution of photons penetrating the shield.
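The simplified shield treatment being assessed (exponential attenuation times a photon buildup factor, with scattered photons keeping the uncollided energy and direction) reduces to a one-line kernel. A sketch with assumed illustrative numbers, not values from the benchmark study:

```python
import math

def shielded_dose_rate(unshielded_dose, mu, t, buildup):
    """Point-kernel style shield correction: buildup factor times
    exponential attenuation through a shield of thickness t."""
    return unshielded_dose * buildup * math.exp(-mu * t)

# Assumed values: 5 cm shield, attenuation coefficient 0.15 /cm, buildup factor 2.0
d = shielded_dose_rate(unshielded_dose=1.0, mu=0.15, t=5.0, buildup=2.0)
```

The approximation the paper questions is exactly the single energy-independent `buildup` factor: it corrects the magnitude of the transmitted dose but not the energy or angular distribution of the scattered photons.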

  11. Research study on stabilization and control: Modern sampled data control theory

    NASA Technical Reports Server (NTRS)

    Kuo, B. C.; Singh, G.; Yackel, R. A.

    1973-01-01

    A numerical analysis of spacecraft stability parameters was conducted. The analysis is based on a digital approximation by point by point state comparison. The technique used is that of approximating a continuous data system by a sampled data model by comparison of the states of the two systems. Application of the method to the digital redesign of the simplified one axis dynamics of the Skylab is presented.

  12. [Consideration about chemistry, manufacture and control (CMC) key problems in simplified registration of classical traditional Chinese medicine excellent prescriptions].

    PubMed

    Wang, Zhi-Min; Liu, Ju-Yan; Liu, Xiao-Qian; Wang, De-Qin; Yan, Li-Hua; Zhu, Jin-Jin; Gao, Hui-Min; Li, Chun; Wang, Jin-Yu; Li, Chu-Yuan; Ni, Qing-Chun; Huang, Ji-Sheng; Lin, Juan

    2017-05-01

    As an outstanding representative of traditional Chinese medicine (TCM) prescriptions accumulated from famous TCM doctors' clinical experiences in past dynasties, classical TCM excellent prescriptions (cTCMeP) are the most valuable part of the TCM system. To support the research and development of cTCMeP, a series of regulations and measures were issued to encourage its simplified registration. There is still a long way to go because many key problems and puzzles about technology, registration and administration in the cTCMeP R&D process are not resolved. Based on an analysis of the registration and management regulations of botanical drug products in the FDA of the USA, in Japan, and in the EMA of Europe, the possible key problems and countermeasures in chemistry, manufacture and control (CMC) of simplified registration of cTCMeP were analyzed in consideration of its actual situation. The method of "reference decoction extract by traditional prescription" (RDETP) was first proposed as a standard to evaluate the quality and preparation uniformity between the new product under simplified registration and the traditional original usage of cTCMeP, instead of the Standard Decoction method used in Japan. The "totality of the evidence" approach, mass balance, and bioassay/biological assay of cTCMeP are strongly suggested for introduction into the quality uniformity evaluation system for the raw drug material, drug substance and final product, comparing the modern product with the traditional decoction. Copyright© by the Chinese Pharmaceutical Association.

  13. 48 CFR 13.302 - Purchase orders.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Purchase orders. 13.302 Section 13.302 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisition Methods 13.302 Purchase...

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Majda, Andrew J.; Xing, Yulong; Mohammadian, Majid

    Determining the finite-amplitude preconditioned states in the hurricane embryo, which lead to tropical cyclogenesis, is a central issue in contemporary meteorology. In the embryo there is competition between different preconditioning mechanisms involving hydrodynamics and moist thermodynamics, which can lead to cyclogenesis. Here systematic asymptotic methods from applied mathematics are utilized to develop new simplified moist multi-scale models starting from the moist anelastic equations. Three interesting multi-scale models emerge in the analysis. The balanced mesoscale vortex (BMV) dynamics and the microscale balanced hot tower (BHT) dynamics involve simplified balanced equations without gravity waves for vertical vorticity amplification due to moist heat sources and incorporate nonlinear advective fluxes across scales. The BMV model is the central one for tropical cyclogenesis in the embryo. The moist mesoscale wave (MMW) dynamics involves simplified equations for mesoscale moisture fluctuations, as well as linear hydrostatic waves driven by heat sources from moisture and eddy flux divergences. A simplified cloud physics model for deep convection is introduced here and used to study moist axisymmetric plumes in the BHT model. A simple application in periodic geometry involving the effects of mesoscale vertical shear and moist microscale hot towers on vortex amplification is developed here to illustrate features of the coupled multi-scale models. These results illustrate the use of these models in isolating key mechanisms in the embryo in a simplified context.

  15. Application of the variational-asymptotical method to composite plates

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Lee, Bok W.; Atilgan, Ali R.

    1992-01-01

    A method is developed for the 3D analysis of laminated plate deformation which is an extension of a variational-asymptotical method by Atilgan and Hodges (1991). Both methods are based on the treatment of plate deformation by splitting the 3D analysis into linear through-the-thickness analysis and 2D plate analysis. Whereas the first technique tackles transverse shear deformation in the second asymptotical approximation, the present method simplifies its treatment and restricts it to the first approximation. Both analytical techniques are applied to the linear cylindrical bending problem, and the strain and stress distributions are derived and compared with those of the exact solution. The present theory provides more accurate results than those of the classical laminated-plate theory for the transverse displacement of 2-, 3-, and 4-layer cross-ply laminated plates. The method can give reliable estimates of the in-plane strain and displacement distributions.

  16. A simplified fragility analysis of fan type cable stayed bridges

    NASA Astrophysics Data System (ADS)

    Khan, R. A.; Datta, T. K.; Ahmad, S.

    2005-06-01

    A simplified fragility analysis of fan type cable stayed bridges using the Probabilistic Risk Analysis (PRA) procedure is presented for determining their failure probability under random ground motion. Seismic input to the bridge support is considered to be a risk-consistent response spectrum which is obtained from a separate analysis. For the response analysis, the bridge deck is modeled as a beam supported on springs at different points. The stiffnesses of the springs are determined by a separate 2D static analysis of the cable-tower-deck system. The analysis provides a coupled stiffness matrix for the spring system. A continuum method of analysis using dynamic stiffness is used to determine the dynamic properties of the bridges. The response of the bridge deck is obtained by the response spectrum method of analysis as applied to a multi-degree-of-freedom system, which duly takes into account the quasi-static component of bridge deck vibration. The fragility analysis includes uncertainties arising due to the variation in ground motion, material property, modeling, method of analysis, ductility factor and damage concentration effect. Probability of failure of the bridge deck is determined by the First Order Second Moment (FOSM) method of reliability. A three span double plane symmetrical fan type cable stayed bridge of total span 689 m is used as an illustrative example. The fragility curves for the bridge deck failure are obtained under a number of parametric variations. Some of the important conclusions of the study indicate that (i) not only the vertical component but also the horizontal component of ground motion has considerable effect on the probability of failure; (ii) ground motion with no time lag between support excitations provides a smaller probability of failure as compared to ground motion with very large time lag between support excitations; and (iii) the probability of failure may increase considerably under soft soil conditions.
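The FOSM reliability step mentioned above reduces, for independent normally distributed resistance R and load effect S, to a reliability index beta and a tail probability. A generic sketch of that calculation; the distribution parameters are invented for illustration, not properties of the example bridge:

```python
import math

def fosm_failure_probability(mu_r, sigma_r, mu_s, sigma_s):
    """First Order Second Moment reliability for independent normal R and S:
    beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2), Pf = Phi(-beta)."""
    beta = (mu_r - mu_s) / math.sqrt(sigma_r**2 + sigma_s**2)
    pf = 0.5 * math.erfc(beta / math.sqrt(2.0))  # standard normal CDF at -beta
    return beta, pf

# Assumed moments of resistance and load effect (arbitrary units)
beta, pf = fosm_failure_probability(mu_r=100.0, sigma_r=15.0, mu_s=60.0, sigma_s=20.0)
```

The parametric fragility curves in the study correspond to repeating this calculation while sweeping the load-effect moments derived from different ground-motion assumptions.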

  17. A Rapid PCR-RFLP Method for Monitoring Genetic Variation among Commercial Mushroom Species

    ERIC Educational Resources Information Center

    Martin, Presley; Muruke, Masoud; Hosea, Kenneth; Kivaisi, Amelia; Zerwas, Nick; Bauerle, Cynthia

    2004-01-01

    We report the development of a simplified procedure for restriction fragment length polymorphism (RFLP) analysis of mushrooms. We have adapted standard molecular techniques to be amenable to an undergraduate laboratory setting in order to allow students to explore basic questions about fungal diversity and relatedness among mushroom species. The…

  18. Applications of Response Surface-Based Methods to Noise Analysis in the Conceptual Design of Revolutionary Aircraft

    NASA Technical Reports Server (NTRS)

    Hill, Geoffrey A.; Olson, Erik D.

    2004-01-01

    Due to the growing problem of noise in today's air transportation system, there have arisen needs to incorporate noise considerations in the conceptual design of revolutionary aircraft. Through the use of response surfaces, complex noise models may be converted into polynomial equations for rapid and simplified evaluation. This conversion allows many of the commonly used response surface-based trade space exploration methods to be applied to noise analysis. This methodology is demonstrated using a noise model of a notional 300 passenger Blended-Wing-Body (BWB) transport. Response surfaces are created relating source noise levels of the BWB vehicle to its corresponding FAR-36 certification noise levels and the resulting trade space is explored. Methods demonstrated include: single point analysis, parametric study, an optimization technique for inverse analysis, sensitivity studies, and probabilistic analysis. Extended applications of response surface-based methods in noise analysis are also discussed.

  19. Analysis of low-marbled Hanwoo cow meat aged with different dry-aging methods.

    PubMed

    Lee, Hyun Jung; Choe, Juhui; Kim, Kwan Tae; Oh, Jungmin; Lee, Da Gyeom; Kwon, Ki Moon; Choi, Yang Il; Jo, Cheorun

    2017-12-01

    Different dry-aging methods [traditional dry-aging (TD), simplified dry-aging (SD), and SD in an aging bag (SDB)] were compared to investigate the possible use of SD and/or SDB in practical situations. Sirloins from 48 Hanwoo cows were frozen (Control, 2 days postmortem) or dry-aged for 28 days using the different aging methods and analyzed for chemical composition, total aerobic bacterial count, shear force, inosine 5'-monophosphate (IMP) and free amino acid content, and sensory properties. The differences in chemical composition, total aerobic bacterial count, shear force, IMP, and total free amino acid content were negligible among the 3 dry-aged groups. The SD and SDB showed statistically similar tenderness, flavor, and overall acceptability relative to TD. However, SDB had a relatively higher saleable yield. Both SD and SDB can successfully substitute for TD. However, SDB would be the best option for simplified dry-aging of low-marbled beef with a relatively high saleable yield.

  20. 48 CFR 1313.302 - Purchase orders.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Purchase orders. 1313.302 Section 1313.302 Federal Acquisition Regulations System DEPARTMENT OF COMMERCE CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisitions Methods 1313.302 Purchase orders. ...

  1. 48 CFR 813.302 - Purchase orders.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Purchase orders. 813.302 Section 813.302 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisition Methods 813.302 Purchase...

  2. 48 CFR 1413.305 - Imprest fund.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Imprest fund. 1413.305 Section 1413.305 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisition Methods 1413.305 Imprest fund. ...

  3. 48 CFR 1413.305 - Imprest fund.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Imprest fund. 1413.305 Section 1413.305 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisition Methods 1413.305 Imprest fund. ...

  4. An improved adaptive weighting function method for State Estimation in Power Systems with VSC-MTDC

    NASA Astrophysics Data System (ADS)

    Zhao, Kun; Yang, Xiaonan; Lang, Yansheng; Song, Xuri; Wang, Minkun; Luo, Yadi; Wu, Lingyun; Liu, Peng

    2017-04-01

    This paper presents an effective approach to state estimation in power systems that include multi-terminal voltage source converter based high voltage direct current (VSC-MTDC), called the improved adaptive weighting function method. The proposed approach is simplified in that the VSC-MTDC system is solved first, followed by the AC system, because the new state estimation method only changes the weights and keeps the matrix dimension unchanged. Accurate and fast convergence of the AC/DC system can be achieved with the adaptive weighting function method. This method also provides technical support for simulation analysis and accurate regulation of AC/DC systems. Both theoretical analysis and numerical tests verify the practicability, validity and convergence of the new method.
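The key idea, that only the measurement weights change while the matrix dimension is preserved, can be caricatured with a scalar toy problem: redundant measurements of one state, with a generic residual-based reweighting. This illustrates adaptive weighting in general, not the authors' exact scheme, and all numbers are invented:

```python
def wls_estimate(z, w):
    """Weighted least squares for the scalar model z_i = x + e_i:
    x_hat = sum(w_i * z_i) / sum(w_i)."""
    return sum(wi * zi for wi, zi in zip(w, z)) / sum(w)

def adaptive_reweight(z, w, x_hat, c=3.0, sigma=0.01):
    """Shrink the weight of measurements with large residuals; the
    weight vector keeps its dimension, only its values change."""
    return [wi if abs(zi - x_hat) <= c * sigma
            else wi * (c * sigma / abs(zi - x_hat))
            for wi, zi in zip(w, z)]

z = [1.02, 1.01, 1.00, 1.30]  # assumed data; the last entry is a bad measurement
w = [1.0, 1.0, 1.0, 1.0]
for _ in range(5):            # alternate estimation and reweighting
    x_hat = wls_estimate(z, w)
    w = adaptive_reweight(z, w, x_hat)
```

After a few iterations the bad measurement is effectively suppressed and the estimate settles near the consistent cluster, without the estimation problem changing size.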

  5. A Simplified Approach to Job Analysis. Part 2. The Means of Validation

    ERIC Educational Resources Information Center

    Thomas, D. B.; Costley, J. M.

    1969-01-01

    A representative of the Royal Air Force School of Education and a Field Training Advisor to the Civil Air Transport Industry Training Board continue the description of their simplified approach to job analysis. (LY)

  6. Simplified data reduction methods for the ECT test for mode 3 interlaminar fracture toughness

    NASA Technical Reports Server (NTRS)

    Li, Jian; Obrien, T. Kevin

    1995-01-01

    Simplified expressions for the parameter controlling the load point compliance and strain energy release rate were obtained for the Edge Crack Torsion (ECT) specimen for mode 3 interlaminar fracture toughness. Data reduction methods for mode 3 toughness based on the present analysis are proposed. The effect of the transverse shear modulus, G(sub 23), on mode 3 interlaminar fracture toughness characterization was evaluated. Parameters influenced by the transverse shear modulus were identified. Analytical results indicate that a higher value of G(sub 23) results in a lower load point compliance and a lower mode 3 toughness estimate. The effect of G(sub 23) on the mode 3 toughness using the ECT specimen is negligible when an appropriate initial delamination length is chosen. A conservative estimate of mode 3 toughness can be obtained by assuming G(sub 23) = G(sub 12) for any initial delamination length.
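Compliance-based data reduction of the sort proposed here generally rests on the relation G = P^2/(2b) · dC/da between load, specimen width, and the slope of compliance versus delamination length. A generic sketch with invented specimen numbers (not the ECT geometry or values of the paper):

```python
def energy_release_rate(P, b, dC_da):
    """Generic compliance-calibration form: G = P^2 / (2 b) * dC/da."""
    return P**2 * dC_da / (2.0 * b)

# Assumed values: 500 N load, 38 mm wide specimen,
# compliance slope 1e-6 (m/N)/m fitted from C vs. delamination length
G = energy_release_rate(P=500.0, b=0.038, dC_da=1e-6)  # J/m^2
```

The simplified expressions in the paper play the role of `dC_da` here: they let the compliance slope be computed analytically instead of being fitted experimentally for each layup.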

  7. Sounding rocket thermal analysis techniques applied to GAS payloads. [Get Away Special payloads (STS)

    NASA Technical Reports Server (NTRS)

    Wing, L. D.

    1979-01-01

    Simplified analytical techniques of sounding rocket programs are suggested as a means of bringing the cost of thermal analysis of the Get Away Special (GAS) payloads within acceptable bounds. Particular attention is given to two methods adapted from sounding rocket technology - a method in which the container and payload are assumed to be divided in half vertically by a thermal plane of symmetry, and a method which considers the container and its payload to be an analogous one-dimensional unit having the real or correct container top surface area for radiative heat transfer and a fictitious mass and geometry which model the average thermal effects.
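The second technique above, treating container plus payload as a one-dimensional lumped unit with a radiating top surface, can be sketched as a single thermal node stepped explicitly in time. Every number here is an assumed placeholder, not a GAS program value:

```python
def lumped_node_step(T, dt, mass, cp, q_in, area, emissivity, T_env):
    """One explicit Euler step for a lumped thermal node: internal or
    absorbed heating q_in minus radiative exchange from the top surface."""
    sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
    q_rad = emissivity * sigma * area * (T**4 - T_env**4)
    return T + dt * (q_in - q_rad) / (mass * cp)

# Illustrative canister-like numbers (assumed): 90 kg, cp 900 J/(kg K),
# 50 W dissipation, 0.5 m^2 radiating top, 250 K effective environment
T = 290.0
for _ in range(3600):  # one hour at 1 s steps
    T = lumped_node_step(T, 1.0, mass=90.0, cp=900.0, q_in=50.0,
                         area=0.5, emissivity=0.8, T_env=250.0)
```

With these placeholder numbers the node cools slowly toward the equilibrium where radiated power balances dissipation, which is the kind of average thermal behavior the one-dimensional model is meant to capture.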

  8. NECAP 4.1: NASA's Energy-Cost Analysis Program fast input manual and example

    NASA Technical Reports Server (NTRS)

    Jensen, R. N.; Miner, D. L.

    1982-01-01

    NASA's Energy-Cost Analysis Program (NECAP) is a powerful computerized method to determine and to minimize building energy consumption. The program calculates hourly heat gains or losses, taking into account the building's thermal resistance and mass, using hourly weather data and a response factor method. Internal temperatures are allowed to vary in accordance with thermostat settings and equipment capacity. NECAP 4.1 has a simplified input procedure and numerous other technical improvements. A very short input method is provided; it is limited to a single-zone building. The user must still describe the building's outside geometry and select the type of system to be used.

  9. Simplified power control method for cellular mobile communication

    NASA Astrophysics Data System (ADS)

    Leung, Y. W.

    1994-04-01

    The centralized power control (CPC) method measures the gain of the communication links between every mobile and every base station in the cochannel cells and determines optimal transmitter power to maximize the minimum carrier-to-interference ratio. The authors propose a simplified power control method which has nearly the same performance as the CPC method but which involves much smaller measurement overhead.
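The quantity being optimized above is each link's carrier-to-interference ratio. A standard distributed relative of the centralized scheme (a Foschini-Miljanic style iteration toward a target CIR) illustrates the bookkeeping; the two-cell gain matrix, noise level, and target below are invented, and this is not the authors' simplified method itself:

```python
def cir(p, g, noise, i):
    """Carrier-to-interference ratio of link i: own-link gain times own
    power over interference from the other links plus receiver noise."""
    carrier = g[i][i] * p[i]
    interference = sum(g[i][j] * p[j] for j in range(len(p)) if j != i)
    return carrier / (interference + noise)

g = [[1.0, 0.1],
     [0.2, 1.0]]          # assumed link gains: g[i][j] = gain from tx j to rx i
noise, target = 0.01, 4.0
p = [1.0, 1.0]            # transmitter powers
for _ in range(50):       # each link scales its power toward the target CIR
    p = [p[i] * target / cir(p, g, noise, i) for i in range(len(p))]
```

When the target is feasible, the iteration converges to the minimum powers meeting it; the measurement overhead the simplified method reduces corresponds to populating the gain matrix `g`.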

  10. Simplified Calculation Model and Experimental Study of Latticed Concrete-Gypsum Composite Panels

    PubMed Central

    Jiang, Nan; Ma, Shaochun

    2015-01-01

    To address the complex behavior of the various constituent materials of (dense-column) latticed concrete-gypsum composite panels and the difficulty of determining their elastic constants, this paper presents a detailed structural analysis of the panel and proposes a feasible simplified calculation method. In conformity with mechanical rules, a typical panel element is selected and divided into two homogeneous composite sub-elements and a secondary homogeneous element, each solved separately, thereby establishing an equivalence between the composite panel and a simple homogeneous panel and yielding effective formulas for the elastic constants. Finally, the calculated and experimental results are compared, showing that the calculation method is correct and reliable, can meet the needs of practical engineering, and provides both a theoretical basis for simplified calculation in studies of composite panel elements and structures and a reference for calculations of other panels. PMID:28793631

  11. Calculation of Thermally-Induced Displacements in Spherically Domed Ion Engine Grids

    NASA Technical Reports Server (NTRS)

    Soulas, George C.

    2006-01-01

    An analytical method for predicting the thermally-induced normal and tangential displacements of spherically domed ion optics grids under an axisymmetric thermal loading is presented. A fixed edge support that could be thermally expanded is used for this analysis. Equations for the displacements both normal and tangential to the surface of the spherical shell are derived. A simplified equation for the displacement at the center of the spherical dome is also derived. The effects of plate perforation on displacements and stresses are determined by modeling the perforated plate as an equivalent solid plate with modified, or effective, material properties. Analytical model results are compared to the results from a finite element model. For the solid shell, comparisons showed that the analytical model produces results that closely match the finite element model results. The simplified equation for the normal displacement of the spherical dome center is also found to accurately predict this displacement. For the perforated shells, the analytical solution and simplified equation produce accurate results for materials with low thermal expansion coefficients.

  12. Simplified Calculation Model and Experimental Study of Latticed Concrete-Gypsum Composite Panels.

    PubMed

    Jiang, Nan; Ma, Shaochun

    2015-10-27

    To address the complex behavior of the various constituent materials of (dense-column) latticed concrete-gypsum composite panels and the difficulty of determining their elastic constants, this paper presents a detailed structural analysis of the panel and proposes a feasible simplified calculation method. In conformity with mechanical rules, a typical panel element is selected and divided into two homogeneous composite sub-elements and a secondary homogeneous element, each solved separately, thereby establishing an equivalence between the composite panel and a simple homogeneous panel and yielding effective formulas for the elastic constants. Finally, the calculated and experimental results are compared, showing that the calculation method is correct and reliable, can meet the needs of practical engineering, and provides both a theoretical basis for simplified calculation in studies of composite panel elements and structures and a reference for calculations of other panels.

  13. A Simplified Model for the Effect of Weld-Induced Residual Stresses on the Axial Ultimate Strength of Stiffened Plates

    NASA Astrophysics Data System (ADS)

    Chen, Bai-Qiao; Guedes Soares, C.

    2018-03-01

    The present work investigates the compressive axial ultimate strength of fillet-welded steel-plated ship structures subjected to uniaxial compression, in which the residual stresses in the welded plates are calculated by a thermo-elasto-plastic finite element analysis that is used to fit an idealized model of residual stress distribution. The numerical results of ultimate strength based on the simplified model of residual stress show good agreement with those of various methods including the International Association of Classification Societies (IACS) Common Structural Rules (CSR), leading to the conclusion that the simplified model can be effectively used to represent the distribution of residual stresses in steel-plated structures in a wide range of engineering applications. It is concluded that the widths of the tension zones in the welded plates have a quasi-linear behavior with respect to the plate slenderness. The effect of residual stress on the axial strength of the stiffened plate is analyzed and discussed.

  14. A simplified analysis of the multigrid V-cycle as a fast elliptic solver

    NASA Technical Reports Server (NTRS)

    Decker, Naomi H.; Taasan, Shlomo

    1988-01-01

    For special model problems, Fourier analysis gives exact convergence rates for the two-grid multigrid cycle and, for more general problems, provides estimates of the two-grid convergence rates via local mode analysis. A method is presented for obtaining multigrid convergence rate estimates for cycles involving more than two grids (using essentially the same analysis as for the two-grid cycle). For the simple case of the V-cycle used as a fast Laplace solver on the unit square, the k-grid convergence rate bounds obtained by this method are sharper than the bounds predicted by the variational theory. Both theoretical justification and experimental evidence are presented.
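
    As a minimal sketch of the kind of local mode (Fourier) analysis the paper builds on, assuming the standard weighted-Jacobi smoother for the 1-D Laplacian rather than the paper's k-grid setting:

```python
import numpy as np

# Local mode analysis sketch: for -u'' = f discretized with the standard
# 3-point stencil, weighted Jacobi has Fourier symbol 1 - w*(1 - cos(theta)).
# The smoothing factor is the worst damping over the high frequencies
# theta in [pi/2, pi], which the coarse grid cannot represent.
def smoothing_factor(w, n=100001):
    theta = np.linspace(np.pi / 2, np.pi, n)
    return np.max(np.abs(1.0 - w * (1.0 - np.cos(theta))))

mu = smoothing_factor(2.0 / 3.0)   # classic choice w = 2/3
```

    The classic result mu = 1/3 for w = 2/3 is the two-grid building block that k-grid estimates of the kind described above are assembled from.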

  15. Simplified procedure for computing the absorption of sound by the atmosphere

    DOT National Transportation Integrated Search

    2007-10-31

    This paper describes a study that resulted in the development of a simplified method for calculating attenuation by atmospheric absorption for wide-band sounds analyzed by one-third octave-band filters. The new method [referred to herein as the...

  16. A Quick and Easy Simplification of Benzocaine's NMR Spectrum

    NASA Astrophysics Data System (ADS)

    Carpenter, Suzanne R.; Wallace, Richard H.

    2006-04-01

    The preparation of benzocaine is a common experiment used in sophomore-level organic chemistry. Its straightforward procedure and predictable good yields make it ideal for the beginning organic student. Analysis of the product via NMR spectroscopy, however, can be confusing to the novice interpreter. An inexpensive, quick, and effective method for simplifying the NMR spectrum is reported. The method results in a spectrum that is cleanly integrated and more easily interpreted.

  17. New method for designing serial resonant power converters

    NASA Astrophysics Data System (ADS)

    Hinov, Nikolay

    2017-12-01

    This work presents a comprehensive method for the design of serial resonant energy converters. The method is based on a new simplified approach to the analysis of this class of power electronic devices: a resonant mode of operation is assumed when deriving the relation between input and output voltage, regardless of the actual operating mode (controlling frequency below or above the resonant frequency). The approach is named the `quasiresonant method of analysis', because it treats all operating modes as `sort of' resonant modes. The error introduced by this assumption is estimated and compared against the classic analysis. The quasiresonant method offers two main advantages, speed and simplicity, in the design of the presented power circuits. It is therefore very useful in practice and in teaching Power Electronics. Its applicability is demonstrated by mathematical modelling and computer simulation.

  18. Probabilistic finite elements for transient analysis in nonlinear continua

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Belytschko, T.; Mani, A.

    1985-01-01

    The probabilistic finite element method (PFEM), which is a combination of finite element methods and second-moment analysis, is formulated for linear and nonlinear continua with inhomogeneous random fields. Analogous to the discretization of the displacement field in finite element methods, the random field is also discretized. The formulation is simplified by transforming the correlated variables to a set of uncorrelated variables through an eigenvalue orthogonalization. Furthermore, it is shown that a reduced set of the uncorrelated variables is sufficient for the second-moment analysis. Based on the linear formulation of the PFEM, the method is then extended to transient analysis in nonlinear continua. The accuracy and efficiency of the method is demonstrated by application to a one-dimensional, elastic/plastic wave propagation problem. The moments calculated compare favorably with those obtained by Monte Carlo simulation. Also, the procedure is amenable to implementation in deterministic FEM based computer programs.
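
    The eigenvalue orthogonalization step can be sketched as follows; the covariance matrix and sample count below are illustrative values, not from the paper:

```python
import numpy as np

# Sketch (assumed setup, not the paper's code): decorrelate the discretized
# random field's variables via the eigendecomposition of their covariance,
# as in the eigenvalue orthogonalization used by the PFEM formulation.
rng = np.random.default_rng(0)
C = np.array([[2.0, 0.8, 0.3],
              [0.8, 1.5, 0.5],
              [0.3, 0.5, 1.0]])           # covariance of correlated variables
vals, vecs = np.linalg.eigh(C)            # C = V diag(vals) V^T

# Draw correlated samples, then rotate into uncorrelated coordinates.
L = np.linalg.cholesky(C)
x = L @ rng.standard_normal((3, 200000))  # correlated samples
y = vecs.T @ x                            # uncorrelated; cov(y) ~ diag(vals)
cov_y = np.cov(y)
```

    In the reduced second-moment analysis, only the components with the largest eigenvalues would be retained.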

  19. Quantitative method of medication system interface evaluation.

    PubMed

    Pingenot, Alleene Anne; Shanteau, James; Pingenot, James D F

    2007-01-01

    The objective of this study was to develop a quantitative method of evaluating the user interface for medication system software. A detailed task analysis provided a description of user goals and essential activity. A structural fault analysis was used to develop a detailed description of the system interface. Nurses experienced with use of the system under evaluation provided estimates of failure rates for each point in this simplified fault tree. Means of estimated failure rates provided quantitative data for fault analysis. Authors note that, although failures of steps in the program were frequent, participants reported numerous methods of working around these failures so that overall system failure was rare. However, frequent process failure can affect the time required for processing medications, making a system inefficient. This method of interface analysis, called Software Efficiency Evaluation and Fault Identification Method, provides quantitative information with which prototypes can be compared and problems within an interface identified.

  20. Boundedness and convergence of online gradient method with penalty for feedforward neural networks.

    PubMed

    Zhang, Huisheng; Wu, Wei; Liu, Fei; Yao, Mingchen

    2009-06-01

    In this brief, we consider an online gradient method with penalty for training feedforward neural networks. Specifically, the penalty is a term proportional to the norm of the weights. Its roles in the method are to control the magnitude of the weights and to improve the generalization performance of the network. By proving that the weights are automatically bounded in the network training with penalty, we simplify the conditions that are required for convergence of online gradient method in literature. A numerical example is given to support the theoretical analysis.
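
    A minimal sketch of the penalized online update (the toy single-unit network and constants are ours, not the paper's): the gradient step includes the term lambda*w from the penalty, which keeps the weight norm bounded during training:

```python
import numpy as np

# Online gradient descent with a weight-decay penalty for a single tanh unit.
# The penalty gradient lambda*w is added to the error gradient each step.
rng = np.random.default_rng(1)
w = rng.standard_normal(5)
eta, lam = 0.05, 0.1
norms = []
for _ in range(2000):
    x = rng.standard_normal(5)
    target = np.tanh(x.sum())              # toy target function
    out = np.tanh(w @ x)
    grad = (out - target) * (1.0 - out ** 2) * x   # chain rule for tanh unit
    w -= eta * (grad + lam * w)            # penalized online update
    norms.append(np.linalg.norm(w))
```

    The multiplicative shrinkage from the penalty, against a bounded error gradient, is what keeps the norms sequence bounded, which is the property the brief exploits to simplify the convergence conditions.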

  1. GOMA: functional enrichment analysis tool based on GO modules

    PubMed Central

    Huang, Qiang; Wu, Ling-Yun; Wang, Yong; Zhang, Xiang-Sun

    2013-01-01

    Analyzing the function of gene sets is a critical step in interpreting the results of high-throughput experiments in systems biology. A variety of enrichment analysis tools have been developed in recent years, but most output a long list of significantly enriched terms that are often redundant, making it difficult to extract the most meaningful functions. In this paper, we present GOMA, a novel enrichment analysis method based on the new concept of enriched functional Gene Ontology (GO) modules. With this method, we systematically revealed functional GO modules, i.e., groups of functionally similar GO terms, via an optimization model and then ranked them by enrichment scores. Our new method simplifies enrichment analysis results by reducing redundancy, thereby preventing inconsistent enrichment results among functionally similar terms and providing more biologically meaningful results. PMID:23237213

  2. Adopting exergy analysis for use in aerospace

    NASA Astrophysics Data System (ADS)

    Hayes, David; Lone, Mudassir; Whidborne, James F.; Camberos, José; Coetzee, Etienne

    2017-08-01

    Thermodynamic analysis methods, based on an exergy metric, have been developed to improve the system efficiency of traditional heat-driven systems such as ground-based power plants and aircraft propulsion systems. In more recent years, however, interest in the topic has broadened to applying these second-law methods to the field of aerodynamics and to complete aerospace vehicles. Work to date is based on highly simplified structures, but such methods could be shown to benefit the highly conservative and risk-averse commercial aerospace sector. This review argues that thermodynamic exergy analysis has the potential to facilitate a breakthrough in the optimization of aerospace vehicles, viewed as a system of energy systems, through the exergy-based multidisciplinary design of future flight vehicles.

  3. A Simplified Diagnostic Method for Elastomer Bond Durability

    NASA Technical Reports Server (NTRS)

    White, Paul

    2009-01-01

    A simplified method has been developed for determining bond durability under exposure to water or high-humidity conditions. It uses a small number of test specimens with relatively short times of water exposure at elevated temperature. The method is also gravimetric; the only equipment required is an oven, specimen jars, and a conventional laboratory balance.

  4. Quantification of 18F-Fluoride Kinetics: Evaluation of Simplified Methods.

    PubMed

    Raijmakers, Pieter; Temmerman, Olivier P P; Saridin, Carrol P; Heyligers, Ide C; Becking, Alfred G; van Lingen, Arthur; Lammertsma, Adriaan A

    2014-07-01

    (18)F-fluoride PET is a promising noninvasive method for measuring bone metabolism and bone blood flow. The purpose of this study was to assess the performance of various clinically useful simplified methods by comparing them with full kinetic analysis. In addition, the validity of deriving bone blood flow from K1 of (18)F-fluoride was investigated using (15)O-H2O as a reference. Twenty-two adults (mean age ± SD, 44.8 ± 25.2 y), including 16 patients scheduled for bone surgery and 6 healthy volunteers, were studied. All patients underwent dynamic (15)O-H2O and (18)F-fluoride scans before surgery. Ten of these patients had serial PET measurements before and at 2 time points after local bone surgery. During all PET scans, arterial blood was monitored continuously. (18)F-fluoride data were analyzed using nonlinear regression (NLR) and several simplified methods (Patlak and standardized uptake value [SUV]). SUV was evaluated for different time intervals after injection and after normalizing to body weight, lean body mass, and body surface area, and simplified measurements were compared with NLR results. In addition, changes in SUV and Patlak-derived fluoride influx rate (Ki) after surgery were compared with corresponding changes in NLR-derived Ki. Finally, (18)F-fluoride K1 was compared with bone blood flow derived from (15)O-H2O data, using the standard single-tissue-compartment model. K1 of (18)F-fluoride correlated with measured blood flow, but the correlation coefficient was relatively low (r = 0.35, P < 0.001). NLR resulted in a mean Ki of 0.0160 ± 0.0122, whereas Patlak analysis, for the interval 10-60 min after injection, resulted in an almost-identical mean Ki of 0.0161 ± 0.0117. The Patlak-derived Ki, for 10-60 min after injection, showed a high correlation with the NLR-derived Ki (r = 0.976). The highest correlation between Ki and lean body mass-normalized SUV was found for the interval 50-60 min (r = 0.958). 
Finally, changes in SUV correlated significantly with those in Ki (r = 0.97). The present data support the use of both Patlak and SUV for assessing fluoride kinetics in humans. However, (18)F-fluoride PET has only limited accuracy in monitoring bone blood flow. © 2014 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
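
    The Patlak analysis used here can be sketched on synthetic data (the input function and constants below are invented; only the linearization itself follows the standard Patlak form):

```python
import numpy as np

# Patlak analysis sketch: for irreversible tracer uptake, Ct(t)/Cp(t)
# plotted against integral(Cp)/Cp ("Patlak time") is linear in the late
# phase, with slope Ki, the influx rate constant.
t = np.linspace(0.1, 60.0, 120)                  # minutes
Cp = 10.0 * np.exp(-0.1 * t) + 1.0               # toy plasma input curve
icp = np.concatenate(([0.0],
                      np.cumsum(np.diff(t) * 0.5 * (Cp[1:] + Cp[:-1]))))
Ki_true, V0 = 0.016, 0.4                         # invented "true" values
Ct = Ki_true * icp + V0 * Cp                     # irreversible-uptake model

x = icp / Cp                                     # Patlak time
y = Ct / Cp
late = t >= 10.0                                 # fit the linear late phase
Ki_fit, intercept = np.polyfit(x[late], y[late], 1)
```

    The 10-60 min fitting window mirrors the interval the study found to agree best with nonlinear regression.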

  5. A simplified fuel control approach for low cost aircraft gas turbines

    NASA Technical Reports Server (NTRS)

    Gold, H.

    1973-01-01

    Reduction in the complexity of gas turbine fuel controls without loss of control accuracy, reliability, or effectiveness as a method for reducing engine costs is discussed. A description and analysis of a hydromechanical approach are presented. A computer simulation of the control mechanism is given, and the performance of a physical model in engine test is reported.

  6. Development and Evaluation of an Analytical Method for the Determination of Total Atmospheric Mercury. Final Report.

    ERIC Educational Resources Information Center

    Chase, D. L.; And Others

    Total mercury in ambient air can be collected in iodine monochloride, but the subsequent analysis is relatively complex and tedious, and contamination from reagents and containers is a problem. A silver wool collector, preceded by a catalytic pyrolysis furnace, gives good recovery of mercury and simplifies the analytical step. An instrumental…

  7. A simplified method for determining reactive rate parameters for reaction ignition and growth in explosives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, P.J.

    1996-07-01

    A simplified method for determining the reactive rate parameters for the ignition and growth model is presented. This simplified ignition and growth (SIG) method consists of only two adjustable parameters, the ignition (I) and growth (G) rate constants. The parameters are determined by iterating these variables in DYNA2D hydrocode simulations of the failure diameter and the gap test sensitivity until the experimental values are reproduced. Examples of four widely different explosives were evaluated using the SIG model. The observed embedded gauge stress-time profiles for these explosives are compared to those calculated by the SIG equation and the results are described.

  8. Analysis, Verification, and Application of Equations and Procedures for Design of Exhaust-pipe Shrouds

    NASA Technical Reports Server (NTRS)

    Ellerbrock, Herman H.; Wcislo, Chester R.; Dexter, Howard E.

    1947-01-01

    Investigations were made to develop a simplified method for designing exhaust-pipe shrouds to provide desired or maximum cooling of exhaust installations. Analysis of heat exchange and pressure drop of an adequate exhaust-pipe shroud system requires equations for predicting design temperatures and pressure drop on cooling air side of system. Present experiments derive such equations for usual straight annular exhaust-pipe shroud systems for both parallel flow and counter flow. Equations and methods presented are believed to be applicable under certain conditions to the design of shrouds for tail pipes of jet engines.

  9. Single-tube tetradecaplex panel of highly polymorphic microsatellite markers < 1 Mb from F8 for simplified preimplantation genetic diagnosis of hemophilia A.

    PubMed

    Zhao, M; Chen, M; Tan, A S C; Cheah, F S H; Mathew, J; Wong, P C; Chong, S S

    2017-07-01

    Essentials Preimplantation genetic diagnosis (PGD) of severe hemophilia A relies on linkage analysis. Simultaneous multi-marker screening can simplify selection of informative markers in a couple. We developed a single-tube tetradecaplex panel of polymorphic markers for hemophilia A PGD use. Informative markers can be used for linkage analysis alone or combined with mutation detection. Background It is currently not possible to perform single-cell preimplantation genetic diagnosis (PGD) to directly detect the common inversion mutations of the factor VIII (F8) gene responsible for severe hemophilia A (HEMA). As such, PGD for such inversion carriers relies on indirect analysis of linked polymorphic markers. Objectives To simplify linkage-based PGD of HEMA, we aimed to develop a panel of highly polymorphic microsatellite markers located near the F8 gene that could be simultaneously genotyped in a multiplex-PCR reaction. Methods We assessed the polymorphism of various microsatellite markers located ≤ 1 Mb from F8 in 177 female subjects. Highly polymorphic markers were selected for co-amplification with the AMELX/Y indel dimorphism in a single-tube reaction. Results Thirteen microsatellite markers located within 0.6 Mb of F8 were successfully co-amplified with AMELX/Y in a single-tube reaction. Observed heterozygosities of component markers ranged from 0.43 to 0.84, and ∼70-80% of individuals were heterozygous for ≥ 5 markers. The tetradecaplex panel successfully identified fully informative markers in a couple interested in PGD for HEMA because of an intragenic F8 point mutation, with haplotype phasing established through a carrier daughter. In-vitro fertilization (IVF)-PGD involved single-tube co-amplification of fully informative markers with AMELX/Y and the mutation-containing F8 amplicon, followed by microsatellite analysis and amplicon mutation-site minisequencing analysis. 
Conclusions The single-tube multiplex-PCR format of this highly polymorphic microsatellite marker panel simplifies identification and selection of informative markers for linkage-based PGD of HEMA. Informative markers can also be easily co-amplified with mutation-containing F8 amplicons for combined mutation detection and linkage analysis. © 2017 International Society on Thrombosis and Haemostasis.

  10. Quantitative characterization of galectin-3-C affinity mass spectrometry measurements: Comprehensive data analysis, obstacles, shortcuts and robustness.

    PubMed

    Haramija, Marko; Peter-Katalinić, Jasna

    2017-10-30

    Affinity mass spectrometry (AMS) is an emerging tool in the field of the study of protein•carbohydrate complexes. However, experimental obstacles and data analysis are preventing faster integration of AMS methods into the glycoscience field. Here we show how analysis of direct electrospray ionization mass spectrometry (ESI-MS) AMS data can be simplified for screening purposes, even for complex AMS spectra. A direct ESI-MS assay was tested in this study and binding data for the galectin-3C•lactose complex were analyzed using a comprehensive and simplified data analysis approach. In the comprehensive data analysis approach, noise, all protein charge states, alkali ion adducts and signal overlap were taken into account. In a simplified approach, only the intensities of the fully protonated free protein and the protein•carbohydrate complex for the main protein charge state were taken into account. In our study, for high intensity signals, noise was negligible, sodiated protein and sodiated complex signals cancelled each other out when calculating the Kd value, and signal overlap influenced the Kd value only to a minor extent. Influence of these parameters on low intensity signals was much higher. However, low intensity protein charge states should be avoided in quantitative AMS analyses due to poor ion statistics. The results indicate that noise, alkali ion adducts, signal overlap, as well as low intensity protein charge states, can be neglected for preliminary experiments, as well as in screening assays. One comprehensive data analysis performed as a control should be sufficient to validate this hypothesis for other binding systems as well. Copyright © 2017 John Wiley & Sons, Ltd.
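
    The simplified analysis, using only the fully protonated free-protein and complex intensities of one charge state, can be sketched as follows; the concentrations and the equal-response-factor assumption are illustrative, not the study's data:

```python
import numpy as np

# Direct ESI-MS Kd sketch (assumes equal response factors for P and PL):
# the intensity ratio R = I(PL)/I(P) gives the bound fraction, from which
# Kd = [P][L]/[PL] follows.
def kd_from_ratio(R, P0, L0):
    PL = R / (1.0 + R) * P0            # bound protein concentration
    return (P0 - PL) * (L0 - PL) / PL

# Synthetic check: pick a Kd, solve the binding equilibrium exactly,
# then recover Kd from the implied intensity ratio.
Kd_true, P0, L0 = 50.0, 10.0, 100.0    # micromolar, illustrative values
b = P0 + L0 + Kd_true
PL = (b - np.sqrt(b * b - 4.0 * P0 * L0)) / 2.0   # equilibrium [PL]
R = PL / (P0 - PL)                     # intensity ratio I(PL)/I(P)
Kd_est = kd_from_ratio(R, P0, L0)
```

    Noise, adducts, and signal overlap would perturb R; the study's point is that for high-intensity signals these perturbations largely cancel or stay small.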

  11. Extended Analytic Device Optimization Employing Asymptotic Expansion

    NASA Technical Reports Server (NTRS)

    Mackey, Jonathan; Sehirlioglu, Alp; Dynsys, Fred

    2013-01-01

    Analytic optimization of a thermoelectric junction often introduces several simplifying assumptions, including constant material properties, fixed known hot and cold shoe temperatures, and thermally insulated leg sides. In fact all of these simplifications will have an effect on device performance, ranging from negligible to significant depending on conditions. Numerical methods, such as Finite Element Analysis or iterative techniques, are often used to perform more detailed analysis and account for these simplifications. While numerical methods may stand as a suitable solution scheme, they are weak in gaining physical understanding and only serve to optimize through iterative searching techniques. Analytic and asymptotic expansion techniques can be used to solve the governing system of thermoelectric differential equations with fewer or less severe assumptions than the classic case. Analytic methods can provide meaningful closed form solutions and generate better physical understanding of the conditions for when simplifying assumptions may be valid. In obtaining the analytic solutions a set of dimensionless parameters, which characterize all thermoelectric couples, is formulated and provides the limiting cases for validating assumptions. Presentation includes optimization of both classic rectangular couples as well as practically and theoretically interesting cylindrical couples using optimization parameters physically meaningful to a cylindrical couple. Solutions incorporate the physical behavior for i) thermal resistance of hot and cold shoes, ii) variable material properties with temperature, and iii) lateral heat transfer through leg sides.

  12. Practical Aspects of Stabilized FEM Discretizations of Nonlinear Conservation Law Systems with Convex Extension

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Saini, Subhash (Technical Monitor)

    1999-01-01

    This talk considers simplified finite element discretization techniques for first-order systems of conservation laws equipped with a convex (entropy) extension. Using newly developed techniques in entropy symmetrization theory, simplified forms of the Galerkin least-squares (GLS) and the discontinuous Galerkin (DG) finite element method have been developed and analyzed. The use of symmetrization variables yields numerical schemes which inherit global entropy stability properties of the PDE system. Central to the development of the simplified GLS and DG methods is the Eigenvalue Scaling Theorem, which characterizes right symmetrizers of an arbitrary first-order hyperbolic system in terms of scaled eigenvectors of the corresponding flux Jacobian matrices. A constructive proof is provided for the Eigenvalue Scaling Theorem with detailed consideration given to the Euler, Navier-Stokes, and magnetohydrodynamic (MHD) equations. Linear and nonlinear energy stability is proven for the simplified GLS and DG methods. Spatial convergence properties of the simplified GLS and DG methods are numerically evaluated via the computation of Ringleb flow on a sequence of successively refined triangulations. Finally, we consider a posteriori error estimates for the GLS and DG discretizations assuming error functionals related to the integrated lift and drag of a body. Sample calculations in 2D are shown to validate the theory and implementation.

  13. Robust and accurate vectorization of line drawings.

    PubMed

    Hilaire, Xavier; Tombre, Karl

    2006-06-01

    This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust with a best bound of 50 percent noise reached for indefinitely long primitives. Accurate estimation of the recognized vector's parameters is enabled by explicitly computing their feasibility domains. Theoretical performance analysis and expression of the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.

  14. A Wavelet Packet Transform Inspired Method of Neutron-Gamma Discrimination

    NASA Astrophysics Data System (ADS)

    Shippen, David I.; Joyce, Malcolm J.; Aspinall, Michael D.

    2010-10-01

    A Simplified Digital Charge Collection (SDCC) method of discrimination between neutron and gamma pulses in an organic scintillator is presented and compared to the Pulse Gradient Analysis (PGA) discrimination method. Data used in this research were gathered from events arising from the 7Li(p,n)7Be reaction detected by an EJ-301 organic liquid scintillator and recorded with a fast digital oscilloscope. Time-of-Flight (TOF) data were also recorded and used as a second means of identification. The SDCC method is found to improve on the figure of merit (FOM) given by the PGA method at the equivalent sampling rate.

  15. Simplified planar model of a car steering system with rack and pinion and McPherson suspension

    NASA Astrophysics Data System (ADS)

    Knapczyk, J.; Kucybała, P.

    2016-09-01

    The paper presents the analysis and optimization of a steering system with rack and pinion and McPherson suspension using a spatial model and an equivalent simplified planar model. The dimensions of the steering linkage that give minimum steering error can be estimated using the planar model. The steering error is defined as the difference between the actual angle made by the outer front wheel during steering manoeuvres and the calculated angle for the same wheel based on the Ackermann principle. For a given linear rack displacement, the angular displacements of the steering arms are determined while simultaneously ensuring the best transmission angle characteristics (i) without and (ii) with imposing a linear correlation between input and output. Numerical examples are used to illustrate the proposed method.
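
    The Ackermann reference angle against which the steering error is measured can be sketched as follows (wheelbase, track, and steering angle are illustrative values, not the paper's):

```python
import math

# Ackermann geometry sketch: for wheelbase Lw and track width T, the ideal
# outer-wheel angle satisfies cot(d_out) = cot(d_in) + T / Lw, so that both
# front wheels turn about a common centre on the rear-axle line.
def ackermann_outer(d_in, wheelbase, track):
    return math.atan(1.0 / (1.0 / math.tan(d_in) + track / wheelbase))

Lw, T = 2.6, 1.5                      # metres, illustrative
d_in = math.radians(30.0)             # inner wheel steered 30 degrees
d_out_ideal = ackermann_outer(d_in, Lw, T)

# A parallel linkage would steer both wheels 30 degrees; its steering error
# in the sense defined above is the deviation from the Ackermann angle:
error = math.degrees(d_in - d_out_ideal)
```

    The linkage optimization described in the paper minimizes this deviation over the working range of rack displacements.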

  16. Existence and stability, and discrete BB and rank conditions, for general mixed-hybrid finite elements in elasticity

    NASA Technical Reports Server (NTRS)

    Xue, W.-M.; Atluri, S. N.

    1985-01-01

    In this paper, all possible forms of mixed-hybrid finite element methods that are based on multi-field variational principles are examined as to the conditions for existence, stability, and uniqueness of their solutions. The reasons as to why certain 'simplified hybrid-mixed methods' in general, and the so-called 'simplified hybrid-displacement method' in particular (based on the so-called simplified variational principles), become unstable, are discussed. A comprehensive discussion of the 'discrete' BB-conditions, and the rank conditions, of the matrices arising in mixed-hybrid methods, is given. Some recent studies aimed at the assurance of such rank conditions, and the related problem of the avoidance of spurious kinematic modes, are presented.

  17. The Kineticist’s Workbench: Combining Symbolic and Numerical Methods in the Simulation of Chemical Reaction Mechanisms

    DTIC Science & Technology

    1991-06-01

    algorithms (for the analysis of mechanisms), traditional numerical simulation methods, and algorithms that examine the simulation results and reinterpret them in qualitative terms. Moreover, the Workbench can use symbolic procedures to help guide or simplify the task

  18. 77 FR 73965 - Allocation of Costs Under the Simplified Methods; Hearing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-12

    ... DEPARTMENT OF THE TREASURY Internal Revenue Service 26 CFR Part 1 [REG-126770-06] RIN 1545-BG07 Allocation of Costs Under the Simplified Methods; Hearing AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Notice of public hearing on notice of proposed rulemaking. SUMMARY: This document provides notice of...

  19. Principal component regression analysis with SPSS.

    PubMed

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces the indices used for multicollinearity diagnosis, the basic principle of principal component regression, and the method for determining the 'best' equation. An example is used to describe how to perform principal component regression analysis with SPSS 10.0, including all calculation steps of the principal component regression and the operation of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance caused by multicollinearity, and carrying it out with SPSS makes the statistical analysis simpler, faster and accurate.
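    The idea behind principal component regression can be sketched outside SPSS as well. The toy data below are hypothetical; for two standardized collinear predictors, the first principal component of the correlation matrix [[1, r], [r, 1]] is always (z1 + z2)/√2, so the regression can be run on that component instead of the near-singular original design.

```python
import math

# Hypothetical data: two highly collinear predictors and a response.
x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x2 = [1.1, 2.0, 3.2, 3.9, 5.1, 6.0]   # nearly identical to x1
y  = [2.1, 4.0, 6.2, 8.1, 9.9, 12.2]

def standardize(v):
    m = sum(v) / len(v)
    s = math.sqrt(sum((t - m) ** 2 for t in v) / (len(v) - 1))
    return [(t - m) / s for t in v]

z1, z2 = standardize(x1), standardize(x2)

# For a 2x2 correlation matrix [[1, r], [r, 1]] with r > 0, the first
# principal component direction is fixed: (1, 1)/sqrt(2).
pc1 = [(a + b) / math.sqrt(2) for a, b in zip(z1, z2)]

# Ordinary least squares of y on the single principal component avoids
# inverting the near-singular two-predictor design matrix.
my = sum(y) / len(y)
beta = sum(p * (t - my) for p, t in zip(pc1, y)) / sum(p * p for p in pc1)
yhat = [my + beta * p for p in pc1]
```

    In SPSS the same steps are spread across the factor analysis and linear regression procedures; here they collapse to a few lines because the two-variable eigenvectors are known in closed form.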

  20. 77 FR 15969 - Waybill Data Released in Three-Benchmark Rail Rate Proceedings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-19

    ... confidentiality of the contract rates, as required by 49 U.S.C. 11904. Background In Simplified Standards for Rail Rate Cases (Simplified Standards), EP 646 (Sub-No. 1) (STB served Sept. 5, 2007), aff'd sub nom. CSX... Under the Three-Benchmark method as revised in Simplified Standards, each party creates and proffers to...

  1. 48 CFR 13.005 - List of laws inapplicable to contracts and subcontracts at or below the simplified acquisition...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System List of laws inapplicable to contracts and subcontracts at or below the simplified acquisition threshold. 13.005 Section 13.005 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES...

  2. Analysis of Wind Turbine Simulation Models: Assessment of Simplified versus Complete Methodologies: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Honrubia-Escribano, A.; Jimenez-Buendia, F.; Molina-Garcia, A.

    This paper presents the current status of simplified wind turbine models used for power system stability analysis. This work is based on the ongoing work being developed in IEC 61400-27. This international standard, for which a technical committee was convened in October 2009, is focused on defining generic (also known as simplified) simulation models for both wind turbines and wind power plants. The results of the paper provide an improved understanding of the usability of generic models to conduct power system simulations.

  3. Estimation of cardiac reserve by peak power: validation and initial application of a simplified index

    NASA Technical Reports Server (NTRS)

    Armstrong, G. P.; Carlier, S. G.; Fukamachi, K.; Thomas, J. D.; Marwick, T. H.

    1999-01-01

    OBJECTIVES: To validate a simplified estimate of peak power (SPP) against true (invasively measured) peak instantaneous power (TPP), to assess the feasibility of measuring SPP during exercise and to correlate this with functional capacity. DESIGN: Development of a simplified method of measurement and observational study. SETTING: Tertiary referral centre for cardiothoracic disease. SUBJECTS: For validation of SPP with TPP, seven normal dogs and four dogs with dilated cardiomyopathy were studied. To assess feasibility and clinical significance in humans, 40 subjects were studied (26 patients; 14 normal controls). METHODS: In the animal validation study, TPP was derived from ascending aortic pressure and flow probe, and from Doppler measurements of flow. SPP, calculated using the different flow measures, was compared with peak instantaneous power under different loading conditions. For the assessment in humans, SPP was measured at rest and during maximum exercise. Peak aortic flow was measured with transthoracic continuous wave Doppler, and systolic and diastolic blood pressures were derived from brachial sphygmomanometry. The difference between exercise and rest simplified peak power (Delta SPP) was compared with maximum oxygen uptake (VO(2)max), measured from expired gas analysis. RESULTS: SPP estimates using peak flow measures correlated well with true peak instantaneous power (r = 0.89 to 0.97), despite marked changes in systemic pressure and flow induced by manipulation of loading conditions. In the human study, VO(2)max correlated with Delta SPP (r = 0.78) better than Delta ejection fraction (r = 0.18) and Delta rate-pressure product (r = 0.59). CONCLUSIONS: The simple product of mean arterial pressure and peak aortic flow (simplified peak power, SPP) correlates with peak instantaneous power over a range of loading conditions in dogs. 
In humans, it can be estimated during exercise echocardiography, and correlates with maximum oxygen uptake better than ejection fraction or rate-pressure product.
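    The index itself is a simple product, so it can be sketched in a few lines. The unit conventions and the DBP + (SBP − DBP)/3 estimate of mean arterial pressure below are my assumptions; the abstract does not give formulas at this level of detail, and the numeric values are purely illustrative.

```python
def simplified_peak_power(sbp_mmhg, dbp_mmhg, peak_aortic_flow_ml_s):
    """SPP = mean arterial pressure x peak aortic flow, in watts."""
    map_mmhg = dbp_mmhg + (sbp_mmhg - dbp_mmhg) / 3.0  # common sphygmomanometric estimate
    map_pa = map_mmhg * 133.322                         # mmHg -> Pa
    flow_m3_s = peak_aortic_flow_ml_s * 1e-6            # mL/s -> m^3/s
    return map_pa * flow_m3_s                           # Pa * m^3/s = W

# Delta SPP, the quantity compared with VO2max in the study, is the
# exercise-minus-rest difference (illustrative values).
spp_rest = simplified_peak_power(120, 80, 400)
spp_exercise = simplified_peak_power(180, 85, 700)
delta_spp = spp_exercise - spp_rest
```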

  4. SLAMMER: Seismic LAndslide Movement Modeled using Earthquake Records

    USGS Publications Warehouse

    Jibson, Randall W.; Rathje, Ellen M.; Jibson, Matthew W.; Lee, Yong W.

    2013-01-01

    This program is designed to facilitate conducting sliding-block analysis (also called permanent-deformation analysis) of slopes in order to estimate slope behavior during earthquakes. The program allows selection from among more than 2,100 strong-motion records from 28 earthquakes and allows users to add their own records to the collection. Any number of earthquake records can be selected using a search interface that selects records based on desired properties. Sliding-block analyses, using any combination of rigid-block (Newmark), decoupled, and fully coupled methods, are then conducted on the selected group of records, and results are compiled in both graphical and tabular form. Simplified methods for conducting each type of analysis are also included.
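    The rigid-block (Newmark) variant of the sliding-block analysis can be sketched compactly. This is a minimal sketch assuming a constant critical (yield) acceleration ky expressed in g, one-directional sliding, and simple rectangular-rule integration; SLAMMER's own implementations are more careful.

```python
def newmark_displacement(accel_g, dt, ky):
    """Rigid-block (Newmark) sliding-block analysis.

    accel_g : ground acceleration time history in units of g
    dt      : time step in seconds
    ky      : critical (yield) acceleration in g
    Returns the accumulated downslope displacement in metres.
    """
    g = 9.81
    v = 0.0  # relative sliding velocity (m/s)
    d = 0.0  # accumulated permanent displacement (m)
    for a in accel_g:
        if v > 0.0 or a > ky:           # block is sliding, or starts to slide
            rel = (a - ky) * g          # relative acceleration while sliding
            v = max(v + rel * dt, 0.0)  # sliding stops when v returns to zero
            d += v * dt
    return d
```

    A record that never exceeds the critical acceleration produces zero permanent displacement, which is the defining behaviour of the rigid-block model.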

  5. Experimental and Numerical Analysis of Narrowband Coherent Rayleigh-Brillouin Scattering in Atomic and Molecular Species (Pre Print)

    DTIC Science & Technology

    2012-02-01

    use of polar gas species. While current simplified models have adequately predicted CRS and CRBS line shapes for a wide variety of cases, multiple ... published simplified models are presented for argon, molecular nitrogen, and methane at 300 & 500 K and 1 atm. The simplified models require uncertain gas properties

  6. Direct digestion of proteins in living cells into peptides for proteomic analysis.

    PubMed

    Chen, Qi; Yan, Guoquan; Gao, Mingxia; Zhang, Xiangmin

    2015-01-01

    To analyze the proteome of an extremely low number of cells or even a single cell, we established a new method of digesting whole cells into mass-spectrometry-identifiable peptides in a single step within 2 h. Our sampling method greatly simplified the processes of cell lysis, protein extraction, protein purification, and overnight digestion, without compromising efficiency. We used our method to digest samples of around one hundred cells. As far as we know, there is no report of proteome analysis starting directly with as few as 100 cells. We identified an average of 109 proteins from 100 cells, and with three replicates, the number of proteins rose to 204. Good reproducibility was achieved, showing the stability and reliability of the method. Gene Ontology analysis revealed that proteins in different cellular compartments were well represented.

  7. Dynamic Pressure Distribution due to Horizontal Acceleration in Spherical LNG Tank with Cylindrical Central Part

    NASA Astrophysics Data System (ADS)

    Ko, Dae-Eun; Shin, Sang-Hoon

    2017-11-01

    Spherical LNG tanks having many advantages such as structural safety are used as a cargo containment system of LNG carriers. However, it is practically difficult to fabricate perfectly spherical tanks of different sizes in the yard. The most effective method of manufacturing LNG tanks of various capacities is to insert a cylindrical part at the center of existing spherical tanks. While a simplified high-precision analysis method for the initial design of the spherical tanks has been developed for both static and dynamic loads, in the case of spherical tanks with a cylindrical central part, the analysis method available only considers static loads. The purpose of the present study is to derive the dynamic pressure distribution due to horizontal acceleration, which is essential for developing an analysis method that considers dynamic loads as well.

  8. Methodological problems in the method used by IQWiG within early benefit assessment of new pharmaceuticals in Germany.

    PubMed

    Herpers, Matthias; Dintsios, Charalabos-Markos

    2018-04-25

    The decision matrix applied by the Institute for Quality and Efficiency in Health Care (IQWiG) for the quantification of added benefit within the early benefit assessment of new pharmaceuticals in Germany, with its nine fields, is quite complex and could be simplified. Furthermore, the method used by IQWiG is subject to manifold criticism: (1) it implicitly weights endpoints differently in its assessments, favoring overall survival and, thereby, drug interventions in fatal diseases; (2) it assumes that two pivotal trials are available when assessing the dossiers submitted by the pharmaceutical manufacturers, with far-reaching implications for the quantification of added benefit; and (3) it bases the evaluation primarily on dichotomous endpoints, leading to a loss of usable evidence. The aim was to investigate whether this criticism is justified and to propose methodological adaptations, based on an analysis of the available dossiers up to the end of 2016 using statistical tests, multinomial logistic regression and simulations. It was shown that, due to power losses, the method does not ensure that results are statistically valid, and outcomes of the early benefit assessment may be compromised, though the evidence on favoring overall survival remains unclear. Modifications of the IQWiG method are, however, possible to address the identified problems. By converging with the approach of approval authorities for confirmatory endpoints, the decision matrix could be simplified and the analysis method improved, to put the results on a more valid statistical basis.

  9. COVD-QOL questionnaire: An adaptation for school vision screening using Rasch analysis

    PubMed Central

    Abu Bakar, Nurul Farhana; Ai Hong, Chen; Pik Pin, Goh

    2012-01-01

    Purpose To adapt the College of Optometrists in Vision Development (COVD-QOL) questionnaire as a vision screening tool for primary school children. Methods An interview session was conducted with children, teachers or guardians regarding the visual symptoms of 88 children (45 from special education classes and 43 from mainstream classes) in government primary schools. Data were assessed for response categories, item fit (infit/outfit: 0.6–1.4) and separation reliability (item/person: 0.80). The COVD-QOL questionnaire results were compared with a vision assessment identifying three categories of vision disorders: reduced visual acuity, accommodative response anomaly and convergence insufficiency. Screening performance of the simplified version of the questionnaire was evaluated using receiver-operating characteristic analysis for detection of any of the target conditions in both types of classes. Predictive validity was assessed using a Spearman rank correlation (>0.3). Results Two of the response categories were underutilized and were therefore collapsed into the adjacent category, and the items were reduced to 14. Item separation reliability for the simplified version of the questionnaire was acceptable (0.86), but person separation reliability was inadequate for special education classes (0.79), similar to mainstream classes (0.78). The discriminant cut-off scores of 9 (mainstream classes) and 3 (special education classes) on the 14 items provided sensitivity and specificity of 65% and 54%, and 78% and 80%, with Spearman rank correlations of 0.16 and 0.40 respectively. Conclusion The simplified version of the COVD-QOL questionnaire (14 items) performs adequately among children in special education classes, suggesting its suitability as a vision screening tool.
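    The cut-off-score evaluation reported here reduces to counting true/false positives and negatives at a threshold. The sketch below uses entirely illustrative synthetic data, not the study's, to show how a screening cut-off translates into sensitivity and specificity.

```python
def sens_spec(scores, has_condition, cutoff):
    """Sensitivity and specificity of a 'score >= cutoff -> refer' rule."""
    tp = sum(1 for s, d in zip(scores, has_condition) if d and s >= cutoff)
    fn = sum(1 for s, d in zip(scores, has_condition) if d and s < cutoff)
    tn = sum(1 for s, d in zip(scores, has_condition) if not d and s < cutoff)
    fp = sum(1 for s, d in zip(scores, has_condition) if not d and s >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)
```

    Sweeping the cutoff over all observed scores and plotting sensitivity against (1 − specificity) gives the receiver-operating characteristic curve the abstract refers to.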

  10. Heterogeneity of Metazoan Cells and Beyond: To Integrative Analysis of Cellular Populations at Single-Cell Level.

    PubMed

    Barteneva, Natasha S; Vorobjev, Ivan A

    2018-01-01

    In this paper, we review some of the recent advances in cellular heterogeneity and single-cell analysis methods. In modern research on cellular heterogeneity, there are four major approaches: analysis of pooled samples, single-cell analysis, high-throughput single-cell analysis, and, lately, integrated analysis of cellular populations at the single-cell level. Recently developed high-throughput single-cell genetic analysis methods such as RNA-Seq require a purification step and destruction of the analyzed cell, and often provide a snapshot of the investigated cell without spatiotemporal context. Correlative analysis of multiparameter morphological, functional, and molecular information is important for differentiating more uniform groups within the spectrum of different cell types. Simplified distributions (histograms and 2D plots) can underrepresent biologically significant subpopulations. Future directions may include the development of nondestructive methods for dissecting molecular events in intact cells, simultaneous correlative cellular analysis of phenotypic and molecular features by hybrid technologies such as imaging flow cytometry, and further progress in supervised and unsupervised statistical analysis algorithms.

  11. Wilsonian methods of concept analysis: a critique.

    PubMed

    Hupcey, J E; Morse, J M; Lenz, E R; Tasón, M C

    1996-01-01

    Wilsonian methods of concept analysis--that is, the method proposed by Wilson and Wilson-derived methods in nursing (as described by Walker and Avant; Chinn and Kramer [Jacobs]; Schwartz-Barcott and Kim; and Rodgers)--are discussed and compared in this article. The evolution and modifications of Wilson's method in nursing are described and research that has used these methods, assessed. The transformation of Wilson's method is traced as each author has adopted his techniques and attempted to modify the method to correct for limitations. We suggest that these adaptations and modifications ultimately erode Wilson's method. Further, the Wilson-derived methods have been overly simplified and used by nurse researchers in a prescriptive manner, and the results often do not serve the purpose of expanding nursing knowledge. We conclude that, considering the significance of concept development for the nursing profession, the development of new methods and a means for evaluating conceptual inquiry must be given priority.

  12. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
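    The global sensitivity analysis step can be illustrated with a toy variance-based (Sobol-style) estimate on a cheap surrogate function. The estimator below is a standard Saltelli-type pick-and-freeze sketch, not the authors' implementation, and the three-input toy model is entirely my own.

```python
import random

def first_order_sobol(f, dim, n=20000, seed=1):
    """Monte Carlo estimate of first-order Sobol indices on [0, 1]^dim
    using a Saltelli-style pick-and-freeze estimator."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA) / n
    var = sum((v - mean) ** 2 for v in fA) / n
    S = []
    for i in range(dim):
        # B with column i replaced by the corresponding column of A
        fABi = [f(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B)]
        Si = sum(fa * (fab - fb)
                 for fa, fab, fb in zip(fA, fABi, fB)) / n / var
        S.append(Si)
    return S

# Toy model: input 0 dominates, input 1 is weak, input 2 is inert.
S = first_order_sobol(lambda x: x[0] + 0.2 * x[1], dim=3)
```

    Inputs with near-zero first-order indices are candidates for freezing, which is exactly the stochastic-dimension reduction the abstract describes, though a full screening would also examine total-order indices.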

  13. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  14. Mining dynamic noteworthy functions in software execution sequences.

    PubMed

    Zhang, Bing; Huang, Guoyan; Wang, Yuqian; He, Haitao; Ren, Jiadong

    2017-01-01

    As the quality of crucial entities can directly affect that of software, their identification and protection are an important premise for effective software development, management, maintenance and testing, which thus contribute to improving software quality and its attack-defending ability. Most analyses and evaluations of important entities, such as code-based static structure analysis, do not reflect the actual running of the software. In this paper, from the perspective of the software execution process, we propose an approach to mine dynamic noteworthy functions (DNFM) in software execution sequences. First, through software decompiling and tracking of stack changes, execution traces composed of a series of function addresses are acquired. These traces are modeled as execution sequences and then simplified to obtain simplified sequences (SFS), followed by the extraction of patterns from the SFS through a pattern extraction (PE) algorithm. After that, the evaluation indicators inner-importance and inter-importance are designed to measure the noteworthiness of functions in the DNFM algorithm. Finally, the functions are sorted by their noteworthiness. The experimental results were compared with those of two traditional complex-network-based node mining methods, namely PageRank and DegreeRank. The results show that the DNFM method can mine noteworthy functions in software effectively and precisely.

  15. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    NASA Astrophysics Data System (ADS)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.

    2018-03-01

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  16. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    DOE PAGES

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; ...

    2018-02-09

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  17. 37Cl/35Cl isotope ratio analysis in perchlorate by ion chromatography/multicollector-ICPMS: Analytical performance and implication for biodegradation studies.

    PubMed

    Zakon, Yevgeni; Ronen, Zeev; Halicz, Ludwik; Gelman, Faina

    2017-10-01

    In the present study we propose a new analytical method for 37Cl/35Cl analysis in perchlorate by Ion Chromatography (IC) coupled to Multicollector Inductively Coupled Plasma Mass Spectrometry (MC-ICPMS). The accuracy of the analytical method was validated by analysis of the international perchlorate standard materials USGS-37 and USGS-38; analytical precision better than ±0.4‰ was achieved. 37Cl/35Cl isotope ratio analysis in perchlorate during a laboratory biodegradation experiment with microbial cultures enriched from contaminated soil in Israel resulted in an isotope enrichment factor ε37Cl = -13.3 ± 1‰, which falls in the range reported previously for perchlorate biodegradation by pure microbial cultures. The proposed analytical method may significantly simplify the procedure for isotope analysis of perchlorate currently applied in environmental studies. Copyright © 2017. Published by Elsevier Ltd.
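    Enrichment factors such as the ε37Cl reported here are conventionally applied through the Rayleigh fractionation model. As a hedged sketch (the abstract does not state which regression form was used), the common approximate form is δ ≈ δ0 + ε·ln(f), with f the remaining substrate fraction.

```python
import math

def rayleigh_delta(delta0_permil, eps_permil, f_remaining):
    """Approximate Rayleigh model for the isotopic composition (in permil)
    of the residual substrate as degradation proceeds; f_remaining is the
    fraction of substrate left (0 < f <= 1)."""
    return delta0_permil + eps_permil * math.log(f_remaining)

# With the reported eps of about -13.3 permil, residual perchlorate becomes
# isotopically heavier as biodegradation consumes the light isotopologue.
d_half = rayleigh_delta(0.0, -13.3, 0.5)
```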

  18. Efficient finite element modelling for the investigation of the dynamic behaviour of a structure with bolted joints

    NASA Astrophysics Data System (ADS)

    Omar, R.; Rani, M. N. Abdul; Yunus, M. A.; Mirza, W. I. I. Wan Iskandar; Zin, M. S. Mohd

    2018-04-01

    A simple structure with bolted joints consists of the structural components, bolts and nuts. There are several methods to model structures with bolted joints; however, there is no reliable, efficient and economical modelling method that can accurately predict their dynamic behaviour. Explained in this paper is an investigation that was conducted to obtain an appropriate modelling method for bolted joints. This was carried out by evaluating four different finite element (FE) models of the assembled plates and bolts, namely the solid plates-bolts model, the plates without bolts model, the hybrid plates-bolts model and the simplified plates-bolts model. FE modal analysis was conducted for all four initial FE models of the bolted joints. Results of the FE modal analysis were compared with the experimental modal analysis (EMA) results. EMA was performed to extract the natural frequencies and mode shapes of the physical test structure with bolted joints. The evaluation was made by comparing the number of nodes, the number of elements, the elapsed computer processing unit (CPU) time, and the total percentage of errors of each initial FE model with respect to the EMA results. The evaluation showed that the simplified plates-bolts model could most accurately predict the dynamic behaviour of the structure with bolted joints. This study proved that reliable, efficient and economical modelling of bolted joints, particularly the representation of the bolting, plays a crucial role in ensuring the accuracy of the dynamic behaviour prediction.

  19. Simplified modelling and analysis of a rotating Euler-Bernoulli beam with a single cracked edge

    NASA Astrophysics Data System (ADS)

    Yashar, Ahmed; Ferguson, Neil; Ghandchi-Tehrani, Maryam

    2018-04-01

    The natural frequencies and mode shapes of the flapwise and chordwise vibrations of a rotating cracked Euler-Bernoulli beam are investigated using a simplified method. This approach is based on obtaining the lateral deflection of the cracked rotating beam by subtracting the potential energy of a rotating massless spring, which represents the crack, from the total potential energy of the intact rotating beam. With this new method, it is assumed that the admissible function which satisfies the geometric boundary conditions of an intact beam is valid even in the presence of a crack. Furthermore, the centrifugal stiffness due to rotation is considered as an additional stiffness, which is obtained from the rotational speed and the geometry of the beam. Finally, the Rayleigh-Ritz method is utilised to solve the eigenvalue problem. The validity of the results is confirmed at different rotational speeds, crack depth and location by comparison with solid and beam finite element model simulations. Furthermore, the mode shapes are compared with those obtained from finite element models using a Modal Assurance Criterion (MAC).
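    The subtraction-of-energies construction above can be summarized, in notation of my own choosing rather than the authors', as a Rayleigh-Ritz eigenvalue problem:

```latex
U_{\mathrm{eff}}(w) \;=\; U_{\mathrm{beam}}(w) \;+\; U_{\mathrm{cf}}(w;\Omega) \;-\; U_{\mathrm{crack}}(w),
\qquad
\omega^{2} \;\approx\; \min_{\mathbf{q}\neq\mathbf{0}}\;
\frac{\mathbf{q}^{\mathsf{T}}\,\mathbf{K}(\Omega)\,\mathbf{q}}{\mathbf{q}^{\mathsf{T}}\,\mathbf{M}\,\mathbf{q}}
```

    where U_crack is the potential energy of the massless rotational spring representing the crack, U_cf is the centrifugal stiffening contribution at rotational speed Ω, and K(Ω) and M are the stiffness and mass matrices assembled from the intact-beam admissible functions q.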

  20. Calibration method and apparatus for measuring the concentration of components in a fluid

    DOEpatents

    Durham, M.D.; Sagan, F.J.; Burkhardt, M.R.

    1993-12-21

    A calibration method and apparatus for use in measuring the concentrations of components of a fluid is provided. The measurements are determined from the intensity of radiation over a selected range of radiation wavelengths using peak-to-trough calculations. The peak-to-trough calculations are simplified by compensating for radiation absorption by the apparatus. The invention also allows absorption characteristics of an interfering fluid component to be accurately determined and negated thereby facilitating analysis of the fluid. 7 figures.
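    A schematic of the peak-to-trough idea with apparatus compensation, kept entirely illustrative (the patent's actual calibration procedure is more involved and covers interfering fluid components as well):

```python
def peak_to_trough(sample_intensity, apparatus_blank):
    """Subtract the apparatus's own radiation absorption (a blank scan over
    the same wavelengths), then measure the peak-to-trough intensity swing."""
    corrected = [s - b for s, b in zip(sample_intensity, apparatus_blank)]
    return max(corrected) - min(corrected)
```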

  1. Calibration method and apparatus for measuring the concentration of components in a fluid

    DOEpatents

    Durham, Michael D.; Sagan, Francis J.; Burkhardt, Mark R.

    1993-01-01

    A calibration method and apparatus for use in measuring the concentrations of components of a fluid is provided. The measurements are determined from the intensity of radiation over a selected range of radiation wavelengths using peak-to-trough calculations. The peak-to-trough calculations are simplified by compensating for radiation absorption by the apparatus. The invention also allows absorption characteristics of an interfering fluid component to be accurately determined and negated thereby facilitating analysis of the fluid.

  2. Image segmentation algorithm based on improved PCNN

    NASA Astrophysics Data System (ADS)

    Chen, Hong; Wu, Chengdong; Yu, Xiaosheng; Wu, Jiahui

    2017-11-01

    A modified simplified Pulse Coupled Neural Network (PCNN) model based on the simplified PCNN is proposed in this article. Some work has been done to enrich this model, such as imposing restrictions on the input items and improving the linking inputs and the internal activity of the PCNN. A self-adaptive method for setting the linking coefficient and the threshold decay time constant is also proposed. Finally, we implemented an image segmentation algorithm for five pictures based on the proposed simplified PCNN model and PSO. Experimental results demonstrate that this image segmentation algorithm performs much better than the SPCNN and OTSU methods.
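    A minimal pure-Python sketch of the simplified PCNN iteration described above. The parameter values and the 4-neighbour linking field are illustrative choices of mine, not the paper's tuned values, and the PSO-based parameter selection is omitted.

```python
import math

def spcnn_segment(img, beta=0.5, v_theta=50.0, alpha=0.4, iters=10):
    """Simplified PCNN: feeding input F = pixel intensity, internal activity
    U = F * (1 + beta * L), a neuron fires when U exceeds its threshold, and
    the threshold decays exponentially and jumps by v_theta after firing.
    Returns, per pixel, the iteration at which it first fired (0 = never)."""
    h, w = len(img), len(img[0])
    theta = [[255.0] * w for _ in range(h)]
    y_prev = [[0] * w for _ in range(h)]
    label = [[0] * w for _ in range(h)]
    for n in range(1, iters + 1):
        y = [[0] * w for _ in range(h)]
        for i in range(h):
            for j in range(w):
                # Linking input: 4-neighbours that fired in the previous step
                link = sum(y_prev[a][b]
                           for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                           if 0 <= a < h and 0 <= b < w)
                u = img[i][j] * (1.0 + beta * link)  # internal activity
                theta[i][j] *= math.exp(-alpha)      # threshold decay
                if u > theta[i][j]:
                    y[i][j] = 1
                    theta[i][j] += v_theta           # refractory jump
                    if label[i][j] == 0:
                        label[i][j] = n
        y_prev = y
    return label
```

    Bright regions fire in early iterations and dark regions later, so grouping pixels by first-firing iteration yields the segmentation.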

  3. A simplified procedure for GC/C/IRMS analysis of underivatized 19-norandrosterone in urine following HPLC purification.

    PubMed

    de la Torre, Xavier; Colamonici, Cristiana; Curcio, Davide; Molaioni, Francesco; Pizzardi, Marta; Botrè, Francesco

    2011-04-01

    Nandrolone and/or its precursors are included in the World Anti-Doping Agency (WADA) list of prohibited substances and methods, and as such their use is banned in sport. 19-Norandrosterone (19-NA), the main metabolite of these compounds, can also be produced endogenously. The need to establish the origin of 19-NA in human urine samples obliges antidoping laboratories to use isotope ratio mass spectrometry coupled to gas chromatography (GC/C/IRMS). In this work a simple liquid chromatographic method without any additional derivatization step is proposed, drastically simplifying the urine pretreatment procedure and leading to extracts free of interferences, permitting precise and accurate IRMS analysis. The purity of the extracts was verified by parallel analysis by gas chromatography coupled to mass spectrometry with GC conditions identical to those of the GC/C/IRMS assay. The method has been validated according to ISO 17025 requirements (within-assay precision of ±0.3‰ and between-assay precision of ±0.4‰). The method has been tested with samples obtained after the administration of synthetic 19-norandrostenediol and with samples collected during pregnancy, when 19-NA is known to be produced endogenously. Twelve drugs and synthetic standards able to produce 19-NA through metabolism were shown to present quite homogeneous δ13C values around -29‰ (-28.8 ± 1.5‰; mean ± standard deviation), while endogenously produced 19-NA showed values comparable to other endogenously produced steroids, in the range -21 to -24‰, as already reported. The efficacy of the method was tested on real samples from routine antidoping analyses. Copyright © 2011 Elsevier Inc. All rights reserved.

  4. Adduct simplification in the analysis of cyanobacterial toxins by matrix-assisted laser desorption/ionization mass spectrometry.

    PubMed

    Howard, Karen L; Boyer, Gregory L

    2007-01-01

    A novel method for simplifying adduct patterns to improve the detection and identification of peptide toxins using matrix-assisted laser desorption/ionization (MALDI) time-of-flight (TOF) mass spectrometry is presented. Addition of 200 microM zinc sulfate heptahydrate (ZnSO(4) . 7H(2)O) to samples prior to spotting on the target enhances detection of the protonated molecule while suppressing competing adducts. This produces a highly simplified spectrum with the potential to enhance quantitative analysis, particularly for complex samples. The resulting improvement in total signal strength and reduction in the coefficient of variation (from 31.1% to 5.2% for microcystin-LR) further enhance the potential for sensitive and accurate quantitation. Other potential additives tested, including 18-crown-6 ether, alkali metal salts (lithium chloride, sodium chloride, potassium chloride), and other transition metal salts (silver chloride, silver nitrate, copper(II) nitrate, copper(II) sulfate, zinc acetate), were unable to achieve comparable results. Application of this technique to the analysis of several microcystins, potent peptide hepatotoxins from cyanobacteria, is illustrated. Copyright (c) 2007 John Wiley & Sons, Ltd.

  5. Immersed boundary-simplified lattice Boltzmann method for incompressible viscous flows

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Shu, C.; Tan, D.

    2018-05-01

    An immersed boundary-simplified lattice Boltzmann method is developed in this paper for simulations of two-dimensional incompressible viscous flows with immersed objects. Assisted by the fractional step technique, the problem is resolved in a predictor-corrector scheme. The predictor step solves the flow field without considering immersed objects, and the corrector step imposes the effect of immersed boundaries on the velocity field. Different from the previous immersed boundary-lattice Boltzmann method which adopts the standard lattice Boltzmann method (LBM) as the flow solver in the predictor step, a recently developed simplified lattice Boltzmann method (SLBM) is applied in the present method to evaluate intermediate flow variables. Compared to the standard LBM, SLBM requires lower virtual memories, facilitates the implementation of physical boundary conditions, and shows better numerical stability. The boundary condition-enforced immersed boundary method, which accurately ensures no-slip boundary conditions, is implemented as the boundary solver in the corrector step. Four typical numerical examples are presented to demonstrate the stability, the flexibility, and the accuracy of the present method.

  6. An improved loopless mounting method for cryocrystallography

    NASA Astrophysics Data System (ADS)

    Qi, Jian-Xun; Jiang, Fan

    2010-01-01

    Based on a recent loopless mounting method, a simplified loopless and bufferless crystal mounting method is developed for macromolecular crystallography. This simplified crystal mounting system is composed of the following components: a home-made glass capillary, a brass seat for holding the glass capillary, a flow regulator, and a vacuum pump for evacuation. Compared with the currently prevalent loop mounting method, this simplified method has almost the same mounting procedure and thus is compatible with current automated crystal mounting systems. The advantages of this method include a higher signal-to-noise ratio, more accurate measurement, more rapid flash cooling, and less x-ray absorption and thus less radiation damage to the crystal. This method can be extended to the flash-freezing of a crystal with or without soaking it in a lower concentration of cryoprotectant, so it may be the best option for data collection in the absence of a suitable cryoprotectant. It is therefore suggested that this mounting method be further improved and extensively applied in cryocrystallographic experiments.

  7. Evaluation of different methods to estimate daily reference evapotranspiration in ungauged basins in Southern Brazil

    NASA Astrophysics Data System (ADS)

    Ribeiro Fontoura, Jessica; Allasia, Daniel; Herbstrith Froemming, Gabriel; Freitas Ferreira, Pedro; Tassi, Rutineia

    2016-04-01

    Evapotranspiration is a key process of the hydrological cycle and the sole term linking the land surface water balance and the land surface energy balance. Due to the higher information requirements of the Penman-Monteith method and the existing data uncertainty, simplified empirical methods for calculating potential and actual evapotranspiration are widely used in hydrological models. This is especially important in Brazil, where the monitoring of meteorological data is precarious. In this study, different methods for estimating evapotranspiration were compared for Rio Grande do Sul, the southernmost state of Brazil, aiming to suggest alternatives to the recommended method (Penman-Monteith-FAO 56) for estimating daily reference evapotranspiration (ETo) when meteorological data are missing or not available. The input dataset included daily and hourly observed data from conventional and automatic weather stations, respectively, maintained by the National Weather Institute of Brazil (INMET) from 1 January 2007 to 31 January 2010. The dataset included maximum temperature (Tmax, °C), minimum temperature (Tmin, °C), mean relative humidity (%), wind speed at 2 m height (u2, m s-1), daily solar radiation (Rs, MJ m-2), and atmospheric pressure (kPa), grouped at a daily time step. The Food and Agriculture Organization of the United Nations (FAO) Penman-Monteith method (PM) was tested in its full form against PM with several variables, not normally available in Brazil, treated as missing when calculating daily reference ETo. Missing variables were estimated as suggested in the FAO56 publication or from climatological means. Furthermore, PM was also compared against the following simplified empirical methods: Hargreaves-Samani, Priestley-Taylor, McCloud, McGuinness-Bordne, Romanenko, Radiation-Temperature, and Tanner-Pelton.
The statistical analysis indicates that even if only Tmin and Tmax are available, it is better to use PM with the missing variables estimated from synthetic data than the simplified empirical methods evaluated, except for Tanner-Pelton and Priestley-Taylor.
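The Hargreaves-Samani method compared in this study needs only Tmax and Tmin plus extraterrestrial radiation, which depends on latitude and day of year. A minimal sketch of the standard FAO-56 form of the equation; the sample input values are illustrative, not taken from the study's dataset:

```python
import math

def hargreaves_samani_eto(tmax_c, tmin_c, ra_mj):
    """Daily reference evapotranspiration (mm/day) from temperature only.

    tmax_c, tmin_c : daily max/min air temperature (deg C)
    ra_mj          : extraterrestrial radiation (MJ m-2 day-1), computed
                     from latitude and day of year (tabulated in FAO-56)
    """
    tmean = (tmax_c + tmin_c) / 2.0
    ra_mm = ra_mj * 0.408  # convert MJ m-2 day-1 to its mm/day equivalent
    return 0.0023 * ra_mm * (tmean + 17.8) * math.sqrt(tmax_c - tmin_c)

# Illustrative summer day in southern Brazil
eto = hargreaves_samani_eto(tmax_c=31.0, tmin_c=19.0, ra_mj=41.0)
print(round(eto, 2))
```

Because the diurnal temperature range stands in for radiation and humidity, the method degrades when Tmax and Tmin are close, which is one reason the study still favors PM with estimated variables.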

  8. Research on simplified parametric finite element model of automobile frontal crash

    NASA Astrophysics Data System (ADS)

    Wu, Linan; Zhang, Xin; Yang, Changhai

    2018-05-01

    The modeling method and key technologies of a simplified parametric finite element model for automobile frontal crash are studied in this paper. By establishing the auto-body topological structure, extracting and parameterizing the stiffness properties of substructures, and choosing appropriate material models for the substructures, the simplified parametric FE model of the M6 car is built. The comparison of the results indicates that the simplified parametric FE model can accurately calculate the automobile crash responses and the deformation of the key substructures, while the simulation time is reduced from 6 hours to 2 minutes.

  9. A simplified parsimonious higher order multivariate Markov chain model

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Yang, Chuan-sheng

    2017-09-01

    In this paper, a simplified parsimonious higher-order multivariate Markov chain model (SPHOMMCM) is presented. Moreover, a parameter estimation method for the model is given. Numerical experiments show its effectiveness.
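The full model combines weighted transition matrices across several sequences and lag orders; its basic building block, the maximum-likelihood estimate of a single first-order transition matrix, can be sketched as follows (the state sequence is illustrative):

```python
import numpy as np

def transition_matrix(seq, n_states):
    """Maximum-likelihood estimate of a first-order transition matrix
    from an observed state sequence (states coded 0..n_states-1)."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # leave rows for unseen states at zero
    return counts / row_sums

seq = [0, 1, 1, 0, 2, 1, 0, 1]
P = transition_matrix(seq, 3)
print(P)  # each visited state's row sums to 1
```

A higher-order multivariate model then expresses the next-state distribution of each sequence as a weighted combination of several such matrices, with the weights estimated (typically by linear programming) under a parsimony constraint.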

  10. FDDO and DSMC analyses of rarefied gas flow through 2D nozzles

    NASA Technical Reports Server (NTRS)

    Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren; Penko, Paul F.

    1992-01-01

    Two different approaches, the finite-difference method coupled with the discrete-ordinate method (FDDO), and the direct-simulation Monte Carlo (DSMC) method, are used in the analysis of the flow of a rarefied gas expanding through a two-dimensional nozzle and into a surrounding low-density environment. In the FDDO analysis, by employing the discrete-ordinate method, the Boltzmann equation simplified by a model collision integral is transformed to a set of partial differential equations which are continuous in physical space but are point functions in molecular velocity space. The set of partial differential equations are solved by means of a finite-difference approximation. In the DSMC analysis, the variable hard sphere model is used as a molecular model and the no time counter method is employed as a collision sampling technique. The results of both the FDDO and the DSMC methods show good agreement. The FDDO method requires less computational effort than the DSMC method by factors of 10 to 40 in CPU time, depending on the degree of rarefaction.

  11. A simplified method of evaluating the stress wave environment of internal equipment

    NASA Technical Reports Server (NTRS)

    Colton, J. D.; Desmond, T. P.

    1979-01-01

    A simplified method called the transfer function technique (TFT) was devised for evaluating the stress wave environment in a structure containing internal equipment. The TFT consists of following the initial in-plane stress wave that propagates through a structure subjected to a dynamic load and characterizing how the wave is altered as it is transmitted through intersections of structural members. As a basis for evaluating the TFT, impact experiments and detailed stress wave analyses were performed for structures with two, three, or more members. Transfer functions that relate the wave transmitted through an intersection to the incident wave were deduced from the predicted wave response. By sequentially applying these transfer functions to a structure with several intersections, it was found that the environment produced by the initial stress wave propagating through the structure can be approximated well. The TFT can be used as a design tool or as an analytical tool to determine whether a more detailed wave analysis is warranted.
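In its simplest scalar form, the sequential application of transfer functions described above reduces to multiplying the incident amplitude by one transmission coefficient per intersection. A minimal sketch with hypothetical coefficients (the paper's transfer functions are frequency-dependent and deduced from wave analyses):

```python
def transmitted_wave(incident_amplitude, transfer_coeffs):
    """Approximate the stress-wave amplitude reaching internal equipment
    by sequentially applying one transfer coefficient per structural
    intersection (coefficients here are illustrative, not from the paper)."""
    amp = incident_amplitude
    for t in transfer_coeffs:
        amp *= t
    return amp

# A wave crossing three joints, each passing 60-80% of the incident stress
print(transmitted_wave(100.0, [0.8, 0.7, 0.6]))  # 33.6
```

This is why the technique is cheap: once each intersection type is characterized, any load path through the structure is evaluated by a chain of multiplications rather than a full wave analysis.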

  12. A simplified model of all-sky artificial sky glow derived from VIIRS Day/Night band data

    NASA Astrophysics Data System (ADS)

    Duriscoe, Dan M.; Anderson, Sharolyn J.; Luginbuhl, Christian B.; Baugh, Kimberly E.

    2018-07-01

    We present a simplified method using geographic analysis tools to predict the average artificial luminance over the hemisphere of the night sky, expressed as a ratio to the natural condition. The VIIRS Day/Night Band upward radiance data from the Suomi NPP orbiting satellite was used for input to the model. The method is based upon a relation between sky glow brightness and the distance from the observer to the source of upward radiance. This relationship was developed using a Garstang radiative transfer model with Day/Night Band data as input, then refined and calibrated with ground-based all-sky V-band photometric data taken under cloudless and low atmospheric aerosol conditions. An excellent correlation was found between observed sky quality and the values predicted from the remotely sensed data. Thematic maps of large regions of the earth showing predicted artificial V-band sky brightness may be quickly generated with modest computing resources. The result is a fast and accurate method, based on previous work, for modeling all-sky quality; its limitations are also described. The proposed model gives decision makers and land managers what they require: an easy-to-interpret metric of sky quality.
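The brightness-versus-distance relation at the core of the model can be sketched as a kernel sum over satellite pixels. The power-law falloff and scaling constant below are illustrative stand-ins for the Garstang-calibrated kernel, not the paper's actual fit:

```python
import numpy as np

def sky_glow_ratio(radiance, dist_km, k=0.01, p=2.5):
    """All-sky artificial luminance (as a ratio to the natural background)
    approximated as a distance-weighted sum of upward radiance.  The
    constant k and power-law exponent p are hypothetical placeholders
    for the calibrated distance kernel."""
    w = dist_km ** -p
    return k * np.sum(radiance * w)

rad = np.array([50.0, 10.0, 5.0])   # upward radiance of three sources
d = np.array([5.0, 20.0, 50.0])     # distance from observer (km)
print(sky_glow_ratio(rad, d))
```

The steep falloff means nearby pixels dominate the predicted sky glow, which is why modest search radii around each observer keep the thematic-map computation cheap.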

  13. Simplified neutrosophic sets and their applications in multi-criteria group decision-making problems

    NASA Astrophysics Data System (ADS)

    Peng, Juan-juan; Wang, Jian-qiang; Wang, Jing; Zhang, Hong-yu; Chen, Xiao-hong

    2016-07-01

    As a variation of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete and inconsistent information that exists in the real world. Simplified neutrosophic sets (SNSs) have been proposed for the main purpose of addressing issues with a set of specific numbers. However, there are certain problems regarding the existing operations of SNSs, as well as their aggregation operators and the comparison methods. Therefore, this paper defines the novel operations of simplified neutrosophic numbers (SNNs) and develops a comparison method based on the related research of intuitionistic fuzzy numbers. On the basis of these operations and the comparison method, some SNN aggregation operators are proposed. Additionally, an approach for multi-criteria group decision-making (MCGDM) problems is explored by applying these aggregation operators. Finally, an example to illustrate the applicability of the proposed method is provided and a comparison with some other methods is made.
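For illustration, a simplified neutrosophic number can be stored as a triple (t, i, f) of truth, indeterminacy, and falsity degrees. The score function and naive component-wise weighted average below are common textbook forms, shown only to make the ranking idea concrete; the paper defines its own improved operations and comparison method:

```python
def snn_score(t, i, f):
    """Score of a simplified neutrosophic number (t, i, f), each in [0, 1].
    s = (2 + t - i - f) / 3 is one common score function; higher is better."""
    return (2.0 + t - i - f) / 3.0

def snn_weighted_average(snns, weights):
    """Naive component-wise weighted average of SNNs, shown as a baseline;
    the paper's aggregation operators are more involved."""
    t = sum(w * a[0] for a, w in zip(snns, weights))
    i = sum(w * a[1] for a, w in zip(snns, weights))
    f = sum(w * a[2] for a, w in zip(snns, weights))
    return (t, i, f)

a, b = (0.8, 0.1, 0.1), (0.6, 0.2, 0.3)
print(snn_score(*a) > snn_score(*b))  # True: alternative a ranks above b
```

In an MCGDM setting, each expert's rating of an alternative against a criterion is such a triple; aggregation operators fuse them into one SNN per alternative, and the score function then ranks the alternatives.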

  14. Nose-to-tail analysis of an airbreathing hypersonic vehicle using an in-house simplified tool

    NASA Astrophysics Data System (ADS)

    Piscitelli, Filomena; Cutrone, Luigi; Pezzella, Giuseppe; Roncioni, Pietro; Marini, Marco

    2017-07-01

    SPREAD (Scramjet PREliminary Aerothermodynamic Design) is a simplified, in-house method developed by CIRA (Italian Aerospace Research Centre), able to provide a preliminary estimation of the performance of engine/aeroshape for airbreathing configurations. It is especially useful for scramjet engines, for which the strong coupling between the aerothermodynamic (external) and propulsive (internal) flow fields requires real-time screening of several engine/aeroshape configurations and the identification of the most promising one/s with respect to user-defined constraints and requirements. The outcome of this tool defines the baseline configuration for further design analyses with more accurate tools, e.g., CFD simulations and wind tunnel testing. The SPREAD tool has been used to perform the nose-to-tail analysis of the LAPCAT-II Mach 8 MR2.4 vehicle configuration. The numerical results demonstrate SPREAD's capability to quickly predict reliable values of aero-propulsive balance (i.e., net thrust) and aerodynamic efficiency in a pre-design phase.

  15. A Simplified Method for Tissue Engineering Skeletal Muscle Organoids in Vitro

    NASA Technical Reports Server (NTRS)

    Shansky, Janet; DelTatto, Michael; Chromiak, Joseph; Vandenburgh, Herman

    1996-01-01

    Tissue-engineered three dimensional skeletal muscle organ-like structures have been formed in vitro from primary myoblasts by several different techniques. This report describes a simplified method for generating large numbers of muscle organoids from either primary embryonic avian or neonatal rodent myoblasts, which avoids the requirements for stretching and other mechanical stimulation.

  16. 29 CFR 2520.104-48 - Alternative method of compliance for model simplified employee pensions-IRS Form 5305-SEP.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... employee pensions-IRS Form 5305-SEP. 2520.104-48 Section 2520.104-48 Labor Regulations Relating to Labor... compliance for model simplified employee pensions—IRS Form 5305-SEP. Under the authority of section 110 of... Security Act of 1974 in the case of a simplified employee pension (SEP) described in section 408(k) of the...

  17. A simplified approach for slope stability analysis of uncontrolled waste dumps.

    PubMed

    Turer, Dilek; Turer, Ahmet

    2011-02-01

    Slope stability analysis of municipal solid waste has always been problematic because of the heterogeneous nature of the waste materials. The requirement for large testing equipment in order to obtain representative samples has identified the need for simplified approaches to obtain the unit weight and shear strength parameters of the waste. In the present study, two of the most recently published approaches for determining the unit weight and shear strength parameters of the waste have been incorporated into a slope stability analysis using the Bishop method to prepare slope stability charts. The slope stability charts were prepared for uncontrolled waste dumps having no liner and leachate collection systems with pore pressure ratios of 0, 0.1, 0.2, 0.3, 0.4 and 0.5, considering the most critical slip surface passing through the toe of the slope. As the proposed slope stability charts were prepared by considering the change in unit weight as a function of height, they reflect field conditions better than accepting a constant unit weight approach in the stability analysis. They also streamline the selection of slope or height as a function of the desired factor of safety.
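The Bishop simplified method underlying the stability charts solves for the factor of safety iteratively, since FS appears on both sides of the moment-equilibrium equation. A sketch with hypothetical slice data (not the paper's waste unit weights or shear strengths):

```python
import math

def bishop_fs(slices, tol=1e-6, max_iter=100):
    """Factor of safety by Bishop's simplified method, found by
    fixed-point iteration.  Each slice is a dict with:
      W (weight), alpha (base inclination, rad), b (width),
      c (cohesion), phi (friction angle, rad), u (pore pressure)."""
    fs = 1.0
    for _ in range(max_iter):
        num = den = 0.0
        for s in slices:
            m = math.cos(s["alpha"]) + math.sin(s["alpha"]) * math.tan(s["phi"]) / fs
            num += (s["c"] * s["b"] + (s["W"] - s["u"] * s["b"]) * math.tan(s["phi"])) / m
            den += s["W"] * math.sin(s["alpha"])
        fs_new = num / den
        if abs(fs_new - fs) < tol:
            return fs_new
        fs = fs_new
    return fs

# Three illustrative slices of a slip surface through the toe
slices = [
    {"W": 120.0, "alpha": math.radians(10), "b": 2.0, "c": 10.0,
     "phi": math.radians(25), "u": 20.0},
    {"W": 150.0, "alpha": math.radians(25), "b": 2.0, "c": 10.0,
     "phi": math.radians(25), "u": 25.0},
    {"W": 100.0, "alpha": math.radians(40), "b": 2.0, "c": 10.0,
     "phi": math.radians(25), "u": 15.0},
]
print(round(bishop_fs(slices), 3))
```

Raising the pore pressure ratio reduces the effective normal force on each slice base and hence the factor of safety, which is why the charts are parameterized by pore pressure ratios from 0 to 0.5.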

  18. Clinical review: Continuous and simplified electroencephalography to monitor brain recovery after cardiac arrest

    PubMed Central

    2013-01-01

    There has been a dramatic change in hospital care of cardiac arrest survivors in recent years, including the use of target temperature management (hypothermia). Clinical signs of recovery or deterioration, which previously could be observed, are now concealed by sedation, analgesia, and muscle paralysis. Seizures are common after cardiac arrest, but few centers can offer high-quality electroencephalography (EEG) monitoring around the clock. This is due primarily to its complexity and lack of resources but also to uncertainty regarding the clinical value of monitoring EEG and of treating post-ischemic electrographic seizures. Thanks to technical advances in recent years, EEG monitoring has become more available. Large amounts of EEG data can be linked within a hospital or between neighboring hospitals for expert opinion. Continuous EEG (cEEG) monitoring provides dynamic information and can be used to assess the evolution of EEG patterns and to detect seizures. cEEG can be made more simple by reducing the number of electrodes and by adding trend analysis to the original EEG curves. In our version of simplified cEEG, we combine a reduced montage, displaying two channels of the original EEG, with amplitude-integrated EEG trend curves (aEEG). This is a convenient method to monitor cerebral function in comatose patients after cardiac arrest but has yet to be validated against the gold standard, a multichannel cEEG. We recently proposed a simplified system for interpreting EEG rhythms after cardiac arrest, defining four major EEG patterns. In this topical review, we will discuss cEEG to monitor brain function after cardiac arrest in general and how a simplified cEEG, with a reduced number of electrodes and trend analysis, may facilitate and improve care. PMID:23876221
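The trend-analysis idea behind aEEG can be illustrated crudely: reduce the raw trace to one amplitude value per time window so that hours of recording fit on a single screen. Real aEEG devices add an asymmetric band-pass filter and semi-logarithmic amplitude compression, which this sketch omits; the synthetic signal is purely illustrative:

```python
import numpy as np

def amplitude_trend(eeg, fs_hz, window_s=1.0):
    """Crude amplitude trend: peak-to-peak amplitude per time window.
    This only illustrates the 'trend curve' idea; it is not a clinical
    aEEG algorithm (no filtering, no semi-log compression)."""
    n = int(window_s * fs_hz)
    n_win = len(eeg) // n
    windows = np.asarray(eeg)[: n_win * n].reshape(n_win, n)
    return windows.max(axis=1) - windows.min(axis=1)

rng = np.random.default_rng(0)
signal = rng.normal(0.0, 10.0, 5 * 256)  # 5 s of synthetic "EEG" at 256 Hz
trend = amplitude_trend(signal, 256)
print(trend.shape)  # one amplitude value per second
```

Compressing the trace this way is what lets non-neurophysiologists at the bedside spot gross changes (suppression, seizures as abrupt amplitude rises) without reading the full multichannel EEG.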

  19. Numerical simulation of rarefied gas flow through a slit

    NASA Technical Reports Server (NTRS)

    Keith, Theo G., Jr.; Jeng, Duen-Ren; De Witt, Kenneth J.; Chung, Chan-Hong

    1990-01-01

    Two different approaches, the finite-difference method coupled with the discrete-ordinate method (FDDO), and the direct-simulation Monte Carlo (DSMC) method, are used in the analysis of the flow of a rarefied gas from one reservoir to another through a two-dimensional slit. The cases considered are for hard vacuum downstream pressure, finite pressure ratios, and isobaric pressure with thermal diffusion, which are not well established in spite of the simplicity of the flow field. In the FDDO analysis, by employing the discrete-ordinate method, the Boltzmann equation simplified by a model collision integral is transformed to a set of partial differential equations which are continuous in physical space but are point functions in molecular velocity space. The set of partial differential equations are solved by means of a finite-difference approximation. In the DSMC analysis, three kinds of collision sampling techniques, the time counter (TC) method, the null collision (NC) method, and the no time counter (NTC) method, are used.

  20. SOLCOST. Solar Hot Water Handbook. A Simplified Design Method for Sizing and Costing Residential and Commercial Solar Service Hot Water Systems. Second Edition.

    ERIC Educational Resources Information Center

    Energy Research and Development Administration, Washington, DC. Div. of Solar Energy.

    This pamphlet offers a preview of information services available from Solcost, a research and development project. The first section explains that Solcost calculates system performance and costs for solar-heated and -cooled new and retrofit construction, such as residential buildings and single-zone commercial buildings. For a typical analysis,…

  1. OpenMDAO: Framework for Flexible Multidisciplinary Design, Analysis and Optimization Methods

    NASA Technical Reports Server (NTRS)

    Heath, Christopher M.; Gray, Justin S.

    2012-01-01

    The OpenMDAO project is underway at NASA to develop a framework which simplifies the implementation of state-of-the-art tools and methods for multidisciplinary design, analysis and optimization. Foremost, OpenMDAO has been designed to handle variable problem formulations, encourage reconfigurability, and promote model reuse. This work demonstrates the concept of iteration hierarchies in OpenMDAO to achieve a flexible environment for supporting advanced optimization methods which include adaptive sampling and surrogate modeling techniques. In this effort, two efficient global optimization methods were applied to solve a constrained single-objective and a constrained multi-objective version of a joint aircraft/engine sizing problem. The aircraft model, NASA's next-generation advanced single-aisle civil transport, is being studied as part of the Subsonic Fixed Wing project to help meet simultaneous program goals for reduced fuel burn, emissions, and noise. This analysis serves as a realistic test problem to demonstrate the flexibility and reconfigurability offered by OpenMDAO.

  2. Mechanical modeling for magnetorheological elastomer isolators based on constitutive equations and electromagnetic analysis

    NASA Astrophysics Data System (ADS)

    Wang, Qi; Dong, Xufeng; Li, Luyu; Ou, Jinping

    2018-06-01

    Because constitutive models are overly complicated and existing mechanical models lack universality, neither is satisfactory for magnetorheological elastomer (MRE) devices. In this article, a novel universal method is proposed for building concise mechanical models. A constitutive model and electromagnetic analysis were applied in this method to ensure universality, while a series of derivations and simplifications were carried out to obtain a concise formulation. To illustrate the proposed modeling method, a conical MRE isolator was introduced. Its basic mechanical equations were built based on equilibrium, deformation compatibility, constitutive equations, and electromagnetic analysis. An iteration model and a highly efficient model based on a differential equation editor were then derived to solve the basic mechanical equations. The final simplified mechanical equations were obtained by re-fitting the simulations with a novel optimization algorithm. Finally, a verification test of the isolator confirmed the accuracy of the derived mechanical model and of the modeling method.

  3. New practical techniques for modeling the dynamic behavior of water-structure systems

    NASA Astrophysics Data System (ADS)

    Miquel, Benjamin

    The dynamic or seismic behavior of hydraulic structures is, as for conventional structures, essential for the protection of human lives. These analyses also aim at limiting the structural damage caused by an earthquake to prevent rupture or collapse of the structure. The particularity of hydraulic structures is that internal displacements are caused not only by the earthquake but also by the hydrodynamic loads resulting from fluid-structure interaction. This thesis reviews the existing complex and simplified methods for performing such dynamic analyses of hydraulic structures. For the complex existing methods, attention is placed on the difficulties arising from their use; in particular, this work examines the use of transmitting boundary conditions to simulate the semi-infinity of reservoirs. A procedure has been developed to estimate the error that these boundary conditions can introduce in finite element dynamic analysis. Depending on their formulation and location, we showed that they can considerably affect the response of such fluid-structure systems. For practical engineering applications, simplified procedures are still needed to evaluate the dynamic behavior of structures in contact with water. A review of the existing simplified procedures showed that these methods rest on numerous simplifications that can affect the prediction of the dynamic behavior of such systems. One of the main objectives of this thesis has been to develop new simplified methods that are more accurate than the existing ones. First, a new spectral analysis method has been proposed. Expressions for the fundamental frequency of fluid-structure systems, the key parameter of spectral analysis, have been developed. We show that this new technique can easily be implemented in a spreadsheet or program, and that its calculation time is near instantaneous.
When compared to more complex analytical or numerical methods, this new procedure yields excellent predictions of the dynamic behavior of fluid-structure systems. Spectral analyses ignore the transient and oscillatory nature of vibrations. When such dynamic analyses show that some areas of the studied structure undergo excessive stresses, time history analyses allow a better estimate of the extent of these zones as well as of the duration of these excessive stresses. Furthermore, the existing spectral analysis methods for fluid-structure systems account only for the static effect of higher modes. Though this can generally be sufficient for dams, for flexible structures the dynamic effect of these modes should be accounted for. New methods have been developed for fluid-structure systems to address these observations as well as the flexibility of foundations. A first method was developed to study structures in contact with one or two finite or infinite water domains. This new technique includes the flexibility of structures and foundations as well as the dynamic effect of higher vibration modes and variations of the levels of the water domains. This method was then extended to study beam structures in contact with fluids. These new developments have also allowed extending existing analytical formulations of the dynamic properties of a dry beam to a new formulation that includes the effect of fluid-structure interaction. The method yields a very good estimate of the dynamic behavior of beam-fluid systems and beam-like structures in contact with fluid. Finally, a Modified Accelerogram Method (MAM) has been developed to transform the design earthquake into a new accelerogram that directly accounts for the effect of fluid-structure interaction. This new accelerogram can therefore be applied directly to the dry structure (i.e., without water) in order to calculate the dynamic response of the fluid-structure system.
This original technique can include numerous parameters that influence the dynamic response of such systems and allows the fluid-structure interaction to be treated analytically while keeping the advantages of finite element modeling.

  4. A Gentle Introduction to Bayesian Analysis: Applications to Developmental Research

    PubMed Central

    van de Schoot, Rens; Kaplan, David; Denissen, Jaap; Asendorpf, Jens B; Neyer, Franz J; van Aken, Marcel AG

    2014-01-01

    Bayesian statistical methods are becoming ever more popular in applied and fundamental research. In this study a gentle introduction to Bayesian analysis is provided. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret properly the results. First, the ingredients underlying Bayesian methods are introduced using a simplified example. Thereafter, the advantages and pitfalls of the specification of prior knowledge are discussed. To illustrate Bayesian methods explained in this study, in a second example a series of studies that examine the theoretical framework of dynamic interactionism are considered. In the Discussion the advantages and disadvantages of using Bayesian statistics are reviewed, and guidelines on how to report on Bayesian statistics are provided. PMID:24116396
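The ingredients of Bayesian estimation mentioned above (prior, data, posterior) are easiest to see in a conjugate example such as the beta-binomial model; the prior and data values here are illustrative, not from the cited studies:

```python
def beta_binomial_update(a_prior, b_prior, successes, failures):
    """Conjugate Bayesian updating: a Beta(a, b) prior combined with
    binomial data yields a Beta(a + successes, b + failures) posterior."""
    return a_prior + successes, b_prior + failures

# Vague Beta(1, 1) prior; observe 7 successes in 10 trials
a, b = beta_binomial_update(1, 1, 7, 3)
posterior_mean = a / (a + b)
print(a, b, round(posterior_mean, 3))  # 8 4 0.667
```

Changing the prior to, say, Beta(10, 10) pulls the posterior mean toward 0.5, which makes concrete the article's point about the advantages and pitfalls of specifying prior knowledge.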

  5. Quantifying the predictive consequences of model error with linear subspace analysis

    USGS Publications Warehouse

    White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

    2014-01-01

    All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.

  6. Report on FY15 alloy 617 code rules development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sham, Sam; Jetter, Robert I; Hollinger, Greg

    2015-09-01

    Due to its strength at very high temperatures, up to 950°C (1742°F), Alloy 617 is the reference construction material for structural components that operate at or near the outlet temperature of the very high temperature gas-cooled reactors. However, the current rules in ASME Section III, Division 5, Subsection HB, Subpart B for the evaluation of strain limits and creep-fatigue damage using simplified methods based on elastic analysis have been deemed inappropriate for Alloy 617 at temperatures above 650°C (1200°F) (Corum and Brass, Proceedings of ASME 1991 Pressure Vessels and Piping Conference, PVP-Vol. 215, p. 147, ASME, NY, 1991). The rationale for this exclusion is that at higher temperatures it is not feasible to decouple plasticity and creep, which is the basis for the current simplified rules. This temperature, 650°C (1200°F), is well below the temperature range of interest for this material for the high temperature gas-cooled reactors and the very high temperature gas-cooled reactors. The only current alternative is, thus, a full inelastic analysis requiring sophisticated material models that have not yet been formulated and verified. To address these issues, proposed code rules have been developed which are based on the use of elastic-perfectly plastic (EPP) analysis methods applicable to very high temperatures. The proposed rules for strain limits and creep-fatigue evaluation were initially documented in the technical literature (Carter, Jetter and Sham, Proceedings of ASME 2012 Pressure Vessels and Piping Conference, papers PVP 2012 28082 and PVP 2012 28083, ASME, NY, 2012), and have been recently revised to incorporate comments and simplify their application. Background documents have been developed for these two code cases to support the ASME Code committee approval process. These background documents for the EPP strain limits and creep-fatigue code cases are documented in this report.

  7. A methodology to estimate uncertainty for emission projections through sensitivity analysis.

    PubMed

    Lumbreras, Julio; de Andrés, Juan Manuel; Pérez, Javier; Borge, Rafael; de la Paz, David; Rodríguez, María Encarnación

    2015-04-01

    Air pollution abatement policies must be based on quantitative information on current and future emissions of pollutants. As uncertainties in emission projections are inevitable and traditional statistical treatments of uncertainty are highly time- and resource-consuming, a simplified methodology for nonstatistical uncertainty estimation based on sensitivity analysis is presented in this work. The methodology was applied to the "with measures" scenario for Spain, specifically to the 12 highest-emitting sectors for greenhouse gases and air pollutants. Examples of the methodology's application to two important sectors (power plants, and agriculture and livestock) are shown and explained in depth. Uncertainty bands were obtained up to 2020 by modifying the driving factors of the 12 selected sectors, and the methodology was tested against a recomputed emission trend under a low economic-growth perspective and against official figures for 2010, showing very good performance. A solid understanding and quantification of the uncertainties related to atmospheric emission inventories and projections provides useful information for policy negotiations. However, as many of those uncertainties are irreducible, there is interest in how they could be managed in order to derive robust policy conclusions. Taking this into account, a method developed to use sensitivity analysis as a source of information for deriving nonstatistical uncertainty bands for emission projections is presented and applied to Spain. This method simplifies uncertainty assessment and allows other countries to take advantage of their sensitivity analyses.
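
    The band-construction idea described above can be sketched as follows. This is an illustrative reconstruction, not the paper's model: the sector names, growth rates, and the ±30% perturbation of driving factors are hypothetical placeholders.

```python
# Illustrative sketch of nonstatistical uncertainty bands via sensitivity
# analysis: project each sector's emissions from a driving factor, then
# perturb the factors to bound the projection. Sector names, growth rates,
# and the perturbation size are hypothetical, not the paper's values.

def project(base, growth, years):
    """Project an emission value forward with compound annual growth."""
    return base * (1.0 + growth) ** years

def uncertainty_band(sectors, years, delta=0.3):
    """Lower/central/upper totals from a +/-delta shift in each growth rate."""
    low = sum(project(b, g * (1 - delta), years) for b, g in sectors.values())
    mid = sum(project(b, g, years) for b, g in sectors.values())
    high = sum(project(b, g * (1 + delta), years) for b, g in sectors.values())
    return low, mid, high

# Hypothetical sectors: base-year emissions (kt) and annual growth of driver.
sectors = {"power plants": (100.0, 0.02), "agriculture": (50.0, 0.01)}
low, mid, high = uncertainty_band(sectors, years=10)
```

    In the real methodology each sector's driving factors (fuel use, livestock numbers, and so on) would be perturbed individually rather than through a single growth rate.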

  8. Possibility-induced simplified neutrosophic aggregation operators and their application to multi-criteria group decision-making

    NASA Astrophysics Data System (ADS)

    Şahin, Rıdvan; Liu, Peide

    2017-07-01

    Simplified neutrosophic sets (SNSs) are an appropriate tool for expressing the incompleteness, indeterminacy and uncertainty of the evaluation objects in a decision-making process. In this study, we define the concept of a possibility SNS, which carries two types of information: the neutrosophic performance provided by the evaluation objects and its possibility degree, expressed as a value between zero and one. Because existing neutrosophic aggregation models for SNSs cannot effectively fuse these two kinds of information, we propose two novel aggregation operators that account for possibility, named the possibility-induced simplified neutrosophic weighted arithmetic averaging operator and the possibility-induced simplified neutrosophic weighted geometric averaging operator, and discuss their properties. Moreover, we develop a method based on the proposed aggregation operators for solving multi-criteria group decision-making problems with possibility simplified neutrosophic information, in which the weights of decision-makers and decision criteria are calculated from an entropy measure. Finally, a practical example is used to show the practicality and effectiveness of the proposed method.
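
    For orientation, the standard simplified neutrosophic weighted arithmetic averaging operator can be sketched as below. The `possibility_weights` helper is one plausible reading of how possibility degrees could modulate the weights; it is a hypothetical illustration, not the operator defined in the paper.

```python
# Sketch of simplified neutrosophic weighted arithmetic averaging (SNWAA)
# over (truth, indeterminacy, falsity) triples, plus a hypothetical way to
# fold possibility degrees into the weights. Not the paper's exact operator.
from math import prod

def snwaa(values, weights):
    """SNWAA of (T, I, F) triples; weights must sum to 1."""
    t = 1.0 - prod((1.0 - T) ** w for (T, _, _), w in zip(values, weights))
    i = prod(I ** w for (_, I, _), w in zip(values, weights))
    f = prod(F ** w for (_, _, F), w in zip(values, weights))
    return t, i, f

def possibility_weights(weights, possibilities):
    """Fold possibility degrees into the weights and renormalize
    (a plausible reading of 'possibility-induced', assumed here)."""
    raw = [w * p for w, p in zip(weights, possibilities)]
    s = sum(raw)
    return [r / s for r in raw]

vals = [(0.7, 0.2, 0.1), (0.5, 0.4, 0.3)]
w = possibility_weights([0.6, 0.4], [0.9, 0.8])
agg = snwaa(vals, w)
```

    Note the idempotency property: aggregating a single triple with weight 1 returns the triple unchanged, a standard sanity check for such operators.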

  9. Lubrication Flows.

    ERIC Educational Resources Information Center

    Papanastasiou, Tasos C.

    1989-01-01

    Discusses fluid mechanics for undergraduates including the differential Navier-Stokes equations, dimensional analysis and simplified dimensionless numbers, control volume principles, the Reynolds lubrication equation for confined and free surface flows, capillary pressure, and simplified perturbation techniques. Provides a vertical dip coating…

  10. Fault Diagnostics for Turbo-Shaft Engine Sensors Based on a Simplified On-Board Model

    PubMed Central

    Lu, Feng; Huang, Jinquan; Xing, Yaodong

    2012-01-01

    Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for turbo-shaft engine key sensors is mainly based on a double redundancies technique, and this can't be satisfied in some occasions as lack of judgment. The simplified on-board model provides the analytical third channel against which the dual channel measurements are compared, while the hardware redundancy will increase the structure complexity and weight. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, this is built up via dynamic parameters method. Sensor fault detection, diagnosis (FDD) logic is designed, and two types of sensor failures, such as the step faults and the drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis and redundancy recovery. Finally, experiments on this method are carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient. PMID:23112645
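
    The triplex voting logic described above can be sketched as follows. This is an illustrative reconstruction of odd-channel-out isolation, not the authors' implementation; the tolerance value and averaging choices are assumptions.

```python
# Sketch of triplex sensor fault isolation: two hardware channels plus one
# analytical (model-based) channel. The odd channel out is declared faulty
# and a healthy value is reconstructed from the agreeing pair.

def diagnose(ch_a, ch_b, model, tol):
    """Return (status, reconstructed_value); tol is application-specific."""
    ab = abs(ch_a - ch_b) <= tol
    am = abs(ch_a - model) <= tol
    bm = abs(ch_b - model) <= tol
    if ab and am and bm:
        return "healthy", (ch_a + ch_b) / 2.0
    if bm and not ab and not am:          # A disagrees with both others
        return "channel A faulty", (ch_b + model) / 2.0
    if am and not ab and not bm:          # B disagrees with both others
        return "channel B faulty", (ch_a + model) / 2.0
    if ab and not am and not bm:          # model is the outlier
        return "model deviation", (ch_a + ch_b) / 2.0
    return "ambiguous", model

status, value = diagnose(100.0, 101.0, 150.0, tol=5.0)
```

    A step fault appears as a sudden large discrepancy on one channel, while a drift fault crosses the tolerance gradually; both end up isolated by the same pairwise comparisons.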

  11. Fault diagnostics for turbo-shaft engine sensors based on a simplified on-board model.

    PubMed

    Lu, Feng; Huang, Jinquan; Xing, Yaodong

    2012-01-01

    Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for key turbo-shaft engine sensors relies mainly on a dual-redundancy technique, which cannot always isolate the faulty channel when the two measurements disagree, while adding a third hardware channel would increase structural complexity and weight. The simplified on-board model instead provides an analytical third channel against which the dual-channel measurements are compared. The simplified turbo-shaft model, containing a gas generator model and a power turbine model with loads, is built up via a dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failure, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis and redundancy recovery. Finally, experiments on this method are carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient.

  12. Simplified fatigue life analysis for traction drive contacts

    NASA Technical Reports Server (NTRS)

    Rohn, D. A.; Loewenthal, S. H.; Coy, J. J.

    1980-01-01

    A simplified fatigue life analysis for traction drive contacts of arbitrary geometry is presented. The analysis is based on the Lundberg-Palmgren theory used for rolling-element bearings. The effects of torque, element size, speed, contact ellipse ratio, and the influence of traction coefficient are shown. The analysis shows that within the limits of the available traction coefficient, traction contacts exhibit longest life at high speeds. Multiple, load-sharing roller arrangements have an advantageous effect on system life, torque capacity, power-to-weight ratio and size.
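
    The speed trend claimed above can be illustrated with a Lundberg-Palmgren-style scaling sketch. The load-life exponent of 3 (point contact), the capacity constant, and the cycle constant are assumptions chosen only to show the trend, not values from the report.

```python
# Minimal Lundberg-Palmgren-style scaling sketch: at fixed transmitted
# power, contact load ~ torque ~ power/speed, and L10 life in stress
# cycles scales as (capacity/load)^3. Constants are hypothetical.

def life_hours(power_kw, speed_rpm, capacity=5.0e3, k=1.0e6):
    """Contact fatigue life in hours at fixed transmitted power."""
    torque = power_kw * 9549.0 / speed_rpm        # N*m, standard conversion
    l10_cycles = k * (capacity / torque) ** 3     # load-life exponent of 3
    return l10_cycles / (speed_rpm * 60.0)        # cycles -> hours

slow = life_hours(10.0, speed_rpm=1000.0)
fast = life_hours(10.0, speed_rpm=4000.0)         # same power, higher speed
```

    Quadrupling the speed at constant power cuts the contact load by four, multiplying cycle life by 64; even after dividing by the fourfold cycle accumulation rate, life in hours grows sixteenfold, which is the high-speed advantage the abstract describes.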

  13. Simplified computational methods for elastic and elastic-plastic fracture problems

    NASA Technical Reports Server (NTRS)

    Atluri, Satya N.

    1992-01-01

    An overview is given of some of the recent (1984-1991) developments in computational/analytical methods in the mechanics of fractures. Topics covered include analytical solutions for elliptical or circular cracks embedded in isotropic or transversely isotropic solids, with crack faces being subjected to arbitrary tractions; finite element or boundary element alternating methods for two or three dimensional crack problems; a 'direct stiffness' method for stiffened panels with flexible fasteners and with multiple cracks; multiple site damage near a row of fastener holes; an analysis of cracks with bonded repair patches; methods for the generation of weight functions for two and three dimensional crack problems; and domain-integral methods for elastic-plastic or inelastic crack mechanics.

  14. Analysis of glyphosate and aminomethylphosphonic acid in leaves from Coffea arabica using high performance liquid chromatography with quadrupole mass spectrometry detection.

    PubMed

    Schrübbers, Lars C; Masís-Mora, Mario; Rojas, Elizabeth Carazo; Valverde, Bernal E; Christensen, Jan H; Cedergreen, Nina

    2016-01-01

    Glyphosate is a commonly applied herbicide in coffee plantations. Because of its non-selective mode of action, it can damage the crop when exposure occurs through spray drift; it is therefore of interest to study the fate of glyphosate in coffee plants. The aim of this study was to develop an analytical method for accurate and precise quantification of glyphosate and its main metabolite, aminomethylphosphonic acid (AMPA), at trace levels in coffee leaves using liquid chromatography with single-quadrupole mass spectrometry detection. The method is based on a two-step solid phase extraction (SPE) with an intermediate derivatization reaction using 9-fluorenylmethylchloroformate (FMOC). An isotope dilution method was used to account for matrix effects and to enhance confidence in analyte identification. The limits of quantification (LOQ) for glyphosate and AMPA in coffee leaves were 41 and 111 μg kg⁻¹ dry weight, respectively. A design of experiments (DOE) approach was used for the method optimization. The sample clean-up procedure can be simplified for the analysis of less challenging matrices, for laboratories with a tandem mass spectrometry detector, and for cases in which quantification limits above 0.1 mg kg⁻¹ are acceptable, which is often the case for glyphosate. The method is robust and offers high identification confidence, while being suitable for most commercial and academic laboratories. All leaf samples from the five coffee fields analyzed (n=21) contained glyphosate, while AMPA was absent. The simplified clean-up procedure was successfully validated for coffee leaves, rice, black beans and river water.

  15. Environmental analysis Waste Isolation Pilot Plant (WIPP) cost reduction proposals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The Waste Isolation Pilot Plant (WIPP) is a research and development facility to demonstrate the safe disposal of radioactive wastes resulting from the defense activities and programs of the United States government. The facility is planned to be developed in bedded salt at the Los Medanos site in southeastern New Mexico. The environmental consequences of construction and operation of the WIPP facility are documented in ''Final Environmental Impact Statement, Waste Isolation Pilot Plant''. The proposed action addressed by this environmental analysis is to simplify and reduce the scope of the WIPP facility as it is currently designed. The proposed changes to the existing WIPP design are: limit the waste storage rate to 500,000 cubic feet per year; eliminate one shaft and revise the underground ventilation system; eliminate the underground conveyor system; combine the Administration Building, the Underground Personnel Building and the Waste Handling Building office area; simplify the central monitoring system; simplify the security control systems; modify the Waste Handling Building; simplify the storage exhaust system; modify the above ground salt handling logistics; simplify the power system; reduce overall site features; simplify the Warehouse/Shops Building and eliminate the Vehicle Maintenance Building; and allow resource recovery in Control Zone IV.

  16. Spectroscopy by joint spectral and time domain optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Szkulmowski, Maciej; Tamborski, Szymon; Wojtkowski, Maciej

    2015-03-01

    We present a methodology for the spectroscopic examination of absorbing media that combines Spectral Optical Coherence Tomography and Fourier Transform Spectroscopy. The method is based on the joint Spectral and Time OCT computational scheme and simplifies the data analysis procedure compared with the commonly used windowing-based Spectroscopic OCT methods. The proposed experimental setup is self-calibrating in terms of wavelength-to-pixel assignment. The performance of the method in measuring absorption spectra was checked using a reflecting phantom filled with an absorbing agent (indocyanine green). The results show quantitative agreement with the exact results provided by the reference method under controlled conditions.

  17. Efficient solution of the simplified PN equations

    DOE PAGES

    Hamilton, Steven P.; Evans, Thomas M.

    2014-12-23

    We present new solver strategies for the multigroup SPN equations for nuclear reactor analysis. By forming the complete matrix over space, moments, and energy, a robust set of solution strategies may be applied. Power iteration, shifted power iteration, Rayleigh quotient iteration, Arnoldi's method, and a generalized Davidson method, each using algebraic and physics-based multigrid preconditioners, have been compared on the C5G7 MOX test problem as well as an operational PWR model. These results show that the most efficient approach is the generalized Davidson method, which is 30-40 times faster than traditional power iteration and 6-10 times faster than Arnoldi's method.
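
    The power-iteration baseline that the comparison starts from can be sketched in a few lines. The matrix below is a small dense stand-in, not an SPN operator; the point is only the repeated multiply-and-normalize loop whose slow convergence motivates the Davidson-type methods.

```python
# Power iteration on a small dense matrix -- the baseline eigenvalue
# solver the SPN study compares against. Matrix is a stand-in, not SPN.

def mat_vec(A, x):
    """Dense matrix-vector product on nested lists."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def power_iteration(A, tol=1e-12, max_iter=10_000):
    """Dominant eigenvalue/eigenvector of A by repeated multiplication."""
    x = [1.0] * len(A)
    lam = 0.0
    for _ in range(max_iter):
        y = mat_vec(A, x)
        lam_new = max(abs(v) for v in y)      # infinity-norm estimate
        x = [v / lam_new for v in y]
        done = abs(lam_new - lam) < tol
        lam = lam_new
        if done:
            break
    return lam, x

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 0.0],
     [0.0, 0.0, 1.0]]
lam, x = power_iteration(A)
```

    Convergence is governed by the ratio of the two largest eigenvalues, which for reactor problems is close to one; that is why subspace methods such as Arnoldi and generalized Davidson pay off so dramatically.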

  18. Equivalent model optimization with cyclic correction approximation method considering parasitic effect for thermoelectric coolers.

    PubMed

    Wang, Ning; Chen, Jiajun; Zhang, Kun; Chen, Mingming; Jia, Hongzhi

    2017-11-21

    As thermoelectric coolers (TECs) have become highly integrated in high-heat-flux chips and high-power devices, the parasitic effect between component layers has become increasingly obvious. In this paper, a cyclic correction method for the TEC model is proposed using the equivalent parameters of the proposed simplified model, which were refined from the intrinsic parameters and parasitic thermal conductance. The results show that the simplified model agrees well with the data of a commercial TEC under different heat loads. Furthermore, the temperature difference of the simplified model is closer to the experimental data than the conventional model and the model containing parasitic thermal conductance at large heat loads. The average errors in the temperature difference between the proposed simplified model and the experimental data are no more than 1.6 K, and the error is only 0.13 K when the absorbed heat power Qc is equal to 80% of the maximum achievable absorbed heat power Qmax. The proposed method and model provide a more accurate solution for integrated TECs that are small in size.
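
    For context, the textbook single-stage TEC heat balance, with an added parasitic conductance term of the kind the paper corrects for, can be sketched as below. The parameter values are illustrative, not the commercial module's data, and the parasitic term is a simple lumped assumption.

```python
# Standard single-stage TEC cold-side heat balance, with an extra lumped
# parasitic thermal conductance term. Parameter values are illustrative.

def tec_cold_side_heat(alpha, R, K, I, t_cold, t_hot, k_parasitic=0.0):
    """Absorbed heat at the cold junction:
    Qc = alpha*I*Tc - 0.5*I^2*R - (K + K_parasitic)*(Th - Tc),
    i.e. Peltier cooling minus half the Joule heat minus back-conduction."""
    dT = t_hot - t_cold
    return alpha * I * t_cold - 0.5 * I ** 2 * R - (K + k_parasitic) * dT

q_ideal = tec_cold_side_heat(0.05, 2.0, 0.5, 3.0, 285.0, 300.0)
q_parasitic = tec_cold_side_heat(0.05, 2.0, 0.5, 3.0, 285.0, 300.0, 0.1)
```

    Any nonzero parasitic conductance leaks heat back to the cold side, lowering the usable Qc; the paper's cyclic correction refines the equivalent parameters so the simplified model absorbs this effect.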

  19. Highly simplified lateral flow-based nucleic acid sample preparation and passive fluid flow control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cary, Robert E.

    2015-12-08

    Highly simplified lateral flow chromatographic nucleic acid sample preparation methods, devices, and integrated systems are provided for the efficient concentration of trace samples and the removal of nucleic acid amplification inhibitors. Methods for capturing and reducing inhibitors of nucleic acid amplification reactions, such as humic acid, using polyvinylpyrrolidone treated elements of the lateral flow device are also provided. Further provided are passive fluid control methods and systems for use in lateral flow assays.

  20. Highly simplified lateral flow-based nucleic acid sample preparation and passive fluid flow control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cary, Robert B.

    Highly simplified lateral flow chromatographic nucleic acid sample preparation methods, devices, and integrated systems are provided for the efficient concentration of trace samples and the removal of nucleic acid amplification inhibitors. Methods for capturing and reducing inhibitors of nucleic acid amplification reactions, such as humic acid, using polyvinylpyrrolidone treated elements of the lateral flow device are also provided. Further provided are passive fluid control methods and systems for use in lateral flow assays.

  1. A simplified Forest Inventory and Analysis database: FIADB-Lite

    Treesearch

    Patrick D. Miles

    2008-01-01

    This publication is a simplified version of the Forest Inventory and Analysis Data Base (FIADB) for users who do not need to compute sampling errors and may find the FIADB unnecessarily complex. Possible users include GIS specialists who may be interested only in identifying and retrieving geographic information and per acre values for the set of plots used in...

  2. Photographic and drafting techniques simplify method of producing engineering drawings

    NASA Technical Reports Server (NTRS)

    Provisor, H.

    1968-01-01

    Combination of photographic and drafting techniques has been developed to simplify the preparation of three dimensional and dimetric engineering drawings. Conventional photographs can be converted to line drawings by making copy negatives on high contrast film.

  3. Improved dynamic analysis method using load-dependent Ritz vectors

    NASA Technical Reports Server (NTRS)

    Escobedo-Torres, J.; Ricles, J. M.

    1993-01-01

    The dynamic analysis of large space structures is important in order to predict their behavior under operating conditions. Computer models of large space structures are characterized by a large number of degrees of freedom, and the computational effort required to carry out the analysis is very large. Conventional methods of solution utilize a subset of the eigenvectors of the system, but for systems with many degrees of freedom, the solution of the eigenproblem is in many cases the most costly phase of the analysis. For this reason, alternate solution methods need to be considered. It is important that the method chosen for the analysis be efficient and that accurate results be obtainable. The load-dependent Ritz vector method is presented as an alternative to the classical normal mode methods for obtaining dynamic responses of large space structures. A simplified model of a space station is used to compare results. Results show that the load-dependent Ritz vector method predicts the dynamic response better than the classical normal mode method. Even though this alternate method is very promising, further studies are necessary to fully understand its attributes and limitations.

  4. Simplified method for numerical modeling of fiber lasers.

    PubMed

    Shtyrina, O V; Yarutkina, I A; Fedoruk, M P

    2014-12-29

    A simplified numerical approach to the modeling of dissipative dispersion-managed fiber lasers is examined. We present a new numerical iteration algorithm for finding the periodic solutions of the system of nonlinear ordinary differential equations describing the intra-cavity dynamics of the dissipative soliton characteristics in dispersion-managed fiber lasers. We demonstrate that results obtained using the simplified model are in good agreement with full numerical modeling based on the corresponding partial differential equations.

  5. Simplified analysis and optimization of space base and space shuttle heat rejection systems

    NASA Technical Reports Server (NTRS)

    Wulff, W.

    1972-01-01

    A simplified radiator system analysis was performed to predict steady-state radiator system performance, which was found to be describable in terms of five non-dimensional system parameters. The governing differential equations are integrated numerically to yield the enthalpy rejection of the coolant fluid. The simplified analysis was extended to produce the derivatives of the coolant exit temperature with respect to the governing system parameters. A procedure was developed to find the optimum set of system parameters which yields the lowest possible coolant exit temperature for either a given projected area or a given total mass. The process can be inverted to yield either the minimum area or the minimum mass, together with the optimum geometry, for a specified heat rejection rate.
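
    The kind of numerical integration involved can be sketched with a one-equation coolant energy balance along the panel. This is a generic illustration, not the report's five-parameter formulation: the emissivity, flow capacity rate, and geometry below are hypothetical, and environmental irradiation is neglected.

```python
# Minimal steady-state radiator sketch: coolant temperature along the
# panel follows mdot*cp*dT/dx = -eps*sigma*w*T^4 (radiation to space,
# no environmental irradiation). Property values are illustrative.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def radiator_exit_temp(t_in, length, mdot_cp, eps=0.9, width=1.0, n=10_000):
    """Integrate the coolant energy balance with forward Euler."""
    dx = length / n
    t = t_in
    for _ in range(n):
        t -= eps * SIGMA * width * t ** 4 / mdot_cp * dx
    return t

t_out = radiator_exit_temp(t_in=350.0, length=10.0, mdot_cp=50.0)
heat_rejected = 50.0 * (350.0 - t_out)   # mdot*cp*(Tin - Tout), W
```

    Because this simple form even has a closed-form solution (T^-3 grows linearly along the panel), it shows why the exit temperature collapses onto a few non-dimensional groups, here essentially eps*sigma*w*L*Tin^3/(mdot*cp).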

  6. Generalized vegetation map of North Merritt Island based on a simplified multispectral analysis

    NASA Technical Reports Server (NTRS)

    Poonai, P.; Floyd, W. J.; Rahmani, M. A.

    1977-01-01

    A simplified system for classification of multispectral data was used to make a generalized map of the ground features of North Merritt Island. Subclassification of vegetation within broad categories yielded promising results, which led to a completely automatic method and to the production of satisfactory detailed maps. Changes in an area north of Happy Hammocks are evidently related to the water relations of the soil and are not associated with the previous winter's freeze damage, which affected mainly the mangrove species, likely to reestablish themselves by natural processes. A supplementary investigation involving reflectance studies in the laboratory showed some variation in the reflectance of detached citrus leaves, at wavelengths between 400 and 700 nanometers, over a period of seven days during which the leaves were kept in a laboratory atmosphere.

  7. Boundary element analysis of corrosion problems for pumps and pipes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyasaka, M.; Amaya, K.; Kishimoto, K.

    1995-12-31

    Three-dimensional (3D) and axi-symmetric boundary element methods (BEM) were developed to quantitatively estimate cathodic protection and macro-cell corrosion. For 3D analysis, a multiple-region method (MRM) was developed in addition to a single-region method (SRM). The validity and usefulness of the BEMs were demonstrated by comparing numerical results with experimental data from galvanic corrosion systems of a cylindrical model and a seawater pipe, and from a cathodic protection system of an actual seawater pump. It was shown that a highly accurate analysis could be performed for fluid machines handling seawater with complex 3D fields (e.g. a seawater pump) by taking account of the flow rate and time dependencies of the polarization curve. Compared to the 3D BEM, the axi-symmetric BEM permitted large reductions in the numbers of elements and nodes, which greatly simplified analysis of axi-symmetric fields such as pipes. Computational accuracy and CPU time were compared between analyses using two approximation methods for polarization curves: a logarithmic-approximation method and a linear-approximation method.

  8. Turbulent Dispersion Modelling in a Complex Urban Environment - Data Analysis and Model Development

    DTIC Science & Technology

    2010-02-01

    Technology Laboratory (Dstl) is used as a benchmark for comparison. Comparisons are also made with some more practically oriented computational fluid dynamics...predictions. To achieve clarity in the range of approaches available for practical models of contaminant dispersion in urban areas, an overview of...complexity of those methods is simplified to a degree that allows straightforward practical implementation and application. Using these results as a

  9. Lagrangian methods in nonlinear plasma wave interaction

    NASA Technical Reports Server (NTRS)

    Crawford, F. W.

    1980-01-01

    Analysis of nonlinear plasma wave interactions is usually very complicated, and simplifying mathematical approaches are highly desirable. The application of averaged-Lagrangian methods offers a considerable reduction in effort, with improved insight into synchronism and conservation (Manley-Rowe) relations. This chapter indicates how suitable Lagrangian densities have been defined, expanded, and manipulated to describe nonlinear wave-wave and wave-particle interactions in the microscopic, macroscopic and cold plasma models. Recently, further simplifications have been introduced by the use of techniques derived from Lie algebra. These and likely future developments are reviewed briefly.

  10. Mining dynamic noteworthy functions in software execution sequences

    PubMed Central

    Huang, Guoyan; Wang, Yuqian; He, Haitao; Ren, Jiadong

    2017-01-01

    As the quality of crucial entities can directly affect that of the software, their identification and protection are an important premise for effective software development, management, maintenance and testing, and thus contribute to improving software quality and its attack-defending ability. Most existing analyses and evaluations of important entities, such as code-based static structure analysis, do not reflect the actual running of the software. In this paper, from the perspective of the software execution process, we proposed an approach to mine dynamic noteworthy functions (DNFM) in software execution sequences. First, through software decompiling and the tracking of stack changes, execution traces composed of a series of function addresses were acquired. These traces were then modeled as execution sequences and simplified to obtain simplified sequences (SFS), from which patterns were extracted with a pattern extraction (PE) algorithm. After that, the evaluation indicators inner-importance and inter-importance were designed to measure the noteworthiness of functions in the DNFM algorithm. Finally, the functions were sorted by their noteworthiness. The experimental results were compared with those of two traditional complex-network-based node mining methods, PageRank and DegreeRank. The results show that the DNFM method can mine noteworthy functions in software effectively and precisely. PMID:28278276

  11. Nonlinear transient analysis of multi-mass flexible rotors - theory and applications

    NASA Technical Reports Server (NTRS)

    Kirk, R. G.; Gunter, E. J.

    1973-01-01

    The equations of motion necessary to compute the transient response of multi-mass flexible rotors are formulated to include unbalance, rotor acceleration, and flexible damped nonlinear bearing stations. A method of calculating the unbalance response of flexible rotors from a modified Myklestad-Prohl technique is discussed in connection with the method of solution for the transient response. Several special cases of simplified rotor-bearing systems are presented and analyzed for steady-state response, stability, and transient behavior. These simplified rotor models produce extensive design information necessary to ensure stable performance of elastically mounted rotor-bearing systems under varying levels and forms of excitation. The nonlinear journal bearing force expressions derived from the short bearing approximation are utilized in the study of the stability and transient response of the floating bush squeeze damper support system. Both rigid and flexible rotor models are studied, and results indicate that the stability of flexible rotors supported by journal bearings can be greatly improved by the use of squeeze damper supports. Results from linearized stability studies of flexible rotors indicate that a tuned support system can greatly improve the performance of the units from the standpoint of unbalanced response and impact loading. Extensive stability and design charts may be readily produced for given rotor specifications by the computer codes presented in this analysis.

  12. Performance Analysis and Design Synthesis (PADS) computer program. Volume 2: Program description, part 1

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The Performance Analysis and Design Synthesis (PADS) computer program has a two-fold purpose. It can size launch vehicles in conjunction with calculus-of-variations optimal trajectories and can also be used as a general-purpose branched trajectory optimization program. In the former use, it has the Space Shuttle Synthesis Program as well as a simplified stage weight module for optimally sizing manned recoverable launch vehicles. For trajectory optimization alone or with sizing, PADS has two trajectory modules. The first trajectory module uses the method of steepest descent; the second employs the method of quasilinearization, which requires a starting solution from the first trajectory module. For Volume 1 see N73-13199.

  13. Simplified Asset Indices to Measure Wealth and Equity in Health Programs: A Reliability and Validity Analysis Using Survey Data From 16 Countries

    PubMed Central

    Chakraborty, Nirali M; Fry, Kenzo; Behl, Rasika; Longfield, Kim

    2016-01-01

    ABSTRACT Background: Social franchising programs in low- and middle-income countries have tried using the standard wealth index, based on the Demographic and Health Survey (DHS) questionnaire, in client exit interviews to assess clients’ relative wealth compared with the national wealth distribution to ensure equity in service delivery. The large number of survey questions required to capture the wealth index variables has proved cumbersome for programs. Methods: Using an adaptation of the Delphi method, we developed shortened wealth indices and in February 2015 consulted 15 stakeholders in equity measurement. Together, we selected the best of 5 alternative indices, accompanied by 2 measures of agreement (percent agreement and Cohen’s kappa statistic) comparing wealth quintile assignment in the new indices to the full DHS index. The panel agreed that reducing the number of assets was more important than standardization across countries because a short index would provide strong indication of client wealth and be easier to collect and use in the field. Additionally, the panel agreed that the simplified index should be highly correlated with the DHS for each country (kappa ≥ 0.75) for both national and urban-specific samples. We then revised indices for 16 countries and selected the minimum number of questions and question options required to achieve a kappa statistic ≥ 0.75 for both national and urban populations. 
Findings: After combining the 5 wealth quintiles into 3 groups, which the expert panel deemed more programmatically meaningful, reliability between the standard DHS wealth index and each of 3 simplified indices was high (median kappa = 0.81, 0.86, and 0.77, respectively, for index B that included only the common questions from the DHS VI questionnaire, index D that included the common questions plus country-specific questions, and index E that found the shortest list of common and country-specific questions that met the minimum reliability criteria of kappa ≥ 0.75). Index E was the simplified index of choice because it was reliable in national and urban contexts while requiring the fewest number of survey questions—6 to 18 per country compared with 25 to 47 in the original DHS wealth index (a 66% average reduction). Conclusion: Social franchise clinics and other types of service delivery programs that want to assess client wealth in relation to a national or urban population can do so with high reliability using a short questionnaire. Future uses of the simplified asset questionnaire include a mobile application for rapid data collection and analysis. PMID:27016550
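
    The agreement statistic used throughout this record, Cohen's kappa, can be computed as below. The wealth-group labels are toy data, not the survey's; the formula itself is the standard one (observed agreement corrected for chance agreement).

```python
# Cohen's kappa: agreement between two label assignments beyond chance,
# as used to compare wealth-group assignment from a full vs. a simplified
# asset index. The label lists here are toy data, not the survey's.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    ca, cb = Counter(labels_a), Counter(labels_b)
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1.0 - expected)

full = [1, 1, 2, 2, 3, 3, 1, 2, 3, 3]          # 3 wealth groups, full index
simplified = [1, 1, 2, 2, 3, 3, 1, 2, 3, 2]    # one disagreement
kappa = cohens_kappa(full, simplified)
```

    Values above the study's 0.75 threshold indicate that the simplified index reproduces the full-index grouping well beyond what chance alone would give.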

  14. Time-resolved x-ray scattering instrumentation

    DOEpatents

    Borso, C.S.

    1985-11-21

    An apparatus and method for increased speed and efficiency of data compilation and analysis in real time are presented in this disclosure. Data are sensed and grouped in combinations in accordance with predetermined logic so that a simplified, reduced signal results; for example, data values with offsetting algebraic signs are summed pairwise, reducing the magnitude of each net pair sum. Bit storage requirements are thereby reduced, and the speed of data compilation and analysis is increased by manipulating shorter bit-length data values, making real-time evaluation possible.
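    The pairwise-summing idea can be illustrated with a toy reduction in Python; the greedy opposite-sign pairing below is an illustrative reading of the disclosure, not the patented circuit logic.

```python
def pairwise_reduce(samples):
    """Pair values of opposite algebraic sign and sum each pair, shrinking
    magnitudes and hence the bit width needed to store them. The running
    total of the data is preserved."""
    pos = sorted(x for x in samples if x >= 0)
    neg = sorted((x for x in samples if x < 0), reverse=True)
    pairs = list(zip(pos, neg))
    reduced = [p + n for p, n in pairs]
    leftover = pos[len(pairs):] + neg[len(pairs):]
    return reduced + leftover

data = [907, -880, 13, -21, 450, -470]
out = pairwise_reduce(data)
```

    Here [907, -880, 13, -21, 450, -470] reduces to [-8, -20, 27]: the total is unchanged, but the largest magnitude drops from 907 to 27, so far fewer bits per value are needed.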

  15. Computational tools for multi-linked flexible structures

    NASA Technical Reports Server (NTRS)

    Lee, Gordon K. F.; Brubaker, Thomas A.; Shults, James R.

    1990-01-01

    A software module which designs and tests controllers and filters in Kalman estimator form, based on a polynomial state-space model, is discussed. The user-friendly program employs an interactive graphics approach to simplify the design process. A variety of input methods are provided to test the effectiveness of the estimator. Utilities are provided which address important issues in filter design such as graphical analysis, statistical analysis, and calculation time. The program also provides the user with the ability to save filter parameters, inputs, and outputs for future use.

  16. Application of a simplified definition of diastolic function in severe sepsis and septic shock.

    PubMed

    Lanspa, Michael J; Gutsche, Andrea R; Wilson, Emily L; Olsen, Troy D; Hirshberg, Eliotte L; Knox, Daniel B; Brown, Samuel M; Grissom, Colin K

    2016-08-04

    Left ventricular diastolic dysfunction is common in patients with severe sepsis or septic shock, but the best approach to categorization is unknown. We assessed the association of common measures of diastolic function with clinical outcomes and tested the utility of a simplified definition of diastolic dysfunction against the American Society of Echocardiography (ASE) 2009 definition. In this prospective observational study, patients with severe sepsis or septic shock underwent transthoracic echocardiography within 24 h of onset of sepsis (median 4.3 h). We measured echocardiographic parameters of diastolic function and used random forest analysis to assess their association with clinical outcomes (28-day mortality and ICU-free days to day 28) and thereby suggest a simplified definition. We then compared patients categorized by the ASE 2009 definition and our simplified definition. We studied 167 patients. The ASE 2009 definition categorized only 35% of patients. Random forest analysis demonstrated that the left atrial volume index and deceleration time, central to the ASE 2009 definition, were not associated with clinical outcomes. Our simplified definition used only e' and E/e', omitting the other measurements, and categorized 87% of patients. Patients categorized by either the ASE 2009 or our novel definition had similar clinical outcomes. In both definitions, worsened diastolic function was associated with increased prevalence of ischemic heart disease, diabetes, and hypertension. A novel, simplified definition of diastolic dysfunction categorized more patients with sepsis than the ASE 2009 definition. Patients categorized according to the simplified definition did not differ from those categorized according to the ASE 2009 definition with respect to clinical outcomes or comorbidities.
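    A two-parameter classification of the kind described might be sketched as follows; the cut-off values are placeholders chosen for illustration and are not the thresholds used in the study.

```python
def classify_diastolic(e_prime_cm_s, e_over_e_prime,
                       e_prime_cutoff=7.0, ratio_cutoffs=(8.0, 14.0)):
    """Toy grading driven only by e' and E/e'. The cut-offs are illustrative
    placeholders, not the study's published criteria."""
    if e_prime_cm_s >= e_prime_cutoff:
        return "normal"
    lo, hi = ratio_cutoffs
    if e_over_e_prime <= lo:
        return "grade I"
    if e_over_e_prime < hi:
        return "grade II"
    return "grade III"

grade = classify_diastolic(e_prime_cm_s=5.0, e_over_e_prime=10.0)
```

    Because every patient with measurable e' and E/e' receives a grade, a scheme like this leaves far fewer patients uncategorized than a definition requiring additional parameters.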

  17. A randomized controlled trial of the different impression methods for the complete denture fabrication: Patient reported outcomes.

    PubMed

    Jo, Ayami; Kanazawa, Manabu; Sato, Yusuke; Iwaki, Maiko; Akiba, Norihisa; Minakuchi, Shunsuke

    2015-08-01

    To compare the effect of conventional complete dentures (CDs) fabricated using two different impression methods on patient-reported outcomes in a randomized controlled trial (RCT). A cross-over RCT was performed with edentulous patients who required maxillary and mandibular CDs. Mandibular CDs were fabricated using two different methods. The conventional method used a custom tray border moulded with impression compound and a silicone impression material. The simplified method used a stock tray and alginate. Participants were randomly divided into two groups: the C-S group received the conventional method first, followed by the simplified method, and the S-C group received them in the reverse order. Adjustment was performed four times, and a wash-out period of 1 month was set. The primary outcome was general patient satisfaction, measured using visual analogue scales, and the secondary outcome was oral health-related quality of life, measured using the Japanese version of the Oral Health Impact Profile for edentulous patients (OHIP-EDENT-J). Twenty-four participants completed the trial. With regard to general patient satisfaction, the conventional method was rated significantly more acceptable than the simplified method. No significant differences were observed between the two methods in the OHIP-EDENT-J scores. This study showed that CDs fabricated with the conventional method were rated significantly higher for general patient satisfaction than those fabricated with the simplified method. CDs fabricated with the conventional method, which included a preliminary alginate impression in a stock tray followed by a final silicone impression in a border-moulded custom tray, resulted in higher general patient satisfaction. UMIN000009875. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. A Simplified and Systematic Method to Isolate, Culture, and Characterize Multiple Types of Human Dental Stem Cells from a Single Tooth.

    PubMed

    Bakkar, Mohammed; Liu, Younan; Fang, Dongdong; Stegen, Camille; Su, Xinyun; Ramamoorthi, Murali; Lin, Li-Chieh; Kawasaki, Takako; Makhoul, Nicholas; Pham, Huan; Sumita, Yoshinori; Tran, Simon D

    2017-01-01

    This chapter describes a simplified method that allows the systematic isolation of multiple types of dental stem cells such as dental pulp stem cells (DPSC), periodontal ligament stem cells (PDLSC), and stem cells of the apical papilla (SCAP) from a single tooth. Of specific interest is the modified laboratory approach to harvest/retrieve the dental pulp tissue by minimizing trauma to DPSC by continuous irrigation, reduction of frictional heat from the bur rotation, and reduction of the bur contact time with the dentin. Also, the use of a chisel and a mallet will maximize the number of live DPSC for culture. Steps demonstrating the potential for multiple cell differentiation lineages of each type of dental stem cell into either osteocytes, adipocytes, or chondrocytes are described. Flow cytometry, with a detailed strategy for cell gating and analysis, is described to verify characteristic markers of human mesenchymal multipotent stromal cells (MSC) from DPSC, PDLSC, or SCAP for subsequent experiments in cell therapy and in tissue engineering. Overall, this method can be adapted to any laboratory with a general setup for cell culture experiments.

  19. Feature-Motivated Simplified Adaptive PCNN-Based Medical Image Fusion Algorithm in NSST Domain.

    PubMed

    Ganasala, Padma; Kumar, Vinod

    2016-02-01

    Multimodality medical image fusion plays a vital role in diagnosis, treatment planning, and follow-up studies of various diseases. It provides a composite image containing critical information of source images required for better localization and definition of different organs and lesions. In the state-of-the-art image fusion methods based on nonsubsampled shearlet transform (NSST) and pulse-coupled neural network (PCNN), authors have used the normalized coefficient value to motivate the PCNN processing of both low-frequency (LF) and high-frequency (HF) sub-bands. This blurs the fused image and decreases its contrast. The main objective of this work is to design an image fusion method that gives a fused image with better contrast and more detailed information, suitable for clinical use. We propose a novel image fusion method utilizing feature-motivated adaptive PCNN in the NSST domain for fusion of anatomical images. The basic PCNN model is simplified, and an adaptive linking strength is used. Different features are used to motivate the PCNN processing of the LF and HF sub-bands. The proposed method is extended for fusion of a functional image with an anatomical image in the improved nonlinear intensity hue and saturation (INIHS) color model. Extensive fusion experiments have been performed on CT-MRI and SPECT-MRI datasets. Visual and quantitative analysis of experimental results proved that the proposed method provides satisfactory fusion outcome compared to other image fusion methods.

  20. TH-AB-201-10: Portal Dosimetry with Elekta IViewDose:Performance of the Simplified Commissioning Approach Versus Full Commissioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kydonieos, M; Folgueras, A; Florescu, L

    2016-06-15

    Purpose: Elekta recently developed a solution for in-vivo EPID dosimetry (iViewDose, Elekta AB, Stockholm, Sweden) in conjunction with the Netherlands Cancer Institute (NKI). This uses a simplified commissioning approach via Template Commissioning Models (TCMs), consisting of a subset of linac-independent pre-defined parameters. This work compares the performance of iViewDose using a TCM commissioning approach with that corresponding to full commissioning. Additionally, the dose reconstruction based on the simplified commissioning approach is validated via independent dose measurements. Methods: Measurements were performed at the NKI on a VersaHD™ (Elekta AB, Stockholm, Sweden). Treatment plans were generated with Pinnacle 9.8 (Philips Medical Systems, Eindhoven, The Netherlands). A Farmer chamber dose measurement and two EPID images were used to create a linac-specific commissioning model based on a TCM. A complete set of commissioning measurements was collected and a full commissioning model was created. The performance of iViewDose based on the two commissioning approaches was compared via a series of set-to-work tests in a slab phantom. In these tests, iViewDose reconstructs and compares EPID to TPS dose for square fields, IMRT and VMAT plans via global gamma analysis and isocentre dose difference. A clinical VMAT plan was delivered to a homogeneous Octavius 4D phantom (PTW, Freiburg, Germany). Dose was measured with the Octavius 1500 array and VeriSoft software was used for 3D dose reconstruction. EPID images were acquired. TCM-based iViewDose and 3D Octavius dose distributions were compared against the TPS. Results: For both the TCM-based and the full commissioning approaches, the pass rate, mean γ and dose difference were >97%, <0.5 and <2.5%, respectively. Equivalent gamma analysis results were obtained for iViewDose (TCM approach) and Octavius for a VMAT plan.
Conclusion: iViewDose produces similar results with the simplified and full commissioning approaches. Good agreement is obtained between iViewDose (simplified approach) and the independent measurement tool. This research is funded by Elekta Limited.
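    The global gamma analysis used in these set-to-work tests can be sketched in one dimension. This is the generic textbook formulation (distance-to-agreement combined with a globally normalised dose difference), not iViewDose's implementation, and the dose profiles are invented.

```python
import math

def gamma_index_1d(ref_pos, ref_dose, eval_pos, eval_dose,
                   dta_mm=3.0, dd_frac=0.03):
    """Global 1-D gamma analysis: for each reference point, minimise the
    combined distance-to-agreement / dose-difference metric over all
    evaluated points (doses normalised to the global maximum)."""
    d_max = max(ref_dose)
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        g2 = min(((rp - ep) / dta_mm) ** 2 +
                 ((rd - ed) / (dd_frac * d_max)) ** 2
                 for ep, ed in zip(eval_pos, eval_dose))
        gammas.append(math.sqrt(g2))
    return gammas

pos = [0.0, 1.0, 2.0, 3.0]     # positions in mm
ref = [1.0, 2.0, 2.0, 1.0]     # planned (TPS) dose
ev = [1.0, 2.0, 2.05, 1.0]     # reconstructed dose
g = gamma_index_1d(pos, ref, pos, ev)
pass_rate = sum(x <= 1.0 for x in g) / len(g)
```

    A point passes when its gamma is at most 1; the pass rate over all points is the headline figure reported above.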

  1. Comparison of two trajectory based models for locating particle sources for two rural New York sites

    NASA Astrophysics Data System (ADS)

    Zhou, Liming; Hopke, Philip K.; Liu, Wei

    Two back-trajectory-based statistical models, simplified quantitative transport bias analysis (QTBA) and residence-time weighted concentrations (RTWC), have been compared for their capabilities of identifying likely locations of source emissions contributing to observed particle concentrations at Potsdam and Stockton, New York. QTBA attempts to take into account the distribution of concentrations around the directions of the back trajectories. In the full QTBA approach, deposition processes (wet and dry) are also considered; simplified QTBA omits deposition. It is best used with multiple-site data. Similarly, the RTWC approach uses concentrations measured at different sites along with the back trajectories to distribute the concentration contributions across the spatial domain of the trajectories. In this study, these models are used in combination with the source contribution values obtained by a previous positive matrix factorization analysis of particle composition data from Potsdam and Stockton. The six common sources for the two sites (sulfate, soil, zinc smelter, nitrate, wood smoke, and copper smelter) were analyzed. The results of the two methods are consistent and locate large, clearly defined sources well. The RTWC approach can identify additional minor sources but may also give unrealistic estimates of the source locations.
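    The core of the RTWC idea, distributing each measured concentration over the grid cells its back trajectory resides in and then averaging per cell, can be sketched as follows. The data structures and numbers are assumptions for illustration.

```python
from collections import defaultdict

def rtwc(trajectories):
    """Residence-time weighted concentrations: each grid cell is assigned the
    concentration-weighted mean over all trajectory hours spent in it.
    `trajectories` is a list of (measured_conc, [cell, cell, ...]) where each
    cell entry represents one trajectory endpoint (hour) of residence."""
    total = defaultdict(float)
    hours = defaultdict(float)
    for conc, cells in trajectories:
        for cell in cells:
            total[cell] += conc
            hours[cell] += 1.0
    return {cell: total[cell] / hours[cell] for cell in total}

# Two hypothetical back trajectories arriving at a receptor
trajs = [
    (8.0, [(0, 0), (0, 1), (1, 1)]),   # polluted arrival
    (2.0, [(0, 0), (1, 0)]),           # clean arrival
]
field = rtwc(trajs)
```

    Cells visited only by polluted trajectories retain high weighted values, flagging likely source areas, while cells shared with clean trajectories are pulled down toward the mean.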

  2. Study on the Influence of Elevation of Tailing Dam on Stability

    NASA Astrophysics Data System (ADS)

    Wan, Shuai; Wang, Kun; Kong, Songtao; Zhao, Runan; Lan, Ying; Zhang, Run

    2017-12-01

    This paper takes a tailings dam in Yunnan as its object of study and uses theoretical analysis and numerical calculation to examine the effect of dam elevation on stability under seismic load, analysing the safety factor and the liquefaction area. The simplified Bishop method is adopted to calculate the dynamic safety factor, and the liquefaction area is analysed by comparing shear stresses, yielding the influence of elevation on the stability of the tailings dam. Under earthquake loading, the safety factor of the dam body decreases as elevation increases, and shallow tailings are susceptible to liquefaction. The liquefaction area is mainly concentrated near the bank below the water surface. These results provide a scientific basis for the design and safety management of tailings dams.
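    The simplified Bishop calculation referred to above is inherently iterative, because the factor of safety appears on both sides of its defining equation. A minimal static (non-seismic) sketch with invented slice data; the seismic extension used in the paper adds inertial terms not shown here.

```python
import math

def bishop_simplified(slices, c, phi_deg, tol=1e-6, max_iter=100):
    """Bishop's simplified-method factor of safety for a circular slip surface.
    Each slice is (W, alpha_deg, b, u): weight, base inclination, width, and
    pore pressure. Iterates because F appears on both sides of the equation."""
    tan_phi = math.tan(math.radians(phi_deg))
    F = 1.0
    for _ in range(max_iter):
        num = den = 0.0
        for W, alpha_deg, b, u in slices:
            a = math.radians(alpha_deg)
            m_a = math.cos(a) * (1.0 + math.tan(a) * tan_phi / F)
            num += (c * b + (W - u * b) * tan_phi) / m_a
            den += W * math.sin(a)
        F_new = num / den
        if abs(F_new - F) < tol:
            return F_new
        F = F_new
    return F

# Invented three-slice geometry: (weight kN, base angle deg, width m, pore kPa)
slices = [(100.0, 10.0, 2.0, 0.0),
          (150.0, 25.0, 2.0, 0.0),
          (120.0, 40.0, 2.0, 0.0)]
F = bishop_simplified(slices, c=10.0, phi_deg=30.0)
```

    A factor of safety above 1 indicates a stable slip circle under the assumed loading; seismic coefficients reduce it, which is the elevation-dependent effect the paper quantifies.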

  3. On the line-shape analysis of Compton profiles and its application to neutron scattering

    NASA Astrophysics Data System (ADS)

    Romanelli, G.; Krzystyniak, M.

    2016-05-01

    Analytical properties of Compton profiles are used in order to simplify the analysis of neutron Compton scattering experiments. In particular, the possibility of fitting the difference of Compton profiles is discussed as a way to greatly decrease the complexity of the data treatment, making the analysis easier, faster and more robust. In the context of the novel method proposed, two mathematical models describing the shapes of differenced Compton profiles are discussed: the simple Gaussian approximation for a harmonic and isotropic local potential, and an analytical Gauss-Hermite expansion for an anharmonic or anisotropic potential. The method is applied to data collected by the VESUVIO spectrometer at the ISIS pulsed neutron and muon source (UK) on copper and aluminium samples at ambient and low temperatures.
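    The simple Gaussian approximation mentioned above can be exercised numerically: sample a Gaussian profile of known width, then recover that width from the profile's second moment. The grid and width below are arbitrary illustrative choices, not VESUVIO data.

```python
import math

def gaussian_profile(y, sigma):
    """Gaussian Compton profile for a harmonic, isotropic local potential."""
    return math.exp(-y * y / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

def width_from_profile(ys, js):
    """Recover the width from the second moment of a sampled profile
    (trapezoidal integration; assumes the grid spans the profile's support)."""
    m0 = m2 = 0.0
    for (y0, j0), (y1, j1) in zip(zip(ys, js), zip(ys[1:], js[1:])):
        dy = y1 - y0
        m0 += 0.5 * (j0 + j1) * dy
        m2 += 0.5 * (y0 * y0 * j0 + y1 * y1 * j1) * dy
    return math.sqrt(m2 / m0)

ys = [0.05 * i - 15.0 for i in range(601)]    # momentum grid, about +/- 5 sigma
js = [gaussian_profile(y, sigma=3.0) for y in ys]
sigma_hat = width_from_profile(ys, js)
```

    The recovered width matches the planted one; for anharmonic or anisotropic potentials the Gauss-Hermite expansion adds correction terms to this Gaussian baseline.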

  4. 48 CFR 713.000 - Scope of part.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Scope of part. 713.000 Section 713.000 Federal Acquisition Regulations System AGENCY FOR INTERNATIONAL DEVELOPMENT CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES 713.000 Scope of part. The simplified...

  5. 48 CFR 713.000 - Scope of part.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Scope of part. 713.000 Section 713.000 Federal Acquisition Regulations System AGENCY FOR INTERNATIONAL DEVELOPMENT CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES 713.000 Scope of part. The simplified...

  6. A flow-cytometry-based method to simplify the analysis and quantification of protein association to chromatin in mammalian cells

    PubMed Central

    Forment, Josep V.; Jackson, Stephen P.

    2016-01-01

    Protein accumulation on chromatin has traditionally been studied using immunofluorescence microscopy or biochemical cellular fractionation followed by western immunoblot analysis. As a way to improve the reproducibility of this kind of analysis, make it easier to quantify and allow a streamlined application in high-throughput screens, we recently combined a classical immunofluorescence microscopy detection technique with flow cytometry. In addition to the features described above, and by combining it with detection of both DNA content and DNA replication, this method allows unequivocal and direct assignment of cell-cycle distribution of protein association to chromatin without the need for cell culture synchronization. Furthermore, it is relatively quick (no more than a working day from sample collection to quantification), requires less starting material compared to standard biochemical fractionation methods and overcomes the need for flat, adherent cell types that are required for immunofluorescence microscopy. PMID:26226461

  7. The influence of a wind tunnel on helicopter rotational noise: Formulation of analysis

    NASA Technical Reports Server (NTRS)

    Mosher, M.

    1984-01-01

    An analytical model is discussed that can be used to examine the effects of wind tunnel walls on helicopter rotational noise. A complete physical model of an acoustic source in a wind tunnel is described and a simplified version is then developed. This simplified model retains the important physical processes involved, yet it is more amenable to analysis. The simplified physical model is then modeled as a mathematical problem. An inhomogeneous partial differential equation with mixed boundary conditions is set up and then transformed into an integral equation. Details of generating a suitable Green's function and integral equation are included and the equation is discussed and also given for a two-dimensional case.

  8. Modal kinematics for multisection continuum arms.

    PubMed

    Godage, Isuru S; Medrano-Cerda, Gustavo A; Branson, David T; Guglielmino, Emanuele; Caldwell, Darwin G

    2015-05-13

    This paper presents a novel spatial kinematic model for multisection continuum arms based on mode shape functions (MSFs). Modal methods have been used in many disciplines, from finite element methods to structural analysis, to approximate complex and nonlinear parametric variations with simple mathematical functions. Given certain constraints and required accuracy, this helps to simplify complex phenomena with numerically efficient implementations, leading to fast computations. A successful application of modal approximation techniques to develop a new modal kinematic model for general variable-length multisection continuum arms is discussed. The proposed method overcomes the limitations associated with previous models and introduces a new approach for readily deriving exact, singularity-free and unique MSFs, avoiding mode switching. The model is able to simulate spatial bending as well as straight-arm motions (i.e., pure elongation/contraction), and introduces inverse position and orientation kinematics for multisection continuum arms. A kinematic decoupling feature, splitting position and orientation inverse kinematics, is introduced; this type of decoupling has not previously been presented for these types of robotic arms. The model also carefully accounts for physical constraints in the joint space to provide enhanced insight into practical mechanics and to impose actuator mechanical limitations onto the kinematics, thus generating fully realizable results. The proposed method is easily applicable to a broad spectrum of continuum arm designs.

  9. Development of a numerical model for vehicle-bridge interaction analysis of railway bridges

    NASA Astrophysics Data System (ADS)

    Kim, Hee Ju; Cho, Eun Sang; Ham, Jun Su; Park, Ki Tae; Kim, Tae Heon

    2016-04-01

    In the field of civil engineering, analyzing dynamic response has long been a main concern. The analysis methods can be divided into the moving-load method and the moving-mass method, and recent work has formulated separate equations of motion for the vehicle and the bridge. In this study, a numerical method is presented which can consider various train types and solves the equations of motion for vehicle-bridge interaction analysis by a non-iterative procedure through formulating the coupled equations of motion. Also, an accurate 3-dimensional numerical model of the KTX vehicle was developed in order to analyze its dynamic response characteristics. The equations of motion for the conventional trains are derived, and the numerical models of the conventional trains are idealized by a set of linear springs and dashpots with 18 degrees of freedom. The bridge models are simplified using 3-dimensional space frame elements based on the Euler-Bernoulli theory. Vertical and lateral rail irregularities are generated from PSD functions of the Federal Railroad Administration (FRA).
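    Generating a random irregularity profile from a target PSD is commonly done by spectral representation, superposing cosines with random phases whose amplitudes follow the PSD. A sketch with an illustrative power-law PSD; the coefficients are invented and are not the actual FRA track-class values.

```python
import math
import random

def irregularity_profile(xs, psd, omega_lo=0.01, omega_hi=2.0,
                         n_waves=200, seed=0):
    """Spectral-representation sample of a random track irregularity:
    superpose cosines whose amplitudes follow the target one-sided PSD."""
    rng = random.Random(seed)
    d_omega = (omega_hi - omega_lo) / n_waves
    waves = []
    for k in range(n_waves):
        omega = omega_lo + (k + 0.5) * d_omega
        amp = math.sqrt(2.0 * psd(omega) * d_omega)
        waves.append((amp, omega, rng.uniform(0.0, 2.0 * math.pi)))
    return [sum(a * math.cos(w * x + p) for a, w, p in waves) for x in xs]

# Illustrative power-law PSD (not the FRA track-class coefficients)
psd = lambda omega: 1e-4 / (omega ** 2 + 0.05 ** 2)
profile = irregularity_profile([i * 0.5 for i in range(200)], psd)
```

    Profiles generated this way feed the vehicle model as support excitations, so the simulated car-body response inherits the spectral content of the prescribed track class.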

  10. Rubble masonry response under cyclic actions: The experience of L’Aquila city (Italy)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fonti, Roberta, E-mail: roberta.fonti@tum.de; Barthel, Rainer, E-mail: r.barthel@lrz.tu-muenchen.de; Formisano, Antonio, E-mail: antoform@unina.it

    2015-12-31

    Several methods of analysis are available in engineering practice to study old masonry constructions. Two commonly used approaches in the field of seismic engineering are global and local analyses. Despite several years of research in this field, the various methodologies suffer from a lack of comprehensive experimental validation. This is mainly due to the difficulty in simulating the many different kinds of masonry and, accordingly, the non-linear response under horizontal actions. This issue can be addressed by examining the local response of isolated panels under monotonic and/or alternate actions. Different testing methodologies are commonly used to identify the local response of old masonry. These range from simplified pull-out tests to sophisticated in-plane monotonic tests. However, there is a lack of both knowledge and critical comparison between experimental validations and numerical simulations. This is mainly due to the difficulties in implementing irregular settings within both simplified and advanced numerical analyses. Similarly, the simulation of degradation effects within laboratory tests is difficult with respect to old masonry in-situ boundary conditions. Numerical models, particularly on rubble masonry, are commonly simplified. They are mainly based on a kinematic chain of rigid blocks able to perform different “modes of damage” of structures subjected to horizontal actions. This paper presents an innovative methodology for testing; its aim is to identify a simplified model for out-of-plane response of rubbleworks with respect to the experimental evidence. The case study of L’Aquila district is discussed.

  11. Charting a path forward: policy analysis of China's evolved DRG-based hospital payment system

    PubMed Central

    Liu, Rui; Shi, Jianwei; Yang, Beilei; Jin, Chunlin; Sun, Pengfei; Wu, Lingfang; Yu, Dehua; Xiong, Linping; Wang, Zhaoxin

    2017-01-01

    Abstract Background At present, the diagnosis-related groups-based prospective payment system (DRG-PPS) that has been implemented in China is merely a prototype called the simplified DRG-PPS, which is known as the ‘ceiling price for a single disease’. Given that studies on the effects of a simplified DRG-PPS in China have usually been controversial, we aim to synthesize evidence examining whether DRGs can reduce medical costs and length of stay (LOS) in China. Methods Data were searched from both Chinese [Wan Fang and China National Knowledge Infrastructure Database (CNKI)] and international databases (Web of Science and PubMed), as well as the official websites of Chinese health departments in the 2004–2016 period. Only studies with a design that included both experimental (with DRG-PPS implementation) and control groups (without DRG-PPS implementation) were included in the review. Results The studies were based on inpatient samples from public hospitals distributed in 12 provinces of mainland China. Among them, 80.95% (17/21) revealed that hospitalization costs could be reduced significantly, and 50.00% (8/16) indicated that length of stay could be decreased significantly. In addition, the government reports showed enormous differences in pricing standards and LOS across provinces, even for the same disease. Conclusions We conclude that the simplified DRGs are useful in controlling hospitalization costs, but they fail to reduce LOS. Much work remains to be done in China to improve the simplified DRG-PPS. PMID:28911128

  12. Simplified hydrodynamic analysis on the general shape of the hill charts of Francis turbines using shroud-streamline modeling

    NASA Astrophysics Data System (ADS)

    Iliev, I.; Trivedi, C.; Dahlhaug, O. G.

    2018-06-01

    The paper presents a simplified one-dimensional calculation of the efficiency hill-chart for Francis turbines, based on the velocity triangles at the inlet and outlet of the runner’s blade. Calculation is done for one streamline, namely the shroud streamline in the meridional section, where an efficiency model is established and iteratively approximated in order to satisfy the Euler equation for turbomachines at a wide operating range around the best efficiency point (BEP). Using the presented method, hill charts are calculated for one splitter-bladed Francis turbine runner and one Reversible Pump-Turbine (RPT) runner operated in the turbine mode. Both turbines have similar and relatively low specific speeds of nsQ = 23.3 and nsQ = 27, equal inlet and outlet diameters and are designed to fit in the same turbine rig for laboratory measurements (i.e. spiral casing and draft tube are the same). Calculated hill charts are compared against performance data obtained experimentally from model tests according to IEC standards for both turbines. Good agreement between theoretical and experimental results is observed when comparing the shapes of the efficiency contours in the hill-charts. The simplified analysis identifies the design parameters that define the general shape and inclination of the turbine’s hill charts and, with some additional improvements in the loss models, it can serve for quick assessment of performance at off-design conditions during the design process of hydraulic turbines.
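    The one-dimensional calculation rests on the Euler turbomachinery equation, which along a single streamline gives the hydraulic efficiency directly from the velocity triangles. A sketch with illustrative numbers, not the paper's runner data.

```python
def euler_hydraulic_efficiency(u1, cu1, u2, cu2, head, g=9.81):
    """Hydraulic efficiency from the Euler turbomachinery equation along one
    streamline: eta_h = (u1*cu1 - u2*cu2) / (g*H), with u the blade speed and
    cu the tangential (swirl) component of absolute velocity."""
    return (u1 * cu1 - u2 * cu2) / (g * head)

# Illustrative near-BEP numbers with a swirl-free outlet (cu2 = 0);
# velocities in m/s, head in m. Not the paper's runner data.
eta = euler_hydraulic_efficiency(u1=38.0, cu1=7.2, u2=19.0, cu2=0.0, head=30.0)
```

    Off-design operation changes the inlet swirl cu1 and leaves residual outlet swirl cu2, which is how sweeping the operating point traces out the efficiency contours of the hill chart.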

  13. Uncertainty Analysis of Air Radiation for Lunar Return Shock Layers

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Johnston, Christopher O.

    2008-01-01

    By leveraging a new uncertainty markup technique, two risk analysis methods are used to compute the uncertainty of lunar-return shock layer radiation predicted by the High temperature Aerothermodynamic Radiation Algorithm (HARA). The effects of epistemic uncertainty, or uncertainty due to a lack of knowledge, are considered for the following modeling parameters: atomic line oscillator strengths, atomic line Stark broadening widths, atomic photoionization cross sections, negative ion photodetachment cross sections, molecular band oscillator strengths, and electron impact excitation rates. First, a simplified shock layer problem consisting of two constant-property equilibrium layers is considered. The results of this simplified problem show that the atomic nitrogen oscillator strengths and Stark broadening widths in both the vacuum ultraviolet and infrared spectral regions, along with the negative ion continuum, are the dominant uncertainty contributors. Next, three variable-property stagnation-line shock layer cases are analyzed: a typical lunar return case and two Fire II cases. For the near-equilibrium lunar return and Fire 1643-second cases, the resulting uncertainties are very similar to the simplified case. Conversely, the relatively nonequilibrium 1636-second case shows significantly larger influence from electron impact excitation rates of both atoms and molecules. For all cases, the total uncertainty in radiative heat flux to the wall due to epistemic uncertainty in modeling parameters is 30% as opposed to the erroneously-small uncertainty levels (plus or minus 6%) found when treating model parameter uncertainties as aleatory (due to chance) instead of epistemic (due to lack of knowledge).
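    Propagating epistemic parameter uncertainty by sampling each parameter over its interval and recording the induced output spread can be sketched as follows. The surrogate model and bounds are invented for illustration and are not HARA's actual parameters.

```python
import random

def epistemic_interval(model, param_bounds, n_samples=2000, seed=1):
    """Sample each uncertain parameter uniformly over its epistemic interval
    and report the induced spread (min, max) of the model output."""
    rng = random.Random(seed)
    outs = []
    for _ in range(n_samples):
        params = {name: rng.uniform(lo, hi)
                  for name, (lo, hi) in param_bounds.items()}
        outs.append(model(params))
    return min(outs), max(outs)

# Toy radiative-flux surrogate: flux grows with an oscillator-strength and a
# Stark-width multiplier (invented stand-ins, not HARA's parameters)
surrogate = lambda p: 100.0 * p["osc_strength"] * (1.0 + 0.3 * p["stark_width"])
lo, hi = epistemic_interval(surrogate, {"osc_strength": (0.8, 1.2),
                                        "stark_width": (0.5, 1.5)})
```

    Reporting the full (min, max) interval, rather than a standard deviation of the samples, is what distinguishes the epistemic treatment from the aleatory one criticized in the abstract.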

  14. Report on FY17 testing in support of integrated EPP-SMT design methods development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yanli .; Jetter, Robert I.; Sham, T. -L.

    The goal of the proposed integrated Elastic Perfectly-Plastic (EPP) and Simplified Model Test (SMT) methodology is to incorporate a SMT data-based approach for creep-fatigue damage evaluation into the EPP methodology to avoid the separate evaluation of creep and fatigue damage and eliminate the requirement for stress classification in current methods, thus greatly simplifying evaluation of elevated temperature cyclic service. The purpose of this methodology is to minimize over-conservatism while properly accounting for localized defects and stress risers. To support the implementation of the proposed methodology and to verify the applicability of the code rules, thermomechanical tests continued in FY17. This report presents the recent test results for Type 1 SMT specimens on Alloy 617 with long hold times, pressurization SMT on Alloy 617, and two-bar thermal ratcheting test results on SS316H at the temperature range of 405 °C to 705 °C. Preliminary EPP strain range analyses of the two-bar tests are critically evaluated and compared with the experimental results.

  15. Prediction of high temperature metal matrix composite ply properties

    NASA Technical Reports Server (NTRS)

    Caruso, J. J.; Chamis, C. C.

    1988-01-01

    The application of the finite element method (superelement technique) in conjunction with basic concepts from mechanics of materials theory is demonstrated to predict the thermomechanical behavior of high temperature metal matrix composites (HTMMC). The simulated behavior is used as a basis to establish characteristic properties of a unidirectional composite idealized as an equivalent homogeneous material. The ply properties predicted include: thermal properties (thermal conductivities and thermal expansion coefficients) and mechanical properties (moduli and Poisson's ratio). These properties are compared with those predicted by a simplified, analytical composite micromechanics model. The predictive capabilities of the finite element method and the simplified model are illustrated through the simulation of the thermomechanical behavior of a P100-graphite/copper unidirectional composite at room temperature and near matrix melting temperature. The advantage of the finite element analysis approach is its ability to more precisely represent the composite local geometry and hence capture the subtle effects that are dependent on this. The closed form micromechanics model does a good job at representing the average behavior of the constituents to predict composite behavior.
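    Simplified closed-form micromechanics of the kind compared against here typically reduces, for the longitudinal and transverse ply moduli, to the rule of mixtures and its inverse. A sketch with illustrative constituent values loosely in the range of a high-modulus graphite fibre in a copper matrix, not the paper's data.

```python
def rule_of_mixtures(Ef, Em, Vf):
    """Longitudinal ply modulus (Voigt rule of mixtures): stiffness-weighted
    average of fibre and matrix in parallel."""
    return Vf * Ef + (1.0 - Vf) * Em

def inverse_rule_of_mixtures(Ef, Em, Vf):
    """Transverse ply modulus estimate (Reuss inverse rule of mixtures):
    fibre and matrix act in series."""
    return 1.0 / (Vf / Ef + (1.0 - Vf) / Em)

# Illustrative constituent moduli in GPa; the fibre is strongly anisotropic,
# so different fibre moduli are used in the two directions
E1 = rule_of_mixtures(Ef=690.0, Em=115.0, Vf=0.5)         # axial fibre modulus
E2 = inverse_rule_of_mixtures(Ef=6.0, Em=115.0, Vf=0.5)   # transverse fibre modulus
```

    These averages capture the mean constituent behavior the closed-form model represents well; the local-geometry effects the abstract credits to the finite element approach are exactly what such averaging smooths over.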

  16. A simplified parsimonious higher order multivariate Markov chain model with new convergence condition

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Yang, Chuan-sheng

    2017-09-01

    In this paper, we present a simplified parsimonious higher-order multivariate Markov chain model with a new convergence condition (TPHOMMCM-NCC). Moreover, an estimation method for the parameters in TPHOMMCM-NCC is given. Numerical experiments illustrate the effectiveness of TPHOMMCM-NCC.
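    A one-step prediction for a parsimonious higher-order Markov chain, where the next state distribution is a convex combination of a single transition matrix applied to the most recent state vectors, can be sketched as follows. This is a generic single-sequence formulation for illustration, not the paper's exact multivariate model.

```python
def step_higher_order(history, P, lambdas):
    """One prediction step of a parsimonious higher-order Markov chain: the
    next distribution is a convex combination (weights `lambdas`) of a single
    column-stochastic transition matrix P applied to recent distributions."""
    n = len(P)
    out = [0.0] * n
    for lam, x in zip(lambdas, history):   # history[0] = most recent state
        px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
        out = [o + lam * v for o, v in zip(out, px)]
    return out

P = [[0.9, 0.2],
     [0.1, 0.8]]                          # columns sum to 1
history = [[1.0, 0.0], [0.5, 0.5]]        # x_t and x_{t-1}
x_next = step_higher_order(history, P, lambdas=[0.7, 0.3])
```

    Because the lambdas are non-negative and sum to 1 and P is column-stochastic, the output is again a probability distribution; convergence conditions of the kind the paper studies constrain these weights.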

  17. Critical assessment of inverse gas chromatography as means of assessing surface free energy and acid-base interaction of pharmaceutical powders.

    PubMed

    Telko, Martin J; Hickey, Anthony J

    2007-10-01

    Inverse gas chromatography (IGC) has been employed as a research tool for decades. Despite this record of use and proven utility in a variety of applications, the technique is not routinely used in pharmaceutical research. In other fields the technique has flourished. IGC is experimentally relatively straightforward, but analysis requires that certain theoretical assumptions are satisfied. The assumptions made to acquire some of the recently reported data are somewhat modified compared to initial reports. Most publications in the pharmaceutical literature have made use of a simplified equation for the determination of acid/base surface properties, resulting in parameter values that are inconsistent with prior methods. In comparing the surface properties of different batches of alpha-lactose monohydrate, new data have been generated and compared with the literature to allow critical analysis of the theoretical assumptions and their importance to the interpretation of the data. The commonly used (simplified) approach was compared with the more rigorous approach originally outlined in the surface chemistry literature. (c) 2007 Wiley-Liss, Inc.

  18. Computational study of single-expansion-ramp nozzles with external burning

    NASA Astrophysics Data System (ADS)

    Yungster, Shaye; Trefny, Charles J.

    1992-04-01

    A computational investigation of the effects of external burning on the performance of single expansion ramp nozzles (SERN) operating at transonic speeds is presented. The study focuses on the effects of external heat addition and introduces a simplified injection and mixing model based on a control volume analysis. This simplified model permits parametric and scaling studies that would have been impossible to conduct with a detailed CFD analysis. The CFD model is validated by comparing the computed pressure distribution and thrust forces, for several nozzle configurations, with experimental data. Specific impulse calculations are also presented which indicate that external burning performance can be superior to other methods of thrust augmentation at transonic speeds. The effects of injection fuel pressure and nozzle pressure ratio on the performance of SERN nozzles with external burning are described. The results show trends similar to those reported in the experimental study, and provide additional information that complements the experimental data, improving our understanding of external burning flowfields. A study of the effect of scale is also presented. The results indicate that combustion kinetics do not make the flowfield sensitive to scale.

  19. Computational study of single-expansion-ramp nozzles with external burning

    NASA Technical Reports Server (NTRS)

    Yungster, Shaye; Trefny, Charles J.

    1992-01-01

    A computational investigation of the effects of external burning on the performance of single expansion ramp nozzles (SERN) operating at transonic speeds is presented. The study focuses on the effects of external heat addition and introduces a simplified injection and mixing model based on a control volume analysis. This simplified model permits parametric and scaling studies that would have been impossible to conduct with a detailed CFD analysis. The CFD model is validated by comparing the computed pressure distribution and thrust forces, for several nozzle configurations, with experimental data. Specific impulse calculations are also presented which indicate that external burning performance can be superior to other methods of thrust augmentation at transonic speeds. The effects of injection fuel pressure and nozzle pressure ratio on the performance of SERN nozzles with external burning are described. The results show trends similar to those reported in the experimental study, and provide additional information that complements the experimental data, improving our understanding of external burning flowfields. A study of the effect of scale is also presented. The results indicate that combustion kinetics do not make the flowfield sensitive to scale.

  20. Unimolecular decomposition reactions at low-pressure: A comparison of competitive methods

    NASA Technical Reports Server (NTRS)

    Adams, G. F.

    1980-01-01

    The lack of a simple rate coefficient expression to describe the pressure and temperature dependence hampers chemical modeling of flame systems. Recently developed simplified models to describe unimolecular processes include the calculation of rate constants for thermal unimolecular reactions and recombinations at the low pressure limit, at the high pressure limit and in the intermediate fall-off region. Comparison between two different applications of Troe's simplified model and a comparison between the simplified model and the classic RRKM theory are described.

  1. Storage Capacity of the Linear Associator: Beginnings of a Theory of Computational Memory

    DTIC Science & Technology

    1988-04-27

    Issues valuable to future efforts and provides methods for analysis of perceptual/cognitive systems. ... not only enables a system to vastly simplify its representation of the environment, but the identification of such symbols in a cognitive system could ... subsequently provide a parsimonious theory of cognition (yes, I know, traditional AI already knows this). Not that the identification would be easy

  2. Application of a simplified theory of ELF propagation to a simplified worldwide model of the ionosphere

    NASA Astrophysics Data System (ADS)

    Behroozi-Toosi, A. B.; Booker, H. G.

    1980-12-01

    The simplified theory of ELF wave propagation in the earth-ionosphere transmission lines developed by Booker (1980) is applied to a simplified worldwide model of the ionosphere. The theory, which involves the comparison of the local vertical refractive index gradient with the local wavelength in order to classify the altitude into regions of low and high gradient, is used for a model of electron and negative ion profiles in the D and E regions below 150 km. Attention is given to the frequency dependence of ELF propagation at a middle latitude under daytime conditions, the daytime latitude dependence of ELF propagation at the equinox, the effects of sunspot, seasonal and diurnal variations on propagation, nighttime propagation neglecting and including propagation above 100 km, and the effect on daytime ELF propagation of a sudden ionospheric disturbance. The numerical values obtained by the method for the propagation velocity and attenuation rate are shown to be in general agreement with the analytic Naval Ocean Systems Center computer program. It is concluded that the method employed gives more physical insights into propagation processes than any other method, while requiring less effort and providing maximal accuracy.

  3. Machine learning methods can replace 3D profile method in classification of amyloidogenic hexapeptides.

    PubMed

    Stanislawski, Jerzy; Kotulska, Malgorzata; Unold, Olgierd

    2013-01-17

    Amyloids are proteins capable of forming fibrils. Many of them underlie serious diseases, like Alzheimer disease. The number of amyloid-associated diseases is constantly increasing. Recent studies indicate that amyloidogenic properties can be associated with short segments of amino acids, which transform the structure when exposed. A few hundred such peptides have been found experimentally. Experimental testing of all possible amino acid combinations is currently not feasible. Instead, they can be predicted by computational methods. The 3D profile is a physicochemically based method that has generated the most extensive dataset, ZipperDB. However, it is computationally very demanding. Here, we show that dataset generation can be accelerated. Two methods to increase the classification efficiency of amyloidogenic candidates are presented and tested: simplified 3D profile generation and machine learning methods. We generated a new dataset of hexapeptides, using a more economical 3D profile algorithm, which showed very good classification overlap with ZipperDB (93.5%). The new part of our dataset contains 1779 segments, with 204 classified as amyloidogenic. The dataset of 6-residue sequences with their binary classification, based on the energy of the segment, was applied for training machine learning methods. A separate set of sequences from ZipperDB was used as a test set. The most effective methods were Alternating Decision Tree and Multilayer Perceptron. Both methods obtained an area under the ROC curve of 0.96, accuracy 91%, true positive rate ca. 78%, and true negative rate 95%. A few other machine learning methods also achieved good performance. The computational time was reduced from 18-20 CPU-hours (full 3D profile) to 0.5 CPU-hours (simplified 3D profile) to seconds (machine learning). We showed that the simplified profile generation method does not introduce an error with regard to the original method, while increasing the computational efficiency.
    Our new dataset proved representative enough to use simple statistical methods for testing amyloidogenicity based only on six-letter sequences. Statistical machine learning methods such as Alternating Decision Tree and Multilayer Perceptron can replace the energy-based classifier, with the advantage of greatly reduced computational time and a simpler analysis. Additionally, a decision tree provides a set of easily interpretable rules.
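
    The figures reported above (AUC 0.96, accuracy 91%, true positive rate ca. 78%, true negative rate 95%) are standard binary-classification metrics. As a minimal illustration of how such metrics are computed, the pure-Python sketch below evaluates a toy set of classifier scores (illustrative data, not the hexapeptide dataset):

```python
def confusion_metrics(labels, preds):
    """Accuracy, true-positive rate and true-negative rate from 0/1 labels."""
    tp = sum(y == 1 and p == 1 for y, p in zip(labels, preds))
    tn = sum(y == 0 and p == 0 for y, p in zip(labels, preds))
    fp = sum(y == 0 and p == 1 for y, p in zip(labels, preds))
    fn = sum(y == 1 and p == 0 for y, p in zip(labels, preds))
    return (tp + tn) / len(labels), tp / (tp + fn), tn / (tn + fp)

def roc_auc(labels, scores):
    """AUC via the rank (Mann-Whitney U) formulation: the probability that a
    randomly chosen positive is scored above a randomly chosen negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]          # toy classifier scores
acc, tpr, tnr = confusion_metrics(labels, [int(s >= 0.5) for s in scores])
auc = roc_auc(labels, scores)
```

    Note that AUC is threshold-free, while accuracy, TPR and TNR depend on the chosen decision threshold (0.5 here), which is why papers typically report both kinds of figures.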

  4. A fluid model simulation of a simplified plasma limiter based on spectral-element time-domain method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qian, Cheng; Ding, Dazhi, E-mail: dzding@njust.edu.cn; Fan, Zhenhong

    2015-03-15

    A simplified plasma limiter prototype is proposed, and a fluid model coupled with Maxwell's equations is established to describe its operating mechanism. A three-dimensional (3-D) simplified sandwich-structure plasma limiter model is analyzed with the spectral-element time-domain (SETD) method. The field breakdown threshold of air and argon at different frequencies is predicted and compared with experimental data, showing good agreement for gas microwave breakdown discharge problems. Numerical results demonstrate that the two-layer plasma limiter (plasma-slab-plasma) has better protective characteristics than a one-layer plasma limiter (slab-plasma-slab) with the same length of gas chamber.

  5. A simplified method for extracting androgens from avian egg yolks

    USGS Publications Warehouse

    Kozlowski, C.P.; Bauman, J.E.; Hahn, D.C.

    2009-01-01

    Female birds deposit significant amounts of steroid hormones into the yolks of their eggs. Studies have demonstrated that these hormones, particularly androgens, affect nestling growth and development. In order to measure androgen concentrations in avian egg yolks, most authors follow the extraction methods outlined by Schwabl (1993. Proc. Nat. Acad. Sci. USA 90:11446-11450). We describe a simplified method for extracting androgens from avian egg yolks. Our method, which has been validated through recovery and linearity experiments, consists of a single ethanol precipitation that produces substantially higher recoveries than those reported by Schwabl.

  6. Memory functions reveal structural properties of gene regulatory networks

    PubMed Central

    Perez-Carrasco, Ruben

    2018-01-01

    Gene regulatory networks (GRNs) control cellular function and decision making during tissue development and homeostasis. Mathematical tools based on dynamical systems theory are often used to model these networks, but the size and complexity of these models mean that their behaviour is not always intuitive and the underlying mechanisms can be difficult to decipher. For this reason, methods that simplify and aid exploration of complex networks are necessary. To this end we develop a broadly applicable form of the Zwanzig-Mori projection. By first converting a thermodynamic state ensemble model of gene regulation into mass action reactions we derive a general method that produces a set of time evolution equations for a subset of components of a network. The influence of the rest of the network, the bulk, is captured by memory functions that describe how the subnetwork reacts to its own past state via components in the bulk. These memory functions provide probes of near-steady state dynamics, revealing information not easily accessible otherwise. We illustrate the method on a simple cross-repressive transcriptional motif to show that memory functions not only simplify the analysis of the subnetwork but also have a natural interpretation. We then apply the approach to a GRN from the vertebrate neural tube, a well characterised developmental transcriptional network composed of four interacting transcription factors. The memory functions reveal the function of specific links within the neural tube network and identify features of the regulatory structure that specifically increase the robustness of the network to initial conditions. Taken together, the study provides evidence that Zwanzig-Mori projections offer powerful and effective tools for simplifying and exploring the behaviour of GRNs. PMID:29470492
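
    As a toy illustration of the cross-repressive transcriptional motif mentioned above (not the paper's Zwanzig-Mori projection itself), the sketch below integrates a standard mutual-repression ODE with Hill kinetics using forward Euler; the parameter values are arbitrary assumptions chosen to put the system in its bistable regime.

```python
def simulate_toggle(a=3.0, n=2, dt=0.01, steps=5000, x0=1.5, y0=0.5):
    """Forward-Euler integration of mutual repression with Hill kinetics:
    dx/dt = a / (1 + y**n) - x,  dy/dt = a / (1 + x**n) - y."""
    x, y = x0, y0
    for _ in range(steps):
        dx = a / (1 + y ** n) - x
        dy = a / (1 + x ** n) - y
        x, y = x + dt * dx, y + dt * dy
    return x, y

x, y = simulate_toggle()
# from an asymmetric start the system settles onto one branch (x high, y low);
# the symmetric state x == y is unstable for these parameters
```

    In the paper's framework, one would project such a network onto a subnetwork of interest and capture the influence of the remaining components through memory functions, rather than simulating everything explicitly.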

  7. Electric Power Distribution System Model Simplification Using Segment Substitution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). In contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  8. Research on carrying capacity of hydrostatic slideway on heavy-duty gantry CNC machine

    NASA Astrophysics Data System (ADS)

    Cui, Chao; Guo, Tieneng; Wang, Yijie; Dai, Qin

    2017-05-01

    Hydrostatic slideway is a key part of the heavy-duty gantry CNC machine, which supports the total weight of the gantry and moves smoothly along the table. Therefore, the oil film between sliding rails plays an important role in the carrying capacity and precision of the machine. In this paper, the frictionless oil film is simulated with three-dimensional CFD. The carrying capacity of the heavy hydrostatic slideway and the pressure and velocity characteristics of the flow field are analyzed. The simulation result is verified by comparison with experimental data obtained from the heavy-duty gantry machine. For engineering requirements, the oil film carrying capacity is also analyzed with a simplified theoretical method. The precision of the simplified method is evaluated and its effectiveness is verified with the experimental data. The simplified calculation method is provided for designing oil pads on the hydrostatic slideways of heavy-duty gantry CNC machines.

  9. An innovative method for coordinate measuring machine one-dimensional self-calibration with simplified experimental process.

    PubMed

    Fang, Cheng; Butler, David Lee

    2013-05-01

    In this paper, an innovative method for CMM (Coordinate Measuring Machine) self-calibration is proposed. In contrast to conventional CMM calibration that relies heavily on a high precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact which is fabricated with commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. In mathematical expression, the samples can be minimized by eliminating the redundant equations among those configured by the experimental data array. The section lengths of the artefact are measured at arranged positions, with which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short of one equation, which can be supplemented by either measuring the total length of the artefact with a higher-precision CMM or calibrating the single point error at the extreme position with a laser interferometer. In this paper, the latter is selected. With spline interpolation, the error compensation curve can be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that, with the error compensation curve, the uncertainty of the measurement can be reduced to 50%.
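
    The final step above, building an error compensation curve by interpolating pointwise errors, can be sketched as follows. Plain linear interpolation stands in for the spline mentioned in the abstract, and the sample positions and errors are made-up numbers:

```python
def compensation_curve(positions, errors):
    """Return a function that linearly interpolates measured errors (in um)
    at calibrated positions (in mm); a spline would smooth this further."""
    pts = sorted(zip(positions, errors))
    def err(x):
        for (x0, e0), (x1, e1) in zip(pts, pts[1:]):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0)
                return e0 + t * (e1 - e0)   # linear interpolation on [x0, x1]
        raise ValueError("position outside calibrated range")
    return err

err = compensation_curve([0, 100, 200, 300], [0.0, 1.2, 0.8, 1.5])
corrected = 150.0 - err(150.0) / 1000.0     # subtract interpolated error (um -> mm)
```

    In practice the error values would come from the equation set described in the abstract, closed by the single laser-interferometer calibration at the extreme position.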

  10. Pybel: a Python wrapper for the OpenBabel cheminformatics toolkit

    PubMed Central

    O'Boyle, Noel M; Morley, Chris; Hutchison, Geoffrey R

    2008-01-01

    Background Scripting languages such as Python are ideally suited to common programming tasks in cheminformatics such as data analysis and parsing information from files. However, for reasons of efficiency, cheminformatics toolkits such as the OpenBabel toolkit are often implemented in compiled languages such as C++. We describe Pybel, a Python module that provides access to the OpenBabel toolkit. Results Pybel wraps the direct toolkit bindings to simplify common tasks such as reading and writing molecular files and calculating fingerprints. Extensive use is made of Python iterators to simplify loops such as that over all the molecules in a file. A Pybel Molecule can be easily interconverted to an OpenBabel OBMol to access those methods or attributes not wrapped by Pybel. Conclusion Pybel allows cheminformaticians to rapidly develop Python scripts that manipulate chemical information. It is open source, available cross-platform, and offers the power of the OpenBabel toolkit to Python programmers. PMID:18328109

  11. Pybel: a Python wrapper for the OpenBabel cheminformatics toolkit.

    PubMed

    O'Boyle, Noel M; Morley, Chris; Hutchison, Geoffrey R

    2008-03-09

    Scripting languages such as Python are ideally suited to common programming tasks in cheminformatics such as data analysis and parsing information from files. However, for reasons of efficiency, cheminformatics toolkits such as the OpenBabel toolkit are often implemented in compiled languages such as C++. We describe Pybel, a Python module that provides access to the OpenBabel toolkit. Pybel wraps the direct toolkit bindings to simplify common tasks such as reading and writing molecular files and calculating fingerprints. Extensive use is made of Python iterators to simplify loops such as that over all the molecules in a file. A Pybel Molecule can be easily interconverted to an OpenBabel OBMol to access those methods or attributes not wrapped by Pybel. Pybel allows cheminformaticians to rapidly develop Python scripts that manipulate chemical information. It is open source, available cross-platform, and offers the power of the OpenBabel toolkit to Python programmers.
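
    The API described above can be exercised in a few lines. This is a minimal sketch of the workflow the abstract names (reading molecules, calculating fingerprints, dropping down to the OpenBabel OBMol); the exact call names (`readstring`, `calcfp`, the `OBMol` attribute) follow the Pybel documentation as best I recall it, and the import is guarded so the sketch degrades gracefully where Open Babel is not installed:

```python
try:
    from openbabel import pybel        # Open Babel 3.x module layout
except ImportError:
    try:
        import pybel                   # older Open Babel 2.x layout
    except ImportError:
        pybel = None                   # toolkit not installed

def ethanol_summary():
    """Read a SMILES string, compute a fingerprint, and touch the raw OBMol,
    mirroring the interconversion described in the abstract."""
    if pybel is None:
        return None
    mol = pybel.readstring("smi", "CCO")     # ethanol
    fp = mol.calcfp()                        # default path-based fingerprint
    heavy = mol.OBMol.NumHvyAtoms()          # raw OpenBabel OBMol method
    return heavy, fp.bits

info = ethanol_summary()
```

    For files rather than strings, `pybel.readfile(format, filename)` returns the iterator over molecules that the abstract highlights.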

  12. Data Transmission Signal Design and Analysis

    NASA Technical Reports Server (NTRS)

    Moore, J. D.

    1972-01-01

    The error performances of several digital signaling methods are determined as a function of a specified signal-to-noise ratio. Results are obtained for Gaussian noise and impulse noise. Performance of a receiver for differentially encoded biphase signaling is obtained by extending the results of differential phase shift keying. The analysis presented obtains a closed-form answer through the use of some simplifying assumptions. The results give insight into the analysis problem; however, the actual error performance may be degraded because of the assumptions made in the analysis. Bipolar signaling decision-threshold selection is investigated. The optimum threshold depends on the signal-to-noise ratio and requires the use of an adaptive receiver.
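
    For context on the closed-form error expressions involved, the standard textbook bit-error probabilities for coherent BPSK and binary DPSK over an additive Gaussian noise channel can be evaluated directly. These formulas are consistent with, but not taken from, this report (differential encoding of coherently detected BPSK roughly doubles the coherent error rate):

```python
import math

def pe_bpsk(ebn0):
    """Coherent BPSK bit-error probability over AWGN:
    Q(sqrt(2 Eb/N0)) = 0.5 * erfc(sqrt(Eb/N0))."""
    return 0.5 * math.erfc(math.sqrt(ebn0))

def pe_dpsk(ebn0):
    """Binary DPSK bit-error probability over AWGN: 0.5 * exp(-Eb/N0)."""
    return 0.5 * math.exp(-ebn0)

ebn0 = 10 ** (9.6 / 10)      # 9.6 dB, a common design point
# pe_bpsk(ebn0) is near 1e-5; DPSK needs somewhat more Eb/N0 for the same rate
```

    Impulse noise, as the abstract notes, breaks the Gaussian assumption behind these formulas, which is one reason the report's closed-form answers are only approximate.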

  13. A Simplified Method for Implementing Run-Time Polymorphism in Fortran95

    DOE PAGES

    Decyk, Viktor K.; Norton, Charles D.

    2004-01-01

    This paper discusses a simplified technique for software emulation of inheritance and run-time polymorphism in Fortran95. This technique involves retaining the same type throughout an inheritance hierarchy, so that only functions which are modified in a derived class need to be implemented.

  14. Algorithm of reducing the false positives in IDS based on correlation Analysis

    NASA Astrophysics Data System (ADS)

    Liu, Jianyi; Li, Sida; Zhang, Ru

    2018-03-01

    This paper proposes an algorithm for reducing false positives in IDS based on correlation analysis. First, the algorithm analyzes the characteristics that distinguish false positives from real alarms and preliminarily screens out false positives; it then clusters alarms by attribute similarity to further reduce the number of alarms; finally, it correlates the remaining alarms through causal relationships, exploiting the characteristics of multi-step attacks. The paper also proposes a reverse-causation algorithm, based on the attack association method of earlier work, that turns alarm information into a complete attack path. Experiments show that the algorithm reduces the number of alarms, improves the efficiency of alarm processing, and contributes to identifying attack purposes and improving alarm accuracy.

  15. Two-port network analysis and modeling of a balanced armature receiver.

    PubMed

    Kim, Noori; Allen, Jont B

    2013-07-01

    Models for acoustic transducers, such as loudspeakers, mastoid bone-drivers, hearing-aid receivers, etc., are critical elements in many acoustic applications. Acoustic transducers employ two-port models to convert between acoustic and electromagnetic signals. This study analyzes a widely-used commercial hearing-aid receiver ED series, manufactured by Knowles Electronics, Inc. Electromagnetic transducer modeling must consider two key elements: a semi-inductor and a gyrator. The semi-inductor accounts for electromagnetic eddy-currents, the 'skin effect' of a conductor (Vanderkooy, 1989), while the gyrator (McMillan, 1946; Tellegen, 1948) accounts for the anti-reciprocity characteristic [Lenz's law (Hunt, 1954, p. 113)]. Aside from Hunt (1954), no publications we know of have included the gyrator element in their electromagnetic transducer models. The most prevalent method of transducer modeling invokes the mobility method: an ideal transformer instead of a gyrator, followed by the dual of the mechanical circuit (Beranek, 1954). The mobility approach greatly complicates the analysis. The present study proposes a novel, simplified and rigorous receiver model. Hunt's two-port parameters, the electrical impedance Ze(s), acoustic impedance Za(s) and electro-acoustic transduction coefficient Ta(s), are calculated using ABCD and impedance matrix methods (Van Valkenburg, 1964). The results from electrical input impedance measurements Zin(s), which vary with given acoustical loads, are used in the calculation (Weece and Allen, 2010). The hearing-aid receiver transducer model is designed based on energy transformation flow [electric→ mechanic→ acoustic]. The model has been verified with electrical input impedance, diaphragm velocity in vacuo, and output pressure measurements.
This receiver model is suitable for designing most electromagnetic transducers and it can ultimately improve the design of hearing-aid devices by providing a simplified yet accurate, physically motivated analysis. This article is part of a special issue entitled "MEMRO 2012". Published by Elsevier B.V.

  16. Value of information analysis in healthcare: a review of principles and applications.

    PubMed

    Tuffaha, Haitham W; Gordon, Louisa G; Scuffham, Paul A

    2014-06-01

    Economic evaluations are increasingly utilized to inform decisions in healthcare; however, decisions remain uncertain when they are not based on adequate evidence. Value of information (VOI) analysis has been proposed as a systematic approach to measure decision uncertainty and assess whether there is sufficient evidence to support new technologies. The objective of this paper is to review the principles and applications of VOI analysis in healthcare. Relevant databases were systematically searched to identify VOI articles. The findings from the selected articles were summarized and narratively presented. Various VOI methods have been developed and applied to inform decision-making, optimize the design of research studies, and set research priorities. However, the application of this approach in healthcare remains limited due to technical and policy challenges. There is a need to create more awareness of VOI analysis, simplify its current methods, and align them with the needs of decision-making organizations.

  17. Structural Code Considerations for Solar Rooftop Installations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dwyer, Stephen F.; Dwyer, Brian P.; Sanchez, Alfred

    2014-12-01

    Residential rooftop solar panel installations are limited in part by the high cost of structure-related code requirements for field installation. Permitting solar installations is difficult because there is a belief among residential permitting authorities that typical residential rooftops may be structurally inadequate to support the additional load associated with a photovoltaic (PV) solar installation. Typical engineering methods utilized to calculate stresses on a roof structure involve simplifying assumptions that render a complex non-linear structure to a basic determinate beam. This method of analysis neglects the composite action of the entire roof structure, yielding a conservative analysis based on a rafter or top chord of a truss. Consequently, this can result in an overly conservative structural analysis. A literature review was conducted to gain a better understanding of the conservative nature of the regulations and codes governing residential construction and the associated structural system calculations.

  18. Psychometric Evaluation of the Simplified Chinese Version of Flourishing Scale

    ERIC Educational Resources Information Center

    Tang, Xiaoqing; Duan, Wenjie; Wang, Zhizhang; Liu, Tianyuan

    2016-01-01

    Objectives: The Flourishing Scale (FS) was developed to measure psychological well-being from the eudaimonic perspective, highlighting the flourishing of human functioning. This article evaluated the psychometric characteristics of the simplified Chinese version of FS among a Chinese community population. Method: A total of 433 participants from…

  19. An improved method for analysis of hydroxide and carbonate in alkaline electrolytes containing zinc

    NASA Technical Reports Server (NTRS)

    Reid, M. A.

    1978-01-01

    A simplified method for titration of carbonate and hydroxide in alkaline battery electrolyte is presented involving a saturated KSCN solution as a complexing agent for zinc. Both hydroxide and carbonate can be determined in one titration, and the complexing reagent is readily prepared. Since the pH at the end point is shifted from 8.3 to 7.9-8.0, m-cresol purple or phenol red are used as indicators rather than phenolphthalein. Bromcresol green is recommended for determination of the second end point of a pH of 4.3 to 4.4.
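
    The two-endpoint titration described above translates into simple stoichiometry: at the first endpoint (pH 7.9-8.0, m-cresol purple or phenol red) all OH⁻ is neutralized and CO₃²⁻ is converted to HCO₃⁻; between the first and second endpoints (pH 4.3-4.4, bromcresol green) the HCO₃⁻ is titrated. A sketch of the arithmetic, with made-up volumes:

```python
def hydroxide_carbonate(v1_ml, v2_ml, acid_molarity):
    """Two-endpoint (Warder-type) titration arithmetic.
    v1_ml: acid volume to the first endpoint (OH- + CO3^2- -> HCO3-)
    v2_ml: total acid volume to the second endpoint (HCO3- -> H2CO3)
    Returns (mmol OH-, mmol CO3^2-) in the sample."""
    n1 = v1_ml * acid_molarity             # mmol acid to first endpoint
    carbonate = (v2_ml - v1_ml) * acid_molarity   # acid used between endpoints
    hydroxide = n1 - carbonate             # first endpoint consumed OH- and CO3^2-
    return hydroxide, carbonate

oh, co3 = hydroxide_carbonate(25.0, 30.0, 0.1)   # 0.1 M acid, illustrative volumes
```

    The KSCN complexing step in the abstract exists precisely so that zinc does not interfere with these two endpoints; the shifted first-endpoint pH is why the indicator choice matters.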

  20. An improved method for analysis of hydroxide and carbonate in alkaline electrolytes containing zinc

    NASA Technical Reports Server (NTRS)

    Reid, M. A.

    1978-01-01

    A simplified method for titration of carbonate and hydroxide in alkaline battery electrolyte is presented involving a saturated KSCN solution as a complexing agent for zinc. Both hydroxide and carbonate can be determined in one titration, and the complexing reagent is readily prepared. Since the pH at the end point is shifted from 8.3 to 7.9 - 8.0, m-cresol purple or phenol red are used as indicators rather than phenolphthalein. Bromcresol green is recommended for determination of the second end point of a pH of 4.3 to 4.4.

  1. Performance Analysis and Design Synthesis (PADS) computer program. Volume 1: Formulation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The program formulation for PADS computer program is presented. It can size launch vehicles in conjunction with calculus-of-variations optimal trajectories and can also be used as a general-purpose branched trajectory optimization program. In the former use, it has the Space Shuttle Synthesis Program as well as a simplified stage weight module for optimally sizing manned recoverable launch vehicles. For trajectory optimization alone or with sizing, PADS has two trajectory modules. The first trajectory module uses the method of steepest descent; the second employs the method of quasilinearization, which requires a starting solution from the first trajectory module.

  2. Quantitative analysis of pyroglutamic acid in peptides.

    PubMed

    Suzuki, Y; Motoi, H; Sato, K

    1999-08-01

    A simplified and rapid procedure for the determination of pyroglutamic acid in peptides was developed. The method involves the enzymatic cleavage of an N-terminal pyroglutamate residue using a thermostable pyroglutamate aminopeptidase and isocratic HPLC separation of the resulting enzymatic hydrolysate using a column-switching technique. Pyroglutamate aminopeptidase from the thermophilic archaebacterium Pyrococcus furiosus cleaves the N-terminal pyroglutamic acid residue independently of the molecular weight of the substrate. It cleaves more than 85% of pyroglutamate from peptides whose molecular weights range from 362.4 to 4599.4 Da. Thus, a new method is presented that quantitatively estimates N-terminal pyroglutamic acid residues in peptides.

  3. Application of a Simplified Method for Estimating Perfusion Derived from Diffusion-Weighted MR Imaging in Glioma Grading.

    PubMed

    Cao, Mengqiu; Suo, Shiteng; Han, Xu; Jin, Ke; Sun, Yawen; Wang, Yao; Ding, Weina; Qu, Jianxun; Zhang, Xiaohua; Zhou, Yan

    2017-01-01

    Purpose: To evaluate the feasibility of a simplified method based on diffusion-weighted imaging (DWI) acquired with three b-values to measure tissue perfusion linked to microcirculation, to validate it against perfusion-related parameters derived from intravoxel incoherent motion (IVIM) and dynamic contrast-enhanced (DCE) magnetic resonance (MR) imaging, and to investigate its utility in differentiating low- from high-grade gliomas. Materials and Methods: The prospective study was approved by the local institutional review board and written informed consent was obtained from all patients. Between May 2016 and May 2017, 50 patients with confirmed glioma were assessed with multi-b-value DWI and DCE MR imaging at 3.0 T. Besides the conventional apparent diffusion coefficient (ADC0,1000) map, perfusion-related parametric maps for the IVIM-derived perfusion fraction (f) and pseudodiffusion coefficient (D*), DCE MR imaging-derived pharmacokinetic metrics, including Ktrans, ve and vp, as well as a metric named simplified perfusion fraction (SPF), were generated. Correlation between perfusion-related parameters was analyzed by using the Spearman rank correlation. All imaging parameters were compared between the low-grade (n = 19) and high-grade (n = 31) groups by using the Mann-Whitney U test. The diagnostic performance for tumor grading was evaluated with receiver operating characteristic (ROC) analysis. Results: SPF showed strong correlation with IVIM-derived f and D* (ρ = 0.732 and 0.716, respectively; both P < 0.001). Compared with f, SPF was more strongly correlated with DCE MR imaging-derived Ktrans (ρ = 0.607; P < 0.001) and vp (ρ = 0.397; P = 0.004). Among all parameters, SPF achieved the highest accuracy for differentiating low- from high-grade gliomas, with an area under the ROC curve of 0.942, significantly higher than that of ADC0,1000 (P = 0.004).
    By using SPF as a discriminative index, the diagnostic sensitivity and specificity were 87.1% and 94.7%, respectively, at the optimal cut-off value of 19.26%. Conclusion: The simplified method for measuring tissue perfusion based on DWI with three b-values may be helpful in differentiating low- from high-grade gliomas. SPF may serve as a valuable alternative for measuring tumor perfusion in gliomas in a noninvasive, convenient and efficient way.

  4. Ca analysis: An Excel based program for the analysis of intracellular calcium transients including multiple, simultaneous regression analysis

    PubMed Central

    Greensmith, David J.

    2014-01-01

    Here I present an Excel based program for the analysis of intracellular Ca transients recorded using fluorescent indicators. The program can perform all the necessary steps to convert recorded raw voltage changes into meaningful physiological information. The program performs two fundamental processes: (1) it can prepare the raw signal by several methods, and (2) it can then be used to analyze the prepared data to provide information such as absolute intracellular Ca levels. The rates of change of Ca can also be measured using multiple, simultaneous regression analysis. I demonstrate that this program performs as well as commercially available software, but has numerous advantages, namely a simplified, self-contained analysis workflow. PMID:24125908
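    The multiple-regression step can be sketched outside Excel. The example below (plain Python, not the author's program) fits an ordinary least-squares line to consecutive segments of a synthetic record and reports each segment's slope, i.e. the local rate of change of Ca; the data are invented for illustration.

```python
def linfit(t, y):
    """Ordinary least-squares slope and intercept over one segment."""
    n = len(t)
    mt = sum(t) / n
    my = sum(y) / n
    sxx = sum((ti - mt) ** 2 for ti in t)
    sxy = sum((ti - mt) * (yi - my) for ti, yi in zip(t, y))
    slope = sxy / sxx
    return slope, my - slope * mt

def segment_rates(t, y, seg_len):
    """Fit a regression line to each consecutive segment and return the
    slopes, mimicking 'multiple, simultaneous regression' over a transient."""
    return [linfit(t[i:i + seg_len], y[i:i + seg_len])[0]
            for i in range(0, len(t) - seg_len + 1, seg_len)]

t = [i * 0.1 for i in range(10)]
y = [3.0 * ti + 0.5 for ti in t]   # synthetic linear 'transient' segment
rates = segment_rates(t, y, 5)     # both segment slopes should be 3.0
```

A real trace would be split at physiologically meaningful points (upstroke, decay) rather than into fixed-length windows.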

  5. Kernel canonical-correlation Granger causality for multiple time series

    NASA Astrophysics Data System (ADS)

    Wu, Guorong; Duan, Xujun; Liao, Wei; Gao, Qing; Chen, Huafu

    2011-04-01

    Canonical-correlation analysis as a multivariate statistical technique has been applied to multivariate Granger causality analysis to infer information flow in complex systems. It shows unique appeal and great superiority over the traditional vector autoregressive method, due to the simplified procedure that detects causal interaction between multiple time series, and the avoidance of potential model estimation problems. However, it is limited to the linear case. Here, we extend the framework of canonical correlation to include the estimation of multivariate nonlinear Granger causality for drawing inference about directed interaction. Its feasibility and effectiveness are verified on simulated data.

  6. Applications of chemiluminescence to bacterial analysis

    NASA Technical Reports Server (NTRS)

    Searle, N. D.

    1975-01-01

    A luminol chemiluminescence method for detecting bacteria was based on microbial activation of the oxidation of the luminol monoanion by hydrogen peroxide. Elimination of the prior lysing step, previously used in the chemiluminescence technique, was shown to considerably improve the reproducibility and accuracy of the method, in addition to simplifying it. An inexpensive, portable photomultiplier detector was used to measure the maximum light intensity produced when the sample is added to the reagent. Studies of cooling tower water show that the luminol chemiluminescence technique can be used to monitor changes in viable cell population both under normal conditions and during chlorine treatment. Good correlation between chemiluminescence and plate counts was also obtained in the analysis of process water used in paper mills. The method showed good potential for monitoring viable bacteria populations in the activated sludge used in waste treatment plants to digest organic matter.

  7. Real-time obstructive sleep apnea detection from frequency analysis of EDR and HRV using Lomb Periodogram.

    PubMed

    Fan, Shu-Han; Chou, Chia-Ching; Chen, Wei-Chen; Fang, Wai-Chi

    2015-01-01

    In this study, an effective real-time obstructive sleep apnea (OSA) detection method based on frequency analysis of the ECG-derived respiratory signal (EDR) and heart rate variability (HRV) is proposed. Compared to traditional polysomnography (PSG), which requires several physiological signals measured from patients, the proposed OSA detection method uses only ECG signals to determine the time intervals of OSA. To make the method feasible for hardware implementation, real-time detection, and portable application, a simplified Lomb periodogram is used to perform the frequency analysis of EDR and HRV. The experimental results indicate that overall performance is effectively increased, with a specificity (Sp) of 91%, sensitivity (Se) of 95.7%, and accuracy of 93.2% obtained by integrating the EDR and HRV indexes.
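    The Lomb periodogram at the heart of the method has a closed form that is easy to sketch. The following pure-Python example (an illustration of the classical formula, not the authors' simplified hardware version) recovers a 0.3 Hz component, in the usual respiratory band of an EDR signal, from unevenly sampled synthetic data.

```python
import math
import random

def lomb(t, y, freqs):
    """Classical Lomb periodogram for unevenly sampled data (mean removed)."""
    my = sum(y) / len(y)
    y = [v - my for v in y]
    out = []
    for f in freqs:
        w = 2 * math.pi * f
        # time offset tau makes the sine/cosine terms orthogonal
        tau = math.atan2(sum(math.sin(2 * w * ti) for ti in t),
                         sum(math.cos(2 * w * ti) for ti in t)) / (2 * w)
        c = [math.cos(w * (ti - tau)) for ti in t]
        s = [math.sin(w * (ti - tau)) for ti in t]
        yc = sum(yi * ci for yi, ci in zip(y, c))
        ys = sum(yi * si for yi, si in zip(y, s))
        out.append(0.5 * (yc * yc / sum(ci * ci for ci in c) +
                          ys * ys / sum(si * si for si in s)))
    return out

# unevenly sampled 0.3 Hz sine (a stand-in for a respiratory EDR component)
random.seed(1)
t = sorted(random.uniform(0, 60) for _ in range(200))
y = [math.sin(2 * math.pi * 0.3 * ti) for ti in t]
freqs = [0.05 + 0.01 * k for k in range(60)]
p = lomb(t, y, freqs)
peak = freqs[p.index(max(p))]   # expected near 0.30 Hz
```

Because the formula handles irregular sampling directly, it suits HRV series, whose beat-to-beat samples are inherently unevenly spaced.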

  8. Axial Crushing of Thin-Walled Columns with Octagonal Section: Modeling and Design

    NASA Astrophysics Data System (ADS)

    Liu, Yucheng; Day, Michael L.

    This chapter focuses on numerical crashworthiness analysis of straight thin-walled columns with octagonal cross sections. Two important issues in this analysis are demonstrated here: computer modeling and crashworthiness design. In the first part, this chapter introduces a method of developing simplified finite element (FE) models of straight thin-walled octagonal columns, which can be used for numerical crashworthiness analysis. Next, this chapter performs a crashworthiness design for such thin-walled columns in order to maximize their energy absorption capability. Specific energy absorption (SEA) is set as the design objective, the side length of the octagonal cross section and the wall thickness are selected as design variables, and the maximum crushing force (Pm) occurring during a crash is set as the design constraint. The response surface method (RSM) is employed to formulate functions for both SEA and Pm.
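    A response-surface surrogate is simply a low-order polynomial fitted to sampled responses. As a minimal one-variable illustration (the SEA values and thicknesses below are made up, and the chapter's actual RSM uses two design variables), a quadratic can be passed exactly through three samples and its stationary point read off:

```python
def quad_through(points):
    """Exact quadratic a + b*x + c*x**2 through three (x, y) samples
    (a one-variable stand-in for a response-surface surrogate)."""
    (x0, y0), (x1, y1), (x2, y2) = points

    def basis(xa, xb, xc, ya):
        # Lagrange basis term for node xa, expanded to monomial coefficients
        d = (xa - xb) * (xa - xc)
        return (ya * xb * xc / d, -ya * (xb + xc) / d, ya / d)

    terms = [basis(x0, x1, x2, y0), basis(x1, x0, x2, y1), basis(x2, x0, x1, y2)]
    a, b, c = (sum(t[i] for t in terms) for i in range(3))
    return a, b, c

# hypothetical SEA samples (kJ/kg) at three wall thicknesses (mm)
a, b, c = quad_through([(1.0, 10.0), (2.0, 16.0), (3.0, 18.0)])
# stationary point of the surrogate: dSEA/dh = 0  ->  h* = -b / (2c)
h_star = -b / (2 * c)
```

In a real RSM study the surrogate is fitted by least squares to more samples than coefficients, and the constraint function for Pm is fitted the same way before the constrained optimum is sought.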

  9. Direct identification of prohibited substances in cosmetics and foodstuffs using ambient ionization on a miniature mass spectrometry system.

    PubMed

    Ma, Qiang; Bai, Hua; Li, Wentao; Wang, Chao; Li, Xinshi; Cooks, R Graham; Ouyang, Zheng

    2016-03-17

    Significantly simplified workflows were developed for the rapid analysis of various types of cosmetic and foodstuff samples by employing a miniature mass spectrometry system and ambient ionization methods. A desktop Mini 12 ion trap mass spectrometer was coupled with paper spray ionization, extraction spray ionization and slug-flow microextraction for direct analysis of Sudan Reds, parabens, antibiotics, steroids, bisphenol and plasticizer in raw samples with complex matrices. Limits of detection as low as 5 μg/kg were obtained for target analytes. On-line derivatization was also implemented for the analysis of steroids in cosmetics. The developed methods show the potential for outside-the-lab screening of cosmetics and foodstuff products for the presence of illegal substances. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Simplified dichromated gelatin hologram recording process

    NASA Technical Reports Server (NTRS)

    Georgekutty, Tharayil G.; Liu, Hua-Kuang

    1987-01-01

    A simplified method for making dichromated gelatin (DCG) holographic optical elements (HOEs) has been discovered. The method is much less tedious and requires a processing time comparable to that for processing a silver halide hologram. HOE characteristics including diffraction efficiency (DE), linearity, and spectral sensitivity have been quantitatively investigated. The quality of the holographic grating is very high: diffraction efficiencies of ninety percent or higher have been achieved in simple plane gratings made by this process.

  11. Analysis of glycosaminoglycan-derived disaccharides by capillary electrophoresis using laser-induced fluorescence detection

    PubMed Central

    Chang, Yuqing; Yang, Bo; Zhao, Xue; Linhardt, Robert J.

    2012-01-01

    A quantitative and highly sensitive method for the analysis of glycosaminoglycan (GAG)-derived disaccharides is presented that relies on capillary electrophoresis (CE) with laser-induced fluorescence (LIF) detection. This method enables complete separation of seventeen GAG-derived disaccharides in a single run. Unsaturated disaccharides were derivatized with 2-aminoacridone (AMAC) to improve sensitivity. The limit of detection was at the attomole level, about 100-fold more sensitive than traditional CE-ultraviolet detection. A CE separation timetable was developed to achieve complete resolution and shorten analysis time. The RSDs of migration time and peak area at both low and high concentrations of unsaturated disaccharides were less than 2.7% and 3.2%, respectively, demonstrating that this is a reproducible method. The analysis was successfully applied to cultured Chinese hamster ovary cell samples for determination of GAG disaccharides. The current method simplifies GAG extraction steps and reduces the inaccuracy in calculating ratios of heparin/heparan sulfate to chondroitin sulfate/dermatan sulfate that results from separate analyses of a single sample. PMID:22609076

  12. Quantification of biofilm in microtiter plates: overview of testing conditions and practical recommendations for assessment of biofilm production by staphylococci.

    PubMed

    Stepanović, Srdjan; Vuković, Dragana; Hola, Veronika; Di Bonaventura, Giovanni; Djukić, Slobodanka; Cirković, Ivana; Ruzicka, Filip

    2007-08-01

    The details of all steps involved in the quantification of biofilm formation in microtiter plates are described. The presented protocol incorporates information on the assessment of biofilm production by staphylococci, gained both by direct experience and by analysis of published methods for assaying biofilm production. The obtained results should simplify the quantification of biofilm formation in microtiter plates and make it more reliable and comparable among different laboratories.
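    The cut-off scheme commonly associated with this protocol (ODc defined as the negative-control mean plus three standard deviations, with category breaks at ODc, 2×ODc and 4×ODc) can be sketched as follows; the optical-density values are invented for illustration.

```python
import statistics

def biofilm_category(od, odc):
    """Classify biofilm production from microtiter-plate optical density
    using the cut-off scheme: non-producer (OD <= ODc), weak (<= 2*ODc),
    moderate (<= 4*ODc), strong (> 4*ODc)."""
    if od <= odc:
        return "non-producer"
    if od <= 2 * odc:
        return "weak"
    if od <= 4 * odc:
        return "moderate"
    return "strong"

# ODc from the negative-control wells: mean + 3 standard deviations
negative_control = [0.08, 0.09, 0.10, 0.09]
odc = statistics.mean(negative_control) + 3 * statistics.stdev(negative_control)
result = biofilm_category(0.50, odc)
```

Averaging over replicate wells (the protocol uses several per strain) before classification makes the call far less sensitive to single-well outliers.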

  13. Comparison of an Agent-based Model of Disease Propagation with the Generalised SIR Epidemic Model

    DTIC Science & Technology

    2009-08-01

    has become a practical method for conducting Epidemiological Modelling. In the agent-based approach the whole township can be modelled as a system of...SIR system was initially developed based on a very simplified model of social interaction. For instance an assumption of uniform population mixing was...simulating the progress of a disease within a host and of transmission between hosts is based upon Transportation Analysis and Simulation System

  14. A SImplified method for Segregation Analysis (SISA) to determine penetrance and expression of a genetic variant in a family.

    PubMed

    Møller, Pål; Clark, Neal; Mæhle, Lovise

    2011-05-01

    A method for SImplified rapid Segregation Analysis (SISA) to assess the penetrance and expression of genetic variants in pedigrees of any complexity is presented. For this purpose, the probability of recombination between the variant and the gene is taken to be zero. A further assumption is that the variant of undetermined significance (VUS) is introduced into the family only once. If so, all family members between two members demonstrated to carry a VUS are obligate carriers. Probabilities for cosegregation of disease and VUS by chance, penetrance, and expression may then be calculated. SISA return values do not include person identifiers and need no explicit informed consent, so there are no ethical complications in submitting SISA return values to central databases. Values for several families may be combined, and the values for a family may be updated by the contributor. SISA is used to consider penetrance whenever sequencing demonstrates a VUS in the known cancer-predisposing genes. Any family structure at hand in a genetic clinic may be used. One may extend a lineage in a family by demonstrating the same VUS in a distant relative, thereby identifying all obligate carriers in between. Such extension is a way to escape selection biases by expanding the families outside the clusters used to select them. © 2011 Wiley-Liss, Inc.
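    Under the stated assumptions (no recombination between variant and gene, single introduction of the variant), the probability that a variant cosegregates with disease by chance alone reduces to a power of one half over the informative meioses. A toy calculation illustrating this, not the SISA software itself:

```python
def cosegregation_by_chance(informative_meioses):
    """Probability that a variant tracks with disease purely by chance:
    each informative meiosis transmits either allele with probability 1/2."""
    return 0.5 ** informative_meioses

# e.g. a variant observed to track with disease through 5 informative meioses
p = cosegregation_by_chance(5)   # 0.03125
```

The smaller this probability, the stronger the evidence that the VUS is linked to the disease phenotype; evidence from several families multiplies.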

  15. Optical chirp z-transform processor with a simplified architecture.

    PubMed

    Ngo, Nam Quoc

    2014-12-29

    Using a simplified chirp z-transform (CZT) algorithm based on the discrete-time convolution method, this paper presents the synthesis of a simplified architecture of a reconfigurable optical chirp z-transform (OCZT) processor based on the silica-based planar lightwave circuit (PLC) technology. In the simplified architecture of the reconfigurable OCZT, the required number of optical components is small and there are no waveguide crossings which make fabrication easy. The design of a novel type of optical discrete Fourier transform (ODFT) processor as a special case of the synthesized OCZT is then presented to demonstrate its effectiveness. The designed ODFT can be potentially used as an optical demultiplexer at the receiver of an optical fiber orthogonal frequency division multiplexing (OFDM) transmission system.
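    The chirp z-transform itself is compact enough to sketch. The snippet below evaluates the defining sum directly (the paper's processor instead realizes it via discrete-time convolution, which this sketch does not reproduce) and checks that the DFT emerges as the special case A = 1, W = exp(-2πj/N):

```python
import cmath

def czt(x, m, w, a):
    """Chirp z-transform by direct evaluation:
    X_k = sum_n x[n] * z_k**(-n), with z_k = a * w**(-k), k = 0..m-1."""
    out = []
    for k in range(m):
        zk = a * w ** (-k)
        out.append(sum(xn * zk ** (-n) for n, xn in enumerate(x)))
    return out

# With a = 1 and w = exp(-2*pi*j/N), the CZT reduces to the N-point DFT
x = [1.0, 2.0, 0.0, -1.0]
N = len(x)
X = czt(x, N, cmath.exp(-2j * cmath.pi / N), 1.0)
```

Choosing other values of a and w lets the same transform zoom into an arbitrary arc of the z-plane, which is what makes a reconfigurable CZT processor more general than a fixed DFT demultiplexer.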

  16. Thumb Ossification Composite Index (TOCI) for Predicting Peripubertal Skeletal Maturity and Peak Height Velocity in Idiopathic Scoliosis

    PubMed Central

    Hung, Alec L.H.; Chau, W.W.; Shi, B.; Chow, Simon K.; Yu, Fiona Y.P.; Lam, T.P.; Ng, Bobby K.W.; Qiu, Y.; Cheng, Jack C.Y.

    2017-01-01

    Background: Accurate skeletal maturity assessment is important to guide the clinical evaluation of idiopathic scoliosis, but commonly used methods are inadequate or too complex for rapid clinical use. The objective of the study was to propose a new simplified staging method, called the thumb ossification composite index (TOCI), based on the ossification pattern of the 2 thumb epiphyses and the adductor sesamoid bone; to determine its accuracy in predicting skeletal maturation when compared with the Sanders simplified skeletal maturity system (SSMS); and to validate its interrater and intrarater reliability. Methods: Hand radiographs of 125 girls, acquired when they were newly diagnosed with idiopathic scoliosis prior to menarche and during longitudinal follow-up until skeletal maturity (a minimum of 4 years), were scored with the TOCI and SSMS. These scores were compared with digital skeletal age (DSA) and radius, ulna, and small hand bones (RUS) scores; anthropometric data; peak height velocity; and growth-remaining profiles. Correlations were analyzed with the chi-square test, Spearman and Cramer V correlation methods, and receiver operating characteristic curve analysis. Reliability analysis using the intraclass correlation coefficient (ICC) was conducted. Results: Six hundred and forty-five hand radiographs (an average of 5 per girl) were scored. The TOCI staging system was highly correlated with the DSA and RUS scores (r = 0.93 and 0.92, p < 0.01). The mean peak height velocity (and standard deviation) was 7.43 ± 1.45 cm/yr and occurred at a mean age of 11.9 ± 0.86 years, with 70.1% and 51.4% of the subjects attaining their peak height velocity at TOCI stage 5 and SSMS stage 3, respectively. The 2 systems predicted peak height velocity with comparable accuracy, with a strong Cramer V association (0.526 and 0.466, respectively; p < 0.01) and similar sensitivity and specificity on receiver operating characteristic curve analysis. 
The mean age at menarche was 12.57 ± 1.12 years, with menarche occurring over several stages in both the TOCI and the SSMS. The growth remaining predicted by TOCI stage 8 matched well with that predicted by SSMS stage 7, with a mean of <2 cm/yr of growth potential over a mean of <1.7 years at these stages. The TOCI also demonstrated excellent reliability, with an overall ICC of >0.97. Conclusions: The new proposed TOCI could provide a simplified staging system for the assessment of skeletal maturity of subjects with idiopathic scoliosis. The index needs to be subjected to further multicenter validation in different ethnic groups. PMID:28872525

  17. Comparison of various extraction techniques for the determination of polycyclic aromatic hydrocarbons in worms.

    PubMed

    Mooibroek, D; Hoogerbrugge, R; Stoffelsen, B H G; Dijkman, E; Berkhoff, C J; Hogendoorn, E A

    2002-10-25

    Two less laborious extraction methods, viz. (i) a simplified liquid extraction using light petroleum and (ii) microwave-assisted solvent extraction (MASE), for the analysis of polycyclic aromatic hydrocarbons (PAHs) in samples of the compost worm Eisenia andrei, were compared with a reference method. After extraction and concentration, the analytical methodology consisted of a cleanup of (part of) the extract with high-performance gel permeation chromatography (HPGPC) and instrumental analysis of 15 PAHs with reversed-phase liquid chromatography with fluorescence detection (RPLC-FLD). Comparison of the methods was done by analysing samples with incurred residues (n=15, each method) originating from an experiment in which worms were exposed to a soil contaminated with PAHs. Simultaneously, the performance of the total lipid determination of each method was established. Evaluation of the data by means of principal component analysis (PCA) and analysis of variance (ANOVA) revealed that the performance of the light petroleum method, for both the extraction of PAHs (concentration range 1-30 ng/g) and lipid content, corresponds very well with the reference method. Compared to the reference method, the MASE method yielded somewhat lower concentrations for the less volatile PAHs, e.g., dibenzo[ah]anthracene and benzo[ghi]perylene, and provided a significantly higher amount of co-extracted material.

  18. Development and validation of a simplified titration method for monitoring volatile fatty acids in anaerobic digestion.

    PubMed

    Sun, Hao; Guo, Jianbin; Wu, Shubiao; Liu, Fang; Dong, Renjie

    2017-09-01

    The volatile fatty acid (VFA) concentration is considered one of the most sensitive process performance indicators in the anaerobic digestion (AD) process. However, accurate determination of VFA concentrations in AD processes normally requires advanced equipment and complex pretreatment procedures. A simplified method with fewer sample pretreatment procedures and improved accuracy is greatly needed, particularly for on-site application. This report outlines improvements to the Nordmann method, one of the most popular titrations used for VFA monitoring. The influence of ion and solid interfering subsystems in titrated samples on the accuracy of results was discussed. The total solid content of the titrated samples was the main factor affecting accuracy in VFA monitoring. Moreover, a high linear correlation was established between total solids content and the difference between VFA values measured by the traditional Nordmann equation and by gas chromatography (GC). Accordingly, a simplified titration method was developed and validated using a semi-continuous experiment of chicken manure anaerobic digestion at various organic loading rates. The good fit of the results obtained by this method in comparison with GC results strongly supports the potential application of this method to VFA monitoring. Copyright © 2017. Published by Elsevier Ltd.
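    The reported linear correlation between total solids and the Nordmann-vs-GC difference suggests a simple bias correction. The sketch below is hypothetical (the calibration numbers are invented, and the paper's actual corrected equation is not reproduced here): fit the difference against total solids, then subtract the fitted bias from the raw titration value.

```python
def fit_line(x, y):
    """Least-squares line y ≈ a*x + b (here: relate total-solids content
    to the Nordmann-minus-GC measurement difference)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# hypothetical calibration: (total solids %, Nordmann - GC difference, g/L)
ts = [1.0, 2.0, 3.0, 4.0]
diff = [0.2, 0.4, 0.6, 0.8]
a, b = fit_line(ts, diff)

def corrected_vfa(nordmann_vfa, total_solids):
    """Subtract the TS-dependent bias from the raw titration value."""
    return nordmann_vfa - (a * total_solids + b)
```

In practice the calibration pairs would come from parallel titration and GC measurements over the expected range of solids contents.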

  19. Development of Generation System of Simplified Digital Maps

    NASA Astrophysics Data System (ADS)

    Uchimura, Keiichi; Kawano, Masato; Tokitsu, Hiroki; Hu, Zhencheng

    In recent years, digital maps have been used in a variety of scenarios, including car navigation systems and map information services over the Internet. These digital maps are formed by multiple layers of maps of different scales; the map data most suitable for the specific situation are used. Currently, the production of map data of different scales is done by hand due to constraints related to processing time and accuracy. We conducted research concerning technologies for automatic generation of simplified map data from detailed map data. In the present paper, the authors propose the following: (1) a method to transform data related to streets, rivers, etc. containing widths into line data, (2) a method to eliminate the component points of the data, and (3) a method to eliminate data that lie below a certain threshold. In addition, in order to evaluate the proposed method, a user survey was conducted; in this survey we compared maps generated using the proposed method with the commercially available maps. From the viewpoint of the amount of data reduction and processing time, and on the basis of the results of the survey, we confirmed the effectiveness of the automatic generation of simplified maps using the proposed methods.
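    The abstract does not name its point-elimination rule, but the classic Ramer-Douglas-Peucker algorithm is a standard choice that illustrates the idea: drop component points whose deviation from the simplified line falls below a tolerance.

```python
def rdp(points, eps):
    """Ramer-Douglas-Peucker polyline simplification: recursively keep the
    point farthest from the anchor-floater chord while it exceeds eps."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0   # guard coincident endpoints
    dmax, idx = 0.0, 0
    for i, (x, y) in enumerate(points[1:-1], 1):
        d = abs(dy * (x - x1) - dx * (y - y1)) / norm   # perpendicular distance
        if d > dmax:
            dmax, idx = d, i
    if dmax <= eps:
        return [points[0], points[-1]]
    return rdp(points[:idx + 1], eps)[:-1] + rdp(points[idx:], eps)

line = [(0, 0), (1, 0.05), (2, -0.04), (3, 1.0), (4, 0.02), (5, 0.0)]
simplified = rdp(line, 0.1)   # near-collinear points are eliminated
```

A larger eps removes more points, which is how one detailed layer can feed several coarser map scales.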

  20. A simplified solar cell array modelling program

    NASA Technical Reports Server (NTRS)

    Hughes, R. D.

    1982-01-01

    As part of the energy conversion/self-sufficiency efforts of DSN engineering, it was necessary to have a simplified computer model of a solar photovoltaic (PV) system. This article describes the analysis and simplifications employed in the development of a PV cell array computer model. The analysis of the incident solar radiation, the steady-state cell temperature, and the current-voltage characteristics of a cell array are discussed. A sample cell array was modelled and the results are presented.
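    The current-voltage behaviour such a model must capture can be sketched with the textbook single-diode equation; series and shunt resistance are neglected here, and the parameter values are illustrative rather than those of the article's model.

```python
import math

def cell_current(v, i_ph=3.0, i_0=1e-9, n=1.3, v_t=0.02585):
    """Simplified single-diode PV cell model:
    I = Iph - I0 * (exp(V / (n*Vt)) - 1), with photocurrent Iph,
    saturation current I0, ideality factor n and thermal voltage Vt."""
    return i_ph - i_0 * (math.exp(v / (n * v_t)) - 1.0)

# sweep the I-V curve and locate the maximum-power point
vs = [k * 0.001 for k in range(700)]
powers = [v * cell_current(v) for v in vs]
v_mpp = vs[powers.index(max(powers))]
```

Scaling to an array is then a matter of multiplying voltage by cells in series and current by strings in parallel, with cell temperature entering through Vt and I0.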

  1. Controller design via structural reduced modeling by FETM

    NASA Technical Reports Server (NTRS)

    Yousuff, A.

    1986-01-01

    The Finite Element - Transfer Matrix (FETM) method has been developed to reduce the computations involved in the analysis of structures. This widely accepted method, however, has certain limitations, and does not directly produce reduced models for control design. To overcome these shortcomings, a modification of the FETM method has been developed. The modified FETM method easily produces reduced models that are tailored toward subsequent control design. Other features of this method are its ability to: (1) extract open loop frequencies and mode shapes with fewer computations, (2) overcome limitations of the original FETM method, and (3) simplify the procedures for output feedback, constrained compensation, and decentralized control. This semiannual report presents the development of the modified FETM and, through an example, illustrates its applicability to an output feedback and a decentralized control design.

  2. Effect of picric acid and enzymatic creatinine on the efficiency of the glomerular filtration rate predicator formula.

    PubMed

    Qiu, Ling; Guo, Xiuzhi; Zhu, Yan; Shou, Weilin; Gong, Mengchun; Zhang, Lin; Han, Huijuan; Quan, Guoqiang; Xu, Tao; Li, Hang; Li, Xuewang

    2013-01-01

    To investigate the impact of the serum creatinine measurement method on the applicability of glomerular filtration rate (GFR) evaluation equations. The 99mTc-DTPA plasma clearance rate was used as the GFR reference (rGFR) in patients with chronic kidney disease (CKD). Serum creatinine was measured using an enzymatic or a picric acid creatinine reagent. The GFR of the patients was estimated using the Cockcroft-Gault equation corrected for body surface area, the simplified Modification of Diet in Renal Disease (MDRD) equation, the simplified MDRD equation corrected to isotope dilution mass spectrometry, the CKD Epidemiology Collaboration (CKD-EPI) equation, and two Chinese simplified MDRD equations. Significant differences in the eGFR results estimated with the enzymatic and picric acid methods were observed for the same evaluation equation. The intraclass correlation coefficient (ICC) of eGFR when creatinine was measured by the picric acid method was significantly lower than that of the enzymatic method. The assessment accuracy of every equation was significantly higher when creatinine was measured by the enzymatic method than by the picric acid method when rGFR was ≥ 60 mL/min/1.73 m2. A significant difference was demonstrated for the same GFR evaluation equation using the picric acid and enzymatic methods. The enzymatic creatinine method was better than the picric acid method.
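    The simplified (4-variable) MDRD equation referred to above is a published formula that is easy to encode. This sketch uses the traditional coefficient 186 (175 is the usual choice for IDMS-traceable creatinine) and is for illustration only, not clinical use:

```python
def egfr_mdrd(scr_mg_dl, age, female=False, black=False, k=186.0):
    """Simplified 4-variable MDRD eGFR estimate (mL/min/1.73 m^2):
    eGFR = k * Scr^-1.154 * age^-0.203 * 0.742 (if female) * 1.212 (if black).
    Pass k=175 for creatinine calibrated to IDMS."""
    egfr = k * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

e = egfr_mdrd(1.0, 50)   # roughly 84 mL/min/1.73 m^2
```

The creatinine assay matters precisely because Scr enters with the exponent -1.154: a small calibration bias between picric acid and enzymatic methods is amplified in the estimate.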

  3. The Effect of Simplifying Dental Implant Drilling Sequence on Osseointegration: An Experimental Study in Dogs

    PubMed Central

    Giro, Gabriela; Tovar, Nick; Marin, Charles; Bonfante, Estevam A.; Jimbo, Ryo; Suzuki, Marcelo; Janal, Malvin N.; Coelho, Paulo G.

    2013-01-01

    Objectives. To test the hypothesis that there would be no differences in osseointegration when reducing the number of drills for site preparation relative to the conventional drilling sequence. Methods. Seventy-two implants were bilaterally placed in the tibiae of 18 beagle dogs and remained for 1, 3, and 5 weeks. Thirty-six implants were 3.75 mm in diameter and the other 36 were 4.2 mm. Half of the implants of each diameter were placed under a simplified technique (pilot drill + final-diameter drill) and the other half were placed under conventional drilling, where multiple drills of increasing diameter were utilized. After euthanasia, the bone-implant samples were processed and subjected to histological analysis. Bone-to-implant contact (BIC) and bone-area-fraction occupancy (BAFO) were assessed. Statistical analyses were performed by GLM ANOVA at the 95% level of significance, considering implant diameter, time in vivo, and drilling procedure as independent variables and BIC and BAFO as the dependent variables. Results. Both techniques led to implant integration. No differences in BIC and BAFO were observed between drilling procedures as time elapsed in vivo. Conclusions. The simplified drilling protocol presented osseointegration outcomes comparable to the conventional protocol, which confirmed the initial hypothesis. PMID:23431303

  4. Coefficient of Friction Patterns Can Identify Damage in Native and Engineered Cartilage Subjected to Frictional-Shear Stress

    PubMed Central

    Whitney, G. A.; Mansour, J. M.; Dennis, J. E.

    2015-01-01

    The mechanical loading environment encountered by articular cartilage in situ makes frictional-shear testing an invaluable technique for assessing engineered cartilage. Despite the important information that is gained from this testing, it remains under-utilized, especially for determining damage behavior. Currently, extensive visual inspection is required to assess damage; this is cumbersome and subjective. Tools to simplify, automate, and remove subjectivity from the analysis may increase the accessibility and usefulness of frictional-shear testing as an evaluation method. The objective of this study was to determine if the friction signal could be used to detect damage that occurred during the testing. This study proceeded in two phases: first, a simplified model of biphasic lubrication that does not require knowledge of interstitial fluid pressure was developed. In the second phase, frictional-shear tests were performed on 74 cartilage samples, and the simplified model was used to extract characteristic features from the friction signals. Using support vector machine classifiers, the extracted features were able to detect damage with a median accuracy of approximately 90%. The accuracy remained high even in samples with minimal damage. In conclusion, the friction signal acquired during frictional-shear testing can be used to detect resultant damage to a high level of accuracy. PMID:25691395
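    The pipeline of extracting characteristic features from the friction signal and classifying them can be miniaturized as follows. This stand-in uses a fixed threshold in place of the paper's support-vector-machine classifiers, and the friction traces are synthetic; it only illustrates the feature-then-classify structure.

```python
def friction_features(mu):
    """Two toy features from a coefficient-of-friction trace: the mean level
    and the largest sample-to-sample jump (damage tends to produce spikes)."""
    mean = sum(mu) / len(mu)
    max_jump = max(abs(b - a) for a, b in zip(mu, mu[1:]))
    return mean, max_jump

def classify(mu, jump_threshold=0.05):
    """Threshold stand-in for a trained classifier on the jump feature."""
    return "damaged" if friction_features(mu)[1] > jump_threshold else "intact"

smooth = [0.10 + 0.001 * i for i in range(50)]          # undamaged-like trace
spiky = smooth[:25] + [smooth[25] + 0.2] + smooth[26:]  # spike mid-test
labels = (classify(smooth), classify(spiky))
```

An SVM replaces the hand-picked threshold with a decision boundary learned from many labeled traces, which is what allows the reported ~90% accuracy across 74 samples.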

  5. Analysis and synthesis of bianisotropic metasurfaces by using analytical approach based on equivalent parameters

    NASA Astrophysics Data System (ADS)

    Danaeifar, Mohammad; Granpayeh, Nosrat

    2018-03-01

    An analytical method is presented to analyze and synthesize bianisotropic metasurfaces. The equivalent parameters of metasurfaces in terms of meta-atom properties and other specifications of metasurfaces are derived. These parameters are related to electric, magnetic, and electromagnetic/magnetoelectric dipole moments of the bianisotropic media, and they can simplify the analysis of complicated and multilayer structures. A metasurface of split ring resonators is studied as an example demonstrating the proposed method. The optical properties of the meta-atom are explored, and the calculated polarizabilities are applied to find the reflection coefficient and the equivalent parameters of the metasurface. Finally, a structure consisting of two metasurfaces of the split ring resonators is provided, and the proposed analytical method is applied to derive the reflection coefficient. The validity of this analytical approach is verified by full-wave simulations which demonstrate good accuracy of the equivalent parameter method. This method can be used in the analysis and synthesis of bianisotropic metasurfaces with different materials and in different frequency ranges by considering electric, magnetic, and electromagnetic/magnetoelectric dipole moments.

  6. Ca analysis: an Excel based program for the analysis of intracellular calcium transients including multiple, simultaneous regression analysis.

    PubMed

    Greensmith, David J

    2014-01-01

    Here I present an Excel based program for the analysis of intracellular Ca transients recorded using fluorescent indicators. The program can perform all the necessary steps to convert recorded raw voltage changes into meaningful physiological information. The program performs two fundamental processes: (1) it can prepare the raw signal by several methods, and (2) it can then be used to analyze the prepared data to provide information such as absolute intracellular Ca levels. The rates of change of Ca can also be measured using multiple, simultaneous regression analysis. I demonstrate that this program performs as well as commercially available software, but has numerous advantages, namely a simplified, self-contained analysis workflow. Copyright © 2013 The Author. Published by Elsevier Ireland Ltd. All rights reserved.

  7. DRG coding practice: a nationwide hospital survey in Thailand

    PubMed Central

    2011-01-01

    Background Diagnosis Related Group (DRG) payment is preferred by healthcare reform in various countries, but its implementation in resource-limited countries has not been fully explored. Objectives This study aimed (1) to compare the characteristics of hospitals in Thailand that were audited with those that were not and (2) to develop a simplified scale to measure hospital coding practice. Methods A questionnaire survey was conducted of 920 hospitals in the Summary and Coding Audit Database (SCAD hospitals, all of which were audited in 2008 because of suspicious reports of possible DRG miscoding); the survey also included 390 non-SCAD hospitals. The questionnaire asked about the general demographics of the hospitals and the hospital coding structure and process, and included a set of 63 opinion-oriented items on current hospital coding practice. Descriptive statistics and exploratory factor analysis (EFA) were used for data analysis. Results SCAD and non-SCAD hospitals differed in many aspects, especially the number of medical statisticians, the experience of medical statisticians and physicians, and the number of certified coders. Factor analysis revealed a simplified 3-factor, 20-item model to assess hospital coding practice and classify hospital intention. Conclusion Hospital providers should not be assumed capable of producing high-quality DRG codes, especially in resource-limited settings. PMID:22040256

  8. Methods for determining the internal thrust of scramjet engine modules from experimental data

    NASA Technical Reports Server (NTRS)

    Voland, Randall T.

    1990-01-01

    Methods for calculating zero-fuel internal drag of scramjet engine modules from experimental measurements are presented. These methods include two control-volume approaches, and a pressure and skin-friction integration. The three calculation techniques are applied to experimental data taken during tests of a version of the NASA parametric scramjet. The methods agree to within seven percent of the mean value of zero-fuel internal drag even though several simplifying assumptions are made in the analysis. The mean zero-fuel internal drag coefficient for this particular engine is calculated to be 0.150. The zero-fuel internal drag coefficient when combined with the change in engine axial force with and without fuel defines the internal thrust of an engine.

  9. The integral line-beam method for gamma skyshine analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J.K.; Faw, R.E.; Bassett, M.S.

    1991-03-01

    This paper presents a refinement of a simplified method, based on line-beam response functions, for performing skyshine calculations for shielded and collimated gamma-ray sources. New coefficients for an empirical fit to the line-beam response function are provided and a prescription for making the response function continuous in energy and emission direction is introduced. For a shielded source, exponential attenuation and a buildup factor correction for scattered photons in the shield are used. Results of the new integral line-beam method of calculation are compared to a variety of benchmark experimental data and calculations and are found to give generally excellent agreement at a small fraction of the computational expense required by other skyshine methods.
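
    The shielded-source treatment described above (exponential attenuation with a buildup-factor correction) can be sketched as follows. This is a minimal illustration of that one step, not the paper's line-beam integration; the function names are assumptions, and a real skyshine calculation would integrate a line-beam response function over all emission directions.

```python
import math

def shielded_source_strength(s0, mu, t, buildup):
    """Effective photon emission rate through a slab shield.

    s0      -- unshielded source strength (photons/s)
    mu      -- linear attenuation coefficient of the shield (1/cm)
    t       -- shield thickness (cm)
    buildup -- buildup factor correcting for scattered photons
    """
    # Exponential attenuation, then the buildup-factor correction
    # for photons scattered (rather than absorbed) in the shield.
    return s0 * buildup * math.exp(-mu * t)

# Illustrative numbers only: a 2 cm shield with mu = 0.5/cm and
# a buildup factor of 2 passes 2*exp(-1) ~ 74% of the photons.
effective = shielded_source_strength(1.0e10, 0.5, 2.0, 2.0)
```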

  10. Controller design via structural reduced modeling by FETM

    NASA Technical Reports Server (NTRS)

    Yousuff, Ajmal

    1987-01-01

    The Finite Element-Transfer Matrix (FETM) method has been developed to reduce the computations involved in analysis of structures. This widely accepted method, however, has certain limitations, and does not address the issues of control design. To overcome these, a modification of the FETM method has been developed. The new method easily produces reduced models tailored toward subsequent control design. Other features of this method are its ability to: (1) extract open loop frequencies and mode shapes with less computations, (2) overcome limitations of the original FETM method, and (3) simplify the design procedures for output feedback, constrained compensation, and decentralized control. This report presents the development of the new method, generation of reduced models by this method, their properties, and the role of these reduced models in control design. Examples are included to illustrate the methodology.

  11. A Thermal Equilibrium Analysis of Line Contact Hydrodynamic Lubrication Considering the Influences of Reynolds Number, Load and Temperature

    PubMed Central

    Yu, Xiaoli; Sun, Zheng; Huang, Rui; Zhang, Yu; Huang, Yuqi

    2015-01-01

    Thermal effects such as conduction, convection and viscous dissipation are important to lubrication performance, and they vary with the friction conditions. These variations have caused some inconsistencies in the conclusions of different researchers regarding the relative contributions of these thermal effects. To reveal the relationship between the contributions of the thermal effects and the friction conditions, a steady-state THD analysis model was presented. The results indicate that the contribution of each thermal effect sharply varies with the Reynolds number and temperature. Convective effect could be dominant under certain conditions. Additionally, the accuracy of some simplified methods of thermo-hydrodynamic analysis is further discussed. PMID:26244665

  12. A Thermal Equilibrium Analysis of Line Contact Hydrodynamic Lubrication Considering the Influences of Reynolds Number, Load and Temperature.

    PubMed

    Yu, Xiaoli; Sun, Zheng; Huang, Rui; Zhang, Yu; Huang, Yuqi

    2015-01-01

    Thermal effects such as conduction, convection and viscous dissipation are important to lubrication performance, and they vary with the friction conditions. These variations have caused some inconsistencies in the conclusions of different researchers regarding the relative contributions of these thermal effects. To reveal the relationship between the contributions of the thermal effects and the friction conditions, a steady-state THD analysis model was presented. The results indicate that the contribution of each thermal effect sharply varies with the Reynolds number and temperature. Convective effect could be dominant under certain conditions. Additionally, the accuracy of some simplified methods of thermo-hydrodynamic analysis is further discussed.

  13. Simplifier: a web tool to eliminate redundant NGS contigs.

    PubMed

    Ramos, Rommel Thiago Jucá; Carneiro, Adriana Ribeiro; Azevedo, Vasco; Schneider, Maria Paula; Barh, Debmalya; Silva, Artur

    2012-01-01

    Modern genomic sequencing technologies produce a large amount of data with reduced cost per base; however, these data consist of short reads. This reduction in the size of the reads, compared to those obtained with previous methodologies, presents new challenges, including a need for efficient algorithms for the assembly of genomes from short reads and for resolving repetitions. Additionally, after ab initio assembly, curation of the hundreds or thousands of contigs generated by assemblers demands considerable time and computational resources. We developed Simplifier, stand-alone software that selectively eliminates redundant sequences from the collection of contigs generated by ab initio assembly of genomes. Application of Simplifier to data generated by assembly of the genome of Corynebacterium pseudotuberculosis strain 258 reduced the number of contigs generated by ab initio methods from 8,004 to 5,272, a reduction of 34.14%; in addition, N50 increased from 1 kb to 1.5 kb. Processing the contigs of Escherichia coli DH10B with Simplifier reduced the mate-paired library by 17.47% and the fragment library by 23.91%. Simplifier removed redundant sequences from datasets produced by assemblers, thereby reducing the effort required for finalization of genome assembly in tests with data from prokaryotic organisms. Simplifier is available at http://www.genoma.ufpa.br/rramos/softwares/simplifier.xhtml. It requires Sun JDK 6 or higher.
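
    The N50 figure quoted above is a standard assembly metric rather than anything specific to Simplifier; a minimal sketch of how it is conventionally computed:

```python
def n50(contig_lengths):
    """N50: the largest length L such that contigs of length >= L
    together cover at least half of the total assembly length."""
    total = sum(contig_lengths)
    running = 0
    # Walk contigs from longest to shortest until half the
    # assembly length is covered.
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if 2 * running >= total:
            return length
    return 0  # only reached for an empty input

# Illustrative contig set (lengths in bp): half of the 32 bp total
# is covered once the two 8 bp contigs are counted, so N50 = 8.
example_n50 = n50([2, 2, 2, 3, 3, 4, 8, 8])
```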

  14. Ablative Rayleigh Taylor instability in the limit of an infinitely large density ratio

    NASA Astrophysics Data System (ADS)

    Clavin, Paul; Almarcha, Christophe

    2005-05-01

    The instability of ablation fronts strongly accelerated toward the dense medium under the conditions of inertial confinement fusion (ICF) is addressed in the limit of an infinitely large density ratio. The analysis demonstrates that the flow is irrotational to first order, reducing the nonlinear analysis to a two-potential-flow problem; vorticity appears only at subsequent orders in the perturbation analysis. This result greatly simplifies the analysis. The possibility of using boundary integral methods opens new perspectives in the nonlinear theory of the ablative RT instability in ICF. A few examples are given at the end of the Note. To cite this article: P. Clavin, C. Almarcha, C. R. Mecanique 333 (2005).

  15. Investigations in a Simplified Bracketed Grid Approach to Metrical Structure

    ERIC Educational Resources Information Center

    Liu, Patrick Pei

    2010-01-01

    In this dissertation, I examine the fundamental mechanisms and assumptions of the Simplified Bracketed Grid Theory (Idsardi 1992) in two ways: first, by comparing it with Parametric Metrical Theory (Hayes 1995), and second, by implementing it in the analysis of several case studies in stress assignment and syllabification. Throughout these…

  16. Using assemblage data in ecological indicators: A comparison and evaluation of commonly available statistical tools

    USGS Publications Warehouse

    Smith, Joseph M.; Mather, Martha E.

    2012-01-01

    Ecological indicators are science-based tools used to assess how human activities have impacted environmental resources. For monitoring and environmental assessment, existing species assemblage data can be used to make these comparisons through time or across sites. An impediment to using assemblage data, however, is that these data are complex and need to be simplified in an ecologically meaningful way. Because multivariate statistics are mathematical relationships, statistical groupings may not make ecological sense and will not have utility as indicators. Our goal was to define a process to select defensible and ecologically interpretable statistical simplifications of assemblage data in which researchers and managers can have confidence. For this, we chose a suite of statistical methods, compared the groupings that resulted from these analyses, identified convergence among groupings, and then interpreted the groupings using species and ecological guilds. When we tested this approach using a statewide stream fish dataset, not all statistical methods worked equally well. For our dataset, logistic regression (Log), detrended correspondence analysis (DCA), cluster analysis (CL), and non-metric multidimensional scaling (NMDS) provided consistent, simplified output. Specifically, the Log, DCA, CL-1, and NMDS-1 groupings were ≥60% similar to each other, overlapped with the fluvial-specialist ecological guild, and contained a common subset of species. Groupings based on number of species (e.g., Log, DCA, CL and NMDS) outperformed groupings based on abundance [e.g., principal components analysis (PCA) and Poisson regression]. Although the specific methods that worked on our test dataset have generality, here we are advocating a process (e.g., identifying convergent groupings with redundant species composition that are ecologically interpretable) rather than the automatic use of any single statistical tool. We summarize this process in step-by-step guidance for the future use of these commonly available ecological and statistical methods in preparing assemblage data for use in ecological indicators.
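
    The convergence check described above (groupings ≥60% similar, sharing a common subset of species) can be illustrated with a simple set-overlap measure. The Jaccard index below and the species names are illustrative assumptions, not the paper's exact similarity criterion:

```python
def jaccard(a, b):
    """Similarity of two species groupings: size of the intersection
    divided by size of the union (1.0 means identical membership)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Hypothetical groupings produced by two ordination methods
# (species names are purely illustrative):
dca_group = {"blacknose dace", "creek chub", "white sucker"}
nmds_group = {"blacknose dace", "creek chub", "longnose dace"}

# 2 shared species out of 4 distinct species overall.
similarity = jaccard(dca_group, nmds_group)
convergent = similarity >= 0.6  # the paper's 60% threshold, applied here
```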

  17. Weather data for simplified energy calculation methods. Volume II. Middle United States: TRY data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olsen, A.R.; Moreno, S.; Deringer, J.

    1984-08-01

    The objective of this report is to provide a source of weather data for direct use with a number of simplified energy calculation methods available today. Complete weather data for a number of cities in the United States are provided for use in the following methods: degree hour, modified degree hour, bin, modified bin, and variable degree day. This report contains sets of weather data for 22 cities in the continental United States using Test Reference Year (TRY) source weather data. The weather data at each city have been summarized in a number of ways to provide the differing levels of detail necessary for alternative simplified energy calculation methods. Weather variables summarized include dry bulb and wet bulb temperature, percent relative humidity, humidity ratio, wind speed, percent possible sunshine, percent diffuse solar radiation, total solar radiation on horizontal and vertical surfaces, and solar heat gain through standard DSA glass. Monthly and annual summaries, in some cases by time of day, are available. These summaries are produced in a series of nine computer-generated tables.
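
    As an illustration of the degree-hour family of methods this weather data supports, here is a minimal sketch of a heating degree-hour sum. The base temperature and the heating-only form are assumptions for illustration, not values taken from the report:

```python
def heating_degree_hours(hourly_temps_c, base_c=18.0):
    """Sum, over all hours, of how far the outdoor dry-bulb
    temperature falls below the base temperature; hours at or
    above the base contribute zero."""
    return sum(max(0.0, base_c - t) for t in hourly_temps_c)

# Three illustrative hourly readings: only the 10 degC hour is
# below the 18 degC base, contributing 8 degree-hours.
hdh = heating_degree_hours([10.0, 18.0, 20.0])
```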

  18. Forecast skill score assessment of a relocatable ocean prediction system, using a simplified objective analysis method

    NASA Astrophysics Data System (ADS)

    Onken, Reiner

    2017-11-01

    A relocatable ocean prediction system (ROPS) was applied to an observational data set collected in June 2014 in the waters to the west of Sardinia (western Mediterranean) in the framework of the REP14-MED experiment. The observational data, comprising more than 6000 temperature and salinity profiles from a fleet of underwater gliders and shipborne probes, were assimilated in the Regional Ocean Modeling System (ROMS), which is the heart of ROPS, and verified against independent observations from ScanFish tows by means of the forecast skill score as defined by Murphy (1993). A simplified objective analysis (OA) method was utilised for assimilation, taking account of only those profiles which were located within a predetermined time window W. As a result of a sensitivity study, the highest skill score was obtained for a correlation length scale C = 12.5 km, W = 24 h, and r = 1, where r is the ratio between the error of the observations and the background error, both for temperature and salinity. Additional ROPS runs showed that (i) the skill score of assimilation runs was mostly higher than the score of a control run without assimilation, (ii) the skill score increased with increasing forecast range, and (iii) the skill score for temperature was higher than the score for salinity in the majority of cases. Furthermore, it is demonstrated that the vast number of observations can be managed by the applied OA method without data reduction, enabling timely operational forecasts even on a commercially available personal computer or a laptop.
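
    Murphy's (1993) skill score is conventionally defined as one minus the ratio of the forecast's mean square error to that of a reference forecast. A minimal sketch under that standard definition (an illustration, not the ROPS implementation):

```python
def mse(pred, obs):
    """Mean square error of a prediction against observations."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

def skill_score(forecast, reference, obs):
    """Murphy (1993) MSE skill score: 1 for a perfect forecast,
    0 when no better than the reference, negative when worse."""
    return 1.0 - mse(forecast, obs) / mse(reference, obs)

# Illustrative values: a perfect forecast scores 1.0 against any
# imperfect reference (here, a zero "control" forecast).
perfect = skill_score([1.0, 2.0, 3.0], [0.0, 0.0, 0.0], [1.0, 2.0, 3.0])
```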

  19. Improvements to direct quantitative analysis of multiple microRNAs facilitating faster analysis.

    PubMed

    Ghasemi, Farhad; Wegman, David W; Kanoatov, Mirzo; Yang, Burton B; Liu, Stanley K; Yousef, George M; Krylov, Sergey N

    2013-11-05

    Studies suggest that patterns of deregulation in sets of microRNA (miRNA) can be used as cancer diagnostic and prognostic biomarkers. Establishing a "miRNA fingerprint"-based diagnostic technique requires a suitable miRNA quantitation method. The appropriate method must be direct, sensitive, capable of simultaneous analysis of multiple miRNAs, rapid, and robust. Direct quantitative analysis of multiple microRNAs (DQAMmiR) is a recently introduced capillary electrophoresis-based hybridization assay that satisfies most of these criteria. Previous implementations of the method suffered, however, from slow analysis time and required lengthy and stringent purification of hybridization probes. Here, we introduce a set of critical improvements to DQAMmiR that address these technical limitations. First, we have devised an efficient purification procedure that achieves the required purity of the hybridization probe in a fast and simple fashion. Second, we have optimized the concentrations of the DNA probe to decrease the hybridization time to 10 min. Lastly, we have demonstrated that the increased probe concentrations and decreased incubation time removed the need for masking DNA, further simplifying the method and increasing its robustness. The presented improvements bring DQAMmiR closer to use in a clinical setting.

  20. A simplified method of walking track analysis to assess short-term locomotor recovery after acute spinal cord injury caused by thoracolumbar intervertebral disc extrusion in dogs.

    PubMed

    Song, R B; Oldach, M S; Basso, D M; da Costa, R C; Fisher, L C; Mo, X; Moore, S A

    2016-04-01

    The purpose of this study was to evaluate a simplified method of walking track analysis to assess treatment outcome in canine spinal cord injury. Measurements of stride length (SL) and base of support (BS) were made using a 'finger painting' technique for footprint analysis in all limbs of 20 normal dogs and 27 dogs with 28 episodes of acute thoracolumbar spinal cord injury (SCI) caused by spontaneous intervertebral disc extrusion. Measurements were determined at three separate time points in normal dogs and on days 3, 10 and 30 following decompressive surgery in dogs with SCI. Values for SL, BS and coefficient of variance (COV) for each parameter were compared between groups at each time point. Mean SL was significantly shorter in all four limbs of SCI-affected dogs at days 3, 10, and 30 compared to normal dogs. SL gradually increased toward normal in the 30 days following surgery. As measured by this technique, the COV-SL was significantly higher in SCI-affected dogs than normal dogs in both thoracic limbs (TL) and pelvic limbs (PL) only at day 3 after surgery. BS-TL was significantly wider in SCI-affected dogs at days 3, 10 and 30 following surgery compared to normal dogs. These findings support the use of footprint parameters to compare locomotor differences between normal and SCI-affected dogs, and to assess recovery from SCI. Additionally, our results underscore important changes in TL locomotion in thoracolumbar SCI-affected dogs. Copyright © 2016 Elsevier Ltd. All rights reserved.
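
    The coefficient of variance (COV) compared between groups above is, in its standard form, the sample standard deviation expressed as a percentage of the mean. A minimal sketch of that calculation (the function name is an assumption):

```python
from statistics import mean, stdev

def cov_percent(values):
    """Coefficient of variance: sample standard deviation as a
    percentage of the mean (e.g. of stride-length measurements)."""
    return 100.0 * stdev(values) / mean(values)

# Identical strides give zero variability; a 9 cm / 11 cm pair
# gives a COV of about 14.1%.
no_variation = cov_percent([10.0, 10.0, 10.0])
some_variation = cov_percent([9.0, 11.0])
```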

  1. Probabilistic Analysis for Comparing Fatigue Data Based on Johnson-Weibull Parameters

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.

    2013-01-01

    Leonard Johnson published a methodology for establishing the confidence that two populations of data are different. Johnson's methodology is dependent on limited combinations of test parameters (Weibull slope, mean life ratio, and degrees of freedom) and a set of complex mathematical equations. In this report, a simplified algebraic equation for confidence numbers is derived based on the original work of Johnson. The confidence numbers calculated with this equation are compared to those obtained graphically by Johnson. Using the ratios of mean life, the resultant values of confidence numbers at the 99 percent level deviate less than 1 percent from those of Johnson. At a 90 percent confidence level, the calculated values differ between +2 and 4 percent. The simplified equation is used to rank the experimental lives of three aluminum alloys (AL 2024, AL 6061, and AL 7075), each tested at three stress levels in rotating beam fatigue, analyzed using the Johnson-Weibull method, and compared to the ASTM Standard (E739-91) method of comparison. The ASTM Standard did not statistically distinguish between AL 6061 and AL 7075. However, it is possible to rank the fatigue lives of different materials with a reasonable degree of statistical certainty based on combined confidence numbers using the Johnson-Weibull analysis. AL 2024 was found to have the longest fatigue life, followed by AL 7075, and then AL 6061. The ASTM Standard and the Johnson-Weibull analysis result in the same stress-life exponent p for each of the three aluminum alloys at the median, or L(sub 50), lives.
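
    The mean-life ratios used in the comparison rest on the standard two-parameter Weibull mean life, eta * Gamma(1 + 1/beta). A sketch of that building block (the report's confidence-number equation itself is not reproduced here; function names are assumptions):

```python
import math

def weibull_mean_life(eta, beta):
    """Mean life of a two-parameter Weibull distribution with
    characteristic life eta and Weibull slope (shape) beta."""
    return eta * math.gamma(1.0 + 1.0 / beta)

def mean_life_ratio(eta1, beta1, eta2, beta2):
    """Ratio of mean lives, the quantity Johnson-style comparisons
    of two fatigue-life populations are built on."""
    return weibull_mean_life(eta1, beta1) / weibull_mean_life(eta2, beta2)

# For beta = 1 the Weibull reduces to the exponential distribution,
# whose mean life equals the characteristic life eta.
exp_case = weibull_mean_life(100.0, 1.0)
```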

  2. Simplified diagnostic coding sheet for computerized data storage and analysis in ophthalmology.

    PubMed

    Tauber, J; Lahav, M

    1987-11-01

    A review of currently available diagnostic coding systems revealed that most are either too abbreviated or too detailed. We have compiled a simplified diagnostic coding sheet based on the International Classification of Diseases, 9th Revision (ICD-9), which is both complete and easy to use in a general practice. The information is transferred to a computer, which uses the relevant ICD-9 diagnoses as a database; it can be retrieved later for display of patients' problems or analysis of clinical data.

  3. Electric Power Distribution System Model Simplification Using Segment Substitution

    DOE PAGES

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat; ...

    2017-09-20

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). Finally, in contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  4. Discontinuous Galerkin Methods for NonLinear Differential Systems

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Mansour, Nagi (Technical Monitor)

    2001-01-01

    This talk considers simplified finite element discretization techniques for first-order systems of conservation laws equipped with a convex (entropy) extension. Using newly developed techniques in entropy symmetrization theory, simplified forms of the discontinuous Galerkin (DG) finite element method have been developed and analyzed. The use of symmetrization variables yields numerical schemes which inherit global entropy stability properties of the PDE (partial differential equation) system. Central to the development of the simplified DG methods is the Eigenvalue Scaling Theorem which characterizes right symmetrizers of an arbitrary first-order hyperbolic system in terms of scaled eigenvectors of the corresponding flux Jacobian matrices. A constructive proof is provided for the Eigenvalue Scaling Theorem with detailed consideration given to the Euler equations of gas dynamics and extended conservation law systems derivable as moments of the Boltzmann equation. Using results from kinetic Boltzmann moment closure theory, we then derive and prove energy stability for several approximate DG fluxes which have practical and theoretical merit.

  5. A simplified method for assessing particle deposition rate in aircraft cabins

    NASA Astrophysics Data System (ADS)

    You, Ruoyu; Zhao, Bin

    2013-03-01

    Particle deposition in aircraft cabins is important for passengers' exposure to particulate matter, as well as for the transmission of airborne infectious diseases. In this study, a simplified method is proposed for initial and quick assessment of the particle deposition rate in aircraft cabins. The method includes: collecting the inclined angle, area, characteristic length, and freestream air velocity for each surface in a cabin; estimating the friction velocity based on the characteristic length and freestream air velocity; modeling the particle deposition velocity using the empirical equation we developed previously; and then calculating the particle deposition rate. The particle deposition rates for the fully occupied, half-occupied, 1/4-occupied and empty first-class cabin of the MD-82 commercial airliner were estimated. The results show that occupancy did not significantly influence the particle deposition rate of the cabin. Furthermore, the simplified human model can be used in the assessment with acceptable accuracy. Finally, the comparison results show that the particle deposition rates of aircraft cabins and indoor environments are quite similar.
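
    The final step of the method, turning per-surface deposition velocities into a whole-cabin deposition rate, is conventionally an area-weighted sum divided by the air volume. A minimal sketch under that assumption (function name and units are illustrative; the per-surface velocities would come from the authors' empirical equation):

```python
def deposition_rate_per_hour(surfaces, cabin_volume_m3):
    """Particle deposition rate (1/h) for a cabin.

    surfaces        -- list of (deposition_velocity_m_per_h, area_m2)
                       pairs, one entry per cabin surface
    cabin_volume_m3 -- cabin air volume in cubic metres
    """
    # Area-weighted sum of deposition velocities over all surfaces,
    # normalised by the cabin air volume.
    return sum(v * a for v, a in surfaces) / cabin_volume_m3

# Two illustrative surfaces in a 20 m^3 cabin section.
rate = deposition_rate_per_hour([(0.1, 10.0), (0.2, 5.0)], 20.0)
```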

  6. Electric Power Distribution System Model Simplification Using Segment Substitution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). Finally, in contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  7. A quasi-Lagrangian finite element method for the Navier-Stokes equations in a time-dependent domain

    NASA Astrophysics Data System (ADS)

    Lozovskiy, Alexander; Olshanskii, Maxim A.; Vassilevski, Yuri V.

    2018-05-01

    The paper develops a finite element method for the Navier-Stokes equations of incompressible viscous fluid in a time-dependent domain. The method builds on a quasi-Lagrangian formulation of the problem. The paper provides stability and convergence analysis of the fully discrete (finite-difference in time and finite-element in space) method. The analysis does not assume any CFL time-step restriction; rather, it needs mild conditions of the form $\Delta t \le C$, where $C$ depends only on problem data, and $h^{2m_u+2} \le c\,\Delta t$, where $m_u$ is the polynomial degree of the velocity finite element space. Both conditions result from a numerical treatment of practically important non-homogeneous boundary conditions. The theoretically predicted convergence rate is confirmed by a set of numerical experiments. Further, we apply the method to simulate a flow in a simplified model of the left ventricle of a human heart, where the ventricle wall dynamics is reconstructed from a sequence of contrast-enhanced Computed Tomography images.

  8. Migration without migraines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lines, L.; Burton, A.; Lu, H.X.

    Accurate velocity models are a necessity for reliable migration results. Velocity analysis generally involves the use of methods such as normal moveout analysis (NMO), seismic traveltime tomography, or iterative prestack migration. These techniques can be effective, and each has its own advantage or disadvantage. Conventional NMO methods are relatively inexpensive but basically require simplifying assumptions about geology. Tomography is a more general method but requires traveltime interpretation of prestack data. Iterative prestack depth migration is very general but is computationally expensive. In some cases, there is the opportunity to estimate vertical velocities by use of well information. The well information can be used to optimize poststack migrations, thereby eliminating some of the time and expense of iterative prestack migration. The optimized poststack migration procedure defined here computes the velocity model which minimizes the depth differences between seismic images and formation depths at the well by using a least-squares inversion method. The optimization methods described in this paper will hopefully produce "migrations without migraines."

  9. Outline of cost-benefit analysis and a case study

    NASA Technical Reports Server (NTRS)

    Kellizy, A.

    1978-01-01

    The methodology of cost-benefit analysis is reviewed and a case study involving solar cell technology is presented. Emphasis is placed on simplifying the technique in order to permit a technical person not trained in economics to undertake a cost-benefit study comparing alternative approaches to a given problem. The role of economic analysis in management decision making is discussed. In simplifying the methodology it was necessary to restrict the scope and applicability of this report. Additional considerations and constraints are outlined. Examples are worked out to demonstrate the principles. A computer program which performs the computational aspects appears in the appendix.
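
    The computational core of such a cost-benefit comparison is discounting future cash flows to present value. A minimal sketch of net present value and a benefit-cost ratio (an illustration of the standard technique, not the report's computer program):

```python
def npv(rate, cashflows):
    """Net present value of yearly cash flows; cashflows[0] occurs
    now, cashflows[t] occurs t years from now."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def benefit_cost_ratio(rate, benefits, costs):
    """Ratio of discounted benefits to discounted costs; a value
    above 1 favours the alternative under consideration."""
    return npv(rate, benefits) / npv(rate, costs)

# Illustrative: with no discounting the NPV is just the sum; at a
# 100% rate a payment of 2 next year is worth 1 today.
undiscounted = npv(0.0, [1.0, 2.0, 3.0])
discounted = npv(1.0, [0.0, 2.0])
```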

  10. Multispectral assessment of skin malformations using a modified video-microscope

    NASA Astrophysics Data System (ADS)

    Bekina, A.; Diebele, I.; Rubins, U.; Zaharans, J.; Derjabo, A.; Spigulis, J.

    2012-10-01

    A simplified method is proposed for alternative clinical diagnostics of skin malformations. A modified digital microscope, additionally equipped with a four-colour LED (450 nm, 545 nm, 660 nm and 940 nm) sequential illumination system, was applied for assessment of cancerous skin lesions and cutaneous inflammations. Multispectral image analysis was performed to map distributions of the skin erythema index, bilirubin index, melanoma/nevus differentiation parameter, and a fluorescence indicator. The skin malformation monitoring has shown that it is possible to differentiate melanoma from other pathologies.

  11. Asymptotic approximations for pure bending of thin cylindrical shells

    NASA Astrophysics Data System (ADS)

    Coman, Ciprian D.

    2017-08-01

    A simplified partial wrinkling scenario for in-plane bending of thin cylindrical shells is explored by using several asymptotic strategies. The eighth-order boundary eigenvalue problem investigated here originates in the Donnell-Mushtari-Vlasov shallow shell theory coupled with a linear membrane pre-bifurcation state. It is shown that the corresponding neutral stability curve is amenable to a detailed asymptotic analysis based on the method of multiple scales. This is further complemented by an alternative WKB approximation that provides comparable information with significantly less effort.

  12. Explicit solutions of a gravity-induced film flow along a convectively heated vertical wall.

    PubMed

    Raees, Ammarah; Xu, Hang

    2013-01-01

    The gravity-driven film flow has been analyzed along a vertical wall subjected to a convective boundary condition. The Boussinesq approximation is applied to simplify the buoyancy term, and similarity transformations are used on the mathematical model of the problem under consideration to obtain a set of coupled ordinary differential equations. The reduced equations are then solved explicitly by using the homotopy analysis method (HAM). The resulting solutions are investigated for heat transfer effects on the velocity and temperature profiles.

  13. Analysis of the Characteristics of a Rotary Stepper Micromotor

    NASA Astrophysics Data System (ADS)

    Sone, Junji; Mizuma, Toshinari; Masunaga, Masakazu; Mochizuki, Shunsuke; Sarajic, Edin; Yamahata, Christophe; Fujita, Hiroyuki

    A 3-phase electrostatic stepper micromotor was developed. To improve its performance for actual use, we have conducted numerical simulation to optimize the design. An improved simulation method is needed for calculation of various cases. To conduct circuit simulation of this micromotor, its structure is simplified, and a function for computing the force excited by the electrostatic field is added to the circuit simulator. We achieved a reasonably accurate simulation. We also considered an optimal drive waveform to achieve low-voltage operation.

  14. Simplification of an MCNP model designed for dose rate estimation

    NASA Astrophysics Data System (ADS)

    Laptev, Alexander; Perry, Robert

    2017-09-01

    A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.

  15. Fluorescence recovery after photo-bleaching as a method to determine local diffusion coefficient in the stratum corneum.

    PubMed

    Anissimov, Yuri G; Zhao, Xin; Roberts, Michael S; Zvyagin, Andrei V

    2012-10-01

    Fluorescence recovery after photo-bleaching experiments were performed in human stratum corneum in vitro. Fluorescence multiphoton tomography was used, which allowed the dimensions of the photobleached volume to be at the micron scale and located fully within the lipid phase of the stratum corneum. Analysis of the fluorescence recovery data with simplified mathematical models yielded a diffusion coefficient of the small molecular weight organic fluorescent dye Rhodamine B in the stratum corneum lipid phase of about (3–6) × 10⁻⁹ cm² s⁻¹. It was concluded that the presented method can be used for detailed analysis of localised diffusion coefficients in the stratum corneum phases for various fluorescent probes. Copyright © 2012 Elsevier B.V. All rights reserved.
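
    A widely used simplified FRAP estimate (Axelrod-type, for a uniform circular bleach spot and two-dimensional diffusion) relates the diffusion coefficient to the spot radius and the recovery half-time. The sketch below uses that textbook approximation as an assumption; it is not necessarily the mathematical model the paper employed:

```python
def frap_diffusion_coefficient(spot_radius_cm, t_half_s, gamma=0.88):
    """Simplified 2-D FRAP estimate: D = gamma * w^2 / (4 * t_half).

    spot_radius_cm -- radius w of the bleached spot (cm)
    t_half_s       -- half-time of fluorescence recovery (s)
    gamma          -- geometry factor, ~0.88 for a uniform circular
                      spot (an assumed textbook value)
    """
    return gamma * spot_radius_cm ** 2 / (4.0 * t_half_s)

# Illustrative micron-scale numbers: a 2 um spot (2e-4 cm) that
# half-recovers in 2 s gives D on the order of 1e-9 cm^2/s.
d_estimate = frap_diffusion_coefficient(2.0e-4, 2.0)
```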

  16. Simplified RP-HPLC method for multi-residue analysis of abamectin, emamectin benzoate and ivermectin in rice.

    PubMed

    Xie, Xianchuan; Gong, Shu; Wang, Xiaorong; Wu, Yinxing; Zhao, Li

    2011-01-01

    A rapid, reliable and sensitive reverse-phase high-performance liquid chromatography method with fluorescence detection (RP-FLD-HPLC) was developed and validated for simultaneous analysis of abamectin (ABA), emamectin benzoate (EMA) and ivermectin (IVM) residues in rice. After extraction with acetonitrile/water (2:1) with sonication, the avermectin (AVM) residues were directly derivatised with N-methylimidazole (NMIM) and trifluoroacetic anhydride (TFAA) and then analysed by RP-FLD-HPLC. A good linear relationship (r^2 > 0.99) was obtained for the three AVMs over the range 0.01-5 µg mL^-1, i.e. 0.01-5.0 µg g^-1 in the rice matrix. The limits of detection (LOD) and quantification (LOQ) were between 0.001 and 0.002 µg g^-1 and between 0.004 and 0.006 µg g^-1, respectively. Recoveries were from 81.9% to 105.4%, with precision (RSD) below 12.4%. The proposed method was successfully applied to routine analysis of AVM residues in rice.
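    The validation figures quoted above (calibration linearity, LOD, LOQ) follow standard computations. A minimal sketch, assuming an ordinary least-squares calibration line and the common ICH-style convention LOD = 3.3σ/S, LOQ = 10σ/S (the abstract does not state which convention the authors actually used; the data below are illustrative):

```python
def linear_fit(conc, response):
    """Least-squares calibration line; returns slope, intercept and r^2."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(response) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, response))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(conc, response))
    ss_tot = sum((y - my) ** 2 for y in response)
    return slope, intercept, 1.0 - ss_res / ss_tot

def lod_loq(residual_sd, slope):
    """ICH-style limits: LOD = 3.3*sigma/S, LOQ = 10*sigma/S."""
    return 3.3 * residual_sd / slope, 10.0 * residual_sd / slope
```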

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daley, P F

    The overall objective of this project is the continued development, installation, and testing of continuous water sampling and analysis technologies for application to on-site monitoring of groundwater treatment systems and remediation sites. In a previous project, an on-line analytical system (OLAS) for multistream water sampling was installed at the Fort Ord Operable Unit 2 Groundwater Treatment System, with the objective of developing a simplified analytical method for detection of Compounds of Concern at that plant, and continuous sampling of up to twelve locations in the treatment system, from raw influent waters to treated effluent. Earlier implementations of the water sampling and processing system (Analytical Sampling and Analysis Platform, ASAP; A+RT, Milpitas, CA) depended on off-line integrators that produced paper plots of chromatograms and sent summary tables to a host computer for archiving. We developed a basic LabVIEW (National Instruments, Inc., Austin, TX) based gas chromatography control and data acquisition system that was the foundation for further development and integration with the ASAP system. Advantages of this integration include electronic archiving of all raw chromatographic data, and a flexible programming environment to support development of improved ASAP operation and automated reporting. The initial goals of integrating the preexisting LabVIEW chromatography control system with the ASAP, and demonstration of a simplified, site-specific analytical method, were successfully achieved. However, although the principal objective of this system was assembly of an analytical system that would give plant operators an up-to-the-minute view of the plant's performance, several obstacles remained. Data reduction with the base LabVIEW system was limited to peak detection and simple tabular output, patterned after commercial chromatography integrators, with compound retention times and peak areas.
Preparation of calibration curves, method detection limit estimates and trend plotting were performed with spreadsheets and statistics software. Moreover, the analytical method developed was very limited in compound coverage, and unable to closely mirror the standard analytical methods promulgated by the EPA. To address these deficiencies, during this award the original equipment was operated at the OU 2-GTS to further evaluate the use of columns, commercial standard blends and other components to broaden the compound coverage of the chromatography system. A second-generation ASAP was designed and built to replace the original system at the OU 2-GTS, including provision for introduction of internal standard compounds and surrogates into each sample analyzed. An enhanced, LabVIEW-based chromatogram analysis application was written that manages and archives chemical standards information and provides a basis for NIST traceability for all analyses. Within this same package, all compound calibration response curves are managed, and report formats that simplify trend analysis were incorporated. Test results focus on operation of the original system at the OU 1 Integrated Chemical and Flow Monitoring System, at the OU 1 Fire Drill Area remediation site.

  18. Method for measuring anterior chamber volume by image analysis

    NASA Astrophysics Data System (ADS)

    Zhai, Gaoshou; Zhang, Junhong; Wang, Ruichang; Wang, Bingsong; Wang, Ningli

    2007-12-01

    Anterior chamber volume (ACV) is very important for an oculist making a rational pathological diagnosis for patients with optic diseases such as glaucoma, yet it is difficult to measure accurately. In this paper, a method is devised to measure anterior chamber volumes based on JPEG-formatted image files that have been converted from medical images acquired with the anterior-chamber optical coherence tomographer (AC-OCT) and its corresponding image-processing software. The algorithms for image analysis and ACV calculation are implemented in VC++, and a series of anterior chamber images of typical patients are analyzed; the calculated anterior chamber volumes are verified to be in accord with clinical observation. The results show that the measurement method is effective and feasible and has the potential to improve the accuracy of ACV calculation. Meanwhile, measures should be taken to simplify the manual preprocessing of the images.
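    Although the paper's VC++ algorithm is not reproduced here, a volume is commonly estimated from tomographic cross-sections by summing the segmented area of each slice times the slice spacing. A minimal sketch under that assumption, with hypothetical inputs (binary masks from the segmented AC-OCT slices):

```python
def chamber_volume(masks, pixel_area_mm2, slice_spacing_mm):
    """Estimate a volume (mm^3) from binary cross-section masks.

    masks: list of slices, each a list of rows of 0/1 pixel labels.
    """
    total = 0.0
    for mask in masks:
        # Slice area = number of segmented pixels times the area of one pixel.
        area = sum(sum(row) for row in mask) * pixel_area_mm2
        total += area * slice_spacing_mm
    return total
```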

  19. Inclusion of Structural Flexibility in Design Load Analysis for Wave Energy Converters: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Yi; Yu, Yi-Hsiang; van Rij, Jennifer A

    2017-08-14

    Hydroelastic interactions, caused by ocean wave loading on wave energy devices with deformable structures, are studied in the time domain. A midfidelity, hybrid modeling approach of rigid-body and flexible-body dynamics is developed and implemented in an open-source simulation tool for wave energy converters (WEC-Sim) to simulate the dynamic responses of wave energy converter component structural deformations under wave loading. A generalized coordinate system, including degrees of freedom associated with rigid bodies, structural modes, and constraints connecting multiple bodies, is utilized. A simplified method of calculating stress loads and sectional bending moments is implemented, with the purpose of sizing and designing wave energy converters. Results calculated using the method presented are verified with those of high-fidelity fluid-structure interaction simulations, as well as low-fidelity, frequency-domain, boundary element method analysis.

  20. Improvements in soft gelatin capsule sample preparation for USP-based simethicone FTIR analysis.

    PubMed

    Hargis, Amy D; Whittall, Linda B

    2013-02-23

    Due to the absence of a significant chromophore, Simethicone raw material and finished product analysis is achieved using an FTIR-based method that quantifies the polydimethylsiloxane (PDMS) component of the active ingredient. The method can be found in the USP monographs for several dosage forms of Simethicone-containing pharmaceutical products. For soft gelatin capsules, the PDMS assay values determined using the procedure described in the USP method were variable (%RSDs from 2 to 9%) and often lower than expected based on raw material values. After investigation, it was determined that the extraction procedure used for sample preparation was causing loss of material to the container walls due to the hydrophobic nature of PDMS. Evaluation revealed that a simple dissolution of the gelatin capsule fill in toluene provided improved assay results (%RSDs ≤ 0.5%) as well as a simplified and rapid sample preparation. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. Development and Validation of a Simplified Renal Replacement Therapy Suitable for Prolonged Field Care in a Porcine (Sus scrofa) Model of Acute Kidney Injury

    DTIC Science & Technology

    2018-03-01

    of a Simplified Renal Replacement Therapy Suitable for Prolonged Field Care in a Porcine (Sus scrofa) Model of Acute Kidney Injury. PRINCIPAL...and methods, results - include tables/figures, and conclusions/applications.) Objectives/Background: Acute kidney injury (AKI) is a serious

  2. A Simplified Technique for Evaluating Human "CCR5" Genetic Polymorphism

    ERIC Educational Resources Information Center

    Falteisek, Lukáš; Cerný, Jan; Janštová, Vanda

    2013-01-01

    To involve students in thinking about the problem of AIDS (which is important in the view of nondecreasing infection rates), we established a practical lab using a simplified adaptation of Thomas's (2004) method to determine the polymorphism of HIV co-receptor CCR5 from students' own epithelial cells. CCR5 is a receptor involved in inflammatory…

  3. Environmental dynamics at orbital altitudes

    NASA Technical Reports Server (NTRS)

    Karr, G. R.

    1976-01-01

    The influence of real satellite aerodynamics on the determination of upper atmospheric density was investigated. A method of analysis of satellite drag data is presented which includes the effect of satellite lift and the variation in aerodynamic properties around the orbit. The studies indicate that satellite lift may be responsible for the observed orbit precession rather than a super-rotation of the upper atmosphere. The influence of simplifying assumptions concerning the aerodynamics of objects in falling-sphere analysis was evaluated and an improved method of analysis was developed. Wind tunnel data were used to develop more accurate drag coefficient relationships for studying altitudes between 80 and 120 km. The improved drag coefficient relationships revealed a considerable error in previous falling-sphere drag interpretation. These data were reanalyzed using the more accurate relationships. Theoretical investigations of the drag coefficient in the very low speed ratio region were also conducted.

  4. Parameterization of aquatic ecosystem functioning and its natural variation: Hierarchical Bayesian modelling of plankton food web dynamics

    NASA Astrophysics Data System (ADS)

    Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede

    2017-10-01

    Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.

  5. The financial viability of an SOFC cogeneration system in single-family dwellings

    NASA Astrophysics Data System (ADS)

    Alanne, Kari; Saari, Arto; Ugursal, V. Ismet; Good, Joel

    In the near future, fuel cell-based residential micro-CHP systems will compete with traditional methods of energy supply. A micro-CHP system may be considered viable if its incremental capital cost compared to its competitors equals the savings accumulated during a given period of time. A simplified model is developed in this study to estimate the operation of a residential solid oxide fuel cell (SOFC) system. A comparative assessment of the SOFC system vis-à-vis heating systems based on gas, oil and electricity is conducted using the simplified model for a single-family house located in Ottawa and Vancouver. The energy consumption of the house is estimated using the HOT2000 building simulation program. A financial analysis is carried out to evaluate the sensitivity of the maximum allowable capital cost with respect to system sizing, acceptable payback period, energy price and the electricity buyback strategy of an energy utility. Based on the financial analysis, small (1-2 kWe) SOFC systems seem to be feasible in the considered case. The present study also shows that an SOFC system is a particularly attractive alternative to heating systems based on oil and electric furnaces.
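    The viability criterion stated above, an incremental capital cost equal to the savings accumulated over a given period, is essentially a break-even present-value calculation. A minimal sketch, assuming constant annual savings and a fixed discount rate (not the study's HOT2000-based model; the names and figures are hypothetical):

```python
def max_allowable_extra_cost(annual_savings, payback_years, discount_rate):
    """Maximum incremental capital cost that breaks even within payback_years.

    With a zero discount rate this is simply savings * years; otherwise it is
    the present value of an annuity of `annual_savings` over `payback_years`.
    """
    if discount_rate == 0:
        return annual_savings * payback_years
    annuity_factor = (1 - (1 + discount_rate) ** -payback_years) / discount_rate
    return annual_savings * annuity_factor
```

For example, annual savings of 500 over a 10-year acceptable payback period allow an extra capital cost of 5000 undiscounted, and less once a positive discount rate is applied.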

  6. A simplified analytic form for generation of axisymmetric plasma boundaries

    DOE PAGES

    Luce, Timothy C.

    2017-02-23

    An improved method has been formulated for generating analytic boundary shapes as input for axisymmetric MHD equilibria. This method uses the family of superellipses as the basis function, as previously introduced. The improvements are a simplified notation, reduction of the number of simultaneous nonlinear equations to be solved, and the realization that not all combinations of input parameters admit a solution to the nonlinear constraint equations. The method tests for the existence of a self-consistent solution and, when no solution exists, it uses a deterministic method to find a nearby solution. As a result, examples of generation of boundaries, including tests with an equilibrium solver, are given.
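    The superellipse basis mentioned above satisfies |x/a|^n + |y/b|^n = 1. A minimal parametric sketch of boundary generation under that definition (a generic illustration, not Luce's full constrained construction):

```python
import math

def superellipse_boundary(a, b, n, num_points=100):
    """Points on the superellipse |x/a|**n + |y/b|**n = 1.

    Uses the standard parameterization x = a*sign(cos t)*|cos t|**(2/n),
    y = b*sign(sin t)*|sin t|**(2/n), which satisfies the curve equation
    identically since |cos t|**2 + |sin t|**2 = 1.
    """
    pts = []
    for i in range(num_points):
        t = 2 * math.pi * i / num_points
        c, s = math.cos(t), math.sin(t)
        x = a * math.copysign(abs(c) ** (2.0 / n), c)
        y = b * math.copysign(abs(s) ** (2.0 / n), s)
        pts.append((x, y))
    return pts
```

With n = 2 this reduces to an ordinary ellipse; larger n produces progressively squarer cross-sections.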

  7. A simplified application of the method of operators to the calculation of disturbed motions of an airplane

    NASA Technical Reports Server (NTRS)

    Jones, Robert T

    1937-01-01

    A simplified treatment of the application of Heaviside's operational methods to problems of airplane dynamics is given. Certain graphical methods and logarithmic formulas that lessen the amount of computation involved are explained. The problem representing a gust disturbance or control manipulation is taken up and it is pointed out that in certain cases arbitrary control manipulations may be dealt with as though they imposed specific constraints on the airplane, thus avoiding the necessity of any integration. The application of the calculations described in the text is illustrated by several examples chosen to show the use of the methods and the practicability of the graphical and logarithmic computations described.

  8. A simplified analytic form for generation of axisymmetric plasma boundaries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luce, Timothy C.

    An improved method has been formulated for generating analytic boundary shapes as input for axisymmetric MHD equilibria. This method uses the family of superellipses as the basis function, as previously introduced. The improvements are a simplified notation, reduction of the number of simultaneous nonlinear equations to be solved, and the realization that not all combinations of input parameters admit a solution to the nonlinear constraint equations. The method tests for the existence of a self-consistent solution and, when no solution exists, it uses a deterministic method to find a nearby solution. As a result, examples of generation of boundaries, including tests with an equilibrium solver, are given.

  9. Cloud field classification based upon high spatial resolution textural features. II - Simplified vector approaches

    NASA Technical Reports Server (NTRS)

    Chen, D. W.; Sengupta, S. K.; Welch, R. M.

    1989-01-01

    This paper compares the results of cloud-field classification derived from two simplified vector approaches, the Sum and Difference Histogram (SADH) and the Gray Level Difference Vector (GLDV), with the results produced by the Gray Level Cooccurrence Matrix (GLCM) approach described by Welch et al. (1988). It is shown that the SADH method produces accuracies equivalent to those obtained using the GLCM method, while the GLDV method fails to resolve error clusters. Compared to the GLCM method, the SADH method leads to a 31 percent saving in run time and a 50 percent saving in storage requirements, while the GLDV approach leads to a 40 percent saving in run time and an 87 percent saving in storage requirements.
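    The sum-and-difference idea can be sketched directly: for a displacement (dx, dy), accumulate histograms of g1 + g2 and g1 - g2 over all pixel pairs, and derive texture features such as contrast from them. A minimal sketch assuming non-negative displacements (names are illustrative, not the paper's implementation); note the two O(L) vectors replace an O(L²) co-occurrence matrix, which is the source of the storage savings quoted above:

```python
def sum_diff_histograms(image, dx, dy, levels=256):
    """Sum and difference histograms for one displacement (dx, dy >= 0)."""
    h_sum = [0] * (2 * levels - 1)   # sums range over 0 .. 2*(levels-1)
    h_diff = [0] * (2 * levels - 1)  # differences over -(levels-1) .. levels-1
    rows, cols = len(image), len(image[0])
    for r in range(rows - dy):
        for c in range(cols - dx):
            g1, g2 = image[r][c], image[r + dy][c + dx]
            h_sum[g1 + g2] += 1
            h_diff[g1 - g2 + levels - 1] += 1  # shift so index is non-negative
    return h_sum, h_diff

def sadh_contrast(h_diff, levels=256):
    """Texture contrast: mean squared gray-level difference."""
    total = sum(h_diff) or 1
    return sum(((j - (levels - 1)) ** 2) * n / total for j, n in enumerate(h_diff))
```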

  10. Simplified Parallel Domain Traversal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson III, David J

    2011-01-01

    Many data-intensive scientific analysis techniques require global domain traversal, which over the years has been a bottleneck for efficient parallelization across distributed-memory architectures. Inspired by MapReduce and other simplified parallel programming approaches, we have designed DStep, a flexible system that greatly simplifies efficient parallelization of domain traversal techniques at scale. In order to deliver both simplicity to users as well as scalability on HPC platforms, we introduce a novel two-tiered communication architecture for managing and exploiting asynchronous communication loads. We also integrate our design with advanced parallel I/O techniques that operate directly on native simulation output. We demonstrate DStep by performing teleconnection analysis across ensemble runs of terascale atmospheric CO2 and climate data, and we show scalability results on up to 65,536 IBM BlueGene/P cores.

  11. State space approach to mixed boundary value problems.

    NASA Technical Reports Server (NTRS)

    Chen, C. F.; Chen, M. M.

    1973-01-01

    A state-space procedure for the formulation and solution of mixed boundary value problems is established. This procedure is a natural extension of the method used in initial value problems; however, certain special theorems and rules must be developed. The scope of the applications of the approach includes beam, arch, and axisymmetric shell problems in structural analysis, boundary layer problems in fluid mechanics, and eigenvalue problems for deformable bodies. Many classical methods in these fields developed by Holzer, Prohl, Myklestad, Thomson, Love-Meissner, and others can be either simplified or unified under new light shed by the state-variable approach. A beam problem is included as an illustration.

  12. Numerical simulation of water evaporation inside vertical circular tubes

    NASA Astrophysics Data System (ADS)

    Ocłoń, Paweł; Nowak, Marzena; Majewski, Karol

    2013-10-01

    In this paper the results of a simplified numerical analysis of water evaporation in vertical circular tubes are presented. The heat transfer in the fluid domain (water or wet steam) and the solid domain (tube wall) is analyzed. For the fluid domain the temperature field is calculated by solving the energy equation using the Control Volume Method, and for the solid domain using the Finite Element Method. The heat transfer between the fluid and solid domains is conjugated using the value of the heat transfer coefficient from the evaporating liquid to the tube wall, which is determined using the analytical Steiner-Taborek correlation. The pressure changes in the fluid are computed using the Friedel model.

  13. DIGE compatible labelling of surface proteins on vital cells in vitro and in vivo.

    PubMed

    Mayrhofer, Corina; Krieger, Sigurd; Allmaier, Günter; Kerjaschki, Dontscho

    2006-01-01

    Efficient methods for profiling of the cell surface proteome are desirable to gain deeper insight into basic biological processes, to localise proteins and to uncover proteins differentially expressed in diseases. Here we present a strategy to target cell surface exposed proteins via fluorescence labelling using CyDye DIGE fluors. This method has been applied to human cell lines in vitro as well as to a complex biological system in vivo. It allows detection of fluorophore-tagged cell surface proteins and visualisation of the accessible proteome within a single 2-D gel, simplifying subsequent UV MALDI-MS analysis.

  14. Evolutionary image simplification for lung nodule classification with convolutional neural networks.

    PubMed

    Lückehe, Daniel; von Voigt, Gabriele

    2018-05-29

    Understanding decisions of deep learning techniques is important. Especially in the medical field, the reasons for a decision in a classification task are as crucial as the pure classification results. In this article, we propose a new approach to compute relevant parts of a medical image. Knowing the relevant parts makes it easier to understand decisions. In our approach, a convolutional neural network is employed to learn structures of images of lung nodules. Then, an evolutionary algorithm is applied to compute a simplified version of an unknown image based on the learned structures by the convolutional neural network. In the simplified version, irrelevant parts are removed from the original image. In the results, we show simplified images which allow the observer to focus on the relevant parts. In these images, more than 50% of the pixels are simplified. The simplified pixels do not change the meaning of the images based on the learned structures by the convolutional neural network. An experimental analysis shows the potential of the approach. Besides the examples of simplified images, we analyze the run time development. Simplified images make it easier to focus on relevant parts and to find reasons for a decision. The combination of an evolutionary algorithm employing a learned convolutional neural network is well suited for the simplification task. From a research perspective, it is interesting which areas of the images are simplified and which parts are taken as relevant.

  15. A novel simplified model for torsional vibration analysis of a series-parallel hybrid electric vehicle

    NASA Astrophysics Data System (ADS)

    Tang, Xiaolin; Yang, Wei; Hu, Xiaosong; Zhang, Dejiu

    2017-02-01

    In this study, based on our previous work, a novel simplified torsional vibration dynamic model is established to study the torsional vibration characteristics of a compound planetary hybrid propulsion system. The main frequencies of the hybrid driveline are determined. In contrast to vibration characteristics of the previous 16-degree of freedom model, the simplified model can be used to accurately describe the low-frequency vibration property of this hybrid powertrain. This study provides a basis for further vibration control of the hybrid powertrain during the process of engine start/stop.

  16. Innovative Techniques Simplify Vibration Analysis

    NASA Technical Reports Server (NTRS)

    2010-01-01

    In the early years of development, Marshall Space Flight Center engineers encountered challenges related to components in the space shuttle main engine. To assess the problems, they evaluated the effects of vibration and oscillation. To enhance the method of vibration signal analysis, Marshall awarded Small Business Innovation Research (SBIR) contracts to AI Signal Research, Inc. (ASRI), in Huntsville, Alabama. ASRI developed a software package called PC-SIGNAL that NASA now employs on a daily basis, and in 2009, the PKP-Module won Marshall's Software of the Year award. The technology is also used in many industries: aircraft and helicopter, rocket engine manufacturing, transportation, and nuclear power.

  17. New solitary wave and multiple soliton solutions for fifth order nonlinear evolution equation with time variable coefficients

    NASA Astrophysics Data System (ADS)

    Jaradat, H. M.; Syam, Muhammed; Jaradat, M. M. M.; Mustafa, Zead; Moman, S.

    2018-03-01

    In this paper, we investigate the multiple soliton solutions and multiple singular soliton solutions of a class of fifth-order nonlinear evolution equations with time-dependent coefficients using the simplified bilinear method, based on a transformation combined with the Hirota bilinear sense. In addition, we present an analysis of parameters such as the soliton amplitude and the characteristic line. Several equations in the literature, such as the Caudrey-Dodd-Gibbon and Sawada-Kotera equations, are special cases of the class we discuss. Comparisons with several methods in the literature, such as the Helmholtz solution of the inverse variational problem, the rational exponential function method, the tanh method, the homotopy perturbation method, the exp-function method, and the coth method, are made. From these comparisons, we conclude that the proposed method is efficient and our solutions are correct. It is worth mentioning that the proposed method can solve many physical problems.

  18. Improved method for the extraction and chromatographic analysis on a fused-core column of ellagitannins found in oak-aged wine.

    PubMed

    Navarro, María; Kontoudakis, Nikolaos; Canals, Joan Miquel; García-Romero, Esteban; Gómez-Alonso, Sergio; Zamora, Fernando; Hermosín-Gutiérrez, Isidro

    2017-07-01

    A new method for the analysis of ellagitannins observed in oak-aged wine is proposed, exhibiting interesting advantages with regard to previously reported analytical methods. The necessary extraction of ellagitannins from wine was simplified to a single step of solid phase extraction (SPE) using size exclusion chromatography with Sephadex LH-20 without the need for any previous SPE of phenolic compounds using reversed-phase materials. The quantitative recovery of wine ellagitannins requires a combined elution with methanol and ethyl acetate, especially for increasing the recovery of the less polar acutissimins. The chromatographic method was performed using a fused-core C18 column, thereby avoiding the coelution of main ellagitannins, such as vescalagin and roburin E. However, the very polar ellagitannins, namely, the roburins A, B and C, still partially coeluted, and their quantification was assisted by the MS detector. This methodology also enabled the analysis of free gallic and ellagic acids in the same chromatographic run. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Simplified transient isotachophoresis/capillary gel electrophoresis method for highly sensitive analysis of polymerase chain reaction samples on a microchip with laser-induced fluorescence detection.

    PubMed

    Liu, Dayu; Ou, Ziyou; Xu, Mingfei; Wang, Lihui

    2008-12-19

    We present a sensitive, simple and robust on-chip transient isotachophoresis/capillary gel electrophoresis (tITP/CGE) method for the analysis of polymerase chain reaction (PCR) samples. Using chloride ions in the PCR buffer and N-2-hydroxyethylpiperazine-N'-2-ethanesulfonic acid (HEPES) in the background electrolyte, respectively, as the leading and terminating electrolytes, the tITP preconcentration was coupled with CGE separation in a double-T shaped channel network. The tITP/CGE separation was carried out with a single running buffer. The separation process involved only two steps that were performed continuously with the sequential switching of four voltage outputs. The tITP/CGE method showed an analysis time and a separation efficiency comparable to those of standard CGE, while the signal intensity was enhanced by a factor of over 20. The limit of detection of the chip-based tITP/CGE method was estimated to be 1.1 ng/mL of DNA in 1x PCR buffer using confocal fluorescence detection following 473 nm laser excitation.

  20. New approaches for calculating Moran's index of spatial autocorrelation.

    PubMed

    Chen, Yanguang

    2013-01-01

    Spatial autocorrelation plays an important role in geographical analysis; however, there is still room for improvement of this method. The formula for Moran's index is complicated, and several basic problems remain to be solved. Therefore, I reconstruct its mathematical framework using mathematical derivation based on linear algebra and present four simple approaches to calculating Moran's index. Moran's scatterplot is improved, and new test methods are proposed. The relationship between the global Moran's index and Geary's coefficient is discussed from two different vantage points: spatial population and spatial sample. The sphere of applications for both Moran's index and Geary's coefficient is clarified and defined. One of the theoretical findings is that Moran's index is a characteristic parameter of spatial weight matrices, so the selection of weight functions is very significant for autocorrelation analysis of geographical systems. A case study of 29 Chinese cities in 2000 is employed to validate the innovative models and methods. This work is a methodological study, which simplifies the process of autocorrelation analysis. The results of this study lay the foundation for the scaling analysis of spatial autocorrelation.
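    The global Moran's index discussed above has the standard form I = (n/W) * sum_ij w_ij (x_i - xbar)(x_j - xbar) / sum_i (x_i - xbar)^2, with W the sum of all spatial weights. A minimal sketch of that textbook formula (not the paper's reconstructed linear-algebra approaches):

```python
def morans_i(values, weights):
    """Global Moran's I for values x and a spatial weight matrix w.

    weights is an n-by-n list of lists; W is the sum of all its entries.
    """
    n = len(values)
    mean = sum(values) / n
    dev = [x - mean for x in values]
    w_total = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * dev[i] * dev[j] for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / w_total) * (num / den)
```

For four sites on a line with values increasing monotonically and rook (chain) adjacency weights, the index is positive, reflecting positive spatial autocorrelation.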

  1. Simplification of a scoring system maintained overall accuracy but decreased the proportion classified as low risk.

    PubMed

    Sanders, Sharon; Flaws, Dylan; Than, Martin; Pickering, John W; Doust, Jenny; Glasziou, Paul

    2016-01-01

    Scoring systems are developed to assist clinicians in making a diagnosis. However, their uptake is often limited because they are cumbersome to use, requiring information on many predictors or complicated calculations. We examined whether, and how, simplifications affected the performance of a validated score for identifying adults with chest pain in an emergency department who have low risk of major adverse cardiac events. We simplified the Emergency Department Assessment of Chest pain Score (EDACS) by three methods: (1) giving equal weight to each predictor included in the score, (2) reducing the number of predictors, and (3) using both methods, giving equal weight to a reduced number of predictors. The diagnostic accuracy of the simplified scores was compared with the original score in the derivation (n = 1,974) and validation (n = 909) data sets. There was no difference in the overall accuracy of the simplified versions of the score compared with the original EDACS as measured by the area under the receiver operating characteristic curve (0.74 to 0.75 for simplified versions vs. 0.75 for the original score in the validation cohort). With score cut-offs set to maintain the sensitivity of the combination of score and tests (electrocardiogram and cardiac troponin) at a level acceptable to clinicians (99%), simplification reduced the proportion of patients classified as low risk from 50% with the original score to between 22% and 42%. Simplification of a clinical score resulted in similar overall accuracy but reduced the proportion classified as low risk and therefore eligible for early discharge compared with the original score. Whether the trade-off is acceptable will depend on the context in which the score is to be used. Developers of clinical scores should consider simplification as a method to increase uptake, but further studies are needed to determine the best methods of deriving and evaluating simplified scores. Copyright © 2016 Elsevier Inc. All rights reserved.
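    Simplification method (1) above, giving equal weight to each predictor, can be illustrated by scoring the same cases both ways and comparing discrimination with a rank-based AUC. The predictors and weights below are hypothetical stand-ins, not the actual EDACS variables:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the rank (Mann-Whitney) statistic."""
    wins = sum((p > q) + 0.5 * (p == q) for p in scores_pos for q in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def score(predictors, weights):
    """Weighted sum score; pass equal weights to get the simplified version."""
    return sum(w * x for w, x in zip(weights, predictors))
```

Comparing `auc` for the weighted and the equal-weight scores on the same derivation and validation sets mirrors the comparison reported above.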

  2. Two new oro-cervical radiographic indexes for chronological age estimation: a pilot study on an Italian population.

    PubMed

    Lajolo, Carlo; Giuliani, Michele; Cordaro, Massimo; Marigo, Luca; Marcelli, Antonio; Fiorillo, Fabio; Pascali, Vincenzo L; Oliva, Antonio

    2013-10-01

    Chronological age (CA) plays a fundamental role in forensic dentistry (i.e. personal identification and evaluation of imputability). Even though several studies have outlined the association between biological age and CA, there is still great variability in the estimates. The aim of this study was to determine the possible correlation between biological age and CA through the use of two new radiographic indexes (Oro-Cervical Radiographic Simplified Score - OCRSS and Oro-Cervical Radiographic Simplified Score Without Wisdom Teeth - OCRSSWWT) that are based on the oro-cervical area. Sixty Italian Caucasian individuals were divided into 3 groups according to their CA: Group 1: CAG 1 = 8-14 yr; Group 2: CAG 2 = 14-18 yr; Group 3: CAG 3 = 18-25 yr. Panorexes and standardised cephalograms were evaluated according to Demirjian's method for dental age calculation (DM), the Cervical Vertebral Maturation method for skeletal age calculation (CVMS) and Third Molar Development for age estimation (TMD). The stages of each method were simplified in order to generate OCRSS, which summarises the simplified scores of the three methods, and OCRSSWWT, which summarises the simplified DM and CVMS scores. There was a significant correlation between OCRSS and CAGs (Slope = 0.954, p < 0.001, R-squared = 0.79) and between OCRSSWWT and CAGs (Slope = 0.863, p < 0.001, R-squared = 0.776). Even though the indexes, especially OCRSS, appear to be highly reliable, growth variability among individuals can deeply influence the anatomical changes from childhood to adulthood. A multi-disciplinary approach that considers many different biomarkers could help make radiological age determination more reliable when it is used to predict CA. Copyright © 2013 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  3. Application of the principal fractional meta-trigonometric functions for the solution of linear commensurate-order time-invariant fractional differential equations.

    PubMed

    Lorenzo, C F; Hartley, T T; Malti, R

    2013-05-13

    A new and simplified method for the solution of linear constant coefficient fractional differential equations of any commensurate order is presented. The solutions are based on the R-function and on specialized Laplace transform pairs derived from the principal fractional meta-trigonometric functions. The new method simplifies the solution of such fractional differential equations and presents the solutions in the form of real functions as opposed to fractional complex exponential functions, and thus is directly applicable to real-world physics.
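    The R-function at the heart of this method admits a direct series evaluation. The sketch below implements the series form commonly given in Lorenzo and Hartley's work, R_{q,v}(a, t) = sum over n >= 0 of a^n t^((n+1)q - 1 - v) / Gamma((n+1)q - v); treat the exact definition as an assumption here. For q = 1, v = 0 the series collapses to exp(a*t), which serves as a sanity check.

    ```python
    import math

    def r_function(q, v, a, t, n_terms=80):
        """Truncated series for the Lorenzo-Hartley R-function (assumed form):
        R_{q,v}(a, t) = sum_{n>=0} a**n * t**((n+1)*q - 1 - v) / Gamma((n+1)*q - v)
        """
        total = 0.0
        for n in range(n_terms):
            total += (a ** n) * (t ** ((n + 1) * q - 1 - v)) / math.gamma((n + 1) * q - v)
        return total

    # Sanity check: for q = 1, v = 0 the series collapses to exp(a*t).
    print(r_function(1.0, 0.0, -0.5, 2.0), math.exp(-1.0))
    ```

    Truncation at n_terms suffices for moderate |a|·t^q; a production implementation would bound the series tail explicitly.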

  4. Transosseous fixation of pediatric displaced mandibular fractures with polyglactin resorbable suture--a simplified technique.

    PubMed

    Chandan, Sanjay; Halli, Rajshekhar; Joshi, Samir; Chhabaria, Gaurav; Setiya, Sneha

    2013-11-01

    Management of pediatric mandibular fractures presents a unique challenge to surgeons in terms of its numerous variations compared to adults. Both conservative and open methods have been advocated with their obvious limitations and complications. However, conservative modalities may not be possible in grossly displaced fractures, which necessitate the open method of fixation. We present a novel and simplified technique of transosseous fixation of displaced pediatric mandibular fractures with polyglactin resorbable suture, which provides adequate stability without any interference with tooth buds and which is easy to master.

  5. Approximate method for calculating free vibrations of a large-wind-turbine tower structure

    NASA Technical Reports Server (NTRS)

    Das, S. C.; Linscott, B. S.

    1977-01-01

    A set of ordinary differential equations was derived for a simplified structural dynamic lumped-mass model of a typical large-wind-turbine tower structure. Dunkerley's equation was used to arrive at a solution for the fundamental natural frequencies of the tower in bending and torsion. The ERDA-NASA 100-kW wind turbine tower structure was modeled, and the fundamental frequencies were determined by the simplified method described. The approximate fundamental natural frequencies for the tower agree within 18 percent with test data and with more detailed analytical predictions.
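    Dunkerley's equation, named in the abstract, estimates the fundamental frequency from the frequencies the structure would have with each lumped mass acting alone: 1/f1^2 ≈ sum_i 1/f_ii^2. A minimal sketch with made-up per-mass frequencies (not the ERDA-NASA tower's values):

    ```python
    import math

    def dunkerley_fundamental(partial_freqs_hz):
        """Dunkerley lower-bound estimate of the fundamental frequency:
        1/f1**2 ~ sum_i 1/f_ii**2, where f_ii is the natural frequency of
        the structure carrying mass i alone."""
        return 1.0 / math.sqrt(sum(1.0 / f ** 2 for f in partial_freqs_hz))

    # Hypothetical per-mass frequencies (Hz) for a 3-mass lumped tower model:
    print(dunkerley_fundamental([4.0, 6.0, 9.0]))
    ```

    Because Dunkerley's formula is a lower bound, the estimate always falls below the smallest per-mass frequency.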

  6. Simplified solution for point contact deformation between two elastic solids

    NASA Technical Reports Server (NTRS)

    Brewe, D. E.; Hamrock, B. J.

    1976-01-01

    A linear regression by the method of least squares is made on the geometric variables that occur in the equation for point-contact deformation. The ellipticity and the complete elliptic integrals of the first and second kind are expressed as functions of the x,y-plane principal radii. The ellipticity was varied from 1 (circular contact) to 10 (a configuration approaching line contact). These simplified equations enable one to calculate the point-contact deformation easily, to within 3 percent, without resorting to charts or numerical methods.
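    The simplified expressions in question are commonly attributed to this Brewe-Hamrock work: with the radius ratio alpha = Ry/Rx, the ellipticity is approximated by k ≈ alpha^(2/pi) and the complete elliptic integrals by E ≈ 1 + q/alpha and F ≈ pi/2 + q·ln(alpha), where q = pi/2 - 1. The sketch below combines these with the standard Hertzian elliptical-contact deflection formula; the exact coefficients and the example inputs are assumptions for illustration, not values taken from the paper.

    ```python
    import math

    def point_contact_deflection(w, rx, ry, e_prime):
        """Simplified elliptical point-contact deflection (Brewe-Hamrock style).
        Curve-fit approximations, assuming alpha = ry/rx >= 1:
          k ~ alpha**(2/pi)              # ellipticity
          e2 ~ 1 + q/alpha, q = pi/2-1   # complete elliptic integral, 2nd kind
          f1 ~ pi/2 + q*ln(alpha)        # complete elliptic integral, 1st kind
        Note both integrals reduce to pi/2 for circular contact (alpha = 1)."""
        alpha = ry / rx
        q = math.pi / 2 - 1
        k = alpha ** (2 / math.pi)
        e2 = 1 + q / alpha
        f1 = math.pi / 2 + q * math.log(alpha)
        r = 1 / (1 / rx + 1 / ry)              # effective radius
        return f1 * ((9 / (2 * e2 * r)) * (w / (math.pi * k * e_prime)) ** 2) ** (1 / 3)

    # Hypothetical steel contact: W = 100 N, Rx = 0.01 m, Ry = 0.05 m, E' = 2.2e11 Pa.
    print(point_contact_deflection(100.0, 0.01, 0.05, 2.2e11))
    ```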

  7. Simplified form of tinnitus retraining therapy in adults: a retrospective study

    PubMed Central

    Aazh, Hashir; Moore, Brian CJ; Glasberg, Brian R

    2008-01-01

    Background Since the first description of tinnitus retraining therapy (TRT), clinicians have modified and customised the method of TRT in order to suit their practice and their patients. A simplified form of TRT is used at Ealing Primary Care Trust Audiology Department. Simplified TRT is different from TRT in the type and (shorter) duration of the counseling but is similar to TRT in the application of sound therapy except for patients exhibiting tinnitus with no hearing loss and no decreased sound tolerance (wearable sound generators were not mandatory or recommended here, whereas they are for TRT). The main goal of this retrospective study was to assess the efficacy of simplified TRT. Methods Data were collected from a series of 42 consecutive patients who underwent simplified TRT for a period of 3 to 23 months. Perceived tinnitus handicap was measured by the Tinnitus Handicap Inventory (THI) and perceived tinnitus loudness, annoyance and the effect of tinnitus on life were assessed through the Visual Analog Scale (VAS). Results The mean THI and VAS scores were significantly decreased after 3 to 23 months of treatment. The mean decline of the THI score was 45 (SD = 22) and the difference between pre- and post-treatment scores was statistically significant. The mean decline of the VAS scores was 1.6 (SD = 2.1) for tinnitus loudness, 3.6 (SD = 2.6) for annoyance, and 3.9 (SD = 2.3) for effect on life. The differences between pre- and post-treatment VAS scores were statistically significant for tinnitus loudness, annoyance, and effect on life. The decline of THI scores was not significantly correlated with age and duration of tinnitus. Conclusion The results suggest that benefit may be obtained from a substantially simplified form of TRT. PMID:18980672
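    The reported significance of the THI decline can be reproduced from the summary statistics alone (mean decline 45, SD 22, n = 42), assuming a one-sample t test on the pre-post differences; the choice of test is our assumption, not stated in the abstract.

    ```python
    import math
    from scipy import stats

    def t_from_summary(mean_diff, sd_diff, n):
        """Paired t statistic and two-sided p-value from summary statistics
        of the pre-post differences."""
        t = mean_diff / (sd_diff / math.sqrt(n))
        p = 2 * stats.t.sf(abs(t), df=n - 1)
        return t, p

    # Abstract's reported THI decline: mean 45, SD 22, 42 patients.
    t, p = t_from_summary(45.0, 22.0, 42)
    print(round(t, 2), p < 0.001)
    ```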

  8. Spectrum auto-correlation analysis and its application to fault diagnosis of rolling element bearings

    NASA Astrophysics Data System (ADS)

    Ming, A. B.; Qin, Z. Y.; Zhang, W.; Chu, F. L.

    2013-12-01

    Bearing failure is one of the most common causes of machine breakdowns and accidents. The fault diagnosis of rolling element bearings is therefore of great significance to the safe and efficient operation of machines, owing to its fault indication and accident prevention capability in engineering applications. Based on the orthogonal projection theory, a novel method is proposed in this paper to extract the fault characteristic frequency for the incipient fault diagnosis of rolling element bearings. With the capability of exposing the oscillation frequency of the signal energy, the proposed method is a generalized form of the squared envelope analysis and is named spectral auto-correlation analysis (SACA). SACA is also a simplified form of cyclostationary analysis and can be carried out iteratively in applications. Simulations and experiments are used to evaluate the efficiency of the proposed method. Compared with the traditional envelope analysis and the squared envelope analysis, the result of SACA is more legible because the harmonic amplitudes of the fault characteristic frequency are more prominent, and SACA with proper iteration further enhances the fault features.
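    The squared envelope analysis that SACA generalizes can be sketched as follows: take the analytic signal, square its magnitude, and inspect the spectrum of the result, which exposes the oscillation frequency of the signal energy. This is only the baseline method, not the authors' SACA, and the 100 Hz fault frequency in the synthetic signal is an arbitrary choice.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def squared_envelope_spectrum(x, fs):
        """Amplitude spectrum of the squared envelope |analytic(x)|**2,
        which exposes the modulation (fault characteristic) frequency."""
        env2 = np.abs(hilbert(x)) ** 2
        env2 = env2 - env2.mean()                 # drop the DC term
        spec = np.abs(np.fft.rfft(env2)) / len(x)
        freqs = np.fft.rfftfreq(len(x), 1 / fs)
        return freqs, spec

    # Synthetic fault signal: 3 kHz carrier, amplitude-modulated at a
    # hypothetical 100 Hz fault characteristic frequency.
    fs = 20000
    t = np.arange(fs) / fs
    x = (1 + 0.8 * np.cos(2 * np.pi * 100 * t)) * np.cos(2 * np.pi * 3000 * t)
    freqs, spec = squared_envelope_spectrum(x, fs)
    print(freqs[np.argmax(spec)])   # peak at the modulation frequency
    ```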

  9. Assessment of railway wagon suspension characteristics

    NASA Astrophysics Data System (ADS)

    Soukup, Josef; Skočilas, Jan; Skočilasová, Blanka

    2017-05-01

    The article deals with the assessment of railway wagon suspension characteristics. The essential characteristics of a suspension are represented by the stiffness constants of the equivalent springs and the eigenfrequencies of the oscillating movements about the main central inertia axes of the vehicle. A prerequisite for the experimental determination of these characteristics is knowledge of the position of the centre of gravity and of the main central moments of inertia of the vehicle frame. The vehicle frame performs a general spatial movement when the vehicle moves. An analysis of the frame movement generally starts from Euler's equations, which are commonly used for the description of spherical movement. This solution is difficult, and it can be simplified by applying specific assumptions. Solutions for the eigenfrequencies and for the suspension stiffness are presented in the article and applied to railway and road vehicles under the simplifying conditions. A new method for assessing these characteristics is described.

  10. Milrinone therapeutic drug monitoring in a pediatric population: Development and validation of a quantitative liquid chromatography-tandem mass spectrometry method.

    PubMed

    Raizman, Joshua E; Taylor, Katherine; Parshuram, Christopher; Colantonio, David A

    2017-05-01

    Milrinone is a potent selective phosphodiesterase type III inhibitor which stimulates myocardial function and improves myocardial relaxation. Although therapeutic monitoring is crucial to maintaining therapeutic outcome, little data is available. A proof-of-principle study has been initiated in our institution to evaluate the clinical impact of optimizing milrinone dosing through therapeutic drug monitoring (TDM) in children following cardiac surgery. We developed a robust LC-MS/MS method to quantify milrinone in serum from pediatric patients in real time. A liquid-liquid extraction procedure was used to prepare samples for analysis prior to measurement by LC-MS/MS. Performance characteristics, such as linearity, limit of quantitation (LOQ) and precision, were assessed. Patient samples were acquired post-surgery and analyzed to determine the concentration-time profile of the drug as well as to track turn-around times. Within-day precision was <8.3% across 3 levels of QC. Between-day precision was <12%. The method was linear from 50 to 800 μg/l; the lower limit of quantification was 22 μg/l. Comparison with another LC-MS/MS method showed good agreement. Using this simplified method, turnaround times within 3-6 h were achievable, and patient drug profiles demonstrated that some milrinone levels were either sub-therapeutic or in the toxic range, highlighting the importance of milrinone TDM. This simplified and quick method proved to be analytically robust and able to provide therapeutic monitoring of milrinone in real time in patients post-cardiac surgery. Copyright © 2017. Published by Elsevier B.V.

  11. Simplified model of mean double step (MDS) in human body movement

    NASA Astrophysics Data System (ADS)

    Dusza, Jacek J.; Wawrzyniak, Zbigniew M.; Mugarra González, C. Fernando

    In this paper we present a simplified and useful model of human body movement based on the full gait cycle description, called the Mean Double Step (MDS). It enables the parameterization and simplification of human movement. Furthermore, it allows a description of the gait cycle by providing standardized estimators that transform the gait cycle into a periodical movement process. Moreover, the method of simplifying the MDS model and of compressing it is demonstrated. The simplification is achieved by reducing the number of bars of the spectrum and/or the number of samples describing the MDS, which reduces both the computational burden and the data-storage requirements. Our MDS model, which is applicable to the gait cycle method for examining patients, is non-invasive and provides the additional advantage of featuring a functional characterization of the relative or absolute movement of any part of the body.

  12. Toward a Definition of the Engineering Method.

    ERIC Educational Resources Information Center

    Koen, Billy V.

    1988-01-01

    Describes a preliminary definition of engineering method as well as a definition and examples of engineering heuristics. After discussing some alternative definitions of the engineering method, a simplified definition of the engineering method is suggested. (YP)

  13. An Investigation to Determine if Higher Speeds are Obtained with the Diamond Jubilee Gregg Shorthand Method.

    ERIC Educational Resources Information Center

    Starbuck, Ethel

    The purpose of the study was to determine whether higher shorthand speeds were achieved by high school students in a 1-year shorthand course through the use of Simplified Gregg Shorthand or through the use of Diamond Jubilee (DJ) Gregg Shorthand. The control group consisted of 75 students enrolled in Simplified Shorthand during the years…

  14. Nonstandard and Higher-Order Finite-Difference Methods for Electromagnetics

    DTIC Science & Technology

    2009-10-26

    Excerpts from the report describe a simplified fuselage model filled with 90 passengers; an expanded polystyrene passenger support system, made from sheets of 1-inch-thick expanded polystyrene to keep the passengers in their designated locations and upright; and measured S11 of the exterior antenna of the simplified fuselage.

  15. Vibration analysis of the maglev guideway with the moving load

    NASA Astrophysics Data System (ADS)

    Wang, H. P.; Li, J.; Zhang, K.

    2007-09-01

    The response of the guideway induced by a moving maglev vehicle is investigated in this paper. The maglev vehicle is simplified as an evenly distributed force acting on the guideway at constant speed. Following the experimental line, the rail-sleeper-bridge guideway structure is simplified as a Bernoulli-Euler (B-E) beam resting on an evenly distributed spring layer over a simply supported B-E beam; thus, a double-deck model of the maglev guideway is constructed which more accurately reflects the dynamic characteristics of the experimental line. The natural frequencies and modes are deduced from the theoretical model. The relationship between the structural parameters and the natural frequencies is explored by numerical calculation, and the possibility of suppressing the vehicle-guideway interaction by adjusting the structural parameters is also discussed. Using the normal coordinate transformation, the coupled differential equations of motion of the maglev guideway are converted into a set of uncoupled equations, and closed-form solutions for the response of the guideway subjected to the moving load are derived. It is noted that the moving load alone would not induce vehicle-guideway interaction oscillation. The analysis of the guideway impact factor implies that at some positions of the guideway the deflection may decrease as the speed of the load increases, and that several extreme values of the guideway displacement appear at different speeds, the critical speeds differing with the position considered. The final numerical simulation verifies these conclusions.
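    For the simply supported Bernoulli-Euler beam that forms the lower deck of such a model, the natural frequencies follow the textbook formula omega_n = (n*pi/L)**2 * sqrt(EI/(rho*A)). A minimal sketch with hypothetical girder properties (not the experimental line's values):

    ```python
    import math

    def beam_natural_freqs_hz(ei, rho_a, length, n_modes=3):
        """Natural frequencies (Hz) of a simply supported Bernoulli-Euler beam:
        omega_n = (n*pi/L)**2 * sqrt(EI / (rho*A)),  f_n = omega_n / (2*pi)."""
        return [((n * math.pi / length) ** 2) * math.sqrt(ei / rho_a) / (2 * math.pi)
                for n in range(1, n_modes + 1)]

    # Hypothetical guideway girder: EI = 2.0e10 N*m^2, rho*A = 5.0e3 kg/m, L = 25 m.
    print(beam_natural_freqs_hz(2.0e10, 5.0e3, 25.0))
    ```

    The modal frequencies scale as 1:4:9, which is why stiffening or shortening a span shifts the whole frequency ladder together.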

  16. The suitability of the simplified method of the analysis of coffee infusions on the content of Ca, Cu, Fe, Mg, Mn and Zn and the study of the effect of preparation conditions on the leachability of elements into the coffee brew.

    PubMed

    Stelmach, Ewelina; Pohl, Pawel; Szymczycha-Madeja, Anna

    2013-12-01

    A fast and straightforward method of the analysis of coffee infusions was developed for measurements of total concentrations of Ca, Cu, Fe, Mg, Mn and Zn by flame atomic absorption spectrometry. Its validity was proved by the analysis of spiked samples; recoveries of added metals were found to be within 98-104% while the precision was better than 4%. The method devised was used for the analysis of re-distilled water infusions of six popular ground coffees available in the Polish market. Using the mud coffee preparation it was established that percentages of metals leached in these conditions varied a lot among analysed coffees, especially for Ca (14-42%), Mg (6-25%) and Zn (1-24%). For remaining metals, the highest extractabilities were assessed for Mn (30-52%) while the lowest for Fe (4-16%) and Cu (2-12%). In addition, it was found that the water type and the coffee brewing preparation method influence the concentration of studied metals in coffee infusions the most. Copyright © 2013 Elsevier Ltd. All rights reserved.
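    The spike-recovery validation mentioned in the abstract is a one-line calculation: recovery (%) = (spiked result - unspiked result) / amount added x 100. A sketch with made-up concentrations:

    ```python
    def spike_recovery_pct(measured_spiked, measured_unspiked, added):
        """Spike-recovery check used to validate an analytical method:
        recovery (%) = (spiked - unspiked) / added * 100. The abstract
        reports 98-104% for Ca, Cu, Fe, Mg, Mn and Zn."""
        return (measured_spiked - measured_unspiked) / added * 100

    # Hypothetical concentrations (mg/l): spiked, unspiked, amount added.
    print(spike_recovery_pct(12.4, 7.5, 5.0))
    ```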

  17. A volumetric conformal mapping approach for clustering white matter fibers in the brain

    PubMed Central

    Gupta, Vikash; Prasad, Gautam; Thompson, Paul

    2017-01-01

    The human brain may be considered as a genus-0 shape, topologically equivalent to a sphere. Various methods have been used in the past to transform the brain surface to that of a sphere using the harmonic energy minimization methods developed for cortical surface matching. However, very few methods have studied volumetric parameterization of the brain using a spherical embedding. Volumetric parameterization is typically used for complicated geometric problems like shape matching, morphing and isogeometric analysis. Using conformal mapping techniques, we can establish a bijective mapping between the brain and the topologically equivalent sphere. Our hypothesis is that shape analysis problems are simplified when the shape is defined in an intrinsic coordinate system. Our goal is to establish such a coordinate system for the brain. The efficacy of the method is demonstrated with a white matter clustering problem. Initial results show promise for future investigation into this parameterization technique and its application to other problems in computational anatomy, such as registration and segmentation. PMID:29177252

  18. Aircraft wing weight build-up methodology with modification for materials and construction techniques

    NASA Technical Reports Server (NTRS)

    York, P.; Labell, R. W.

    1980-01-01

    An aircraft wing weight estimating method based on a component buildup technique is described. A simplified analytically derived beam model, modified by a regression analysis, is used to estimate the wing box weight, utilizing a data base of 50 actual airplane wing weights. Factors representing materials and methods of construction were derived and incorporated into the basic wing box equations. Weight penalties to the wing box for fuel, engines, landing gear, stores and fold or pivot are also included. Methods for estimating the weight of additional items (secondary structure, control surfaces) have the option of using details available at the design stage (i.e., wing box area, flap area) or default values based on actual aircraft from the data base.
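    The component build-up idea can be sketched in a few lines: an analytically derived wing-box weight is scaled by a regression-derived material/construction factor, and the discrete weight penalties are added on top. All names and numbers below are illustrative assumptions, not the report's equations or data.

    ```python
    def wing_weight_buildup(w_box_analytic, material_factor, penalties):
        """Component build-up in the spirit of the abstract: analytic wing-box
        weight, scaled by a regression-derived materials/construction factor,
        plus discrete penalties (fuel, landing gear, stores, fold/pivot)."""
        return material_factor * w_box_analytic + sum(penalties.values())

    # Hypothetical weights (lb); 0.92 stands in for a construction credit.
    penalties = {"fuel": 420.0, "landing_gear": 310.0, "stores": 95.0}
    print(wing_weight_buildup(5200.0, 0.92, penalties))
    ```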

  19. Qualification of computerized monitoring systems in a cell therapy facility compliant with the good manufacturing practices.

    PubMed

    Del Mazo-Barbara, Anna; Mirabel, Clémentine; Nieto, Valentín; Reyes, Blanca; García-López, Joan; Oliver-Vila, Irene; Vives, Joaquim

    2016-09-01

    Computerized systems (CS) are essential in the development and manufacture of cell-based medicines and must comply with good manufacturing practice, thus pushing academic developers to implement methods that are typically found within pharmaceutical industry environments. Qualitative and quantitative risk analyses were performed by Ishikawa and Failure Mode and Effects Analysis (FMEA), respectively. A process for qualification of a CS that keeps track of environmental conditions was designed and executed. The simplicity of the Ishikawa analysis made it possible to identify critical parameters that were subsequently quantified by FMEA, resulting in a list of tests included in the qualification protocols. The approach presented here contributes to simplifying and streamlining the qualification of CS in compliance with pharmaceutical quality standards.

  20. Easy-to-learn cardiopulmonary resuscitation training programme: a randomised controlled trial on laypeople’s resuscitation performance

    PubMed Central

    Ko, Rachel Jia Min; Lim, Swee Han; Wu, Vivien Xi; Leong, Tak Yam; Liaw, Sok Ying

    2018-01-01

    INTRODUCTION Simplifying the learning of cardiopulmonary resuscitation (CPR) is advocated to improve skill acquisition and retention. A simplified CPR training programme focusing on continuous chest compression, with a simple landmark tracing technique, was introduced to laypeople. The study aimed to examine the effectiveness of the simplified CPR training in improving lay rescuers’ CPR performance as compared to standard CPR. METHODS A total of 85 laypeople (aged 21–60 years) were recruited and randomly assigned to undertake either a two-hour simplified or standard CPR training session. They were tested two months after the training on a simulated cardiac arrest scenario. Participants’ performance on the sequence of CPR steps was observed and evaluated using a validated CPR algorithm checklist. The quality of chest compression and ventilation was assessed from the recording manikins. RESULTS The simplified CPR group performed significantly better on the CPR algorithm when compared to the standard CPR group (p < 0.01). No significant difference was found between the groups in time taken to initiate CPR. However, a significantly higher number of compressions and proportion of adequate compressions was demonstrated by the simplified group than the standard group (p < 0.01). Hands-off time was significantly shorter in the simplified CPR group than in the standard CPR group (p < 0.001). CONCLUSION Simplifying the learning of CPR by focusing on continuous chest compressions, with simple hand placement for chest compression, could lead to better acquisition and retention of CPR algorithms, and better quality of chest compressions than standard CPR. PMID:29167910

  1. Simplified paraboloid phase model-based phase tracker for demodulation of a single complex fringe.

    PubMed

    He, A; Deepan, B; Quan, C

    2017-09-01

    A regularized phase tracker (RPT) is an effective method for demodulation of single closed-fringe patterns. However, lengthy calculation time, specially designed scanning strategy, and sign-ambiguity problems caused by noise and saddle points reduce its effectiveness, especially for demodulating large and complex fringe patterns. In this paper, a simplified paraboloid phase model-based regularized phase tracker (SPRPT) is proposed. In SPRPT, first and second phase derivatives are pre-determined by the density-direction-combined method and discrete higher-order demodulation algorithm, respectively. Hence, cost function is effectively simplified to reduce the computation time significantly. Moreover, pre-determined phase derivatives improve the robustness of the demodulation of closed, complex fringe patterns. Thus, no specifically designed scanning strategy is needed; nevertheless, it is robust against the sign-ambiguity problem. The paraboloid phase model also assures better accuracy and robustness against noise. Both the simulated and experimental fringe patterns (obtained using electronic speckle pattern interferometry) are used to validate the proposed method, and a comparison of the proposed method with existing RPT methods is carried out. The simulation results show that the proposed method has achieved the highest accuracy with less computational time. The experimental result proves the robustness and the accuracy of the proposed method for demodulation of noisy fringe patterns and its feasibility for static and dynamic applications.

  2. A simplified focusing and astigmatism correction method for a scanning electron microscope

    NASA Astrophysics Data System (ADS)

    Lu, Yihua; Zhang, Xianmin; Li, Hai

    2018-01-01

    Defocus and astigmatism can lead to blurred images and poor resolution. This paper presents a simplified method for focusing and astigmatism correction of a scanning electron microscope (SEM). The method consists of two steps. In the first step, the fast Fourier transform (FFT) of the SEM image is performed and the FFT is subsequently processed with a threshold to achieve a suitable result. In the second step, the threshold FFT is used for ellipse fitting to determine the presence of defocus and astigmatism. The proposed method clearly provides the relationships between the defocus, the astigmatism and the direction of stretching of the FFT, and it can determine the astigmatism in a single image. Experimental studies are conducted to demonstrate the validity of the proposed method.
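    The two-step idea can be sketched as follows: threshold the centred FFT magnitude of the image, then fit an ellipse to the surviving points; an elongation near 1 indicates no astigmatism, while the long axis gives the stretching direction of the FFT. The threshold value and the moments-based ellipse fit below are our assumptions, not necessarily the paper's exact procedure.

    ```python
    import numpy as np

    def fft_astigmatism(img, thresh_ratio=0.2):
        """Threshold the centred |FFT| of an image, then fit an ellipse to the
        surviving points via second moments. Returns (elongation, angle_deg):
        elongation ~ 1 means an isotropic spectrum (no astigmatism)."""
        mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
        mag[mag.shape[0] // 2, mag.shape[1] // 2] = 0        # drop the DC spike
        ys, xs = np.nonzero(mag > thresh_ratio * mag.max())
        xs = xs - mag.shape[1] / 2
        ys = ys - mag.shape[0] / 2
        cov = np.cov(np.vstack([xs, ys]))
        evals, evecs = np.linalg.eigh(cov)                   # ascending order
        elongation = np.sqrt(evals[1] / evals[0])
        angle = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))
        return elongation, angle

    # Synthetic "astigmatic" image: blur noise more along y than along x by
    # applying an anisotropic low-pass filter in the Fourier domain.
    n = 128
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    rng = np.random.default_rng(0)
    f = np.fft.fftshift(np.fft.fft2(rng.normal(size=(n, n))))
    f *= np.exp(-(x ** 2 / 200.0 + y ** 2 / 20.0))
    img_blur = np.real(np.fft.ifft2(np.fft.ifftshift(f)))
    elong, ang = fft_astigmatism(img_blur)
    print(elong > 1.5)
    ```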

  3. A simplified fourwall interference assessment procedure for airfoil data obtained in the Langley 0.3-meter transonic cryogenic tunnel

    NASA Technical Reports Server (NTRS)

    Murthy, A. V.

    1987-01-01

    A simplified fourwall interference assessment method has been described, and a computer program developed to facilitate correction of the airfoil data obtained in the Langley 0.3-m Transonic Cryogenic Tunnel (TCT). The procedure adopted is to first apply a blockage correction due to sidewall boundary-layer effects by various methods. The sidewall boundary-layer corrected data are then used to calculate the top and bottom wall interference effects by the method of Capallier, Chevallier and Bouinol, using the measured wall pressure distribution and the model force coefficients. The interference corrections obtained by the present method have been compared with other methods and found to give good agreement for the experimental data obtained in the TCT with slotted top and bottom walls.

  4. simplified aerosol representations in global modeling

    NASA Astrophysics Data System (ADS)

    Kinne, Stefan; Peters, Karsten; Stevens, Bjorn; Rast, Sebastian; Schutgens, Nick; Stier, Philip

    2015-04-01

    The detailed treatment of aerosol in global modeling is complex and time-consuming. Thus simplified approaches are investigated, which prescribe 4D (space and time) distributions of aerosol optical properties and of aerosol microphysical properties. Aerosol optical properties are required to assess aerosol direct radiative effects and aerosol microphysical properties (in terms of their ability as aerosol nuclei to modify cloud droplet concentrations) are needed to address the indirect aerosol impact on cloud properties. Following the simplifying concept of the monthly gridded (1x1 lat/lon) aerosol climatology (MAC), new approaches are presented and evaluated against more detailed methods, including comparisons to detailed simulations with complex aerosol component modules.

  5. Data-driven and hybrid coastal morphological prediction methods for mesoscale forecasting

    NASA Astrophysics Data System (ADS)

    Reeve, Dominic E.; Karunarathna, Harshinie; Pan, Shunqi; Horrillo-Caraballo, Jose M.; Różyński, Grzegorz; Ranasinghe, Roshanka

    2016-03-01

    It is now common for coastal planning to anticipate changes anywhere from 70 to 100 years into the future. The process models developed and used for scheme design or for large-scale oceanography are currently inadequate for this task. This has prompted the development of a plethora of alternative methods. Some, such as reduced-complexity or hybrid models, simplify the governing equations, retaining processes that are considered to govern observed morphological behaviour. The computational cost of these models is low and they have proven effective in exploring morphodynamic trends and improving our understanding of mesoscale behaviour. One drawback is that there is no generally agreed set of principles on which to make the simplifying assumptions, and predictions can vary considerably between models. An alternative approach is data-driven techniques that are based entirely on analysis and extrapolation of observations. Here, we discuss the application of some of the better-known and emerging methods in this category to argue that, with the increasing availability of observations from coastal monitoring programmes and the development of more sophisticated statistical analysis techniques, data-driven models provide a valuable addition to the armoury of methods available for mesoscale prediction. The continuation of established monitoring programmes is paramount, and those that provide contemporaneous records of the driving forces and the shoreline response are the most valuable in this regard. In the second part of the paper we discuss some recent research that combines hybrid techniques with data analysis methods in order to synthesise a more consistent means of predicting mesoscale coastal morphological evolution. While encouraging in certain applications, a universally applicable approach has yet to be found. The route to linking different model types is highlighted as a major challenge and requires further research to establish its viability. We argue that key elements of a successful solution will need to account for dependencies between driving parameters (such as wave height and tide level), and be able to predict step changes in the configuration of coastal systems.

  6. Weather data for simplified energy calculation methods. Volume IV. United States: WYEC data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olsen, A.R.; Moreno, S.; Deringer, J.

    The objective of this report is to provide a source of weather data for direct use with a number of simplified energy calculation methods available today. Complete weather data for a number of cities in the United States are provided for use in the following methods: degree hour, modified degree hour, bin, modified bin, and variable degree day. This report contains sets of weather data for 23 cities using Weather Year for Energy Calculations (WYEC) source weather data. Considerable overlap is present in the cities (21) covered by both the TRY and WYEC data. The weather data for each city have been summarized in a number of ways to provide the differing levels of detail necessary for alternative simplified energy calculation methods. Weather variables summarized include dry bulb and wet bulb temperature, percent relative humidity, humidity ratio, wind speed, percent possible sunshine, percent diffuse solar radiation, total solar radiation on horizontal and vertical surfaces, and solar heat gain through standard DSA glass. Monthly and annual summaries, in some cases by time of day, are available. These summaries are produced in a series of nine computer-generated tables.
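    The degree-day method named above reduces, in its simplest form, to summing the positive differences between a base temperature and each day's mean temperature. A sketch with the common 18.3 °C (65 °F) base and made-up temperatures:

    ```python
    def heating_degree_days(daily_mean_temps_c, base_c=18.3):
        """Degree-day method in its simplest form: HDD is the sum over days
        of max(0, base - daily mean temperature). The 18.3 C base is the
        common 65 F default; a variable-base method would vary base_c."""
        return sum(max(0.0, base_c - t) for t in daily_mean_temps_c)

    # Made-up daily mean temperatures (deg C) for a four-day stretch.
    print(heating_degree_days([12.0, 15.5, 19.0, 8.2]))
    ```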

  7. Evaluation of Several Approximate Methods for Calculating the Symmetrical Bending-Moment Response of Flexible Airplanes to Isotropic Atmospheric Turbulence

    NASA Technical Reports Server (NTRS)

    Bennett, Floyd V.; Yntema, Robert T.

    1959-01-01

    Several approximate procedures for calculating the bending-moment response of flexible airplanes to continuous isotropic turbulence are presented and evaluated. The modal methods (the mode-displacement and force-summation methods) and a matrix method (segmented-wing method) are considered. These approximate procedures are applied to a simplified airplane for which an exact solution to the equation of motion can be obtained. The simplified airplane consists of a uniform beam with a concentrated fuselage mass at the center. Airplane motions are limited to vertical rigid-body translation and symmetrical wing bending deflections. Output power spectra of wing bending moments based on the exact transfer-function solutions are used as a basis for the evaluation of the approximate methods. It is shown that the force-summation and the matrix methods give satisfactory accuracy and that the mode-displacement method gives unsatisfactory accuracy.

  8. CADDIS Volume 4. Data Analysis: Basic Analyses

    EPA Pesticide Factsheets

    Use of statistical tests to determine if an observation is outside the normal range of expected values. Details of CART, regression analysis, use of quantile regression analysis, CART in causal analysis, simplifying or pruning resulting trees.

  9. New algorithms for solving third- and fifth-order two point boundary value problems based on nonsymmetric generalized Jacobi Petrov–Galerkin method

    PubMed Central

    Doha, E.H.; Abd-Elhameed, W.M.; Youssri, Y.H.

    2014-01-01

    Two families of certain nonsymmetric generalized Jacobi polynomials with negative integer indexes are employed for solving third- and fifth-order two-point boundary value problems governed by homogeneous and nonhomogeneous boundary conditions using a dual Petrov–Galerkin method. The idea behind our method is to use trial functions satisfying the underlying boundary conditions of the differential equations and test functions satisfying the dual boundary conditions. The resulting linear systems from the application of our method are specially structured and can be efficiently inverted. The use of generalized Jacobi polynomials simplifies the theoretical and numerical analysis of the method and also leads to accurate and efficient numerical algorithms. The presented numerical results indicate that the proposed numerical algorithms are reliable and very efficient. PMID:26425358

  10. Scombroid poisoning: a review.

    PubMed

    Hungerford, James M

    2010-08-15

    Scombroid poisoning, also called histamine fish poisoning, is an allergy-like form of food poisoning that continues to be a major problem in seafood safety. The exact role of histamine in scombroid poisoning is not straightforward. Deviations from the expected dose-response have led to the advancement of various possible mechanisms of toxicity, none of them proven. Histamine action levels are used in regulation until more is known about the mechanism of scombroid poisoning. Scombroid poisoning and histamine are correlated but complicated. Victims of scombroid poisoning respond well to antihistamines, and chemical analyses of fish implicated in scombroid poisoning generally reveal elevated levels of histamine. Scombroid poisoning is unique among the seafood toxins since it results from product mishandling rather than contamination from other trophic levels. Inadequate cooling following harvest promotes bacterial histamine production, and can result in outbreaks of scombroid poisoning. Fish with high levels of free histidine, the enzyme substrate converted to histamine by bacterial histidine decarboxylase, are those most often implicated in scombroid poisoning. Laboratory methods and screening methods for detecting histamine are available in abundance, but need to be compared and validated to harmonize testing. Successful field testing, including dockside or on-board testing needed to augment HACCP efforts will have to integrate rapid and simplified detection methods with simplified and rapid sampling and extraction. Otherwise, time-consuming sample preparation reduces the impact of gains in detection speed on the overall analysis time. Published by Elsevier Ltd.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuzmina, L.K.

The research deals with different aspects of mathematical modelling and the analysis of complex dynamic non-linear systems arising from applied problems in mechanics (in particular, gyrosystems, stabilization and orientation systems, and control systems of movable objects, including aviation and aerospace systems). The non-linearity, multi-connectedness and high dimensionality of the dynamical problems that occur in the initial full statement lead to the need to narrow the problem and to decompose the full model, while preserving its main properties and qualitative equivalence. The main aims of the investigation are the elaboration of regular methods for modelling problems in dynamics and the generalization of the reduction principle. Here, a uniform methodology based on Lyapunov's methods, founded by N.G. Chetayev, is developed. The objects of investigation are treated as systems of a singularly perturbed class, i.e. as systems with singular parametrical perturbations. This is the natural extension of the statements of N.G. Chetayev and P.A. Kuzmin on parametrical stability. The paper develops systematic procedures for the construction of correct simplified (comparison) models, determines the validity conditions of the transition, derives the corresponding estimates, and obtains regular algorithms at the engineering level. As applied to stabilization and orientation systems with gyroscopic controlling subsystems, these methods make it possible to build a hierarchical sequence of admissible simplified models and to determine the conditions of their correctness.

  12. Visualizing BPA by molecularly imprinted ratiometric fluorescence sensor based on dual emission nanoparticles.

    PubMed

    Lu, Hongzhi; Xu, Shoufang

    2017-06-15

Construction of a ratiometric fluorescent probe often involves tedious multistep preparation or a complicated coupling or chemical modification process. The emergence of dual emission fluorescent nanoparticles simplifies the construction process and avoids tedious chemical coupling. Herein, we report a facile strategy to prepare a ratiometric fluorescence molecularly imprinted sensor based on dual emission nanoparticles (d-NPs), comprising carbon dots and gold nanoclusters, for the detection of Bisphenol A (BPA). d-NPs emitting at 460 nm and 580 nm were first prepared by a seed-growth co-microwave method using gold nanoparticles as seeds and glucose as the precursor for carbon dots. When they were applied to the proposed ratiometric fluorescence molecularly imprinted sensor, the preparation process was simplified, the sensitivity of the sensor was improved, with a detection limit of 29 nM, and visual detection of BPA was feasible based on the distinguishable fluorescence color change. The feasibility of the developed method for real samples was successfully evaluated through the analysis of BPA in water samples, with satisfactory recoveries of 95.9-98.9%, and recoveries ranging from 92.6% to 98.6% in canned food samples. When detecting BPA in positive feeding bottles, the results agreed well with those obtained by an accredited method. The approach proposed in this work, preparing a ratiometric fluorescence molecularly imprinted sensor based on dual emission nanoparticles, proved to be a convenient, reliable and practical way to prepare highly sensitive and selective fluorescence sensors. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Compliance and stress sensitivity of spur gear teeth

    NASA Technical Reports Server (NTRS)

    Cornell, R. W.

    1983-01-01

The magnitude and variation of tooth pair compliance with load position affect the dynamics and loading significantly, and the tooth root stressing per load varies significantly with load position. Therefore, the recently developed time history, interactive, closed form solution for the dynamic tooth loads of both low and high contact ratio spur gears was expanded to include improved and simplified methods for calculating the compliance and stress sensitivity of three involute tooth forms as a function of load position. The compliance analysis incorporates an improved fillet/foundation model. The stress sensitivity analysis is a modified version of the Heywood method, with an improvement in the magnitude and location of the peak stress in the fillet. These improved compliance and stress sensitivity analyses are presented along with their evaluation against test, finite element, and analytic transformation results, which showed good agreement.

  14. Effect of signal intensity and camera quantization on laser speckle contrast analysis

    PubMed Central

    Song, Lipei; Elson, Daniel S.

    2012-01-01

Laser speckle contrast analysis (LASCA) is limited to being a qualitative method for the measurement of blood flow and tissue perfusion because it is sensitive to the measurement configuration. The signal intensity is one of the parameters that can affect the contrast values, due to the quantization of the signals by the camera and analog-to-digital converter (ADC). In this paper we deduce the theoretical relationship between signal intensity and contrast values based on the probability density function (PDF) of the speckle pattern and simplify it to a rational function. A simple method to correct this contrast error is suggested. The experimental results demonstrate that this relationship can effectively compensate for the bias in contrast values induced by the quantized signal intensity and correct for bias induced by signal intensity variations across the field of view. PMID:23304650
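
    The quantization bias described above can be reproduced numerically. For fully developed speckle, the intensity PDF is a negative exponential and the ideal contrast K = σ/⟨I⟩ equals 1, but rounding to integer camera counts distorts K at low signal levels. The sketch below is illustrative only: the exponential intensity field and the simple floor quantizer are assumptions, not the paper's exact model or correction.

```python
import math
import random

def contrast(samples):
    """Speckle contrast K = standard deviation / mean of intensity."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return math.sqrt(var) / mean

random.seed(0)
N = 200_000

# Fully developed speckle: negative-exponential intensity PDF, ideal K = 1.
bright = [random.expovariate(1 / 200.0) for _ in range(N)]  # ~200 counts mean
dim    = [random.expovariate(1 / 1.5) for _ in range(N)]    # ~1.5 counts mean

# Camera/ADC quantization modelled as truncation to integer counts.
k_bright = contrast([math.floor(i) for i in bright])
k_dim    = contrast([math.floor(i) for i in dim])

print(f"K at high signal: {k_bright:.3f}")  # close to the ideal value of 1
print(f"K at low signal:  {k_dim:.3f}")     # biased upward by quantization
```

    With only a few counts per pixel, the quantized contrast is biased well above 1; this is the kind of intensity-dependent error that the paper's rational-function correction addresses.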

  15. Computation and analysis of backward ray-tracing in aero-optics flow fields.

    PubMed

    Xu, Liang; Xue, Deting; Lv, Xiaoyi

    2018-01-08

    A backward ray-tracing method is proposed for aero-optics simulation. Different from forward tracing, the backward tracing direction is from the internal sensor to the distant target. Along this direction, the tracing in turn goes through the internal gas region, the aero-optics flow field, and the freestream. The coordinate value, the density, and the refractive index are calculated at each tracing step. A stopping criterion is developed to ensure the tracing stops at the outer edge of the aero-optics flow field. As a demonstration, the analysis is carried out for a typical blunt nosed vehicle. The backward tracing method and stopping criterion greatly simplify the ray-tracing computations in the aero-optics flow field, and they can be extended to our active laser illumination aero-optics study because of the reciprocity principle.
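
    The backward-tracing loop described above can be sketched as a simple march from the sensor outward, converting local density to refractive index and stopping when the density returns to freestream. The exponential density profile and tolerance below are hypothetical stand-ins, not values from the paper; the Gladstone-Dale relation n = 1 + K·ρ is the standard link between density and refractive index in aero-optics.

```python
import math

K_GD = 2.27e-4       # Gladstone-Dale constant for air, m^3/kg (visible light)
RHO_INF = 1.225      # freestream density, kg/m^3

def density(x):
    """Assumed density profile: elevated near the window, decaying outward.
    A real computation would sample the aero-optics flow-field solution."""
    return RHO_INF * (1.0 + 0.5 * math.exp(-x / 0.05))

def trace_ray(step=1e-3, tol=0.01):
    """March outward from the sensor, accumulating the optical path length
    OPL = integral of n ds.  Stop when the local density is within `tol`
    of freestream, i.e. the ray has left the aero-optics flow field."""
    x, opl = 0.0, 0.0
    while True:
        rho = density(x)
        opl += (1.0 + K_GD * rho) * step
        x += step
        if abs(rho - RHO_INF) / RHO_INF < tol:
            break
    return x, opl

x_stop, opl = trace_ray()
print(f"tracing stopped at x = {x_stop * 100:.1f} cm, OPL = {opl:.6f} m")
```

    Because n > 1 everywhere, the accumulated OPL slightly exceeds the geometric path length; differences in OPL across rays are what produce the aero-optical wavefront distortion.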

  16. On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis.

    PubMed

    Li, Bing; Chun, Hyonho; Zhao, Hongyu

    2014-09-01

We introduce a nonparametric method for estimating non-Gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use one-dimensional kernels regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a structure parallel to the Gaussian graphical model, in which the precision matrix is replaced by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the Gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis.

  17. Analysis of internal ablation for the thermal control of aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Camberos, Jose A.; Roberts, Leonard

    1989-01-01

    A new method of thermal protection for transatmospheric vehicles is introduced. The method involves the combination of radiation, ablation and transpiration cooling. By placing an ablating material behind a fixed-shape, porous outer shield, the effectiveness of transpiration cooling is made possible while retaining the simplicity of a passive mechanism. A simplified one-dimensional approach is used to derive the governing equations. Reduction of these equations to non-dimensional form yields two parameters which characterize the thermal protection effectiveness of the shield and ablator combination for a given trajectory. The non-dimensional equations are solved numerically for a sample trajectory corresponding to glide re-entry. Four typical ablators are tested and compared with results obtained by using the thermal properties of water. For the present level of analysis, the numerical computations adequately support the analytical model.

  18. Comparison between a typical and a simplified model for blast load-induced structural response

    NASA Astrophysics Data System (ADS)

    Abd-Elhamed, A.; Mahmoud, S.

    2017-02-01

Explosive blasts continue to cause severe damage and casualties in both civil and military environments, so there is a pressing need to understand the behavior of structural elements under such extremely short duration dynamic loads. Due to the complexity of the typical blast pressure profile model, and in order to reduce the modelling and computational effort, the simplified triangular model of the blast load profile is used to analyze structural response. This simplified model considers only the positive phase and ignores the suction phase that characterizes the typical model of blast loads. The closed-form solution of the equation of motion under a blast load modelled as a forcing term, using either the typical or the simplified model, has been derived. The two approaches considered herein have been compared using results from a simulation response analysis of a building structure under an applied blast load. The error incurred by simulating the response with the simplified model, relative to the typical one, has been computed. In general, both the simplified and the typical model can reproduce the dynamic blast-induced response of building structures. However, the simplified model shows remarkably different response behavior compared to the typical one, owing to its simplicity and its use of only the positive phase to represent the explosive load. The prediction of the dynamic system response using the simplified model is not satisfactory, as it yields larger errors compared to the response obtained using the typical model.
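
    The simplified triangular model reduces the blast pressure history to a peak force decaying linearly to zero over the positive-phase duration. A minimal single-degree-of-freedom sketch of the resulting response is below; all parameter values are illustrative assumptions, not those of the building analyzed in the paper, and the time stepping replaces the paper's closed-form solution.

```python
def triangular_load(t, F0, td):
    """Simplified blast model: peak F0 decaying linearly to zero at t = td,
    positive phase only (the suction phase is ignored)."""
    return F0 * (1.0 - t / td) if t < td else 0.0

def sdof_response(m=1.0, k=100.0, c=1.0, F0=100.0, td=0.1,
                  dt=1e-4, t_end=2.0):
    """Semi-implicit Euler integration of m*x'' + c*x' + k*x = F(t),
    returning the peak absolute displacement."""
    x, v, t, peak = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        a = (triangular_load(t, F0, td) - c * v - k * x) / m
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
        t += dt
    return peak

peak = sdof_response()
x_static = 100.0 / 100.0  # static deflection F0 / k
print(f"peak displacement {peak:.3f} vs static {x_static:.3f}")
```

    For this impulsive case (pulse duration much shorter than the natural period), the peak response is governed mainly by the impulse F0*td/2 and stays well below the factor-of-two dynamic amplification bound for suddenly applied loads.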

  19. Quantification of integrated HIV DNA by repetitive-sampling Alu-HIV PCR on the basis of Poisson statistics.

    PubMed

    De Spiegelaere, Ward; Malatinkova, Eva; Lynch, Lindsay; Van Nieuwerburgh, Filip; Messiaen, Peter; O'Doherty, Una; Vandekerckhove, Linos

    2014-06-01

    Quantification of integrated proviral HIV DNA by repetitive-sampling Alu-HIV PCR is a candidate virological tool to monitor the HIV reservoir in patients. However, the experimental procedures and data analysis of the assay are complex and hinder its widespread use. Here, we provide an improved and simplified data analysis method by adopting binomial and Poisson statistics. A modified analysis method on the basis of Poisson statistics was used to analyze the binomial data of positive and negative reactions from a 42-replicate Alu-HIV PCR by use of dilutions of an integration standard and on samples of 57 HIV-infected patients. Results were compared with the quantitative output of the previously described Alu-HIV PCR method. Poisson-based quantification of the Alu-HIV PCR was linearly correlated with the standard dilution series, indicating that absolute quantification with the Poisson method is a valid alternative for data analysis of repetitive-sampling Alu-HIV PCR data. Quantitative outputs of patient samples assessed by the Poisson method correlated with the previously described Alu-HIV PCR analysis, indicating that this method is a valid alternative for quantifying integrated HIV DNA. Poisson-based analysis of the Alu-HIV PCR data enables absolute quantification without the need of a standard dilution curve. Implementation of the CI estimation permits improved qualitative analysis of the data and provides a statistical basis for the required minimal number of technical replicates. © 2014 The American Association for Clinical Chemistry.
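
    The Poisson-based quantification step can be sketched directly: across replicate PCRs on a fixed sample volume, the fraction of negative reactions f estimates e^(-λ), so λ = -ln f gives the mean number of integration events per reaction without a standard dilution curve. The confidence interval below uses a simple normal approximation on f, which is an assumption for illustration; the paper's exact CI procedure may differ.

```python
import math

def poisson_quantify(negatives, total, z=1.96):
    """Estimate the mean number of targets per reaction from the fraction of
    negative replicates: P(negative) = exp(-lam)  =>  lam = -ln(neg/total)."""
    f = negatives / total
    lam = -math.log(f)
    # Normal-approximation CI on f, propagated through -ln() (monotone).
    se = math.sqrt(f * (1.0 - f) / total)
    lo = -math.log(min(f + z * se, 1.0 - 1e-12))
    hi = -math.log(max(f - z * se, 1e-12))
    return lam, (lo, hi)

# Example: 21 negative wells out of a 42-replicate Alu-HIV PCR.
lam, (lo, hi) = poisson_quantify(21, 42)
print(f"lambda = {lam:.4f}  (95% CI {lo:.4f}-{hi:.4f})")  # lambda = ln 2
```

    The width of the interval shrinks with the number of replicates, which is the statistical basis mentioned in the abstract for choosing the minimal number of technical replicates.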

  20. Analysis Model and Numerical Simulation of Thermoelectric Response of CFRP Composites

    NASA Astrophysics Data System (ADS)

    Lin, Yueguo

    2018-05-01

An electric current generates Joule heating, and under steady state conditions a sample exhibits a balance between the power dissipated by the Joule effect and the heat exchanged with the environment by radiation and convection. In the present paper, a theoretical model, numerical FEM and experimental methods have been used to analyze the radiation and free convection properties of CFRP composite samples heated by an electric current. The materials employed in these samples have applications in many aeronautic devices. This study addresses two types of composite materials, UD [0]8 and QI [45/90/-45/0]S, which were prepared for thermoelectric experiments. A DC electric current (ranging from 1 A to 8 A) was injected through the specimen ends to find the coupling effect between the electric current and temperature. An FE model and a simplified thermoelectric analysis model are presented in detail to represent the thermoelectric data. These are compared with the experimental results. All of the test equipment used to obtain the experimental data and the numerical simulations is characterized, and we find that the numerical simulations correspond well with the experiments. The temperature of the surface of the specimen is almost proportional to the electric current. The simplified analysis model was used to calculate the balance time of the temperature, which is consistent throughout all of the experimental investigations.
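
    The steady-state balance named in the first sentence, Joule power equal to convective plus radiative losses, can be solved for the surface temperature with a one-line residual and bisection. Every property value below (resistance, convection coefficient, area, emissivity) is an invented placeholder, not a measured CFRP property from the paper.

```python
# Steady state:  I^2 * R = h*A*(T - T_inf) + eps*sigma*A*(T^4 - T_inf^4)
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def residual(T, I, R=2.0, h=10.0, A=0.01, eps=0.9, T_inf=293.15):
    joule = I * I * R
    losses = h * A * (T - T_inf) + eps * SIGMA * A * (T**4 - T_inf**4)
    return joule - losses

def balance_temperature(I, T_inf=293.15):
    """Bisection: the losses grow monotonically with T, so the root is
    unique inside the bracket."""
    lo, hi = T_inf, T_inf + 2000.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if residual(mid, I) > 0.0:
            lo = mid          # still generating more heat than is lost
        else:
            hi = mid
    return 0.5 * (lo + hi)

for I in (1.0, 4.0, 8.0):
    T = balance_temperature(I)
    print(f"I = {I:.0f} A  ->  T = {T - 273.15:.1f} deg C")
```

    Because radiation grows as T^4, the temperature rise flattens at high current; over a limited current range the surface temperature can still look nearly proportional to the current, as the abstract reports.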

  1. SiMA: A simplified migration assay for analyzing neutrophil migration.

    PubMed

    Weckmann, Markus; Becker, Tim; Nissen, Gyde; Pech, Martin; Kopp, Matthias V

    2017-07-01

In lung inflammation, neutrophils are the first leukocytes to migrate to an inflammatory site, eliminating pathogens by multiple mechanisms. The term "migration" describes several stages of neutrophil movement toward the site of inflammation, of which the passage of the interstitium and the basal membrane of the airway are necessary to reach the site of bronchial inflammation. Several methods currently exist (e.g., the Boyden chamber, the under-agarose assay, or microfluidic systems) to assess neutrophil mobility. However, these methods do not allow parameterization at the single cell level; that is, analysis of individual neutrophil pathways is still considered challenging. This study sought to develop a simplified yet flexible method to monitor and quantify neutrophil chemotaxis by utilizing commercially available tissue culture hardware, simple video microscopic equipment and highly standardized tracking. A chemotaxis 3D µ-slide (IBIDI) was used with different chemoattractants [interleukin-8 (IL-8), fMLP, and leukotriene B4 (LTB4)] to attract neutrophils in different matrices such as fibronectin (FN) or human placental matrix (HEM). Migration was recorded for 60 min using phase contrast microscopy with an EVOS® FL Cell Imaging System. The images were normalized, and texture-based image segmentation was used to generate neutrophil trajectories. From this spatio-temporal information, a comprehensive parameter set is extracted from each time series describing neutrophil motility, including velocity, directness and chemotaxis. To characterize the latter, a sector analysis was employed, enabling quantification of the neutrophil response to the chemoattractant. Using this hardware and software framework, we were able to identify typical migration profiles for the chemoattractants IL-8, fMLP, and LTB4, the effect of the FN versus HEM matrices, and the response to different medications: Prednisolone induced a change of direction of migrating neutrophils in FN, but no such effect was observed in human placental matrix. In addition, neutrophils of asthmatic individuals showed an increased proportion of cells migrating toward the vehicle, and a comparison of four asthmatic and three non-asthmatic patients gives a first hint of the capability of the SiMA assay in the context of migration-based diagnostics. With SiMA we present a simplified yet flexible platform for cost-effective tracking and quantification of neutrophil migration, based on a simple microscopic video stage, standardized, commercially available µ-fluidic migration chambers, and automated image analysis and track validation software. © 2017 International Society for Advancement of Cytometry.
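
    The per-cell motility parameters named above (velocity, directness) follow directly from each tracked trajectory. The sketch below applies the standard definitions to toy tracks; it is illustrative and is not the SiMA image-analysis implementation.

```python
import math

def motility_parameters(track, dt=1.0):
    """track: list of (x, y) positions sampled every dt minutes.

    accumulated distance: sum of step lengths along the path
    velocity:             accumulated distance / elapsed time
    directness:           straight-line (euclidean) distance / accumulated
                          distance -- 1.0 means perfectly straight migration.
    """
    steps = [math.dist(track[i], track[i + 1]) for i in range(len(track) - 1)]
    accumulated = sum(steps)
    euclidean = math.dist(track[0], track[-1])
    elapsed = dt * (len(track) - 1)
    return {
        "velocity": accumulated / elapsed,
        "directness": euclidean / accumulated if accumulated else 0.0,
    }

straight = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
bent     = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
print(motility_parameters(straight))  # directness 1.0
print(motility_parameters(bent))      # directness sqrt(2)/2 ~ 0.707
```

    The sector analysis mentioned in the abstract extends this idea by binning the final displacement directions, so the fraction of cells moving toward the chemoattractant versus the vehicle can be quantified.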

  2. Data-Driven Nonlinear Subspace Modeling for Prediction and Control of Molten Iron Quality Indices in Blast Furnace Ironmaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ping; Song, Heda; Wang, Hong

Blast furnace (BF) ironmaking is a nonlinear dynamic process with complicated physical-chemical reactions, in which multi-phase and multi-field coupling and large time delays occur during operation. In BF operation, the molten iron temperature (MIT) as well as the Si, P and S contents of the molten iron are the most essential molten iron quality (MIQ) indices, whose measurement, modeling and control have always been important issues in the metallurgical engineering and automation fields. This paper develops a novel data-driven nonlinear state space model for the prediction and control of multivariate MIQ indices by integrating hybrid modeling and control techniques. First, to improve modeling efficiency, a data-driven hybrid method combining canonical correlation analysis and correlation analysis is proposed to identify the most influential controllable variables, among the multitude of factors that affect the MIQ indices, as the modeling inputs. Then, a Hammerstein model for the prediction of MIQ indices is established using the LS-SVM based nonlinear subspace identification method. This model is further simplified by using the piecewise cubic Hermite interpolating polynomial method to fit the complex nonlinear kernel function. Compared to the original Hammerstein model, the simplified model not only significantly reduces the computational complexity, but also has almost the same reliability and accuracy for a stable prediction of MIQ indices. Last, to verify the practicability of the developed model, it is applied in designing a genetic algorithm based nonlinear predictive controller for multivariate MIQ indices by directly taking the established model as a predictor. Industrial experiments show the advantages and effectiveness of the proposed approach.
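
    A Hammerstein model chains a static nonlinearity into linear dynamics, and the simplification described above amounts to replacing an expensive kernel-based nonlinearity with a cheap interpolant fitted to it once. The toy below uses an invented nonlinearity and a piecewise-linear interpolant as a stand-in for the paper's piecewise cubic Hermite polynomial; it only illustrates the structure, not the LS-SVM identification itself.

```python
import bisect
import math

def f_true(u):
    """Stand-in for the expensive kernel-based static nonlinearity."""
    return u + 0.3 * u * u

# Tabulate the nonlinearity once on a grid, then evaluate by interpolation
# (piecewise linear here; the paper uses piecewise cubic Hermite).
GRID = [i * 0.1 for i in range(-20, 21)]
VALS = [f_true(u) for u in GRID]

def f_interp(u):
    j = min(max(bisect.bisect_right(GRID, u) - 1, 0), len(GRID) - 2)
    w = (u - GRID[j]) / (GRID[j + 1] - GRID[j])
    return (1 - w) * VALS[j] + w * VALS[j + 1]

def hammerstein(us, f):
    """Static nonlinearity f followed by first-order linear dynamics:
    y[k] = 0.8 * y[k-1] + 0.2 * f(u[k-1])."""
    y, out = 0.0, []
    for u in us:
        out.append(y)
        y = 0.8 * y + 0.2 * f(u)
    return out

inputs = [math.sin(0.2 * k) for k in range(200)]
exact = hammerstein(inputs, f_true)
fast  = hammerstein(inputs, f_interp)
err = max(abs(a - b) for a, b in zip(exact, fast))
print(f"max prediction error of interpolated model: {err:.2e}")
```

    Because the interpolation error is bounded on the grid and the stable linear dynamics do not amplify it, the simplified model tracks the exact one closely, which is the trade-off the abstract reports.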

  3. History of Science in the Physics Curriculum: A Directed Content Analysis of Historical Sources

    NASA Astrophysics Data System (ADS)

    Seker, Hayati; Guney, Burcu G.

    2012-05-01

Although the history of science is a potential resource for instructional materials, teachers do not tend to use historical materials in their lessons. Studies have shown that instructional materials should be adaptable and consistent with the curriculum. This study examines the alignment between the history of science and the curriculum in the light of the facilitator model for the use of history of science in science teaching, and exposes possible difficulties in preparing historical materials. For this purpose, a qualitative content analysis method was employed. Codes and themes were defined beforehand with respect to the levels and sublevels of the model. The analysis revealed several problems with the alignment of historical sources with the physics curriculum: limited information about scientists' personal lives, the difficulty of linking with content knowledge, the lack of emphasis on scientific process in the physics curriculum, differences between chronology and the sequence of topics, and the lack of information about scientists' reasoning. Based on these findings, it would be difficult to use original historical sources directly; educators are needed to simplify historical knowledge within a pedagogical perspective. There is a need for historical sources, like the Harvard Case Histories in Experimental Science, since historical information appropriate to the curriculum objectives can only be obtained by simplifying the complex information at its origin. The curriculum should leave opportunities for educators interested in the history of science, even where historical sources provide ample information for every concept in the curriculum.

  4. Testing of a simplified LED based vis/NIR system for rapid ripeness evaluation of white grape (Vitis vinifera L.) for Franciacorta wine.

    PubMed

    Giovenzana, Valentina; Civelli, Raffaele; Beghi, Roberto; Oberti, Roberto; Guidetti, Riccardo

    2015-11-01

The aim of this work was to test a simplified optical prototype for rapid estimation of the ripening parameters of white grape for Franciacorta wine directly in the field. Spectral acquisition based on reflectance at four wavelengths (630, 690, 750 and 850 nm) was proposed. The integration of a simple processing algorithm in the microcontroller software would allow real-time visualization of the spectral reflectance values. Non-destructive analyses were carried out on 95 grape bunches, for a total of 475 berries. Samplings were performed weekly during the last ripening stages. Optical measurements were carried out both with the simplified system and with a portable commercial vis/NIR spectrophotometer as the reference instrument for performance comparison. Chemometric analyses were performed in order to extract the maximum useful information from the optical data. Principal component analysis (PCA) was performed for a preliminary evaluation of the data. Correlations between the optical data matrix and the ripening parameters (total soluble solids content, SSC; titratable acidity, TA) were computed using partial least squares (PLS) regression for the spectra and multiple linear regression (MLR) for the data from the simplified device. Classification analyses were also performed with the aim of discriminating ripe from unripe samples. The PCA, MLR and classification analyses show the effectiveness of the simplified system in separating samples among different sampling dates and in discriminating ripe from unripe samples. Finally, simple equations for SSC and TA prediction were calculated. Copyright © 2015 Elsevier B.V. All rights reserved.
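
    The final step, simple SSC/TA prediction equations from the four reflectances, is ordinary multiple linear regression. The sketch below fits such an equation via the normal equations on invented data; the reflectance values and coefficients are illustrative placeholders, not the calibration reported in the paper (the synthetic response is generated from known coefficients so the fit can be checked exactly).

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def mlr_fit(X, y):
    """Least squares for y ~ b0 + b1*x1 + ... via the normal equations
    X^T X b = X^T y."""
    Z = [[1.0] + row for row in X]                       # prepend intercept
    n = len(Z[0])
    XtX = [[sum(z[i] * z[j] for z in Z) for j in range(n)] for i in range(n)]
    Xty = [sum(z[i] * yi for z, yi in zip(Z, y)) for i in range(n)]
    return solve(XtX, Xty)

# Invented reflectances at 630/690/750/850 nm and a synthetic SSC (deg Brix).
X = [[0.12, 0.30, 0.55, 0.60], [0.20, 0.33, 0.50, 0.58],
     [0.35, 0.25, 0.52, 0.49], [0.41, 0.22, 0.45, 0.56],
     [0.50, 0.27, 0.40, 0.50], [0.60, 0.18, 0.44, 0.47]]
true_b = [5.0, 20.0, -8.0, 3.0, 1.5]
y = [true_b[0] + sum(b * x for b, x in zip(true_b[1:], row)) for row in X]

coeffs = mlr_fit(X, y)
print("recovered coefficients:", [round(c, 4) for c in coeffs])
```

    Once the five coefficients are fixed, the prediction equation is a single dot product, which is what makes it cheap enough to run on the prototype's microcontroller.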

  5. Simplified Acute Physiology Score II as Predictor of Mortality in Intensive Care Units: A Decision Curve Analysis

    PubMed Central

    Allyn, Jérôme; Ferdynus, Cyril; Bohrer, Michel; Dalban, Cécile; Valance, Dorothée; Allou, Nicolas

    2016-01-01

Background End-of-life decision-making in Intensive Care Units (ICUs) is difficult. The main problems encountered are the lack of a reliable prediction score for death and the fact that the opinion of patients is rarely taken into consideration. Decision Curve Analysis (DCA) is a recent method developed to evaluate prediction models that takes into account the willingness of patients (or surrogates) to expose themselves to the risk of obtaining a false result. Our objective was to evaluate, with DCA, the clinical usefulness of the Simplified Acute Physiology Score II (SAPS II) for predicting ICU mortality. Methods We conducted a retrospective cohort study from January 2011 to September 2015 in a medical-surgical 23-bed ICU at a university hospital. The performance of the SAPS II, a modified SAPS II (without age), and age alone in predicting ICU mortality was measured by Receiver Operating Characteristic (ROC) analysis and DCA. Results Among the 4,370 patients admitted, 23.3% died in the ICU. Mean (standard deviation) age was 56.8 (16.7) years, and median (first-third quartile) SAPS II was 48 (34-65). Areas under the ROC curves were 0.828 (0.813-0.843) for SAPS II, 0.814 (0.798-0.829) for the modified SAPS II and 0.627 (0.608-0.646) for age. DCA showed a net benefit whatever the probability threshold, especially below 0.5. Conclusion DCA shows the benefit of the SAPS II for predicting ICU mortality, especially when the probability threshold is low. Complementary studies are needed to define the exact role that the SAPS II can play in end-of-life decision-making in ICUs. PMID:27741304
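
    The quantity underlying a decision curve is the net benefit at the patient's (or surrogate's) chosen probability threshold pt: NB(pt) = TP/n - (pt/(1-pt))*FP/n, where the odds pt/(1-pt) express how many false positives the decision maker would trade for one true positive. The sketch below computes it for a toy cohort (the risk scores and outcomes are invented, not the study data) and compares it with the treat-all policy.

```python
def net_benefit(scores, outcomes, pt):
    """Decision-curve net benefit of the policy 'treat if risk >= pt'."""
    n = len(scores)
    tp = sum(1 for s, y in zip(scores, outcomes) if s >= pt and y == 1)
    fp = sum(1 for s, y in zip(scores, outcomes) if s >= pt and y == 0)
    return tp / n - (pt / (1.0 - pt)) * fp / n

def net_benefit_treat_all(outcomes, pt):
    """Net benefit of treating everyone, the usual reference policy."""
    prev = sum(outcomes) / len(outcomes)
    return prev - (pt / (1.0 - pt)) * (1.0 - prev)

# Invented predicted ICU-mortality risks and observed outcomes (1 = died).
scores   = [0.9, 0.8, 0.2, 0.6, 0.1, 0.4]
outcomes = [1,   1,   0,   1,   0,   0]

for pt in (0.1, 0.3, 0.5):
    nb = net_benefit(scores, outcomes, pt)
    nb_all = net_benefit_treat_all(outcomes, pt)
    print(f"pt = {pt:.1f}: model NB = {nb:.3f}, treat-all NB = {nb_all:.3f}")
```

    A model is clinically useful at a threshold when its net benefit exceeds both treat-all and treat-none (NB = 0); plotting NB across thresholds gives the decision curve itself.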

  6. An adjoint method of sensitivity analysis for residual vibrations of structures subject to impacts

    NASA Astrophysics Data System (ADS)

    Yan, Kun; Cheng, Gengdong

    2018-03-01

For structures subject to impact loads, reduction of the residual vibration becomes increasingly important as machines become faster and lighter. An efficient sensitivity analysis of the residual vibration with respect to structural or operational parameters is indispensable for using a gradient based optimization algorithm, which reduces the residual vibration in either an active or a passive way. In this paper, an integrated quadratic performance index is used as the measure of the residual vibration, since it globally measures the residual vibration response and its calculation can be simplified greatly with the Lyapunov equation. Several sensitivity analysis approaches for the performance index were developed based on the assumption that the initial excitations of the residual vibration were given and independent of the structural design. Since the excitations resulting from the impact load often depend on the structural design, this paper proposes a new efficient sensitivity analysis method for the residual vibration of structures subject to impacts that takes this dependence into account. The new method is developed by combining two existing methods and using an adjoint variable approach. Three numerical examples are carried out and demonstrate the accuracy of the proposed method. The numerical results show that the dependence of the initial excitations on the structural design variables may strongly affect the accuracy of the sensitivities.
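
    The Lyapunov-equation shortcut mentioned above works as follows: for stable linear dynamics x' = Ax with initial excitation x0, the integrated quadratic index J = ∫ xᵀQx dt equals x0ᵀPx0, where P solves AᵀP + PA = -Q, so no time integration is needed. A scalar sketch is below, with a brute-force integral as a check; the system and values are illustrative, not from the paper.

```python
import math

# Scalar residual-vibration measure: for x' = a*x (a < 0), x(0) = x0,
#   J = integral over [0, inf) of q * x(t)^2 dt  =  p * x0^2,
# where p solves the scalar Lyapunov equation 2*a*p = -q.
a, q, x0 = -2.0, 1.0, 3.0

p = -q / (2.0 * a)          # Lyapunov solution: no time integration needed
J_lyap = p * x0 * x0

# Brute-force check: trapezoidal integration of q * x(t)^2 with x = x0*e^(at).
dt, t_end = 1e-4, 10.0
n = int(t_end / dt)
vals = [q * (x0 * math.exp(a * i * dt)) ** 2 for i in range(n + 1)]
J_num = dt * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])

print(f"Lyapunov: J = {J_lyap:.6f},  numerical: J = {J_num:.6f}")
```

    In the matrix case the same identity holds with P from a standard Lyapunov solver, and differentiating J = x0ᵀPx0 is what couples the sensitivities of P and of the design-dependent excitation x0, the coupling the paper's adjoint method handles.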

  7. Diagnosis of cystic fibrosis with chloride meter (Sherwood M926S chloride analyzer®) and sweat test analysis system (CFΔ collection system®) compared to the Gibson Cooke method.

    PubMed

    Emiralioğlu, Nagehan; Özçelik, Uğur; Yalçın, Ebru; Doğru, Deniz; Kiper, Nural

    2016-01-01

The sweat test with the Gibson Cooke (GC) method is the diagnostic gold standard for cystic fibrosis (CF). Recently, alternative methods have been introduced to simplify both the collection and the analysis of sweat samples. Our aim was to compare sweat chloride values obtained by the GC method with those from other sweat test methods, in patients diagnosed with CF and in patients in whom CF had been ruled out. We wanted to determine whether the other sweat test methods could reliably identify patients with CF and differentiate them from healthy subjects. Chloride concentration was measured with the GC method, a chloride meter and a sweat test analysis system; conductivity was also determined with the sweat test analysis system. Forty-eight patients with CF and 82 patients without CF underwent the sweat test, with median sweat chloride values in the CF group of 98.9 mEq/L by the GC method, 101 mmol/L by the chloride meter, and 87.8 mmol/L by the sweat test analysis system. In the non-CF group, median sweat chloride values were 16.8 mEq/L with the GC method, 10.5 mmol/L with the chloride meter, and 15.6 mmol/L with the sweat test analysis system. The median conductivity value was 107.3 mmol/L in the CF group and 32.1 mmol/L in the non-CF group. There was a strong, statistically significant positive correlation between the GC method and the other sweat test methods (r=0.85) in all subjects. Sweat chloride concentration and conductivity by the other sweat test methods correlate highly with the GC method. We think that the other sweat test equipment can be used as reliably as the classic GC method to diagnose or exclude CF.

  8. Stress Analysis of Beams with Shear Deformation of the Flanges

    NASA Technical Reports Server (NTRS)

    Kuhn, Paul

    1937-01-01

    This report discusses the fundamental action of shear deformation of the flanges on the basis of simplifying assumptions. The theory is developed to the point of giving analytical solutions for simple cases of beams and of skin-stringer panels under axial load. Strain-gage tests on a tension panel and on a beam corresponding to these simple cases are described and the results are compared with analytical results. For wing beams, an approximate method of applying the theory is given. As an alternative, the construction of a mechanical analyzer is advocated.

  9. FDTD modeling of thin impedance sheets

    NASA Technical Reports Server (NTRS)

    Luebbers, Raymond; Kunz, Karl

    1991-01-01

    Thin sheets of resistive or dielectric material are commonly encountered in radar cross section calculations. Analysis of such sheets is simplified by using sheet impedances. It is shown that sheet impedances can be modeled easily and accurately using Finite Difference Time Domain (FDTD) methods. These sheets are characterized by a discontinuity in the tangential magnetic field on either side of the sheet but no discontinuity in tangential electric field. This continuity, or single valued behavior of the electric field, allows the sheet current to be expressed in terms of an impedance multiplying this electric field.

  10. Flow of rarefied gases over two-dimensional bodies

    NASA Technical Reports Server (NTRS)

    Jeng, Duen-Ren; De Witt, Kenneth J.; Keith, Theo G., Jr.; Chung, Chan-Hong

    1989-01-01

    A kinetic-theory analysis is made of the flow of rarefied gases over two-dimensional bodies of arbitrary curvature. The Boltzmann equation simplified by a model collision integral is written in an arbitrary orthogonal curvilinear coordinate system, and solved by means of finite-difference approximation with the discrete ordinate method. A numerical code is developed which can be applied to any two-dimensional submerged body of arbitrary curvature for the flow regimes from free-molecular to slip at transonic Mach numbers. Predictions are made for the case of a right circular cylinder.

  11. Curves showing column strength of steel and duralumin tubing

    NASA Technical Reports Server (NTRS)

    Ross, Orrin E

    1929-01-01

    Given here is a set of column strength curves intended to simplify the determination of strut sizes in an airplane structure when the load in the member is known. The curves also simplify checking the strength of a strut when its size and length are known. With these curves no computations are necessary, as they were in the old-fashioned method of strut design. The process is so simple that draftsmen and others not entirely familiar with mechanics can check the strength of a strut without much danger of error.
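
    The curves replace hand computation of the classical pin-ended Euler buckling load, P_cr = pi^2*E*I/L^2. A short sketch with illustrative (assumed) tube dimensions and a modern steel modulus:

```python
import math

def tube_moment_of_inertia(d_outer, d_inner):
    """Second moment of area of a circular tube cross-section (m^4)."""
    return math.pi * (d_outer**4 - d_inner**4) / 64.0

def euler_critical_load(e_modulus, inertia, length):
    """Euler critical buckling load (N) of a pin-ended column."""
    return math.pi**2 * e_modulus * inertia / length**2

E_STEEL = 200e9                                 # Pa, assumed
I = tube_moment_of_inertia(0.030, 0.026)        # 30 mm OD, 2 mm wall
P_cr = euler_critical_load(E_STEEL, I, 1.5)     # 1.5 m strut, pin-ended
```

    For these assumed dimensions the strut buckles at roughly 15 kN; the curves gave the same answer graphically, without arithmetic.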

  12. Guidelines and Metrics for Assessing Space System Cost Estimates

    DTIC Science & Technology

    2008-01-01

    analysis time, reuse tooling, models, mechanical ground-support equipment [MGSE]) High mass margin (simplifying assumptions used to bound solution...engineering environment changes High reuse of architecture, design, tools, code, test scripts, and commercial real-time operating systems Simplified life...Coronal Explorer TWTA traveling wave tube amplifier USAF U.S. Air Force USCM Unmanned Space Vehicle Cost Model USN U.S. Navy UV ultraviolet UVOT UV

  13. Simplified formulae for the estimation of offshore wind turbines clutter on marine radars.

    PubMed

    Grande, Olatz; Cañizo, Josune; Angulo, Itziar; Jenn, David; Danoon, Laith R; Guerra, David; de la Vega, David

    2014-01-01

    The potential impact that offshore wind farms may cause on nearby marine radars should be considered before the wind farm is installed. Strong radar echoes from the turbines may degrade radars' detection capability in the area around the wind farm. Although conventional computational methods provide accurate results of scattering by wind turbines, they are not directly implementable in software tools that can be used to conduct the impact studies. This paper proposes a simple model to assess the clutter that wind turbines may generate on marine radars. This method can be easily implemented in the system modeling software tools for the impact analysis of a wind farm in a real scenario.

  14. [Vision-astigmatometer and methods of its use].

    PubMed

    Dashevskiĭ, A I; Kirrilov, Iu A

    1991-01-01

    The authors describe a vision-astigmatometer that was tested clinically. Its front side carries, on a rotating disk, astigmatic figures with black strips oriented at 45-degree intervals, together with two mutually perpendicular figures combined with an angle; its back side carries a similar disk combining an angle with a visometric cross of Landolt optotypes, plus a table of optotypes on the same side. The gaps of the optotype rings are oriented along 8 meridians. The front side also shows a scheme for vector analysis of lenticular astigmatism. The method simplifies and accelerates the examination, making fogging and cross cylinders unnecessary.

  15. Simplified Formulae for the Estimation of Offshore Wind Turbines Clutter on Marine Radars

    PubMed Central

    Grande, Olatz; Cañizo, Josune; Jenn, David; Danoon, Laith R.; Guerra, David

    2014-01-01

    The potential impact that offshore wind farms may cause on nearby marine radars should be considered before the wind farm is installed. Strong radar echoes from the turbines may degrade radars' detection capability in the area around the wind farm. Although conventional computational methods provide accurate results of scattering by wind turbines, they are not directly implementable in software tools that can be used to conduct the impact studies. This paper proposes a simple model to assess the clutter that wind turbines may generate on marine radars. This method can be easily implemented in the system modeling software tools for the impact analysis of a wind farm in a real scenario. PMID:24782682

  16. A Simplified Method of Elastic-Stability Analysis for Thin Cylindrical Shells

    NASA Technical Reports Server (NTRS)

    Batdorf, S B

    1947-01-01

    This paper develops a new method for determining the buckling stresses of cylindrical shells under various loading conditions. In part I, the equation for the equilibrium of cylindrical shells introduced by Donnell in NACA report no. 479 to find the critical stresses of cylinders in torsion is applied to find critical stresses for cylinders with simply supported edges under other loading conditions. In part II, a modified form of Donnell's equation for the equilibrium of thin cylindrical shells is derived which is equivalent to Donnell's equation but has certain advantages in physical interpretation and in ease of solution, particularly in the case of shells having clamped edges. The question of implicit boundary conditions is also considered.
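
    For orientation, the classical Donnell stability equation that both parts build on can be written, in one common modern notation (sign conventions vary between references, and this is not necessarily the exact form used in the report):

```latex
D\nabla^8 w + \frac{Et}{R^2}\,\frac{\partial^4 w}{\partial x^4}
  - \nabla^4\!\left( N_x \frac{\partial^2 w}{\partial x^2}
  + 2N_{xy}\frac{\partial^2 w}{\partial x\,\partial y}
  + N_y \frac{\partial^2 w}{\partial y^2} \right) = 0,
\qquad D = \frac{Et^3}{12(1-\nu^2)},
```

    where w is the radial deflection, R the cylinder radius, t the wall thickness, and N_x, N_y, N_xy the membrane stress resultants. The modified form derived in part II rearranges this equation for easier solution with clamped edges.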

  17. A semi-implicit level set method for multiphase flows and fluid-structure interaction problems

    NASA Astrophysics Data System (ADS)

    Cottet, Georges-Henri; Maitre, Emmanuel

    2016-06-01

    In this paper we present a novel semi-implicit time discretization of the level set method introduced in [8] for fluid-structure interaction problems. The idea stems from a linear stability analysis derived on a simplified one-dimensional problem. The semi-implicit scheme relies on a simple filter operating as a pre-processing step on the level set function. It applies to multiphase flows driven by surface tension as well as to fluid-structure interaction problems. The semi-implicit scheme avoids the stability constraints that explicit schemes must satisfy and significantly reduces the computational cost. It is validated through comparisons with the original explicit scheme and refinement studies on two-dimensional benchmarks.

  18. Simplified adsorption method for detection of antibodies to Candida albicans germ tubes.

    PubMed Central

    Ponton, J; Quindos, G; Arilla, M C; Mackenzie, D W

    1994-01-01

    Two modifications that simplify and shorten a method for adsorbing antibodies against antigens expressed on both blastospore and germ tube cell wall surfaces (methods 2 and 3) were compared with the original adsorption method (method 1) for detecting anti-Candida albicans germ tube antibodies in 154 serum specimens. Adsorption of the sera by both modified methods resulted in titers very similar to those obtained by the original method. Only 5.2% of serum specimens tested by method 2 and 5.8% of those tested by method 3 showed titer discrepancies of more than one dilution relative to method 1. When a test based on method 2 was evaluated with sera from patients with invasive candidiasis, the best discriminatory results (sensitivity, 84.6%; specificity, 87.9%; positive predictive value, 75.9%; negative predictive value, 92.7%; efficiency, 86.9%) were obtained when a titer of > or = 1:160 was considered positive. PMID:8126184

  19. [Influence of trabecular microstructure modeling on finite element analysis of dental implant].

    PubMed

    Shen, M J; Wang, G G; Zhu, X H; Ding, X

    2016-09-01

    To analyze the influence of trabecular microstructure modeling on the biomechanical distribution at the implant-bone interface using a three-dimensional finite element mandible model with trabecular structure. Dental implants were embedded in the mandible of a beagle dog. Three months after implant installation, the mandibles with dental implants were harvested and scanned by micro-CT and cone-beam CT. Two three-dimensional finite element mandible models were built: one with trabecular microstructure (precise model) and one with macrostructure only (simplified model). The stress and strain values at the implant-bone interface were calculated with the software Ansys 14.0. Compared with the simplified model, the average interface stress values of the precise models increased markedly while the maximum values changed little. The maximum equivalent stress values of the precise models were 80% and 110% of those of the simplified model, and the average values were 170% and 290%. The maximum and average equivalent strain values of the precise models decreased markedly: the maximum values were 17% and 26% of the simplified model's, and the average values were 21% and 16%, respectively. Stress and strain concentrations at the implant-bone interface were pronounced in the simplified model, whereas the distributions of stress and strain were uniform in the precise model. Trabecular microstructure modeling has a significant effect on the predicted distribution of stress and strain at the implant-bone interface.

  20. Moessfit. A free Mössbauer fitting program

    NASA Astrophysics Data System (ADS)

    Kamusella, Sirko; Klauss, Hans-Henning

    2016-12-01

    A free data analysis program for Mössbauer spectroscopy was developed to solve commonly faced problems such as simultaneous fitting of multiple data sets, the maximum entropy method, and proper error estimation. The program is written in C++ using the Qt application framework and the GNU Scientific Library. Moessfit uses multithreading to exploit the multi-core CPU capacity of modern PCs. The whole fit is specified in a text input file, which simplifies the workflow and provides an easy start in Mössbauer data analysis for beginners. At the same time, the possibility of defining arbitrary parameter dependencies and distributions as well as relaxation spectra makes Moessfit interesting for advanced users.

  1. Observations and analysis of self-similar branching topology in glacier networks

    USGS Publications Warehouse

    Bahr, D.B.; Peckham, S.D.

    1996-01-01

    Glaciers, like rivers, have a branching structure which can be characterized by topological trees or networks. Probability distributions of various topological quantities in the networks are shown to satisfy the criterion for self-similarity, a symmetry structure which might be used to simplify future models of glacier dynamics. Two analytical methods of describing river networks, Shreve's random topology model and deterministic self-similar trees, are applied to the six glaciers of south central Alaska studied in this analysis. Self-similar trees capture the topological behavior observed for all of the glaciers, and most of the networks are also reasonably approximated by Shreve's theory. Copyright 1996 by the American Geophysical Union.
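
    Two of the simplest topological quantities used in such network analyses are the Shreve magnitude (the number of sources upstream of a link) and the Horton-Strahler order. A small sketch on a hypothetical branching network (not one of the Alaskan glacier networks):

```python
def shreve_magnitude(tree, node):
    """Number of sources (exterior links) upstream of this node."""
    kids = tree.get(node, [])
    if not kids:
        return 1                       # a source
    return sum(shreve_magnitude(tree, k) for k in kids)

def strahler_order(tree, node):
    """Horton-Strahler order: rises only when equal orders merge."""
    kids = tree.get(node, [])
    if not kids:
        return 1
    orders = sorted((strahler_order(tree, k) for k in kids), reverse=True)
    if len(orders) >= 2 and orders[0] == orders[1]:
        return orders[0] + 1
    return orders[0]

# Hypothetical network: outlet 'a' fed by branch 'b' (two sources) and source 'c'
tree = {'a': ['b', 'c'], 'b': ['d', 'e'], 'c': [], 'd': [], 'e': []}
magnitude = shreve_magnitude(tree, 'a')
order = strahler_order(tree, 'a')
```

    Probability distributions of such quantities over many links are what the self-similarity criterion is tested against.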

  2. Coach simplified structure modeling and optimization study based on the PBM method

    NASA Astrophysics Data System (ADS)

    Zhang, Miaoli; Ren, Jindong; Yin, Ying; Du, Jian

    2016-09-01

    For the coach industry, rapid modeling and efficient optimization methods are desirable for structure modeling and optimization based on simplified structures, especially early in the concept phase, with the ability to accurately express the mechanical properties of the structure and to accommodate flexible section forms. The present dimension-based methods cannot easily meet these requirements. To achieve these goals, the property-based modeling (PBM) beam modeling method is studied, based on PBM theory and in keeping with the characteristic of coach structures that beams are the main components. For a beam component of given length, the mechanical characteristics are primarily determined by the section properties. Four section parameters are adopted to describe the mechanical properties of a beam: the section area, the principal moments of inertia about the two principal axes, and the torsion constant of the section. Based on an equivalent-stiffness strategy, expressions for these section parameters are derived, and the PBM beam element is implemented in HyperMesh software. A case study is carried out in which the structure of a passenger coach is simplified with this method. The model precision is validated by comparing the basic performance of the simplified structure with that of the original structure, including the bending and torsion stiffness and the first-order bending and torsional modal frequencies. Sensitivity analysis is conducted to choose design variables. An optimal Latin hypercube experiment design is adopted to sample the test points, and polynomial response surfaces are used to fit these points. To improve the bending and torsion stiffness and the first-order torsional frequency, with the allowable maximum stresses of the braking and left-turning conditions as constraints, multi-objective optimization of the structure is conducted using the NSGA-II genetic algorithm on the ISIGHT platform. 
A Pareto solution set is obtained, and the strategy for selecting the final solution is discussed. The case study demonstrates that the mechanical performance of the structure can be well modeled and simulated by PBM beams. Because of the merits of fewer parameters and convenience of use, this method is well suited to the concept stage. A further merit is that the optimization results are requirements on the mechanical performance of the beam sections rather than on their shapes and dimensions, bringing flexibility to the succeeding design.
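
    The four PBM section parameters can be computed in closed form for simple sections. A sketch for a thin-walled rectangular box section, using standard formulas (Bredt's formula for the torsion constant); the dimensions are illustrative assumptions, not values from the paper:

```python
def box_section_properties(b, h, t):
    """Area, two principal moments of inertia, and torsion constant
    for a rectangular box section of outer width b, outer height h,
    and uniform (thin) wall thickness t. SI units."""
    area = b * h - (b - 2 * t) * (h - 2 * t)
    iy = (b * h**3 - (b - 2 * t) * (h - 2 * t)**3) / 12.0
    iz = (h * b**3 - (h - 2 * t) * (b - 2 * t)**3) / 12.0
    # Bredt's formula: J = 4 * A_m^2 * t / p, with A_m the area enclosed
    # by the wall midline and p its perimeter
    am = (b - t) * (h - t)
    p = 2 * ((b - t) + (h - t))
    j = 4 * am**2 * t / p
    return area, iy, iz, j

# Illustrative 50 x 50 mm square tube with 3 mm walls
area, iy, iz, j = box_section_properties(0.05, 0.05, 0.003)
```

    In the PBM workflow these four numbers, rather than the section's shape and dimensions, are what characterizes the beam element.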

  3. A simplified approach to the pooled analysis of calibration of clinical prediction rules for systematic reviews of validation studies

    PubMed Central

    Dimitrov, Borislav D; Motterlini, Nicola; Fahey, Tom

    2015-01-01

    Objective Estimating the calibration performance of clinical prediction rules (CPRs) in systematic reviews of validation studies is not possible when predicted values are neither published, accessible, nor sufficient, or when no individual participant or patient data are available. Our aims were to describe a simplified approach for outcome prediction and calibration assessment and to evaluate its functionality and validity. Study design and methods Methodological study of systematic reviews of validation studies of CPRs: a) the ABCD2 rule for prediction of 7 day stroke; and b) the CRB-65 rule for prediction of 30 day mortality. Predicted outcomes in a sample validation study were computed from CPR distribution patterns ("derivation model"). As confirmation, a logistic regression model (with derivation study coefficients) was applied to CPR-based dummy variables in the validation study. Meta-analysis of validation studies provided pooled estimates of "predicted:observed" risk ratios (RRs), 95% confidence intervals (CIs), and indexes of heterogeneity (I2) on forest plots (fixed and random effects models), with and without adjustment of intercepts. The same approach was also applied to the CRB-65 rule. Results Our simplified method, applied to the ABCD2 rule in three risk strata (low, 0–3; intermediate, 4–5; high, 6–7 points), indicated that the predictions are identical to those computed by a univariate, CPR-based logistic regression model. Discrimination was good (c-statistics = 0.61–0.82); however, calibration in some studies was low. In such cases of miscalibration, the under-prediction (RRs = 0.73–0.91, 95% CIs 0.41–1.48) could be corrected by intercept adjustment to account for differences in incidence. An improvement in both heterogeneity and P-values (Hosmer-Lemeshow goodness-of-fit test) was observed. Better calibration and improved pooled RRs (0.90–1.06), with narrower 95% CIs (0.57–1.41), were achieved. 
Conclusion Our results have an immediate clinical implication when predicted outcomes in CPR validation studies are lacking or deficient: they describe how such predictions can be obtained by anyone from the derivation study alone, without any need for highly specialized knowledge or sophisticated statistics. PMID:25931829
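
    The pooled "predicted:observed" RRs described above follow standard inverse-variance meta-analysis on the log scale. A minimal fixed-effect sketch with hypothetical per-study RRs and standard errors (not the review's data):

```python
import math

def pooled_rr(rrs, ses):
    """Fixed-effect inverse-variance pooling of risk ratios.
    rrs: per-study risk ratios; ses: standard errors of log(RR)."""
    weights = [1.0 / se**2 for se in ses]
    log_pooled = sum(w * math.log(rr)
                     for w, rr in zip(weights, rrs)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    lo = math.exp(log_pooled - 1.96 * se_pooled)
    hi = math.exp(log_pooled + 1.96 * se_pooled)
    return math.exp(log_pooled), (lo, hi)

# Three hypothetical validation studies, each slightly under-predicting
rr, (lo, hi) = pooled_rr([0.8, 0.9, 1.0], [0.1, 0.1, 0.1])
```

    A pooled RR below 1 with a CI excluding 1 would indicate systematic under-prediction of the kind the intercept adjustment corrects.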

  4. Simplified Asset Indices to Measure Wealth and Equity in Health Programs: A Reliability and Validity Analysis Using Survey Data From 16 Countries.

    PubMed

    Chakraborty, Nirali M; Fry, Kenzo; Behl, Rasika; Longfield, Kim

    2016-03-01

    Social franchising programs in low- and middle-income countries have tried using the standard wealth index, based on the Demographic and Health Survey (DHS) questionnaire, in client exit interviews to assess clients' relative wealth compared with the national wealth distribution, to ensure equity in service delivery. The large number of survey questions required to capture the wealth index variables has proved cumbersome for programs. Using an adaptation of the Delphi method, we developed shortened wealth indices and in February 2015 consulted 15 stakeholders in equity measurement. Together, we selected the best of 5 alternative indices, accompanied by 2 measures of agreement (percent agreement and Cohen's kappa statistic) comparing wealth quintile assignment in the new indices to the full DHS index. The panel agreed that reducing the number of assets was more important than standardization across countries, because a short index would provide a strong indication of client wealth and be easier to collect and use in the field. Additionally, the panel agreed that the simplified index should be highly correlated with the DHS for each country (kappa ≥ 0.75) for both national and urban-specific samples. We then revised indices for 16 countries and selected the minimum number of questions and question options required to achieve a kappa statistic ≥ 0.75 for both national and urban populations. After combining the 5 wealth quintiles into 3 groups, which the expert panel deemed more programmatically meaningful, reliability between the standard DHS wealth index and each of 3 simplified indices was high (median kappa = 0.81, 0.86, and 0.77, respectively, for index B, which included only the common questions from the DHS VI questionnaire; index D, which included the common questions plus country-specific questions; and index E, which used the shortest list of common and country-specific questions that met the minimum reliability criterion of kappa ≥ 0.75). 
Index E was the simplified index of choice because it was reliable in national and urban contexts while requiring the fewest survey questions: 6 to 18 per country, compared with 25 to 47 in the original DHS wealth index (a 66% average reduction). Social franchise clinics and other types of service delivery programs that want to assess client wealth in relation to a national or urban population can do so with high reliability using a short questionnaire. Future uses of the simplified asset questionnaire include a mobile application for rapid data collection and analysis. © Chakraborty et al.
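
    The reliability criterion throughout is Cohen's kappa between wealth-group assignments from the full DHS index and from a simplified index. A minimal sketch with hypothetical quintile assignments:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two categorical ratings of the same subjects."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n           # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb.get(k, 0) for k in ca) / n**2    # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical quintile assignments: full DHS index vs. a simplified index
dhs = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
simplified = [1, 1, 2, 2, 3, 3, 4, 5, 5, 5]
kappa = cohens_kappa(dhs, simplified)
```

    Here one of ten clients is assigned to a different quintile, and kappa = 0.875, comfortably above the study's 0.75 threshold.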

  5. Stability indicating simplified HPLC method for simultaneous analysis of resveratrol and quercetin in nanoparticles and human plasma.

    PubMed

    Kumar, Sandeep; Lather, Viney; Pandita, Deepti

    2016-04-15

    Resveratrol and quercetin are well-known polyphenolic compounds present in common foods that have demonstrated enormous potential in the treatment of a wide variety of diseases. Owing to their exciting synergistic potential and combination delivery applications, we developed a simple and rapid RP-HPLC method based on isosbestic point detection. The separation was carried out on a Phenomenex Synergi 4μ Hydro-RP 80A column using methanol:acetonitrile (ACN):0.1% phosphoric acid (60:10:30) as the mobile phase. The method quantifies nanogram amounts of both analytes simultaneously at a single wavelength (269 nm), making it highly sensitive, rapid and economical. Additionally, forced degradation studies of resveratrol and quercetin were carried out, and the method's applicability was evaluated on PLGA nanoparticles and human plasma. The analyte peaks were well resolved in the presence of degradation products and excipients. The simplicity of the developed method makes it well suited for routine in vitro and in vivo analysis of resveratrol and quercetin. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. TCP Packet Trace Analysis. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Shepard, Timothy J.

    1991-01-01

    Examination of a trace of packets collected from the network is often the only method available for diagnosing protocol performance problems in computer networks. This thesis explores the use of packet traces to diagnose performance problems of the transport protocol TCP. Unfortunately, manual examination of these traces can be so tedious that effective analysis is not possible. The primary contribution of this thesis is a graphical method of displaying the packet trace which greatly reduces the tediousness of examining a packet trace. The graphical method is demonstrated by the examination of some packet traces of typical TCP connections. The performance of two different implementations of TCP sending data across a particular network path is compared. Traces many thousands of packets long are used to demonstrate how effectively the graphical method simplifies examination of long, complicated traces. In the comparison of the two TCP implementations, the burstiness of the TCP transmitter appeared to be related to the achieved throughput. A method of quantifying this burstiness is presented, and its possible relevance to understanding the performance of TCP is discussed.
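
    The thesis's own burstiness measure is not reproduced here, but one simple illustrative proxy is the coefficient of variation of inter-departure times in a trace. A sketch with hypothetical packet timestamps:

```python
import statistics

def burstiness(send_times):
    """Coefficient of variation of inter-departure times: 0 for a
    perfectly paced sender, larger for a bursty one. (An illustrative
    proxy, not the thesis's metric.)"""
    gaps = [t1 - t0 for t0, t1 in zip(send_times, send_times[1:])]
    return statistics.stdev(gaps) / statistics.mean(gaps)

# Hypothetical traces: evenly paced vs. bursty transmission (seconds)
paced = [0.0, 1.0, 2.0, 3.0, 4.0]
bursty = [0.0, 0.01, 0.02, 1.0, 1.01, 1.02, 2.0]
cv_paced = burstiness(paced)
cv_bursty = burstiness(bursty)
```

    A sender emitting back-to-back clumps of packets scores much higher than one pacing packets evenly, mirroring the burstiness differences observed between the two TCP implementations.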

  7. Evaluation of use of MPAD trajectory tape and number of orbit points for orbiter mission thermal predictions

    NASA Technical Reports Server (NTRS)

    Vogt, R. A.

    1979-01-01

    The application of the mission planning and analysis division (MPAD) common-format trajectory data tape to predict temperatures for preflight and postflight mission analysis is presented and evaluated. All of the analyses utilized the latest Space Transportation System 1 flight (STS-1) MPAD trajectory tape and the simplified '136 node' midsection/payload bay thermal math model. For the first 6.7 hours of the STS-1 flight profile, transient temperatures are presented for selected nodal locations with the current standard method and with the trajectory tape method. Whether the differences are considered significant or not depends upon the viewpoint. Other transient temperature predictions are also presented. These results were obtained to investigate an initial concern that the predicted temperature differences between the two methods would be caused not only by the inaccuracies of the current method's assumed nominal attitude profile but also by an insufficient number of orbit points in the current method. Comparison between 6, 12, and 24 orbit-point parameters showed a surprising insensitivity to the number of orbit points.

  8. A comparative study on different methods of automatic mesh generation of human femurs.

    PubMed

    Viceconti, M; Bellingeri, L; Cristofolini, L; Toni, A

    1998-01-01

    The aim of this study was to evaluate comparatively five methods of automated mesh generation (AMG) when used to mesh a human femur. The AMG methods considered were: mapped mesh, which provides hexahedral elements through a direct mapping of the elements onto the geometry; tetra mesh, which generates tetrahedral elements from a solid model of the object geometry; voxel mesh, which builds cubic 8-node elements directly from CT images; and hexa mesh, which automatically generates hexahedral elements from a surface definition of the femur geometry. The various methods were tested against two reference models: a simplified geometric model and a proximal femur model. The first was useful for assessing the inherent accuracy of the meshes created by the AMG methods, since an analytical solution was available for the elastic problem of the simplified geometric model. The femur model was used to test the AMG methods under more realistic conditions. The femoral geometry was derived from a reference model (the "standardized femur"), and the finite-element analysis predictions were compared to experimental measurements. All methods were evaluated in terms of the human and computer effort needed to carry out the complete analysis, and in terms of accuracy. The comparison demonstrated that each tested method deserves attention and may be the best for specific situations. The mapped AMG method requires significant human effort but is very accurate and allows tight control of the mesh structure. The tetra AMG method requires a solid model of the object to be analysed but is widely available and accurate. The hexa AMG method requires significant computer effort but can also be used on polygonal models and is very accurate. The voxel AMG method requires a huge number of elements to reach an accuracy comparable to that of the other methods, but it does not require any pre-processing of the CT dataset to extract the geometry and in some cases may be the only viable solution.

  9. Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses.

    PubMed

    Ye, Jun

    2015-03-01

    In pattern recognition and medical diagnosis, the similarity measure is an important mathematical tool. To overcome some disadvantages of existing cosine similarity measures of simplified neutrosophic sets (SNSs) in vector space, this paper proposed improved cosine similarity measures of SNSs based on the cosine function, including single-valued neutrosophic cosine similarity measures and interval neutrosophic cosine similarity measures. Then, weighted cosine similarity measures of SNSs were introduced by taking into account the importance of each element. Further, a medical diagnosis method using the improved cosine similarity measures was proposed to solve medical diagnosis problems with simplified neutrosophic information. The improved cosine similarity measures between SNSs were introduced based on the cosine function. Then, we compared the improved cosine similarity measures of SNSs with existing cosine similarity measures of SNSs by numerical examples to demonstrate their effectiveness and rationality in overcoming some shortcomings of existing cosine similarity measures of SNSs in some cases. In the medical diagnosis method, a proper diagnosis can be found from the cosine similarity measures between the symptoms and the considered diseases, which are represented by SNSs. The medical diagnosis method based on the improved cosine similarity measures was then applied to two medical diagnosis problems to show the applications and effectiveness of the proposed method. Both numerical examples demonstrated that the improved cosine similarity measures of SNSs based on the cosine function can overcome the shortcomings of the existing cosine similarity measures between two vectors in some cases. In the two medical diagnosis problems, the diagnoses obtained using the various similarity measures of SNSs were identical, demonstrating the effectiveness and rationality of the diagnosis method proposed in this paper. 
The improved cosine measures of SNSs based on the cosine function can overcome some drawbacks of existing cosine similarity measures of SNSs in vector space, and the resulting diagnosis method is well suited to handling medical diagnosis problems with simplified neutrosophic information. Copyright © 2014 Elsevier B.V. All rights reserved.
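
    One of the improved single-valued neutrosophic measures proposed in this line of work takes, up to variant, the form S(A,B) = (1/n) Σ cos[π(|ΔT| + |ΔI| + |ΔF|)/6] over the truth, indeterminacy, and falsity memberships. The sketch below implements this form as an illustration and should not be read as the paper's exact definition:

```python
import math

def improved_cosine_similarity(a, b):
    """Cosine-function similarity between two single-valued neutrosophic
    sets, given as equal-length lists of (T, I, F) triples in [0, 1].
    Illustrative form: mean of cos(pi * (|dT| + |dI| + |dF|) / 6)."""
    total = 0.0
    for (ta, ia, fa), (tb, ib, fb) in zip(a, b):
        total += math.cos(
            math.pi * (abs(ta - tb) + abs(ia - ib) + abs(fa - fb)) / 6.0)
    return total / len(a)

# Identical sets score 1; maximally different elements score 0
same = improved_cosine_similarity([(0.8, 0.1, 0.1)], [(0.8, 0.1, 0.1)])
far = improved_cosine_similarity([(1.0, 0.0, 0.0)], [(0.0, 1.0, 1.0)])
```

    Unlike a plain vector-cosine measure, this form distinguishes proportional but unequal triples, which is one of the shortcomings the improved measures address.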

  10. Scarless assembly of unphosphorylated DNA fragments with a simplified DATEL method.

    PubMed

    Ding, Wenwen; Weng, Huanjiao; Jin, Peng; Du, Guocheng; Chen, Jian; Kang, Zhen

    2017-05-04

    Efficient assembly of multiple DNA fragments is a pivotal technology for synthetic biology. A scarless and sequence-independent DNA assembly method (DATEL) using thermal exonucleases has recently been developed. Here, we present a simplified DATEL (sDATEL) for efficient, low-cost assembly of unphosphorylated DNA fragments. The sDATEL method depends only on Taq DNA polymerase and Taq DNA ligase. After optimizing the critical parameters of the reaction system, such as pH and the concentrations of Mg2+ and NAD+, the assembly efficiency was increased 32-fold. To further improve the assembly capacity, the number of thermal cycles was optimized, resulting in successful assembly of 4 unphosphorylated DNA fragments with an accuracy of 75%. sDATEL could be a desirable method for routine manual and automated assembly.

  11. Simplified Automated Image Analysis for Detection and Phenotyping of Mycobacterium tuberculosis on Porous Supports by Monitoring Growing Microcolonies

    PubMed Central

    den Hertog, Alice L.; Visser, Dennis W.; Ingham, Colin J.; Fey, Frank H. A. G.; Klatser, Paul R.; Anthony, Richard M.

    2010-01-01

    Background Even with the advent of nucleic acid (NA) amplification technologies, the culture of mycobacteria for diagnostic and other applications remains of critical importance. Notably, microscopic observed drug susceptibility testing (MODS), as opposed to traditional culture on solid media or automated liquid culture, has shown potential to both speed up and increase the provision of mycobacterial culture in high-burden settings. Methods Here we explore the growth of Mycobacterium tuberculosis microcolonies, imaged by automated digital microscopy, cultured on porous aluminium oxide (PAO) supports. Repeated imaging during colony growth greatly simplifies "computer vision", and presumptive identification of microcolonies was achieved here using existing publicly available algorithms. Our system thus allows the growth of individual microcolonies to be monitored and, critically, the media to be changed during the growth phase without disrupting the microcolonies. Transfer of identified microcolonies onto selective media allowed us, within 1-2 bacterial generations, to rapidly detect the drug susceptibility of individual microcolonies, eliminating the need for time-consuming subculturing or the inoculation of multiple parallel cultures. Significance Monitoring the phenotype of individual microcolonies as they grow has immense potential for research, screening, and ultimately M. tuberculosis diagnostic applications. The method described is particularly appealing with respect to speed and automation. PMID:20544033

  12. Direct Surface and Droplet Microsampling for Electrospray Ionization Mass Spectrometry Analysis with an Integrated Dual-Probe Microfluidic Chip

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Cong-Min; Zhu, Ying; Jin, Di-Qiong

    Ambient mass spectrometry (MS) has revolutionized the way MS analysis is performed and broadened its application to various fields. This paper describes the use of microfluidic techniques to simplify the setup and improve the functions of ambient MS by integrating the sampling probe, electrospray emitter probe, and online mixer on a single glass microchip. Two types of sampling probes, a parallel-channel probe and a U-shaped channel probe, were designed for dry-spot and liquid-phase droplet samples, respectively. We demonstrated that the microfabrication techniques not only enhanced the capability of ambient MS methods in the analysis of dry-spot samples on various surfaces, but also enabled new applications in the analysis of nanoliter-scale chemical reactions in an array of droplets. The versatility of the microchip-based ambient MS method was demonstrated in multiple applications, including evaluation of residual pesticide on fruit surfaces, sensitive analysis of low-ionizable analytes using postsampling derivatization, and high-throughput screening of Ugi-type multicomponent reactions.

  13. Inverse supercritical fluid extraction as a sample preparation method for the analysis of the nanoparticle content in sunscreen agents.

    PubMed

    Müller, David; Cattaneo, Stefano; Meier, Florian; Welz, Roland; de Vries, Tjerk; Portugal-Cohen, Meital; Antonio, Diana C; Cascio, Claudia; Calzolai, Luigi; Gilliland, Douglas; de Mello, Andrew

    2016-04-01

    We demonstrate the use of inverse supercritical carbon dioxide (scCO2) extraction as a novel method of sample preparation for the analysis of complex nanoparticle-containing samples, in our case a model sunscreen agent with titanium dioxide nanoparticles. The sample was prepared for analysis in a simplified process using a lab scale supercritical fluid extraction system. The residual material was easily dispersed in an aqueous solution and analyzed by Asymmetrical Flow Field-Flow Fractionation (AF4) hyphenated with UV- and Multi-Angle Light Scattering detection. The obtained results allowed an unambiguous determination of the presence of nanoparticles within the sample, with almost no background from the matrix itself, and showed that the size distribution of the nanoparticles is essentially maintained. These results are especially relevant in view of recently introduced regulatory requirements concerning the labeling of nanoparticle-containing products. The novel sample preparation method is potentially applicable to commercial sunscreens or other emulsion-based cosmetic products and has important ecological advantages over currently used sample preparation techniques involving organic solvents. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Isothermal Amplification Methods for the Detection of Nucleic Acids in Microfluidic Devices

    PubMed Central

    Zanoli, Laura Maria; Spoto, Giuseppe

    2012-01-01

    Diagnostic tools for biomolecular detection need to fulfill specific requirements in terms of sensitivity, selectivity and throughput in order to widen their applicability and to minimize the cost of the assay. Nucleic acid amplification is a key step in DNA detection assays. It contributes to improving the assay sensitivity by enabling the detection of a limited number of target molecules. The use of microfluidic devices to miniaturize amplification protocols reduces the required sample volume and the analysis time, and offers new possibilities for process automation and integration into a single device. The vast majority of miniaturized systems for nucleic acid analysis exploit the polymerase chain reaction (PCR) amplification method, which requires repeated cycles of three or two temperature-dependent steps during the amplification of the nucleic acid target sequence. In contrast, low-temperature isothermal amplification methods have no need for thermal cycling and thus require simplified microfluidic device features. Here, the use of miniaturized analysis systems based on isothermal amplification reactions for nucleic acid amplification will be discussed. PMID:25587397

  15. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
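    The reparameterization idea is easiest to see in the simplest case: a fixed-knot spline whose truncated power basis builds the continuity constraint directly into the design matrix, so an ordinary least-squares fit respects it implicitly. A minimal numpy sketch with simulated data (not the authors' SAS/S-plus implementation; the knot location and coefficients below are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated longitudinal-style data: piecewise-linear trend with a change at x = 5
x = np.linspace(0, 10, 200)
y_true = np.where(x < 5, 1.0 + 0.5 * x, 3.5 - 0.3 * (x - 5))
y = y_true + rng.normal(0, 0.2, x.size)

knot = 5.0

def truncated_power_basis(x, knot):
    """Design matrix for a degree-1 regression spline with one fixed knot.

    Columns: intercept, x, (x - knot)_+  -- the truncated term lets the
    slope change at the knot while keeping the fit continuous there.
    """
    return np.column_stack([
        np.ones_like(x),
        x,
        np.clip(x - knot, 0.0, None),
    ])

X = truncated_power_basis(x, knot)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# beta[2] is the slope change at the knot; the fit itself stays continuous.
print("coefficients:", np.round(beta, 2))
```

Higher-order segments work the same way: adding (x - knot)_+^2 columns, for example, permits a quadratic piece while enforcing continuity (and first-derivative smoothness) at the knot.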

  16. Weighted minimum-norm source estimation of magnetoencephalography utilizing the temporal information of the measured data

    NASA Astrophysics Data System (ADS)

    Iwaki, Sunao; Ueno, Shoogo

    1998-06-01

    The weighted minimum-norm estimation (wMNE) is a popular method for obtaining the source distribution in the human brain from magneto- and electroencephalographic measurements when detailed information about the generator profile is not available. We propose a method to reconstruct current distributions in the human brain based on the wMNE technique, with weighting factors defined by a simplified multiple signal classification (MUSIC) prescanning. In this method, in addition to the conventional depth-normalization technique, the weighting factors of the wMNE were determined by cost values previously calculated by a simplified MUSIC scanning, which incorporates the temporal information of the measured data. We performed computer simulations of this method and compared it with the conventional wMNE method. The results show that the proposed method is effective for the reconstruction of current distributions from noisy data.

  17. Simplified method for preparation of concentrated exoproteins produced by Staphylococcus aureus grown on surface of cellophane bag containing liquid medium.

    PubMed

    Ikigai, H; Seki, K; Nishihara, S; Masuda, S

    1988-01-01

    A simplified method for the preparation of concentrated exoproteins, including protein A and alpha-toxin, produced by Staphylococcus aureus was successfully devised. The concentrated proteins were obtained by cultivating S. aureus organisms on the surface of a liquid medium-containing cellophane bag enclosed in a sterilized glass flask. With the same amount of medium, the total amount of protein obtained by the method presented here was identical to that obtained by conventional liquid culture. The concentration of proteins obtained by the method, however, was high enough to observe distinct stained bands on polyacrylamide gel electrophoresis. This method was considered quite useful not only for large-scale cultivation for the purification of staphylococcal proteins but also for small-scale studies using the proteins. A precise description of the method is presented and its possible usefulness discussed.

  18. iGen: An automated generator of simplified models with provable error bounds.

    NASA Astrophysics Data System (ADS)

    Tang, D.; Dobbie, S.

    2009-04-01

    Climate models employ various simplifying assumptions and parameterisations in order to increase execution speed. However, in order to draw conclusions about the Earth's climate from the results of a climate simulation, it is necessary to have information about the error that these assumptions and parameterisations introduce. A novel computer program, called iGen, is being developed which automatically generates fast, simplified models by analysing the source code of a slower, high-resolution model. The resulting simplified models have provable bounds on error compared to the high-resolution model and execute at speeds that are typically orders of magnitude faster. iGen's input is a definition of the prognostic variables of the simplified model, a set of bounds on acceptable error and the source code of a model that captures the behaviour of interest. In the case of an atmospheric model, for example, this would be a global cloud-resolving model with very high resolution. Although such a model would execute far too slowly to be used directly in a climate model, iGen never executes it. Instead, it converts the code of the resolving model into a mathematical expression which is then symbolically manipulated and approximated to form a simplified expression. This expression is then converted back into a computer program and output as a simplified model. iGen also derives and reports formal bounds on the error of the simplified model compared to the resolving model. These error bounds are always maintained below the user-specified acceptable error. Results will be presented illustrating the success of iGen's analysis of a number of example models. These extremely encouraging results have led to work, currently underway, to analyse a cloud-resolving model and so produce an efficient parameterisation of moist convection with formally bounded error.
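    iGen's symbolic machinery is not reproduced here, but the core idea, a fast surrogate accompanied by a formally guaranteed error bound, can be illustrated with interval arithmetic: replace an expression by a constant and bound the error by evaluating the original expression over the whole input interval. A toy sketch (the function and the interval are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __mul__(self, o):
        ps = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(ps), max(ps))

# Simplified model: replace f(x) = x*(1 - x) by its value at the interval
# midpoint, and bound the error by evaluating f over the whole interval.
x = Interval(0.2, 0.4)
f_interval = x * Interval(1 - x.hi, 1 - x.lo)   # interval extension of x*(1-x)
mid = 0.3
f_simplified = mid * (1 - mid)                  # constant surrogate model

error_bound = max(f_interval.hi - f_simplified, f_simplified - f_interval.lo)
print(f"surrogate = {f_simplified:.3f}, guaranteed error <= {error_bound:.3f}")
```

The bound is conservative (interval arithmetic over-approximates), but it is provable: the true value of f anywhere on the interval is guaranteed to lie within error_bound of the surrogate, which is the kind of guarantee iGen reports for its generated models.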

  19. Simplified Analysis of Airspike Heat Flux Into Lightcraft Thermal Management System

    NASA Astrophysics Data System (ADS)

    Head, Dean R.; Seo, Junghwa; Cassenti, Brice N.; Myrabo, Leik N.

    2005-04-01

    An approximate method is presented for estimating the airspike heat flux into a 9-meter-diameter lightcraft, integrated over its flight to low Earth orbit. The super-pressure lightcraft's exotic twin-hull, sandwich structure is assumed to be fabricated from SiC/SiC thin-film ceramic matrix composites of semiconductor-grade purity, giving superior structural properties while being transparent to 35-GHz microwave radiation. The vehicle's MHD slipstream accelerator engine is energized by an annular microwave power beam, converted on board into DC electric power by two concentric, water-cooled microwave rectenna arrays. The vehicle's airspike is created by a central 3-m-diameter laser beam that sustains a laser-supported detonation (LSD) wave 10 m ahead of the craft; the LSD wave propagates up the beam with a velocity that matches the lightcraft's flight speed. The simplified analysis, which is based on aerodynamic heating during re-entry, shows that helium flowing at a velocity of 10 m/s through the lightcraft's double hull is sufficient to keep the outer, 0.13-mm-thick SiC skin safely under its maximum service temperature. The interior helium pressurant that maintains the structural integrity of this exotic pressure-airship increases in temperature by only 25 K during the flight to LEO.

  20. Advanced quantitative magnetic nondestructive evaluation methods - Theory and experiment

    NASA Technical Reports Server (NTRS)

    Barton, J. R.; Kusenberger, F. N.; Beissner, R. E.; Matzkanin, G. A.

    1979-01-01

    The paper reviews the scale of fatigue crack phenomena in relation to the size-detection capabilities of nondestructive evaluation methods. An assessment of several features of fatigue in relation to the inspection of ball and roller bearings suggested the use of magnetic methods; magnetic domain phenomena, including the interaction of domains and inclusions and the influence of stress and magnetic field on domains, are discussed. Experimental results indicate that simplified calculations can be used to predict many features of these results; however, predictions from analytic models based on finite-element computer analysis do not agree with respect to certain features. Experimental analyses of rod-type fatigue specimens, which relate magnetic measurements to crack opening displacement, crack volume, and crack depth, should provide methods for improved crack characterization in relation to fracture mechanics and life prediction.

  1. An improved protocol for harvesting Bacillus subtilis colony biofilms.

    PubMed

    Fuchs, Felix Matthias; Driks, Adam; Setlow, Peter; Moeller, Ralf

    2017-03-01

    Bacterial biofilms cause severe problems in medicine and industry because organisms within biofilms are highly resistant to disinfectants and environmental stress. Addressing challenges caused by biofilms requires a full understanding of the underlying mechanisms of bacterial resistance and survival in biofilms. However, such work is hampered by a relative lack of biofilm cultivation systems that are practical and reproducible. To address this problem, we developed a readily applicable method to culture Bacillus subtilis biofilms on a membrane filter. The method results in biofilms with highly reproducible characteristics, which can be readily analyzed by a variety of methods with little further manipulation. This biofilm preparation method simplifies routine generation of B. subtilis biofilms for molecular and cellular analysis, and could be applicable to other microbial systems. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Lagrangian methods in the analysis of nonlinear wave interactions in plasma

    NASA Technical Reports Server (NTRS)

    Galloway, J. J.

    1972-01-01

    An averaged-Lagrangian method is developed for obtaining the equations which describe the nonlinear interactions of the wave (oscillatory) and background (nonoscillatory) components which comprise a continuous medium. The method applies to monochromatic waves in any continuous medium that can be described by a Lagrangian density, but is demonstrated in the context of plasma physics. The theory is presented in a more general and unified form by way of a new averaged-Lagrangian formalism which simplifies the perturbation ordering procedure. Earlier theory is extended to deal with a medium distributed in velocity space and to account for the interaction of the background with the waves. The analytic steps are systematized, so as to maximize calculational efficiency. An assessment of the applicability and limitations of the method shows that it has some definite advantages over other approaches in efficiency and versatility.

  3. A simplified method of performance indicators development for epidemiological surveillance networks--application to the RESAPATH surveillance network.

    PubMed

    Sorbe, A; Chazel, M; Gay, E; Haenni, M; Madec, J-Y; Hendrikx, P

    2011-06-01

    Developing and calculating performance indicators allows the operation of an epidemiological surveillance network to be followed continuously. This is an internal evaluation method, implemented by the coordinators in collaboration with all the actors of the network. Its purpose is to detect weak points in order to optimize management. A method for the development of performance indicators for epidemiological surveillance networks was developed in 2004 and has been applied to several networks. Its implementation requires a thorough description of the network environment and all its activities to define priority indicators. Since this method is considered complex, our objective was to develop a simplified approach and apply it to an epidemiological surveillance network. We applied the initial method to a theoretical network model to obtain a list of generic indicators that can be adapted to any surveillance network. We obtained a list of 25 generic performance indicators, intended to be reformulated and described according to the specificities of each network. It was used to develop performance indicators for RESAPATH, an epidemiological surveillance network for antimicrobial resistance in pathogenic bacteria of animal origin in France. This application allowed us to validate the simplified method, its value in terms of practical implementation, and its level of user acceptance. Its ease of use and speed of application compared to the initial method argue in favor of its use on a broader scale. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  4. Hepatic iron overload in the portal tract predicts poor survival in hepatocellular carcinoma after curative resection.

    PubMed

    Chung, Jung Wha; Shin, Eun; Kim, Haeryoung; Han, Ho-Seong; Cho, Jai Young; Choi, Young Rok; Hong, Sukho; Jang, Eun Sun; Kim, Jin-Wook; Jeong, Sook-Hyang

    2018-05-01

    Hepatic iron overload is associated with liver injury and hepatocarcinogenesis; however, it has not been evaluated in patients with hepatocellular carcinoma (HCC) in Asia. The aim of this study was to clarify the degree and distribution of intrahepatic iron deposition, and their effects on the survival of HCC patients. Intrahepatic iron deposition was examined using non-tumorous liver tissues from 204 HCC patients after curative resection, and scored by 2 semi-quantitative methods: the simplified Scheuer's and modified Deugnier's methods. For Scheuer's method, iron deposition in hepatocytes and Kupffer cells was evaluated separately, while for the modified Deugnier's method, the hepatocyte iron score (HIS), sinusoidal iron score (SIS) and portal iron score (PIS) were systematically evaluated, and the corrected total iron score (cTIS) was calculated by multiplying the sum (TIS) of the HIS, SIS, and PIS by a coefficient. The overall prevalence of hepatic iron was 40.7% with the simplified Scheuer's method and 45.1% with the modified Deugnier's method, with a mean cTIS of 2.46. During a median follow-up of 67 months, the cTIS was not associated with overall survival. However, a positive PIS was significantly associated with a lower 5-year overall survival rate (50.0%) compared with a negative PIS (73.7%, P = .006). In the multivariate analysis, a positive PIS was an independent factor for overall mortality (hazard ratio, 2.310; 95% confidence interval, 1.181-4.517). Intrahepatic iron deposition was common, and iron overload in the portal tract indicated poor survival in curatively resected HCC patients. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  5. Shielding analyses of an AB-BNCT facility using Monte Carlo simulations and simplified methods

    NASA Astrophysics Data System (ADS)

    Lai, Bo-Lun; Sheu, Rong-Jiun

    2017-09-01

    Accurate Monte Carlo simulations and simplified methods were used to investigate the shielding requirements of a hypothetical accelerator-based boron neutron capture therapy (AB-BNCT) facility that included an accelerator room and a patient treatment room. The epithermal neutron beam for BNCT was generated by coupling a neutron production target with a specially designed beam shaping assembly (BSA), which was embedded in the partition wall between the two rooms. Neutrons were produced from a beryllium target bombarded by 1-mA, 30-MeV protons. MCNP6-generated surface sources around all the exterior surfaces of the BSA were established to facilitate repeated Monte Carlo shielding calculations. In addition, three simplified models based on a point-source line-of-sight approximation were developed and their predictions were compared with the reference Monte Carlo results. The comparison determined which model gave the better dose estimate, forming the basis of future design activities for the first AB-BNCT facility in Taiwan.
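    In its simplest single-material form, a point-source line-of-sight model of the kind benchmarked here reduces to inverse-square geometry times exponential removal along the sight line. A hedged sketch with illustrative numbers (the source strength, removal coefficient, and flux-to-dose factor are assumptions for demonstration, not values from the study):

```python
import math

# Hypothetical example: uncollided neutron flux and dose rate behind a
# concrete wall, point-source line-of-sight model. All numbers are
# illustrative, not data from the facility design.
S = 1.0e12     # source strength (neutrons/s), assumed
mu = 0.094     # effective removal coefficient of ordinary concrete (1/cm), assumed
d = 300.0      # source-to-dose-point distance (cm)
t = 100.0      # shield thickness along the line of sight (cm)
k = 3.5e-9     # flux-to-dose conversion factor (uSv*cm^2), assumed

# Inverse-square spreading times exponential attenuation through the shield
flux = S * math.exp(-mu * t) / (4.0 * math.pi * d**2)   # n/cm^2/s
dose_rate = k * flux * 3600.0                           # uSv/h

print(f"uncollided flux = {flux:.3e} n/cm^2/s, dose rate = {dose_rate:.2e} uSv/h")
```

Models of this family differ mainly in how they fold in buildup factors and scattered components, which is presumably what the paper's comparison against the MCNP6 reference sorts out.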

  6. A simplified digital lock-in amplifier for the scanning grating spectrometer.

    PubMed

    Wang, Jingru; Wang, Zhihong; Ji, Xufei; Liu, Jie; Liu, Guangda

    2017-02-01

    For the common measurement and control system of a scanning grating spectrometer, the use of an analog lock-in amplifier requires complex circuitry and sophisticated debugging, whereas the use of a digital lock-in amplifier places a high demand on calculation capability and storage space. In this paper, a simplified digital lock-in amplifier based on averaging the absolute values of the signal within a complete period is presented and applied to a scanning grating spectrometer. The simplified digital lock-in amplifier was implemented on a low-cost microcontroller without multipliers, and it dispenses with the reference signal and with any specific configuration of the sampling frequency. Two positive zero-crossing detections were used to lock the phase of the measured signal. However, measurement errors were introduced by the following factors: frequency fluctuation, the sampling interval, and the integer restriction on the sampling number. The theoretical and experimental signal-to-noise ratios of the proposed measurement method were 2055 and 2403, respectively.
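    The rectify-and-average principle can be sketched in a few lines: truncate the record to an integer number of periods and average the absolute value, which for a sine of amplitude A equals 2A/pi. This toy version assumes the period is already known rather than locking it with the two positive zero-crossing detections the paper describes, and the frequencies and amplitude are invented:

```python
import numpy as np

fs = 50_000.0    # sampling rate (Hz), assumed
f0 = 170.0       # chopped-signal frequency (Hz), assumed
t = np.arange(0, 0.5, 1.0 / fs)
rng = np.random.default_rng(2)
sig = 0.8 * np.sin(2 * np.pi * f0 * t) + 0.05 * rng.normal(size=t.size)

# The paper locks the phase with two positive zero-crossing detections; here
# we simply truncate to whole periods so the average of |x| spans an integer
# number of cycles (the "integer restriction of the sampling number").
spp = int(round(fs / f0))           # samples per period (approximate)
n_periods = t.size // spp
seg = sig[: n_periods * spp]

mean_abs = np.mean(np.abs(seg))
amplitude = mean_abs * np.pi / 2.0  # for a sine, mean(|x|) = 2A/pi
print(f"estimated amplitude: {amplitude:.3f}")
```

Because only additions and an absolute value are needed, this maps naturally onto a multiplier-free microcontroller; the residual error sources (frequency fluctuation, the rounded samples-per-period count) are exactly those the abstract lists.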

  7. Simplified methods for computing total sediment discharge with the modified Einstein procedure

    USGS Publications Warehouse

    Colby, Bruce R.; Hubbell, David Wellington

    1961-01-01

    A procedure was presented in 1950 by H. A. Einstein for computing the total discharge of sediment particles of sizes that are in appreciable quantities in the stream bed. This procedure was modified by the U.S. Geological Survey and adapted to computing the total sediment discharge of a stream on the basis of samples of bed sediment, depth-integrated samples of suspended sediment, streamflow measurements, and water temperature. This paper gives simplified methods for computing total sediment discharge by the modified Einstein procedure. Each of four nomographs appreciably simplifies a major step in the computations. Within the stated limitations, use of the nomographs introduces much less error than is present in either the basic data or the theories on which the computations of total sediment discharge are based. The results are nearly as accurate mathematically as those that could be obtained from the longer and more complex arithmetic and algebraic computations of the Einstein procedure.

  8. Error analysis in stereo vision for location measurement of 3D point

    NASA Astrophysics Data System (ADS)

    Li, Yunting; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that computes the uncertainty region of the point location by intersecting the two pixel fields of view, which may produce loose bounds. Besides, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method for estimating the location error that takes most sources of error into account. We consolidate all the input errors into five parameters by a rotation transformation. We then use the fast midpoint-method algorithm to derive the mathematical relationships between the target point and these parameters, obtaining the expectation and covariance matrix of the 3D point location, which constitute its uncertainty region. Afterwards, we trace the propagation of the primitive input errors through the stereo system, covering the whole analysis process from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our method.
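    The midpoint method the authors build on computes the 3D point closest to the two back-projected viewing rays. A self-contained numpy sketch (the camera geometry in the usage example is invented):

```python
import numpy as np

def midpoint_triangulation(c1, d1, c2, d2):
    """Midpoint method: the 3D point closest to two viewing rays.

    c1, c2 are camera centers and d1, d2 unit ray directions. Solves the
    2x2 normal equations for the ray parameters s, t minimizing
    ||(c1 + s*d1) - (c2 + t*d2)||, then averages the two closest points.
    """
    r = c2 - c1
    b = d1 @ d2
    d, e = d1 @ r, d2 @ r
    denom = 1.0 - b * b          # vanishes only for parallel rays
    s = (d - b * e) / denom
    t = (b * d - e) / denom
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))

# Usage: two cameras observing a known point along exact rays.
c1 = np.zeros(3)
c2 = np.array([1.0, 0.0, 0.0])
p_true = np.array([0.5, 0.5, 2.0])
d1 = (p_true - c1) / np.linalg.norm(p_true - c1)
d2 = (p_true - c2) / np.linalg.norm(p_true - c2)
p_est = midpoint_triangulation(c1, d1, c2, d2)
print(p_est)   # recovers p_true
```

Perturbing d1 and d2 (e.g. by pixel-level angular noise) and collecting the resulting p_est samples is a direct, if brute-force, way to visualize the uncertainty region that the paper characterizes analytically.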

  9. A comparison of radiometric correction techniques in the evaluation of the relationship between LST and NDVI in Landsat imagery.

    PubMed

    Tan, Kok Chooi; Lim, Hwee San; Matjafri, Mohd Zubir; Abdullah, Khiruddin

    2012-06-01

    Atmospheric corrections for multi-temporal optical satellite images are necessary, especially in change detection analyses such as normalized difference vegetation index (NDVI) rationing. Abrupt change detection analysis using remote-sensing techniques requires radiometric congruity and atmospheric correction to monitor terrestrial surfaces over time. Two atmospheric correction methods were used for this study: relative radiometric normalization and the simplified method for atmospheric correction (SMAC) in the solar spectrum. A multi-temporal data set consisting of two sets of Landsat images from the period between 1991 and 2002 of Penang Island, Malaysia, was used to compare NDVI maps generated using the proposed atmospheric correction methods. Land surface temperature (LST) was retrieved using ATCOR3_T in PCI Geomatica 10.1 image processing software. Linear regression analysis was utilized to analyze the relationship between NDVI and LST. This study reveals that both of the proposed atmospheric correction methods yielded high accuracy, as shown by the linear correlation coefficients. To check the accuracy of the equation obtained through linear regression analysis for each satellite image, 20 points were randomly chosen. The results showed that the SMAC method yielded consistent errors when predicting NDVI values from the regression-derived equation. The average errors from both proposed atmospheric correction methods were less than 10%.
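    The final analysis step, regressing LST on NDVI and checking the linear correlation, can be sketched with synthetic reflectances (the band values, the LST model, and its coefficients below are invented, not the Landsat data from the study):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical surface-reflectance bands (0-1) for 500 pixels.
red = rng.uniform(0.05, 0.30, 500)
nir = rng.uniform(0.20, 0.60, 500)

# NDVI from the red and near-infrared bands
ndvi = (nir - red) / (nir + red)

# Synthetic LST (deg C): cooler where vegetation is denser, plus noise.
# The negative slope mimics the commonly reported LST-NDVI relationship;
# the coefficients are made up.
lst = 40.0 - 12.0 * ndvi + rng.normal(0.0, 1.0, ndvi.size)

slope, intercept = np.polyfit(ndvi, lst, 1)
r = np.corrcoef(ndvi, lst)[0, 1]
print(f"LST = {slope:.2f}*NDVI + {intercept:.2f}, r = {r:.3f}")
```

Running the same regression on imagery corrected by two different atmospheric methods, and comparing predicted against observed NDVI at held-out check points, mirrors the 20-point accuracy check described in the abstract.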

  10. Petri net-based method for the analysis of the dynamics of signal propagation in signaling pathways.

    PubMed

    Hardy, Simon; Robillard, Pierre N

    2008-01-15

    Cellular signaling networks are dynamic systems that propagate and process information and, ultimately, cause phenotypical responses. Understanding the circuitry of the information flow in cells is one of the keys to understanding complex cellular processes. The development of computational quantitative models is a promising avenue for attaining this goal. Not only does the analysis of simulation data based on the concentration variations of biological compounds yield information about systemic state changes, but it is also very helpful for obtaining information about the dynamics of signal propagation. This article introduces a new method for analyzing the dynamics of signal propagation in signaling pathways using Petri net theory. The method is demonstrated with the Ca(2+)/calmodulin-dependent protein kinase II (CaMKII) regulation network. The results comprise temporal information about signal propagation in the network, a simplified graphical representation of the network and of the signal propagation dynamics, and a characterization of some signaling routes as regulation motifs.
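    The token-flow view underlying the method can be illustrated with a toy Petri net: counting transition firings until a signal token reaches the final place gives exactly the kind of temporal propagation information the abstract describes. A minimal sketch with an invented two-step cascade (not the authors' tool or the CaMKII network):

```python
# Toy Petri net: places hold tokens, a transition fires when all its input
# places are marked, and we count firings until the signal reaches the end
# of a made-up cascade (receptor* activates a kinase, which marks its
# substrate). Catalyst tokens are returned by each transition.
places = {"receptor*": 1, "kinase": 1, "kinase*": 0, "substrate": 1, "substrate*": 0}
transitions = [
    # (preconditions, postconditions)
    ({"receptor*": 1, "kinase": 1}, {"receptor*": 1, "kinase*": 1}),
    ({"kinase*": 1, "substrate": 1}, {"kinase*": 1, "substrate*": 1}),
]

def step(marking):
    """Fire the first enabled transition (interleaving semantics)."""
    for pre, post in transitions:
        if all(marking[p] >= n for p, n in pre.items()):
            for p, n in pre.items():
                marking[p] -= n
            for p, n in post.items():
                marking[p] += n
            return True
    return False

steps = 0
while places["substrate*"] == 0 and step(places):
    steps += 1
print(f"signal reached substrate* after {steps} firings")
```

Here the signal needs one firing per cascade step, so path length in firings is a crude proxy for propagation time; the paper's method extracts much richer timing information, but from the same token-flow formalism.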

  11. MDAS: an integrated system for metabonomic data analysis.

    PubMed

    Liu, Juan; Li, Bo; Xiong, Jiang-Hui

    2009-03-01

    Metabonomics, the latest 'omics' research field, shows great promise as a tool in biomarker discovery, drug efficacy and toxicity analysis, and disease diagnosis and prognosis. One of the major challenges now facing researchers is how to process these data to yield useful information about a biological system, e.g., the mechanisms of disease. Traditional methods employed in metabonomic data analysis use multivariate analysis methods developed independently in chemometrics research. Additionally, with the development of machine learning approaches, methods such as SVMs also show promise for metabonomic data analysis. Aside from the application of general multivariate analysis and machine learning methods to this problem, there is also a need for an integrated tool customized for metabonomic data analysis that can be easily used by biologists to reveal interesting patterns in metabonomic data. In this paper, we present a novel software tool, MDAS (Metabonomic Data Analysis System), for metabonomic data analysis which integrates traditional chemometrics methods and newly introduced machine learning approaches. MDAS contains a suite of functional models for metabonomic data analysis and optimizes the flow of data analysis. Several file formats can be accepted as input. The input data can be optionally preprocessed and then processed with operations such as feature analysis and dimensionality reduction. The data with reduced dimensionality can be used for training or testing through machine learning models. The system supplies proper visualization for data preprocessing, feature analysis, and classification, which can be a powerful function for users to extract knowledge from the data. MDAS is an integrated platform for metabonomic data analysis, which transforms a complex analysis procedure into a more formalized and simplified one. The software package can be obtained from the authors.
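    The pipeline MDAS formalizes (preprocessing, dimensionality reduction, then classification on the reduced data) can be sketched with numpy alone; here synthetic data, PCA via SVD, and a nearest-centroid classifier stand in for the system's richer method suite:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "metabonomic" matrix: 40 samples x 100 metabolite features,
# two groups separated along a handful of features (all values invented).
X = rng.normal(size=(40, 100))
y = np.repeat([0, 1], 20)
X[y == 1, :5] += 3.0                      # group difference in 5 features

# Preprocessing: mean-center each feature.
Xc = X - X.mean(axis=0)

# Dimensionality reduction: PCA via SVD, keep two components.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T

# Classification on the reduced data: nearest group centroid.
centroids = np.array([scores[y == g].mean(axis=0) for g in (0, 1)])
dists = np.linalg.norm(scores[:, None, :] - centroids[None, :, :], axis=2)
pred = np.argmin(dists, axis=1)
accuracy = np.mean(pred == y)
print(f"training accuracy on 2 PCs: {accuracy:.2f}")
```

In a real analysis the classifier would be validated on held-out samples and the PCA scores plotted for visual inspection, the two places where an integrated tool like MDAS saves the most manual effort.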

  12. Rapid Assessment of Genotoxicity by Flow Cytometric Detection of Cell Cycle Alterations.

    PubMed

    Bihari, Nevenka

    2017-01-01

    Flow cytometry is a convenient method for the determination of genotoxic effects of environmental pollution and can reveal genotoxic compounds in unknown environmental mixtures. It is especially suitable for the analysis of large numbers of samples during monitoring programs. The speed of detection is one of the advantages of this technique, which permits the acquisition of 10⁴-10⁵ cells per sample in 5 min. This method can rapidly detect cell cycle alterations resulting from DNA damage. The outcome of such an analysis is a diagram of DNA content across the cell cycle which indicates cell proliferation, G2 arrests, G1 delays, apoptosis, and ploidy. Here, we present the flow cytometric procedure for rapid assessment of genotoxicity via detection of cell cycle alterations. The described protocol simplifies the analysis of genotoxic effects in marine environments and is suitable for monitoring purposes. It uses marine mussel cells in the analysis and can be adapted to investigations on a broad range of marine invertebrates.

  13. A new simplified method for measuring the permeability characteristics of highly porous media

    NASA Astrophysics Data System (ADS)

    Qin, Yinghong; Zhang, Mingyi; Mei, Guoxiong

    2018-07-01

    Fluid flow through highly porous media is important in a variety of science and technology fields, including hydrology, chemical engineering, and convection in porous media. While many methods are available to measure the permeability of tight solid materials, such as concrete and rock, techniques for measuring the permeability of highly porous media (such as gravel, aggregated soils, and crushed rock) are limited. This study proposes a new simplified method for measuring the permeability of highly porous media with a permeability of 10⁻⁸-10⁻⁴ m², using a Venturi tube to gauge the gas flow rate through the sample. Using crushed rocks and glass beads as the test media, we measure the permeability and inertial resistance factor of six types of single-size aggregate columns and compare the results with published values for crushed rock and glass beads. We found that, in a log-log graph, the permeability and inertial resistance factor of a single-size aggregate heap increase linearly with the mean diameter of the aggregate. We speculate that the proposed simplified method is suitable for efficiently testing the permeability and inertial resistance factor of a variety of porous media with an intrinsic permeability of 10⁻⁸-10⁻⁴ m².
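    Permeability and the inertial resistance factor are typically extracted by fitting the measured pressure gradient versus velocity to the Forchheimer equation dP/L = (mu/k)v + beta*rho*v². A sketch with synthetic data (the fluid properties and "true" parameters are assumptions for illustration; the paper's Venturi-based flow-rate measurement is not modeled):

```python
import numpy as np

mu = 1.8e-5        # dynamic viscosity of air (Pa*s)
rho = 1.2          # air density (kg/m^3)
k_true = 1.0e-8    # permeability (m^2), inside the method's 1e-8 to 1e-4 range
beta_true = 1.0e3  # inertial resistance factor (1/m), assumed

v = np.linspace(0.1, 2.0, 12)                 # superficial gas velocity (m/s)
rng = np.random.default_rng(5)
grad_p = (mu / k_true) * v + beta_true * rho * v**2   # Forchheimer equation
grad_p *= 1.0 + rng.normal(0.0, 0.005, v.size)        # 0.5% measurement noise

# Least-squares fit of dP/L = (mu/k)*v + (beta*rho)*v^2
A = np.column_stack([v, v**2])
coef, *_ = np.linalg.lstsq(A, grad_p, rcond=None)
k_fit = mu / coef[0]
beta_fit = coef[1] / rho

print(f"k = {k_fit:.2e} m^2, beta = {beta_fit:.2e} 1/m")
```

The velocity range matters: if the quadratic (inertial) term dominates everywhere, the linear (Darcy) coefficient, and hence k, is poorly identified, so measurements should span both regimes.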

  14. A systematic review of gait analysis methods based on inertial sensors and adaptive algorithms.

    PubMed

    Caldas, Rafael; Mundt, Marion; Potthast, Wolfgang; Buarque de Lima Neto, Fernando; Markert, Bernd

    2017-09-01

    The conventional methods to assess human gait are either expensive or too complex to be applied regularly in clinical practice. To reduce the cost and simplify the evaluation, inertial sensors and adaptive algorithms have been utilized, respectively. This paper aims to summarize studies that applied adaptive algorithms, also called artificial intelligence (AI) algorithms, to gait analysis based on inertial sensor data, verifying whether they can support clinical evaluation. Articles were identified through searches of the main databases, covering the period from 1968 to October 2016. We identified 22 studies that met the inclusion criteria. The included papers were analyzed with respect to their data acquisition and processing methods using specific questionnaires. Concerning data acquisition, the mean score is 6.1±1.62, which implies that 13 of the 22 papers failed to report relevant outcomes. The quality assessment of the AI algorithms presents an above-average rating (8.2±1.84). Therefore, AI algorithms seem able to support gait analysis based on inertial sensor data. Further research, however, is necessary to enhance and standardize the application in patients, since most of the studies used distinct methods to evaluate healthy subjects. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. New Approaches for Calculating Moran’s Index of Spatial Autocorrelation

    PubMed Central

    Chen, Yanguang

    2013-01-01

    Spatial autocorrelation plays an important role in geographical analysis; however, there is still room for improvement of this method. The formula for Moran’s index is complicated, and several basic problems remain to be solved. Therefore, I will reconstruct its mathematical framework using mathematical derivation based on linear algebra and present four simple approaches to calculating Moran’s index. Moran’s scatterplot will be ameliorated, and new test methods will be proposed. The relationship between the global Moran’s index and Geary’s coefficient will be discussed from two different vantage points: spatial population and spatial sample. The sphere of applications for both Moran’s index and Geary’s coefficient will be clarified and defined. One of the theoretical findings is that Moran’s index is a characteristic parameter of spatial weight matrices, so the selection of weight functions is very significant for autocorrelation analysis of geographical systems. A case study of 29 Chinese cities in 2000 will be employed to validate the innovative models and methods. This work is a methodological study, which will simplify the process of autocorrelation analysis. The results of this study will lay the foundation for the scaling analysis of spatial autocorrelation. PMID:23874592
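    For reference, the standard matrix form of the global Moran's index is I = (n / Σᵢⱼ wᵢⱼ) · (zᵀWz)/(zᵀz), where z holds the mean-centered values and W is the spatial weight matrix. A minimal sketch of the computation on a toy data set (not the paper's 29-city case study):

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's index of values x under spatial weight matrix W."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()              # mean-centered values
    n = x.size
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

# Toy example: 4 locations on a line with rook-style (adjacent) neighbours.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 2.0, 3.0, 4.0])   # monotone trend -> positive autocorrelation
```

    For this monotone pattern `morans_i(x, W)` is positive (1/3), while a perfectly alternating pattern such as `[1, 0, 1, 0]` gives a negative index, as expected.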

  16. Unique Outcomes of Internal Heat Generation and Thermal Deposition on Viscous Dissipative Transport of Viscoplastic Fluid over a Riga-Plate

    NASA Astrophysics Data System (ADS)

    Iqbal, Z.; Azhar, Ehtsham; Mehmood, Zaffar; Maraj, E. N.

    2018-01-01

    Boundary-layer stagnation-point flow of a Casson fluid over a Riga plate of variable thickness is investigated in the present article. A Riga plate is an electromagnetic actuator consisting of permanent magnets and a spanwise aligned array of alternating electrodes mounted on a plane surface. The physical problem is modeled and simplified under appropriate transformations, with the effects of thermal radiation and viscous dissipation incorporated. The resulting differential equations are solved by the Keller box scheme in MATLAB, and a comparison is given with the shooting technique combined with the fifth-order Runge-Kutta-Fehlberg method. Graphical and tabulated analyses are presented. The results reveal that the Eckert number, radiation and fluid parameters enhance the temperature while lowering the rate of heat transfer. The numerical outcomes of the present analysis show that the Keller box method is a capable and consistent tool for solving the proposed nonlinear problem with high accuracy.

  17. Aggregative Learning Method and Its Application for Communication Quality Evaluation

    NASA Astrophysics Data System (ADS)

    Akhmetov, Dauren F.; Kotaki, Minoru

    2007-12-01

    In this paper, the so-called Aggregative Learning Method (ALM) is proposed to improve and simplify the learning and classification abilities of different data processing systems. It provides a universal basis for the design and analysis of a wide class of mathematical models. A procedure is elaborated for time-series model reconstruction and analysis in both linear and nonlinear cases. Data approximation accuracy (during the learning phase) and data classification quality (during the recall phase) are estimated from introduced statistical parameters. The validity and efficiency of the proposed approach have been demonstrated through its application to monitoring wireless communication quality, namely for a Fixed Wireless Access (FWA) system. The procedure was shown to require low memory and computational resources, especially in the data classification (recall) stage. Characterized by high computational efficiency and a simple decision-making procedure, the derived approaches can be useful for simple and reliable real-time surveillance and control system design.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sattison, M.B.; Blackman, H.S.; Novack, S.D.

    The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methods, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements.

  19. A Simplified Method for Sampling and Analysis of High Volume Surface Water for Organic Contaminants Using XAD-2

    USGS Publications Warehouse

    Datta, S.; Do, L.V.; Young, T.M.

    2004-01-01

    A simple compressed-gas-driven system for field processing and extracting water for subsequent analyses of hydrophobic organic compounds is presented. The pumping device is a pneumatically driven pump and filtration system that can easily clarify water at 4 L/min. The extraction device uses compressed gas to drive filtered water through two parallel XAD-2 resin columns at about 200 mL/min. No batteries or inverters are required for water collection or processing. Solvent extractions were performed directly in the XAD-2 glass columns. Final extracts are cleaned up on Florisil cartridges without fractionation, and contaminants are analyzed by GC-MS. Method detection limits (MDLs) and recoveries for dissolved organic contaminants, polycyclic aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs) and pesticides are reported, along with results of surface water analysis for San Francisco Bay, CA.

  20. Study on fluid-structure interaction in liquid oxygen feeding pipe systems using finite volume method

    NASA Astrophysics Data System (ADS)

    Wei, Xin; Sun, Bing

    2011-10-01

    Fluid-structure interaction may occur in space launch vehicles, which can degrade vehicle performance, damage equipment on board, or even affect astronauts' health. In this paper, the dynamic behavior of the liquid oxygen (LOX) feeding pipe system in a large-scale launch vehicle is analyzed, with the effect of fluid-structure interaction (FSI) taken into consideration. The pipe system is simplified as a planar FSI model with Poisson coupling and junction coupling. Numerical tests on pipes between the tank and the pump are solved by the finite volume method. Results show that restrictions weaken the interaction between axial and lateral vibrations. The reasonable results regarding frequencies and modes indicate that FSI substantially affects the dynamic analysis, and thus highlight the usefulness of the proposed model. This study provides a reference for pipe tests, as well as facilitating further studies on oscillation suppression.

  1. Research on the self-absorption corrections for PGNAA of large samples

    NASA Astrophysics Data System (ADS)

    Yang, Jian-Bo; Liu, Zhi; Chang, Kang; Li, Rui

    2017-02-01

    When a large sample is analysed with prompt gamma neutron activation analysis (PGNAA), neutron self-shielding and gamma self-absorption affect the accuracy. A correction method for the detection efficiency of each element relative to H in a large sample is described. The influences of the thickness and density of cement samples on the H detection efficiency, as well as of the impurities Fe2O3 and SiO2 on the prompt γ-ray yield of each element in the cement samples, were studied. The phase functions for Ca, Fe, and Si relative to H, as functions of sample thickness and density, were provided to avoid the complicated procedure of preparing a corresponding density or thickness scale for each measured sample, presenting a simplified method for the measurement efficiency scale in prompt gamma neutron activation analysis.

  2. A multi-fidelity framework for physics based rotor blade simulation and optimization

    NASA Astrophysics Data System (ADS)

    Collins, Kyle Brian

    New helicopter rotor designs are desired that offer increased efficiency, reduced vibration, and reduced noise. Rotor designers in industry need methods that allow them to use the most accurate simulation tools available to search for these optimal designs. Computer-based rotor analysis and optimization have been advanced by the development of industry-standard codes known as "comprehensive" rotorcraft analysis tools. These tools typically use table look-up aerodynamics and simplified inflow models and perform aeroelastic analysis using Computational Structural Dynamics (CSD). Due to the simplified aerodynamics, most design studies are performed varying structure-related design variables like sectional mass and stiffness. The optimization of shape-related variables in forward flight using these tools is complicated, and results are viewed with skepticism because rotor blade loads are not accurately predicted. The most accurate methods of rotor simulation utilize Computational Fluid Dynamics (CFD) but have historically been considered too computationally intensive to be used in computer-based optimization, where numerous simulations are required. An approach is needed where high-fidelity CFD rotor analysis can be utilized in a shape-variable optimization problem with multiple objectives. Any such approach should be capable of working in forward flight in addition to hover. An alternative is proposed, founded on the idea that efficient hybrid CFD methods of rotor analysis are ready to be used in preliminary design. In addition, the proposed approach recognizes the usefulness of lower-fidelity physics-based analysis and surrogate modeling. Together, they are used with high-fidelity analysis in an intelligent process of surrogate model building of parameters in the high-fidelity domain. Closing the loop between high- and low-fidelity analysis is a key aspect of the proposed approach. 
This is done by using information from higher fidelity analysis to improve predictions made with lower fidelity models. This thesis documents the development of automated low and high fidelity physics based rotor simulation frameworks. The low fidelity framework uses a comprehensive code with simplified aerodynamics. The high fidelity model uses a parallel processor capable CFD/CSD methodology. Both low and high fidelity frameworks include an aeroacoustic simulation for prediction of noise. A synergistic process is developed that uses both the low and high fidelity frameworks together to build approximate models of important high fidelity metrics as functions of certain design variables. To test the process, a 4-bladed hingeless rotor model is used as a baseline. The design variables investigated include tip geometry and spanwise twist distribution. Approximation models are built for metrics related to rotor efficiency and vibration using the results from 60+ high fidelity (CFD/CSD) experiments and 400+ low fidelity experiments. Optimization using the approximation models found the Pareto Frontier anchor points, or the design having maximum rotor efficiency and the design having minimum vibration. Various Pareto generation methods are used to find designs on the frontier between these two anchor designs. When tested in the high fidelity framework, the Pareto anchor designs are shown to be very good designs when compared with other designs from the high fidelity database. This provides evidence that the process proposed has merit. Ultimately, this process can be utilized by industry rotor designers with their existing tools to bring high fidelity analysis into the preliminary design stage of rotors. In conclusion, the methods developed and documented in this thesis have made several novel contributions. First, an automated high fidelity CFD based forward flight simulation framework has been built for use in preliminary design optimization. 
The framework was built around an integrated, parallel processor capable CFD/CSD/AA process. Second, a novel method of building approximate models of high fidelity parameters has been developed. The method uses a combination of low and high fidelity results and combines Design of Experiments, statistical effects analysis, and aspects of approximation model management. And third, the determination of rotor blade shape variables through optimization using CFD based analysis in forward flight has been performed. This was done using the high fidelity CFD/CSD/AA framework and method mentioned above. While the low and high fidelity predictions methods used in the work still have inaccuracies that can affect the absolute levels of the results, a framework has been successfully developed and demonstrated that allows for an efficient process to improve rotor blade designs in terms of a selected choice of objective function(s). Using engineering judgment, this methodology could be applied today to investigate opportunities to improve existing designs. With improvements in the low and high fidelity prediction components that will certainly occur, this framework could become a powerful tool for future rotorcraft design work. (Abstract shortened by UMI.)

  3. FY16 Status Report on Development of Integrated EPP and SMT Design Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jetter, R. I.; Sham, T. -L.; Wang, Y.

    2016-08-01

    The goal of the Elastic-Perfectly Plastic (EPP) combined integrated creep-fatigue damage evaluation approach is to incorporate a Simplified Model Test (SMT) data-based approach for creep-fatigue damage evaluation into the EPP methodology, to avoid the separate evaluation of creep and fatigue damage and eliminate the requirement for stress classification in current methods, thus greatly simplifying the evaluation of elevated-temperature cyclic service. The EPP methodology is based on the idea that creep damage and strain accumulation can be bounded by a properly chosen “pseudo” yield strength used in an elastic-perfectly plastic analysis, thus avoiding the need for stress classification. The original SMT approach is based on the use of elastic analysis. The experimental data, cycles to failure, are correlated using the elastically calculated strain range in the test specimen, and the corresponding component strain is also calculated elastically. The advantage of this approach is that it is no longer necessary to use the damage interaction, or D-diagram, because the damage due to the combined effects of creep and fatigue is accounted for in the test data by means of a specimen that is designed to replicate or bound the stress and strain redistribution that occurs in actual components when loaded in the creep regime. The reference approach to combining the two methodologies and the corresponding uncertainties and validation plans are presented. Results from recent key feature tests are discussed to illustrate the applicability of the EPP methodology and the behavior of materials at elevated temperature when undergoing stress and strain redistribution due to plasticity and creep.

  4. Improving biobank consent comprehension: a national randomized survey to assess the effect of a simplified form and review/retest intervention

    PubMed Central

    Beskow, Laura M.; Lin, Li; Dombeck, Carrie B.; Gao, Emily; Weinfurt, Kevin P.

    2017-01-01

    Purpose: To determine the individual and combined effects of a simplified form and a review/retest intervention on biobanking consent comprehension. Methods: We conducted a national online survey in which participants were randomized within four educational strata to review a simplified or traditional consent form. Participants then completed a comprehension quiz; for each item answered incorrectly, they reviewed the corresponding consent form section and answered another quiz item on that topic. Results: Consistent with our first hypothesis, comprehension among those who received the simplified form was not inferior to that among those who received the traditional form. Contrary to expectations, receipt of the simplified form did not result in significantly better comprehension compared with the traditional form among those in the lowest educational group. The review/retest procedure significantly improved quiz scores in every combination of consent form and education level. Although improved, comprehension remained a challenge in the lowest-education group. Higher quiz scores were significantly associated with willingness to participate. Conclusion: Ensuring consent comprehension remains a challenge, but simplified forms have virtues independent of their impact on understanding. A review/retest intervention may have a significant effect, but assessing comprehension raises complex questions about setting thresholds for understanding and consequences of not meeting them. Genet Med advance online publication 13 October 2016 PMID:27735922

  5. An Independent Evaluation of the FMEA/CIL Hazard Analysis Alternative Study

    NASA Technical Reports Server (NTRS)

    Ray, Paul S.

    1996-01-01

    The present instruments of safety and reliability risk control for a majority of the National Aeronautics and Space Administration (NASA) programs/projects consist of Failure Mode and Effects Analysis (FMEA), Hazard Analysis (HA), Critical Items List (CIL), and Hazard Report (HR). This extensive analytical approach was introduced in the early 1970s and was implemented for the Space Shuttle Program by NHB 5300.4 (1D-2). Since the Challenger accident in 1986, the process has been expanded considerably, resulting in the introduction of similar and/or duplicated activities in safety/reliability risk analysis. A study initiated in 1995 to search for an alternative to the current FMEA/CIL Hazard Analysis methodology generated a proposed method on April 30, 1996. The objective of this Summer Faculty Study was to participate in and conduct an independent evaluation of the proposed alternative to simplify the present safety and reliability risk control procedure.

  6. A Combined Gravity Compensation Method for INS Using the Simplified Gravity Model and Gravity Database.

    PubMed

    Zhou, Xiao; Yang, Gongliu; Wang, Jing; Wen, Zeyang

    2018-05-14

    In recent decades, gravity compensation has become an important way to reduce the position error of an inertial navigation system (INS), especially a high-precision INS, because of the extensive application of high-precision inertial sensors (accelerometers and gyros). This paper first derives the INS's solution error considering gravity disturbance and simulates the results. It then proposes a combined gravity compensation method using a simplified gravity model and a gravity database. The new combined method consists of two steps. Step 1 subtracts the normal gravity using a simplified gravity model. Step 2 first obtains the gravity disturbance on the trajectory of the carrier with the help of ELM training based on measured gravity data (provided by the Institute of Geodesy and Geophysics, Chinese Academy of Sciences), and then compensates it in the error equations of the INS, considering the gravity disturbance, to further improve the navigation accuracy. The effectiveness and feasibility of this gravity compensation method for the INS are verified through vehicle tests in two different regions: one in flat terrain with mild gravity variation and the other in complex terrain with strong gravity variation. During 2 h vehicle tests, the positioning accuracy of the two tests improved by 20% and 38%, respectively, after the gravity was compensated by the proposed method.
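    Step 1 of such a method subtracts a normal-gravity model from the measured specific force. As an illustration of what a simplified gravity model can look like (the record does not specify which model the authors use; this is an assumption), the closed-form WGS84 Somigliana formula gives normal gravity on the ellipsoid as a function of geodetic latitude:

```python
import math

# WGS84 Somigliana formula for normal gravity on the ellipsoid (m/s^2):
#   gamma(phi) = gamma_e * (1 + k*sin^2(phi)) / sqrt(1 - e^2*sin^2(phi))
GAMMA_E = 9.7803253359       # normal gravity at the equator (m/s^2)
K = 0.00193185265241         # Somigliana's constant
E2 = 0.00669437999013        # first eccentricity squared

def normal_gravity(lat_deg):
    """Normal gravity (m/s^2) at geodetic latitude lat_deg on the WGS84 ellipsoid."""
    s2 = math.sin(math.radians(lat_deg)) ** 2
    return GAMMA_E * (1.0 + K * s2) / math.sqrt(1.0 - E2 * s2)
```

    A vertical-channel mechanization would subtract `normal_gravity(lat)` from the sensed specific force before integration; step 2 then corrects the residual gravity disturbance from the database.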

  7. A Combined Gravity Compensation Method for INS Using the Simplified Gravity Model and Gravity Database

    PubMed Central

    Zhou, Xiao; Yang, Gongliu; Wang, Jing; Wen, Zeyang

    2018-01-01

    In recent decades, gravity compensation has become an important way to reduce the position error of an inertial navigation system (INS), especially a high-precision INS, because of the extensive application of high-precision inertial sensors (accelerometers and gyros). This paper first derives the INS’s solution error considering gravity disturbance and simulates the results. It then proposes a combined gravity compensation method using a simplified gravity model and a gravity database. The new combined method consists of two steps. Step 1 subtracts the normal gravity using a simplified gravity model. Step 2 first obtains the gravity disturbance on the trajectory of the carrier with the help of ELM training based on measured gravity data (provided by the Institute of Geodesy and Geophysics, Chinese Academy of Sciences), and then compensates it in the error equations of the INS, considering the gravity disturbance, to further improve the navigation accuracy. The effectiveness and feasibility of this gravity compensation method for the INS are verified through vehicle tests in two different regions: one in flat terrain with mild gravity variation and the other in complex terrain with strong gravity variation. During 2 h vehicle tests, the positioning accuracy of the two tests improved by 20% and 38%, respectively, after the gravity was compensated by the proposed method. PMID:29757983

  8. Methodology for determining major constituents of ayahuasca and their metabolites in blood.

    PubMed

    McIlhenny, Ethan H; Riba, Jordi; Barbanoj, Manel J; Strassman, Rick; Barker, Steven A

    2012-03-01

    There is an increasing interest in potential medical applications of ayahuasca, a South American psychotropic plant tea with a long cultural history of indigenous medical and religious use. Clinical research into ayahuasca will require specific, sensitive and comprehensive methods for the characterization and quantitation of these compounds and their metabolites in blood. A combination of two analytical techniques (high-performance liquid chromatography with ultraviolet and/or fluorescence detection and gas chromatography with nitrogen-phosphorus detection) has been used for the analysis of some of the constituents of ayahuasca in blood following its oral consumption. We report here a single methodology for the direct analysis of 14 of the major alkaloid components of ayahuasca, including several known and potential metabolites of N,N-dimethyltryptamine and the harmala alkaloids in blood. The method uses 96-well plate/protein precipitation/filtration for plasma samples, and analysis by HPLC-ion trap-ion trap-mass spectrometry using heated electrospray ionization to reduce matrix effects. The method expands the list of compounds capable of being monitored in blood following ayahuasca administration while providing a simplified approach to their analysis. The method has adequate sensitivity, specificity and reproducibility to make it useful for clinical research with ayahuasca. Copyright © 2011 John Wiley & Sons, Ltd.

  9. Simplified methods for calculating photodissociation rates

    NASA Technical Reports Server (NTRS)

    Shimazaki, T.; Ogawa, T.; Farrell, B. C.

    1977-01-01

    Simplified methods for calculating the transmission of solar UV radiation and the dissociation coefficients of various molecules are compared. A significant difference sometimes appears in calculations for an individual band, but the total transmission and the total dissociation coefficients integrated over the entire SR (Schumann-Runge) band region agree well between the methods. Ambiguities in the solar flux data affect the calculated dissociation coefficients more strongly than does the choice of method. A simpler method is developed to reduce the computation time and the computer memory needed for storing coefficients of the equations. The new method can reduce the computation time by a factor of more than 3 and the memory size by a factor of more than 50 compared with the Hudson-Mahle method, yet the result agrees within 10 percent (in most cases much less) with the original Hudson-Mahle results, except for H2O and CO2. A revised method is necessary for these two molecules, whose absorption cross sections change very rapidly over the SR band spectral range.

  10. Natural-Annotation-based Unsupervised Construction of Korean-Chinese Domain Dictionary

    NASA Astrophysics Data System (ADS)

    Liu, Wuying; Wang, Lin

    2018-03-01

    Large-scale bilingual parallel resources are significant for statistical learning and deep learning in natural language processing. This paper addresses the automatic construction of a Korean-Chinese domain dictionary and presents a novel unsupervised construction method based on natural annotation in a raw corpus. We first extract all Korean-Chinese word pairs from Korean texts according to natural annotations, then transform the traditional Chinese characters into simplified ones, and finally distill a bilingual domain dictionary after retrieving the simplified Chinese words in an external Chinese domain dictionary. The experimental results show that our method can automatically build multiple Korean-Chinese domain dictionaries efficiently.

  11. On the joint inversion of geophysical data for models of the coupled core-mantle system

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    1991-01-01

    Joint inversion of magnetic, earth rotation, geoid, and seismic data for a unified model of the coupled core-mantle system is proposed and shown to be possible. A sample objective function is offered and simplified by targeting results from independent inversions and summary travel time residuals instead of original observations. These data are parameterized in terms of a very simple, closed model of the topographically coupled core-mantle system. Minimization of the simplified objective function leads to a nonlinear inverse problem; an iterative method for solution is presented. Parameterization and method are emphasized; numerical results are not presented.

  12. Numerical simulation of fluid flow through simplified blade cascade with prescribed harmonic motion using discontinuous Galerkin method

    NASA Astrophysics Data System (ADS)

    Vimmr, Jan; Bublík, Ondřej; Prausová, Helena; Hála, Jindřich; Pešek, Luděk

    2018-06-01

    This paper deals with a numerical simulation of compressible viscous fluid flow around three flat plates with prescribed harmonic motion. This arrangement represents a simplified blade cascade with forward wave motion. The aim of the simulation is to determine the aerodynamic forces acting on the flat plates. The mathematical model describing this problem is formed by the Favre-averaged system of Navier-Stokes equations in arbitrary Lagrangian-Eulerian (ALE) formulation, completed by the one-equation Spalart-Allmaras turbulence model. The simulation was performed using in-house CFD software based on the discontinuous Galerkin method, which offers a high order of accuracy.

  13. Transcriptomic responses of a simplified soil microcosm to a plant pathogen and its biocontrol agent reveal a complex reaction to harsh habitat.

    PubMed

    Perazzolli, Michele; Herrero, Noemí; Sterck, Lieven; Lenzi, Luisa; Pellegrini, Alberto; Puopolo, Gerardo; Van de Peer, Yves; Pertot, Ilaria

    2016-10-27

    Soil microorganisms are key determinants of soil fertility and plant health. Soil phytopathogenic fungi are one of the most important causes of crop losses worldwide. Microbial biocontrol agents have been extensively studied as alternatives for controlling phytopathogenic soil microorganisms, but molecular interactions between them have mainly been characterised in dual cultures, without taking into account the soil microbial community. We used an RNA sequencing approach to elucidate the molecular interplay of a soil microbial community in response to a plant pathogen and its biocontrol agent, in order to examine the molecular patterns activated by the microorganisms. A simplified soil microcosm containing 11 soil microorganisms was incubated with a plant root pathogen (Armillaria mellea) and its biocontrol agent (Trichoderma atroviride) for 24 h under controlled conditions. More than 46 million paired-end reads were obtained for each replicate and 28,309 differentially expressed genes were identified in total. Pathway analysis revealed complex adaptations of soil microorganisms to the harsh conditions of the soil matrix and to reciprocal microbial competition/cooperation relationships. Both the phytopathogen and its biocontrol agent were specifically recognised by the simplified soil microcosm: defence reaction mechanisms and neutral adaptation processes were activated in response to competitive (T. atroviride) or non-competitive (A. mellea) microorganisms, respectively. Moreover, activation of resistance mechanisms dominated in the simplified soil microcosm in the presence of both A. mellea and T. atroviride. Biocontrol processes of T. atroviride were already activated during incubation in the simplified soil microcosm, possibly to occupy niches in a competitive ecosystem, and they were not further enhanced by the introduction of A. mellea. 
This work represents an additional step towards understanding molecular interactions between plant pathogens and biocontrol agents within a soil ecosystem. Global transcriptional analysis of the simplified soil microcosm revealed complex metabolic adaptation in the soil environment and specific responses to antagonistic or neutral intruders.

  14. Simplified DFT methods for consistent structures and energies of large systems

    NASA Astrophysics Data System (ADS)

    Caldeweyher, Eike; Gerit Brandenburg, Jan

    2018-05-01

    Kohn–Sham density functional theory (DFT) is routinely used for the fast electronic structure computation of large systems and will most likely continue to be the method of choice for the generation of reliable geometries in the foreseeable future. Here, we present a hierarchy of simplified DFT methods designed for consistent structures and non-covalent interactions of large systems with particular focus on molecular crystals. The covered methods are a minimal basis set Hartree–Fock (HF-3c), a small basis set screened exchange hybrid functional (HSE-3c), and a generalized gradient approximated functional evaluated in a medium-sized basis set (B97-3c), all augmented with semi-classical correction potentials. We give an overview on the methods design, a comprehensive evaluation on established benchmark sets for geometries and lattice energies of molecular crystals, and highlight some realistic applications on large organic crystals with several hundreds of atoms in the primitive unit cell.

  15. Experimental determination of the viscous flow permeability of porous materials by measuring reflected low frequency acoustic waves

    NASA Astrophysics Data System (ADS)

    Berbiche, A.; Sadouki, M.; Fellah, Z. E. A.; Ogam, E.; Fellah, M.; Mitri, F. G.; Depollier, C.

    2016-01-01

    An acoustic reflectivity method is proposed for measuring the permeability, or flow resistivity, of air-saturated porous materials. In this method, a simplified expression for the reflection coefficient is derived in the Darcy regime (low-frequency range) that does not depend on frequency or porosity. Numerical simulations show that the reflection coefficient of a porous material can be approximated by this simplified expression, obtained from its first-order Taylor expansion. The approximation is especially good for resistive materials (of low permeability) and for the lower frequencies. The permeability is reconstructed by solving the inverse problem using waves reflected by plastic foam samples at different frequency bandwidths in the Darcy regime. The proposed method has the advantage of being simple compared with conventional methods that use experimental reflection data, and it is complementary to the transmissivity method, which is better adapted to low-resistivity materials (high permeability).

  16. Induced simplified neutrosophic correlated aggregation operators for multi-criteria group decision-making

    NASA Astrophysics Data System (ADS)

    Şahin, Rıdvan; Zhang, Hong-yu

    2018-03-01

    The induced Choquet integral is a powerful tool for dealing with imprecise or uncertain information. This study proposes a combination of the induced Choquet integral and neutrosophic information. We first give the operational properties of simplified neutrosophic numbers (SNNs). Then, we develop some new information aggregation operators, including an induced simplified neutrosophic correlated averaging (I-SNCA) operator and an induced simplified neutrosophic correlated geometric (I-SNCG) operator. These operators not only consider the importance of elements or their ordered positions, but also take into account the interaction phenomena among decision criteria or their ordered positions under multiple decision-makers. Moreover, we present a detailed analysis of the I-SNCA and I-SNCG operators, including the properties of idempotency, commutativity and monotonicity, and study the relationships among the proposed operators and existing simplified neutrosophic aggregation operators. To handle multi-criteria group decision-making (MCGDM) situations in which the weights of criteria and decision-makers are usually correlated and the criterion values are expressed as SNNs, an approach is established based on the I-SNCA operator. Finally, a numerical example is presented to demonstrate the proposed approach and to verify its effectiveness and practicality.
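    As a crisp-valued illustration of the correlation modelling behind these operators, the sketch below implements the plain discrete Choquet integral with respect to a fuzzy measure. The full I-SNCA operator additionally handles induced ordering and SNN arithmetic, which are omitted here; criteria names and measure values are invented for the example.

```python
def choquet(values, mu):
    """Discrete Choquet integral of `values` w.r.t. fuzzy measure `mu`.

    values: dict criterion -> score in [0, 1]
    mu:     dict frozenset-of-criteria -> measure, monotone, mu[empty] = 0
    """
    order = sorted(values, key=values.get, reverse=True)  # descending scores
    total, prev = 0.0, frozenset()
    for c in order:
        cur = prev | {c}
        total += values[c] * (mu[cur] - mu[prev])  # marginal measure weight
        prev = cur
    return total

values = {"cost": 0.8, "quality": 0.6}
mu = {frozenset(): 0.0,
      frozenset({"cost"}): 0.4,
      frozenset({"quality"}): 0.5,
      frozenset({"cost", "quality"}): 1.0}  # 1.0 > 0.4 + 0.5: criteria reinforce
print(round(choquet(values, mu), 4))  # → 0.68
```

    With an additive measure the Choquet integral collapses to an ordinary weighted average; the non-additive measure above is what captures interaction between criteria.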

  17. IGA: A Simplified Introduction and Implementation Details for Finite Element Users

    NASA Astrophysics Data System (ADS)

    Agrawal, Vishal; Gautam, Sachin S.

    2018-05-01

    Isogeometric analysis (IGA) is a recently introduced technique that employs Non-Uniform Rational B-Splines (NURBS), a core Computer-Aided Design (CAD) technology, to bridge the substantial bottleneck between the CAD and finite element analysis (FEA) fields. The direct transfer of exact CAD models into the analysis alleviates the issues originating from geometrical discontinuities and thus significantly reduces the design-to-analysis time in comparison to the traditional FEA technique. Since its origination, research in the field of IGA has been accelerating, and the method has been applied to various problems. However, employing CAD tools in the area of FEA requires adapting the existing implementation procedure to the framework of IGA. Moreover, using IGA requires in-depth knowledge of both the CAD and FEA fields, which can be overwhelming for a beginner. Hence, in this paper, a simplified introduction and the implementation details needed to incorporate the NURBS-based IGA technique within an existing FEA code are presented. It is shown that, with few modifications, the standard code structure of FEA can be adapted for IGA. For a clear and concise explanation of these modifications, a step-by-step implementation of a benchmark problem, a plate with a circular hole under in-plane tension, is included.
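    The NURBS machinery that IGA borrows from CAD ultimately reduces to evaluating B-spline basis functions via the Cox–de Boor recursion and rationalizing them with weights. A minimal sketch (function names and the example knot vector are our own, not from the paper):

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion: i-th B-spline basis function of degree p at u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    total = 0.0
    denom = knots[i + p] - knots[i]
    if denom > 0.0:  # skip zero-length knot spans
        total += (u - knots[i]) / denom * bspline_basis(i, p - 1, u, knots)
    denom = knots[i + p + 1] - knots[i + 1]
    if denom > 0.0:
        total += (knots[i + p + 1] - u) / denom * bspline_basis(i + 1, p - 1, u, knots)
    return total

def nurbs_basis(i, p, u, knots, weights):
    """Rational (NURBS) basis: weighted B-splines normalized to sum to one."""
    den = sum(w * bspline_basis(j, p, u, knots) for j, w in enumerate(weights))
    return weights[i] * bspline_basis(i, p, u, knots) / den

# Open knot vector, degree 2 -> four basis functions on [0, 1]
knots = [0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0]
vals = [bspline_basis(i, 2, 0.25, knots) for i in range(4)]
print(vals, sum(vals))  # basis functions form a partition of unity: sum is 1.0
```

    In an IGA code these basis functions take the place of the Lagrange shape functions in the standard FEA element loop, which is why the adaptation requires only localized modifications.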

  18. Performance modeling and valuation of snow-covered PV systems: examination of a simplified approach to decrease forecasting error.

    PubMed

    Bosman, Lisa B; Darling, Seth B

    2018-06-01

    The advent of modern solar energy technologies can reduce the costs of energy consumption at the global, national, and regional levels, ultimately spanning stakeholders from governmental entities to utility companies, corporations, and residential homeowners. For stakeholders in four-season climates, accurately accounting for snow-related energy losses is important for effectively predicting photovoltaic energy generation and valuation. This paper examines a new, simplified approach to decreasing snow-related forecasting error, in comparison with current solar energy performance models. The new method is intended to give model designers, and ultimately users, a better understanding of the return on investment for solar energy systems located in snowy environments. It is validated using two solar energy systems located near Green Bay, WI, USA: a 3.0-kW micro-inverter system and a 13.2-kW central-inverter system. Both systems were unobstructed, facing south, and set at a tilt of 26.56°. Data were collected from May 2014 (micro-inverter system) and October 2014 (central-inverter system) through January 2018. In comparison with reference industry-standard solar energy prediction applications (PVWatts and PVsyst), the new method yields mean absolute percent errors per kilowatt hour that are lower by 0.039% and 0.055% for the micro-inverter and central-inverter systems, respectively. The statistical analysis supports incorporating this new method into freely available, online, up-to-date prediction applications such as PVWatts and PVsyst.
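    The error metric behind this comparison can be sketched as a plain mean absolute percent error over paired predicted and measured energy totals. The readings below are invented for illustration and are not the paper's data.

```python
def mean_absolute_percent_error(predicted_kwh, measured_kwh):
    """MAPE (in percent) over paired energy totals, e.g. monthly kWh."""
    assert len(predicted_kwh) == len(measured_kwh) and len(measured_kwh) > 0
    return 100.0 * sum(abs(p - m) / m
                       for p, m in zip(predicted_kwh, measured_kwh)) / len(measured_kwh)

measured  = [310.0, 295.0, 402.0]  # hypothetical monthly kWh readings
predicted = [300.0, 310.0, 390.0]  # model output for the same months
print(round(mean_absolute_percent_error(predicted, measured), 2))  # → 3.77
```

    A snow-loss correction improves a model precisely by shrinking this statistic on winter months, which is how the two systems in the study were scored against PVWatts and PVsyst.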

  19. Laboratory longitudinal diffusion tests: 1. Dimensionless formulations and validity of simplified solutions

    NASA Astrophysics Data System (ADS)

    Takeda, M.; Nakajima, H.; Zhang, M.; Hiratsuka, T.

    2008-04-01

    To obtain reliable diffusion parameters from diffusion testing, multiple experiments should not only be cross-checked but the internal consistency of each experiment should also be verified. In through- and in-diffusion tests with solution reservoirs, interpretation of the different test phases often makes use of simplified analytical solutions. This study explores the feasibility of steady-, quasi-steady-, equilibrium- and transient-state analyses using simplified analytical solutions with respect to (i) the conditions under which each analytical solution is valid, (ii) the potential error, and (iii) the experimental time. For generality, a series of numerical analyses is performed using unified dimensionless parameters, and the results are all related to the dimensionless reservoir volume (DRV), which includes only the sorption parameter as an unknown. The above factors can therefore be investigated on the basis of the sorption properties of the tested material and/or tracer. The main findings are that steady-, quasi-steady- and equilibrium-state analyses are applicable when the tracer is not highly sorptive. However, quasi-steady- and equilibrium-state analyses become inefficient or impractical compared with steady-state analysis when the tracer is non-sorbing and the material porosity is very low. Systematic and comprehensive reformulation of the analytical models enables comparison of experimental times between different test methods, as well as study of the applicability and potential error of each test interpretation. These results can be applied in designing, performing, and interpreting diffusion experiments by deducing the DRV from the available information for the target material and tracer, combined with the results of this study.
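    One simplified interpretation of this kind is the classic steady-state (time-lag) analysis of a through-diffusion test: the slope of the cumulative-mass curve gives the effective diffusion coefficient, and the time-axis intercept gives the capacity factor. The sketch below uses this textbook formulation with illustrative values and our own variable names; it is not the paper's specific formulation.

```python
def through_diffusion_params(slope, t_lag, L, A, c0):
    """Steady-state (time-lag) analysis of a through-diffusion test.

    slope: steady-state rate of cumulative diffused mass [kg/s]
    t_lag: time-axis intercept of the steady-state line [s]
    L, A:  sample thickness [m] and cross-sectional area [m^2]
    c0:    constant source-reservoir concentration [kg/m^3]
    Returns (De, alpha): effective diffusion coefficient [m^2/s]
    and rock capacity factor (porosity plus sorption term) [-].
    """
    De = slope * L / (A * c0)          # from the steady-state flux
    alpha = 6.0 * De * t_lag / L**2    # from the time lag
    return De, alpha

De, alpha = through_diffusion_params(slope=1.0e-9, t_lag=1000.0,
                                     L=0.01, A=1.0e-3, c0=1.0)
print(De, alpha)  # ~1e-8 m^2/s, ~0.6
```

    The capacity factor recovered here plays the role of the sorption term entering the DRV, which is why the DRV can be deduced from the sorption properties of the material and tracer.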

  20. Methods of predicting aggregate voids.

    DOT National Transportation Integrated Search

    2013-03-01

    Percent voids in combined aggregates vary significantly. Simplified methods of predicting aggregate voids were studied to determine the feasibility of a range of gradations using aggregates available in Kansas. The 0.45 Power Curve Void Predictio...
