Sample records for "methods show considerable"

  1. Survey Shows Variation in Ph.D. Methods Training.

    ERIC Educational Resources Information Center

    Steeves, Leslie; And Others

    1983-01-01

    Reports on a 1982 survey of journalism graduate studies indicating considerable variation in research methods requirements and emphases in 23 universities offering doctoral degrees in mass communication. (HOD)

  2. On finite element methods for the Helmholtz equation

    NASA Technical Reports Server (NTRS)

    Aziz, A. K.; Werschulz, A. G.

    1979-01-01

    The numerical solution of the Helmholtz equation is considered via finite element methods. A two-stage method which gives the same accuracy in the computed gradient as in the computed solution is discussed. Error estimates for the method using a newly developed proof are given, and the computational considerations which show this method to be computationally superior to previous methods are presented.

  3. An evaluation of authentication methods for smartphone based on users’ preferences

    NASA Astrophysics Data System (ADS)

    Sari, P. K.; Ratnasari, G. S.; Prasetio, A.

    2016-04-01

    This study discusses smartphone screen lock preferences across several types of authentication methods. The purpose is to determine user behaviour based on perceived security and convenience, as well as the preferences for different types of authentication methods. The variables used are the considerations for locking the screen and the types of authentication methods. The population consists of smartphone users, with a total sample of 400 respondents drawn by a nonprobability sampling method. The data analysis method used is descriptive analysis. The results showed that convenience is still the major consideration for locking smartphone screens. The majority of users chose the pattern unlock as the most convenient method to use. Meanwhile, fingerprint unlock is perceived by users as the most secure method and as the method they would choose to use in the future.

  4. Technical Considerations for Improvement of USAF Operational Training, Testing and Evaluation (OTT and E)

    DTIC Science & Technology

    1975-06-30

    Table-of-contents fragments: EW scoring considerations; risk versus benefit, voluntary and involuntary exposure; mapping of dollar loss. Text fragments: "…of this new method? Of this new jammer? Or this new tactic? Or this new airplane?" can all be answered by showing how aircraft losses were lessened, while … the approach does not present a cost basis to determine the dollar worth of safety improvements. Figure 3-7 shows the average dollar loss of …

  5. A Review of Web Information Seeking Research: Considerations of Method and Foci of Interest

    ERIC Educational Resources Information Center

    Martzoukou, Konstantina

    2005-01-01

    Introduction: This review shows that Web information seeking research suffers from inconsistencies in method and a lack of homogeneity in research foci. Background: Qualitative and quantitative methods are needed to produce a comprehensive view of information seeking. Studies also recommend observation as one of the most fundamental ways of…

  6. Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory

    PubMed Central

    Yuan, Kaijuan; Xiao, Fuyuan; Fei, Liguo; Kang, Bingyi; Deng, Yong

    2016-01-01

    Sensor data fusion plays an important role in fault diagnosis. Dempster–Shafer (D-S) evidence theory is widely used in fault diagnosis, since it is efficient at combining evidence from different sensors. However, when the evidence is highly conflicting, it may yield a counterintuitive result. To address this issue, a new method is proposed in this paper. Both the static sensor reliability and the dynamic sensor reliability are taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflicting evidence by assigning different weights to evidence according to sensor reliability. The proposed method has better performance in conflict management and fault diagnosis because the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion is presented to illustrate the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to the existing methods. PMID:26797611
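
    The weighting-and-combination step described above can be sketched in a few lines. The following is a minimal illustration of reliability-weighted averaging of basic probability assignments followed by Dempster's rule of combination, not the authors' exact algorithm; the fault hypotheses, mass values, and reliability weights are hypothetical.

    ```python
    def weighted_average_mass(masses, weights):
        """Average the sensors' basic probability assignments, weighted by reliability."""
        total = sum(weights)
        w = [wi / total for wi in weights]
        keys = set().union(*masses)
        return {k: sum(wi * m.get(k, 0.0) for wi, m in zip(w, masses)) for k in keys}

    def dempster_combine(m1, m2):
        """Dempster's rule of combination for two mass functions over the same frame."""
        combined, conflict = {}, 0.0
        for a, va in m1.items():
            for b, vb in m2.items():
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + va * vb
                else:
                    conflict += va * vb
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    # Hypothetical fault hypotheses and three sensor reports (mass functions).
    THETA = frozenset({"F1", "F2", "F3"})
    reports = [
        {frozenset({"F1"}): 0.6, frozenset({"F2"}): 0.3, THETA: 0.1},
        {frozenset({"F1"}): 0.5, frozenset({"F2"}): 0.4, THETA: 0.1},
        {frozenset({"F2"}): 0.7, frozenset({"F3"}): 0.2, THETA: 0.1},  # conflicting report
    ]
    reliability = [0.9, 0.8, 0.4]  # hypothetical combined static/dynamic reliabilities

    averaged = weighted_average_mass(reports, reliability)
    fused = averaged
    for _ in range(len(reports) - 1):  # combine the averaged evidence n - 1 times
        fused = dempster_combine(fused, averaged)
    print({"/".join(sorted(k)): round(v, 3) for k, v in fused.items()})
    ```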

  7. Conventional and improved cytotoxicity test methods of newly developed biodegradable magnesium alloys

    NASA Astrophysics Data System (ADS)

    Han, Hyung-Seop; Kim, Hee-Kyoung; Kim, Yu-Chan; Seok, Hyun-Kwang; Kim, Young-Yul

    2015-11-01

    Unique biodegradable property of magnesium has spawned countless studies to develop ideal biodegradable orthopedic implant materials in the last decade. However, due to the rapid pH change and extensive amount of hydrogen gas generated during biocorrosion, it is extremely difficult to determine the accurate cytotoxicity of newly developed magnesium alloys using the existing methods. Herein, we report a new method to accurately determine the cytotoxicity of magnesium alloys with varying corrosion rate while taking in-vivo condition into the consideration. For conventional method, extract quantities of each metal ion were determined using ICP-MS and the result showed that the cytotoxicity due to pH change caused by corrosion affected the cell viability rather than the intrinsic cytotoxicity of magnesium alloy. In physiological environment, pH is regulated and adjusted within normal pH (˜7.4) range by homeostasis. Two new methods using pH buffered extracts were proposed and performed to show that environmental buffering effect of pH, dilution of the extract, and the regulation of eluate surface area must be taken into consideration for accurate cytotoxicity measurement of biodegradable magnesium alloys.

  8. n-Gram-Based Indexing for Korean Text Retrieval.

    ERIC Educational Resources Information Center

    Lee, Joon Ho; Cho, Hyun Yang; Park, Hyouk Ro

    1999-01-01

    Discusses indexing methods in Korean text retrieval and proposes a new indexing method based on n-grams which can handle compound nouns effectively without dictionaries and complex linguistic knowledge. Experimental results show that n-gram-based indexing is considerably faster than morpheme-based indexing, and also provides better retrieval…
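
    As an illustration of the dictionary-free indexing idea (a generic character n-gram inverted index, not the authors' system), the sketch below builds a bigram index and ranks documents by shared query bigrams; the sample documents are hypothetical.

    ```python
    from collections import defaultdict

    def char_ngrams(text, n=2):
        """Overlapping character n-grams of a string, ignoring whitespace."""
        s = "".join(text.split())
        return [s[i:i + n] for i in range(len(s) - n + 1)]

    def build_index(docs, n=2):
        """Inverted index mapping each n-gram to the set of documents containing it."""
        index = defaultdict(set)
        for doc_id, text in docs.items():
            for gram in char_ngrams(text, n):
                index[gram].add(doc_id)
        return index

    def search(index, query, n=2):
        """Rank documents by the number of query n-grams they share."""
        scores = defaultdict(int)
        for gram in char_ngrams(query, n):
            for doc_id in index.get(gram, ()):
                scores[doc_id] += 1
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    docs = {1: "정보 검색 시스템", 2: "한국어 정보 처리", 3: "문자 기반 색인"}  # hypothetical documents
    index = build_index(docs)
    print(search(index, "정보 검색"))
    ```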

  9. Building Regression Models: The Importance of Graphics.

    ERIC Educational Resources Information Center

    Dunn, Richard

    1989-01-01

    Points out reasons for using graphical methods to teach simple and multiple regression analysis. Argues that a graphically oriented approach has considerable pedagogic advantages in the exposition of simple and multiple regression. Shows that graphical methods may play a central role in the process of building regression models. (Author/LS)

  10. Dynamic thiol/disulphide homeostasis in patients with basal cell carcinoma.

    PubMed

    Demirseren, Duriye Deniz; Cicek, Cagla; Alisik, Murat; Demirseren, Mustafa Erol; Aktaş, Akın; Erel, Ozcan

    2017-09-01

    The aim of this study is to measure and compare the dynamic thiol/disulphide homeostasis of patients with basal cell carcinoma and healthy subjects using a newly developed, original method. Thirty-four patients attending our outpatient clinic who were clinically and histopathologically diagnosed with nodular basal cell carcinoma, and 30 age- and gender-matched healthy individuals, were included in the study. Thiol/disulphide homeostasis parameters were measured with a novel automatic spectrophotometric method and the results were compared statistically. Serum native thiol and disulphide levels differed significantly between the patient and control groups (p = 0.028 and 0.039, respectively). Total thiol levels did not differ significantly (p = 0.094). Disulphide/native thiol ratios and native thiol/total thiol ratios also differed significantly (p = 0.012, 0.013, 0.010). Thiol/disulphide homeostasis in patients with basal cell carcinoma is altered such that disulphide decreases and thiols increase. The thiol/disulphide balance is likely to have a role in basal cell carcinoma pathogenesis.

  11. Recent statistical methods for orientation data

    NASA Technical Reports Server (NTRS)

    Batschelet, E.

    1972-01-01

    The application of statistical methods for determining the areas of animal orientation and navigation are discussed. The method employed is limited to the two-dimensional case. Various tests for determining the validity of the statistical analysis are presented. Mathematical models are included to support the theoretical considerations and tables of data are developed to show the value of information obtained by statistical analysis.
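
    The basic two-dimensional (circular) statistics referred to here are the mean direction, the mean resultant length, and the Rayleigh test of uniformity. The sketch below uses the standard textbook formulas with a common approximation for the Rayleigh p-value; the sample homing directions are hypothetical.

    ```python
    import math

    def circular_stats(angles_deg):
        """Mean direction, mean resultant length r, and Rayleigh test of uniformity."""
        n = len(angles_deg)
        c = sum(math.cos(math.radians(a)) for a in angles_deg) / n
        s = sum(math.sin(math.radians(a)) for a in angles_deg) / n
        r = math.hypot(c, s)                          # mean resultant length, 0..1
        mean_dir = math.degrees(math.atan2(s, c)) % 360.0
        z = n * r * r                                 # Rayleigh statistic
        p = math.exp(-z) * (1.0 + (2.0 * z - z * z) / (4.0 * n))  # first-order approximation
        return mean_dir, r, z, max(p, 0.0)

    # Hypothetical homing directions (degrees) of released animals.
    angles = [10, 20, 350, 15, 5, 30, 355, 25]
    mean_dir, r, z, p = circular_stats(angles)
    print(f"mean direction = {mean_dir:.1f} deg, r = {r:.3f}, Rayleigh z = {z:.2f}, p ~ {p:.4f}")
    ```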

  12. Fremdsprachenerwerb in einer individualisierten Lernsituation. Eine Beschreibung von Lernverhalten (Foreign Language Acquisition in an Individualized Learning Situation. A Description of Learning Behavior)

    ERIC Educational Resources Information Center

    Extra, G.

    1974-01-01

    The introduction reviews and compares the audiolingual and cognitive code-learning methods. An experiment was conducted using audiolingual methods to show that learning behavior diverges considerably from the expectations set up by that method. Several charts and diagrams present the analyzed results. (Text is in German.) See FL 507 969 for…

  13. [Screening for cancer - economic consideration and cost-effectiveness].

    PubMed

    Kjellberg, Jakob

    2014-06-09

    Cost-effectiveness analysis has become an accepted method to evaluate medical technology and allocate scarce health-care resources. Published decision analyses show that screening for cancer in general is cost-effective. However, cost-effectiveness analyses are only as good as the clinical data and the results are sensitive to the chosen methods and perspective of the analysis.

  14. Indispensable finite time corrections for Fokker-Planck equations from time series data.

    PubMed

    Ragwitz, M; Kantz, H

    2001-12-17

    The reconstruction of Fokker-Planck equations from observed time series data suffers strongly from finite sampling rates. We show that previously published results are degraded considerably by such effects. We present correction terms which yield a robust estimation of the diffusion terms, together with a novel method for one-dimensional problems. We apply these methods to time series data of local surface wind velocities, where the dependence of the diffusion constant on the state variable shows a different behavior than previously suggested.
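
    For context, the quantities being corrected are the conditional-moment (Kramers-Moyal) estimates of drift and diffusion. The sketch below shows the naive, uncorrected finite-sampling estimators on a synthetic Ornstein-Uhlenbeck series; it does not implement the correction terms derived in the paper, and all parameters are made up.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic Ornstein-Uhlenbeck series: dx = -x dt + sqrt(2 D) dW, with D = 0.5.
    dt, n, D = 0.01, 100_000, 0.5
    noise = np.sqrt(2.0 * D * dt) * rng.standard_normal(n - 1)
    x = np.empty(n)
    x[0] = 0.0
    for i in range(n - 1):
        x[i + 1] = x[i] - x[i] * dt + noise[i]

    def km_coefficients(x, dt, bins=30):
        """Naive (finite-dt) conditional-moment estimates of drift D1(x) and diffusion D2(x)."""
        dx = np.diff(x)
        edges = np.linspace(x.min(), x.max(), bins + 1)
        which = np.digitize(x[:-1], edges) - 1
        d1, d2 = np.full(bins, np.nan), np.full(bins, np.nan)
        for b in range(bins):
            sel = which == b
            if sel.sum() > 50:                  # require enough samples per bin
                d1[b] = dx[sel].mean() / dt
                d2[b] = (dx[sel] ** 2).mean() / (2.0 * dt)
        return d1, d2

    d1, d2 = km_coefficients(x, dt)
    print("median diffusion estimate:", round(float(np.nanmedian(d2)), 3), "(true D = 0.5)")
    ```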

  15. A Hydrothermal Study of Wachusett Reservoir with Considerations of Water Quality Management

    DTIC Science & Technology

    1989-05-01

    Table-of-contents fragments: current operational management techniques; copper toxicity and considerations for algicide use. Text fragments: …copper sulfate (CuSO4) is applied to the epilimnion of the reservoir; the method of treatment consists of dragging burlap sacks of the algicide crystal through the water… Figure 5.2 shows the application rate for the Fall of 1987, amounting to over 20 tons of algicide applied for the fall period; in addition to a sampling…

  16. Mutual information based feature selection for medical image retrieval

    NASA Astrophysics Data System (ADS)

    Zhi, Lijia; Zhang, Shaomin; Li, Yan

    2018-04-01

    In this paper, the authors propose a mutual information based method for lung CT image retrieval. The method is designed to adapt to different datasets and different retrieval tasks. For practical application, it avoids using a large amount of training data; instead, with a well-designed training process and robust fundamental features and measurements, the method achieves promising performance while keeping the training computation economical. Experimental results show that the method has potential practical value for routine clinical application.
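
    A minimal sketch of mutual-information-based feature ranking on discretized features is shown below. It is a generic illustration, not the authors' retrieval pipeline; the feature matrix, labels, and bin count are hypothetical.

    ```python
    import numpy as np

    def mutual_information(x, y):
        """Mutual information (nats) between two discrete 1-D arrays."""
        x_vals, x_idx = np.unique(x, return_inverse=True)
        y_vals, y_idx = np.unique(y, return_inverse=True)
        joint = np.zeros((len(x_vals), len(y_vals)))
        np.add.at(joint, (x_idx, y_idx), 1.0)
        joint /= joint.sum()
        px = joint.sum(axis=1, keepdims=True)
        py = joint.sum(axis=0, keepdims=True)
        nz = joint > 0
        return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

    def rank_features(features, labels, n_bins=8):
        """Rank feature columns by mutual information with the labels after equal-width binning."""
        scores = []
        for j in range(features.shape[1]):
            col = features[:, j]
            edges = np.linspace(col.min(), col.max(), n_bins + 1)[1:-1]
            scores.append((j, mutual_information(np.digitize(col, edges), labels)))
        return sorted(scores, key=lambda t: t[1], reverse=True)

    # Hypothetical image features: column 0 carries class information, the rest are noise.
    rng = np.random.default_rng(1)
    labels = rng.integers(0, 2, size=500)
    features = rng.normal(size=(500, 5))
    features[:, 0] += 2.0 * labels
    print(rank_features(features, labels)[:3])
    ```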

  17. Situational and Generalised Conduct Problems and Later Life Outcomes: Evidence from a New Zealand Birth Cohort

    ERIC Educational Resources Information Center

    Fergusson, David M.; Boden, Joseph M.; Horwood, L. John

    2009-01-01

    Background: There is considerable evidence suggesting that many children show conduct problems that are specific to a given context (home; school). What is less well understood is the extent to which children with situation-specific conduct problems show similar outcomes to those with generalised conduct problems. Methods: Data were gathered as…

  18. Assessment of alternative disposal methods to reduce greenhouse gas emissions from municipal solid waste in India.

    PubMed

    Yedla, Sudhakar; Sindhu, N T

    2016-06-01

    Open dumping, the most commonly practiced method of solid waste disposal in Indian cities, creates serious environment and economic challenges, and also contributes significantly to greenhouse gas emissions. The present article attempts to analyse and identify economically effective ways to reduce greenhouse gas emissions from municipal solid waste. The article looks at the selection of appropriate methods for the control of methane emissions. Multivariate functional models are presented, based on theoretical considerations as well as the field measurements to forecast the greenhouse gas mitigation potential for all the methodologies under consideration. Economic feasibility is tested by calculating the unit cost of waste disposal for the respective disposal process. The purpose-built landfill system proposed by Yedla and Parikh has shown promise in controlling greenhouse gas and saving land. However, these studies show that aerobic composting offers the optimal method, both in terms of controlling greenhouse gas emissions and reducing costs, mainly by requiring less land than other methods. © The Author(s) 2016.

  19. Elucidating the DEP phenomena using a volumetric polarization approach with consideration of the electric double layer

    PubMed Central

    Brcka, Jozef; Faguet, Jacques; Zhang, Guigen

    2017-01-01

    Dielectrophoretic (DEP) phenomena have been explored to great success for various applications like particle sorting and separation. To elucidate the underlying mechanism and quantify the DEP force experienced by particles, the point-dipole and Maxwell Stress Tensor (MST) methods are commonly used. However, both methods exhibit their own limitations. For example, the point-dipole method is unable to fully capture the essence of particle-particle interactions and the MST method is not suitable for particles of non-homogeneous property. Moreover, both methods fare poorly when it comes to explaining DEP phenomena such as the dependence of crossover frequency on medium conductivity. To address these limitations, the authors have developed a new method, termed volumetric-integration method, with the aid of computational implementation, to reexamine the DEP phenomena, elucidate the governing mechanism, and quantify the DEP force. The effect of an electric double layer (EDL) on particles' crossover behavior is dealt with through consideration of the EDL structure along with surface ionic/molecular adsorption, unlike in other methods, where the EDL is accounted for through simply assigning a surface conductance value to the particles. For validation, by comparing with literature experimental data, the authors show that the new method can quantify the DEP force on not only homogeneous particles but also non-homogeneous ones, and predict particle-particle interactions fairly accurately. Moreover, the authors also show that the predicted dependence of crossover frequency on medium conductivity and particle size agrees very well with experimental measurements. PMID:28396710

  20. Influence of forest and rangeland management on anadromous fish habitat in Western North America: economic considerations.

    Treesearch

    Meehan, William R., tech. ed.

    1985-01-01

    Although many effects of forest and rangeland management on anadromous fisheries are difficult to measure, economic methods for the evaluation of costs and benefits can be helpful. Such methods can be used to address questions of equity as well as efficiency. Evaluations of equity can show who bears the costs and who captures the benefits of management actions, but...

  1. Effects of Antismoking Advertising–Based Beliefs on Adult Smokers’ Consideration of Quitting

    PubMed Central

    Netemeyer, Richard G.; Andrews, J. Craig; Burton, Scot

    2005-01-01

    Objectives. We examined whether specific antismoking advertising–based beliefs regarding the addictiveness of smoking, the dangers of environmental tobacco smoke, and the tobacco industry’s use of deceptive advertising practices are associated with adult smokers’ consideration of quitting. We also assessed whether interactions between such beliefs and having children living in the home were associated with consideration of quitting. Methods. We used analyses of smokers’ responses to a telephone survey conducted after completion of the Wisconsin Anti-Tobacco Media Campaign to test hypotheses associated with our study objectives. Results. Results indicated that advertising-based beliefs regarding smoking addictiveness and the dangers of environmental tobacco smoke were associated with consideration of quitting. The findings also showed that consideration of quitting was positively affected by the interaction between number of children living at home and advertising-based beliefs about deceptive tobacco industry advertising practices designed to induce people to smoke. Conclusions. Creating advertisements that target specific antismoking beliefs may be the most effective approach to enhancing consideration of quitting among adult smokers, particularly those with children living at home. PMID:15914834

  2. A Novel Approach to Enhance the Mechanical Strength and Electrical and Thermal Conductivity of Cu-GNP Nanocomposites

    NASA Astrophysics Data System (ADS)

    Saboori, Abdollah; Pavese, Matteo; Badini, Claudio; Fino, Paolo

    2018-01-01

    Copper/graphene nanoplatelet (GNP) nanocomposites were produced by a wet mixing method followed by a classical powder metallurgy technique. A qualitative evaluation of the structure of graphene after mixing indicated that wet mixing is an appropriate dispersion method. Thereafter, the effects of two post-processing techniques, repressing-annealing and hot isostatic pressing (HIP), on density, interfacial bonding, hardness, and thermal and electrical conductivity of the nanocomposites were analyzed. Density evaluations showed that the relative density of specimens increased after the post-processing steps, so that after HIPing almost full densification was achieved. The Vickers hardness of specimens increased considerably after the post-processing techniques. The thermal conductivity of pure copper was very low for the as-sintered samples containing 2 to 3 pct porosity and increased considerably to a maximum value for the HIPed samples, which contained only 0.1 to 0.2 pct porosity. Electrical conductivity measurements showed that electrical conductivity decreased with increasing graphene content.

  3. Analysis and modification of theory for impact of seaplanes on water

    NASA Technical Reports Server (NTRS)

    Mayo, Wilbur L

    1945-01-01

    An analysis of available theory on seaplane impact and a proposed modification thereto are presented. In previous methods the overall momentum of the float and virtual mass has been assumed to remain constant during the impact but the present analysis shows that this assumption is rigorously correct only when the resultant velocity of the float is normal to the keel. The proposed modification chiefly involves consideration of the fact that forward velocity of the seaplane float causes momentum to be passed into the hydrodynamic downwash (an action that is the entire consideration in the case of the planing float) and consideration of the fact that, for an impact with trim, the rate of penetration is determined not only by the velocity component normal to the keel but also by the velocity component parallel to the keel, which tends to reduce the penetration. Experimental data for planing, oblique impact, and vertical drop are used to show that the accuracy of the proposed theory is good.

  4. The free energy landscape of small peptides as obtained from metadynamics with umbrella sampling corrections

    PubMed Central

    Babin, Volodymyr; Roland, Christopher; Darden, Thomas A.; Sagui, Celeste

    2007-01-01

    There is considerable interest in developing methodologies for the accurate evaluation of free energies, especially in the context of biomolecular simulations. Here, we report on a reexamination of the recently developed metadynamics method, which is explicitly designed to probe “rare events” and areas of phase space that are typically difficult to access with a molecular dynamics simulation. Specifically, we show that the accuracy of the free energy landscape calculated with the metadynamics method may be considerably improved when combined with umbrella sampling techniques. As test cases, we have studied the folding free energy landscape of two prototypical peptides: Ace-(Gly)2-Pro-(Gly)3-Nme in vacuo and trialanine solvated by both implicit and explicit water. The method has been implemented in the classical biomolecular code AMBER and is to be distributed in the next scheduled release of the code. © 2006 American Institute of Physics. PMID:17144742
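
    As a toy illustration of the bias-deposition idea behind metadynamics (not the AMBER implementation or the umbrella-sampling correction described above), the sketch below runs overdamped Langevin dynamics in a one-dimensional double well, deposits Gaussian hills along the collective variable, and recovers the free energy from the accumulated bias; all parameters are made up.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def potential_grad(x):
        """Double-well potential U(x) = (x^2 - 1)^2 and its derivative."""
        return (x**2 - 1.0) ** 2, 4.0 * x * (x**2 - 1.0)

    kT, dt, gamma = 0.3, 1e-3, 1.0
    hill_height, hill_width, stride = 0.1, 0.1, 200
    n_steps = 100_000

    def bias_and_grad(x, centers):
        """Value and gradient of the sum of deposited Gaussian hills at x."""
        if not centers:
            return 0.0, 0.0
        c = np.asarray(centers)
        g = hill_height * np.exp(-((x - c) ** 2) / (2.0 * hill_width**2))
        return float(g.sum()), float((-(x - c) / hill_width**2 * g).sum())

    x, centers = -1.0, []
    for step in range(n_steps):
        _, du = potential_grad(x)
        _, dv = bias_and_grad(x, centers)
        x += -(du + dv) / gamma * dt + np.sqrt(2.0 * kT * dt / gamma) * rng.standard_normal()
        if step % stride == 0:
            centers.append(x)             # deposit a hill at the current CV value

    # Free energy estimate: F(s) is approximately -V_bias(s) + constant.
    grid = np.linspace(-1.5, 1.5, 151)
    f_est = -np.array([bias_and_grad(s, centers)[0] for s in grid])
    f_est -= f_est.min()
    barrier_est = f_est[np.argmin(np.abs(grid))] - f_est[np.argmin(np.abs(grid + 1.0))]
    print("estimated barrier height from the bias:", round(float(barrier_est), 2),
          "(exact value for this potential: 1.0)")
    ```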

  5. [Current results of nitrogen cryotherapy in eyelid basaliomas].

    PubMed

    Buschmann, W; Linnert, D; Wünsch, P H; Schmutzler, M

    1986-10-01

    By means of long-term follow-ups of large numbers of patients it has been established that nitrogen cryotherapy for lid basaliomas produces very good results with regard to the cure rate, as well as having considerable advantages over other treatment methods. In contrast to other authors we did not employ the spray method, but a very high-performance nitrogen cryo unit with a closed probe. Experimental measurements showed that this unit is capable of generating at least the same temperatures as the spray method. The cryoapplication technique is described. The cure rate and causes of recurrence in the first series of 84 patients treated from 1979 to 1983 were evaluated by long-term follow-up. If cryobiological principles are observed and the recommended application technique is adhered to, the same cure rate can be achieved as with the spray method and other forms of treatment. There are considerable functional and cosmetic advantages, also with regard to the patency of the lacrimal ducts.

  6. The equivalence of Darmois-Israel and distributional method for thin shells in general relativity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mansouri, R.; Khorrami, M.

    1996-11-01

    A distributional method to solve Einstein's field equations for thin shells is formulated. The familiar field equations and jump conditions of the Darmois-Israel formalism are derived. A careful analysis of the Bianchi identities shows that, for the cases under consideration, they make sense as distributions and lead to the jump conditions of the Darmois-Israel formalism. © 1996 American Institute of Physics.

  7. The role of interest and inflation rates in life-cycle cost analysis

    NASA Technical Reports Server (NTRS)

    Eisenberger, I.; Remer, D. S.; Lorden, G.

    1978-01-01

    The effect of projected interest and inflation rates on life cycle cost calculations is discussed and a method is proposed for making such calculations which replaces these rates by a single parameter. Besides simplifying the analysis, the method clarifies the roles of these rates. An analysis of historical interest and inflation rates from 1950 to 1976 shows that the proposed method can be expected to yield very good projections of life cycle cost even if the rates themselves fluctuate considerably.
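
    The replacement of interest and inflation rates by a single parameter amounts to discounting with a combined (real) rate k = (1 + interest)/(1 + inflation) - 1. The sketch below checks this equivalence on hypothetical numbers; it is an illustration of the idea, not the authors' exact formulation.

    ```python
    def life_cycle_cost(annual_cost, years, interest, inflation):
        """Present value of a recurring cost escalated by inflation and discounted at `interest`."""
        return sum(annual_cost * (1 + inflation) ** t / (1 + interest) ** t
                   for t in range(1, years + 1))

    def life_cycle_cost_single_rate(annual_cost, years, interest, inflation):
        """Same calculation using the single combined rate k = (1 + interest) / (1 + inflation) - 1."""
        k = (1 + interest) / (1 + inflation) - 1
        return sum(annual_cost / (1 + k) ** t for t in range(1, years + 1))

    # Hypothetical recurring cost of 100 (today's dollars) per year for 15 years,
    # with 7% interest and 5% inflation.
    print(round(life_cycle_cost(100, 15, 0.07, 0.05), 2))
    print(round(life_cycle_cost_single_rate(100, 15, 0.07, 0.05), 2))  # identical result
    ```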

  8. Acceleration of low order finite element computation with GPUs (Invited)

    NASA Astrophysics Data System (ADS)

    Knepley, M. G.

    2010-12-01

    Considerable effort has been focused on GPU acceleration of high-order spectral element methods and discontinuous Galerkin finite element methods. However, these methods are not universally applicable, and much of the existing FEM software base employs low-order methods. In this talk, we present a formulation of FEM, using the PETSc framework from ANL, which is amenable to GPU acceleration even at very low order. In addition, using the FEniCS system for FEM, we show that the relevant kernels can be automatically generated and optimized using a symbolic manipulation system.

  9. 78 FR 52854 - Use of Differential Income Stream as an Application of the Income Method and as a Consideration...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-27

    ... Differential Income Stream as an Application of the Income Method and as a Consideration in Assessing the Best Method AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Final regulations and removal of... differential income stream as a consideration in assessing the best method in connection with a cost sharing...

  10. 76 FR 80309 - Use of Differential Income Stream as an Application of the Income Method and as a Consideration...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-23

    ... Use of Differential Income Stream as an Application of the Income Method and as a Consideration in Assessing the Best Method AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Notice of proposed... guidance on how an analysis of the differential income stream may provide a best method consideration for...

  11. Printed Arabic optical character segmentation

    NASA Astrophysics Data System (ADS)

    Mohammad, Khader; Ayyesh, Muna; Qaroush, Aziz; Tumar, Iyad

    2015-03-01

    Considerable progress has been achieved in recognition techniques for many non-Arabic scripts. In contrast, comparatively little research effort has been devoted to Arabic characters. In any Optical Character Recognition (OCR) system, segmentation is usually the essential stage, to which an extensive portion of the processing is devoted and to which a considerable share of recognition errors is attributed. In this research, a novel segmentation approach for machine-printed Arabic text with diacritics is proposed. The proposed method reduces computation and errors, gives a clear description of the sub-word, and has advantages over skeleton-based approaches, in which character data and information can be lost. Initial evaluation and testing of the proposed method, developed in MATLAB, show promising results of 98.7%.

  12. Micro-scale temperature measurement method using fluorescence polarization

    NASA Astrophysics Data System (ADS)

    Tatsumi, K.; Hsu, C.-H.; Suzuki, A.; Nakabe, K.

    2016-09-01

    A novel method that can measure fluid temperature at the microscopic scale by measuring fluorescence polarization is described in this paper. The measurement technique is not influenced by the quenching effects that appear in conventional LIF methods and is believed to show higher reliability in temperature measurements. Experiments were performed using a microchannel flow and fluorescent molecular probes, and the effects of fluid temperature, fluid viscosity, measurement time, and solution pH on the measured degree of fluorescence polarization are discussed to clarify the basic characteristics of the present method. The results showed that fluorescence polarization is considerably less sensitive to these quenching factors. A good correlation with the fluid temperature, on the other hand, was obtained and agreed well with theoretical values, confirming the feasibility of the method.

  13. Electronic-nose devices - Potential for noninvasive early disease-detection applications

    Treesearch

    Alphus Dan Wilson

    2017-01-01

    Significant progress in the development of portable electronic devices is showing considerable promise to facilitate clinical diagnostic processes. The increasing global trend of shifts in healthcare policies and priorities toward shortening and improving the effectiveness of diagnostic procedures by utilizing non-invasive methods should provide multiple benefits of...

  14. Correcting a Metacognitive Error: Feedback Increases Retention of Low-Confidence Correct Responses

    ERIC Educational Resources Information Center

    Butler, Andrew C.; Karpicke, Jeffrey D.; Roediger, Henry L., III

    2008-01-01

    Previous studies investigating posttest feedback have generally conceptualized feedback as a method for correcting erroneous responses, giving virtually no consideration to how feedback might promote learning of correct responses. Here, the authors show that when correct responses are made with low confidence, feedback serves to correct this…

  15. Destructive Leadership Behaviors and Workplace Attitudes in Schools

    ERIC Educational Resources Information Center

    Woestman, Daniel S.; Wasonga, Teresa Akinyi

    2015-01-01

    The study investigated destructive leadership behaviors (DLBs) and their influence on K-12 workplace attitudes (subordinate consideration for leaving their job, job satisfaction, and levels of stress). Quantitative survey method was used to gather data from experienced professional educators. Analyses of data show that the practice of DLB exists…

  16. Early Childhood Aetiology of Mental Health Problems: A Longitudinal Population-Based Study

    ERIC Educational Resources Information Center

    Bayer, Jordana K.; Hiscock, Harriet; Ukoumunne, Obioha C.; Price, Anna; Wake, Melissa

    2008-01-01

    Background: Mental health problems comprise an international public health issue affecting up to 20% of children and show considerable stability. We aimed to identify child, parenting, and family predictors from infancy in the development of externalising and internalising behaviour problems by age 3 years. Methods: "Design"…

  17. Operational considerations for laminar flow aircraft

    NASA Technical Reports Server (NTRS)

    Maddalon, Dal V.; Wagner, Richard D.

    1986-01-01

    Considerable progress has been made in the development of laminar flow technology for commercial transports during the NASA Aircraft Energy Efficiency (ACEE) laminar flow program. Practical, operational laminar flow control (LFC) systems have been designed, fabricated, and are undergoing flight testing. New materials, fabrication methods, analysis techniques, and design concepts were developed and show much promise. The laminar flow control systems now being flight tested on the NASA Jetstar aircraft are complemented by natural laminar flow flight tests to be accomplished with the F-14 variable-sweep transition flight experiment. An overview of some operational aspects of this exciting program is given.

  18. The relationship between trait self-control, consideration for future consequence and organizational citizenship behavior among Chinese employees.

    PubMed

    Wang, Yu-Jie; Dou, Kai; Tang, Zhi-Wen

    2017-01-01

    Organizational citizenship behavior (OCB) is important to the development of an organization. Research into factors that foster OCB and the underlying processes are therefore substantially crucial. The current study aimed to test the association between trait self-control and OCB and the mediating role of consideration for future consequence. Four hundred and ninety-four Chinese employees (275 men, 219 women) took part in the study. Participants completed a battery of self-report measures online that assessed trait self-control, tendencies of consideration of future consequence, and organizational citizenship behavior. Path analysis was conducted and bootstrapping technique (N = 5000), a resampling method that is asymptotically more accurate than the standard intervals using sample variance and assumptions of normality, was used to judge the significance of the mediation. Results of path analysis showed that trait self-control was positively related to OCB. More importantly, the "trait self-control-OCB" link was mediated by consideration of future consequence-future, but not by consideration of future consequence-immediate. Employees with high trait self-control engage in more organizational citizenship behavior and this link can be partly explained by consideration of future consequence-future.

  19. Optical arc sensor using energy harvesting power source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Kyoo Nam, E-mail: knchoi@inu.ac.kr; Rho, Hee Hyuk, E-mail: rdoubleh0902@inu.ac.kr

    Wireless sensors without an external power supply have gained considerable attention due to their convenience in both installation and operation. An optical arc detecting sensor equipped with a self-sustaining power supply using an energy harvesting method was investigated. Continuous energy harvesting was attempted using a thermoelectric generator to supply standby power on the microampere scale and operating power on the milliampere scale. A Peltier module with a heat sink was used as a high-efficiency electricity generator. The optical arc detecting sensor with a hybrid filter showed insensitivity to fluorescent and incandescent lamps under simulated distribution panel conditions. Signal processing using an integrating function showed selective arc discharge detection capability at different arc energy levels, with a resolution below a 17 J energy difference, unaffected by bursting arc waveforms. The sensor showed potential for application as an arc discharge detecting sensor in a power distribution panel. An experiment with the proposed continuous energy harvesting method using thermoelectric power also showed its potential as a self-sustainable power source for a remote sensor.

  20. Optical arc sensor using energy harvesting power source

    NASA Astrophysics Data System (ADS)

    Choi, Kyoo Nam; Rho, Hee Hyuk

    2016-06-01

    Wireless sensors without an external power supply have gained considerable attention due to their convenience in both installation and operation. An optical arc detecting sensor equipped with a self-sustaining power supply using an energy harvesting method was investigated. Continuous energy harvesting was attempted using a thermoelectric generator to supply standby power on the microampere scale and operating power on the milliampere scale. A Peltier module with a heat sink was used as a high-efficiency electricity generator. The optical arc detecting sensor with a hybrid filter showed insensitivity to fluorescent and incandescent lamps under simulated distribution panel conditions. Signal processing using an integrating function showed selective arc discharge detection capability at different arc energy levels, with a resolution below a 17 J energy difference, unaffected by bursting arc waveforms. The sensor showed potential for application as an arc discharge detecting sensor in a power distribution panel. An experiment with the proposed continuous energy harvesting method using thermoelectric power also showed its potential as a self-sustainable power source for a remote sensor.

  1. Stability over Time of Different Methods of Estimating School Performance

    ERIC Educational Resources Information Center

    Dumay, Xavier; Coe, Rob; Anumendem, Dickson Nkafu

    2014-01-01

    This paper aims to investigate how stability varies with the approach used in estimating school performance in a large sample of English primary schools. The results show that (a) raw performance is considerably more stable than adjusted performance, which in turn is slightly more stable than growth model estimates; (b) schools' performance…

  2. The Use of Mini-projects in the Teaching of Geotechnics to Civil Engineering Undergraduates.

    ERIC Educational Resources Information Center

    Anderson, W. F.; And Others

    1985-01-01

    Geotechnics (which encompasses soil and rock mechanics, engineering geology, foundation design, and ground engineering methods) is a major component of virtually all civil engineering courses. Shows how mini-projects are used to teach this subject. The format of projects, development of presentation skills, and assessment considerations are discussed.…

  3. A comparison of juice extraction methods in the pungency measurement of onion bulbs.

    PubMed

    Yoo, Kil Sun; Lee, Eun Jin; Hamilton, Brian K; Patil, Bhimanagouda S

    2016-02-01

    Onion pungency is estimated by measuring the pyruvic acid content in juice extracted from fresh tissues. We compared pyruvic acid content and its variation in the juices extracted by the pressing, maceration, blending with no water, or blending with water (blend/water) methods. There were considerable differences in the pyruvic acid content and coefficient of variation (CV) among these methods, and there was an interaction between the onion cultivars and the juice extraction methods. The pressing method showed over 30% CV in the quartered or composite samples. The blend/water method showed the greatest pyruvic acid content in the short-day-type ('TG1015Y' and 'Texas Early White') onions, while the pressing method showed the greatest pyruvic acid content in the long-day-type onions. The blend/water method gave the same pyruvic acid content at water ratios between 1:1 and 1:4 (w/w). The blending (no water) method had the highest correlation, followed by the maceration method. The lowest correlations were found with the pressing method and the blend/water method. Complete homogenisation of tissues with 1:1 or greater ratios of water was necessary for maximum consistency and full development of the pyruvic acid reaction in onion pungency measurement. © 2015 Society of Chemical Industry.
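
    The coefficient of variation used above to compare extraction methods is a simple ratio of standard deviation to mean. A short sketch with hypothetical replicate pyruvic acid readings follows.

    ```python
    import statistics

    def coefficient_of_variation(values):
        """CV (%) = 100 * sample standard deviation / mean."""
        return 100.0 * statistics.stdev(values) / statistics.mean(values)

    # Hypothetical replicate pyruvic acid readings (micromol per mL juice) for two methods.
    pressing = [4.1, 6.0, 3.2, 5.5, 7.1]
    blend_water = [5.0, 5.3, 4.8, 5.1, 5.2]
    print(f"pressing CV    = {coefficient_of_variation(pressing):.1f}%")
    print(f"blend/water CV = {coefficient_of_variation(blend_water):.1f}%")
    ```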

  4. Elastic Face, An Anatomy-Based Biometrics Beyond Visible Cue

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsap, L V; Zhang, Y; Kundu, S J

    2004-03-29

    This paper describes a face recognition method that is designed based on the consideration of anatomical and biomechanical characteristics of facial tissues. Elastic strain pattern inferred from face expression can reveal an individual's biometric signature associated with the underlying anatomical structure, and thus has the potential for face recognition. A method based on the continuum mechanics in finite element formulation is employed to compute the strain pattern. Experiments show very promising results. The proposed method is quite different from other face recognition methods and both its advantages and limitations, as well as future research for improvement are discussed.

  5. Subtraction method in the Second Random Phase Approximation

    NASA Astrophysics Data System (ADS)

    Gambacurta, Danilo

    2018-02-01

    We discuss the subtraction method applied to the Second Random Phase Approximation (SRPA). This method has been proposed to overcome double counting and stability issues appearing in beyond mean-field calculations. We show that the subtraction procedure leads to a considerable reduction of the SRPA downwards shift with respect to the random phase approximation (RPA) spectra and to results that are weakly cutoff dependent. Applications to the isoscalar monopole and quadrupole response in 16O and to the low-lying dipole response in 48Ca are shown and discussed.

  6. A methodology for modeling surface effects on stiff and soft solids

    NASA Astrophysics Data System (ADS)

    He, Jin; Park, Harold S.

    2017-09-01

    We present a computational method that can be applied to capture surface stress and surface tension-driven effects in both stiff, crystalline nanostructures, like size-dependent mechanical properties, and soft solids, like elastocapillary effects. We show that the method is equivalent to the classical Young-Laplace model. The method is based on converting surface tension and surface elasticity on a zero-thickness surface to an initial stress and corresponding elastic properties on a finite thickness shell, where the consideration of geometric nonlinearity enables capturing the out-of-plane component of the surface tension that results for curved surfaces through evaluation of the surface stress in the deformed configuration. In doing so, we are able to use commercially available finite element technology, and thus do not require consideration and implementation of the classical Young-Laplace equation. Several examples are presented to demonstrate the capability of the methodology for modeling surface stress in both soft solids and crystalline nanostructures.

  7. A methodology for modeling surface effects on stiff and soft solids

    NASA Astrophysics Data System (ADS)

    He, Jin; Park, Harold S.

    2018-06-01

    We present a computational method that can be applied to capture surface stress and surface tension-driven effects in both stiff, crystalline nanostructures, like size-dependent mechanical properties, and soft solids, like elastocapillary effects. We show that the method is equivalent to the classical Young-Laplace model. The method is based on converting surface tension and surface elasticity on a zero-thickness surface to an initial stress and corresponding elastic properties on a finite thickness shell, where the consideration of geometric nonlinearity enables capturing the out-of-plane component of the surface tension that results for curved surfaces through evaluation of the surface stress in the deformed configuration. In doing so, we are able to use commercially available finite element technology, and thus do not require consideration and implementation of the classical Young-Laplace equation. Several examples are presented to demonstrate the capability of the methodology for modeling surface stress in both soft solids and crystalline nanostructures.

  8. Minimum maximum temperature gradient coil design.

    PubMed

    While, Peter T; Poole, Michael S; Forbes, Larry K; Crozier, Stuart

    2013-08-01

    Ohmic heating is a serious problem in gradient coil operation. A method is presented for redesigning cylindrical gradient coils to operate at minimum peak temperature, while maintaining field homogeneity and coil performance. To generate these minimaxT coil windings, an existing analytic method for simulating the spatial temperature distribution of single layer gradient coils is combined with a minimax optimization routine based on sequential quadratic programming. Simulations are provided for symmetric and asymmetric gradient coils that show considerable improvements in reducing maximum temperature over existing methods. The winding patterns of the minimaxT coils were found to be heavily dependent on the assumed thermal material properties and generally display an interesting "fish-eye" spreading of windings in the dense regions of the coil. Small prototype coils were constructed and tested for experimental validation and these demonstrate that with a reasonable estimate of material properties, thermal performance can be improved considerably with negligible change to the field error or standard figures of merit. © 2012 Wiley Periodicals, Inc.
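
    In general, a minimax problem of the kind described here can be recast as a smooth constrained problem by introducing an auxiliary bound variable t: minimize t subject to f_i(x) <= t for every objective f_i. The toy sketch below applies this reformulation to a few hypothetical hot-spot temperature functions using SciPy's SLSQP (sequential quadratic programming) routine; it is not the coil-design code itself.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical hot-spot temperatures as functions of two design parameters x = (x0, x1).
    def temperatures(x):
        return np.array([
            (x[0] - 1.0) ** 2 + 0.5 * x[1] ** 2 + 40.0,
            (x[1] - 1.0) ** 2 + 0.5 * x[0] ** 2 + 42.0,
            0.3 * (x[0] + x[1]) ** 2 + 41.0,
        ])

    # Reformulate min_x max_i T_i(x) as: minimize t subject to t - T_i(x) >= 0 for all i.
    def objective(z):
        return z[-1]                      # z = (x0, x1, t); minimize the bound t

    def bound_constraints(z):
        return z[-1] - temperatures(z[:-1])

    z0 = np.array([0.0, 0.0, 100.0])      # initial design and a generous initial bound
    result = minimize(objective, z0, method="SLSQP",
                      constraints=[{"type": "ineq", "fun": bound_constraints}])
    print("design parameters:", np.round(result.x[:-1], 3),
          "minimized peak temperature:", round(float(result.x[-1]), 3))
    ```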

  9. Consideration of measurement error when using commercial indoor radon determinations for selecting radon action levels

    USGS Publications Warehouse

    Reimer, G.M.; Szarzi, S.L.; Dolan, Michael P.

    1998-01-01

    An examination of year-long, in-home radon measurements in Colorado from commercial companies applying typical methods indicates that considerable variation in precision exists. This variation can have a substantial impact on any mitigation decisions, either voluntary or mandated by law, especially regarding property sale or exchange. Both long-term (nuclear track, greater than 90 days) and short-term (charcoal adsorption, 4-7 days) exposure methods were used. In addition, periods of continuous monitoring with a highly calibrated alpha-scintillometer took place for accuracy calibration. The results of duplicate commercial analyses show that typical results are no better than ±25 percent, with occasional outliers (up to 5 percent of all analyses) well beyond that limit. Differential seasonal measurements (winter/summer) by short-term methods provide equivalent information to single long-term measurements. Action levels in the U.S. for possible mitigation decisions should be selected so that they consider the measurement variability; specifically, they should reflect a concentration range similar to that adopted by the European Community.

  10. Possibility of spoof attack against robustness of multibiometric authentication systems

    NASA Astrophysics Data System (ADS)

    Hariri, Mahdi; Shokouhi, Shahriar Baradaran

    2011-07-01

    Multibiometric systems have been developed recently to overcome some weaknesses of single-biometric authentication systems, but the security of these systems against spoofing has not received enough attention. In this paper, we propose a novel practical method for simulating possible spoof attacks against a biometric authentication system. Using this method, we model matching scores ranging from standard to completely spoofed genuine samples. Sum, product, and Bayes fusion rules are applied for score-level combination. The security of multimodal authentication systems is examined and compared with that of single systems against various spoofing possibilities. Although the vulnerability of fused systems increases considerably under spoofing, their robustness is generally higher than that of single-matcher systems. In this paper we show that the robustness of a combined system is not always higher than that of a single system against spoof attack. We propose empirical methods for upgrading the security of multibiometric systems, describing how to organize and select biometric traits and matchers against various spoof-attack possibilities. These methods provide considerable robustness and present an appropriate rationale for using combined systems against spoof attacks.
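
    The sum, product, and Bayes fusion rules mentioned above operate on per-matcher scores. A minimal sketch with hypothetical, already-normalized scores follows; the score values and the simulated spoof degradation are made up.

    ```python
    import math

    def sum_rule(scores):
        """Average of per-matcher scores in [0, 1]."""
        return sum(scores) / len(scores)

    def product_rule(scores):
        """Geometric combination of per-matcher scores in (0, 1]."""
        return math.prod(scores) ** (1.0 / len(scores))

    def bayes_rule(scores):
        """Naive-Bayes fusion with equal priors, treating each score as P(genuine | matcher)."""
        odds = math.prod(s / (1.0 - s) for s in scores)
        return odds / (1.0 + odds)

    # Hypothetical normalized scores from a face matcher and a fingerprint matcher;
    # in the second case the fingerprint score comes from a simulated spoofed sample.
    genuine = [0.85, 0.80]
    spoofed = [0.85, 0.30]
    for name, rule in [("sum", sum_rule), ("product", product_rule), ("bayes", bayes_rule)]:
        print(f"{name:7s} genuine = {rule(genuine):.3f}   spoofed = {rule(spoofed):.3f}")
    ```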

  11. Design considerations of electromagnetic force in a direct drive permanent magnet brushless motor

    NASA Astrophysics Data System (ADS)

    Chen, H. S.; Tsai, M. C.

    2008-04-01

    In this paper, a numerical study of the electromagnetic force associated with the width of the stator teeth, the width of the rotor back iron, and the slot opening for a ten-pole nine-slot direct drive permanent magnet brushless motor is presented. The study calculates the amplitude of the electromagnetic force on the rotating rotor using the finite-element method. The results show that the amplitude of the electromagnetic force, which may cause noise and vibration in motors, changes with variation of the three factors mentioned above. The relationship between output torque considerations and the minimization of noise and vibration is also established in this paper.

  12. A comparison of analysis methods to estimate contingency strength.

    PubMed

    Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T

    2018-05-09

    To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.

  13. Expanding Reliability Generalization Methods with KR-21 Estimates: An RG Study of the Coopersmith Self-Esteem Inventory.

    ERIC Educational Resources Information Center

    Lane, Ginny G.; White, Amy E.; Henson, Robin K.

    2002-01-01

    Conducted a reliability generalization study on the Coopersmith Self-Esteem Inventory (CSEI; S. Coopersmith, 1967) to examine the variability of reliability estimates across studies and to identify study characteristics that may predict this variability. Results show that reliability for CSEI scores can vary considerably, especially at the…

  14. The credibility crisis in research: Can economics tools help?

    PubMed Central

    Gall, Thomas; Ioannidis, John P. A.; Maniadis, Zacharias

    2017-01-01

    The issue of nonreplicable evidence has attracted considerable attention across biomedical and other sciences. This concern is accompanied by an increasing interest in reforming research incentives and practices. How to optimally perform these reforms is a scientific problem in itself, and economics has several scientific methods that can help evaluate research reforms. Here, we review these methods and show their potential. Prominent among them are mathematical modeling and laboratory experiments that constitute affordable ways to approximate the effects of policies with wide-ranging implications. PMID:28445470

  15. The credibility crisis in research: Can economics tools help?

    PubMed

    Gall, Thomas; Ioannidis, John P A; Maniadis, Zacharias

    2017-04-01

    The issue of nonreplicable evidence has attracted considerable attention across biomedical and other sciences. This concern is accompanied by an increasing interest in reforming research incentives and practices. How to optimally perform these reforms is a scientific problem in itself, and economics has several scientific methods that can help evaluate research reforms. Here, we review these methods and show their potential. Prominent among them are mathematical modeling and laboratory experiments that constitute affordable ways to approximate the effects of policies with wide-ranging implications.

  16. Transfer Alignment Error Compensator Design Based on Robust State Estimation

    NASA Astrophysics Data System (ADS)

    Lyou, Joon; Lim, You-Chol

    This paper examines the transfer alignment problem of the StrapDown Inertial Navigation System (SDINS), which is subject to the ship’s roll and pitch. The major error sources for velocity and attitude matching are the lever arm effect, measurement time delay, and ship-body flexure. To reduce these alignment errors, an error compensation method based on state augmentation and robust state estimation is devised. A linearized error model for the velocity and attitude matching transfer alignment system is derived first by linearizing the nonlinear measurement equation with respect to its time delay and the dominant Y-axis flexure, and by augmenting the delay state and flexure state into the conventional linear state equations. Then an H∞ filter is introduced to account for modeling uncertainties of the time delay and the ship-body flexure. The simulation results show that this method decreases azimuth alignment errors considerably.

  17. Factoring economic costs into conservation planning may not improve agreement over priorities for protection.

    PubMed

    Armsworth, Paul R; Jackson, Heather B; Cho, Seong-Hoon; Clark, Melissa; Fargione, Joseph E; Iacona, Gwenllian D; Kim, Taeyoung; Larson, Eric R; Minney, Thomas; Sutton, Nathan A

    2017-12-21

    Conservation organizations must redouble efforts to protect habitat given continuing biodiversity declines. Prioritization of future areas for protection is hampered by disagreements over what the ecological targets of conservation should be. Here we test the claim that such disagreements will become less important as conservation moves away from prioritizing areas for protection based only on ecological considerations and accounts for varying costs of protection using return-on-investment (ROI) methods. We combine a simulation approach with a case study of forests in the eastern United States, paying particular attention to how covariation between ecological benefits and economic costs influences agreement levels. For many conservation goals, agreement over spatial priorities improves with ROI methods. However, we also show that a reliance on ROI-based prioritization can sometimes exacerbate disagreements over priorities. As such, accounting for costs in conservation planning does not enable society to sidestep careful consideration of the ecological goals of conservation.

  18. Laminar and Turbulent Gaseous Diffusion Flames. Appendix C

    NASA Technical Reports Server (NTRS)

    Faeth, G. M.; Urban, D. L. (Technical Monitor); Yuan, Z.-G. (Technical Monitor)

    2001-01-01

    Recent measurements and predictions of the properties of homogeneous (gaseous) laminar and turbulent non-premixed (diffusion) flames are discussed, emphasizing results from both ground- and space-based studies at microgravity conditions. Initial considerations show that effects of buoyancy not only complicate the interpretation of observations of diffusion flames but at times mislead when such results are applied to the non-buoyant diffusion flame conditions of greatest practical interest. This behavior motivates consideration of experiments where effects of buoyancy are minimized; therefore, methods of controlling the intrusion of buoyancy during observations of non-premixed flames are described, considering approaches suitable for both normal laboratory conditions as well as classical microgravity techniques. Studies of laminar flames at low-gravity and microgravity conditions are emphasized in view of the computational tractability of such flames for developing methods of predicting flame structure as well as the relevance of such flames to more practical turbulent flames by exploiting laminar flamelet concepts.

  19. Estimate of fine root production including the impact of decomposed roots in a Bornean tropical rainforest

    NASA Astrophysics Data System (ADS)

    Katayama, Ayumi; Khoon Koh, Lip; Kume, Tomonori; Makita, Naoki; Matsumoto, Kazuho; Ohashi, Mizue

    2016-04-01

    Considerable carbon is allocated belowground and used for respiration and the production of roots. Approximately 40% of GPP is reported to be allocated belowground in a Bornean tropical rainforest, which is much higher than in Neotropical rainforests. This may be caused by high root production in this forest. The ingrowth core is a popular method for estimating fine root production, but a recent study by Osawa et al. (2012) showed potential underestimates with this method because it does not account for roots that decompose during the measurement period. It is important to estimate fine root production with consideration of decomposed roots, especially in the tropics, where decomposition rates are higher than in other regions. Therefore, the objective of this study is to estimate fine root production, accounting for decomposed roots, using ingrowth cores and root litter bags in this tropical rainforest. The study was conducted in Lambir Hills National Park in Borneo. Ingrowth cores and litter bags for fine roots were buried in March 2013. Eighteen ingrowth cores and 27 litter bags were collected in May and September 2013, March 2014, and March 2015. Fine root production was comparable to aboveground biomass increment and litterfall, and accounted for only 10% of GPP at this study site, suggesting that most of the carbon allocated belowground might be used for other purposes. Fine root production was comparable to that in Neotropical rainforests. Decomposed roots accounted for 18% of fine root production. This result suggests that neglecting decomposed fine roots may cause fine root production to be underestimated.

  20. Conducting a qualitative child interview: methodological considerations.

    PubMed

    Kortesluoma, Riitta-Liisa; Hentinen, Maija; Nikkonen, Merja

    2003-06-01

    Studies of children have a long history, but the literature related to young children consists for the most part of studies on rather than with children, taking little account of what is regarded as significant and meaningful by children themselves. Researchers have relied almost exclusively on adults when collecting data about children's thoughts, feelings and experiences. Interviewing children, however, gives an opportunity to gain information about their subjective experiences. The purpose of this article is to illustrate the theoretical premises of child interviewing, as well as to describe some practical methodological solutions used during interviews. Factors that influence data gathered from children and strategies for taking these factors into consideration during the interview are also described. This paper is based on the literature and the experience of one of the authors in interviewing children aged from 4 to 11 years about their experiences of pain. A consideration of the literature dealing with the principles of child interviewing shows that there is surprisingly little guidance available on conversational methods involving children. The empirical and conceptual foundation for child interviewing is not very clear. Novice researchers especially may need recommendations about how to conduct a qualitative child interview. The method must suit both the purpose and the context.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, E.B. Jr.

    Various methods for the calculation of lower bounds for eigenvalues are examined, including those of Weinstein, Temple, Bazley and Fox, Gay, and Miller. It is shown how all of these can be derived in a unified manner by the projection technique. The alternate forms obtained for the Gay formula show how a considerably improved method can be readily obtained. Applied to the ground state of the helium atom with a simple screened hydrogenic trial function, this new method gives a lower bound closer to the true energy than the best upper bound obtained with this form of trial function. Possible routes to further improved methods are suggested.
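
    Of the formulas surveyed, Temple's inequality is the easiest to state: for a normalized trial function, E0 >= <H> - (<H^2> - <H>^2) / (eps - <H>) for any eps with <H> < eps <= E1. A minimal numerical sketch on a small symmetric matrix (standing in for a real Hamiltonian, and using the exact E1 for eps purely for illustration) is:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 6
    A = rng.normal(size=(n, n))
    H = (A + A.T) / 2.0                     # small symmetric "Hamiltonian"

    evals, evecs = np.linalg.eigh(H)
    E0, E1 = evals[0], evals[1]

    # Trial vector: the ground state contaminated by the next two eigenvectors,
    # so <H> lies strictly above E0 but (for this construction) below E1.
    v = evecs[:, 0] + 0.1 * evecs[:, 1] + 0.1 * evecs[:, 2]
    v /= np.linalg.norm(v)

    h1 = v @ H @ v            # <H>  (Rayleigh-Ritz upper bound on E0)
    h2 = v @ H @ (H @ v)      # <H^2>
    variance = h2 - h1 ** 2

    # Temple's inequality: E0 >= <H> - Var(H) / (eps - <H>), valid for <H> < eps <= E1.
    # The exact E1 is used for eps here only for illustration; in practice an
    # independent lower bound on E1 is required.
    eps = E1
    assert h1 < eps
    temple_lower = h1 - variance / (eps - h1)

    print(f"exact E0          : {E0:.4f}")
    print(f"Rayleigh upper bnd: {h1:.4f}")
    print(f"Temple lower bound: {temple_lower:.4f}")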

  2. A Coarse-Grained Elastic Network Atom Contact Model and Its Use in the Simulation of Protein Dynamics and the Prediction of the Effect of Mutations

    PubMed Central

    Frappier, Vincent; Najmanovich, Rafael J.

    2014-01-01

    Normal mode analysis (NMA) methods are widely used to study dynamic aspects of protein structures. Two critical components of NMA methods are the level of coarse-graining used to represent protein structures and the choice of potential energy functional form. There is a trade-off between speed and accuracy in different choices. At one extreme one finds accurate but slow molecular-dynamics based methods with all-atom representations and detailed atom potentials. At the other extreme are fast elastic network model (ENM) methods with Cα-only representations and simplified potentials based on geometry alone, and thus oblivious to protein sequence. Here we present ENCoM, an Elastic Network Contact Model that employs a potential energy function including a pairwise atom-type non-bonded interaction term, which makes it possible to consider the effect of the specific nature of amino acids on dynamics within the context of NMA. ENCoM is as fast as existing ENM methods and outperforms such methods in the generation of conformational ensembles. Here we introduce a new application for NMA methods with the use of ENCoM in the prediction of the effect of mutations on protein stability. While existing methods are based on machine learning or enthalpic considerations, the use of ENCoM, based on vibrational normal modes, rests on entropic considerations. This represents a novel area of application for NMA methods and a novel approach for the prediction of the effect of mutations. We compare ENCoM to a large number of methods in terms of accuracy and self-consistency. We show that the accuracy of ENCoM is comparable to that of the best existing methods. We show that existing methods are biased towards the prediction of destabilizing mutations and that ENCoM is less biased at predicting stabilizing mutations. PMID:24762569
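
    ENCoM's atom-type contact potential is beyond a short sketch, but the normal-mode machinery it shares with other ENM methods, assembling a Hessian from pairwise contacts of a Cα network and diagonalizing it, can be illustrated with a generic anisotropic network model; the coordinates, cutoff and spring constant below are arbitrary placeholders rather than the ENCoM parameterization.

    import numpy as np

    def anm_modes(coords, cutoff=15.0, gamma=1.0):
        """Normal modes of a generic Calpha anisotropic network model (ANM).

        coords : (N, 3) array of Calpha positions (angstroms, hypothetical here).
        Returns eigenvalues/eigenvectors of the 3N x 3N Hessian; the six smallest
        eigenvalues should be numerically zero (rigid-body translations/rotations).
        """
        n = len(coords)
        hessian = np.zeros((3 * n, 3 * n))
        for i in range(n):
            for j in range(i + 1, n):
                d = coords[j] - coords[i]
                r2 = d @ d
                if r2 > cutoff ** 2:
                    continue
                # Off-diagonal 3x3 super-element: -gamma * outer(d, d) / r^2
                block = -gamma * np.outer(d, d) / r2
                hessian[3*i:3*i+3, 3*j:3*j+3] = block
                hessian[3*j:3*j+3, 3*i:3*i+3] = block
                # Diagonal super-elements accumulate minus the sum of off-diagonals.
                hessian[3*i:3*i+3, 3*i:3*i+3] -= block
                hessian[3*j:3*j+3, 3*j:3*j+3] -= block
        return np.linalg.eigh(hessian)

    # Toy "protein": 20 pseudo-Calpha atoms along an ideal helix (hypothetical).
    t = np.linspace(0, 4 * np.pi, 20)
    coords = np.column_stack([10 * np.cos(t), 10 * np.sin(t), 1.5 * t])

    evals, evecs = anm_modes(coords)
    print("six near-zero rigid-body eigenvalues:", np.round(evals[:6], 6))
    print("lowest internal-mode eigenvalue     :", round(float(evals[6]), 4))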

  3. Modified chloride diffusion model for concrete under the coupling effect of mechanical load and chloride salt environment

    NASA Astrophysics Data System (ADS)

    Lei, Mingfeng; Lin, Dayong; Liu, Jianwen; Shi, Chenghua; Ma, Jianjun; Yang, Weichao; Yu, Xiaoniu

    2018-03-01

    For the purpose of investigating lining concrete durability, this study derives a modified chloride diffusion model for concrete based on the odd continuation of boundary conditions and the Fourier transform. To achieve this, the linear stress distribution on a sectional structure is considered, and detailed procedures and methods are presented for model verification and parametric analysis. Simulation results show that the chloride diffusion model can reflect the effects of the linear stress distribution of the sectional structure on chloride diffusivity with reliable accuracy. Along with the natural environmental characteristics of practical engineering structures, reference value ranges of the model parameters are provided. Furthermore, the chloride diffusion model is extended to account for the multi-factor coupling of linear stress distribution, chloride concentration and diffusion time. Comparison between model simulations and typical current research results shows that the presented model performs better and has greater generality.
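
    The unmodified starting point for such models is Fick's second law with a constant surface chloride concentration, C(x, t) = C_s * [1 - erf(x / (2 * sqrt(D_a * t)))]. The sketch below evaluates this baseline profile and, purely to illustrate the coupling idea, scales the apparent diffusivity by a hypothetical stress-influence factor; it does not reproduce the stress term of the model derived in the paper.

    import numpy as np
    from scipy.special import erf

    def chloride_profile(x_mm, t_years, D_ref=3.0e-12, c_s=0.6, stress_factor=1.0):
        """Chloride content (e.g. % of binder) at depth x and exposure time t.

        Baseline: Fick's second law with constant surface concentration c_s,
            C(x, t) = c_s * (1 - erf(x / (2 * sqrt(D_a * t)))).
        stress_factor is a hypothetical multiplier on the apparent diffusivity,
        standing in for the stress/chloride/time coupling of the full model.
        """
        x = np.asarray(x_mm) * 1e-3          # mm -> m
        t = t_years * 365.25 * 24 * 3600.0   # years -> s
        D_a = D_ref * stress_factor          # apparent diffusivity, m^2/s
        return c_s * (1.0 - erf(x / (2.0 * np.sqrt(D_a * t))))

    depths = np.array([5.0, 10.0, 20.0, 40.0])   # mm
    print("unstressed   :", np.round(chloride_profile(depths, 20), 3))
    print("tension +30% :", np.round(chloride_profile(depths, 20, stress_factor=1.3), 3))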

  4. Ranking influential spreaders is an ill-defined problem

    NASA Astrophysics Data System (ADS)

    Gu, Jain; Lee, Sungmin; Saramäki, Jari; Holme, Petter

    2017-06-01

    Finding influential spreaders of information and disease in networks is an important theoretical problem, and one of considerable recent interest. It has been almost exclusively formulated as a node-ranking problem: methods for identifying influential spreaders output a ranking of the nodes. In this work, we show that such a greedy heuristic does not necessarily work: the set of most influential nodes depends on the number of nodes in the set. Therefore, the set of n most important nodes to vaccinate does not need to have any node in common with the set of n + 1 most important nodes. We propose a method for quantifying the extent and impact of this phenomenon. By this method, we show that it is a common phenomenon in both empirical and model networks.
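
    The point is easy to check computationally: evaluate whole seed sets by simulated spreading rather than ranking individual nodes, and test whether the best set of size n is contained in the best set of size n + 1. A brute-force sketch on a tiny random graph, using independent-cascade spreading (not necessarily the spreading model used in the paper), is shown below.

    import itertools
    import random
    import networkx as nx

    def independent_cascade_size(G, seeds, p=0.1, runs=100, rng=random.Random(0)):
        """Average number of nodes eventually activated from a seed set."""
        total = 0
        for _ in range(runs):
            active = set(seeds)
            frontier = list(seeds)
            while frontier:
                nxt = []
                for u in frontier:
                    for v in G.neighbors(u):
                        if v not in active and rng.random() < p:
                            active.add(v)
                            nxt.append(v)
                frontier = nxt
            total += len(active)
        return total / runs

    def best_seed_set(G, k, **kw):
        """Exhaustive search over all k-node seed sets (feasible only for tiny G)."""
        return max(itertools.combinations(G.nodes, k),
                   key=lambda s: independent_cascade_size(G, s, **kw))

    G = nx.erdos_renyi_graph(20, 0.15, seed=42)
    best2 = set(best_seed_set(G, 2))
    best3 = set(best_seed_set(G, 3))
    print("best 2-set:", best2)
    print("best 3-set:", best3)
    print("2-set nested in 3-set?", best2 <= best3)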

  5. Increased Sensitivity of HIV-1 p24 ELISA Using a Photochemical Signal Amplification System.

    PubMed

    Bystryak, Simon; Santockyte, Rasa

    2015-10-01

    In this study we describe a photochemical signal amplification method (PSAM) for increasing the sensitivity of the enzyme-linked immunosorbent assay (ELISA) for determination of HIV-1 p24 antigen. The photochemical signal amplification method is based on an autocatalytic photochemical reaction of a horseradish peroxidase (HRP) substrate, orthophenylenediamine (OPD). To compare the performance of PSAM-boosted ELISA with a conventional colorimetric ELISA for determination of HIV-1 p24 antigen we employed a PerkinElmer HIV-1 p24 ELISA kit, using conventional ELISA alongside ELISA + PSAM. In the present study, we show that PSAM technology allows one to increase the analytical sensitivity and dynamic range of a commercial HIV-1 p24 ELISA kit, with and without immune-complex disruption, by a factor of approximately 40-fold. ELISA + PSAM is compatible with commercially available microtiter plate readers, requires only an inexpensive illumination device, and the PSAM amplification step takes no longer than 15 min. This method can be used for both commercially available and in-house ELISA tests, and has the advantage of being considerably simpler and less costly than alternative signal amplification methods.

  6. Forensic age estimation based on magnetic resonance imaging of third molars: converting 2D staging into 3D staging.

    PubMed

    De Tobel, Jannick; Hillewig, Elke; Verstraete, Koenraad

    2017-03-01

    Established methods to stage development of third molars for forensic age estimation are based on the evaluation of radiographs, which show a 2D projection. It has not been investigated whether these methods require any adjustments in order to apply them to stage third molars on magnetic resonance imaging (MRI), which shows 3D information. To prospectively study root stage assessment of third molars in age estimation using 3 Tesla MRI and to compare this with panoramic radiographs, in order to provide considerations for converting 2D staging into 3D staging and to determine the decisive root. All third molars were evaluated in 52 healthy participants aged 14-26 years using MRI in three planes. Three staging methods were investigated by two observers. In sixteen of the participants, MRI findings were compared with findings on panoramic radiographs. Decisive roots were palatal in upper third molars and distal in lower third molars. Fifty-seven per cent of upper third molars were not assessable on the radiograph, while 96.9% were on MRI. Upper third molars were more difficult to evaluate on radiographs than on MRI (p < .001). Lower third molars were equally assessable on both imaging techniques (93.8% MRI, 98.4% radiograph), with no difference in level of difficulty (p = .375). Inter- and intra-observer agreement for evaluation was higher in MRI than in radiographs. In both imaging techniques lower third molars showed greater inter- and intra-observer agreement compared to upper third molars. MR images in the sagittal plane proved to be essential for staging. In age estimation, 3T MRI of third molars could be valuable. Some considerations are, however, necessary to transfer known staging methods to this 3D technique.

  7. Diffusion Weighted Image Denoising Using Overcomplete Local PCA

    PubMed Central

    Manjón, José V.; Coupé, Pierrick; Concha, Luis; Buades, Antonio; Collins, D. Louis; Robles, Montserrat

    2013-01-01

    Diffusion Weighted Images (DWI) normally show a low Signal to Noise Ratio (SNR) due to the presence of noise from the measurement process that complicates and biases the estimation of quantitative diffusion parameters. In this paper, a new denoising methodology is proposed that takes into consideration the multicomponent nature of multi-directional DWI datasets such as those employed in diffusion imaging. This new filter reduces random noise in multicomponent DWI by locally shrinking less significant Principal Components using an overcomplete approach. The proposed method is compared with state-of-the-art methods using synthetic and real clinical MR images, showing improved performance in terms of denoising quality and estimation of diffusion parameters. PMID:24019889
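
    The core operation, local PCA with shrinkage of the less significant components and averaging of overlapping reconstructions, can be sketched on a synthetic multi-channel image; the noise estimate and threshold rule below are simplified stand-ins for those in the paper, and the data are artificial.

    import numpy as np

    def local_pca_denoise(data, win=5, tau=2.3):
        """Denoise a multi-channel image by local PCA shrinkage.

        data : (H, W, K) array, K = number of channels (e.g. diffusion directions).
        For every win x win window, the voxels-by-channels block is decomposed by
        PCA and components whose eigenvalue falls below tau * (local noise estimate)
        are discarded; overlapping reconstructions are averaged. The thresholding
        rule here is a simplified stand-in for the one in the paper.
        """
        H, W, K = data.shape
        half = win // 2
        acc = np.zeros_like(data, dtype=float)
        hits = np.zeros((H, W, 1))
        for i in range(half, H - half):
            for j in range(half, W - half):
                block = data[i-half:i+half+1, j-half:j+half+1, :].reshape(-1, K)
                mean = block.mean(axis=0)
                xc = block - mean
                cov = xc.T @ xc / (xc.shape[0] - 1)
                evals, evecs = np.linalg.eigh(cov)          # ascending order
                noise = np.median(evals[:max(1, K // 4)])   # crude noise level
                keep = evecs[:, evals > tau * noise]
                rec = (xc @ keep @ keep.T + mean).reshape(win, win, K)
                acc[i-half:i+half+1, j-half:j+half+1, :] += rec
                hits[i-half:i+half+1, j-half:j+half+1, :] += 1
        return np.where(hits > 0, acc / np.maximum(hits, 1), data)

    # Synthetic test: smooth 16-channel signal plus Gaussian noise.
    rng = np.random.default_rng(0)
    yy, xx = np.mgrid[0:64, 0:64]
    clean = np.stack([np.sin(xx / (4 + k)) + np.cos(yy / (3 + k)) for k in range(16)], axis=-1)
    noisy = clean + 0.3 * rng.normal(size=clean.shape)
    denoised = local_pca_denoise(noisy)
    print("RMSE noisy   :", round(float(np.sqrt(((noisy - clean) ** 2).mean())), 3))
    print("RMSE denoised:", round(float(np.sqrt(((denoised - clean) ** 2).mean())), 3))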

  8. Mathematical background of Parrondo's paradox

    NASA Astrophysics Data System (ADS)

    Behrends, Ehrhard

    2004-05-01

    Parrondo's paradox states that there are losing gambling games which, when combined stochastically or in a suitable deterministic way, give rise to winning games. Here we investigate the probabilistic background. We show how the properties of the equilibrium distributions of the Markov chains under consideration give rise to the paradoxical behavior, and we provide methods for finding the best a priori strategies.
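
    The capital-dependent textbook version of the paradox is straightforward to simulate: Game A uses a slightly unfair coin, Game B uses a much worse coin whenever the capital is a multiple of 3 and a favourable one otherwise, and randomly switching between them produces a winning game. The parameters below are the standard illustrative ones, not tied to the paper's analysis, which instead works with the chains' equilibrium distributions.

    import random

    def play(strategy, steps=200_000, eps=0.005, seed=1):
        """Average capital drift per step for game A, game B, or random switching."""
        rng = random.Random(seed)
        capital = 0
        for _ in range(steps):
            game = strategy if strategy in ("A", "B") else rng.choice("AB")
            if game == "A":
                p = 0.5 - eps                          # slightly losing coin
            else:                                      # game B, capital-dependent
                p = (0.10 - eps) if capital % 3 == 0 else (0.75 - eps)
            capital += 1 if rng.random() < p else -1
        return capital / steps

    for s in ("A", "B", "random"):
        print(f"game {s:>6}: mean drift per step = {play(s):+.4f}")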

  9. The Disease Burden of Childhood Adversities in Adults: A Population-Based Study

    ERIC Educational Resources Information Center

    Cuijpers, Pim; Smit, Filip; Unger, Froukje; Stikkelbroek, Yvonne; ten Have, Margreet; de Graaf, Ron

    2011-01-01

    Objectives: There is much evidence showing that childhood adversities have considerable effects on the mental and physical health of adults. It could be assumed therefore, that the disease burden of childhood adversities is high. It has not yet been examined, however, whether this is true. Method: We used data of a large representative sample (N =…

  10. Impact of prehistoric cooking practices on paleoenvironmental proxies in shell midden constituents

    NASA Astrophysics Data System (ADS)

    Müller, Peter; Staudigel, Philip; Murray, Sean T.; Westphal, Hildegard; Swart, Peter K.

    2016-04-01

    Paleoenvironmental proxy records such as oxygen isotopes of calcareous skeletal structures like fish otoliths or mollusk shells provide highest-resolution information about environmental conditions experienced by the organism. Accumulations of such skeletal structures by ancient coastal populations in so called "shell midden" deposits provide us with sub-seasonally resolved paleoclimate records covering time spans up to several millennia. Given their high temporal resolution, these deposits are increasingly used for paleoclimate reconstructions and complement our understanding of ancient climate changes. However, gathered as comestibles, most of these skeletal remains were subject to prehistoric cooking methods prior to deposition. The associated alteration of the chemical proxy signatures as well as the subsequent error for paleoenvironmental reconstructions remained almost entirely neglected so far. Here, we present clumped isotope, conventional oxygen and carbon isotopes as well as element:Ca ratios measured in modern bivalve shells after exposing them to different prehistoric cooking methods. Our data show that most cooking methods considerably alter commonly used paleoclimate proxy systems which can lead to substantial misinterpretations of ancient climate conditions. Since the magnitude of chemical alteration is not distinguishable from natural temperature variability in most coastal settings, the alteration of shell midden constituents by prehistoric cooking remains likely unnoticed in most cases. Thus, depending on the cooking method, pre-depositional heating might have introduced considerable errors into previous paleoclimate studies. However, our data also show that clumped isotope thermometry represents a suitable diagnostic tool to detect such pre-depositional cooking events and also allows differentiating between the most commonly applied prehistoric cooking methods.

  11. A stochastic post-processing method for solar irradiance forecasts derived from NWPs models

    NASA Astrophysics Data System (ADS)

    Lara-Fanego, V.; Pozo-Vazquez, D.; Ruiz-Arias, J. A.; Santos-Alamillos, F. J.; Tovar-Pescador, J.

    2010-09-01

    Solar irradiance forecasting is an important area of research for the future of solar-based renewable energy systems. Numerical Weather Prediction models (NWPs) have proved to be a valuable tool for solar irradiance forecasting with lead times up to a few days. Nevertheless, these models show low skill in forecasting the solar irradiance under cloudy conditions. Additionally, climatic (averaged over seasons) aerosol loadings are usually considered in these models, leading to considerable errors in the Direct Normal Irradiance (DNI) forecasts during high aerosol load conditions. In this work we propose a post-processing method for the Global Irradiance (GHI) and DNI forecasts derived from NWPs. Particularly, the method is based on the use of Autoregressive Moving Average with External Explanatory Variables (ARMAX) stochastic models. These models are applied to the residuals of the NWP forecasts and use as external variables the measured cloud fraction and aerosol loading of the day previous to the forecast. The method is evaluated for a set of one-month-long, three-days-ahead forecasts of the GHI and DNI, obtained with the WRF mesoscale atmospheric model, for several locations in Andalusia (Southern Spain). The cloud fraction is derived from MSG satellite estimates and the aerosol loading from MODIS platform estimates. Both sources of information are readily available at the time of the forecast. Results showed a considerable improvement in the forecasting skill of the WRF model using the proposed post-processing method. Particularly, the relative improvement (in terms of the RMSE) for the DNI during summer is about 20%. A similar value is obtained for the GHI during the winter.
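
    The post-processing step itself, fitting an ARMAX model to the NWP forecast residuals with previous-day cloud fraction and aerosol loading as exogenous regressors and then correcting the raw forecast by the predicted residual, can be sketched with statsmodels. The model order, variable construction and data below are placeholders, not the configuration used in the study.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(0)
    n = 400

    # Placeholder series: NWP GHI forecast residuals that depend partly on the
    # previous-day cloud fraction and aerosol optical depth (both observed).
    cloud = rng.uniform(0, 1, n)
    aod = rng.uniform(0.05, 0.6, n)
    resid = 80 * cloud + 120 * aod - 70 + np.convolve(rng.normal(0, 20, n),
                                                      [1.0, 0.5], mode="same")

    exog = pd.DataFrame({"cloud_prev": cloud, "aod_prev": aod})
    train, test = slice(0, n - 3), slice(n - 3, n)

    # ARMA(1,1) errors plus exogenous regressors = an ARMAX(1,1) model.
    model = SARIMAX(resid[train], exog=exog.iloc[train], order=(1, 0, 1), trend="c")
    fit = model.fit(disp=False)

    # Correct the raw NWP forecasts by the predicted residual
    # (sign convention assumed here: residual = forecast - observation).
    predicted_residual = fit.forecast(steps=3, exog=exog.iloc[test])
    raw_nwp_ghi = np.array([520.0, 480.0, 610.0])        # hypothetical W m^-2
    print("corrected GHI:", np.round(raw_nwp_ghi - predicted_residual, 1))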

  12. 78 FR 62426 - Use of Differential Income Stream as an Application of the Income Method and as a Consideration...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-22

    ... Differential Income Stream as an Application of the Income Method and as a Consideration in Assessing the Best Method; Correction AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Correcting amendment... method in connection with a cost sharing arrangement and as a specified application of the income method...

  13. A comparison of electronic heterodyne moire deflectometry and electronic heterodyne holographic interferometry for flow measurements

    NASA Technical Reports Server (NTRS)

    Decker, A. J.; Stricker, J.

    1985-01-01

    Electronic heterodyne moire deflectometry and electronic heterodyne holographic interferometry are compared as methods for the accurate measurement of refractive index and density change distributions of phase objects. Experimental results are presented to show that the two methods have comparable accuracy for measuring the first derivative of the interferometric fringe shift. The phase object for the measurements is a large crystal of KD*P, whose refractive index distribution can be changed accurately and repeatably for the comparison. Although the refractive index change causes only about one interferometric fringe shift over the entire crystal, the derivative shows considerable detail for the comparison. As electronic phase measurement methods, both methods are very accurate and are intrinsically compatible with computer controlled readout and data processing. Heterodyne moire is relatively inexpensive and has high variable sensitivity. Heterodyne holographic interferometry is better developed, and can be used with poor quality optical access to the experiment.

  14. Mean grain size detection of DP590 steel plate using a corrected method with electromagnetic acoustic resonance.

    PubMed

    Wang, Bin; Wang, Xiaokai; Hua, Lin; Li, Juanjuan; Xiang, Qing

    2017-04-01

    Electromagnetic acoustic resonance (EMAR) is a promising method for determining the mean grain size of metal materials with high precision. The basic ultrasonic attenuation theory used for mean grain size detection with EMAR comes from single-phase theory. In this paper, EMAR testing was carried out based on the ultrasonic attenuation theory. The detection results show that a double-peak phenomenon occurs in the EMAR testing of DP590 steel plate. The dual-phase structure of DP590 steel is the cause of the double-peak phenomenon in the EMAR testing. To address this phenomenon, a corrected method with EMAR was put forward to detect the mean grain size of dual-phase steel. Compared with the traditional attenuation evaluation method and the uncorrected method with EMAR, the corrected method with EMAR shows great effectiveness and superiority for the mean grain size detection of DP590 steel plate. Copyright © 2016. Published by Elsevier B.V.

  15. How to choose methods for lake greenhouse gas flux measurements?

    NASA Astrophysics Data System (ADS)

    Bastviken, David

    2017-04-01

    Lake greenhouse gas (GHG) fluxes are increasingly recognized as important for lake ecosystems as well as for large-scale carbon and GHG budgets. However, many of our flux estimates are uncertain, and it can be debated whether the presently available data are representative of the systems studied. Data are also very limited for some important flux pathways. Hence, many ongoing efforts try to better constrain fluxes and understand flux regulation. A fundamental challenge towards improved knowledge, and when starting new studies, is which methods to choose. A variety of approaches to measure aquatic GHG exchange is used, and data from different methods and methodological approaches have often been treated as equally valid to create large datasets for extrapolations and syntheses. However, data from different approaches may cover different flux pathways or spatio-temporal domains and are thus not always comparable. Method inter-comparisons and critical method evaluations addressing these issues are rare. Emerging efforts to organize systematic multi-lake monitoring networks for GHG fluxes lead to method choices that may set the foundation for decades of data generation and therefore require fundamental evaluation of different approaches. The method choices concern not only the equipment but also, for example, the overall measurement design and field approaches, the spatial and temporal resolution relevant for different flux components, and the accessory variables to measure. In addition, consideration is needed of how to design monitoring approaches that are affordable, suitable for widespread (global) use, and comparable across regions. Inspired by discussions with Prof. Dr. Cristian Blodau during the EGU General Assembly 2016, this presentation aims to (1) illustrate fundamental pros and cons of a number of common methods, (2) show how common methodological approaches originally adapted for other environments can be improved for lake flux measurements, (3) suggest how consideration of the spatio-temporal dimensions of flux variability can lead to more optimized approaches, and (4) highlight possibilities for efficient ways forward, including low-cost technologies that have potential for worldwide use.

  16. Application of geometric algebra for the description of polymer conformations.

    PubMed

    Chys, Pieter

    2008-03-14

    In this paper a Clifford algebra-based method is applied to calculate polymer chain conformations. The approach enables the calculation of the position of an atom in space from knowledge of the bond length (l), valence angle (theta), and rotation angle (phi) of each of the preceding bonds in the chain. Hence, the set of geometrical parameters {l(i), theta(i), phi(i)} yields all the position coordinates p(i) of the main chain atoms. Moreover, the method allows the calculation of side chain conformations and the computation of rotations of chain segments. With these features it is, in principle, possible to generate conformations of any type of chemical structure. This method is proposed as an alternative to the classical approach by matrix algebra. It is more straightforward, and its final symbolic representation is considerably simpler than that of matrix algebra. Approaches for realistic modeling by means of incorporation of energetic considerations can be combined with it. This article, however, is entirely focused on showing the suitable mathematical framework on which further developments and applications can be built.
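
    The quantity being computed, atom positions from the internal coordinates {l_i, theta_i, phi_i}, can be made concrete with the classical vector/matrix-algebra construction to which the paper offers its Clifford-algebra formulation as an alternative. Torsion sign conventions vary between codes, and the bond parameters below are arbitrary illustrative values.

    import numpy as np

    def place_atom(a, b, c, bond_len, bond_angle, torsion):
        """Position of atom D from the three preceding atoms A, B, C and the
        internal coordinates: bond length |CD|, valence angle B-C-D, and the
        dihedral A-B-C-D (angles in radians; the torsion sign convention may
        differ from other codes)."""
        bc = c - b
        bc /= np.linalg.norm(bc)
        n = np.cross(b - a, bc)
        n /= np.linalg.norm(n)
        m = np.cross(n, bc)
        d_local = bond_len * np.array([-np.cos(bond_angle),
                                       np.sin(bond_angle) * np.cos(torsion),
                                       np.sin(bond_angle) * np.sin(torsion)])
        return c + d_local[0] * bc + d_local[1] * m + d_local[2] * n

    # Build a short all-anti (torsion = pi) carbon backbone with arbitrary values.
    l, theta = 1.53, np.deg2rad(111.0)
    chain = [np.array([0.0, 0.0, 0.0]),
             np.array([l, 0.0, 0.0]),
             np.array([l + l * np.cos(np.pi - theta), l * np.sin(np.pi - theta), 0.0])]
    for i in range(7):
        chain.append(place_atom(*chain[-3:], l, theta, np.pi))
    chain = np.array(chain)
    print("end-to-end distance of the 10-atom chain:",
          round(float(np.linalg.norm(chain[-1] - chain[0])), 2))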

  17. 76 FR 80249 - Use of Differential Income Stream as a Consideration in Assessing the Best Method

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-23

    ... Differential Income Stream as a Consideration in Assessing the Best Method AGENCY: Internal Revenue Service... method in connection with a cost sharing arrangement. The text of these temporary regulations also serves... unreasonable positions in applying the income method by using relatively low licensing discount rates, and...

  18. Multi-Label Learning via Random Label Selection for Protein Subcellular Multi-Locations Prediction.

    PubMed

    Wang, Xiao; Li, Guo-Zheng

    2013-03-12

    Prediction of protein subcellular localization is an important but challenging problem, particularly when proteins may simultaneously exist at, or move between, two or more different subcellular location sites. Most existing protein subcellular localization methods deal only with single-location proteins. In the past few years, only a few methods have been proposed to tackle proteins with multiple locations. However, they adopt only a simple strategy, namely transforming the multi-location proteins into multiple proteins with a single location, which does not take correlations among different subcellular locations into account. In this paper, a novel method named RALS (multi-label learning via RAndom Label Selection) is proposed to learn from multi-location proteins in an effective and efficient way. Through a five-fold cross-validation test on a benchmark dataset, we demonstrate that our proposed method, which takes label correlations into consideration, clearly outperforms the baseline BR method that ignores them, indicating that correlations among different subcellular locations really exist and contribute to the improvement of prediction performance. Experimental results on two benchmark datasets also show that our proposed method achieves significantly higher performance than some other state-of-the-art methods in predicting subcellular multi-locations of proteins. The prediction web server is available at http://levis.tongji.edu.cn:8080/bioinfo/MLPred-Euk/ for public usage.

  19. Pretrichodermamides D-F from a Marine Algicolous Fungus Penicillium sp. KMM 4672.

    PubMed

    Yurchenko, Anton N; Smetanina, Olga F; Ivanets, Elena V; Kalinovsky, Anatoly I; Khudyakova, Yuliya V; Kirichuk, Natalya N; Popov, Roman S; Bokemeyer, Carsten; von Amsberg, Gunhild; Chingizova, Ekaterina A; Afiyatullov, Shamil Sh; Dyshlovoy, Sergey A

    2016-06-27

    Three new epidithiodiketopiperazines, pretrichodermamides D-F (1-3), together with the known N-methylpretrichodermamide B (4) and pretrichodermamide C (5), were isolated from the lipophilic extract of the marine algae-derived fungus Penicillium sp. KMM 4672. The structures of compounds 1-5 were determined based on spectroscopic methods. The absolute configuration of pretrichodermamide D (1) was established by a combination of the modified Mosher's method, NOESY data, and biogenetic considerations. N-Methylpretrichodermamide B (5) showed strong cytotoxicity against 22Rv1 human prostate cancer cells resistant to androgen receptor targeted therapies.

  20. Surgical body modification and altruistic individualism: a case for cyborg ethics and methods.

    PubMed

    Frank, Arthur W

    2003-12-01

    Three cases of pediatric surgical body modification--limb lengthening, normalization of genitalia, and craniofacial surgery--are considered through the moral language used by those who experience these surgeries. This language has been described as altruistic individualism. Decision making remains individualist, but it also shows considerable concern for others; egoism is complementary with altruism. The altruistic individualist is one of many incompatible identities that are predicted and described by the figure of the cyborg. Cyborgs suggest both ethics and qualitative methods appropriate to surgically shaped children.

  1. Solvent electronic polarization effects on a charge transfer excitation studied by the mean-field QM/MM method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakano, Hiroshi; Elements Strategy Initiative for Catalysts and Batteries, Kyoto University, Kyoto 615-8245

    2015-12-31

    Electronic polarization effects of a medium can have a significant impact on a chemical reaction in condensed phases. We discuss the effects on the charge transfer excitation of a chromophore, N,N-dimethyl-4-nitroaniline, in various solvents using the mean-field QM/MM method with a polarizable force field. The results show that the explicit consideration of the solvent electronic polarization effects is important especially for a solvent with a low dielectric constant when we study the solvatochromism of the chromophore.

  2. A deterministic particle method for one-dimensional reaction-diffusion equations

    NASA Technical Reports Server (NTRS)

    Mascagni, Michael

    1995-01-01

    We derive a deterministic particle method for the solution of nonlinear reaction-diffusion equations in one spatial dimension. This deterministic method is an analog of a Monte Carlo method for the solution of these problems that has been previously investigated by the author. The deterministic method leads to the consideration of a system of ordinary differential equations for the positions of suitably defined particles. We then consider explicit and implicit time-stepping methods for this system of ordinary differential equations, and we study Picard and Newton iterations for the solution of the implicit system. Next, we solve this system numerically and study the discretization error both analytically and numerically. Numerical computation shows that this deterministic method is automatically adaptive to large gradients in the solution.

  3. Deformed exponentials and portfolio selection

    NASA Astrophysics Data System (ADS)

    Rodrigues, Ana Flávia P.; Guerreiro, Igor M.; Cavalcante, Charles Casimiro

    In this paper, we present a method for portfolio selection based on the consideration of deformed exponentials in order to generalize methods based on the Gaussianity of portfolio returns, such as the Markowitz model. The proposed method generalizes the idea of optimizing mean-variance and mean-divergence models and allows more accurate behavior in situations where heavy-tailed distributions are needed to describe the returns at a given time instant, such as those observed in economic crises. Numerical results show the proposed method outperforms the Markowitz portfolio in cumulated returns, with a good convergence rate for the asset weights, which are found by means of a natural gradient algorithm.
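
    The Gaussian baseline that the deformed-exponential formulation generalizes is the Markowitz mean-variance problem, whose unconstrained solutions have familiar closed forms such as the minimum-variance weights S^-1 1 / (1' S^-1 1). A minimal sketch of that baseline on synthetic returns (no deformed exponentials, no natural-gradient search) is shown below for orientation.

    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic daily returns for 5 assets (placeholder data, not market data).
    n_days, n_assets = 500, 5
    returns = rng.multivariate_normal(
        mean=[0.0004, 0.0006, 0.0003, 0.0008, 0.0005],
        cov=0.0001 * (0.3 * np.ones((5, 5)) + 0.7 * np.eye(5)),
        size=n_days)

    mu = returns.mean(axis=0)
    sigma = np.cov(returns, rowvar=False)
    ones = np.ones(n_assets)
    inv = np.linalg.inv(sigma)

    # Global minimum-variance weights: w = S^-1 1 / (1' S^-1 1)
    w_minvar = inv @ ones / (ones @ inv @ ones)
    # Tangency-style mean-variance weights (risk-free rate taken as 0):
    # w = S^-1 mu / (1' S^-1 mu)
    w_tangency = inv @ mu / (ones @ inv @ mu)

    print("min-variance weights :", np.round(w_minvar, 3))
    print("mean-variance weights:", np.round(w_tangency, 3))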

  4. Applications of chemiluminescence to bacterial analysis

    NASA Technical Reports Server (NTRS)

    Searle, N. D.

    1975-01-01

    Luminol chemiluminescence method for detecting bacteria was based on microbial activation of the oxidation of the luminol monoanion by hydrogen peroxide. Elimination of the prior lysing step, previously used in the chemiluminescence technique, was shown to improve considerably the reproducibility and accuracy of the method in addition to simplifying it. An inexpensive, portable photomultiplier detector was used to measure the maximum light intensity produced when the sample is added to the reagent. Studies of cooling tower water show that the luminol chemiluminescence technique can be used to monitor changes in viable cell population both under normal conditions and during chlorine treatment. Good correlation between chemiluminescence and plate counts was also obtained in the analysis of process water used in paper mills. This method showed good potential for monitoring the viable bacteria populations in activated sludge used in waste treatment plants to digest organic matter.

  5. Optimization of microwave digestion for mercury determination in marine biological samples by cold vapour atomic absorption spectrometry.

    PubMed

    Cardellicchio, Nicola; Di Leo, Antonella; Giandomenico, Santina; Santoro, Stefania

    2006-01-01

    Optimization of acid digestion method for mercury determination in marine biological samples (dolphin liver, fish and mussel tissues) using a closed vessel microwave sample preparation is presented. Five digestion procedures with different acid mixtures were investigated: the best results were obtained when the microwave-assisted digestion was based on sample dissolution with HNO3-H2SO4-K2Cr2O7 mixture. A comparison between microwave digestion and conventional reflux digestion shows there are considerable losses of mercury in the open digestion system. The microwave digestion method has been tested satisfactorily using two certified reference materials. Analytical results show a good agreement with certified values. The microwave digestion proved to be a reliable and rapid method for decomposition of biological samples in mercury determination.

  6. Estimation of low back moments from video analysis: a validation study.

    PubMed

    Coenen, Pieter; Kingma, Idsart; Boot, Cécile R L; Faber, Gert S; Xu, Xu; Bongers, Paulien M; van Dieën, Jaap H

    2011-09-02

    This study aimed to develop, compare and validate two versions of a video analysis method for assessment of low back moments during occupational lifting tasks since for epidemiological studies and ergonomic practice relatively cheap and easily applicable methods to assess low back loads are needed. Ten healthy subjects participated in a protocol comprising 12 lifting conditions. Low back moments were assessed using two variants of a video analysis method and a lab-based reference method. Repeated measures ANOVAs showed no overall differences in peak moments between the two versions of the video analysis method and the reference method. However, two conditions showed a minor overestimation of one of the video analysis method moments. Standard deviations were considerable suggesting that errors in the video analysis were random. Furthermore, there was a small underestimation of dynamic components and overestimation of the static components of the moments. Intraclass correlations coefficients for peak moments showed high correspondence (>0.85) of the video analyses with the reference method. It is concluded that, when a sufficient number of measurements can be taken, the video analysis method for assessment of low back loads during lifting tasks provides valid estimates of low back moments in ergonomic practice and epidemiological studies for lifts up to a moderate level of asymmetry. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. The triangle of the urinary bladder in American mink (Mustela vison (Brisson, 1756)).

    PubMed

    Gościcka, D; Krakowiak, E; Kepczyńska, M

    1994-01-01

    Sixty bladders of American minks were dissected according to a conventional method. Biometrical analysis with the use of a digital image analysis system was applied to the triangles of the bladders. It was found that these triangles differ both in shape (narrow, broad) and in symmetry (considerable asymmetry). The ureteral orifices also showed variety in shape (five types) and number (double orifices).

  8. Courses of Study for the Preparation of Teachers of Manual Arts. Bulletin, 1918, No. 37

    ERIC Educational Resources Information Center

    Siepert, Albert F.

    1919-01-01

    The study presented in this bulletin shows considerable variation in the subjects included in the curriculum set up for prospective teachers. There is, however, a growing tendency toward a common standard, and the two-year courses are coming to have many elements in common both as to subject matter and methods of procedure. For example, in many…

  9. Non-Invasive Tissue Oxygenation Measurement Systems. Phase 1.

    DTIC Science & Technology

    1995-10-01

    vessels was shown ( in vivo hamster studies) to be a significant factor causing considerable variability in SaO2 in vessels ...shows that trends in blood oxygenation are tracked. The nearly universal applicability and turnkey type measurement capability of pulse oximeters are...approach early in Phase I that might be compatible with laser doppler blood flow measurements. Both methods depend on laser irradiation of the sample

  10. Passive Samplers for Investigations of Air Quality: Method Description, Implementation, and Comparison to Alternative Sampling Methods

    EPA Science Inventory

    This Paper covers the basics of passive sampler design, compares passive samplers to conventional methods of air sampling, and discusses considerations when implementing a passive sampling program. The Paper also discusses field sampling and sample analysis considerations to ensu...

  11. Automated Discrimination Method of Muscular and Subcutaneous Fat Layers Based on Tissue Elasticity

    NASA Astrophysics Data System (ADS)

    Inoue, Masahiro; Fukuda, Osamu; Tsubai, Masayoshi; Muraki, Satoshi; Okumura, Hiroshi; Arai, Kohei

    The balance among human body composition components, e.g. bones, muscles, and fat, is a major and basic indicator of personal health. Body composition analysis using ultrasound has developed rapidly. However, interpretation of the echo signal is conducted manually, and accuracy and confidence in interpretation require experience. This paper proposes an automated method for discriminating tissue boundaries in order to measure the thickness of the subcutaneous fat and muscular layers. A portable one-dimensional ultrasound device was used in this study. The proposed method discriminates tissue boundaries based on tissue elasticity. The validity of the proposed method was evaluated in twenty-one subjects (twelve women, nine men; aged 20-70 yr) at three anatomical sites. Experimental results show that the proposed method can achieve considerably high discrimination performance.

  12. A Comparison of Two Path Planners for Planetary Rovers

    NASA Technical Reports Server (NTRS)

    Tarokh, M.; Shiller, Z.; Hayati, S.

    1999-01-01

    The paper presents two path planners suitable for planetary rovers. The first is based on a fuzzy description of the terrain and a genetic algorithm to find a traversable path in rugged terrain. The second planner uses a global optimization method with a cost function that is the path distance divided by the velocity limit obtained from consideration of the rover's static and dynamic stability. A description of both methods is provided, and the resulting paths are given, which show the effectiveness of the path planners in finding near-optimal paths. The features of the methods and their suitability and application for rover path planning are compared.
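
    The second planner's cost, path distance divided by a stability-derived velocity limit, amounts to minimizing estimated traversal time over a terrain graph. A sketch on a small grid, with made-up per-cell velocity limits standing in for the rover stability analysis, might look like this:

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(3)
    rows, cols, cell = 20, 20, 1.0          # grid size and cell edge length (m)

    # Hypothetical per-cell velocity limits (m/s) that would, in the real planner,
    # come from static/dynamic stability analysis of the rover on the local terrain.
    v_limit = np.clip(rng.normal(0.6, 0.25, size=(rows, cols)), 0.05, 1.0)

    G = nx.grid_2d_graph(rows, cols)
    for (r1, c1), (r2, c2) in G.edges:
        dist = cell                          # 4-connected grid: unit step length
        v = min(v_limit[r1, c1], v_limit[r2, c2])
        G.edges[(r1, c1), (r2, c2)]["time"] = dist / v   # cost = distance / velocity limit

    start, goal = (0, 0), (rows - 1, cols - 1)
    path = nx.shortest_path(G, start, goal, weight="time")
    t_total = nx.shortest_path_length(G, start, goal, weight="time")
    print(f"cells on path: {len(path)}, estimated traversal time: {t_total:.1f} s")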

  13. Mean-Reverting Portfolio With Budget Constraint

    NASA Astrophysics Data System (ADS)

    Zhao, Ziping; Palomar, Daniel P.

    2018-05-01

    This paper considers the mean-reverting portfolio design problem arising from statistical arbitrage in the financial markets. We first propose a general problem formulation aimed at finding a portfolio of underlying component assets by optimizing a mean-reversion criterion characterizing the mean-reversion strength, taking into consideration the variance of the portfolio and an investment budget constraint. Then several specific problems are considered based on the general formulation, and efficient algorithms are proposed. Numerical results on both synthetic and market data show that our proposed mean-reverting portfolio design methods can generate consistent profits and outperform the traditional design methods and the benchmark methods in the literature.

  14. Critical considerations for the application of environmental DNA methods to detect aquatic species

    USGS Publications Warehouse

    Goldberg, Caren S.; Turner, Cameron R.; Deiner, Kristy; Klymus, Katy E.; Thomsen, Philip Francis; Murphy, Melanie A.; Spear, Stephen F.; McKee, Anna; Oyler-McCance, Sara J.; Cornman, Robert S.; Laramie, Matthew B.; Mahon, Andrew R.; Lance, Richard F.; Pilliod, David S.; Strickler, Katherine M.; Waits, Lisette P.; Fremier, Alexander K.; Takahara, Teruhiko; Herder, Jelger E.; Taberlet, Pierre

    2016-01-01

    Species detection using environmental DNA (eDNA) has tremendous potential for contributing to the understanding of the ecology and conservation of aquatic species. Detecting species using eDNA methods, rather than directly sampling the organisms, can reduce impacts on sensitive species and increase the power of field surveys for rare and elusive species. The sensitivity of eDNA methods, however, requires a heightened awareness and attention to quality assurance and quality control protocols. Additionally, the interpretation of eDNA data demands careful consideration of multiple factors. As eDNA methods have grown in application, diverse approaches have been implemented to address these issues. With interest in eDNA continuing to expand, supportive guidelines for undertaking eDNA studies are greatly needed. Environmental DNA researchers from around the world have collaborated to produce this set of guidelines and considerations for implementing eDNA methods to detect aquatic macroorganisms. Critical considerations for study design include preventing contamination in the field and the laboratory, choosing appropriate sample analysis methods, validating assays, testing for sample inhibition and following minimum reporting guidelines. Critical considerations for inference include temporal and spatial processes, limits of correlation of eDNA with abundance, uncertainty of positive and negative results, and potential sources of allochthonous DNA. We present a synthesis of knowledge at this stage for application of this new and powerful detection method.

  15. [Evaluation of the nutrition mode in children during the pubertal period with BMI < or = 5 percentile in the city of Szczecin].

    PubMed

    Goluch-Koniuszy, Zuzanna

    2010-01-01

    This study evaluated the nutrition of 13-year-old children during the pubertal growth spurt; their body mass and height were measured and used to calculate the BMI, which was related to the centile distribution for children from Warszawa. From a group of 1464 children, 79 persons (5.4% of those investigated) with BMI ≤ 5th percentile, i.e. with underweight and considerable underweight, were selected. Their menus from three randomly chosen weekdays were obtained. Analysis of the nutrition of the children with underweight and considerable underweight showed a diet low in energy, dietary fibre (cellulose), mineral components (K, Ca, Mg) and fluids, while total and animal protein, fat, cholesterol, mineral components (Na, P, Fe, Cu, Zn), and vitamins A, C, E (girls) and of the B group occurred simultaneously in the diet. The children underwent special pro-health education in the form of "live" workshops.

  16. Credibilistic multi-period portfolio optimization based on scenario tree

    NASA Astrophysics Data System (ADS)

    Mohebbi, Negin; Najafi, Amir Abbas

    2018-02-01

    In this paper, we consider a multi-period fuzzy portfolio optimization model that accounts for transaction costs and the possibility of risk-free investment. We formulate a bi-objective mean-VaR portfolio selection model based on the integration of fuzzy credibility theory and a scenario tree in order to deal with market uncertainty. The scenario tree is also an appropriate device for modeling multi-period portfolio problems, given the length and continuity of their horizon. We take return and risk as well as cardinality, threshold, class, and liquidity constraints into consideration to bring the model closer to reality. Then, an interactive dynamic programming method, based on a two-phase fuzzy interactive approach, is employed to solve the proposed model. In order to verify the proposed model, we present an empirical application on the NYSE under different circumstances. The results show that the consideration of data uncertainty and other real-world assumptions leads to more practical and efficient solutions.

  17. The Principle-Based Method of Practical Ethics.

    PubMed

    Spielthenner, Georg

    2017-09-01

    This paper is about the methodology of doing practical ethics. There is a variety of methods employed in ethics. One of them is the principle-based approach, which has an established place in ethical reasoning. In everyday life, we often judge the rightness and wrongness of actions by their conformity to principles, and the appeal to principles plays a significant role in practical ethics, too. In this paper, I try to provide a better understanding of the nature of principle-based reasoning. To accomplish this, I show in the first section that these principles can be applied to cases in a meaningful and sufficiently precise way. The second section discusses the question how relevant applying principles is to the resolution of ethical issues. This depends on their nature. I argue that the principles under consideration in this paper should be interpreted as presumptive principles and I conclude that although they cannot be expected to bear the weight of definitely resolving ethical problems, these principles can nevertheless play a considerable role in ethical research.

  18. 24 h Accelerometry: impact of sleep-screening methods on estimates of sedentary behaviour and physical activity while awake.

    PubMed

    Meredith-Jones, Kim; Williams, Sheila; Galland, Barbara; Kennedy, Gavin; Taylor, Rachael

    2016-01-01

    Although accelerometers can assess sleep and activity over 24 h, sleep data must be removed before physical activity and sedentary time can be examined appropriately. We compared the effect of 6 different sleep-scoring rules on physical activity and sedentary time. Activity and sleep were obtained by accelerometry (ActiGraph GT3X) over 7 days in 291 children (51.3% overweight or obese) aged 4-8.9 years. Three methods removed sleep using individualised time filters and two methods applied standard time filters to remove sleep each day (9 pm-6 am, 12 am-6 am). The final method did not remove sleep but simply defined non-wear as at least 60 min of consecutive zeros over the 24-h period. Different methods of removing sleep from 24-h data markedly affect estimates of sedentary time, yielding values ranging from 556 to 1145 min/day. Estimates of non-wear time (33-193 min), wear time (736-1337 min) and counts per minute (384-658) also showed considerable variation. By contrast, estimates of moderate-to-vigorous activity (MVPA) were similar, varying by less than 1 min/day. Different scoring methods to remove sleep from 24-h accelerometry data do not affect measures of MVPA, whereas estimates of counts per minute and sedentary time depend considerably on which technique is used.
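
    In practice the scoring rules differ only in which epochs are excluded before the waking day is summarized. The sketch below contrasts a fixed 21:00-06:00 filter with an individualized sleep-log filter on one synthetic day of minute epochs, using an assumed <100 counts-per-minute sedentary cut-point; none of the values correspond to the study data.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(4)

    # One synthetic day of 1-minute epochs: low counts overnight, mixed by day.
    idx = pd.date_range("2024-05-01 00:00", periods=24 * 60, freq="min")
    counts = rng.poisson(600, size=len(idx)).astype(float)
    night = (idx.hour >= 22) | (idx.hour < 7)
    counts[night] = rng.poisson(5, size=night.sum())
    epochs = pd.DataFrame({"counts": counts}, index=idx)

    SEDENTARY_CPM = 100        # conventional cut-point, assumed here

    def waking_summary(df, sleep_mask):
        awake = df[~sleep_mask]
        sedentary_min = int((awake["counts"] < SEDENTARY_CPM).sum())
        return {"wear_min": len(awake), "sedentary_min": sedentary_min}

    # Rule 1: standard fixed filter, remove 9 pm - 6 am every day.
    fixed_mask = (epochs.index.hour >= 21) | (epochs.index.hour < 6)

    # Rule 2: individualized filter from a (hypothetical) sleep log for this child.
    log_onset = pd.Timestamp("2024-05-01 20:15")
    log_wake = pd.Timestamp("2024-05-01 07:05")
    individual_mask = (epochs.index >= log_onset) | (epochs.index < log_wake)

    print("fixed 21:00-06:00 :", waking_summary(epochs, fixed_mask))
    print("individual log    :", waking_summary(epochs, individual_mask))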

  19. Are LOD and LOQ Reliable Parameters for Sensitivity Evaluation of Spectroscopic Methods?

    PubMed

    Ershadi, Saba; Shayanfar, Ali

    2018-03-22

    The limit of detection (LOD) and the limit of quantification (LOQ) are common parameters to assess the sensitivity of analytical methods. In this study, the LOD and LOQ of previously reported terbium sensitized analysis methods were calculated by different methods, and the results were compared with sensitivity parameters [lower limit of quantification (LLOQ)] of U.S. Food and Drug Administration guidelines. The details of the calibration curve and standard deviation of blank samples of three different terbium-sensitized luminescence methods for the quantification of mycophenolic acid, enrofloxacin, and silibinin were used for the calculation of LOD and LOQ. A comparison of LOD and LOQ values calculated by various methods and LLOQ shows a considerable difference. The significant difference of the calculated LOD and LOQ with various methods and LLOQ should be considered in the sensitivity evaluation of spectroscopic methods.
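
    The blank/slope convention under comparison here is the familiar pair LOD = 3.3·σ_blank/S and LOQ = 10·σ_blank/S, with S the calibration slope, whereas the LLOQ of the regulatory guideline is defined operationally from the accuracy and precision of the lowest calibrator. A minimal sketch of the first pair with made-up calibration data:

    import numpy as np

    # Hypothetical calibration data: concentration (ng/mL) vs. luminescence signal.
    conc = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0])
    signal = np.array([12.0, 118.0, 230.0, 575.0, 1140.0, 2290.0])

    # Replicate blank measurements (hypothetical) for the noise estimate.
    blanks = np.array([10.8, 12.4, 11.6, 13.0, 11.1, 12.5, 10.9, 12.2])

    slope, intercept = np.polyfit(conc, signal, 1)   # calibration slope S and intercept
    sigma = blanks.std(ddof=1)                       # standard deviation of blanks

    lod = 3.3 * sigma / slope                        # limit of detection
    loq = 10.0 * sigma / slope                       # limit of quantification

    print(f"slope S = {slope:.1f} signal units per ng/mL")
    print(f"LOD  = {lod:.3f} ng/mL")
    print(f"LOQ  = {loq:.3f} ng/mL")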

  20. Segmentation of Vasculature from Fluorescently Labeled Endothelial Cells in Multi-Photon Microscopy Images.

    PubMed

    Bates, Russell; Irving, Benjamin; Markelc, Bostjan; Kaeppler, Jakob; Brown, Graham; Muschel, Ruth J; Brady, Sir Michael; Grau, Vicente; Schnabel, Julia A

    2017-08-09

    Vasculature is known to be of key biological significance, especially in the study of tumors. As such, considerable effort has been focused on the automated segmentation of vasculature in medical and pre-clinical images. The majority of vascular segmentation methods focus on bloodpool labeling methods; however, particularly in the study of tumors, it is of interest to be able to visualize both perfused and non-perfused vasculature. Imaging vasculature by highlighting the endothelium provides a way to separate the morphology of vasculature from the potentially confounding factor of perfusion. Here we present a method for the segmentation of tumor vasculature in 3D fluorescence microscopy images using signals from the endothelial and surrounding cells. We show that our method can provide complete and semantically meaningful segmentations of complex vasculature using a supervoxel-Markov Random Field approach. We show that, in terms of extracting meaningful segmentations of the vasculature, our method outperforms both a state-of-the-art method specific to these data and more classical vasculature segmentation methods.

  1. Phase transition of a new lattice hydrodynamic model with consideration of on-ramp and off-ramp

    NASA Astrophysics Data System (ADS)

    Zhang, Geng; Sun, Di-hua; Zhao, Min

    2018-01-01

    A new traffic lattice hydrodynamic model with consideration of an on-ramp and an off-ramp is proposed in this paper. The influence of the on-ramp and off-ramp on the stability of the main road is uncovered by theoretical analysis and computer simulation. Through linear stability theory, the neutral stability condition of the new model is obtained, and the results show that the unstable region in the phase diagram is enlarged by considering the on-ramp effect but shrunk with consideration of the off-ramp effect. The mKdV equation near the critical point is derived via the nonlinear reductive perturbation method, and the occurrence of the traffic jamming transition can be described by the kink-antikink soliton solution of the mKdV equation. The simulation results for the space-time evolution of traffic density waves show that the on-ramp can worsen the traffic stability of the main road, whereas the off-ramp helps to stabilize the traffic flow of the main road.

  2. Modeling of unit operating considerations in generating-capacity reliability evaluation. Volume 1. Mathematical models, computing methods, and results. Final report. [GENESIS, OPCON and OPPLAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, A.D.; Ayoub, A.K.; Singh, C.

    1982-07-01

    Existing methods for generating capacity reliability evaluation do not explicitly recognize a number of operating considerations which may have important effects in system reliability performance. Thus, current methods may yield estimates of system reliability which differ appreciably from actual observed reliability. Further, current methods offer no means of accurately studying or evaluating alternatives which may differ in one or more operating considerations. Operating considerations which are considered to be important in generating capacity reliability evaluation include: unit duty cycles as influenced by load cycle shape, reliability performance of other units, unit commitment policy, and operating reserve policy; unit start-up failures distinct from unit running failures; unit start-up times; and unit outage postponability and the management of postponable outages. A detailed Monte Carlo simulation computer model called GENESIS and two analytical models called OPCON and OPPLAN have been developed which are capable of incorporating the effects of many operating considerations including those noted above. These computer models have been used to study a variety of actual and synthetic systems and are available from EPRI. The new models are shown to produce system reliability indices which differ appreciably from index values computed using traditional models which do not recognize operating considerations.

  3. Earth Observatory Satellite system definition study. Report no. 4: Management approach recommendations

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A management approach for the Earth Observatory Satellite (EOS) which will meet the challenge of a constrained cost environment is presented. Areas of consideration are contracting techniques, test philosophy, reliability and quality assurance requirements, commonality options, and documentation and control requirements. The various functional areas which were examined for cost reduction possibilities are identified. The recommended management approach is developed to show the primary and alternative methods.

  4. Phosphate interference during in situ treatment for arsenic in groundwater.

    PubMed

    Brunsting, Joseph H; McBean, Edward A

    2014-01-01

    Contamination of groundwater by arsenic is a problem in many areas of the world, particularly in West Bengal (India) and Bangladesh, where reducing conditions in groundwater are the cause. In situ treatment is a novel approach wherein, by introduction of dissolved oxygen (DO), advantages over other treatment methods can be achieved through simplicity, not using chemicals, and not requiring disposal of arsenic-rich wastes. A lab-scale test of in situ treatment by air sparging, using a solution with approximately 5.3 mg L(-1) ferrous iron and 200 μg L(-1) arsenate, showed removal of arsenate in the range of 59%. A significant obstacle exists, however, due to the interference of phosphate since phosphate competes for adsorption sites on oxidized iron precipitates. A lab-scale test including 0.5 mg L(-1) phosphate showed negligible removal of arsenate. In situ treatment by air sparging demonstrates considerable promise for removal of arsenic from groundwater where iron is present in considerable quantities and phosphates are low.

  5. Spectroscopic investigation of herpes simplex viruses infected cells and their response to antiviral therapy

    NASA Astrophysics Data System (ADS)

    Erukhimovitch, Vitaly; Talyshinsky, Marina; Souprun, Yelena; Huleihel, Mahmoud

    2006-07-01

    In the present study, we used microscopic Fourier transform infrared spectroscopy (FTIR) to evaluate the antiviral activity of known antiviral agents against herpes viruses. The antiviral activity of caffeic acid phenethyl ester (CAPE) (an active compound of propolis) against herpes simplex types 1 and 2 was examined in cell culture. The advantage of microscopic FTIR spectroscopy over conventional FTIR spectroscopy is that it facilitates inspection of restricted regions of a cell culture or tissue. Our results showed significant spectral differences at early stages of infection between infected and non-infected cells, and between infected cells treated with the antiviral agent and those not treated. In infected cells, there was a considerable increase in phosphate levels. Our results show that treatment with the antiviral agent considerably abolished the spectral changes induced by the viral infection. In addition, it is possible to track by the FTIR microscopy method the differential effect of various doses of the drug.

  6. Preliminary geological investigation of AIS data at Mary Kathleen, Queensland, Australia

    NASA Technical Reports Server (NTRS)

    Huntington, J. F.; Green, A. A.; Craig, M. D.; Cocks, T. D.

    1986-01-01

    The Airborne Imaging Spectrometer (AIS) was flown over granitic, volcanic, and calc-silicate terrain around the Mary Kathleen Uranium Mine in Queensland, in a test of its mineralogical mapping capabilities. An analysis strategy and restoration and enhancement techniques were developed to process the 128-band AIS data. A preliminary analysis of one of three AIS flight lines shows that the data contain considerable spectral variation but that they are also contaminated by second-order leakage of radiation from the near-infrared region. This makes the recognition of expected spectral absorption shapes very difficult. The effect appears worst in terrains containing considerable vegetation. Techniques that try to predict this supplementary radiation, coupled with the log residual analytical technique, show that expected mineral absorption spectra can be derived. The techniques suggest that, with additional refinement of the correction procedures, the Australian AIS data may be revised. Application of the log residual analysis method has proved very successful on the Cuprite, Nevada data set, and for highlighting the alunite, linite, and SiOH mineralogy.

  7. Spatio-temporal scaling effects on longshore sediment transport pattern along the nearshore zone

    NASA Astrophysics Data System (ADS)

    Khorram, Saeed; Ergil, Mustafa

    2018-03-01

    Entropy, a measure of uncertainty, has been employed in applications as diverse as probabilistic inference in coastal engineering. Theories integrating entropy with sediment transport offer novel perspectives for coastal analysis and modeling whose application and development still have far to go. In the present paper, an effort has been made to propose a method based on an entropy-power index for the analysis of spatio-temporal patterns. Results from a beach-area case study show that the index is suitable for analyzing components of marine/hydrological ecosystems. The method makes use of monthly data from six Makran coastal records (1970-2015) and studies variables such as spatio-temporal patterns, the longshore sediment transport rate (LSTR), wind speed, and wave height, all of which are time-dependent and play considerable roles in terrestrial coastal investigations; these variables show meaningful spatio-temporal variability most of the time, but their combined behavior is not easy to explain. Accordingly, the use of an entropy-power index can reveal considerable signals that facilitate the evaluation of water resources and provide insight into the interactions of hydrological parameters at scales as large as beach areas. Results have revealed that an STDDPI (entropy-based spatio-temporal disorder dynamics power index) can simulate wave, longshore sediment transport rate, and wind behavior when granulometry, concentration, and flow conditions vary.

  8. Rotational response of suspended particles to turbulent flow: laboratory and numerical synthesis

    NASA Astrophysics Data System (ADS)

    Variano, Evan; Zhao, Lihao; Byron, Margaret; Bellani, Gabriele; Tao, Yiheng; Andersson, Helge

    2014-11-01

    Using laboratory and DNS measurements, we consider how aspherical and inertial particles suspended in a turbulent flow act to ``filter'' the fluid-phase vorticity. We use three approaches to predict the magnitude and structure of this filter. The first approach is based on Buckingham's Pi theorem, which shows a clear result for the relationship between filter strength and particle aspect ratio. Results are less clear for the dependence of filter strength on Stokes number; we briefly discuss some issues in the proper definition of Stokes number for use in this context. The second approach to predicting filter strength is based on a consideration of vorticity and enstrophy spectra in the fluid phase. This method has a useful feature: it can be used to predict the filter a priori, without need for measurements as input. We compare the results of this approach to measurements as a method of validation. The third and final approach to predicting filter strength is from the consideration of torques experienced by particles, and how the ``angular slip'' or ``spin slip'' evolves in an unsteady flow. We show results from our DNS that indicate different flow conditions in which the spin slip is more or less important in setting the particle rotation dynamics. Collaboration made possible by the Peder Sather Center.

  9. Semiautomated Device for Batch Extraction of Metabolites from Tissue Samples

    PubMed Central

    2012-01-01

    Metabolomics has become a mainstream analytical strategy for investigating metabolism. The quality of data derived from these studies is proportional to the consistency of the sample preparation. Although considerable research has been devoted to finding optimal extraction protocols, most of the established methods require extensive sample handling. Manual sample preparation can be highly effective in the hands of skilled technicians, but an automated tool for purifying metabolites from complex biological tissues would be of obvious utility to the field. Here, we introduce the semiautomated metabolite batch extraction device (SAMBED), a new tool designed to simplify metabolomics sample preparation. We discuss SAMBED’s design and show that SAMBED-based extractions are of comparable quality to extracts produced through traditional methods (13% mean coefficient of variation from SAMBED versus 16% from manual extractions). Moreover, we show that aqueous SAMBED-based methods can be completed in less than a quarter of the time required for manual extractions. PMID:22292466

  10. Acquisition of Robotic Giant-swing Motion Using Reinforcement Learning and Its Consideration of Motion Forms

    NASA Astrophysics Data System (ADS)

    Sakai, Naoki; Kawabe, Naoto; Hara, Masayuki; Toyoda, Nozomi; Yabuta, Tetsuro

    This paper shows how a compact humanoid robot can acquire a giant-swing motion without any robot model by using the Q-learning method. It is widely said that Q-learning is not appropriate for learning dynamic motions because the Markov property is not necessarily guaranteed during a dynamic task. However, we tried to solve this problem by embedding the angular velocity into the state definition and by averaging the Q-learning updates to reduce dynamic effects, although non-Markov effects remain in the learning results. The result shows how the robot can acquire a giant-swing motion by using the Q-learning algorithm. The successfully acquired motions are analyzed from the viewpoint of dynamics in order to realize a functional giant-swing motion. Finally, the result shows how this method can avoid the stagnant action loop around the bottom of the horizontal bar during the early stage of the giant-swing motion.
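
    A minimal tabular Q-learning sketch of the idea described above, assuming a state made of a discretized joint angle and angular velocity; the bin counts, learning parameters, reward, and action set are illustrative assumptions, not the robot setup used in the paper.

      import numpy as np

      # Illustrative discretization; the paper's actual robot states and actions differ.
      N_ANGLE, N_VEL, N_ACTIONS = 12, 8, 3      # angle bins, angular-velocity bins, swing actions
      ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1        # learning rate, discount, exploration rate

      Q = np.zeros((N_ANGLE, N_VEL, N_ACTIONS))

      def choose_action(state, rng):
          """Epsilon-greedy action selection over the tabular Q-function."""
          if rng.random() < EPS:
              return int(rng.integers(N_ACTIONS))
          return int(np.argmax(Q[state]))

      def update(state, action, reward, next_state):
          """One-step Q-learning backup; state = (angle_bin, velocity_bin).
          Averaging over repeated visits, as the paper suggests to damp dynamic
          effects, would amount to replacing ALPHA with 1/visit_count."""
          best_next = np.max(Q[next_state])
          Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])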

  11. Thermal quantum time-correlation functions from classical-like dynamics

    NASA Astrophysics Data System (ADS)

    Hele, Timothy J. H.

    2017-07-01

    Thermal quantum time-correlation functions are of fundamental importance in quantum dynamics, allowing experimentally measurable properties such as reaction rates, diffusion constants and vibrational spectra to be computed from first principles. Since the exact quantum solution scales exponentially with system size, there has been considerable effort in formulating reliable linear-scaling methods involving exact quantum statistics and approximate quantum dynamics modelled with classical-like trajectories. Here, we review recent progress in the field with the development of methods including centroid molecular dynamics (CMD), ring polymer molecular dynamics (RPMD) and thermostatted RPMD (TRPMD). We show how these methods have recently been obtained from 'Matsubara dynamics', a form of semiclassical dynamics which conserves the quantum Boltzmann distribution. We also apply the Matsubara formalism to reaction rate theory, rederiving t → 0+ quantum transition-state theory (QTST) and showing that Matsubara-TST, like RPMD-TST, is equivalent to QTST. We end by surveying areas for future progress.
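
    For context, the central quantity that CMD, RPMD, TRPMD and Matsubara dynamics all approximate is the Kubo-transformed thermal time-correlation function; its standard definition (not specific to this review) is:

      \tilde{C}_{AB}(t) = \frac{1}{\beta Z} \int_0^{\beta} \mathrm{d}\lambda \,
          \mathrm{Tr}\!\left[ e^{-(\beta-\lambda)\hat{H}} \hat{A} \,
          e^{-\lambda\hat{H}} \, e^{i\hat{H}t/\hbar} \hat{B} \, e^{-i\hat{H}t/\hbar} \right],
      \qquad Z = \mathrm{Tr}\, e^{-\beta\hat{H}}, \quad \beta = 1/k_{\mathrm{B}}T .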

  12. The antimicrobial activity of probiotic bacteria Escherichia coli isolated from different natural sources against hemorrhagic E. coli O157:H7.

    PubMed

    Karimi, Sahar; Azizi, Fatemeh; Nayeb-Aghaee, Mohammad; Mahmoodnia, Leila

    2018-03-01

    Diarrheal diseases are seen in all geographical areas throughout the world; addressing their treatment is therefore a necessary action. The aim of this study was to determine the antimicrobial effect of probiotic bacterial strains isolated from different natural sources against 2 pathotypes of pathogenic E. coli. This cross-sectional study at Martyr Chamran University of Ahvaz was carried out from December 2013 to July 2014. A total of 13 probiotic colonies were isolated from 20 samples of traditional dairy products (yogurt, cheese, milk) and 20 samples of vegetables (carrots and red and white cabbages), from which 5 isolates were randomly selected to evaluate the antimicrobial effect against the 2 Escherichia coli pathotypes. The antimicrobial effect was evaluated using two methods, disk diffusion and well diffusion tests, by measuring the growth inhibition zones of the probiotics against the 2 pathotypes of pathogenic E. coli. The results showed growth inhibition effects of all 5 probiotic strains against the Escherichia coli pathotypes in both methods. All selected strains showed considerable antimicrobial effect on the Escherichia coli O157:H7 strain, but had no inhibitory effect against enterohemorrhagic Escherichia coli. This study demonstrated a considerable antimicrobial effect against the E. coli O157:H7 strain. Given these characteristic antimicrobial effects of probiotic bacteria, increasing use of probiotics as a natural and modern method for the prevention of different diseases is recommended.

  13. Acoustic window planning for ultrasound acquisition.

    PubMed

    Göbl, Rüdiger; Virga, Salvatore; Rackerseder, Julia; Frisch, Benjamin; Navab, Nassir; Hennersperger, Christoph

    2017-06-01

    Autonomous robotic ultrasound has recently gained considerable interest, especially for collaborative applications. Existing methods for acquisition trajectory planning are solely based on geometrical considerations, such as the pose of the transducer with respect to the patient surface. This work aims at establishing acoustic window planning to enable autonomous ultrasound acquisitions of anatomies with restricted acoustic windows, such as the liver or the heart. We propose a fully automatic approach for the planning of acquisition trajectories, which only requires information about the target region as well as existing tomographic imaging data, such as X-ray computed tomography. The framework integrates both geometrical and physics-based constraints to estimate the best ultrasound acquisition trajectories with respect to the available acoustic windows. We evaluate the developed method using virtual planning scenarios based on real patient data as well as for real robotic ultrasound acquisitions on a tissue-mimicking phantom. The proposed method yields superior image quality in comparison with a naive planning approach, while maintaining the necessary coverage of the target. We demonstrate that, by taking image formation properties into account, acquisition planning methods can outperform naive planning. Furthermore, we show the need for such planning techniques, since naive approaches are not sufficient as they do not take the expected image quality into account.

  14. Comparison of ESI- and APCI-LC-MS/MS methods: A case study of levonorgestrel in human plasma.

    PubMed

    Wang, Rulin; Zhang, Lin; Zhang, Zunjian; Tian, Yuan

    2016-12-01

    Electrospray ionization (ESI) and atmospheric pressure chemical ionization (APCI) techniques for the liquid chromatography-tandem mass spectrometry (LC-MS/MS) determination of levonorgestrel were evaluated. In consideration of the difference in ionization mechanism, the two ionization sources were compared in terms of LC conditions, MS parameters, and method performance. The detection sensitivity for levonorgestrel with ESI was 0.25 ng/mL, lower (i.e., more sensitive) than the 1 ng/mL obtained with APCI. Matrix effects were evaluated for levonorgestrel and canrenone (internal standard, IS) in human plasma, and the results showed that the APCI source appeared to be slightly less liable to matrix effects than the ESI source. With an overall consideration, ESI was chosen as the better ionization technique for rapid and sensitive quantification of levonorgestrel. The optimized LC-ESI-MS/MS method was validated over a linear range of 0.25-50 ng/mL with a correlation coefficient ≥0.99. The intra- and inter-batch precision and accuracy were within 11.72% and 6.58%, respectively. The application of this method was demonstrated by a bioequivalence study following a single oral administration of 1.5 mg levonorgestrel tablets in 21 healthy Chinese female volunteers.

  15. An Improved Evidential-IOWA Sensor Data Fusion Approach in Fault Diagnosis

    PubMed Central

    Zhou, Deyun; Zhuang, Miaoyan; Fang, Xueyi; Xie, Chunhe

    2017-01-01

    As an important tool of information fusion, Dempster–Shafer evidence theory is widely applied in handling the uncertain information in fault diagnosis. However, an incorrect result may be obtained if the combined evidence is highly conflicting, which may lead to failure in locating the fault. To deal with this problem, an improved evidential-Induced Ordered Weighted Averaging (IOWA) sensor data fusion approach is proposed in the framework of Dempster–Shafer evidence theory. In the new method, the IOWA operator is used to determine the weight of each sensor data source; when determining the parameters of the IOWA operator, both the distance of evidence and the belief entropy are taken into consideration. First, based on the global distance of evidence and the global belief entropy, the α value of the IOWA operator is obtained. Simultaneously, a weight vector is given based on a maximum entropy model. Then, according to the IOWA operator, the evidence is modified before applying Dempster's combination rule. The proposed method has a better performance in conflict management and fault diagnosis due to the fact that the information volume of each piece of evidence is taken into consideration. A numerical example and a case study in fault diagnosis are presented to show the rationality and efficiency of the proposed method. PMID:28927017
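
    A small Python sketch of the overall pipeline described above: sensor evidence is first modified by weighted averaging and then fused with Dempster's rule. How the weights are actually derived from the IOWA operator, the evidence distance and the belief entropy is not reproduced here; the weights, fault hypotheses and mass values below are assumptions.

      from itertools import product

      def dempster_combine(m1, m2):
          """Dempster's rule for mass functions keyed by frozenset focal elements."""
          combined, conflict = {}, 0.0
          for (a, pa), (b, pb) in product(m1.items(), m2.items()):
              inter = a & b
              if inter:
                  combined[inter] = combined.get(inter, 0.0) + pa * pb
              else:
                  conflict += pa * pb
          return {s: v / (1.0 - conflict) for s, v in combined.items()}

      def weighted_average(masses, weights):
          """Modify the evidence by weighted averaging (weights assumed given, summing to 1)."""
          avg = {}
          for m, w in zip(masses, weights):
              for s, v in m.items():
                  avg[s] = avg.get(s, 0.0) + w * v
          return avg

      # Toy example with three sensor reports over faults F1, F2, F3.
      F1, F2, F3 = frozenset({"F1"}), frozenset({"F2"}), frozenset({"F3"})
      reports = [{F1: 0.6, F2: 0.3, F3: 0.1},
                 {F2: 0.8, F1: 0.1, F3: 0.1},
                 {F1: 0.7, F2: 0.2, F3: 0.1}]
      avg = weighted_average(reports, [0.4, 0.2, 0.4])
      fused = avg
      for _ in range(len(reports) - 1):      # combine the averaged evidence n-1 times
          fused = dempster_combine(fused, avg)
      print(fused)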

  16. Efficient Voronoi volume estimation for DEM simulations of granular materials under confined conditions

    PubMed Central

    Frenning, Göran

    2015-01-01

    When the discrete element method (DEM) is used to simulate confined compression of granular materials, the need arises to estimate the void space surrounding each particle with Voronoi polyhedra. This entails recurring Voronoi tessellation with small changes in the geometry, resulting in a considerable computational overhead. To overcome this limitation, we propose a method with the following features:
    • A local determination of the polyhedron volume is used, which considerably simplifies implementation of the method.
    • A linear approximation of the polyhedron volume is utilised, with intermittent exact volume calculations when needed.
    • The method allows highly accurate volume estimates to be obtained at a considerably reduced computational cost. PMID:26150975
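
    A schematic Python sketch of the linearised bookkeeping implied above: each particle's Voronoi volume is updated with a first-order approximation, and the exact local tessellation is redone only intermittently. The threshold, the gradient estimate and the function names are assumptions, not the author's implementation.

      import numpy as np

      class VoronoiVolumeTracker:
          """Track one particle's void volume with a linear approximation,
          falling back to an exact local tessellation when the accumulated
          displacement grows too large (illustrative sketch)."""

          def __init__(self, exact_volume_fn, gradient_fn, tol=0.05):
              self.exact_volume_fn = exact_volume_fn   # exact local polyhedron volume
              self.gradient_fn = gradient_fn           # estimate of dV/dx at the last exact point
              self.tol = tol                           # assumed displacement threshold
              self.v0 = self.grad = self.disp = None

          def reset(self, x):
              self.v0 = self.exact_volume_fn(x)
              self.grad = self.gradient_fn(x)
              self.disp = np.zeros_like(x)
              return self.v0

          def volume(self, x, dx):
              """Volume estimate after the particle moved by dx this step."""
              if self.v0 is None:
                  return self.reset(x)
              self.disp = self.disp + dx
              if np.linalg.norm(self.disp) > self.tol:
                  return self.reset(x)                        # intermittent exact recomputation
              return self.v0 + float(self.grad @ self.disp)   # linear volume approximation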

  17. Compound analysis via graph kernels incorporating chirality.

    PubMed

    Brown, J B; Urata, Takashi; Tamura, Takeyuki; Arai, Midori A; Kawabata, Takeo; Akutsu, Tatsuya

    2010-12-01

    High accuracy is paramount when predicting biochemical characteristics using Quantitative Structure-Property Relationships (QSPRs). Although existing graph-theoretic kernel methods combined with machine learning techniques are efficient for QSPR model construction, they cannot distinguish topologically identical chiral compounds, which often exhibit different biological characteristics. In this paper, we propose a new method that extends the recently developed tree pattern graph kernel to accommodate stereoisomers. We show that Support Vector Regression (SVR) with a chiral graph kernel is useful for target property prediction by demonstrating its application to a set of human vitamin D receptor ligands currently under consideration for their potential anti-cancer effects.
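
    Once a kernel matrix between compounds is available, the regression step mentioned above is standard; a sketch with scikit-learn's SVR on a precomputed kernel follows. Computing the chirality-aware tree pattern kernel itself is the paper's contribution and is not reproduced, and the C and epsilon values are illustrative assumptions.

      from sklearn.svm import SVR

      def fit_and_predict(K_train, y_train, K_test):
          """K_train: (n_train, n_train) kernel between training compounds;
          K_test: (n_test, n_train) kernel between test and training compounds."""
          model = SVR(kernel="precomputed", C=10.0, epsilon=0.1)
          model.fit(K_train, y_train)
          return model.predict(K_test)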

  18. The any particle molecular orbital grid-based Hartree-Fock (APMO-GBHF) approach

    NASA Astrophysics Data System (ADS)

    Posada, Edwin; Moncada, Félix; Reyes, Andrés

    2018-02-01

    The any particle molecular orbital grid-based Hartree-Fock approach (APMO-GBHF) is proposed as an initial step to perform multi-component post-Hartree-Fock, explicitly correlated, and density functional theory methods without basis set errors. The method has been applied to a number of electronic and multi-species molecular systems. Results of these calculations show that the APMO-GBHF total energies are comparable with those obtained at the APMO-HF complete basis set limit. In addition, results reveal a considerable improvement in the description of the nuclear cusps of electronic and non-electronic densities.

  19. Variational method of determining effective moduli of polycrystals: (A) hexagonal symmetry, (B) trigonal symmetry

    USGS Publications Warehouse

    Peselnick, L.; Meister, R.

    1965-01-01

    Variational principles of anisotropic elasticity have been applied to aggregates of randomly oriented pure-phase polycrystals having hexagonal symmetry and trigonal symmetry. The bounds of the effective elastic moduli obtained in this way show a considerable improvement over the bounds obtained by means of the Voigt and Reuss assumptions. The Hill average is found to be in most cases a good approximation when compared to the bounds found from the variational method. The new bounds reduce in their limits to the Voigt and Reuss values. © 1965 The American Institute of Physics.
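
    For reference, the Voigt-Reuss bounds and the Hill average that the new variational bounds are compared against take the standard form (M denotes either the effective bulk or shear modulus of the aggregate):

      M_{\mathrm{Reuss}} \;\le\; M_{\mathrm{eff}} \;\le\; M_{\mathrm{Voigt}},
      \qquad
      M_{\mathrm{Hill}} = \tfrac{1}{2}\left( M_{\mathrm{Voigt}} + M_{\mathrm{Reuss}} \right).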

  20. Method and graphs for the evaluation of air-induction systems

    NASA Technical Reports Server (NTRS)

    Brajnikoff, George B

    1953-01-01

    Graphs have been developed for rapid evaluation of air-induction systems from considerations of their aerodynamic-performance parameters in combination with power-plant characteristics. The graphs cover the range of supersonic Mach numbers to 3.0. Examples are presented for an air-induction system and engine combination at two Mach numbers and two altitudes in order to illustrate the method and the application of the graphs. The examples show that jet-engine characteristics impose restrictions on the use of fixed inlets if the maximum net thrusts are to be realized at all flight conditions. (author)

  1. Detection of Candida species by nested PCR method in one-humped camels (Camelus dromedarius).

    PubMed

    Parin, Ugur; Erbas, Goksel; Kirkan, Sukru; Savasan, Serap; Tugba Yuksel, H; Balat, Gamze

    2018-02-01

    Systemic fungal diseases are infections caused by incorrect treatment protocols and are generally not taken into consideration, especially in the veterinary field. One-humped camels are found on the western side of the Aegean region of our country and are bred for wrestling. The aim of this study was to diagnose systemic fungal infection from camel blood samples by the PCR method. In this study, specific primers for DNA topoisomerase II gene sequences were used. As a result, a systemic fungal infection was detected by the nested PCR method in 10 (20%) out of 50 DNA samples taken from camels located on the western side of the Aegean region. In this study, 3 (30%) samples were identified as Candida albicans, 3 (30%) as C. glabrata, and 4 (40%) as C. parapsilosis. In conclusion, the 20% positive systemic fungal infection rate in one-humped camels observed in the present study shows that systemic fungal infections are not given due consideration in veterinary medicine. Further studies are suggested in order to obtain and maintain extensive data on systemic fungal diseases in one-humped camels in our country.

  2. Boosting compound-protein interaction prediction by deep learning.

    PubMed

    Tian, Kai; Shao, Mingyu; Wang, Yang; Guan, Jihong; Zhou, Shuigeng

    2016-11-01

    The identification of interactions between compounds and proteins plays an important role in network pharmacology and drug discovery. However, experimentally identifying compound-protein interactions (CPIs) is generally expensive and time-consuming, so computational approaches have been introduced. Among these, machine-learning based methods have achieved considerable success. However, due to the nonlinear and imbalanced nature of biological data, many machine learning approaches have their own limitations. Recently, deep learning techniques have shown advantages over many state-of-the-art machine learning methods in some applications. In this study, we aim at improving the performance of CPI prediction based on deep learning, and propose a method called DL-CPI (the abbreviation of Deep Learning for Compound-Protein Interactions prediction), which employs a deep neural network (DNN) to effectively learn the representations of compound-protein pairs. Extensive experiments show that DL-CPI can learn useful features of compound-protein pairs by layerwise abstraction, and thus achieves better prediction performance than existing methods on both balanced and imbalanced datasets. Copyright © 2016 Elsevier Inc. All rights reserved.
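
    An illustrative stand-in for the kind of model described above: compound and protein feature vectors are concatenated into a pair representation and fed to a multilayer network. The feature construction, layer sizes and training settings below are assumptions, not the DL-CPI implementation.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      def train_cpi_classifier(compound_feats, protein_feats, labels):
          """compound_feats: (n_pairs, d_c); protein_feats: (n_pairs, d_p);
          labels: (n_pairs,) binary interaction labels."""
          X = np.hstack([compound_feats, protein_feats])     # pair representation
          clf = MLPClassifier(hidden_layer_sizes=(256, 128, 64),
                              activation="relu", max_iter=300)
          clf.fit(X, labels)
          return clf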

  3. Practical Considerations for Optic Nerve Estimation in Telemedicine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karnowski, Thomas Paul; Aykac, Deniz; Chaum, Edward

    The projected increase in diabetes in the United States and worldwide has created a need for broad-based, inexpensive screening for diabetic retinopathy (DR), an eye disease which can lead to vision impairment. A telemedicine network with retina cameras and automated quality control, physiological feature location, and lesion/anomaly detection is a low-cost way of achieving broad-based screening. In this work we report on the effect of quality estimation on an optic nerve (ON) detection method with a confidence metric. We report on an improvement of the fusion technique using a data set from an ophthalmologist's practice, and then show the results of the method as a function of image quality on a set of images from an on-line telemedicine network collected in Spring 2009 and another broad-based screening program. We show that the fusion method, combined with quality estimation processing, can improve detection performance and also provide a method for utilizing a physician-in-the-loop for images that may exceed the capabilities of automated processing.

  4. Effect of sampling rate and record length on the determination of stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Brenner, M. J.; Iliff, K. W.; Whitman, R. K.

    1978-01-01

    Flight data from five aircraft were used to assess the effects of sampling rate and record length reductions on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even if there were considerable reductions in sampling rate and/or record length. Small amplitude pulse maneuvers showed greater degradation of the derivative estimates than large amplitude pulse maneuvers when these reductions were made. Reducing the sampling rate was found to be more desirable than reducing the record length as a method of lessening the total computation time required without greatly degrading the quality of the estimates.

  5. Thermal comfort of aeroplane seats: influence of different seat materials and the use of laboratory test methods.

    PubMed

    Bartels, Volkmar T

    2003-07-01

    This study determined the influence of different cover and cushion materials on the thermal comfort of aeroplane seats. Different materials as well as ready-made seats were investigated with the physiological laboratory test methods Skin Model and seat comfort tester. Additionally, seat trials with human test subjects were performed in a climatic chamber. Results show that a fabric cover produces considerably higher sweat transport than leather. A three-dimensional knitted spacer fabric turns out to be the better cushion alternative in comparison to a moulded foam pad. Results from the physiological laboratory test methods correspond well to the seat trials with human test subjects.

  6. Cupping - is it reproducible? Experiments about factors determining the vacuum.

    PubMed

    Huber, R; Emerich, M; Braeunig, M

    2011-04-01

    Cupping is a traditional method for treating pain which is nowadays investigated in clinical studies. Because the methods for producing the vacuum vary considerably, we tested their reproducibility. In a first set of experiments (study 1), four methods for producing the vacuum (lighter flame 2 cm (LF1), lighter flame 4 cm (LF2), alcohol flame (AF) and mechanical suction with a balloon (BA)) were compared in 50 trials each. The cupping glass was prepared with an outlet and stop-cock, and the vacuum was measured with a pressure gauge after the cup was set onto a soft rubber pad. In a second series of experiments (study 2) we investigated the stability of pressures in 20 consecutive trials for two experienced cupping practitioners and ten beginners using method AF. In study 1 all four methods yielded consistent pressures. Large differences in magnitude were, however, observed between methods (mean pressures -200±30 hPa with LF1, -310±30 hPa with LF2, -560±30 hPa with AF, and -270±16 hPa with BA). With method BA the standard deviation was reduced by a factor of 2 compared to the flame methods. In study 2 beginners had considerably more difficulty obtaining a stable pressure yield than advanced cupping practitioners, showing a distinct learning curve before reaching expertise levels after about 10-20 trials. Cupping is reproducible if the exact method is described in detail. Mechanical suction with a balloon has the best reproducibility. Beginners need at least 10-20 trials to produce stable pressures. Copyright © 2010 Elsevier Ltd. All rights reserved.

  7. Comparing personal insight gains due to consideration of a recent dream and consideration of a recent event using the Ullman and Schredl dream group methods

    PubMed Central

    Edwards, Christopher L.; Malinowski, Josie E.; McGee, Shauna L.; Bennett, Paul D.; Ruby, Perrine M.; Blagrove, Mark T.

    2015-01-01

    There have been reports and claims in the psychotherapeutic literature that the consideration of recent dreams can result in personal realizations and insight. There is theoretical support for these claims from work on rapid eye movement (REM) sleep having a function in the consolidation of emotional memories and the creative formation of connections between new and older memories. To investigate these claims, 11 participants (10 females, one male) reported and considered a recent home dream in a dream discussion group that followed the "Appreciating dreams" method of Montague Ullman. The group ran 11 times, each participant attending and participating once. A further nine participants (seven females, two males) reported and considered a recent home dream in a group that followed the "Listening to the dreamer" method of Michael Schredl. The two studies each had a control condition where the participant also reported a recent event, the consideration of which followed the same technique as was followed for the dream report. Outcomes of the discussions were assessed by the participants on the Gains from Dream Interpretation (GDI) scale and on its counterpart, the Gains from Event Interpretation scale. High ratings on the GDI experiential-insight subscale were reported for both methods when applied to dreams, and, for the Ullman method, Exploration-Insight ratings for the dream condition were significantly higher than for the control event condition. In the Ullman method, self-assessment of personal insight due to consideration of dream content was also significantly higher than for the event consideration condition. The findings support the view that benefits can be obtained from the consideration of dream content, in terms of identifying the waking life sources of dream content, and because personal insight may also occur. To investigate the mechanisms for the findings, the studies should be repeated with REM and non-REM dream reports, hypothesizing greater insight from the former. PMID:26150797

  8. Quantitative assessment in thermal image segmentation for artistic objects

    NASA Astrophysics Data System (ADS)

    Yousefi, Bardia; Sfarra, Stefano; Maldague, Xavier P. V.

    2017-07-01

    The application of thermal and infrared technology in different areas of research is increasing considerably. These applications include Non-destructive Testing (NDT), medical analysis (Computer-Aided Diagnosis/Detection, CAD), and arts and archaeology, among many others. In the arts and archaeology field, infrared technology provides significant contributions in terms of finding defects in possibly impaired regions. This has been done through a wide range of different thermographic experiments and infrared methods. The approach proposed here applies known factor analysis methods, namely standard Non-Negative Matrix Factorization (NMF) optimized by gradient-descent-based multiplicative rules (SNMF1) and standard NMF optimized by the Non-negative Least Squares (NNLS) active-set algorithm (SNMF2), as well as eigendecomposition approaches such as Principal Component Thermography (PCT) and Candid Covariance-Free Incremental Principal Component Thermography (CCIPCT), to obtain the thermal features. On one hand, these methods are usually applied as preprocessing before clustering for the purpose of segmenting possible defects. On the other hand, a wavelet-based data fusion combines the data of each method with PCT to increase the accuracy of the algorithm. The quantitative assessment of these approaches indicates considerable segmentation capability along with reasonable computational complexity, showing promising performance and confirming the outlined properties. In particular, a polychromatic wooden statue and a fresco were analyzed using the above-mentioned methods, and interesting results were obtained.
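
    A compact sketch of the factor-analysis step described above, applied to a thermal image sequence flattened to a frames-by-pixels matrix; scikit-learn's NMF and PCA stand in for the SNMF variants and PCT, while the CCIPCT and wavelet-fusion stages are not reproduced. Component counts and solver settings are assumptions.

      import numpy as np
      from sklearn.decomposition import NMF, PCA

      def thermal_feature_maps(seq, n_components=5):
          """seq: thermal sequence of shape (n_frames, h, w) with non-negative
          values (required by NMF). Returns component images from NMF and a
          PCT-like PCA."""
          n_frames, h, w = seq.shape
          X = seq.reshape(n_frames, h * w)                  # one flattened frame per row
          nmf = NMF(n_components=n_components, init="nndsvd", max_iter=500).fit(X)
          pca = PCA(n_components=n_components).fit(X)
          nmf_maps = nmf.components_.reshape(n_components, h, w)
          pct_maps = pca.components_.reshape(n_components, h, w)
          return nmf_maps, pct_maps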

  9. Reduction of speckle noise from optical coherence tomography images using multi-frame weighted nuclear norm minimization method

    NASA Astrophysics Data System (ADS)

    Thapa, Damber; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan

    2015-12-01

    In this paper, we propose a speckle noise reduction method for spectral-domain optical coherence tomography (SD-OCT) images called multi-frame weighted nuclear norm minimization (MWNNM). This method is a direct extension of weighted nuclear norm minimization (WNNM) to the multi-frame framework, since an adequately denoised image could not be achieved with single-frame denoising methods. The MWNNM method exploits multiple B-scans collected from a small area of an SD-OCT volumetric image, and then denoises and averages them together to obtain a high signal-to-noise-ratio B-scan. The results show that the image quality metrics obtained by denoising and averaging only five nearby B-scans with the MWNNM method are considerably better than those of the average image obtained by registering and averaging 40 azimuthally repeated B-scans.
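
    A simplified sketch of the weighted nuclear norm minimization step at the heart of the method above: a matrix whose columns are vectorised patches from the nearby B-scans is denoised by weighted singular-value shrinkage, with larger weights (stronger shrinkage) on the smaller, noise-dominated singular values. The weighting rule and constants are assumptions; patch grouping, aggregation and iteration are omitted.

      import numpy as np

      def wnnm_shrink(patch_matrix, noise_sigma, c=2.0, eps=1e-8):
          """patch_matrix: (patch_len, n_patches) of similar patches from nearby B-scans."""
          mean = patch_matrix.mean(axis=1, keepdims=True)
          U, s, Vt = np.linalg.svd(patch_matrix - mean, full_matrices=False)
          n_patches = patch_matrix.shape[1]
          # Small singular values receive large weights and are shrunk the most.
          weights = c * np.sqrt(n_patches) * noise_sigma**2 / (s + eps)
          s_shrunk = np.maximum(s - weights, 0.0)
          return U @ np.diag(s_shrunk) @ Vt + mean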

  10. Some considerations on the use of ecological models to predict species' geographic distributions

    USGS Publications Warehouse

    Peterjohn, B.G.

    2001-01-01

    Peterson (2001) used Genetic Algorithm for Rule-set Prediction (GARP) models to predict distribution patterns from Breeding Bird Survey (BBS) data. Evaluations of these models should consider inherent limitations of BBS data: (1) BBS methods may not sample species and habitats equally; (2) using BBS data for both model development and testing may overlook poor fit of some models; and (3) BBS data may not provide the desired spatial resolution or capture temporal changes in species distributions. The predictive value of GARP models requires additional study, especially comparisons with distribution patterns from independent data sets. When employed at appropriate temporal and geographic scales, GARP models show considerable promise for conservation biology applications but provide limited inferences concerning processes responsible for the observed patterns.

  11. Selection of Instructional Methods and Techniques: The Basic Consideration of Teachers at Secondary School Level

    ERIC Educational Resources Information Center

    Ahmad, Saira Ijaz; Malik, Samina; Irum, Jamila; Zahid, Rabia

    2011-01-01

    The main objective of the study was to identify the instructional methods and techniques used by the secondary school teachers to transfer the instructions to the students and to explore the basic considerations of the teachers about the selection of these instructional methods and techniques. Participants of the study included were 442 teachers…

  12. Ecological Equivalence Assessment Methods: What Trade-Offs between Operationality, Scientific Basis and Comprehensiveness?

    PubMed

    Bezombes, Lucie; Gaucherand, Stéphanie; Kerbiriou, Christian; Reinert, Marie-Eve; Spiegelberger, Thomas

    2017-08-01

    In many countries, biodiversity compensation is required to counterbalance negative impacts of development projects on biodiversity by carrying out ecological measures, called offsets when the goal is to reach "no net loss" of biodiversity. One main issue is to ensure that offset gains are equivalent to impact-related losses. Ecological equivalence is assessed with ecological equivalence assessment methods taking into account a range of key considerations that we summarize as ecological, spatial, temporal, and uncertainty-related. When equivalence assessment methods take all considerations into account, we call them "comprehensive". Equivalence assessment methods should also aim to be science-based and operational, which is challenging. Many equivalence assessment methods have been developed worldwide, but none is fully satisfying. In the present study, we examine 13 equivalence assessment methods in order to identify (i) their general structure and (ii) the synergies and trade-offs between equivalence assessment method characteristics related to operationality, scientific basis, and comprehensiveness (called "challenges" in this paper). We evaluate each equivalence assessment method on the basis of 12 criteria describing the level of achievement of each challenge. We observe that all equivalence assessment methods share a general structure, with possible improvements in the choice of target biodiversity, the indicators used, the integration of landscape context, and the multipliers reflecting time lags and uncertainties. We show that no equivalence assessment method combines all challenges perfectly. There are trade-offs between and within the challenges: operationality tends to be favored, while scientific bases are integrated heterogeneously into equivalence assessment method development. One way of improving the combination of challenges would be the use of offset-dedicated databases providing scientific feedback on previous offset measures.

  13. Prevention and self-management interventions are top priorities for osteoarthritis systematic reviews.

    PubMed

    Jaramillo, Alejandra; Welch, Vivian A; Ueffing, Erin; Gruen, Russell L; Bragge, Peter; Lyddiatt, Anne; Tugwell, Peter

    2013-05-01

    To identify high-priority research questions for osteoarthritis systematic reviews with consideration of health equity and the social determinants of health (SDH). We consulted with experts and conducted a literature search to identify a priority-setting method that could be adapted to address health equity and SDH. We selected the Global Evidence Mapping priority-setting method and, through consultations and consensus, adapted it to meet our objectives. This involves developing an evidence map of the existing systematic reviews on osteoarthritis; conducting one face-to-face workshop with patients and another with clinicians, researchers, and patients; and conducting an online survey of patients to rank the top 10 research questions. We piloted the adapted method with the Cochrane Musculoskeletal Review Group to set research priorities for osteoarthritis. Our focus was on systematic reviews: we identified 34 high-priority research questions for osteoarthritis systematic reviews. Prevention and self-management interventions, mainly diet and exercise, are top priorities for osteoarthritis systematic reviews. Evaluation against our predefined objectives showed that this method did prioritize SDH (50% of the research questions considered SDH). There were marked gaps: no high-priority topics were identified for access to care until patients had advanced disease, or for lifestyle changes once the disease was diagnosed. This method was felt to be feasible if conducted annually. We confirmed the utility of an adapted priority-setting method that is feasible and considers SDH. Further testing of this method is needed to assess whether considerations of health equity are prioritized and involve disadvantaged groups of the population. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Multilabel learning via random label selection for protein subcellular multilocations prediction.

    PubMed

    Wang, Xiao; Li, Guo-Zheng

    2013-01-01

    Prediction of protein subcellular localization is an important but challenging problem, particularly when proteins may simultaneously exist at, or move between, two or more different subcellular location sites. Most existing protein subcellular localization methods can only deal with single-location proteins. In the past few years, only a few methods have been proposed to tackle proteins with multiple locations. However, they only adopt a simple strategy, transforming the multilocation proteins into multiple proteins with a single location, which does not take correlations among different subcellular locations into account. In this paper, a novel method named multilabel learning via random label selection (RALS), which extends the simple binary relevance (BR) method, is proposed to learn from multilocation proteins in an effective and efficient way. RALS does not explicitly find the correlations among labels, but rather implicitly attempts to learn the label correlations from the data by augmenting the original feature space with randomly selected labels as additional input features. Through a fivefold cross-validation test on a benchmark data set, we demonstrate that our proposed method, which takes label correlations into consideration, clearly outperforms the baseline BR method that ignores them, indicating that correlations among different subcellular locations really exist and contribute to the improvement of prediction performance. Experimental results on two benchmark data sets also show that our proposed methods achieve significantly higher performance than some other state-of-the-art methods in predicting the subcellular multilocations of proteins. The prediction web server is available at http://levis.tongji.edu.cn:8080/bioinfo/MLPred-Euk/ for public use.
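
    A rough sketch of the augmentation idea described above: one binary classifier is trained per subcellular location, with a random subset of the other location labels appended to the input features so that label correlations can be exploited implicitly. The base learner, the number of extra labels, and the handling of those label inputs at prediction time (where they would themselves have to be predicted first, as in stacked binary relevance) are assumptions.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def rals_train(X, Y, n_extra, rng):
          """X: (n_samples, n_features); Y: (n_samples, n_labels) binary label matrix.
          Returns one (classifier, extra_label_indices) pair per label."""
          n_labels = Y.shape[1]
          models = []
          for j in range(n_labels):
              others = np.array([k for k in range(n_labels) if k != j])
              extra = rng.choice(others, size=min(n_extra, len(others)), replace=False)
              X_aug = np.hstack([X, Y[:, extra]])          # augment features with random labels
              clf = LogisticRegression(max_iter=1000).fit(X_aug, Y[:, j])
              models.append((clf, extra))
          return models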

  15. Crowd motion segmentation and behavior recognition fusing streak flow and collectiveness

    NASA Astrophysics Data System (ADS)

    Gao, Mingliang; Jiang, Jun; Shen, Jin; Zou, Guofeng; Fu, Guixia

    2018-04-01

    Crowd motion segmentation and crowd behavior recognition are two hot issues in computer vision, and a number of methods have been proposed to tackle them. Among these methods, flow dynamics is utilized to model the crowd motion, with little consideration of collective properties. Moreover, traditional crowd behavior recognition methods treat local features and dynamic features separately and overlook the interconnection of topological and dynamical heterogeneity in complex crowd processes. Here, a crowd motion segmentation method and a crowd behavior recognition method are proposed based on streak flow and crowd collectiveness. The streak flow is adopted to reveal the dynamical properties of crowd motion, and the collectiveness is incorporated to reveal the structural properties. Experimental results show that the proposed methods improve crowd motion segmentation accuracy and crowd recognition rates compared with state-of-the-art methods.

  16. A Relational Approach to Measuring Competition Among Hospitals

    PubMed Central

    Sohn, Min-Woong

    2002-01-01

    Objective: To present a new, relational approach to measuring competition in hospital markets and to compare this relational approach with alternative methods of measuring competition.
    Data Sources: The California Office of Statewide Health Planning and Development patient discharge abstracts and financial disclosure files for 1991.
    Study Design: Patient discharge abstracts for an entire year were used to derive patient flows, which were combined to calculate the extent of overlap in patient pools for each pair of hospitals. This produces a cross-sectional measure of market competition among hospitals.
    Principal Findings: The relational approach produces measures of competition between each and every pair of hospitals in the study sample, allowing us to examine a much more “local” as well as dyadic effect of competition. Preliminary analyses show the following: (1) Hospital markets are smaller than thought. (2) For-profit hospitals received considerably more competition from their neighbors than either nonprofit or government hospitals. (3) The size of a hospital does not matter in the amount of competition received, but the larger hospitals generated significantly more competition than smaller ones. Comparisons of this method to the other methods show considerable differences in identifying competitors, indicating that these methods are not as comparable as previously thought.
    Conclusion: The relational approach measures competition in a more detailed way and allows researchers to conduct more fine-grained analyses of market competition. This approach allows one to model market structure in a manner that goes far beyond the traditional categories of monopoly, oligopoly, and perfect competition. It also opens up an entirely new range of analytic possibilities in examining the effect of competition on hospital performance, price of medical care, changes in the market, technology acquisition, and many other phenomena in the health care field. PMID:12036003
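
    A toy sketch of the relational idea described above, using patient flows aggregated by hospital and patient area of residence: competition between two hospitals is measured by the overlap of their patient pools. The overlap formula and the use of residence areas as the unit of flow are illustrative assumptions, not the paper's exact construction.

      from collections import defaultdict
      from itertools import combinations

      def pairwise_competition(discharges):
          """discharges: iterable of (hospital_id, patient_area) records.
          Returns a directed overlap measure for every ordered hospital pair."""
          pools = defaultdict(set)
          for hospital, area in discharges:
              pools[hospital].add(area)
          competition = {}
          for a, b in combinations(sorted(pools), 2):
              shared = len(pools[a] & pools[b])
              competition[(a, b)] = shared / len(pools[a])   # pressure b exerts on a
              competition[(b, a)] = shared / len(pools[b])   # pressure a exerts on b
          return competition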

  17. α-Glucosidase enzyme inhibitory effects and ursolic and oleanolic acid contents of fourteen Anatolian Salvia species.

    PubMed

    Kalaycıoğlu, Zeynep; Uzaşçı, Sesil; Dirmenci, Tuncay; Erim, F Bedia

    2018-06-05

    During the last decade, ursolic and oleanolic acids have been of considerable interest because of their α-glucosidase inhibitory activities and potential effects for the treatment of type 2 diabetes. A simple and sensitive reversed-phase HPLC method was developed for the simultaneous determination of ursolic acid and oleanolic acid. The optimal mobile phase was selected as an 85% acetonitrile solution. The limits of detection of the method for ursolic acid and oleanolic acid were 14 ng/mL and 13 ng/mL, respectively. The method showed good precision and accuracy, with intra-day and inter-day variations of 0.54% and 7.33% for ursolic acid and 0.51% and 5.26% for oleanolic acid, and overall recoveries of 97.8% and 98.5% for ursolic acid and oleanolic acid, respectively. Application of the method to determine the ursolic acid and oleanolic acid contents in the Salvia species revealed both compounds, in amounts varying between 0.21-9.76 mg/g for ursolic acid and 0.20-12.7 mg/g for oleanolic acid among the 14 Salvia species analyzed. Additionally, the plant extracts were analyzed for their inhibitory activities on α-glucosidase. According to the results of this assay, the extracts showed considerable activity on α-glucosidase, with IC50 values from 17.6 to 173 μg/mL. A strong negative correlation was detected between the amounts of both acids and the IC50 values of the extracts. Anatolian Salvia species have great potential as functional plants in the management of diabetes. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Iron oxide nanotubes synthesized via template-based electrodeposition

    NASA Astrophysics Data System (ADS)

    Lim, Jin-Hee; Min, Seong-Gi; Malkinski, Leszek; Wiley, John B.

    2014-04-01

    Considerable effort has been invested in the development of synthetic methods for the preparation of iron oxide nanostructures for applications in nanotechnology. While a variety of structures have been reported, only a few studies have focused on iron oxide nanotubes. Here, we present details on the synthesis and characterization of iron oxide nanotubes along with a proposed mechanism for FeOOH tube formation. The FeOOH nanotubes, fabricated via a template-based electrodeposition method, are found to exhibit a unique inner surface. Heat treatment of these tubes under oxidizing or reducing atmospheres can produce either hematite (α-Fe2O3) or magnetite (Fe3O4) structures, respectively. Hematite nanotubes are composed of small nanoparticles less than 20 nm in diameter, and their magnetization and FC-ZFC curves show superparamagnetic properties without the Morin transition. In the case of magnetite nanotubes, which consist of slightly larger nanoparticles, magnetization curves show ferromagnetism with weak coercivity at room temperature, while FC-ZFC curves exhibit the Verwey transition at 125 K.

  19. Using mixed methods effectively in prevention science: designs, procedures, and examples.

    PubMed

    Zhang, Wanqing; Watanabe-Galloway, Shinobu

    2014-10-01

    There is growing interest in using a combination of quantitative and qualitative methods to generate evidence about the effectiveness of health prevention, services, and intervention programs. With the emerging importance of mixed methods research across the social and health sciences, there has been an increased recognition of the value of using mixed methods for addressing research questions in different disciplines. We illustrate the mixed methods approach in prevention research, showing design procedures used in several published research articles. In this paper, we focused on two commonly used mixed methods designs: concurrent and sequential mixed methods designs. We discuss the types of mixed methods designs, the reasons for, and advantages of using a particular type of design, and the procedures of qualitative and quantitative data collection and integration. The studies reviewed in this paper show that the essence of qualitative research is to explore complex dynamic phenomena in prevention science, and the advantage of using mixed methods is that quantitative data can yield generalizable results and qualitative data can provide extensive insights. However, the emphasis of methodological rigor in a mixed methods application also requires considerable expertise in both qualitative and quantitative methods. Besides the necessary skills and effective interdisciplinary collaboration, this combined approach also requires an open-mindedness and reflection from the involved researchers.

  20. Comparison of three explicit multigrid methods for the Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Chima, Rodrick V.; Turkel, Eli; Schaffer, Steve

    1987-01-01

    Three explicit multigrid methods, Ni's method, Jameson's finite-volume method, and a finite-difference method based on Brandt's work, are described and compared for two model problems. All three methods use an explicit multistage Runge-Kutta scheme on the fine grid, and this scheme is also described. Convergence histories for inviscid flow over a bump in a channel for the fine-grid scheme alone show that convergence rate is proportional to Courant number and that implicit residual smoothing can significantly accelerate the scheme. Ni's method was slightly slower than the implicitly-smoothed scheme alone. Brandt's and Jameson's methods are shown to be equivalent in form but differ in their node versus cell-centered implementations. They are about 8.5 times faster than Ni's method in terms of CPU time. Results for an oblique shock/boundary layer interaction problem verify the accuracy of the finite-difference code. All methods slowed considerably on the stretched viscous grid but Brandt's method was still 2.1 times faster than Ni's method.

  1. Selection methods regulate evolution of cooperation in digital evolution

    PubMed Central

    Lichocki, Paweł; Floreano, Dario; Keller, Laurent

    2014-01-01

    A key, yet often neglected, component of digital evolution and evolutionary models is the ‘selection method’ which assigns fitness (number of offspring) to individuals based on their performance scores (efficiency in performing tasks). Here, we study with formal analysis and numerical experiments the evolution of cooperation under the five most common selection methods (proportionate, rank, truncation-proportionate, truncation-uniform and tournament). We consider related individuals engaging in a Prisoner's Dilemma game where individuals can either cooperate or defect. A cooperator pays a cost, whereas its partner receives a benefit, which affect their performance scores. These performance scores are translated into fitness by one of the five selection methods. We show that cooperation is positively associated with the relatedness between individuals under all selection methods. By contrast, the change in the performance benefit of cooperation affects the populations’ average level of cooperation only under the proportionate methods. We also demonstrate that the truncation and tournament methods may introduce negative frequency-dependence and lead to the evolution of polymorphic populations. Using the example of the evolution of cooperation, we show that the choice of selection method, though it is often marginalized, can considerably affect the evolutionary dynamics. PMID:24152811
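
    For concreteness, the selection methods compared above map performance scores to parents roughly as in the following sketch (the truncation fraction, tournament size and other details are assumptions; the truncation-proportionate variant would apply the proportionate rule to the survivors of the truncation step).

      import numpy as np

      rng = np.random.default_rng(0)

      def proportionate(scores, n):
          """Fitness-proportionate selection; scores must be a non-negative array."""
          p = scores / scores.sum()
          return rng.choice(len(scores), size=n, p=p)

      def rank_based(scores, n):
          ranks = np.argsort(np.argsort(scores)) + 1.0      # 1 = worst, len(scores) = best
          return rng.choice(len(scores), size=n, p=ranks / ranks.sum())

      def truncation_uniform(scores, n, frac=0.5):
          cutoff = np.quantile(scores, 1.0 - frac)          # keep the top `frac` of the population
          survivors = np.flatnonzero(scores >= cutoff)
          return rng.choice(survivors, size=n)

      def tournament(scores, n, k=2):
          picks = rng.integers(len(scores), size=(n, k))    # k random contestants per offspring
          return picks[np.arange(n), np.argmax(scores[picks], axis=1)]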

  2. Fiber fault location utilizing traffic signal in optical network.

    PubMed

    Zhao, Tong; Wang, Anbang; Wang, Yuncai; Zhang, Mingjiang; Chang, Xiaoming; Xiong, Lijuan; Hao, Yi

    2013-10-07

    We propose and experimentally demonstrate a method for fault location in an optical communication network. This method utilizes the traffic signal transmitted across the network as the probe signal, and then locates the fault by a correlation technique. Compared with conventional techniques, our method has a simple structure and low operating expenditure, because no additional devices such as a light source, modulator, or signal generator are used. The correlation detection in this method overcomes the trade-off between spatial resolution and measurement range in pulse ranging techniques. Moreover, the signal extraction process can improve the location result considerably. Experimental results show that we achieve a spatial resolution of 8 cm and a detection range of over 23 km with -8-dBm mean launched power in an optical network based on synchronous digital hierarchy protocols.
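
    The location step described above reduces to finding the round-trip delay between the transmitted traffic signal and its reflection from the fault via cross-correlation; a minimal sketch follows. The group index and the way the reference and reflected signals are captured are assumptions.

      import numpy as np

      C = 3.0e8        # speed of light in vacuum, m/s
      N_GROUP = 1.468  # assumed group refractive index of the fibre

      def locate_fault(reference, reflection, sample_rate_hz):
          """Estimate the one-way distance to the fault from the correlation peak."""
          corr = np.correlate(reflection, reference, mode="full")
          lag = int(np.argmax(corr)) - (len(reference) - 1)   # round-trip delay in samples
          delay_s = lag / sample_rate_hz
          return C * delay_s / (2.0 * N_GROUP)                # metres along the fibre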

  3. Community structure in networks

    NASA Astrophysics Data System (ADS)

    Newman, Mark

    2004-03-01

    Many networked systems, including physical, biological, social, and technological networks, appear to contain ``communities'' -- groups of nodes within which connections are dense, but between which they are sparser. The ability to find such communities in an automated fashion could be of considerable use. Communities in a web graph for instance might correspond to sets of web sites dealing with related topics, while communities in a biochemical network or an electronic circuit might correspond to functional units of some kind. We present a number of new methods for community discovery, including methods based on ``betweenness'' measures and methods based on modularity optimization. We also give examples of applications of these methods to both computer-generated and real-world network data, and show how our techniques can be used to shed light on the sometimes dauntingly complex structure of networked systems.
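
    One of the modularity-optimization approaches alluded to above can be exercised directly with NetworkX; the sketch below uses greedy (Clauset-Newman-Moore style) modularity maximization on a built-in example graph standing in for real network data.

      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities, modularity

      G = nx.karate_club_graph()                        # stand-in for a real-world network
      communities = greedy_modularity_communities(G)    # greedy modularity optimization
      print(len(communities), "communities, modularity =", modularity(G, communities))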

  4. An Improved X-ray Diffraction Method For Cellulose Crystallinity Measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ju, Xiaohui; Bowden, Mark E.; Brown, Elvie E.

    2015-06-01

    We show in this work a modified X-ray diffraction method to determine the cellulose crystallinity index (CrI). Nanocrystalline cellulose (NCC) derived from bleached wood pulp was used as a model substrate. Rietveld refinement was applied with consideration of March-Dollase preferred orientation at the (001) plane. In contrast to most previous methods, three distinct amorphous peaks, identified from new model samples, are used to calculate CrI. A 2θ range from 10° to 75° was found to be more suitable for determining CrI and crystallite structural parameters such as d-spacing and crystallite size. This method enables a more reliable measurement of the CrI of cellulose and may be applicable to other types of cellulose polymorphs.
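
    The crystallinity index reported by such peak-fitting/Rietveld approaches generally follows the area-based definition below, taken over the fitted 2θ range (here 10° to 75°); the paper's specific refinement constraints are not reproduced:

      \mathrm{CrI}\,(\%) = \frac{A_{\mathrm{cryst}}}{A_{\mathrm{cryst}} + A_{\mathrm{amorph}}} \times 100,

    where A_cryst is the integrated area of the fitted crystalline cellulose reflections and A_amorph the area under the fitted amorphous peaks.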

  5. Systematic Dimensionality Reduction for Quantum Walks: Optimal Spatial Search and Transport on Non-Regular Graphs

    PubMed Central

    Novo, Leonardo; Chakraborty, Shantanav; Mohseni, Masoud; Neven, Hartmut; Omar, Yasser

    2015-01-01

    Continuous time quantum walks provide an important framework for designing new algorithms and modelling quantum transport and state transfer problems. Often, the graph representing the structure of a problem contains certain symmetries that confine the dynamics to a smaller subspace of the full Hilbert space. In this work, we use invariant subspace methods, that can be computed systematically using the Lanczos algorithm, to obtain the reduced set of states that encompass the dynamics of the problem at hand without the specific knowledge of underlying symmetries. First, we apply this method to obtain new instances of graphs where the spatial quantum search algorithm is optimal: complete graphs with broken links and complete bipartite graphs, in particular, the star graph. These examples show that regularity and high-connectivity are not needed to achieve optimal spatial search. We also show that this method considerably simplifies the calculation of quantum transport efficiencies. Furthermore, we observe improved efficiencies by removing a few links from highly symmetric graphs. Finally, we show that this reduction method also allows us to obtain an upper bound for the fidelity of a single qubit transfer on an XY spin network. PMID:26330082

  6. New clustered regularly interspaced short palindromic repeat locus spacer pair typing method based on the newly incorporated spacer for Salmonella enterica.

    PubMed

    Li, Hao; Li, Peng; Xie, Jing; Yi, Shengjie; Yang, Chaojie; Wang, Jian; Sun, Jichao; Liu, Nan; Wang, Xu; Wu, Zhihao; Wang, Ligui; Hao, Rongzhang; Wang, Yong; Jia, Leili; Li, Kaiqin; Qiu, Shaofu; Song, Hongbin

    2014-08-01

    A clustered regularly interspaced short palindromic repeat (CRISPR) typing method has recently been developed and used for typing and subtyping of Salmonella spp., but it is complicated and labor intensive because it has to analyze all spacers in two CRISPR loci. Here, we developed a more convenient and efficient method, namely, CRISPR locus spacer pair typing (CLSPT), which only needs to analyze the two newly incorporated spacers adjoining the leader array in the two CRISPR loci. We analyzed a CRISPR array of 82 strains belonging to 21 Salmonella serovars isolated from humans in different areas of China by using this new method. We also retrieved the newly incorporated spacers in each CRISPR locus of 537 Salmonella isolates which have definite serotypes in the Pasteur Institute's CRISPR Database to evaluate this method. Our findings showed that this new CLSPT method presents a high level of consistency (kappa = 0.9872, Matthew's correlation coefficient = 0.9712) with the results of traditional serotyping, and thus, it can also be used to predict serotypes of Salmonella spp. Moreover, this new method has a considerable discriminatory power (discriminatory index [DI] = 0.8145), comparable to those of multilocus sequence typing (DI = 0.8088) and conventional CRISPR typing (DI = 0.8684). Because CLSPT only costs about $5 to $10 per isolate, it is a much cheaper and more attractive method for subtyping of Salmonella isolates. In conclusion, this new method will provide considerable advantages over other molecular subtyping methods, and it may become a valuable epidemiologic tool for the surveillance of Salmonella infections. Copyright © 2014, American Society for Microbiology. All Rights Reserved.
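
    The discriminatory index (DI) values quoted above are conventionally computed with the Hunter-Gaston formulation of Simpson's index over the typing results; a minimal sketch, assuming that convention (which the abstract does not state explicitly), is:

      from collections import Counter

      def discriminatory_index(type_labels):
          """Hunter-Gaston discriminatory index over a list of type assignments."""
          counts = Counter(type_labels).values()
          n = sum(counts)
          return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

      # Example: discriminatory_index(["T1", "T1", "T2", "T3", "T3", "T3"])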

  7. Light Weight MP3 Watermarking Method for Mobile Terminals

    NASA Astrophysics Data System (ADS)

    Takagi, Koichi; Sakazawa, Shigeyuki; Takishima, Yasuhiro

    This paper proposes a novel MP3 watermarking method which is applicable to a mobile terminal with limited computational resources. Considering that in most cases the embedded information is copyright information or metadata, which should be extracted before playing back the audio contents, the watermark detection process should be executed at high speed. However, when conventional methods are used with a mobile terminal, it takes a considerable amount of time to detect a digital watermark. This paper focuses on scalefactor manipulation to enable high-speed watermark embedding/detection for MP3 audio and also proposes a manipulation method that adaptively minimizes audio quality degradation. Evaluation tests showed that the proposed method is capable of embedding 3 bits/frame of information without degrading audio quality and of detecting it at very high speed. Finally, this paper describes application examples for authentication with a digital signature.

  8. Fully automated motion correction in first-pass myocardial perfusion MR image sequences.

    PubMed

    Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2008-11-01

    This paper presents a novel method for registration of cardiac perfusion magnetic resonance imaging (MRI). The presented method is capable of automatically registering perfusion data, using independent component analysis (ICA) to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of that ICA. This reference image is used in a two-pass registration framework. Qualitative and quantitative validation of the method is carried out using 46 clinical quality, short-axis, perfusion MR datasets comprising 100 images each. Despite varying image quality and motion patterns in the evaluation set, validation of the method showed a reduction of the average left ventricle (LV) motion from 1.26 ± 0.87 to 0.64 ± 0.46 pixels. Time-intensity curves are also improved after registration, with the average error between registered data and the manual gold standard reduced from 2.65 ± 7.89% to 0.87 ± 3.88%. Comparison of clinically relevant parameters computed using registered data and the manual gold standard shows good agreement. Additional tests with a simulated free-breathing protocol showed robustness against considerable deviations from a standard breathing protocol. We conclude that this fully automatic ICA-based method shows accuracy, robustness and computation speed adequate for use in a clinical environment.
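
    A minimal sketch of the ICA-based reference idea, assuming a perfusion stack stored as a (frames, ny, nx) array and using scikit-learn's FastICA; the component selection, reference construction details, and two-pass registration framework of the paper are not reproduced here.

      import numpy as np
      from sklearn.decomposition import FastICA

      def ica_reference_series(frames, n_components=3):
          """Build a time-varying reference series from a perfusion stack.

          frames: array of shape (n_frames, ny, nx). The ICA time courses capture
          the contrast dynamics of physiologically distinct regions, so the
          low-rank reconstruction mimics the intensity changes without the motion.
          """
          n_frames, ny, nx = frames.shape
          X = frames.reshape(n_frames, ny * nx)
          ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
          time_courses = ica.fit_transform(X)              # (n_frames, n_components)
          reference = ica.inverse_transform(time_courses)  # low-rank reconstruction
          return reference.reshape(n_frames, ny, nx)

      # Hypothetical usage: each acquired frame could then be registered (for
      # example by a simple translation search) against the reference frame
      # belonging to the same time point.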

  9. Evaluation on the Efficiency of Biomass Power Generation Industry in China

    PubMed Central

    Sun, Dong; Guo, Sen

    2014-01-01

    As a developing country with a large population, China is facing the problems of energy resource shortage and growing environmental pollution arising from its coal-dominated energy structure. Biomass energy, a kind of renewable energy that is easy to store and friendly to the environment, has become a focus of China's future energy development. Affected by advances in power generation technology and diverse geographical environments, biomass power generation projects have shown new features in recent years. Hence, it is necessary to evaluate the efficiency of the biomass power generation industry with a suitable method that takes these new features into account. In this paper, regional difference, as a new feature of the biomass power generation industry, is taken into consideration, and the AR model is employed to address the zero-weight issue that arises when the data envelopment analysis (DEA) method is used to evaluate the efficiency of the biomass power generation industry. 30 biomass power generation enterprises in China are selected as the sample, and the efficiency evaluation is performed. The result can provide some insights into the sustainable development of the biomass power generation industry in China. PMID:25093209
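
    For readers unfamiliar with DEA, the sketch below shows a plain input-oriented CCR envelopment model solved with SciPy's linprog; the assurance-region (AR) weight restrictions used in the paper are not included, and the inputs, outputs, and numbers are hypothetical.

      import numpy as np
      from scipy.optimize import linprog

      def dea_ccr_input(X, Y):
          """Input-oriented CCR efficiency scores.

          X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Returns theta in (0, 1]
          for every decision-making unit (here: a power generation enterprise).
          """
          n, m = X.shape
          s = Y.shape[1]
          scores = np.empty(n)
          for o in range(n):
              c = np.zeros(1 + n); c[0] = 1.0          # minimize theta
              A_ub, b_ub = [], []
              for i in range(m):                       # sum_j lam_j * x_ij <= theta * x_io
                  A_ub.append(np.r_[-X[o, i], X[:, i]]); b_ub.append(0.0)
              for r in range(s):                       # sum_j lam_j * y_rj >= y_ro
                  A_ub.append(np.r_[0.0, -Y[:, r]]); b_ub.append(-Y[o, r])
              bounds = [(None, None)] + [(0, None)] * n
              res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                            bounds=bounds, method="highs")
              scores[o] = res.x[0]
          return scores

      # Toy example with made-up numbers: inputs = installed capacity, fuel cost;
      # output = annual generation.
      X = np.array([[30.0, 12.0], [25.0, 15.0], [40.0, 10.0]])
      Y = np.array([[100.0], [90.0], [120.0]])
      print(dea_ccr_input(X, Y))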

  10. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    PubMed

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model, so image degradation is simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted into calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome its drawback of long computation times, graphics processing unit multithreading or an increased spacing of the control points is adopted, respectively, to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be implemented efficiently by the method, which is also a useful reference for the study of three-dimensional microscopic image deconvolution.

  11. Improved look-up table method of computer-generated holograms.

    PubMed

    Wei, Hui; Gong, Guanghong; Li, Ni

    2016-11-10

    A heavy computation load and vast memory requirements are major bottlenecks of computer-generated holograms (CGHs), which are promising yet challenging for three-dimensional displays. To address these problems, an improved look-up table (LUT) method suitable for arbitrarily sampled object points is proposed and implemented on a graphics processing unit (GPU); its reconstructed object quality is consistent with that of the coherent ray-trace (CRT) method. The concept of a distance factor is defined, and the distance factors are pre-computed off-line and stored in a look-up table. The results show that while reconstruction quality close to that of the CRT method is obtained, the on-line computation time is dramatically reduced compared with the LUT method on the GPU, and the memory usage is considerably lower than that of the novel-LUT method. Optical experiments are carried out to validate the effectiveness of the proposed method.

  12. Transient simulation of hydropower station with consideration of three-dimensional unsteady flow in turbine

    NASA Astrophysics Data System (ADS)

    Huang, W. D.; Fan, H. G.; Chen, N. X.

    2012-11-01

    To study the interaction between the transient flow in the pipe and the unsteady turbulent flow in the turbine, a coupled model of the transient pipe flow and the three-dimensional unsteady flow in the turbine is developed, based on the method of characteristics and the fluid governing equations in the accelerated rotational relative coordinate system. The load-rejection process during guide-vane closure in a hydraulic power plant is simulated with the coupled method, the traditional transient simulation method and the traditional three-dimensional unsteady flow calculation method, and the results are compared. The pressure, unit flux and rotation speed calculated by the three methods show a similar trend. However, because the elastic water hammer in the pipe and the pressure fluctuation in the turbine are both considered in the coupled method, the pressure rise at the spiral-case inlet is higher and the pressure fluctuation in the turbine is stronger.
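
    To make the pipe-side part of the coupling concrete, the following sketch is a textbook method-of-characteristics simulation of a single pipe between a reservoir and a linearly closing valve; it stands in for the one-dimensional transient model only, not for the coupled three-dimensional turbine computation, and all parameter values are illustrative.

      import numpy as np

      # Pipe and fluid parameters (illustrative values, not from the paper)
      L, a, D, f = 600.0, 1200.0, 0.5, 0.018   # length [m], wave speed [m/s], diameter [m], friction factor
      g, H_res, Q0 = 9.81, 100.0, 0.4          # gravity, reservoir head [m], initial flow [m3/s]
      N = 20                                   # number of pipe reaches
      A = np.pi * D**2 / 4
      dx = L / N
      dt = dx / a                              # Courant number of 1
      B = a / (g * A)
      R = f * dx / (2 * g * D * A**2)
      t_close = 2.0                            # linear valve closure time [s]

      H = np.full(N + 1, H_res)                # initial head (friction neglected in the start state)
      Q = np.full(N + 1, Q0)

      def step(H, Q, t):
          Hn, Qn = H.copy(), Q.copy()
          CP = H[:-1] + B * Q[:-1] - R * Q[:-1] * np.abs(Q[:-1])   # C+ characteristics from upstream nodes
          CM = H[1:]  - B * Q[1:]  + R * Q[1:]  * np.abs(Q[1:])    # C- characteristics from downstream nodes
          Hn[1:-1] = 0.5 * (CP[:-1] + CM[1:])
          Qn[1:-1] = (CP[:-1] - CM[1:]) / (2 * B)
          Hn[0] = H_res                                            # upstream reservoir boundary
          Qn[0] = (Hn[0] - CM[0]) / B
          tau = max(0.0, 1.0 - t / t_close)                        # valve opening fraction
          Qn[-1] = Q0 * tau                                        # downstream valve closes linearly
          Hn[-1] = CP[-1] - B * Qn[-1]
          return Hn, Qn

      Hmax = H[-1]
      for n in range(int(6.0 / dt)):
          H, Q = step(H, Q, (n + 1) * dt)
          Hmax = max(Hmax, H[-1])
      print("maximum head at the valve during the transient: %.1f m" % Hmax)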

  13. A method for detecting nonlinear determinism in normal and epileptic brain EEG signals.

    PubMed

    Meghdadi, Amir H; Fazel-Rezai, Reza; Aghakhani, Yahya

    2007-01-01

    A robust method of detecting determinism for short time series is proposed and applied to both healthy and epileptic EEG signals. The method provides a robust measure of determinism through characterizing the trajectories of the signal components which are obtained through singular value decomposition. Robustness of the method is shown by calculating proposed index of determinism at different levels of white and colored noise added to a simulated chaotic signal. The method is shown to be able to detect determinism at considerably high levels of additive noise. The method is then applied to both intracranial and scalp EEG recordings collected in different data sets for healthy and epileptic brain signals. The results show that for all of the studied EEG data sets there is enough evidence of determinism. The determinism is more significant for intracranial EEG recordings particularly during seizure activity.

  14. Social Anthropological Considerations on the Predictability and Unpredictability of Community Outcomes

    NASA Astrophysics Data System (ADS)

    Smith, Gregory O.

    This chapter surveys community process in a circumscribed area of central Italy in a comparative effort to show how simple quantitative methods can provide insights into the nature of community constitution. It is evident that individual and psychological processes are rooted in community experience, and in order to have a fuller understanding of the various system levels discussed in this volume, it is valuable also to have some insights into the organizational dynamics of localized communities.

  15. On the contribution of intramolecular zero point energy to the equation of state of solid H2

    NASA Technical Reports Server (NTRS)

    Chandrasekharan, V.; Etters, R. D.

    1978-01-01

    Experimental evidence shows that the internal zero-point energy of the H2 molecule exhibits a relatively strong pressure dependence in the solid as well as changing considerably upon condensation. It is shown that these effects contribute about 6% to the total sublimation energy and to the pressure in the solid state. Methods to modify the ab initio isolated pair potential to account for these environmental effects are discussed.

  16. Identification of the Radiative and Nonradiative Parts of a Wave Field

    NASA Astrophysics Data System (ADS)

    Hoenders, B. J.; Ferwerda, H. A.

    2001-08-01

    We present a method for decomposing a wave field, described by a second-order ordinary differential equation, into a radiative component and a nonradiative one, using a biorthonormal system related to the problem under consideration. We show that it is possible to select a special system such that the wave field is purely radiating. We discuss the differences and analogies with approaches which, unlike our approach, start from the corresponding sources of the field.

  17. A comparative analysis of spectral exponent estimation techniques for 1/fβ processes with applications to the analysis of stride interval time series

    PubMed Central

    Schaefer, Alexander; Brach, Jennifer S.; Perera, Subashan; Sejdić, Ervin

    2013-01-01

    Background The time evolution and complex interactions of many nonlinear systems, such as in the human body, result in fractal types of parameter outcomes that exhibit self-similarity over long time scales by a power law in the frequency spectrum S(f) = 1/fβ. The scaling exponent β is thus often interpreted as a “biomarker” of relative health and decline. New Method This paper presents a thorough comparative numerical analysis of fractal characterization techniques with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis complements previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. Results The results of our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Comparison with Existing Methods: Class dependent methods proved to be unsuitable for physiological time series. Detrended fluctuation analysis, the most prevalent method in the literature, exhibited large estimation variances. Conclusions The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method can provide reasonable consistency and accuracy for characterizing these fractal time series. PMID:24200509
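
    As a concrete baseline, the sketch below implements detrended fluctuation analysis, the comparison method named in the abstract (the study itself favours the averaged wavelet coefficient method); the scale range and test signal are arbitrary choices.

      import numpy as np

      def dfa(x, scales, order=1):
          """Detrended fluctuation analysis.

          Returns the fluctuation function F(n) for each window size n and the
          scaling exponent alpha from a log-log fit; for 1/f^beta noise the
          spectral exponent follows beta = 2*alpha - 1.
          """
          y = np.cumsum(x - np.mean(x))               # integrated profile
          F = []
          for n in scales:
              n_windows = len(y) // n
              segs = y[:n_windows * n].reshape(n_windows, n)
              t = np.arange(n)
              ms = []
              for seg in segs:
                  coef = np.polyfit(t, seg, order)    # local polynomial trend
                  ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
              F.append(np.sqrt(np.mean(ms)))
          alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
          return np.array(F), alpha

      # Toy check on white noise (beta = 0, so alpha should be close to 0.5)
      rng = np.random.default_rng(0)
      x = rng.standard_normal(4096)
      scales = np.unique(np.logspace(2, 9, 20, base=2).astype(int))
      F, alpha = dfa(x, scales)
      print("alpha = %.2f, implied beta = %.2f" % (alpha, 2 * alpha - 1))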

  18. "Natural family planning": effective birth control supported by the Catholic Church.

    PubMed

    Ryder, R E

    1993-09-18

    During 20-22 September Manchester is to host the 1993 follow up to last year's "earth summit" in Rio de Janeiro. At that summit the threat posed by world overpopulation received considerable attention. Catholicism was perceived as opposed to birth control and therefore as a particular threat. This was based on the notion that the only method of birth control approved by the church--natural family planning--is unreliable, unacceptable, and ineffective. In the 20 years since E L Billings and colleagues first described the cervical mucus symptoms associated with ovulation natural family planning has incorporated these symptoms and advanced considerably. Ultrasonography shows that the symptoms identify ovulation precisely. According to the World Health Organisation, 93% of women everywhere can identify the symptoms, which distinguish adequately between the fertile and infertile phases of the menstrual cycle. Most pregnancies during trials of natural family planning occur after intercourse at times recognised by couples as fertile. Thus pregnancy rates have depended on the motivation of couples. Increasingly studies show that rates equivalent to those with other contraceptive methods are readily achieved in the developed and developing worlds. Indeed, a study of 19,843 poor women in India had a pregnancy rate approaching zero. Natural family planning is cheap, effective, without side effects, and may be particularly acceptable to the efficacious among people in areas of poverty.

  19. Characterisation of FOGs in grease trap waste from the processing of chickens in Thailand.

    PubMed

    Nitayapat, Nuttakan; Chitprasert, Pakamon

    2014-06-01

    Industrial firms that kill and process chickens generate wastewater that contains fat, oil, and grease (FOG). The FOGs are located in the fatty waste that is collected by floatation in grease traps. Chemical and physical characterisation of FOGs would provide useful information that would help in the development of methods designed to decrease the extent of pollution caused by disposal of the waste and to utilise commercially some of its lipid constituents. Employing these methods would enhance the profitability and competitive potential of these commercial organisations. Samples of grease trap waste from 14 firms in central Thailand have been examined. Due to the very different schemes of waste management employed by these firms, the physical appearance of their fatty wastes showed considerable variation. The chemical and physical properties of the FOGs present in these wastes showed considerable variation also. Large amounts of free fatty acids (10-70% as oleic acid) were detected in most of the 14 wastes and palmitic, cis-9-oleic, cis,cis-9,12-linoleic, stearic, and palmitoleic acids were the predominant species of free and esterified acids. Most of the FOGs were solid at temperatures below 40 °C. Many of them contained traces of heavy metals (Cu and Pb) and some contained traces of the pesticides dimethoate and cypermethrin. The content of these potentially hazardous substances would have to be considered very carefully before discarding the fatty wastes and during the development of methods designed to isolate their potentially profitable lipid constituents. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Topography of hidden objects using THz digital holography with multi-beam interferences.

    PubMed

    Valzania, Lorenzo; Zolliker, Peter; Hack, Erwin

    2017-05-15

    We present a method for the separation of the signal scattered from an object hidden behind a THz-transparent sample in the framework of THz digital holography in reflection. It combines three images of different interference patterns to retrieve the amplitude and phase distribution of the object beam. Comparison of simulated with experimental images obtained from a metallic resolution target behind a Teflon plate demonstrates that the interference patterns can be described in the simple form of three-beam interference. Holographic reconstructions after the application of the method show a considerable improvement compared to standard reconstructions exclusively based on Fourier transform phase retrieval.

  1. Hydrogen Storage for Aircraft Applications Overview

    NASA Technical Reports Server (NTRS)

    Colozza, Anthony J.; Kohout, Lisa (Technical Monitor)

    2002-01-01

    Advances in fuel cell technology have brought about their consideration as sources of power for aircraft. This power can be utilized to run aircraft systems or even provide propulsion power. One of the key obstacles to utilizing fuel cells on aircraft is the storage of hydrogen. An overview of the potential methods of hydrogen storage was compiled. This overview identifies various methods of hydrogen storage and points out their advantages and disadvantages relative to aircraft applications. Minimizing weight and volume are the key aspects to storing hydrogen within an aircraft. An analysis was performed to show how changes in certain parameters of a given storage system affect its mass and volume.

  2. DOA Finding with Support Vector Regression Based Forward-Backward Linear Prediction.

    PubMed

    Pan, Jingjing; Wang, Yide; Le Bastard, Cédric; Wang, Tianzhen

    2017-05-27

    Direction-of-arrival (DOA) estimation has drawn considerable attention in array signal processing, particularly with coherent signals and a limited number of snapshots. Forward-backward linear prediction (FBLP) is able to directly deal with coherent signals. Support vector regression (SVR) is robust with small samples. This paper proposes combining the advantages of FBLP and SVR for the estimation of the DOAs of coherent incoming signals with few snapshots. The performance of the proposed method is validated with numerical simulations in coherent scenarios, in terms of different angle separations, numbers of snapshots, and signal-to-noise ratios (SNRs). Simulation results show the effectiveness of the proposed method.
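
    The sketch below shows the FBLP half of the idea for a single snapshot of a half-wavelength uniform linear array: forward and backward prediction equations are solved in the least-squares sense and the DOAs are read off the roots of the prediction polynomial. The SVR step of the paper is not included, and the array size, prediction order, and source angles are illustrative.

      import numpy as np

      def fblp_doa(x, L, n_sources, d_over_lambda=0.5):
          """Forward-backward linear prediction DOA estimates from one array snapshot."""
          M = len(x)
          rows, rhs = [], []
          for m in range(L, M):                       # forward prediction equations
              rows.append(x[m - L:m][::-1])           # [x[m-1], ..., x[m-L]]
              rhs.append(x[m])
          for m in range(M - L):                      # backward (conjugated) equations
              rows.append(np.conj(x[m + 1:m + L + 1]))
              rhs.append(np.conj(x[m]))
          a, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
          roots = np.roots(np.r_[1.0, -a])            # zeros of 1 - sum_k a_k z^-k
          roots = sorted(roots, key=lambda r: abs(abs(r) - 1.0))[:n_sources]
          omega = np.angle(np.array(roots))           # spatial frequencies of the signal roots
          return np.degrees(np.arcsin(omega / (2 * np.pi * d_over_lambda)))

      # Two coherent sources at -10 and 20 degrees on a 16-element half-wavelength array
      M = 16
      m = np.arange(M)
      x = sum(np.exp(2j * np.pi * 0.5 * np.sin(np.radians(t)) * m) for t in (-10.0, 20.0))
      print(np.sort(fblp_doa(x, L=8, n_sources=2)))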

  3. Construction of an amperometric ascorbate biosensor using epoxy resin membrane bound Lagenaria siceraria fruit ascorbate oxidase.

    PubMed

    Pundir, C S; Chauhan, Nidhi; Jyoti

    2011-06-01

    Ascorbate oxidase purified from Lagenaria siceraria fruit was immobilized onto an epoxy resin "Araldite" membrane with 79.4% retention of the initial activity of the free enzyme. The biosensor showed an optimum response within 15 s at pH 5.8 and 35°C, which was directly proportional to ascorbate concentrations ranging from 1 to 100 μM. There was a good correlation (R² = 0.99) between serum ascorbic acid values obtained by a standard enzymic colorimetric method and by the present method. The enzyme electrode was used 200 times over a span of 90 days without considerable loss of activity when stored at 4°C.

  4. Bootstrap confidence levels for phylogenetic trees.

    PubMed

    Efron, B; Halloran, E; Holmes, S

    1996-07-09

    Evolutionary trees are often estimated from DNA or RNA sequence data. How much confidence should we have in the estimated trees? In 1985, Felsenstein [Felsenstein, J. (1985) Evolution 39, 783-791] suggested the use of the bootstrap to answer this question. Felsenstein's method, which in concept is a straightforward application of the bootstrap, is widely used, but has been criticized as biased in the genetics literature. This paper concerns the use of the bootstrap in the tree problem. We show that Felsenstein's method is not biased, but that it can be corrected to better agree with standard ideas of confidence levels and hypothesis testing. These corrections can be made by using the more elaborate bootstrap method presented here, at the expense of considerably more computation.

  5. Moiré deflectometry-based position detection for optical tweezers.

    PubMed

    Khorshad, Ali Akbar; Reihani, S Nader S; Tavassoly, Mohammad Taghi

    2017-09-01

    Optical tweezers have proven to be indispensable tools for pico-Newton range force spectroscopy. A quadrant photodiode (QPD) positioned at the back focal plane of an optical tweezers' condenser is commonly used for locating the trapped object. In this Letter, for the first time, to the best of our knowledge, we introduce a moiré pattern-based detection method for optical tweezers. We show, both theoretically and experimentally, that this detection method could provide considerably better position sensitivity compared to the commonly used detection systems. For instance, position sensitivity for a trapped 2.17 μm polystyrene bead is shown to be 71% better than the commonly used QPD-based detection method. Our theoretical and experimental results are in good agreement.

  6. Bioelectrochemical removal of carbon dioxide (CO2): an innovative method for biogas upgrading.

    PubMed

    Xu, Heng; Wang, Kaijun; Holmes, Dawn E

    2014-12-01

    Innovative methods for biogas upgrading based on biological/in-situ concepts have started to arouse considerable interest. Bioelectrochemical removal of CO2 for biogas upgrading was proposed here and demonstrated in both batch and continuous experiments. The in-situ biogas upgrading system seemed to perform better than the ex-situ one, but CO2 content was kept below 10% in both systems. The in-situ system's performance was further enhanced under continuous operation. Hydrogenotrophic methanogenesis and alkali production with CO2 absorption could be major contributors to biogas upgrading. Molecular studies showed that all the biocathodes associated with biogas upgrading were dominated by sequences most similar to the same hydrogenotrophic methanogen species, Methanobacterium petrolearium (97-99% sequence identity). Conclusively, bioelectrochemical removal of CO2 showed great potential for biogas upgrading. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Oxygen Distributions—Evaluation of Computational Methods, Using a Stochastic Model for Large Tumour Vasculature, to Elucidate the Importance of Considering a Complete Vascular Network

    PubMed Central

    Bernhardt, Peter

    2016-01-01

    Purpose To develop a general model that utilises a stochastic method to generate a vessel tree based on experimental data, and an associated irregular, macroscopic tumour. These will be used to evaluate two different methods for computing oxygen distribution. Methods A vessel tree structure, and an associated tumour of 127 cm3, were generated, using a stochastic method and Bresenham’s line algorithm to develop trees on two different scales and fusing them together. The vessel dimensions were adjusted through convolution and thresholding and each vessel voxel was assigned an oxygen value. Diffusion and consumption were modelled using a Green’s function approach together with Michaelis-Menten kinetics. The computations were performed using a combined tree method (CTM) and an individual tree method (ITM). Five tumour sub-sections were compared, to evaluate the methods. Results The oxygen distributions of the same tissue samples, using different methods of computation, were considerably less similar (root mean square deviation, RMSD≈0.02) than the distributions of different samples using CTM (0.001 < RMSD < 0.01). The deviations of ITM from CTM increase with lower oxygen values, resulting in ITM severely underestimating the level of hypoxia in the tumour. Kolmogorov-Smirnov (KS) tests showed that millimetre-scale samples may not represent the whole. Conclusions The stochastic model managed to capture the heterogeneous nature of hypoxic fractions and, even though the simplified computation did not considerably alter the oxygen distribution, it led to an evident underestimation of tumour hypoxia, and thereby radioresistance. For a trustworthy computation of tumour oxygenation, the interaction between adjacent microvessel trees must not be neglected, which is why the evaluation should be made at high resolution with the CTM, applied to the entire tumour. PMID:27861529

  8. Predicting the limits to tree height using statistical regressions of leaf traits.

    PubMed

    Burgess, Stephen S O; Dawson, Todd E

    2007-01-01

    Leaf morphology and physiological functioning demonstrate considerable plasticity within tree crowns, with various leaf traits often exhibiting pronounced vertical gradients in very tall trees. It has been proposed that the trajectory of these gradients, as determined by regression methods, could be used in conjunction with theoretical biophysical limits to estimate the maximum height to which trees can grow. Here, we examined this approach using published and new experimental data from tall conifer and angiosperm species. We showed that height predictions were sensitive to tree-to-tree variation in the shape of the regression and to the biophysical endpoints selected. We examined the suitability of proposed end-points and their theoretical validity. We also noted that site and environment influenced height predictions considerably. Use of leaf mass per unit area or leaf water potential coupled with vulnerability of twigs to cavitation poses a number of difficulties for predicting tree height. Photosynthetic rate and carbon isotope discrimination show more promise, but in the second case, the complex relationship between light, water availability, photosynthetic capacity and internal conductance to CO2 must first be characterized.

  9. Solvent selection in ultrasonic-assisted emulsification microextraction: Comparison between high- and low-density solvents by means of novel type of extraction vessel.

    PubMed

    Nojavan, Saeed; Gorji, Tayebeh; Davarani, Saied Saeed Hosseiny; Morteza-Najarian, Amin

    2014-08-01

    There are numerous published reports on dispersive liquid-phase microextraction of a wide range of substances; however, no broadly accepted, systematic and purpose-oriented procedure for selecting the extraction solvent has yet been proposed. Most works deal with the optimization of available solvents without adequate prior consideration of their suitability. In this study, the performances of low- and high-density solvents were compared under the same conditions by means of a novel type of extraction vessel with conical head and bottom. The extraction efficiencies of seven basic pharmaceutical compounds using eighteen common organic solvents were studied. It was much easier to work with high-density solvents, and they mostly showed better performance. This work shows that, although exactly predicting solvent performance is a multifaceted problem, an initial selection of solvents guided by the physicochemical properties of the target analytes is feasible and promising. Finally, the practicality of the method for extraction from urine and plasma samples was investigated. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. An Efficient Method to Detect Mutual Overlap of a Large Set of Unordered Images for Structure-From

    NASA Astrophysics Data System (ADS)

    Wang, X.; Zhan, Z. Q.; Heipke, C.

    2017-05-01

    Recently, low-cost 3D reconstruction based on images has become a popular focus of photogrammetry and computer vision research. Methods which can handle an arbitrary geometric setup of a large number of unordered and convergent images are of particular interest. However, determining the mutual overlap poses a considerable challenge. We propose a new method which was inspired by and improves upon methods employing random k-d forests for this task. Specifically, we first derive features from the images and then a random k-d forest is used to find the nearest neighbours in feature space. Subsequently, the degree of similarity between individual images, the image overlaps and thus images belonging to a common block are calculated as input to a structure-from-motion (sfm) pipeline. In our experiments we show the general applicability of the new method and compare it with other methods by analyzing the time efficiency. Orientations and 3D reconstructions were successfully conducted with our overlap graphs by sfm. The results show a speed-up of a factor of 80 compared to conventional pairwise matching, and of 8 and 2 compared to the VocMatch approach using 1 and 4 CPU, respectively.
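
    A simplified sketch of the nearest-neighbour voting idea using a single k-d tree from SciPy rather than a random k-d forest; the descriptors, image labels, and vote threshold are placeholders, and real local features (e.g. SIFT or ORB) would replace the random data.

      import numpy as np
      from scipy.spatial import cKDTree

      def overlap_graph(features, image_ids, k=2, min_votes=20):
          """Approximate image-overlap detection by feature nearest neighbours.

          features: (n_features, dim) array of local descriptors from all images,
          image_ids: (n_features,) array mapping each descriptor to its image.
          Each descriptor votes for the image owning its nearest neighbour from a
          different image; pairs with enough votes are declared overlapping.
          """
          tree = cKDTree(features)
          n_images = int(image_ids.max()) + 1
          votes = np.zeros((n_images, n_images), dtype=int)
          # query a few neighbours so matches inside the same image can be skipped
          _, idx = tree.query(features, k=k + 1)
          for f, neighbours in enumerate(idx):
              for nb in neighbours[1:]:            # neighbours[0] is the point itself
                  if image_ids[nb] != image_ids[f]:
                      votes[image_ids[f], image_ids[nb]] += 1
                      break
          votes = votes + votes.T                  # symmetric similarity matrix
          return votes, votes >= min_votes

      # Hypothetical usage with random stand-in descriptors for 10 images
      rng = np.random.default_rng(1)
      feats = rng.standard_normal((3000, 64)).astype(np.float32)
      ids = rng.integers(0, 10, size=3000)
      similarity, adjacency = overlap_graph(feats, ids)

    The adjacency matrix would then define the image blocks handed to the structure-from-motion pipeline.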

  11. Non-contact method of search and analysis of pulsating vessels

    NASA Astrophysics Data System (ADS)

    Avtomonov, Yuri N.; Tsoy, Maria O.; Postnov, Dmitry E.

    2018-04-01

    Despite the variety of existing methods of recording the human pulse and a solid history of their development, there is still considerable interest in this topic. The development of new non-contact methods, based on advanced image processing, has caused a new wave of interest in this issue. We present a simple but quite effective method for analyzing the mechanical pulsations of blood vessels lying close to the surface of the skin. Our technique is a modification of imaging (or remote) photoplethysmography (i-PPG). We supplemented this method with a laser light source, which made it possible to use other methods of searching for the proposed pulsation zone. During testing of the method, several series of experiments were carried out with both artificial oscillating objects and the target signal source (a human wrist). The obtained results show that our method allows correct interpretation of complex data. To summarize, we proposed and tested an alternative method for the search and analysis of pulsating vessels.

  12. Methodological considerations regarding the use of inorganic 197Hg(II) radiotracer to assess mercury methylation potential rates in lake sediment

    USGS Publications Warehouse

    Perez, Catan S.; Guevara, S.R.; Marvin-DiPasquale, M.; Magnavacca, C.; Cohen, I.M.; Arribere, M.

    2007-01-01

    Methodological considerations on the determination of benthic methyl-mercury (CH3Hg) production potentials were investigated on lake sediment, using the 197Hg radiotracer. Three methods to arrest bacterial activity were compared: flash freezing, thermal sterilization, and γ-irradiation. Flash freezing showed CH3Hg recoveries similar to those of thermal sterilization, both of which were 50% higher than the recoveries obtained with γ-ray irradiation. No additional radiolabel was recovered in kill-control samples after an additional 24 or 65 h of incubation, suggesting that all treatments were effective at arresting Hg(II)-methylating bacterial activity, and that the initial recoveries are likely due to non-methylated 197Hg(II) carry-over in the organic extraction and/or [197Hg]CH3Hg produced via abiotic reactions. Two methods for extracting CH3Hg from sediment were compared: (a) direct extraction into toluene after sediment leaching with CuSO4 and HCl and (b) the same extraction with an additional back-extraction step into thiosulphate. Similar information was obtained with both methods, but the low efficiency observed and the extra work associated with the back-extraction procedure represent significant disadvantages, even though the direct extraction involves higher Hg(II) carry-over. © 2007 Elsevier Ltd. All rights reserved.

  13. Method for Household Refrigerators Efficiency Increasing

    NASA Astrophysics Data System (ADS)

    Lebedev, V. V.; Sumzina, L. V.; Maksimov, A. V.

    2017-11-01

    The relevance of optimizing the parameters of the working processes in air conditioning systems is demonstrated in this work. The research is performed using simulation modeling. The criteria for parameter optimization are considered, the target functions are analysed, and the key factors of technical and economic optimization are discussed. In the multi-objective optimization of the system, the optimal solution is found by minimizing a dual-target vector constructed from the target functions of total capital costs and total operating costs using the Pareto method of linear and weighted compromises. The tasks are solved in the MathCAD environment. The results show that, in the regions adjoining the optimal-solution areas, the technical and economic parameters of air conditioning systems show considerable deviations from their minimum values. At the same time, both the capital investments and the operating costs grow significantly as the technical parameters move away from their optimal values. The production and operation of conditioners with parameters that deviate considerably from the optimal values will lead to increased material and power costs. The research makes it possible to establish the boundaries of the region of optimal values of the technical and economic parameters in the design of air conditioning systems.

  14. Methods to improve traffic flow and noise exposure estimation on minor roads.

    PubMed

    Morley, David W; Gulliver, John

    2016-09-01

    Address-level estimates of exposure to road traffic noise for epidemiological studies are dependent on obtaining data on annual average daily traffic (AADT) flows that are both accurate and have good geographical coverage. National agencies often have reliable traffic count data for major roads, but for residential areas served by minor roads, especially at national scale, such information is often not available or incomplete. Here we present a method to predict AADT at the national scale for minor roads, using a routing algorithm within a geographical information system (GIS) to rank roads by importance based on simulated journeys through the road network. From a training set of known minor road AADT, routing importance is used to predict AADT on all UK minor roads in a regression model along with the road class, urban or rural location and AADT on the nearest major road. Validation with both independent traffic counts and noise measurements shows that this method gives a considerable improvement in noise prediction capability when compared to models that do not give adequate consideration to minor road variability (Spearman's rho increases from 0.46 to 0.72). This has significance for epidemiological cohort studies attempting to link noise exposure to adverse health outcomes. Copyright © 2016 Elsevier Ltd. All rights reserved.
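
    A minimal sketch of the regression step, assuming the four predictors named in the abstract (routing importance, road class, urban or rural location, and nearest major-road AADT); the numbers, the log transform, and the use of ordinary least squares are illustrative choices, not the paper's fitted model.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      # Hypothetical training table for minor roads with known traffic counts:
      # columns = routing importance (simulated journeys through the link),
      # road class (encoded), urban flag, AADT on the nearest major road.
      X_train = np.array([
          [1200, 1, 1, 18000],
          [  90, 2, 0,  6000],
          [ 450, 1, 1, 12000],
          [  30, 2, 0,  4000],
      ], dtype=float)
      y_train = np.log(np.array([2400.0, 310.0, 1500.0, 180.0]))  # log-AADT stabilises the spread

      model = LinearRegression().fit(X_train, y_train)
      aadt_pred = np.exp(model.predict([[300, 1, 1, 9000]]))
      print("predicted minor-road AADT: %.0f vehicles/day" % aadt_pred[0])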

  15. Assessment of Differential Item Functioning in Health-Related Outcomes: A Simulation and Empirical Analysis with Hierarchical Polytomous Data

    PubMed Central

    Sharafi, Zahra

    2017-01-01

    Background The purpose of this study was to evaluate the effectiveness of two methods of detecting differential item functioning (DIF) in the presence of multilevel data and polytomously scored items. The assessment of DIF with multilevel data (e.g., patients nested within hospitals, hospitals nested within districts) from large-scale assessment programs has received considerable attention but very few studies evaluated the effect of hierarchical structure of data on DIF detection for polytomously scored items. Methods The ordinal logistic regression (OLR) and hierarchical ordinal logistic regression (HOLR) were utilized to assess DIF in simulated and real multilevel polytomous data. Six factors (DIF magnitude, grouping variable, intraclass correlation coefficient, number of clusters, number of participants per cluster, and item discrimination parameter) with a fully crossed design were considered in the simulation study. Furthermore, data of Pediatric Quality of Life Inventory™ (PedsQL™) 4.0 collected from 576 healthy school children were analyzed. Results Overall, results indicate that both methods performed equivalently in terms of controlling Type I error and detection power rates. Conclusions The current study showed negligible difference between OLR and HOLR in detecting DIF with polytomously scored items in a hierarchical structure. Implications and considerations while analyzing real data were also discussed. PMID:29312463

  16. 77 FR 49451 - Agency Information Collection Activities: Consideration of Deferred Action for Childhood Arrivals...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-16

    ...-NEW] Agency Information Collection Activities: Consideration of Deferred Action for Childhood Arrivals... Deferred Action for Childhood Arrivals, 1615-NEW'' in the subject box. Regardless of the method used for... collection. (2) Title of the Form/Collection: Consideration of Deferred Action for Childhood Arrivals. (3...

  17. Afterword: Considerations for Future Practice of Assessment and Accountability

    ERIC Educational Resources Information Center

    Bresciani, Marilee J.

    2013-01-01

    This afterword offers challenges and considerations as the assessment movement continues to develop. The author offers some simple considerations for readers to ponder as they advance their evidence-based decision making processes, and encourages others to use these methods within the context of recent neuroscientific evidence that learning and…

  18. Comparison of projection skills of deterministic ensemble methods using pseudo-simulation data generated from multivariate Gaussian distribution

    NASA Astrophysics Data System (ADS)

    Oh, Seok-Geun; Suh, Myoung-Seok

    2017-07-01

    The projection skills of five ensemble methods were analyzed according to simulation skills, training period, and ensemble members, using 198 sets of pseudo-simulation data (PSD) produced by random number generation to mimic the simulated temperatures of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets (including 1 truth set) with 50 samples. The ensemble methods used were as follows: equal weighted averaging without bias correction (EWA_NBC), EWA with bias correction (EWA_WBC), weighted ensemble averaging based on root mean square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), and multivariate linear regression (Mul_Reg). The projection skills of the ensemble methods generally improved compared with the best member of each category. However, their projection skills were significantly affected by the simulation skills of the ensemble members. The weighted ensemble methods showed better projection skills than non-weighted methods, in particular, for the PSD categories having systematic biases and various correlation coefficients. The EWA_NBC showed considerably lower projection skills than the other methods, in particular, for the PSD categories with systematic biases. Although Mul_Reg showed relatively good skills, it showed strong sensitivity to the PSD categories, training periods, and number of members. On the other hand, the WEA_Tay and WEA_RAC showed relatively superior skills in both accuracy and reliability for all the sensitivity experiments. This indicates that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of ensemble members.

  19. Thermal lattice BGK models for fluid dynamics

    NASA Astrophysics Data System (ADS)

    Huang, Jian

    1998-11-01

    As an alternative in modeling fluid dynamics, the Lattice Boltzmann method has attracted considerable attention. In this thesis, we shall present a general form of thermal Lattice BGK. This form can handle large differences in density, temperature, and high Mach number. This generalized method can easily model gases with different adiabatic index values. The numerical transport coefficients of this model are estimated both theoretically and numerically. Their dependency on the sizes of integration steps in time and space, and on the flow velocity and temperature, are studied and compared with other established CFD methods. This study shows that the numerical viscosity of the Lattice Boltzmann method depends linearly on the space interval, and on the flow velocity as well for supersonic flow. This indicates this method's limitation in modeling high Reynolds number compressible thermal flow. On the other hand, the Lattice Boltzmann method shows promise in modeling micro-flows, i.e., gas flows in micron-sized devices. A two-dimensional code has been developed based on the conventional thermal lattice BGK model, with some modifications and extensions for micro- flows and wall-fluid interactions. Pressure-driven micro- channel flow has been simulated. Results are compared with experiments and simulations using other methods, such as a spectral element code using slip boundary condition with Navier-Stokes equations and a Direct Simulation Monte Carlo (DSMC) method.
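
    For orientation, the sketch below is the standard isothermal D2Q9 lattice BGK update (collision followed by streaming) on a periodic grid; the thesis extends this to a thermal model handling large density and temperature differences and micro-flow boundary conditions, none of which are shown here.

      import numpy as np

      # D2Q9 lattice: discrete velocities, weights, and a single BGK relaxation step.
      c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                    [1, 1], [-1, 1], [-1, -1], [1, -1]])
      w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
      tau = 0.8                                  # relaxation time (sets the viscosity)

      def equilibrium(rho, u):
          cu = np.einsum('qd,dxy->qxy', c, u)
          usq = np.einsum('dxy,dxy->xy', u, u)
          return rho * w[:, None, None] * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

      def bgk_step(f):
          rho = f.sum(axis=0)
          u = np.einsum('qd,qxy->dxy', c, f) / rho
          f = f + (equilibrium(rho, u) - f) / tau            # collision (BGK relaxation)
          for q in range(9):                                 # streaming with periodic boundaries
              f[q] = np.roll(np.roll(f[q], c[q, 0], axis=0), c[q, 1], axis=1)
          return f

      # Start from a uniform fluid with a small density bump and evolve a few steps.
      nx, ny = 64, 64
      rho0 = np.ones((nx, ny)); rho0[30:34, 30:34] += 0.05
      f = equilibrium(rho0, np.zeros((2, nx, ny)))
      for _ in range(100):
          f = bgk_step(f)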

  20. Oxygen Distributions-Evaluation of Computational Methods, Using a Stochastic Model for Large Tumour Vasculature, to Elucidate the Importance of Considering a Complete Vascular Network.

    PubMed

    Lagerlöf, Jakob H; Bernhardt, Peter

    2016-01-01

    To develop a general model that utilises a stochastic method to generate a vessel tree based on experimental data, and an associated irregular, macroscopic tumour. These will be used to evaluate two different methods for computing oxygen distribution. A vessel tree structure, and an associated tumour of 127 cm3, were generated, using a stochastic method and Bresenham's line algorithm to develop trees on two different scales and fusing them together. The vessel dimensions were adjusted through convolution and thresholding and each vessel voxel was assigned an oxygen value. Diffusion and consumption were modelled using a Green's function approach together with Michaelis-Menten kinetics. The computations were performed using a combined tree method (CTM) and an individual tree method (ITM). Five tumour sub-sections were compared, to evaluate the methods. The oxygen distributions of the same tissue samples, using different methods of computation, were considerably less similar (root mean square deviation, RMSD≈0.02) than the distributions of different samples using CTM (0.001 < RMSD < 0.01). The deviations of ITM from CTM increase with lower oxygen values, resulting in ITM severely underestimating the level of hypoxia in the tumour. Kolmogorov-Smirnov (KS) tests showed that millimetre-scale samples may not represent the whole. The stochastic model managed to capture the heterogeneous nature of hypoxic fractions and, even though the simplified computation did not considerably alter the oxygen distribution, it led to an evident underestimation of tumour hypoxia, and thereby radioresistance. For a trustworthy computation of tumour oxygenation, the interaction between adjacent microvessel trees must not be neglected, which is why the evaluation should be made at high resolution with the CTM, applied to the entire tumour.

  1. Quantifying Uncertainties from Presence Data Sampling Methods for Species Distribution Modeling: Focused on Vegetation.

    NASA Astrophysics Data System (ADS)

    Sung, S.; Kim, H. G.; Lee, D. K.; Park, J. H.; Mo, Y.; Kil, S.; Park, C.

    2016-12-01

    The impact of climate change has been observed throughout the globe. Ecosystems experience rapid changes such as vegetation shifts and species extinction. In this context, the Species Distribution Model (SDM) is one of the popular methods to project the impact of climate change on ecosystems. SDM is basically based on the niche of a certain species, which means that presence point data are essential for identifying the biological niche of the species. To run an SDM for plants, certain characteristics of vegetation must be considered. Normally, remote sensing techniques are used to produce vegetation data over large areas. As a consequence, the exact locations of presence points carry high uncertainty, because presence data sets are selected from polygon and raster datasets. Thus, sampling methods for modeling vegetation presence data should be carefully selected. In this study, we used three different sampling methods for the selection of vegetation presence data: random sampling, stratified sampling and site-index-based sampling. We used the R package BIOMOD2 to assess the uncertainty from modeling, and we included BioCLIM variables and other environmental variables as input data. Despite differences among the 10 SDMs, the sampling methods showed differences in ROC values: random sampling showed the lowest ROC value while site-index-based sampling showed the highest ROC value. As a result of this study, the uncertainties arising from presence data sampling methods and SDM can be quantified.
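
    The sketch below contrasts two of the three sampling strategies on hypothetical rasters: simple random sampling of presence cells versus sampling stratified over an elevation covariate. The site-index-based strategy that performed best in the study would additionally weight cells by a suitability index and is not shown.

      import numpy as np

      rng = np.random.default_rng(42)

      # Hypothetical rasters: a boolean vegetation mask and an elevation covariate
      ny, nx = 200, 200
      veg_mask = np.zeros((ny, nx), dtype=bool); veg_mask[60:160, 40:180] = True
      elevation = np.linspace(200, 1800, nx)[None, :].repeat(ny, axis=0)

      rows, cols = np.nonzero(veg_mask)        # every candidate presence cell

      def random_sample(n):
          pick = rng.choice(len(rows), size=n, replace=False)
          return rows[pick], cols[pick]

      def stratified_sample(n, n_strata=5):
          """Sample an (almost) equal share of points from each elevation band."""
          edges = np.quantile(elevation[rows, cols], np.linspace(0, 1, n_strata + 1)[1:-1])
          bands = np.digitize(elevation[rows, cols], edges)
          out_r, out_c = [], []
          for b in range(n_strata):
              in_band = np.nonzero(bands == b)[0]
              pick = rng.choice(in_band, size=min(n // n_strata, len(in_band)), replace=False)
              out_r.append(rows[pick]); out_c.append(cols[pick])
          return np.concatenate(out_r), np.concatenate(out_c)

      r1, c1 = random_sample(100)
      r2, c2 = stratified_sample(100)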

  2. Force wave transmission through the human locomotor system.

    PubMed

    Voloshin, A; Wosk, J; Brull, M

    1981-02-01

    A method to measure the capability of the human shock absorber system to attenuate input dynamic loading during the gait is presented. The experiments were carried out with two groups: healthy subjects and subjects with various pathological conditions. The results of the experiments show a considerable difference in the capability of each group's shock absorbers to attenuate force transmitted through the locomotor system. Comparison shows that healthy subjects definitely possess a more efficient shock-absorbing capacity than do those subjects with joint disorders. Presented results show that degenerative changes in joints reduce their shock absorbing capacity, which leads to overloading of the next shock absorber in the locomotor system. So, the development of osteoarthritis may be expected to result from overloading of a shock absorber's functional capacity.

  3. Direct fusion of geostationary meteorological satellite visible and infrared images based on thermal physical properties.

    PubMed

    Han, Lei; Wulie, Buzha; Yang, Yiling; Wang, Hongqing

    2015-01-05

    This study investigated a novel method of fusing visible (VIS) and infrared (IR) images with the major objective of obtaining higher-resolution IR images. Most existing image fusion methods focus only on visual performance and many fail to consider the thermal physical properties of the IR images, leading to spectral distortion in the fused image. In this study, we use the IR thermal physical property to correct the VIS image directly. Specifically, the Stefan-Boltzmann Law is used as a strong constraint to modulate the VIS image, such that the fused result shows a similar level of regional thermal energy as the original IR image, while preserving the high-resolution structural features from the VIS image. This method is an improvement over our previous study, which required VIS-IR multi-wavelet fusion before the same correction method was applied. The results of experiments show that applying this correction to the VIS image directly without multi-resolution analysis (MRA) processing achieves similar results, but is considerably more computationally efficient, thereby providing a new perspective on VIS and IR image fusion.

  4. Direct Fusion of Geostationary Meteorological Satellite Visible and Infrared Images Based on Thermal Physical Properties

    PubMed Central

    Han, Lei; Wulie, Buzha; Yang, Yiling; Wang, Hongqing

    2015-01-01

    This study investigated a novel method of fusing visible (VIS) and infrared (IR) images with the major objective of obtaining higher-resolution IR images. Most existing image fusion methods focus only on visual performance and many fail to consider the thermal physical properties of the IR images, leading to spectral distortion in the fused image. In this study, we use the IR thermal physical property to correct the VIS image directly. Specifically, the Stefan-Boltzmann Law is used as a strong constraint to modulate the VIS image, such that the fused result shows a similar level of regional thermal energy as the original IR image, while preserving the high-resolution structural features from the VIS image. This method is an improvement over our previous study, which required VIS-IR multi-wavelet fusion before the same correction method was applied. The results of experiments show that applying this correction to the VIS image directly without multi-resolution analysis (MRA) processing achieves similar results, but is considerably more computationally efficient, thereby providing a new perspective on VIS and IR image fusion. PMID:25569749

  5. Understanding Physiological and Degenerative Natural Vision Mechanisms to Define Contrast and Contour Operators

    PubMed Central

    Demongeot, Jacques; Fouquet, Yannick; Tayyab, Muhammad; Vuillerme, Nicolas

    2009-01-01

    Background Dynamical systems like neural networks based on lateral inhibition have a large field of applications in image processing, robotics and morphogenesis modeling. In this paper, we propose some examples of dynamical flows used in image contrasting and contouring. Methodology First we present the physiological basis of the retina function by showing the role of lateral inhibition in the generation of optical illusions and pathological processes. Then, based on these biological considerations about the real vision mechanisms, we study an enhancement method for contrasting medical images, using either a discrete neural network approach, or its continuous version, i.e. a non-isotropic diffusion-reaction partial differential system. Following this, we introduce other continuous operators based on similar biomimetic approaches: a chemotactic contrasting method, a viability contouring algorithm and an attentional focus operator. Then, we introduce the new notion of mixed potential Hamiltonian flows; we compare it with the watershed method and we use it for contouring. Conclusions We conclude by showing the utility of these biomimetic methods with some examples of application in medical imaging and computer-assisted surgery. PMID:19547712
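
    As the simplest caricature of lateral inhibition, the sketch below applies a difference-of-Gaussians operator (excitatory centre minus inhibitory surround); the paper's discrete neural network and anisotropic diffusion-reaction formulations are considerably richer than this linear filter.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def lateral_inhibition(image, sigma_center=1.0, sigma_surround=3.0, gain=1.0):
          """Difference-of-Gaussians contrast operator.

          A narrow excitatory centre minus a broader inhibitory surround mimics the
          retinal lateral inhibition that enhances edges and local contrast.
          """
          center = gaussian_filter(image.astype(float), sigma_center)
          surround = gaussian_filter(image.astype(float), sigma_surround)
          return center - gain * surround

      # Example: contrasting a synthetic image containing a soft vertical edge
      x = np.linspace(0, 1, 256)
      img = np.tile(1.0 / (1.0 + np.exp(-(x - 0.5) * 20)), (256, 1))
      contrasted = lateral_inhibition(img)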

  6. Ratbot automatic navigation by electrical reward stimulation based on distance measurement in unknown environments.

    PubMed

    Gao, Liqiang; Sun, Chao; Zhang, Chen; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang

    2013-01-01

    Traditional automatic navigation methods for bio-robots are constrained to pre-configured environments and thus cannot be applied to tasks in unknown environments. By treating bio-robots in the same way as mechanical robots, with no consideration of the animal's own innate abilities, those methods neglect the intelligent behaviour of animals. This paper proposes a novel automatic navigation method for a ratbot in unknown environments using only reward stimulation and distance measurement. By exploiting the rat's habit of thigmotaxis and its reward-seeking behavior, this method is able to incorporate the rat's intrinsic intelligence for obstacle avoidance and path searching into navigation. Experimental results show that this method works robustly and can successfully navigate the ratbot to a target in an unknown environment. This work may lay a solid foundation for the application of ratbots and also has significant implications for the automatic navigation of other bio-robots.

  7. Usage of the back-propagation method for alphabet recognition

    NASA Astrophysics Data System (ADS)

    Shaila Sree, R. N.; Eswaran, Kumar; Sundararajan, N.

    1999-03-01

    Artificial Neural Networks play a pivotal role in Artificial Intelligence. They can be trained efficiently for a variety of tasks using different methods, of which the Back Propagation method is one. The paper studies the choice of various design parameters of a neural network for the Back Propagation method. The study shows that when these parameters are properly assigned, the training task of the net is greatly simplified. The character recognition problem has been chosen as a test case for this study. A sample space of different handwritten characters of the English alphabet was gathered. A neural net is finally designed taking many of the design aspects into consideration and trained on different styles of writing. Experimental results are reported and discussed. It has been found that an appropriate choice of the design parameters of the neural net for the Back Propagation method reduces the training time and improves the performance of the net.
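
    A compact illustration of the training loop discussed above, written as a two-layer network trained by back propagation on stand-in data; the learning rate, hidden-layer size, and initialisation scale are exactly the kind of design parameters whose choice the paper studies, and none of the values here are taken from it.

      import numpy as np

      rng = np.random.default_rng(0)

      # Tiny two-layer network trained by back propagation on a toy letter task.
      # Inputs are flattened binary glyphs (here random stand-ins), targets one-hot.
      n_in, n_hidden, n_out, n_samples = 64, 32, 26, 520
      X = rng.integers(0, 2, size=(n_samples, n_in)).astype(float)
      labels = rng.integers(0, n_out, size=n_samples)
      T = np.eye(n_out)[labels]

      # Design parameters of the kind discussed in the paper.
      lr, init_scale, epochs = 0.1, 0.1, 200
      W1 = rng.normal(0, init_scale, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
      W2 = rng.normal(0, init_scale, (n_hidden, n_out)); b2 = np.zeros(n_out)

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      for epoch in range(epochs):
          # forward pass
          h = sigmoid(X @ W1 + b1)
          y = sigmoid(h @ W2 + b2)
          # backward pass (squared-error loss, sigmoid derivatives)
          delta_out = (y - T) * y * (1 - y)
          delta_hid = (delta_out @ W2.T) * h * (1 - h)
          W2 -= lr * h.T @ delta_out / n_samples; b2 -= lr * delta_out.mean(axis=0)
          W1 -= lr * X.T @ delta_hid / n_samples; b1 -= lr * delta_hid.mean(axis=0)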

  8. Influence of architecture on the kinetic stability of molecular assemblies.

    PubMed

    Patel, Amesh B; Allen, Stephanie; Davies, Martyn C; Roberts, Clive J; Tendler, Saul J B; Williams, Philip M

    2004-02-11

    The strength of a multimolecular system depends on the number of interactions that hold it together. Using dynamic force spectroscopy, we show how the kinetic stability of a system decreases as the number of molecular bonds is increased, as predicted by theory. The data raise important considerations for experimental tests of bond strength and, as a paradigm, suggest both routes to and pitfalls in methods for computational simulation of molecular transitions, such as ligand binding and protein folding.

  9. 77 FR 47677 - Duke Energy Carolinas, LLC, McGuire Nuclear Station, Units 1 and 2, Notice of Consideration of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-09

    ... Facility Operating License, Proposed No Significant Hazards Consideration Determination, and Opportunity... following methods: Federal Rulemaking Web site: Go to http://www.regulations.gov and search for Docket ID... related to this document by any of the following methods: Federal Rulemaking Web Site: Go to http://www...

  10. 31 CFR Appendix to Part 351 - Tax Considerations

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... 351, App. A Appendix to Part 351—Tax Considerations 1. What are some general tax considerations... any other obligations purchased on a discount basis. (b) Changing methods. If you use the cash basis... primary owner. (d) The purchase of a Series EE savings bond as a gift may have gift tax consequences. ...

  11. 31 CFR Appendix to Part 351 - Tax Considerations

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... 351, App. A Appendix to Part 351—Tax Considerations 1. What are some general tax considerations... any other obligations purchased on a discount basis. (b) Changing methods. If you use the cash basis... primary owner. (d) The purchase of a Series EE savings bond as a gift may have gift tax consequences. ...

  12. Emulsifying and foaming properties of amaranth seed protein isolates.

    PubMed

    Fidantsi, A; Doxastakis, G

    2001-07-01

    The emulsifying and foaming properties of amaranth seed protein isolates prepared by wet extraction methods, such as isoelectric precipitation and dialysis, were investigated. The various isolates differ from each other in many ways. The isolate prepared by isoelectric precipitation mainly contains the globulin but not the albumin fraction and a considerable amount of polysaccharides, while the other isolate prepared by the dialysis method contains all the globulin and albumin fractions. The protein-polysaccharide complexes enhance emulsion stability due to steric repulsion effects. Measurements of the emulsion stability show that the studied protein isolates act as effective stabilizing agents. Foam expansion is dominated by the surface activity and availability of protein in the solution, while foam stability is determined by the properties of the interfacial layer. The results show that amaranth protein isolates act as an effective foaming agent. Both foaming properties intensified from the presence of protein-polysaccharide complexes.

  13. General simulation algorithm for autocorrelated binary processes.

    PubMed

    Serinaldi, Francesco; Lombardo, Federico

    2017-02-01

    The apparent ubiquity of binary random processes in physics and many other fields has attracted considerable attention from the modeling community. However, generation of binary sequences with prescribed autocorrelation is a challenging task owing to the discrete nature of the marginal distributions, which makes the application of classical spectral techniques problematic. We show that such methods can effectively be used if we focus on the parent continuous process of beta distributed transition probabilities rather than on the target binary process. This change of paradigm results in a simulation procedure effectively embedding a spectrum-based iterative amplitude-adjusted Fourier transform method devised for continuous processes. The proposed algorithm is fully general, requires minimal assumptions, and can easily simulate binary signals with power-law and exponentially decaying autocorrelation functions corresponding, for instance, to Hurst-Kolmogorov and Markov processes. An application to rainfall intermittency shows that the proposed algorithm can also simulate surrogate data preserving the empirical autocorrelation.
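
    The sketch below shows the generic iterative amplitude-adjusted Fourier transform (IAAFT) step on a continuous parent series and a simple thresholding into a binary sequence; the paper's full algorithm, which acts on a parent process of beta-distributed transition probabilities, is not reproduced here.

      import numpy as np

      def iaaft(x, n_iter=100, seed=0):
          """Iterative amplitude-adjusted Fourier transform surrogate of x."""
          rng = np.random.default_rng(seed)
          target_amp = np.abs(np.fft.rfft(x))       # target power spectrum
          sorted_x = np.sort(x)                     # target marginal distribution
          s = rng.permutation(x)
          for _ in range(n_iter):
              # impose the target spectrum, keeping the current phases
              phases = np.angle(np.fft.rfft(s))
              s = np.fft.irfft(target_amp * np.exp(1j * phases), n=len(x))
              # impose the target marginal distribution by rank-order remapping
              s = sorted_x[np.argsort(np.argsort(s))]
          return s

      # Continuous parent process with exponentially decaying autocorrelation (AR(1)),
      # then dichotomised to obtain a correlated binary (e.g. rain / no-rain) sequence.
      rng = np.random.default_rng(1)
      n, phi = 4096, 0.9
      parent = np.zeros(n)
      for t in range(1, n):
          parent[t] = phi * parent[t - 1] + rng.standard_normal()
      surrogate = iaaft(parent)
      binary = (surrogate > np.quantile(surrogate, 0.7)).astype(int)   # 30% "wet" states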

  14. Magnetic properties of permalloy wires in vycor capillaries

    NASA Astrophysics Data System (ADS)

    Lubitz, P.; Ayers, J. D.; Davis, A.

    1991-11-01

    Thin wires of NiFe alloys with compositions near 80% Ni were prepared by melting the alloy in vycor tubes and drawing fibers from the softened glass. The resulting fibers consist of relatively thick-walled vycor capillaries containing permalloy wires filling a few percent of the volume. The wires are continuous over considerable lengths, uniform in circular cross section, nearly free of contact with the walls and can be drawn to have diameters less than 1 μm. Their magnetic properties are generally similar to bulk permalloy, but show a variety of magnetic switching behaviors for fields along the wire axis, depending on composition, wire diameter, and thermal history. As pulled, the wires can show sharp switching, reversible rotation or mixed behavior. This method can produce NiFe alloy wires suitable for use in applications as sensor, memory or inductive elements; other alloys, such as supermalloy and sendust, also can be fabricated as fine wires by this method.

  15. UHPLC-TQ-MS Coupled with Multivariate Statistical Analysis to Characterize Nucleosides, Nucleobases and Amino Acids in Angelicae Sinensis Radix Obtained by Different Drying Methods.

    PubMed

    Zhu, Shaoqing; Guo, Sheng; Duan, Jin-Ao; Qian, Dawei; Yan, Hui; Sha, Xiuxiu; Zhu, Zhenhua

    2017-06-01

    To explore the nutrients in roots of Angelica sinensis (Angelicae Sinensis Radix, ASR), a medicinal and edible plant, and evaluate its nutritional value, a rapid and reliable UHPLC-TQ-MS method was established and used to determine the potential nutritional compounds, including nucleosides, nucleobases and amino acids, in 50 batches of ASR samples obtained using two drying methods. The results showed that ASR is a healthy food rich in nucleosides, nucleobases and amino acids, especially arginine. The total average content of nucleosides and nucleobases in all ASR samples was 3.94 mg/g, while that of amino acids reached as high as 61.79 mg/g. Principal component analysis showed that chemical profile differences exist between the two groups of ASR samples prepared using different drying methods, and the contents of nutritional compounds in samples dried with the tempering-intermittent drying processing method (TIDM) were generally higher than in those dried using the traditional solar processing method. The above results suggest that ASR should be considered an ideal healthy food and that TIDM could be a suitable drying method for ASR when nucleosides, nucleobases and amino acids, with their known human health benefits, are taken as the major consideration.
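
    A minimal sketch of the principal component analysis step is given below. The data layout, compound count and group labels are placeholders rather than the study's data; the point is only how PCA scores can be used to look for separation between the two drying methods.

```python
# Hypothetical layout: rows are ASR batches, columns are measured contents of
# nucleosides, nucleobases and amino acids; PCA scores are inspected for
# separation between the two drying methods.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

contents = np.random.rand(50, 20)          # 50 batches x 20 quantified compounds (placeholder)
labels = np.array([0] * 25 + [1] * 25)     # 0 = tempering-intermittent drying, 1 = solar drying

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(contents))
print(scores[labels == 0].mean(axis=0))    # group centroids in PC space
print(scores[labels == 1].mean(axis=0))
```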

  16. Spectrum analysis on quality requirements consideration in software design documents.

    PubMed

    Kaiya, Haruhiko; Umemura, Masahiro; Ogata, Shinpei; Kaijiri, Kenji

    2013-12-01

    Software quality requirements defined in the requirements analysis stage should be implemented in the final products, such as source codes and system deployment. To guarantee this meta-requirement, quality requirements should be considered in the intermediate stages, such as the design stage or the architectural definition stage. We propose a novel method for checking whether quality requirements are considered in the design stage. In this method, a technique called "spectrum analysis for quality requirements" is applied not only to requirements specifications but also to design documents. The technique enables us to derive the spectrum of a document, and quality requirements considerations in the document are numerically represented in the spectrum. We can thus objectively identify whether the considerations of quality requirements in a requirements document are adapted to its design document. To validate the method, we applied it to commercial software systems with the help of a supporting tool, and we confirmed that the method worked well.

  17. Transonic CFD applications at Boeing

    NASA Technical Reports Server (NTRS)

    Tinoco, E. N.

    1989-01-01

    The use of computational methods for three dimensional transonic flow design and analysis at the Boeing Company is presented. A range of computational tools consisting of production tools for every day use by project engineers, expert user tools for special applications by computational researchers, and an emerging tool which may see considerable use in the near future are described. These methods include full potential and Euler solvers, some coupled to three dimensional boundary layer analysis methods, for transonic flow analysis about nacelle, wing-body, wing-body-strut-nacelle, and complete aircraft configurations. As the examples presented show, such a toolbox of codes is necessary for the variety of applications typical of an industrial environment. Such a toolbox of codes makes possible aerodynamic advances not previously achievable in a timely manner, if at all.

  18. Twelve tips for getting started using mixed methods in medical education research.

    PubMed

    Lavelle, Ellen; Vuk, Jasna; Barber, Carolyn

    2013-04-01

    Mixed methods research, which is gaining popularity in medical education, provides a new and comprehensive approach for addressing teaching, learning, and evaluation issues in the field. The aim of this article is to provide medical education researchers with 12 tips, based on consideration of current literature in the health professions and in educational research, for conducting and disseminating mixed methods research. Engaging in mixed methods research requires consideration of several major components: the mixed methods paradigm, types of problems, mixed method designs, collaboration, and developing or extending theory. Mixed methods is an ideal tool for addressing a full range of problems in medical education to include development of theory and improving practice.

  19. Improving the Stability of Astaxanthin by Microencapsulation in Calcium Alginate Beads.

    PubMed

    Lin, Shen-Fu; Chen, Ying-Chen; Chen, Ray-Neng; Chen, Ling-Chun; Ho, Hsiu-O; Tsung, Yu-Han; Sheu, Ming-Thau; Liu, Der-Zen

    2016-01-01

    There has been considerable interest in the biological functions of astaxanthin and its potential applications in the nutraceutical, cosmetics, food, and feed industries in recent years. However, the unstable structure of astaxanthin considerably limits its application. Therefore, this study reports the encapsulation of astaxanthin in calcium alginate beads using the extrusion method to improve its stability. This study also evaluates the stability of the encapsulated astaxanthin under different storage conditions. The evaluation of astaxanthin stability under various environmental factors reveals that temperature is the most influential environmental factor in astaxanthin degradation. Stability analysis shows that, regardless of the formulation used, the content of astaxanthin encapsulated in alginate beads remains above 90% of the original amount after 21 days of storage at 25°C. These results suggest that the proposed technique is a promising way to enhance the stability of other sensitive compounds.

  20. Improving the Stability of Astaxanthin by Microencapsulation in Calcium Alginate Beads

    PubMed Central

    Lin, Shen-Fu; Chen, Ying-Chen; Chen, Ray-Neng; Chen, Ling-Chun; Ho, Hsiu-O; Tsung, Yu-Han; Sheu, Ming-Thau; Liu, Der-Zen

    2016-01-01

    There has been considerable interest in the biological functions of astaxanthin and its potential applications in the nutraceutical, cosmetics, food, and feed industries in recent years. However, the unstable structure of astaxanthin considerably limits its application. Therefore, this study reports the encapsulation of astaxanthin in calcium alginate beads using the extrusion method to improve its stability. This study also evaluates the stability of the encapsulated astaxanthin under different storage conditions. The evaluation of astaxanthin stability under various environmental factors reveals that temperature is the most influential environmental factor in astaxanthin degradation. Stability analysis shows that, regardless of the formulation used, the content of astaxanthin encapsulated in alginate beads remains above 90% of the original amount after 21 days of storage at 25°C. These results suggest that the proposed technique is a promising way to enhance the stability of other sensitive compounds. PMID:27093175

  1. Carbon footprint assessment of recycling technologies for rare earth elements: A case study of recycling yttrium and europium from phosphor.

    PubMed

    Hu, Allen H; Kuo, Chien-Hung; Huang, Lance H; Su, Chao-Chin

    2017-02-01

    Rare earth elements are key raw materials in high-technology industries. Mining activities and manufacturing processes of such industries have caused considerable environmental impacts, such as soil erosion, vegetation destruction, and various forms of pollution. Sustaining the long-term supply of rare earth elements is difficult because of the global shortage of rare earth resources. The diminishing supply of rare earth elements has attracted considerable concern because many industrialized countries regard such elements as important strategic resources for economic growth. This study aims to explore the carbon footprints of yttrium and europium recovery techniques from phosphor. Two extraction recovery methods, namely, acid extraction and solvent extraction, were selected for the analysis and comparison of carbon footprints. The two following functional units were used: (1) the same phosphor amounts for specific Y and Eu recovery concentrations, and (2) the same phosphor amounts for extraction. For the acid extraction method, two acidic solutions (H2SO4 and HCl) were used at two different temperatures (60 and 90°C). For the solvent extraction method, acid leaching was performed followed by ionic liquid extraction. Carbon footprints of the acid and solvent extraction methods were estimated to be 10.1 and 10.6 kg CO2-eq, respectively. Comparison of the carbon emissions of the two extraction methods shows that the solvent extraction method has significantly higher extraction efficiency, even though the acid extraction method has a lower carbon footprint. These results may be used to develop strategies for life cycle management of rare earth resources to realize sustainable usage. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. A case study on the method-induced difference in the chemical properties and biodegradability of soil water extractable organic carbon of a granitic forest soil.

    PubMed

    Wu, Yue; Jiang, Ying

    2016-09-15

    Water extractable organic carbon (WEOC) plays important roles in soil dissolved organic matter (DOM) research. In the present study, we examined the chemical properties and biodegradability of WEOC obtained from one granitic forest soil with four commonly used or suggested extraction methods, to study the potential methodological influence in soil DOM research. Results showed great differences in both the chemical properties and the biodegradation of WEOC obtained with the various methods. For the chosen soil, compared to that from fresh soil, WEOC from dried soil contained a large proportion of HIN and Base fractions and labile O-alkyl components, which might be derived from microbial cell lysis, showed low fluorescence characteristics, and exhibited great biodegradability. Similarly, WEOC extracted under low-temperature, short-time conditions showed low fluorescence characteristics and exhibited considerable biodegradability. Conversely, WEOC that might have been subjected to decomposition and loss during extraction contained higher percentages of HOA fractions and aromatic alkyl and aryl components, showed high fluorescence characteristics, and exhibited low biodegradability. WEOC extracted at moderate time and temperature showed moderate biodegradability. These method-induced differences imply that direct comparison of results from similar studies is difficult, since we considered a specific forest soil here while other authors have examined other soil types and land uses. This complexity nevertheless suggests that methodological influence should receive more attention in future soil WEOC research. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Advanced microscopic methods for the detection of adhesion barriers in immunology in medical imaging

    NASA Astrophysics Data System (ADS)

    Lawrence, Shane

    2017-07-01

    Advanced methods of microscopy, and the advanced analysis techniques stemming from them, have developed greatly in the past few years. The use of single discrete methods has given way to combinations of methods, which means an increase in the data available for processing and thus for the analysis and diagnosis of ailments and diseases that can be viewed by each method. This presentation shows the combination of such methods, gives examples of the data arising from each individual method and from the combined methodology, and suggests how such data can be streamlined to enable conclusions to be drawn about the particular biological and biochemical considerations that arise. In this particular project the subject of the methodology was human lactoferrin (hLf) and the role of the adhesion properties of hLf in overcoming barriers to adhesion, mainly at the perimeter of the cellular unit, and how this affects the process of immunity in any particular case.

  4. Mixed-RKDG Finite Element Methods for the 2-D Hydrodynamic Model for Semiconductor Device Simulation

    DOE PAGES

    Chen, Zhangxin; Cockburn, Bernardo; Jerome, Joseph W.; ...

    1995-01-01

    In this paper we introduce a new method for numerically solving the equations of the hydrodynamic model for semiconductor devices in two space dimensions. The method combines a standard mixed finite element method, used to obtain directly an approximation to the electric field, with the so-called Runge-Kutta Discontinuous Galerkin (RKDG) method, originally devised for numerically solving multi-dimensional hyperbolic systems of conservation laws, which is applied here to the convective part of the equations. Numerical simulations showing the performance of the new method are displayed, and the results compared with those obtained by using Essentially Nonoscillatory (ENO) finite difference schemes. From the perspective of device modeling, these methods are robust, since they are capable of encompassing broad parameter ranges, including those for which shock formation is possible. The simulations presented here are for Gallium Arsenide at room temperature, but we have tested them much more generally with considerable success.

  5. Using discrete choice experiments within a cost-benefit analysis framework: some considerations.

    PubMed

    McIntosh, Emma

    2006-01-01

    A great advantage of the stated preference discrete choice experiment (SPDCE) approach to economic evaluation methodology is its immense flexibility within applied cost-benefit analyses (CBAs). However, while the use of SPDCEs in healthcare has increased markedly in recent years, there has been a distinct lack of equivalent CBAs in healthcare using such SPDCE-derived valuations. This article outlines specific issues and some practical suggestions for consideration relevant to the development of CBAs using SPDCE-derived benefits. The article shows that SPDCE-derived CBA can adopt recent developments in cost-effectiveness methodology including the cost-effectiveness plane, appropriate consideration of uncertainty, the net-benefit framework and probabilistic sensitivity analysis methods, while maintaining the theoretical advantage of the SPDCE approach. The concept of a cost-benefit plane is no different in principle from the cost-effectiveness plane and can be a useful tool for reporting and presenting the results of CBAs. However, there are many challenging issues to address for the advancement of CBA methodology using SPDCEs within healthcare. Particular areas for development include the importance of accounting for uncertainty in SPDCE-derived willingness-to-pay values, the methodology of SPDCEs in clinical trial settings and economic models, measurement issues pertinent to using SPDCEs specifically in healthcare, and the importance of issues such as consideration of the dynamic nature of healthcare and the resulting impact this has on the validity of attribute definitions and context.

  6. Bioanalytical method transfer considerations of chromatographic-based assays.

    PubMed

    Williard, Clark V

    2016-07-01

    Bioanalysis is an important part of the modern drug development process. The business practice of outsourcing and transferring bioanalytical methods from laboratory to laboratory has increasingly become a crucial strategy for successful and efficient delivery of therapies to the market. This chapter discusses important considerations when transferring various types of chromatographic-based assays in today's pharmaceutical research and development environment.

  7. Allele-specific HLA-DR typing by mass spectrometry: an alternative to hybridization-based typing methods.

    PubMed

    Worrall, T A; Schmeckpeper, B J; Corvera, J S; Cotter, R J

    2000-11-01

    The primer oligomer base extension (PROBE) reaction, combined with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry, is used to characterize HLA-DR2 polymorphism. Alleles are distinguished rapidly and accurately by measuring the mass of primer extension products at every known variable region of HLA-DR2 alleles. Since differentiation of alleles by PROBE relies on measuring differences in extension product mass rather than differences in hybridization properties, mistyped alleles resulting from nonspecific hybridization are absent. The method shows considerable potential for high-throughput screening of HLA-DR polymorphism in a chip-based format, including rapid tissue typing of unrelated volunteer donors.

  8. Scanning fluorescence correlation spectroscopy comes full circle.

    PubMed

    Gunther, German; Jameson, David M; Aguilar, Joao; Sánchez, Susana A

    2018-02-07

    In this article, we review the application of fluorescence correlation spectroscopy (FCS) methods to studies on live cells. We begin with a brief overview of the theory underlying FCS, highlighting the type of information obtainable. We then focus on circular scanning FCS. Specifically, we discuss instrumentation and data analysis and offer some considerations regarding sample preparation. Two examples from the literature are discussed in detail. First, we show how this method, coupled with the photon counting histogram analysis, can provide information on yeast ribosomal structures in live cells. The combination of scanning FCS with dual channel detection in the study of lipid domains in live cells is also illustrated. Copyright © 2018 Elsevier Inc. All rights reserved.

  9. Alignment theory of parallel-beam computed tomography image reconstruction for elastic-type objects using virtual focusing method.

    PubMed

    Jun, Kyungtaek; Kim, Dongwook

    2018-01-01

    X-ray computed tomography has been studied in various fields. Considerable effort has been focused on reconstructing the projection image set from a rigid-type specimen. However, reconstruction of images projected from an object showing elastic motion has received minimal attention. In this paper, a mathematical solution to reconstructing the projection image set obtained from an object with specific elastic motions (periodically, regularly, and elliptically expanded or contracted specimens) is proposed. To reconstruct the projection image set from expanded or contracted specimens, methods are presented for detection of the sample's motion modes, mathematical rescaling of pixel values, and conversion of the projection angle for a common layer.

  10. Pure detection of the acoustic spin pumping in Pt/YIG/PZT structures

    NASA Astrophysics Data System (ADS)

    Uchida, Ken-ichi; Qiu, Zhiyong; Kikkawa, Takashi; Saitoh, Eiji

    2014-11-01

    The acoustic spin pumping (ASP) stands for the generation of a spin voltage from sound waves in a ferromagnet/paramagnet junction. In this letter, we propose and demonstrate a method for pure detection of the ASP, which enables the separation of sound-wave-driven spin currents from the spin Seebeck effect due to the heating of a sample caused by a sound-wave injection. Our demonstration using a Pt/YIG/PZT sample shows that the ASP signal in this structure measured by a conventional method is considerably offset by the heating signal and that the pure ASP signal is one order of magnitude greater than that reported in the previous study.

  11. Deep multi-scale location-aware 3D convolutional neural networks for automated detection of lacunes of presumed vascular origin.

    PubMed

    Ghafoorian, Mohsen; Karssemeijer, Nico; Heskes, Tom; Bergkamp, Mayra; Wissink, Joost; Obels, Jiri; Keizer, Karlijn; de Leeuw, Frank-Erik; Ginneken, Bram van; Marchiori, Elena; Platel, Bram

    2017-01-01

    Lacunes of presumed vascular origin (lacunes) are associated with an increased risk of stroke, gait impairment, and dementia and are a primary imaging feature of small vessel disease. Quantification of lacunes may be of great importance to elucidate the mechanisms behind neuro-degenerative disorders and is recommended as part of study standards for small vessel disease research. However, due to the different appearance of lacunes in various brain regions and the existence of other similar-looking structures, such as perivascular spaces, manual annotation is a difficult, laborious and subjective task, which can potentially be greatly improved by reliable and consistent computer-aided detection (CAD) routines. In this paper, we propose an automated two-stage method using deep convolutional neural networks (CNN). We show that this method has good performance and can considerably benefit readers. We first use a fully convolutional neural network to detect initial candidates. In the second step, we employ a 3D CNN as a false positive reduction tool. As the location information is important to the analysis of candidate structures, we further equip the network with contextual information using multi-scale analysis and integration of explicit location features. We trained, validated and tested our networks on a large dataset of 1075 cases obtained from two different studies. Subsequently, we conducted an observer study with four trained observers and compared our method with them using a free-response operating characteristic analysis. Evaluated on a test set of 111 cases, the resulting CAD system exhibits performance similar to the trained human observers and achieves a sensitivity of 0.974 with 0.13 false positives per slice. A feasibility study also showed that a trained human observer would considerably benefit once aided by the CAD system.
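
    As a concrete illustration of the second stage described above, the sketch below shows a small 3D patch classifier with explicit location features appended before the final layers. It is a minimal PyTorch stand-in, not the authors' network: the patch size, channel counts and three-value location vector are assumptions made for illustration.

```python
# Minimal sketch of a false-positive-reduction step: a small 3D CNN classifies
# candidate patches, with normalized (x, y, z) location features concatenated
# before the fully connected layers.
import torch
import torch.nn as nn

class CandidateClassifier3D(nn.Module):
    def __init__(self, n_location_features=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
        )
        # assumes 16x16x16 input patches -> 32 channels of 4x4x4 after pooling
        self.classifier = nn.Sequential(
            nn.Linear(32 * 4 * 4 * 4 + n_location_features, 64), nn.ReLU(),
            nn.Linear(64, 2),  # candidate is a lacune vs. a false positive
        )

    def forward(self, patch, location):
        x = self.features(patch).flatten(start_dim=1)
        x = torch.cat([x, location], dim=1)   # append explicit location features
        return self.classifier(x)

# usage sketch:
scores = CandidateClassifier3D()(torch.randn(8, 1, 16, 16, 16), torch.rand(8, 3))
```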

  12. Covalent Heterogenization of a Discrete Mn(II) Bis-Phen Complex by a Metal-Template/Metal-Exchange Method: An Epoxidation Catalyst with Enhanced Reactivity

    PubMed Central

    Terry, Tracy J.; Stack, T. Daniel P.

    2009-01-01

    Considerable attention has been devoted to the immobilization of discrete epoxidation catalysts onto solid supports due to the possible benefits of site isolation such as increased catalyst stability, catalyst recycling, and product separation. A synthetic metal-template/metal-exchange method to imprint a covalently attached bis-1,10-phenanthroline coordination environment onto high-surface area, mesoporous SBA-15 silica is reported herein along with the epoxidation reactivity once reloaded with manganese. Comparisons of this imprinted material with material synthesized by random grafting of the ligand show that the template method creates more reproducible, solution-like bis-1,10-phenanthroline coordination at a variety of ligand loadings. Olefin epoxidation with peracetic acid shows the imprinted manganese catalysts have improved product selectivity for epoxides, greater substrate scope, more efficient use of oxidant, and higher reactivity than their homogeneous or grafted analogues independent of ligand loading. The randomly grafted manganese catalysts, however, show reactivity that varies with ligand loading while the homogeneous analogue degrades trisubstituted olefins and produces trans-epoxide products from cis-olefins. Efficient recycling behavior of the templated catalysts is also possible. PMID:18351763

  13. Considerations for multiple hypothesis correlation on tactical platforms

    NASA Astrophysics Data System (ADS)

    Thomas, Alan M.; Turpen, James E.

    2013-05-01

    Tactical platforms benefit greatly from the fusion of tracks from multiple sources in terms of increased situation awareness. As a necessary precursor to this track fusion, track-to-track association, or correlation, must first be performed. The related measurement-to-track fusion problem has been well studied with multiple hypothesis tracking and multiple frame assignment methods showing the most success. The track-to-track problem differs from this one in that measurements themselves are not available but rather track state update reports from the measuring sensors. Multiple hypothesis, multiple frame correlation systems have previously been considered; however, their practical implementation under the constraints imposed by tactical platforms is daunting. The situation is further exacerbated by the inconvenient nature of reports from legacy sensor systems on bandwidth-limited communications networks. In this paper, consideration is given to the special difficulties encountered when attempting the correlation of tracks from legacy sensors on tactical aircraft. Those difficulties include the following: covariance information from reporting sensors is frequently absent or incomplete; system latencies can create temporal uncertainty in data; and computational processing is severely limited by hardware and architecture. Moreover, consideration is given to practical solutions for dealing with these problems in a multiple hypothesis correlator.
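
    For orientation, the sketch below shows the simplest single-hypothesis form of track-to-track association: a Mahalanobis cost between local and remote track states, a chi-squared gate, and a global assignment. It is not the multiple hypothesis correlator discussed above, and it assumes covariances are available, which the paper notes is often not the case for legacy sensors.

```python
# Single-hypothesis track-to-track association sketch: gated Mahalanobis costs
# solved as an assignment problem (Hungarian algorithm).
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(local_states, local_covs, remote_states, remote_covs, gate=9.21):
    n, m = len(local_states), len(remote_states)
    cost = np.full((n, m), 1e6)                      # large cost = effectively forbidden
    for i in range(n):
        for j in range(m):
            d = local_states[i] - remote_states[j]
            s = local_covs[i] + remote_covs[j]       # assumes uncorrelated estimation errors
            md2 = d @ np.linalg.solve(s, d)          # squared Mahalanobis distance
            if md2 < gate:                           # chi-squared gate (2 dof, ~99%)
                cost[i, j] = md2
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < gate]
```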

  14. Emollient treatment of atopic dermatitis: latest evidence and clinical considerations

    PubMed Central

    Kung, Jeng Sum Charmaine; Ng, Wing Gi Gigi; Leung, Ting Fan

    2018-01-01

    Aim: To review current classes of emollients in the market, their clinical efficacy in atopic dermatitis (AD) and considerations for choice of an emollient. Methods: PubMed Clinical Queries under Clinical Study Categories (with Category limited to Therapy and Scope limited to Narrow) and Systematic Reviews were used as the search engine. Keywords of ‘emollient or moisturizer’ and ‘atopic dermatitis’ were used. Overview of findings: Using the keywords of ‘emollient’ and ‘atopic dermatitis’, there were 105 and 36 hits under Clinical Study Categories (with Category limited to Therapy and Scope limited to Narrow) and Systematic Reviews, respectively. Plant-derived products, animal products and special ingredients were discussed. Selected proprietary products were tabulated. Conclusions: A number of proprietary emollients have undergone trials with clinical data available on PubMed-indexed journals. Most moisturizers showed some beneficial effects, but there was generally no evidence that one moisturizer is superior to another. Choosing an appropriate emollient for AD patients would improve acceptability and adherence for emollient treatment. Physician’s recommendation is the primary consideration for patients when selecting a moisturizer/emollient; therefore, doctors should provide evidence-based information about these emollients. PMID:29692852

  15. Degradation data analysis based on a generalized Wiener process subject to measurement error

    NASA Astrophysics Data System (ADS)

    Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar

    2017-09-01

    Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure and measurement error into consideration simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Then model parameters can be estimated based on a maximum likelihood estimation (MLE) method. The cumulative distribution function (CDF) and the probability density function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is conducted to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach yields reasonable results with enhanced inference precision.
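
    A minimal simulation sketch of the model structure is given below, under assumed parameter values: a drift over a power-law transformed time scale Lambda(t) = t**q, Brownian variation on that scale, a per-unit random drift for unit-to-unit variation, and additive Gaussian measurement error. It illustrates the data-generating structure only, not the paper's MLE or first-hitting-time results.

```python
# Simulate Y(t) = mu*Lambda(t) + sigma*B(Lambda(t)) + eps with Lambda(t) = t**q.
import numpy as np

def simulate_path(t, mu=0.5, sigma=0.2, q=1.2, sigma_eps=0.05, rng=None):
    rng = np.random.default_rng(rng)
    lam = t**q                                             # transformed time scale
    dB = rng.standard_normal(len(t) - 1) * np.sqrt(np.diff(lam))
    x = np.concatenate([[0.0], mu * np.diff(lam) + sigma * dB]).cumsum()
    return x + sigma_eps * rng.standard_normal(len(t))     # add measurement error

t = np.linspace(0, 10, 101)
rng = np.random.default_rng(0)
mus = rng.normal(0.5, 0.1, size=20)        # unit-to-unit variation via a random drift per unit
paths = np.array([simulate_path(t, mu=m, rng=k) for k, m in enumerate(mus)])
print(paths[:, -1].mean(), paths[:, -1].std())   # end-of-test degradation statistics
```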

  16. An interval-parameter mixed integer multi-objective programming for environment-oriented evacuation management

    NASA Astrophysics Data System (ADS)

    Wu, C. Z.; Huang, G. H.; Yan, X. P.; Cai, Y. P.; Li, Y. P.

    2010-05-01

    Large crowds are increasingly common at political, social, economic, cultural and sports events in urban areas. This has led to attention on the management of evacuations under such situations. In this study, we optimise an approximation method for vehicle allocation and route planning in case of an evacuation. This method, based on an interval-parameter multi-objective optimisation model, has potential for use in a flexible decision support system for evacuation management. The modeling solutions are obtained by sequentially solving two sub-models corresponding to lower- and upper-bounds for the desired objective function value. The interval solutions are feasible and stable in the given decision space, and this may reduce the negative effects of uncertainty, thereby improving decision makers' estimates under different conditions. The resulting model can be used for a systematic analysis of the complex relationships among evacuation time, cost and environmental considerations. The results of a case study used to validate the proposed model show that the model does generate useful solutions for planning evacuation management and practices. Furthermore, these results are useful for evacuation planners, not only in making vehicle allocation decisions but also for providing insight into the tradeoffs among evacuation time, environmental considerations and economic objectives.

  17. Lane-changing model with dynamic consideration of driver's propensity

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoyuan; Wang, Jianqiang; Zhang, Jinglei; Ban, Xuegang Jeff

    2015-07-01

    Lane-changing is the result of the driver's assessment of how satisfactory driving conditions are in different lanes. The many factors influencing lane-changing behavior are diverse, random and difficult to measure, so it is hard to accurately capture the uncertainty of drivers' lane-changing behavior. As a result, research on lane-changing models lags behind that on car-following models. A driver's propensity is her/his emotional state, or the corresponding preference for a decision or action, toward the actual traffic situation under the influence of various dynamic factors. It represents the psychological characteristics of the driver during vehicle operation and movement, and it is an important factor influencing lane-changing. In this paper, dynamic recognition of the driver's propensity is incorporated into the simulation based on its time-varying behavior and an analysis of the driver's psycho-physical characteristics. The Analytic Hierarchy Process (AHP) method is used to quantify the hierarchy of the driver's dynamic lane-changing decision-making process, especially the influence of propensity. The model is validated using real data. Test results show that the developed lane-changing model, with dynamic consideration of the driver's time-varying propensity and the AHP method, is feasible and offers improved accuracy.
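
    The AHP step assumed above can be sketched as follows: a pairwise comparison matrix over decision factors is reduced to a priority weight vector via its principal eigenvector, with a consistency check. The factor names and comparison values below are illustrative, not taken from the study.

```python
# AHP priority weights from a pairwise comparison matrix (illustrative values).
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])           # e.g., gap acceptance vs. speed gain vs. safety margin

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)               # principal eigenvalue/eigenvector
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                              # priority weights of the factors

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)      # consistency index
cr = ci / 0.58                            # random index RI = 0.58 for n = 3
print(w, cr)                              # CR < 0.1 indicates acceptable consistency
```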

  18. Theoretical considerations and measurements for phoropters

    NASA Astrophysics Data System (ADS)

    Zhang, Jiyan; Liu, Wenli; Sun, Jie

    2008-10-01

    A phoropter is one of the most popular ophthalmic instruments used in current optometry practice. The quality and verification of the instrument are of the utmost importance. In 1997, International Organization for Standardization published the first ISO standard for requirements of phoropters. However, in China, few standard and test method are suggested for phoropters. Research work on test method for phoropters was carried out early in 2004 by China National Institute of Metrology. In this paper, first, structure of phoropters is described. Then, theoretical considerations for its optical design are analyzed. Next, a newly developed instrument is introduced and measurements are taken. By calibration, the indication error of the instrument is not over 0.05m-1. Finally, measurement results show that the quality situation of phoropters is not as good as expected because of production and assembly error. Optical design shall be improved especially for combinations of both spherical and cylindrical lenses with higher power. Besides, optical requirements specified in ISO standard are found to be a little strict and hard to meet. A proposal for revision of this international standard is drafted and discussed on ISO meeting of 2007 held in Tokyo.

  19. Correspondence: In support of the IES method of evaluating light source colour rendition

    DOE PAGES

    Ashdown, I.; Aviles, G.; Bennett, L.; ...

    2015-11-20

    In this editorial, written as an open letter to the lighting community, we stand in support of widespread adoption of TM-30-15: The IES Method of Evaluating Light Source Color Rendition. We introduce important considerations related to light source color rendition, define the need for a new method of evaluation, provide a high-level overview of the IES method, discuss some of the practical considerations related to the development of the IES method and the consensus process, and conclude by inviting you to join us in support of the new measures and graphics described in TM-30-15.

  20. Semi-automated segmentation of solid and GGO nodules in lung CT images using vessel-likelihood derived from local foreground structure

    NASA Astrophysics Data System (ADS)

    Yaguchi, Atsushi; Okazaki, Tomoya; Takeguchi, Tomoyuki; Matsumoto, Sumiaki; Ohno, Yoshiharu; Aoyagi, Kota; Yamagata, Hitoshi

    2015-03-01

    Reflecting global interest in lung cancer screening, considerable attention has been paid to automatic segmentation and volumetric measurement of lung nodules on CT. Ground glass opacity (GGO) nodules deserve special consideration in this context, since it has been reported that they are more likely to be malignant than solid nodules. However, due to relatively low contrast and indistinct boundaries of GGO nodules, segmentation is more difficult for GGO nodules compared with solid nodules. To overcome this difficulty, we propose a method for accurately segmenting not only solid nodules but also GGO nodules without prior information about nodule types. First, the histogram of CT values in pre-extracted lung regions is modeled by a Gaussian mixture model and a threshold value for including high-attenuation regions is computed. Second, after setting up a region of interest around the nodule seed point, foreground regions are extracted by using the threshold and quick-shift-based mode seeking. Finally, for separating vessels from the nodule, a vessel-likelihood map derived from elongatedness of foreground regions is computed, and a region growing scheme starting from the seed point is applied to the map with the aid of fast marching method. Experimental results using an anthropomorphic chest phantom showed that our method yielded generally lower volumetric measurement errors for both solid and GGO nodules compared with other methods reported in preceding studies conducted using similar technical settings. Also, our method allowed reasonable segmentation of GGO nodules in low-dose images and could be applied to clinical CT images including part-solid nodules.
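
    The first step described above (Gaussian mixture modeling of the CT value histogram to obtain a threshold for high-attenuation regions) can be sketched as follows. The two-component mixture, the 2-sigma cut-off and the placeholder Hounsfield-unit data are assumptions for illustration, not the authors' exact parameterization.

```python
# Fit a two-component Gaussian mixture to CT values in the lung mask and derive a
# heuristic threshold above the aerated-parenchyma component.
import numpy as np
from sklearn.mixture import GaussianMixture

def high_attenuation_threshold(lung_hu_values, n_components=2):
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(lung_hu_values.reshape(-1, 1))
    means = gmm.means_.ravel()
    stds = np.sqrt(gmm.covariances_.ravel())
    low = np.argmin(means)                      # component for aerated lung parenchyma
    return means[low] + 2.0 * stds[low]         # cut-off above the parenchyma mode

hu = np.concatenate([np.random.normal(-850, 60, 50000),   # placeholder parenchyma voxels
                     np.random.normal(-100, 80, 2000)])    # placeholder soft-tissue voxels
print(high_attenuation_threshold(hu))
```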

  1. Comparison of different tissue clearing methods and 3D imaging techniques for visualization of GFP-expressing mouse embryos and embryonic hearts.

    PubMed

    Kolesová, Hana; Čapek, Martin; Radochová, Barbora; Janáček, Jiří; Sedmera, David

    2016-08-01

    Our goal was to find an optimal tissue clearing protocol for whole-mount imaging of embryonic and adult hearts and whole embryos of transgenic mice that would preserve green fluorescent protein GFP fluorescence and permit comparison of different currently available 3D imaging modalities. We tested various published organic solvent- or water-based clearing protocols intended to preserve GFP fluorescence in central nervous system: tetrahydrofuran dehydration and dibenzylether protocol (DBE), SCALE, CLARITY, and CUBIC and evaluated their ability to render hearts and whole embryos transparent. DBE clearing protocol did not preserve GFP fluorescence; in addition, DBE caused considerable tissue-shrinking artifacts compared to the gold standard BABB protocol. The CLARITY method considerably improved tissue transparency at later stages, but also decreased GFP fluorescence intensity. The SCALE clearing resulted in sufficient tissue transparency up to ED12.5; at later stages the useful depth of imaging was limited by tissue light scattering. The best method for the cardiac specimens proved to be the CUBIC protocol, which preserved GFP fluorescence well, and cleared the specimens sufficiently even at the adult stages. In addition, CUBIC decolorized the blood and myocardium by removing tissue iron. Good 3D renderings of whole fetal hearts and embryos were obtained with optical projection tomography and selective plane illumination microscopy, although at resolutions lower than with a confocal microscope. Comparison of five tissue clearing protocols and three imaging methods for study of GFP mouse embryos and hearts shows that the optimal method depends on stage and level of detail required.

  2. A comparative analysis of spectral exponent estimation techniques for 1/f^β processes with applications to the analysis of stride interval time series.

    PubMed

    Schaefer, Alexander; Brach, Jennifer S; Perera, Subashan; Sejdić, Ervin

    2014-01-30

    The time evolution and complex interactions of many nonlinear systems, such as in the human body, result in fractal types of parameter outcomes that exhibit self-similarity over long time scales, following a power law in the frequency spectrum, S(f) ∝ 1/f^β. The scaling exponent β is thus often interpreted as a "biomarker" of relative health and decline. This paper presents a thorough comparative numerical analysis of fractal characterization techniques with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis is intended to complement previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. The results of our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Class-dependent methods proved to be unsuitable for physiological time series. Detrended fluctuation analysis, the most prevalent method in the literature, exhibited large estimation variances. The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method can provide reasonable consistency and accuracy for characterizing these fractal time series. Copyright © 2013 Elsevier B.V. All rights reserved.
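
    For orientation, the sketch below estimates the spectral exponent β by a straight-line fit to the log-log periodogram. This is a deliberately simple estimator, not the averaged wavelet coefficient method favoured by the study; the synthetic test signal and its parameters are illustrative.

```python
# Estimate beta in S(f) ~ 1/f**beta from the slope of the log-log periodogram.
import numpy as np

def spectral_exponent(x, fs=1.0):
    x = np.asarray(x, dtype=float) - np.mean(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)[1:]           # drop the zero frequency
    power = np.abs(np.fft.rfft(x))[1:] ** 2
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return -slope

# synthetic 1/f-type test signal: shape a white-noise spectrum by 1/f**(beta/2)
rng = np.random.default_rng(0)
n, beta = 4096, 1.0
spec = np.fft.rfft(rng.standard_normal(n))
spec[1:] /= np.fft.rfftfreq(n)[1:] ** (beta / 2)
x = np.fft.irfft(spec, n)
print(spectral_exponent(x))   # should be roughly 1
```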

  3. Edge wrinkling of a soft ridge with gradient thickness

    NASA Astrophysics Data System (ADS)

    Zhao, Yan; Shao, Zhi-Chun; Li, Guo-Yang; Zheng, Yang; Zhang, Wan-Yu; Li, Bo; Cao, Yanping; Feng, Xi-Qiao

    2017-06-01

    We investigate the edge wrinkling of a soft ridge with gradient thickness under axial compression. Our experiments show that the wrinkling wavelength undergoes a considerable increase with increasing load. Simple scaling laws are derived based on an upper-bound analysis to predict the critical buckling conditions and the evolution of wrinkling wavelength during the post-buckling stage, and the results show good accordance with our finite element simulations and experiments. We also report a pattern transformation triggered by the edge wrinkling of soft ridge arrays. The results and method not only help understand the correlation between the growth and form observed in some natural systems but also inspire a strategy to fabricate advanced functional surfaces.

  4. Radiosity diffusion model in 3D

    NASA Astrophysics Data System (ADS)

    Riley, Jason D.; Arridge, Simon R.; Chrysanthou, Yiorgos; Dehghani, Hamid; Hillman, Elizabeth M. C.; Schweiger, Martin

    2001-11-01

    We present the Radiosity-Diffusion model in three dimensions (3D), as an extension to previous work in 2D. It is a method for handling non-scattering spaces in optically participating media. We present the extension of the model to 3D, including an extension to cope with the increased complexity of the 3D domain. We show that in 3D more careful consideration must be given to the issues of meshing and visibility to model the transport of light within reasonable computational bounds. We demonstrate the model to be comparable to Monte-Carlo simulations for selected geometries, and show preliminary results of comparisons to measured time-resolved data acquired on resin phantoms.

  5. The boron implantation in the varied zone MBE MCT epilayer

    NASA Astrophysics Data System (ADS)

    Voitsekhovskii, Alexander V.; Grigor'ev, Denis V.; Kokhanenko, Andrey P.; Korotaev, Alexander G.; Sidorov, Yuriy G.; Varavin, Vasiliy S.; Dvoretsky, Sergey A.; Mikhailov, Nicolay N.; Talipov, Niyaz Kh.

    2005-09-01

    In this paper, experimental results on boron implantation of CdxHg1-xTe epilayers with varying composition near the surface of the material are discussed. The electron concentration in the surface layer after irradiation is investigated as a function of irradiation dose and ion energy, for doses of 10¹¹ - 3×10¹⁵ cm⁻² and energies of 20 - 150 keV. The results of measurements of the electrically active defect distribution after boron implantation, carried out by the differential Hall method, are also presented. Consideration of the data shows that the composition gradient mainly influences the dynamics of accumulation of electrically active radiation defects; analysis of the electrically active defect distributions shows that other factors are negligible.

  6. Impact tensile testing of wires

    NASA Technical Reports Server (NTRS)

    Dawson, T. H.

    1976-01-01

    The test consists of fixing one end of a wire specimen and allowing a threaded falling weight to strike the other. Assuming the dynamic stress in the wire to be a function only of its strain, energy considerations show for negligible wire inertia effects that the governing dynamic stress-strain law can be determined directly from impact energy vs. wire elongation data. Theoretical calculations are presented which show negligible wire inertia effects for ratios of wire mass to striking mass of the order of .01 or less. The test method is applied to soft copper wires and the dynamic stress-strain curve so determined is found to be about 30 percent higher than the corresponding static curve.
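
    The energy argument in the abstract can be written out explicitly. Taking A_0 and L_0 as the initial cross-sectional area and gauge length of the wire (symbols introduced here for illustration), ε = δ/L_0 the engineering strain at elongation δ, and W(δ) the absorbed impact energy, and assuming the stress depends on strain alone with negligible wire inertia:

```latex
W(\delta) = \int_0^{\delta} F \,\mathrm{d}\delta'
          = A_0 \int_0^{\delta} \sigma\!\left(\frac{\delta'}{L_0}\right) \mathrm{d}\delta'
\quad\Longrightarrow\quad
\sigma(\varepsilon) = \frac{1}{A_0}\,\frac{\mathrm{d}W}{\mathrm{d}\delta}
                    = \frac{1}{A_0 L_0}\,\frac{\mathrm{d}W}{\mathrm{d}\varepsilon},
```

    so the dynamic stress-strain curve follows directly from differentiating the measured impact energy versus elongation data.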

  7. Formal methods and digital systems validation for airborne systems

    NASA Technical Reports Server (NTRS)

    Rushby, John

    1993-01-01

    This report has been prepared to supplement a forthcoming chapter on formal methods in the FAA Digital Systems Validation Handbook. Its purpose is as follows: to outline the technical basis for formal methods in computer science; to explain the use of formal methods in the specification and verification of software and hardware requirements, designs, and implementations; to identify the benefits, weaknesses, and difficulties in applying these methods to digital systems used on board aircraft; and to suggest factors for consideration when formal methods are offered in support of certification. These latter factors assume the context for software development and assurance described in RTCA document DO-178B, 'Software Considerations in Airborne Systems and Equipment Certification,' Dec. 1992.

  8. Nucleic acids-based tools for ballast water surveillance, monitoring, and research

    NASA Astrophysics Data System (ADS)

    Darling, John A.; Frederick, Raymond M.

    2018-03-01

    Understanding the risks of biological invasion posed by ballast water, whether in the context of compliance testing, routine monitoring, or basic research, is fundamentally an exercise in biodiversity assessment, and as such should take advantage of the best tools available for tackling that problem. The past several decades have seen growing application of genetic methods for the study of biodiversity, driven in large part by dramatic technological advances in nucleic acids analysis. Monitoring approaches based on such methods have the potential to increase dramatically sampling throughput for biodiversity assessments, and to improve on the sensitivity, specificity, and taxonomic accuracy of traditional approaches. The application of targeted detection tools (largely focused on PCR but increasingly incorporating novel probe-based methodologies) has led to a paradigm shift in rare species monitoring, and such tools have already been applied for early detection in the context of ballast water surveillance. Rapid improvements in community profiling approaches based on high throughput sequencing (HTS) could similarly impact broader efforts to catalogue biodiversity present in ballast tanks, and could provide novel opportunities to better understand the risks of biotic exchange posed by ballast water transport, and the effectiveness of attempts to mitigate those risks. These various approaches still face considerable challenges to effective implementation, depending on particular management or research needs. Compliance testing, for instance, remains dependent on accurate quantification of viable target organisms; while tools based on RNA detection show promise in this context, the demands of such testing require considerable additional investment in methods development. In general surveillance and research contexts, both targeted and community-based approaches are still limited by various factors: quantification remains a challenge (especially for taxa in larger size classes), gaps in nucleic acids reference databases are still considerable, uncertainties in taxonomic assignment methods persist, and many applications have not yet matured sufficiently to offer standardized methods capable of meeting rigorous quality assurance standards. Nevertheless, the potential value of these tools, their growing utilization in biodiversity monitoring, and the rapid methodological advances over the past decade all suggest that they should be seriously considered for inclusion in the ballast water surveillance toolkit.

  9. Automated Simplification of Full Chemical Mechanisms

    NASA Technical Reports Server (NTRS)

    Norris, A. T.

    1997-01-01

    A code has been developed to automatically simplify full chemical mechanisms. The method employed is based on the Intrinsic Low Dimensional Manifold (ILDM) method of Maas and Pope. The ILDM method is a dynamical systems approach to the simplification of large chemical kinetic mechanisms. By identifying low-dimensional attracting manifolds, the method allows complex full mechanisms to be parameterized by just a few variables; in effect, generating reduced chemical mechanisms by an automatic procedure. These resulting mechanisms, however, still retain all the species used in the full mechanism. Full and skeletal mechanisms for various fuels are simplified to a two-dimensional manifold, and the resulting mechanisms are found to compare well with the full mechanisms, and show significant improvement over global one-step mechanisms, such as those by Westbrook and Dryer. In addition, by using an ILDM reaction mechanism in a CFD code, a considerable improvement in turn-around time can be achieved.

  10. Forecasting Construction Cost Index based on visibility graph: A network approach

    NASA Astrophysics Data System (ADS)

    Zhang, Rong; Ashuri, Baabak; Shyr, Yu; Deng, Yong

    2018-03-01

    Engineering News-Record (ENR), a professional magazine in the field of global construction engineering, publishes the Construction Cost Index (CCI) every month. Cost estimators and contractors assess projects, arrange budgets and prepare bids by forecasting CCI. However, fluctuations and uncertainties in CCI lead to inaccurate estimates from time to time. This paper aims at achieving more accurate predictions of CCI based on a network approach in which the time series is first converted into a visibility graph and future values are forecast by means of link prediction. According to the experimental results, the proposed method shows satisfactory performance since the error measures are acceptable. Compared with other methods, the proposed method is easier to implement and is able to forecast CCI with smaller errors. We are convinced that the proposed method can efficiently provide considerably accurate CCI predictions, contributing to construction engineering by assisting individuals and organizations in reducing costs and making project schedules.
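
    The first step, mapping a time series to a visibility graph, is straightforward to sketch. The snippet below builds the natural visibility graph (two samples are linked if the straight line joining them lies above every intermediate sample); the link-prediction forecasting step of the paper is not reproduced, and the CCI values shown are placeholders.

```python
# Natural visibility graph construction for a time series.
import numpy as np

def visibility_graph(y):
    n = len(y)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            # (a, b) are linked if every sample strictly between them lies below
            # the straight line joining (a, y[a]) and (b, y[b])
            visible = all(
                y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                edges.add((a, b))
    return edges

cci = np.array([100.2, 101.5, 101.1, 102.8, 103.0, 102.4, 104.1])  # placeholder values
print(sorted(visibility_graph(cci)))
```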

  11. Disciplining Bioethics: Towards a Standard of Methodological Rigor in Bioethics Research

    PubMed Central

    Adler, Daniel; Shaul, Randi Zlotnik

    2012-01-01

    Contemporary bioethics research is often described as multi- or interdisciplinary. Disciplines are characterized, in part, by their methods. Thus, when bioethics research draws on a variety of methods, it crosses disciplinary boundaries. Yet each discipline has its own standard of rigor—so when multiple disciplinary perspectives are considered, what constitutes rigor? This question has received inadequate attention, as there is considerable disagreement regarding the disciplinary status of bioethics. This disagreement has presented five challenges to bioethics research. Addressing them requires consideration of the main types of cross-disciplinary research, and consideration of proposals aiming to ensure rigor in bioethics research. PMID:22686634

  12. Ensemble variant interpretation methods to predict enzyme activity and assign pathogenicity in the CAGI4 NAGLU (Human N-acetyl-glucosaminidase) and UBE2I (Human SUMO-ligase) challenges.

    PubMed

    Yin, Yizhou; Kundu, Kunal; Pal, Lipika R; Moult, John

    2017-09-01

    CAGI (Critical Assessment of Genome Interpretation) conducts community experiments to determine the state of the art in relating genotype to phenotype. Here, we report results obtained using newly developed ensemble methods to address two CAGI4 challenges: enzyme activity for population missense variants found in NAGLU (Human N-acetyl-glucosaminidase) and random missense mutations in Human UBE2I (Human SUMO E2 ligase), assayed in a high-throughput competitive yeast complementation procedure. The ensemble methods are effective, ranked second for SUMO-ligase and third for NAGLU, according to the CAGI independent assessors. However, in common with other methods used in CAGI, there are large discrepancies between predicted and experimental activities for a subset of variants. Analysis of the structural context provides some insight into these. Post-challenge analysis shows that the ensemble methods are also effective at assigning pathogenicity for the NAGLU variants. In the clinic, providing an estimate of the reliability of pathogenic assignments is the key. We have also used the NAGLU dataset to show that ensemble methods have considerable potential for this task, and are already reliable enough for use with a subset of mutations. © 2017 Wiley Periodicals, Inc.

  13. Upgrades to the REA method for producing probabilistic climate change projections

    NASA Astrophysics Data System (ADS)

    Xu, Ying; Gao, Xuejie; Giorgi, Filippo

    2010-05-01

    We present an augmented version of the Reliability Ensemble Averaging (REA) method designed to generate probabilistic climate change information from ensembles of climate model simulations. Compared to the original version, the augmented one includes consideration of multiple variables and statistics in the calculation of the performance-based weights. In addition, the model convergence criterion previously employed is removed. The method is applied to the calculation of changes in mean and variability for temperature and precipitation over different sub-regions of East Asia based on the recently completed CMIP3 multi-model ensemble. Comparison of the new and old REA methods, along with the simple averaging procedure, and the use of different combinations of performance metrics shows that at fine sub-regional scales the choice of weighting is relevant. This is mostly because the models show a substantial spread in performance for the simulation of precipitation statistics, a result that supports the use of model weighting as a useful option to account for wide ranges of quality of models. The REA method, and in particular the upgraded one, provides a simple and flexible framework for assessing the uncertainty related to the aggregation of results from ensembles of models in order to produce climate change information at the regional scale. KEY WORDS: REA method, Climate change, CMIP3
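
    To make the weighting idea concrete, the sketch below computes a performance-weighted ensemble mean and spread from a single illustrative skill metric per model. It is a deliberately simplified stand-in for the upgraded REA weighting, which combines multiple variables and statistics; all numbers are placeholders.

```python
# Performance-weighted ensemble change in the spirit of REA-type weighting.
import numpy as np

def weighted_ensemble_change(model_changes, model_rmse, m=1.0):
    w = 1.0 / np.maximum(np.asarray(model_rmse), 1e-9) ** m   # better skill -> larger weight
    w /= w.sum()
    changes = np.asarray(model_changes)
    mean = np.sum(w * changes)
    spread = np.sqrt(np.sum(w * (changes - mean) ** 2))        # weighted uncertainty range
    return mean, spread

dT = [1.8, 2.4, 2.1, 3.0, 2.6]        # projected regional warming per model (placeholder, K)
rmse = [0.6, 1.2, 0.8, 2.0, 1.0]      # model performance vs. observations (placeholder)
print(weighted_ensemble_change(dT, rmse))
```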

  14. Analysis of Waveform Retracking Methods in Antarctic Ice Sheet Based on CRYOSAT-2 Data

    NASA Astrophysics Data System (ADS)

    Xiao, F.; Li, F.; Zhang, S.; Hao, W.; Yuan, L.; Zhu, T.; Zhang, Y.; Zhu, C.

    2017-09-01

    Satellite altimetry plays an important role in many geoscientific and environmental studies of the Antarctic ice sheet. The ranging accuracy is degraded near coasts or over non-ocean surfaces due to waveform contamination. A post-processing technique, known as waveform retracking, can be used to retrack the corrupted waveforms and in turn improve the ranging accuracy. In 2010, the CryoSat-2 satellite was launched with the Synthetic aperture Interferometric Radar ALtimeter (SIRAL) onboard. Satellite altimetry waveform retracking methods are discussed in this paper. Six retracking methods, including the OCOG method, the threshold method with 10 %, 25 % and 50 % threshold levels, and the linear and exponential 5-β parametric methods, are used to retrack CryoSat-2 waveforms over the transect from Zhongshan Station to Dome A. The results show that the threshold retracker performs best when both the waveform retracking success rate and the RMS of the retracking distance corrections are taken into consideration. The linear 5-β parametric retracker gives the best retracking precision, but cannot make full use of the waveform data.
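
    A minimal sketch of the threshold retracking idea is given below: the retracking gate is where the waveform first crosses a chosen fraction of its peak amplitude above an estimated noise floor, located by linear interpolation between gates. Practical details such as OCOG-based amplitude estimation are omitted, and the test waveform is synthetic.

```python
# Threshold retracker: sub-gate retracking position at a chosen amplitude fraction.
import numpy as np

def threshold_retrack(waveform, level=0.5, noise_gates=5):
    w = np.asarray(waveform, dtype=float)
    noise = w[:noise_gates].mean()                       # leading gates estimate the noise floor
    amp = w.max() - noise
    thr = noise + level * amp
    i = np.argmax(w >= thr)                              # first gate at or above the threshold
    if i == 0:
        return 0.0
    return (i - 1) + (thr - w[i - 1]) / (w[i] - w[i - 1])   # sub-gate position by interpolation

wf = np.concatenate([np.full(20, 1.0), np.linspace(1, 40, 15), np.full(30, 38.0)])
print(threshold_retrack(wf, level=0.5))
```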

  15. A novel synthesis of a new thorium (IV) metal organic framework nanostructure with well controllable procedure through ultrasound assisted reverse micelle method.

    PubMed

    Sargazi, Ghasem; Afzali, Daryoush; Mostafavi, Ali

    2018-03-01

    Reverse micelle (RM) and ultrasound-assisted reverse micelle (UARM) methods were applied to the synthesis of novel thorium nanostructures as metal organic frameworks (MOFs). Characterization with different techniques showed that the Th-MOF sample synthesized by the UARM method had higher thermal stability (354°C), smaller mean particle size (27 nm), and larger surface area (2.02×10³ m²/g). In addition, in this novel approach, the nucleation of crystals was found to occur in a shorter time. The synthesis parameters of the UARM method were designed by a 2^(k-1) factorial design, and the process control was systematically studied using analysis of variance (ANOVA) and response surface methodology (RSM). ANOVA showed that various factors, including surfactant content, ultrasound duration, temperature, ultrasound power, and the interactions between these factors, considerably affected different properties of the Th-MOF samples. According to the 2^(k-1) factorial design, the determination coefficient (R²) of the model is 0.999, with no significant lack of fit. The F value of 5432 implied that the model was highly significant and adequate to represent the relationship between the responses and the independent variables, and the large adjusted R² value indicates a good relationship between the experimental data and the fitted model. RSM predicted that it would be possible to produce Th-MOF samples with a thermal stability of 407°C, mean particle size of 13 nm, and surface area of 2.20×10³ m²/g. The mechanism controlling the Th-MOF properties was considerably different from the conventional mechanisms. Moreover, the MOF sample synthesized using UARM exhibited a higher capacity for nitrogen adsorption as a result of larger pore sizes. It is believed that the UARM method and the systematic studies developed in the present work can be considered a new strategy applicable to other nanoscale MOF samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Picture This... Safety, Dignity, and Voice-Ethical Research with Children: Practical Considerations for the Reflexive Researcher

    ERIC Educational Resources Information Center

    Phelan, Shanon K.; Kinsella, Elizabeth Anne

    2013-01-01

    While engaged in a research project involving the use of visual methods with children, the authors discovered that there are many ethical considerations beyond what could have been predicted at the outset. Some of these considerations are important with respect to research with children in general, while others arise more particularly when using…

  17. Considerations for using data envelopment analysis for the assessment of radiotherapy treatment plan quality.

    PubMed

    Simpson, John; Raith, Andrea; Rouse, Paul; Ehrgott, Matthias

    2017-10-09

    Purpose: The operations research method of data envelopment analysis (DEA) shows promise for assessing radiotherapy treatment plan quality. The purpose of this paper is to consider the technical requirements for using DEA for plan assessment. Design/methodology/approach: In total, 41 prostate treatment plans were retrospectively analysed using the DEA method. The authors investigate the impact of DEA weight restrictions with reference to the ability to differentiate plan performance at a level of clinical significance. Patient geometry influences plan quality and the authors compare differing approaches for managing patient geometry within the DEA method. Findings: The input-oriented DEA method is the method of choice when performing plan analysis using the key undesirable plan metrics as the DEA inputs. When considering multiple inputs, it is necessary to constrain the DEA input weights in order to identify potential plan improvements at a level of clinical significance. All tested approaches for the consideration of patient geometry yielded consistent results. Research limitations/implications: This work is based on prostate plans and individual recommendations would therefore need to be validated for other treatment sites. Notwithstanding, the method that requires both optimised DEA weights according to clinical significance and appropriate accounting for patient geometric factors is universally applicable. Practical implications: DEA can potentially be used during treatment plan development to guide the planning process or alternatively used retrospectively for treatment plan quality audit. Social implications: DEA is independent of the planning system platform and therefore has the potential to be used for multi-institutional quality audit. Originality/value: To the authors' knowledge, this is the first published examination of the optimal approach in the use of DEA for radiotherapy treatment plan assessment.
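
    For readers unfamiliar with DEA, the envelopment form of the input-oriented CCR model can be written as a small linear programme; the sketch below is a generic illustration (without the weight restrictions the paper recommends), and the toy plan metrics are invented.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def dea_input_oriented(X, Y, o):
        """Efficiency of unit `o` under the input-oriented CCR envelopment model.
        X: (n_units, n_inputs) undesirable metrics (smaller is better);
        Y: (n_units, n_outputs) desirable metrics."""
        n, m = X.shape
        s = Y.shape[1]
        c = np.r_[1.0, np.zeros(n)]                       # minimise theta
        # inputs:  sum_j lambda_j * x_ij <= theta * x_io
        A_in = np.hstack([-X[[o]].T, X.T])
        b_in = np.zeros(m)
        # outputs: sum_j lambda_j * y_rj >= y_ro
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        b_out = -Y[o]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                      bounds=[(None, None)] + [(0, None)] * n)
        return res.fun                                    # theta* in (0, 1]; 1 = efficient

    # toy example: 3 plans, inputs = two undesirable dose metrics, output = coverage
    X = np.array([[2.0, 3.0], [1.5, 2.0], [3.0, 4.0]])
    Y = np.array([[1.0], [1.0], [1.0]])
    print([round(dea_input_oriented(X, Y, o), 3) for o in range(3)])
    ```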

  18. Accurate diagnosis of thyroid follicular lesions from nuclear morphology using supervised learning.

    PubMed

    Ozolek, John A; Tosun, Akif Burak; Wang, Wei; Chen, Cheng; Kolouri, Soheil; Basu, Saurav; Huang, Hu; Rohde, Gustavo K

    2014-07-01

    Follicular lesions of the thyroid remain significant diagnostic challenges in surgical pathology and cytology. The diagnosis often requires considerable resources and ancillary tests including immunohistochemistry, molecular studies, and expert consultation. Visual analyses of nuclear morphological features, generally speaking, have not been helpful in distinguishing this group of lesions. Here we describe a method for distinguishing between follicular lesions of the thyroid based on nuclear morphology. The method utilizes an optimal transport-based linear embedding for segmented nuclei, together with an adaptation of existing classification methods. We show that the method outputs assignments (classification results) that are nearly perfectly correlated with the clinical diagnosis for several lesion types, utilizing a database of 94 patients in total. Experimental comparisons also show the new method can significantly outperform standard numerical feature-type methods in terms of agreement with the clinical diagnosis gold standard. In addition, the new method could potentially be used to derive insights into biologically meaningful nuclear morphology differences in these lesions. Our methods could be incorporated into a tool for pathologists to aid in distinguishing between follicular lesions of the thyroid. In addition, these results could potentially provide nuclear morphological correlates of biological behavior and reduce health care costs by decreasing histotechnician and pathologist time and obviating the need for ancillary testing. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. A mobile asset sharing policy for hospitals with real time locating systems.

    PubMed

    Demircan-Yıldız, Ece Arzu; Fescioglu-Unver, Nilgun

    2016-01-01

    Each year, hospitals lose a considerable amount of time and money due to misplaced mobile assets. In addition, the assets that remain in departments that frequently use them depreciate early, while other assets of the same type in different departments are rarely used. A real time locating system can prevent these losses when used with appropriate asset sharing policies. This research quantifies the amount of time a medium-size hospital saves by using a real time locating system and proposes an asset selection rule to eliminate the asset usage imbalance problem. The proposed asset selection rule is based on multi-objective optimization techniques. The effectiveness of this rule on asset-to-patient time and asset utilization rate variance performance measures was tested using the discrete event simulation method. Results show that the proposed asset selection rule improved the usage balance significantly. Sensitivity analysis showed that the proposed rule is robust to changes in demand rates and user preferences. Real time locating systems enable saving a considerable amount of time in hospitals, and they can still be improved by integrating decision support mechanisms. Combining tracking technology and asset selection rules helps improve healthcare services.

  20. Test plan : I-40 TTIS focus groups and personal interview

    DOT National Transportation Integrated Search

    1976-04-01

    This provides specific design recommendations, design considerations, and construction techniques for the construction of lateral support systems and underpinning. The design considerations are presented for each technique or method (soldier piles, s...

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behnke, M. R.; Bellei, T.A.; Bloethe, W.G.

    This paper presents a summary of the most important considerations for wind power plant collection system underground and overhead cable designs. Various considerations, including conductor selection, soil thermal properties, installation methods, splicing, concentric grounding, and NESC/NEC requirements, are discussed.

  2. Modal ring method for the scattering of sound

    NASA Technical Reports Server (NTRS)

    Baumeister, Kenneth J.; Kreider, Kevin L.

    1993-01-01

    The modal element method for acoustic scattering can be simplified when the scattering body is rigid. In this simplified method, called the modal ring method, the scattering body is represented by a ring of triangular finite elements forming the outer surface. The acoustic pressure is calculated at the element nodes. The pressure in the infinite computational region surrounding the body is represented analytically by an eigenfunction expansion. The two solution forms are coupled by the continuity of pressure and velocity on the body surface. The modal ring method effectively reduces the two-dimensional scattering problem to a one-dimensional problem capable of handling very high frequency scattering. In contrast to the boundary element method or the method of moments, which perform a similar reduction in problem dimension, the modal ring method has the added advantage of having a highly banded solution matrix requiring considerably less computer storage. The method shows excellent agreement with analytic results for scattering from rigid circular cylinders over a wide frequency range (1 ≤ ka ≤ 100) in the near and far fields.
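
    The analytic benchmark mentioned at the end, plane-wave scattering from a rigid circular cylinder, is itself an eigenfunction expansion and is easy to evaluate; the sketch below is the standard cylindrical-harmonic series for comparison purposes, not the modal ring code itself.

    ```python
    import numpy as np
    from scipy.special import jvp, hankel1, h1vp

    def rigid_cylinder_scattered(ka, kr, theta, n_max=None):
        """Scattered pressure for a unit plane wave incident on a rigid circular
        cylinder of radius a, from the classical cylindrical-harmonic series."""
        if n_max is None:
            n_max = int(ka) + 15                     # rule-of-thumb series truncation
        theta = np.asarray(theta, dtype=float)
        p = np.zeros(theta.shape, dtype=complex)
        for n in range(n_max + 1):
            eps = 1.0 if n == 0 else 2.0             # Neumann factor
            coef = -eps * (1j ** n) * jvp(n, ka) / h1vp(n, ka)
            p += coef * hankel1(n, kr) * np.cos(n * theta)
        return p

    # scattered field magnitude around the cylinder for ka = 10 at kr = 20
    theta = np.linspace(0.0, np.pi, 19)
    print(np.round(np.abs(rigid_cylinder_scattered(10.0, 20.0, theta)), 3))
    ```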

  3. A comparative study of cultural methods for the detection of Salmonella in feed and feed ingredients

    PubMed Central

    Koyuncu, Sevinc; Haggblom, Per

    2009-01-01

    Background: Animal feed as a source of infection to food producing animals is much debated. In order to increase our present knowledge about possible feed transmission it is important to know that the present isolation methods for Salmonella are reliable also for feed materials. In a comparative study, the ability of the standard method used for isolation of Salmonella in feed in the Nordic countries, the NMKL71 method (Nordic Committee on Food Analysis), was compared to the Modified Semisolid Rappaport Vassiliadis method (MSRV) and the international standard method (EN ISO 6579:2002). Five different feed materials were investigated, namely wheat grain, soybean meal, rape seed meal, palm kernel meal, pellets of pig feed and also scrapings from a feed mill elevator. Four different levels of the Salmonella serotypes S. Typhimurium, S. Cubana and S. Yoruba were added to each feed material. For all methods, pre-enrichment in Buffered Peptone Water (BPW) was carried out, followed by enrichment in the different selective media and finally plating on selective agar media. Results: The results obtained with all three methods showed no differences in detection levels, with an accuracy and sensitivity of 65% and 56%, respectively. However, Müller-Kauffmann tetrathionate-novobiocin broth (MKTTn) performed less well due to many false-negative results on Brilliant Green agar (BGA) plates. Compared to other feed materials, palm kernel meal showed a higher detection level with all serotypes and methods tested. Conclusion: The results of this study showed that the accuracy, sensitivity and specificity of the investigated cultural methods were equivalent. However, the detection levels for different feed and feed ingredients varied considerably. PMID:19192298

  4. Running accuracy analysis of a 3-RRR parallel kinematic machine considering the deformations of the links

    NASA Astrophysics Data System (ADS)

    Wang, Liping; Jiang, Yao; Li, Tiemin

    2014-09-01

    Parallel kinematic machines have drawn considerable attention and have been widely used in some special fields. However, high precision is still one of the challenges when they are used for advanced machine tools. One of the main reasons is that the kinematic chains of parallel kinematic machines are composed of elongated links that can easily suffer deformations, especially at high speeds and under heavy loads. A 3-RRR parallel kinematic machine is taken as a study object for investigating its accuracy with the consideration of the deformations of its links during the motion process. Based on the dynamic model constructed by the Newton-Euler method, all the inertia loads and constraint forces of the links are computed and their deformations are derived. Then the kinematic errors of the machine are derived with the consideration of the deformations of the links. Through further derivation, the accuracy of the machine is given in a simple explicit expression, which will be helpful to increase the calculating speed. The accuracy of this machine when following a selected circle path is simulated. The influences of magnitude of the maximum acceleration and external loads on the running accuracy of the machine are investigated. The results show that the external loads will deteriorate the accuracy of the machine tremendously when their direction coincides with the direction of the worst stiffness of the machine. The proposed method provides a solution for predicting the running accuracy of the parallel kinematic machines and can also be used in their design optimization as well as selection of suitable running parameters.

  5. Validating the operational bias and hypothesis of universal exponent in landslide frequency-area distribution.

    PubMed

    Huang, Jr-Chuan; Lee, Tsung-Yu; Teng, Tse-Yang; Chen, Yi-Chin; Huang, Cho-Ying; Lee, Cheing-Tung

    2014-01-01

    The exponent decay in the landslide frequency-area distribution is widely used for assessing the consequences of landslides, with some studies arguing that the slope of the exponent decay is universal and independent of mechanisms and environmental settings. However, the documented exponent slopes are diverse, and data processing is therefore hypothesized to be responsible for this inconsistency. An elaborated statistical experiment and two actual landslide inventories were used here to demonstrate the influence of data processing on the determination of the exponent. Seven categories with different landslide numbers were generated from the predefined inverse-gamma distribution and then analyzed by three data processing procedures (logarithmic binning, LB; normalized logarithmic binning, NLB; and cumulative distribution function, CDF). Five different bin widths were also considered while applying LB and NLB. Following that, maximum likelihood estimation was used to estimate the exponent slopes. The results showed that the exponents estimated by CDF were unbiased, while LB and NLB performed poorly. The two binning-based methods led to considerable biases that increased with increasing landslide number and bin width. The standard deviations of the estimated exponents depended not just on the landslide number but also on the binning method and bin width. Both extremely few and extremely plentiful landslide numbers reduced the confidence of the estimated exponents, which could be attributed to limited landslide numbers and considerable operational bias, respectively. The diverse documented exponents in the literature should therefore be adjusted accordingly. Our study strongly suggests that the considerable bias due to data processing and the data quality should be constrained in order to advance the understanding of landslide processes.
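
    The bias the authors attribute to binning is easy to reproduce on synthetic data; the sketch below draws samples with a known power-law tail (a Pareto stand-in rather than the full inverse-gamma used in the paper) and compares a maximum-likelihood estimate of the exponent with a least-squares fit to logarithmically binned densities.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def mle_exponent(x, xmin):
        """Maximum-likelihood (Hill-type) estimate of the power-law exponent."""
        x = x[x >= xmin]
        return 1.0 + x.size / np.log(x / xmin).sum()

    def binned_exponent(x, xmin, n_bins=20):
        """Exponent from a least-squares fit to logarithmically binned densities."""
        edges = np.logspace(np.log10(xmin), np.log10(x.max()), n_bins + 1)
        counts, _ = np.histogram(x, bins=edges)
        widths = np.diff(edges)
        centers = np.sqrt(edges[:-1] * edges[1:])
        keep = counts > 0
        dens = counts[keep] / widths[keep] / x.size
        slope, _ = np.polyfit(np.log10(centers[keep]), np.log10(dens), 1)
        return -slope

    # synthetic "landslide areas" with a true tail exponent of 2.4
    alpha_true, xmin = 2.4, 1.0
    x = xmin * (1.0 - rng.random(5000)) ** (-1.0 / (alpha_true - 1.0))   # Pareto draws
    print("MLE:", mle_exponent(x, xmin), " binned fit:", binned_exponent(x, xmin))
    ```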

  6. Comparing protein and energy status of winter-fed white-tailed deer

    USGS Publications Warehouse

    Page, B.D.; Underwood, H.B.

    2006-01-01

    Although nutritional status in response to controlled feeding trials has been extensively studied in captive white-tailed deer (Odocoileus virginianus), there remains a considerable gap in understanding the influence of variable supplemental feeding protocols on free-ranging deer. Consequently, across the northern portion of the white-tailed deer range, numerous property managers are investing substantial resources into winter supplemental-feeding programs without adequate tools to assess the nutritional status of their populations. We studied the influence of a supplemental winter feeding gradient on the protein and energy status of free-ranging white-tailed deer in the Adirondack Mountains of New York. We collected blood and fecal samples from 31 captured fawns across 3 sites that varied considerably in the frequency, quantity, and method of supplemental feed distribution. To facilitate population-wide comparisons, we collected fresh fecal samples off the snow at each of the 3 sites with supplemental feeding and 1 reference site where no feeding occurred. Results indicated that the method of feed distribution, in addition to quantity and frequency, can affect the nutritional status of deer. The least intensively fed population showed considerable overlap in diet quality with the unfed population in a principal components ordination, despite the substantial time and financial resources invested in the feeding program. Data from fecal samples generally denoted a gradient in diet quality and digestibility that corresponded with the availability of supplements. Our results further demonstrated that fecal nitrogen and fecal fiber, indices of dietary protein and digestibility, can be estimated using regressions of fecal pellet mass, enabling a rapid qualitative assessment of diet quality.

  7. Commercial Applications of Metal Foams: Their Properties and Production

    PubMed Central

    García-Moreno, Francisco

    2016-01-01

    This work gives an overview of the production, properties and industrial applications of metal foams. First, it classifies the most relevant manufacturing routes and methods. Then, it reviews the most important properties, with special interest in the mechanical and functional aspects, but also taking into account costs and feasibility considerations. These properties are the motivation and basis of related applications. Finally, a summary of the most relevant applications showing a large number of actual examples is presented. Concluding, we can forecast a slow, but continuous growth of this industrial sector. PMID:28787887

  8. Proceedings: Demilitarization and Disposal Technology Conference (2nd) Held at Salt Lake City, Utah on April 24, 25, 26, 1979,

    DTIC Science & Technology

    1979-04-01

    AAP contains a wet scrubber system. The scrubber is a combination spray chamber/ venturi / marble bed unit capable of attaining a 21" WG pressure drop...requirements until the feed rates are reduced considerably. Water quality data from the scrubber show the heavy metals and low pH to be the major water...demilitarized using this method. The process water, scrubber water, and all clean-up water are treated by a water treatment system. This treatment

  9. Tunneling calculations for GaAs-Al(x)Ga(1-x)As graded band-gap sawtooth superlattices. Thesis

    NASA Technical Reports Server (NTRS)

    Forrest, Kathrine A.; Meijer, Paul H. E.

    1991-01-01

    Quantum mechanical tunneling calculations for sawtooth (linearly graded band-gap) and step-barrier AlGaAs superlattices were performed by means of a transfer matrix method, within the effective mass approximation. The transmission coefficient and tunneling current versus applied voltage were computed for several representative structures. Particular consideration was given to effective mass variations. The tunneling properties of step and sawtooth superlattices show some qualitative similarities. Both structures exhibit resonant tunneling; however, because they deform differently under applied fields, the J-V curves differ.
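
    A bare-bones transfer-matrix calculation of the transmission coefficient through a piecewise-constant potential illustrates the core of such calculations; the sketch below assumes a single constant effective mass (the thesis additionally treats mass variation between layers), and the layer values are illustrative rather than the thesis structures.

    ```python
    import numpy as np

    HBAR = 1.054_571_8e-34      # J s
    ME   = 9.109_383_7e-31      # kg
    EV   = 1.602_176_6e-19      # J

    def transmission(E_eV, V_eV, widths_nm, m_eff=0.067):
        """Transmission probability through a piecewise-constant potential profile
        via 2x2 transfer matrices (constant effective mass for simplicity)."""
        m = m_eff * ME
        E = E_eV * EV
        V = np.asarray(V_eV, dtype=float) * EV                 # one value per layer
        x = np.r_[0.0, np.cumsum(widths_nm) * 1e-9]            # interface positions (m)
        k = np.sqrt(2.0 * m * (E - V) / HBAR**2 + 0j)          # complex wavenumbers

        def basis(kj, xj):                                     # plane-wave basis matrix at xj
            e = np.exp(1j * kj * xj)
            return np.array([[e, 1.0 / e], [kj * e, -kj / e]])

        T = np.eye(2, dtype=complex)
        for j in range(len(V) - 1):                            # step across each interface
            T = np.linalg.solve(basis(k[j + 1], x[j + 1]), basis(k[j], x[j + 1])) @ T
        t = np.linalg.det(T) / T[1, 1]                         # A_out/A_in with no wave from the right
        return (k[-1].real / k[0].real) * abs(t) ** 2

    # 5 nm, 0.3 eV barrier between flat leads (illustrative GaAs/AlGaAs-like numbers)
    for E in (0.05, 0.10, 0.20, 0.28):
        print(E, transmission(E, [0.0, 0.3, 0.0], [5.0, 5.0, 5.0]))
    ```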

  10. Probing dynamical symmetry breaking using quantum-entangled photons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Hao; Piryatinski, Andrei; Jerke, Jonathan

    Here, we present an input/output analysis of photon-correlation experiments whereby a quantum mechanically entangled bi-photon state interacts with a material sample placed in one arm of a Hong–Ou–Mandel apparatus. We show that the output signal contains detailed information about subsequent entanglement with the microscopic quantum states in the sample. In particular, we apply the method to an ensemble of emitters interacting with a common photon mode within the open-system Dicke model. Our results indicate considerable dynamical information concerning spontaneous symmetry breaking can be revealed with such an experimental system.

  11. Thermal neutron scintillators using unenriched boron nitride and zinc sulfide

    NASA Astrophysics Data System (ADS)

    McMillan, J. E.; Cole, A. J.; Kirby, A.; Marsden, E.

    2015-06-01

    Thermal neutron detectors based on powdered zinc sulfide intimately mixed with a neutron capture compound have a history as long as the scintillation technique itself. We show that using unenriched boron nitride powder, rather than the more commonly used enriched lithium fluoride, results in detection screens which produce less light but which are very considerably cheaper. Methods of fabricating large areas of this material are presented. The screens are intended for the production of large area low cost neutron detectors as a replacement for helium-3 proportional tubes.

  12. Probing dynamical symmetry breaking using quantum-entangled photons

    DOE PAGES

    Li, Hao; Piryatinski, Andrei; Jerke, Jonathan; ...

    2017-11-15

    Here, we present an input/output analysis of photon-correlation experiments whereby a quantum mechanically entangled bi-photon state interacts with a material sample placed in one arm of a Hong–Ou–Mandel apparatus. We show that the output signal contains detailed information about subsequent entanglement with the microscopic quantum states in the sample. In particular, we apply the method to an ensemble of emitters interacting with a common photon mode within the open-system Dicke model. Our results indicate considerable dynamical information concerning spontaneous symmetry breaking can be revealed with such an experimental system.

  13. Cooperativity and specificity of association of a designed transmembrane peptide.

    PubMed Central

    Gratkowski, Holly; Dai, Qing-Hong; Wand, A Joshua; DeGrado, William F; Lear, James D

    2002-01-01

    Thermodynamic studies aimed at quantitatively characterizing the free energy effects of amino acid substitutions are not restricted to two-state systems, but they do require knowing the number of states involved in the equilibrium under consideration. Using analytical ultracentrifugation and NMR methods, we show here that a membrane-soluble peptide, MS1, designed by modifying the sequence of the water-soluble coiled-coil GCN4-P1, exhibits a reversible monomer-dimer-trimer association in detergent micelles with a greater degree of cooperativity in C14-betaine than in dodecyl phosphocholine detergents. PMID:12202385

  14. Touching the theoretical capacity: synthesizing cubic LiTi2(PO4)3/C nanocomposites for high-performance lithium-ion battery.

    PubMed

    Deng, Wenjun; Wang, Xusheng; Liu, Chunyi; Li, Chang; Xue, Mianqi; Li, Rui; Pan, Feng

    2018-04-05

    A cubic LiTi2(PO4)3/C composite is successfully prepared via a simple solvothermal method followed by glucose-pyrolysis treatment. The as-fabricated LTP/C material delivers an ultra-high reversible capacity of 144 mA h g⁻¹ at a 0.2C rate, the highest reported to date, and shows considerable performance improvement over previous reports. Combined with its stable cycling performance and high rate capability, this material has a promising future in practical applications.

  15. Numerical parametric studies of spray combustion instability

    NASA Technical Reports Server (NTRS)

    Pindera, M. Z.

    1993-01-01

    A coupled numerical algorithm has been developed for studies of combustion instabilities in spray-driven liquid rocket engines. The model couples gas and liquid phase physics using the method of fractional steps. Also introduced is a novel, efficient methodology for accounting for spray formation through direct solution of liquid phase equations. Preliminary parametric studies show marked sensitivity of spray penetration and geometry to droplet diameter, considerations of liquid core, and acoustic interactions. Less sensitivity was shown to the combustion model type although more rigorous (multi-step) formulations may be needed for the differences to become apparent.

  16. Orbit of Comet C/1850 Q1 (Bond)

    NASA Astrophysics Data System (ADS)

    Branham, Richard L., Jr.

    Comet C/1850 Q1 (Bond) is one of a number of comets catalogued with parabolic orbits. Given that there are sufficient observations, 104 in right ascension and 103 in declination, it proves possible to calculate a better orbit. Some of the difficulties of working with 19th century observations, which show considerable scatter, are discussed. Rectangular coordinates, both of the comet and of the Sun, are interpolated by a recursive version of Aitken's method, making it unnecessary to specify an order for the interpolation. Comet Bond's orbit is slightly hyperbolic.
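
    The recursive interpolation referred to here builds the interpolating polynomial by iterated linear interpolation, so the tableau can simply be deepened until successive estimates agree; the sketch below is a generic Aitken-Neville implementation for illustration, not the code used for the ephemeris work.

    ```python
    import numpy as np

    def aitken_neville(xs, ys, x, tol=1e-12):
        """Iterated linear interpolation: deepen the tableau until successive
        diagonal estimates agree, so no interpolation order is fixed a priori."""
        xs = np.asarray(xs, dtype=float)
        p = np.asarray(ys, dtype=float).copy()
        prev = p[0]
        for j in range(1, len(xs)):
            # update column j of the tableau in place (top index downwards)
            for i in range(len(xs) - 1, j - 1, -1):
                p[i] = ((x - xs[i - j]) * p[i] - (x - xs[i]) * p[i - 1]) / (xs[i] - xs[i - j])
            if abs(p[j] - prev) < tol:       # the degree-j estimate has converged
                return p[j]
            prev = p[j]
        return p[-1]

    # interpolate sin at 0.4 from five tabulated points and compare with the truth
    xs = np.linspace(0.0, 1.0, 5)
    print(aitken_neville(xs, np.sin(xs), 0.4), np.sin(0.4))
    ```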

  17. Sorting protein decoys by machine-learning-to-rank

    PubMed Central

    Jing, Xiaoyang; Wang, Kai; Lu, Ruqian; Dong, Qiwen

    2016-01-01

    Much progress has been made in protein structure prediction during the last few decades. As the predicted models can span a broad accuracy spectrum, the accuracy of quality estimation becomes one of the key elements of successful protein structure prediction. Over the past years, a number of methods have been developed to address this issue, and these methods can be roughly divided into three categories: single-model methods, clustering-based methods and quasi single-model methods. In this study, we first develop a single-model method, MQAPRank, based on a learning-to-rank algorithm, and then implement a quasi single-model method, Quasi-MQAPRank. The proposed methods are benchmarked on the 3DRobot and CASP11 datasets. Five-fold cross-validation on the 3DRobot dataset shows that the proposed single-model method outperforms other methods whose outputs are taken as features of the proposed method, and that the quasi single-model method can further enhance the performance. On the CASP11 dataset, the proposed methods also perform well compared with other leading methods in the corresponding categories. In particular, the Quasi-MQAPRank method achieves considerable performance on the CASP11 Best150 dataset. PMID:27530967
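
    The learning-to-rank idea behind MQAPRank can be sketched with a pairwise (RankSVM-style) reduction: pairs of decoys become classification examples on feature differences, and the learned weights give a scoring function. The sketch below uses synthetic features and quality scores; the actual features, SVM variant and training data of MQAPRank are not reproduced here.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC

    def pairwise_rank_train(features, quality, n_pairs=2000, seed=0):
        """Learn linear scoring weights from pairwise preferences: decoy i is
        preferred to decoy j whenever quality[i] > quality[j]."""
        rng = np.random.default_rng(seed)
        X, y = [], []
        while len(y) < n_pairs:
            i, j = rng.integers(0, len(quality), size=2)
            if quality[i] == quality[j]:
                continue
            X.append(features[i] - features[j])
            y.append(1 if quality[i] > quality[j] else -1)
        clf = LinearSVC(C=1.0).fit(np.array(X), np.array(y))
        return clf.coef_.ravel()

    # synthetic decoys: 5 per-model assessment scores plus a noisy "true" quality
    rng = np.random.default_rng(1)
    feats = rng.standard_normal((200, 5))
    true_quality = feats @ np.array([1.0, -0.5, 0.2, 0.0, 0.3]) + 0.1 * rng.standard_normal(200)
    w = pairwise_rank_train(feats, true_quality)
    ranking_scores = feats @ w                      # higher score = predicted better decoy
    print(np.corrcoef(ranking_scores, true_quality)[0, 1])
    ```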

  18. Sorting protein decoys by machine-learning-to-rank.

    PubMed

    Jing, Xiaoyang; Wang, Kai; Lu, Ruqian; Dong, Qiwen

    2016-08-17

    Much progress has been made in protein structure prediction during the last few decades. As the predicted models can span a broad accuracy spectrum, the accuracy of quality estimation becomes one of the key elements of successful protein structure prediction. Over the past years, a number of methods have been developed to address this issue, and these methods can be roughly divided into three categories: single-model methods, clustering-based methods and quasi single-model methods. In this study, we first develop a single-model method, MQAPRank, based on a learning-to-rank algorithm, and then implement a quasi single-model method, Quasi-MQAPRank. The proposed methods are benchmarked on the 3DRobot and CASP11 datasets. Five-fold cross-validation on the 3DRobot dataset shows that the proposed single-model method outperforms other methods whose outputs are taken as features of the proposed method, and that the quasi single-model method can further enhance the performance. On the CASP11 dataset, the proposed methods also perform well compared with other leading methods in the corresponding categories. In particular, the Quasi-MQAPRank method achieves considerable performance on the CASP11 Best150 dataset.

  19. A discussion of differences in preparation, performance and postreflections in participant observations within two grounded theory approaches.

    PubMed

    Berthelsen, Connie Bøttcher; Lindhardt, Tove; Frederiksen, Kirsten

    2017-06-01

    This paper presents a discussion of the differences in using participant observation as a data collection method by comparing the classic grounded theory methodology of Barney Glaser with the constructivist grounded theory methodology of Kathy Charmaz. Participant observations allow nursing researchers to experience activities and interactions directly in situ. However, using participant observations as a data collection method can be done in many ways, depending on the chosen grounded theory methodology, and may produce different results. This discussion shows how the differences between using participant observations in classic and constructivist grounded theory can be considerable, and that grounded theory researchers should adhere to the method descriptions for performing participant observations according to the selected grounded theory methodology in order to enhance the quality of research. © 2016 Nordic College of Caring Science.

  20. Performance evaluation method of electric energy data acquire system based on combination of subjective and objective weights

    NASA Astrophysics Data System (ADS)

    Gao, Chen; Ding, Zhongan; Deng, Bofa; Yan, Shengteng

    2017-10-01

    Based on the characteristics of the electric energy data acquire system (EEDAS), and considering the availability of each index and the connections among indices, a performance evaluation index system for EEDAS is established covering three aspects: the master station system, the communication channel, and the terminal equipment. The comprehensive weight of each index is determined by a triangular fuzzy number analytic hierarchy process combined with the entropy weight method, so that both subjective preference and objective attributes are taken into consideration, making the comprehensive performance evaluation more reasonable and reliable. An example analysis shows that establishing a comprehensive index evaluation system by combining the analytic hierarchy process (AHP) and triangular fuzzy numbers (TFN) with the entropy method yields evaluation results that are not only convenient and practical but also more objective and accurate.
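
    A compact numerical sketch of the weighting step is given below: entropy-based objective weights are computed from an evaluation matrix and blended with subjective weights of the kind a TFN-AHP analysis would produce. The evaluation matrix, the subjective weights and the equal-blend rule are all placeholder assumptions, not the paper's data or exact combination formula.

    ```python
    import numpy as np

    def entropy_weights(X):
        """Objective index weights by the entropy method (rows = evaluated objects,
        columns = indices; entries assumed positive, benefit-type)."""
        P = X / X.sum(axis=0)                          # proportion of each object per index
        n = X.shape[0]
        plogp = np.where(P > 0, P * np.log(P), 0.0)
        e = -plogp.sum(axis=0) / np.log(n)             # information entropy per index
        d = 1.0 - e                                    # divergence degree
        return d / d.sum()

    def combine(subjective, objective, alpha=0.5):
        """Blend subjective (e.g. TFN-AHP) and objective (entropy) weights;
        the linear blend with alpha = 0.5 is an assumption, not the paper's rule."""
        w = alpha * np.asarray(subjective) + (1.0 - alpha) * np.asarray(objective)
        return w / w.sum()

    # toy data: 4 acquisition systems scored on 3 indices
    X = np.array([[0.92, 0.80, 0.75],
                  [0.88, 0.95, 0.60],
                  [0.95, 0.70, 0.85],
                  [0.90, 0.85, 0.80]])
    w_subj = np.array([0.5, 0.3, 0.2])                 # hypothetical TFN-AHP output
    w = combine(w_subj, entropy_weights(X))
    print(w, X @ w)                                    # final weights and system scores
    ```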

  1. A hybrid method for X-ray optics simulation: combining geometric ray-tracing and wavefront propagation

    DOE PAGES

    Shi, Xianbo; Reininger, Ruben; Sanchez del Rio, Manuel; ...

    2014-05-15

    A new method for beamline simulation combining ray-tracing and wavefront propagation is described. The 'Hybrid Method' computes diffraction effects when the beam is clipped by an aperture or mirror length and can also simulate the effect of figure errors in the optical elements when diffraction is present. The effect of different spatial frequencies of figure errors on the image is compared with SHADOW results, pointing to the limitations of the latter. The code has been benchmarked against the multi-electron version of SRW in one dimension to show its validity in the case of fully, partially and non-coherent beams. The results demonstrate that the code is considerably faster than the multi-electron version of SRW and is therefore a useful tool for beamline design and optimization.

  2. Energy harvesting from sea waves with consideration of airy and JONSWAP theory and optimization of energy harvester parameters

    NASA Astrophysics Data System (ADS)

    Mirab, Hadi; Fathi, Reza; Jahangiri, Vahid; Ettefagh, Mir Mohammad; Hassannejad, Reza

    2015-12-01

    One of the new methods for powering low-power electronic devices at sea is a wave energy harvesting system. In this method, piezoelectric material is employed to convert the mechanical energy of sea waves into electrical energy. The advantage of this method is that it avoids a battery charging system. Studies have been done on energy harvesting from sea waves; however, considering energy harvesting with the random JONSWAP wave theory and then determining the optimum values of the harvested energy is new. This paper does that by implementing the JONSWAP wave model, calculating the produced power, and realistically showing that the output power is decreased in comparison with the simpler Airy wave model. In addition, the parameters of the energy harvester system are optimized using a simulated annealing algorithm, yielding increased produced power.
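
    For reference, the JONSWAP spectral density that distinguishes this random-sea model from the regular Airy description can be evaluated as below; the significant-wave-height normalisation and the parameter values are common engineering defaults and are assumptions here, not the paper's specific sea state.

    ```python
    import numpy as np

    def jonswap(f, hs=2.0, tp=8.0, gamma=3.3):
        """JONSWAP spectral density S(f) [m^2 s] scaled so that 4*sqrt(m0) = hs."""
        f = np.asarray(f, dtype=float)
        fp = 1.0 / tp                                        # peak frequency
        sigma = np.where(f <= fp, 0.07, 0.09)                # peak-width parameters
        peak = np.exp(-((f - fp) ** 2) / (2.0 * sigma**2 * fp**2))
        s = f ** -5.0 * np.exp(-1.25 * (fp / f) ** 4) * gamma**peak
        m0 = np.sum((s[1:] + s[:-1]) * np.diff(f)) / 2.0     # zeroth spectral moment
        return s * (hs / 4.0) ** 2 / m0

    f = np.linspace(0.03, 0.5, 500)                          # Hz
    S = jonswap(f)
    m0 = np.sum((S[1:] + S[:-1]) * np.diff(f)) / 2.0
    print(4.0 * np.sqrt(m0))                                 # recovers hs ~ 2.0 m
    ```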

  3. Palm Vein Verification Using Multiple Features and Locality Preserving Projections

    PubMed Central

    Bu, Wei; Wu, Xiangqian; Zhao, Qiushi

    2014-01-01

    Biometrics is defined as identifying people by their physiological characteristics, such as iris pattern, fingerprint, and face, or by some aspects of their behavior, such as voice, signature, and gesture. Considerable attention has been drawn to these issues during the last several decades, and many biometric systems for commercial applications have been successfully developed. Recently, the vein pattern biometric has become increasingly attractive for its uniqueness, stability, and noninvasiveness. A vein pattern is the physical distribution structure of the blood vessels underneath a person's skin. The palm vein pattern forms a dense network and shows a huge number of vessels. The layout of the palm vein vessels stays in the same location for the whole life, and its pattern is definitely unique. In our work, the matched filter method is proposed for palm vein image enhancement. New palm vein feature extraction methods have been proposed: a global feature based on wavelet coefficients and locality preserving projections (WLPP), and a local feature based on local binary pattern variance and locality preserving projections (LBPV_LPP). Finally, a nearest neighbour matching method is proposed to verify the test palm vein images. The experimental results show that the EER of the proposed method is 0.1378%. PMID:24693230

  4. Palm vein verification using multiple features and locality preserving projections.

    PubMed

    Al-Juboori, Ali Mohsin; Bu, Wei; Wu, Xiangqian; Zhao, Qiushi

    2014-01-01

    Biometrics is defined as identifying people by their physiological characteristics, such as iris pattern, fingerprint, and face, or by some aspects of their behavior, such as voice, signature, and gesture. Considerable attention has been drawn to these issues during the last several decades, and many biometric systems for commercial applications have been successfully developed. Recently, the vein pattern biometric has become increasingly attractive for its uniqueness, stability, and noninvasiveness. A vein pattern is the physical distribution structure of the blood vessels underneath a person's skin. The palm vein pattern forms a dense network and shows a huge number of vessels. The layout of the palm vein vessels stays in the same location for the whole life, and its pattern is definitely unique. In our work, the matched filter method is proposed for palm vein image enhancement. New palm vein feature extraction methods have been proposed: a global feature based on wavelet coefficients and locality preserving projections (WLPP), and a local feature based on local binary pattern variance and locality preserving projections (LBPV_LPP). Finally, a nearest neighbour matching method is proposed to verify the test palm vein images. The experimental results show that the EER of the proposed method is 0.1378%.

  5. Flexible operation strategy for environment control system in abnormal supply power condition

    NASA Astrophysics Data System (ADS)

    Liping, Pang; Guoxiang, Li; Hongquan, Qu; Yufeng, Fang

    2017-04-01

    This paper establishes an optimization method that can be applied to the flexible operation of the environment control system in an abnormal supply power condition. A proposed concept of lifespan is used to evaluate the depletion time of the non-regenerative substances. The optimization objective function is to maximize the lifespans. The optimization variables are the allocated powers of the subsystems. The improved Non-dominated Sorting Genetic Algorithm is adopted to obtain the Pareto optimization frontier under the constraints of the cabin environmental parameters and the adjustable operating parameters of the subsystems. Assuming equal importance of the objective functions, the preferred power allocation of the subsystems can be optimized. The corresponding running parameters of the subsystems can then be determined to ensure the maximum lifespans. A long-duration space station with three astronauts is used to show the implementation of the proposed optimization method. Three different CO2 partial pressure levels are taken into consideration in this study. The optimization results show that the proposed method can obtain the preferred power allocation for the subsystems when the supply power is at a less-than-nominal value. The method can be applied to the autonomous control for the emergency response of the environment control system.

  6. Quantitative reconstruction of refractive index distribution and imaging of glucose concentration by using diffusing light.

    PubMed

    Liang, Xiaoping; Zhang, Qizhi; Jiang, Huabei

    2006-11-10

    We show that a two-step reconstruction method can be adapted to improve the quantitative accuracy of the refractive index reconstruction in phase-contrast diffuse optical tomography (PCDOT). We also describe the possibility of imaging tissue glucose concentration with PCDOT. In this two-step method, we first use our existing finite-element reconstruction algorithm to recover the position and shape of a target. We then use the position and size of the target as a priori information to reconstruct a single value of the refractive index within the target and background regions using a region reconstruction method. Due to the extremely low contrast available in the refractive index reconstruction, we incorporate a data normalization scheme into the two-step reconstruction to combat the associated low signal-to-noise ratio. Through a series of phantom experiments we find that this two-step reconstruction method can considerably improve the quantitative accuracy of the refractive index reconstruction. The results show that the relative error of the reconstructed refractive index is reduced from 20% to within 1.5%. We also demonstrate the possibility of PCDOT for recovering glucose concentration using these phantom experiments.

  7. A novel method for harmless disposal and resource reutilization of steel wire rope sludges.

    PubMed

    Zhang, Li; Liu, Yang-Sheng

    2016-10-01

    Rapid development of the steel wire rope industry has led to the generation of large quantities of pickling sludge, which causes significant ecological problems and considerable negative environmental effects. In this study, a novel method was proposed for harmless disposal and resource reutilization of steel wire rope sludge. Based on the method, two steel wire rope sludges (the Pb sludge and the Zn sludge) were first extracted by hydrochloric or sulfuric acid and then mixed with the hydrochloric acid extracting solution of aluminum skimmings to produce composite polyaluminum ferric flocculants. The optimum conditions (acid concentration, w/v ratio, reaction time, and reaction temperature) for acid extraction of the sludges were studied. Results showed that 97.03% of the Pb sludge and 96.20% of the Zn sludge were extracted. The leaching potential of the residues after acid extraction was evaluated, and a treatment for the residues was proposed. The obtained flocculant products were used to purify real domestic wastewater and showed an equivalent or better performance than the commercial ones. This method is environmentally friendly and cost-effective when compared with conventional sludge treatments.

  8. APPLICATIONS OF BOREHOLE-ACOUSTIC METHODS IN ROCK MECHANICS.

    USGS Publications Warehouse

    Paillet, Frederick L.

    1985-01-01

    Acoustic-logging methods using a considerable range of wavelengths and frequencies have proven very useful in the in situ characterization of deeply buried crystalline rocks. Seismic velocities are useful in investigating the moduli of unfractured rock, and in producing a continuous record of rock quality for comparison with discontinuous intervals of core. The considerable range of frequencies makes the investigation of scale effects possible in both fractured and unfractured rock. Several specific methods for the characterization of in situ permeability have been developed and verified in the field.

  9. Diagnosis of coccidioidomycosis by culture: safety considerations, traditional methods, and susceptibility testing.

    PubMed

    Sutton, Deanna A

    2007-09-01

    The recovery of Coccidioides spp. by culture and confirmation utilizing the AccuProbe nucleic acid hybridization method by GenProbe remain the definitive diagnostic method. Biosafety considerations from specimen collection through culture confirmation in the mycology laboratory are critical, as acquisition of coccidioidomycosis by laboratory workers is well documented. The designation of Coccidioides spp. as select agents of potential bioterrorism has mandated strict regulation of their transport and inventory. The genus appears generally susceptible, in vitro, although no defined breakpoints exist. Susceptibility testing may assist in documenting treatment failures.

  10. Quantifying and Understanding Effects from Wildlife, Radar, and Public Engagement on Future Wind Deployment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tegen, Suzanne

    This presentation provides an overview of findings from a report published in 2016 by researchers at the National Renewable Energy Laboratory, An Initial Evaluation of Siting Considerations on Current and Future Wind Deployment. The presentation covers the background for research, the Energy Department's Wind Vision, research methods, siting considerations, the wind project deployment process, and costs associated with siting considerations.

  11. The Fluorescent-Oil Film Method and Other Techniques for Boundary-Layer Flow Visualization

    NASA Technical Reports Server (NTRS)

    Loving, Donald L.; Katzoff, S.

    1959-01-01

    A flow-visualization technique, known as the fluorescent-oil film method, has been developed which appears to be generally simpler and to require less experience and development of technique than previously published methods. The method is especially adapted to use in the large high-powered wind tunnels which require considerable time to reach the desired test conditions. The method consists of smearing a film of fluorescent oil over a surface and observing where the thickness is affected by the shearing action of the boundary layer. These films are detected and identified, and their relative thicknesses are determined by use of ultraviolet light. Examples are given of the use of this technique. Other methods that show promise in the study of boundary-layer conditions are described. These methods include the use of a temperature-sensitive fluorescent paint and the use of a radiometer that is sensitive to the heat radiation from a surface. Some attention is also given to methods that can be used with a spray apparatus in front of the test model.

  12. Nanocrystalline hydroxyapatite enriched in selenite and manganese ions: physicochemical and antibacterial properties

    NASA Astrophysics Data System (ADS)

    Kolmas, Joanna; Groszyk, Ewa; Piotrowska, Urszula

    2015-07-01

    In this work, we used the co-precipitation method to synthesize hydroxyapatite (Mn-SeO3-HA) containing both selenium IV (approximately 3.60 wt.%) and manganese II (approximately 0.29 wt.%). Pure hydroxyapatite (HA), hydroxyapatite-containing manganese (II) ions (Mn-HA), and hydroxyapatite-containing selenite ions alone (SeO3-HA), prepared with the same method, were used as reference materials. The structures and physicochemical properties of all the obtained samples were investigated. PXRD studies showed that the obtained materials were homogeneous and consisted of apatite phase. Introducing selenites into the hydroxyapatite crystals considerably affects the size and degree of ordering. Experiments with transmission electron microscopy (TEM) showed that Mn-SeO3-HA crystals are very small, needle-like, and tend to form agglomerates. Fourier transform infrared spectroscopy (FT-IR) and solid-state nuclear magnetic resonance (ssNMR) were used to analyze the structure of the obtained material. Preliminary microbiological tests showed that the material demonstrated antibacterial activity against Staphylococcus aureus, yet such properties were not confirmed regarding Escherichia coli. PACS codes: 61, 76, 81

  13. Noise Reduction Design of the Volute for a Centrifugal Compressor

    NASA Astrophysics Data System (ADS)

    Song, Zhen; Wen, Huabing; Hong, Liangxing; Jin, Yudong

    2017-08-01

    In order to effectively control the aerodynamic noise of a compressor, this paper takes a marine exhaust turbocharger compressor as the research object. According to different design concepts for the volute section, tongue and exit cone, six volute models were established. The finite volume method is used to calculate the flow field, while the finite element method is used for the acoustic calculation. The different structural designs are compared and analyzed from three aspects: noise level, isentropic efficiency and static pressure recovery coefficient. The results showed that model 1 performed best among the volute section designs, model 3 performed best among the tongue designs, and model 6 performed best among the exit cone designs.

  14. Stator and Rotor Flux Based Deadbeat Direct Torque Control of Induction Machines

    NASA Technical Reports Server (NTRS)

    Kenny, Barbara H.; Lorenz, Robert D.

    2001-01-01

    A new, deadbeat type of direct torque control is proposed, analyzed, and experimentally verified in this paper. The control is based on stator and rotor flux as state variables. This choice of state variables allows a graphical representation which is transparent and insightful. The graphical solution shows the effects of realistic considerations such as voltage and current limits. A position and speed sensorless implementation of the control, based on the self-sensing signal injection technique, is also demonstrated experimentally for low speed operation. The paper first develops the new, deadbeat DTC methodology and graphical representation of the new algorithm. It then evaluates feasibility via simulation and experimentally demonstrates performance of the new method with a laboratory prototype including the sensorless methods.

  15. Estimation of TOA based MUSIC algorithm and cross correlation algorithm of appropriate interval

    NASA Astrophysics Data System (ADS)

    Lin, Wei; Liu, Jun; Zhou, Yineng; Huang, Jiyan

    2017-03-01

    Localization of mobile stations (MS) has gained considerable attention due to its wide applications in military, environmental, health and commercial systems. The phase angle and encoded data of the MSK system model are two critical parameters in the time-of-arrival (TOA) localization technique; nevertheless, precise values of the phase angle and encoded data are not easy to obtain in general. To reflect the actual situation, the condition in which the phase angle and encoded data are unknown should be considered. In this paper, a novel TOA localization method, which combines the MUSIC algorithm and the cross-correlation algorithm over an appropriate interval, is proposed. Simulations show that the proposed method has better performance than the MUSIC algorithm and the cross-correlation algorithm applied over the whole interval.
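
    The cross-correlation half of such a scheme is the classical correlator below, which estimates the arrival time as the lag that maximises the correlation between a reference and the received signal; the sampling rate, delay and noise level in the demo are invented, and the MUSIC refinement stage is not shown.

    ```python
    import numpy as np

    def toa_crosscorr(tx, rx, fs):
        """Coarse TOA estimate: the lag (in seconds) that maximises the
        cross-correlation between the transmitted reference and the received signal."""
        c = np.correlate(rx, tx, mode="full")            # lags from -(len(tx)-1) upward
        lag = np.argmax(np.abs(c)) - (len(tx) - 1)
        return lag / fs

    # toy check: a known delay of 25 samples plus additive noise
    rng = np.random.default_rng(1)
    fs = 1e6
    tx = rng.standard_normal(256)
    delay = 25
    rx = np.r_[np.zeros(delay), tx] + 0.3 * rng.standard_normal(256 + delay)
    print(toa_crosscorr(tx, rx, fs) * fs)                # ~25 samples
    ```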

  16. The Global Optimization of Pt13 Cluster Using the First-Principle Molecular Dynamics with the Quenching Technique

    NASA Astrophysics Data System (ADS)

    Chen, Xiangping; Duan, Haiming; Cao, Biaobing; Long, Mengqiu

    2018-03-01

    The high-temperature first-principles molecular dynamics method used to obtain the low-energy configurations of clusters [L. L. Wang and D. D. Johnson, PRB 75, 235405 (2007)] is extended to a considerably larger temperature range by combining it with the quenching technique. Our results show that there are strong correlations between the probability of obtaining the ground-state structure and the temperature. Larger probabilities can be obtained at relatively low temperatures (corresponding to the pre-melting temperature range). Details of the structural correlation with temperature are investigated by taking the Pt13 cluster as an example, which suggests a quite efficient method for obtaining the lowest-energy geometries of metal clusters.

  17. Accounting for the richness of daily activities.

    PubMed

    White, Mathew P; Dolan, Paul

    2009-08-01

    Serious consideration is being given to the impact of private behavior and public policies on people's subjective well-being (SWB). A new approach to measuring well-being, the day reconstruction method (DRM), weights the affective component of daily activities by their duration in order to construct temporal aggregates. However, the DRM neglects the potentially important role of thoughts. By adapting this method to include thoughts as well as feelings, we provide perhaps the most comprehensive measure of SWB to date. We show that some activities relatively low in pleasure (e.g., work and time with children) are nonetheless thought of as rewarding and therefore contribute to overall SWB. Such information may be important to policymakers wishing to promote behaviors that are conducive to a broader conception of SWB.

  18. Stator and Rotor Flux Based Deadbeat Direct Torque Control of Induction Machines

    NASA Technical Reports Server (NTRS)

    Kenny, Barbara H.; Lorenz, Robert D.

    2003-01-01

    A new, deadbeat type of direct torque control is proposed, analyzed and experimentally verified in this paper. The control is based on stator and rotor flux as state variables. This choice of state variables allows a graphical representation which is transparent and insightful. The graphical solution shows the effects of realistic considerations such as voltage and current limits. A position and speed sensorless implementation of the control, based on the self-sensing signal injection technique, is also demonstrated experimentally for low speed operation. The paper first develops the new, deadbeat DTC methodology and graphical representation of the new algorithm. It then evaluates feasibility via simulation and experimentally demonstrates performance of the new method with a laboratory prototype including the sensorless methods.

  19. Stator and Rotor Flux Based Deadbeat Direct Torque Control of Induction Machines. Revision 1

    NASA Technical Reports Server (NTRS)

    Kenny, Barbara H.; Lorenz, Robert D.

    2002-01-01

    A new, deadbeat type of direct torque control is proposed, analyzed, and experimentally verified in this paper. The control is based on stator and rotor flux as state variables. This choice of state variables allows a graphical representation which is transparent and insightful. The graphical solution shows the effects of realistic considerations such as voltage and current limits. A position and speed sensorless implementation of the control, based on the self-sensing signal injection technique, is also demonstrated experimentally for low speed operation. The paper first develops the new, deadbeat DTC methodology and graphical representation of the new algorithm. It then evaluates feasibility via simulation and experimentally demonstrates performance of the new method with a laboratory prototype including the sensorless methods.

  20. LASER APPLICATIONS AND OTHER TOPICS IN QUANTUM ELECTRONICS: Methods of computational physics in the problem of mathematical interpretation of laser investigations

    NASA Astrophysics Data System (ADS)

    Brodyn, M. S.; Starkov, V. N.

    2007-07-01

    It is shown that in laser experiments performed with an 'imperfect' setup, when instrumental distortions are considerable, sufficiently accurate results can be obtained by modern methods of computational physics. It is found for the first time that a new instrumental function, the 'cap' function, a close relative of the Gaussian curve, proves to be exactly what is needed in laser experiments. A new mathematical model of the measurement path and a carefully performed computational experiment show that a light beam transmitted through a mesoporous film actually has a narrower intensity distribution than the detected beam, and that the amplitude of the real intensity distribution is twice as large as that of the measured one.

  1. Considerations for ceramic inlays in posterior teeth: a review

    PubMed Central

    Hopp, Christa D; Land, Martin F

    2013-01-01

    This review of ceramic inlays in posterior teeth includes a review of the history of ceramic restorations, followed by common indications and contraindications for their use. A discussion on the potential for tooth wear is followed by a review of recommended preparation design considerations, fabrication methods, and material choices. Despite the improved materials available for fabrication of porcelain inlays, fracture remains a primary mode of inlay failure. Therefore, a brief discussion on strengthening methods for ceramics is included. The review concludes with a section on luting considerations, and offers the clinician specific recommendations for luting procedures. In conclusion, inlay success rates and longevity, as reported in the literature, are summarized. PMID:23750101

  2. Prediction of Process-Induced Distortions in L-Shaped Composite Profiles Using Path-Dependent Constitutive Law

    NASA Astrophysics Data System (ADS)

    Ding, Anxin; Li, Shuxin; Wang, Jihui; Ni, Aiqing; Sun, Liangliang; Chang, Lei

    2016-10-01

    In this paper, the corner spring-in angles of AS4/8552 L-shaped composite profiles with different thicknesses are predicted using path-dependent constitutive law with the consideration of material properties variation due to phase change during curing. The prediction accuracy mainly depends on the properties in the rubbery and glassy states obtained by homogenization method rather than experimental measurements. Both analytical and finite element (FE) homogenization methods are applied to predict the overall properties of AS4/8552 composite. The effect of fiber volume fraction on the properties is investigated for both rubbery and glassy states using both methods. And the predicted results are compared with experimental measurements for the glassy state. Good agreement is achieved between the predicted results and available experimental data, showing the reliability of the homogenization method. Furthermore, the corner spring-in angles of L-shaped composite profiles are measured experimentally and the reliability of path-dependent constitutive law is validated as well as the properties prediction by FE homogenization method.

  3. Analysis of light emitting diode array lighting system based on human vision: normal and abnormal uniformity condition.

    PubMed

    Qin, Zong; Ji, Chuangang; Wang, Kai; Liu, Sheng

    2012-10-08

    In this paper, the conditions for uniform lighting generated by a light emitting diode (LED) array were systematically studied. To take human vision into consideration, the contrast sensitivity function (CSF) was adopted as a novel criterion for uniform lighting instead of the conventionally used Sparrow's Criterion (SC). Through the CSF method, design parameters including system thickness, LED pitch, the LED's spatial radiation distribution and the viewing condition can be analytically combined. In a specific LED array lighting system (LALS) with a foursquare LED arrangement, different types of LEDs (Lambertian and Batwing type) and a given viewing condition, the optimum system thicknesses and LED pitches were calculated and compared with those obtained through the SC method. Results show that the CSF method can achieve more appropriate optimum parameters than the SC method. Additionally, an abnormal phenomenon in which uniformity varies non-monotonically with structural parameters in an LALS with non-Lambertian LEDs was found and analyzed. Based on the analysis, a design method for LALS that can bring about better practicability, lower cost and a more attractive appearance was summarized.

  4. The Method of Fundamental Solutions using the Vector Magnetic Dipoles for Calculation of the Magnetic Fields in the Diagnostic Problems Based on Full-Scale Modelling Experiment

    NASA Astrophysics Data System (ADS)

    Bakhvalov, Yu A.; Grechikhin, V. V.; Yufanova, A. L.

    2016-04-01

    The article describes the calculation of magnetic fields in diagnostic problems for technical systems based on full-scale modelling experiments. Using the meshless method of fundamental solutions and its variants in combination with grid methods (finite differences and finite elements) makes it possible to considerably reduce the dimensionality of the field calculation and hence the calculation time. Fictitious magnetic charges are used when implementing the method. In addition, much attention is given to the calculation accuracy; errors occur when the distance between the charges is chosen incorrectly. The authors propose using vector magnetic dipoles to improve the accuracy of the magnetic field calculation. Examples of this approach are given. The article presents the results of this research, which allow the authors to recommend the approach in the method of fundamental solutions for full-scale modelling tests of technical systems.
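
    To make the idea of the method of fundamental solutions concrete, the sketch below solves a 2D Laplace boundary-value problem in the unit disk with scalar point sources on a fictitious circle; the dipole-based magnetic formulation of the article is analogous but not reproduced here, and the geometry and boundary data are invented for the check.

    ```python
    import numpy as np

    def mfs_laplace_disk(g, n_src=40, n_col=80, R=2.0):
        """Method of fundamental solutions for Laplace's equation in the unit disk:
        point-source strengths on a fictitious circle of radius R are fitted to the
        Dirichlet data g(theta) by least squares; returns an evaluator u(points)."""
        ts = 2 * np.pi * np.arange(n_src) / n_src
        src = R * np.c_[np.cos(ts), np.sin(ts)]              # fictitious source points
        tc = 2 * np.pi * np.arange(n_col) / n_col
        col = np.c_[np.cos(tc), np.sin(tc)]                  # boundary collocation points
        G = -np.log(np.linalg.norm(col[:, None] - src[None], axis=2)) / (2 * np.pi)
        q, *_ = np.linalg.lstsq(G, g(tc), rcond=None)        # source strengths

        def u(points):
            A = -np.log(np.linalg.norm(points[:, None] - src[None], axis=2)) / (2 * np.pi)
            return A @ q
        return u

    # check against the harmonic function u = x^2 - y^2 (boundary data cos(2*theta))
    u = mfs_laplace_disk(lambda t: np.cos(2 * t))
    pts = np.array([[0.3, 0.1], [0.0, 0.5], [-0.4, -0.2]])
    print(u(pts), pts[:, 0] ** 2 - pts[:, 1] ** 2)
    ```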

  5. Non-target toxicity of synthetic insecticides on the biological performance and population growth of Bracon hebetor Say.

    PubMed

    Muslim, Mohammad; Ansari, M Shafiq; Hasan, Fazil

    2018-05-24

    Bracon hebetor Say (Hymenoptera: Braconidae) is an important biological control agent of various lepidopteran species and is used extensively in biological control programs worldwide. The present study evaluated the lethal and sublethal effects of insecticides on B. hebetor using demographic and population growth parameters. Doses of all tested insecticides were within the maximum range of their recommended field dosages, and adults were treated using the residual glass vial method; control adults were treated with distilled water. Among the tested insecticides, the survivorship of the various stages of B. hebetor was considerably prolonged by cyantraniliprole, followed by chlorantraniliprole, and was shortest in the chlorpyrifos and profenofos treated groups. Total immature development time was prolonged in the chlorpyrifos and profenofos treated groups. Population growth parameters such as the intrinsic rate of natural increase (rm), net reproductive rate (R0), finite rate of increase (λ), and mean generation time (Tc) were considerably reduced in B. hebetor groups treated with chlorpyrifos and profenofos. However, groups treated with chlorantraniliprole and cyantraniliprole showed little or no difference in population growth parameters compared with the untreated group. It was also observed that chlorpyrifos and profenofos modified the sex ratio, reducing female emergence. On the basis of these findings it can be concluded that all tested insecticides caused considerable ecotoxic effects on B. hebetor compared with the control. However, comparisons based on IOBC criteria showed that chlorantraniliprole and cyantraniliprole were less toxic than the other insecticides tested on this biological control agent.

  6. Gendered career considerations consolidate from the start of medical education

    PubMed Central

    Alers, Margret; Verdonk, Petra; Bor, Hans; Hamberg, Katarina; Lagro-Janssen, Antoine

    2014-01-01

    Objectives: To explore changes in specialty preferences and work-related topics during the theoretical phase of Dutch medical education and the role of gender. Methods: A cohort of medical students at Radboudumc, the Netherlands, was surveyed at the start (N=612, 69.1% female) and after three years (N=519, 69.2% female) on specialty preferences, full-time or part-time work, motivational factors, and work-life issues. Chi-square tests were performed to analyze gender differences, and logistic regression was used to explore the influence of gender on these considerations. Results: A total of 214 female and 78 male students completed both surveys. After three years, the male students remained highly interested in surgery, but the female students increasingly preferred gynecology. These initial preferences were predictive. Four out of five male students versus three out of five female students continued to show a full-time preference. Women increasingly preferred part-time work. After three years, the combination of work, care, and patient contact motivated female students more, whereas salary remained more important to male students. Female students indicated that their future careers would influence their family life; male students assumed having a family would only affect their partners’ careers. Conclusions: Against an international background of the feminization of medicine, our study shows that career considerations are reinforced early in medical studies. Women prefer to work fewer hours and anticipate care tasks more often. Students’ preferences reflect Dutch cultural norms about working men and women. Therefore, guidance in choice-making much earlier in medical education can create opportunities. PMID:25341228

  7. Evaluation of optical connectors for consideration in military avionics

    NASA Astrophysics Data System (ADS)

    Uhlhorn, Brian L.; Drexler, Gregory M.; Nelson, Ryan L.; Stevens, Rick C.

    2006-08-01

    This paper describes the method used to evaluate single-mode optical connectors under consideration for military avionics platforms. This testing is described in terms of the appropriate fiber optics test procedures (FOTPs) from the TIA/EIA-455 series.

  8. Laser-optical methods for earlier diagnostics of plant and seed diseases in various habitant media taking into consideration anthropogenic and biological pollution

    NASA Astrophysics Data System (ADS)

    Lisker, Joseph S.; Dmitriev, Andrey P.

    1999-12-01

    Using computer-based laser-optical photometry, we investigated the resistance of cereals to various diseases, taking into consideration the resistance of tomato seeds to interaction with phytopathogens and the phytotoxicity of microscopic fungi on wheat seedlings. Original results of the investigation of the optical-physiological characteristics of plants and seeds are presented.

  9. Binding free energy analysis of protein-protein docking model structures by evERdock.

    PubMed

    Takemura, Kazuhiro; Matubayasi, Nobuyuki; Kitao, Akio

    2018-03-14

    To aid the evaluation of protein-protein complex model structures generated by protein docking prediction (decoys), we previously developed a method to calculate the binding free energies for complexes. The method combines a short (2 ns) all-atom molecular dynamics simulation with explicit solvent and solution theory in the energy representation (ER). We showed that this method successfully selected structures similar to the native complex structure (near-native decoys) as the lowest binding free energy structures. In our current work, we applied this method (evERdock) to 100 or 300 model structures of four protein-protein complexes. The crystal structures and the near-native decoys showed the lowest binding free energy of all the examined structures, indicating that evERdock can successfully evaluate decoys. Several decoys that show low interface root-mean-square distance but relatively high binding free energy were also identified. Analysis of the fraction of native contacts, hydrogen bonds, and salt bridges at the protein-protein interface indicated that these decoys were insufficiently optimized at the interface. After optimizing the interactions around the interface by including interfacial water molecules, the binding free energies of these decoys were improved. We also investigated the effect of solute entropy on binding free energy and found that consideration of the entropy term does not necessarily improve the evaluation of decoys when normal mode analysis is used for the entropy calculation.

  10. Binding free energy analysis of protein-protein docking model structures by evERdock

    NASA Astrophysics Data System (ADS)

    Takemura, Kazuhiro; Matubayasi, Nobuyuki; Kitao, Akio

    2018-03-01

    To aid the evaluation of protein-protein complex model structures generated by protein docking prediction (decoys), we previously developed a method to calculate the binding free energies for complexes. The method combines a short (2 ns) all-atom molecular dynamics simulation with explicit solvent and solution theory in the energy representation (ER). We showed that this method successfully selected structures similar to the native complex structure (near-native decoys) as the lowest binding free energy structures. In our current work, we applied this method (evERdock) to 100 or 300 model structures of four protein-protein complexes. The crystal structures and the near-native decoys showed the lowest binding free energy of all the examined structures, indicating that evERdock can successfully evaluate decoys. Several decoys that show low interface root-mean-square distance but relatively high binding free energy were also identified. Analysis of the fraction of native contacts, hydrogen bonds, and salt bridges at the protein-protein interface indicated that these decoys were insufficiently optimized at the interface. After optimizing the interactions around the interface by including interfacial water molecules, the binding free energies of these decoys were improved. We also investigated the effect of solute entropy on binding free energy and found that consideration of the entropy term does not necessarily improve the evaluation of decoys when normal mode analysis is used for the entropy calculation.

  11. Erythemal UV at Davos (Switzerland), 1926-2003, estimated using total ozone, sunshine duration, and snow depth

    NASA Astrophysics Data System (ADS)

    Lindfors, Anders; Vuilleumier, Laurent

    2005-01-01

    A method previously developed for reconstructing daily erythemal UV doses at Sodankylä, northern Finland, was adjusted to the local conditions at Davos, Switzerland, and used for estimating the erythemal UV doses there over the period 1926-2003. The method uses total ozone, sunshine duration, and snow depth as input, and is based on the empirical relationship between relative sunshine duration and relative UV doses. In order to examine how the method behaves in different environments, the relationships found for Davos and Sodankylä were compared. This revealed that the surface albedo and the cloud climate have a comparable influence on the relationship found. Although the method is fairly simple, it accounts for the most important factors affecting the amount of UV radiation reaching the Earth's surface. A comparison between estimated UV doses and the corresponding observations with a broadband biometer at Davos demonstrated the good performance of the method. The correlation coefficient for daily values varies between 0.95 and 0.98 depending on time of year, and the corresponding root mean square error is typically of the order of 20%. The monthly mean values show considerably less scatter around the regression line with a root mean square error of 4%. The time series of estimated UV shows that the UV level at Davos has varied considerably throughout the period of this study, with high values in the middle of the 1940s, in the early 1960s, and in the 1990s. Variations in the estimated UV doses prior to 1980, e.g., a steady decrease from the early 1960s to the late 1970s, were found to be caused primarily by changes in sunshine duration. Since 1980, on the other hand, there has been a distinct increase in the UV level caused mainly by the diminution of total ozone. This increase is most clearly seen during winter and spring, while the decrease from the early 1960s to the late 1970s is most pronounced during summer.

  12. Technical Note: Approximate Bayesian parameterization of a complex tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2013-08-01

    Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
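
    The idea of a simulation-based (parametric) likelihood approximation inside a conventional MCMC can be illustrated with a toy simulator: replicate simulations at a proposed parameter value yield summary statistics, a Gaussian is fitted to them, and the observed summaries are evaluated under that Gaussian. The sketch below assumes a hypothetical lognormal simulator and two summaries; it is not the FORMIND model or the study's actual summary statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=200):
    """Toy stochastic simulator standing in for a process-based model:
    draws n 'observations' whose distribution depends on theta."""
    return rng.lognormal(mean=theta, sigma=0.5, size=n)

def summaries(x):
    """Summary statistics used to compare simulations with the data."""
    return np.array([np.log(x).mean(), np.log(x).std()])

def synthetic_loglik(theta, s_obs, n_rep=100):
    """Parametric likelihood approximation: fit a Gaussian to the summary
    statistics of replicate simulations and evaluate the observed summaries."""
    S = np.array([summaries(simulate(theta)) for _ in range(n_rep)])
    mu, cov = S.mean(axis=0), np.cov(S.T) + 1e-9 * np.eye(2)
    diff = s_obs - mu
    return -0.5 * (diff @ np.linalg.solve(cov, diff) + np.log(np.linalg.det(cov)))

# "Virtual field data" generated with a known parameter, to be retrieved by MCMC.
s_obs = summaries(simulate(theta=1.0))
theta, ll, chain = 0.0, synthetic_loglik(0.0, s_obs), []
for _ in range(2000):                          # plain Metropolis over theta
    prop = theta + 0.1 * rng.normal()
    ll_prop = synthetic_loglik(prop, s_obs)
    if np.log(rng.uniform()) < ll_prop - ll:   # flat prior assumed for brevity
        theta, ll = prop, ll_prop
    chain.append(theta)
print("posterior mean of theta:", np.mean(chain[1000:]))
```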

  13. Pre-analytical method for NMR-based grape metabolic fingerprinting and chemometrics.

    PubMed

    Ali, Kashif; Maltese, Federica; Fortes, Ana Margarida; Pais, Maria Salomé; Verpoorte, Robert; Choi, Young Hae

    2011-10-10

    Although metabolomics aims at profiling all the metabolites in an organism, data quality is strongly dependent on the pre-analytical methods employed. In order to evaluate current methods, different pre-analytical methods were compared and used for the metabolic profiling of grapevine as a model plant. Five grape cultivars from Portugal were analyzed in this study in combination with chemometrics. A common extraction method with deuterated water and methanol was found effective for amino acids, organic acids, and sugars. For secondary metabolites like phenolics, solid phase extraction with C-18 cartridges showed good results. Principal component analysis, in combination with NMR spectroscopy, was applied and showed clear distinctions among the cultivars. Primary metabolites such as choline, sucrose, and leucine were found to discriminate 'Alvarinho', while elevated levels of alanine, valine, and acetate were found in 'Arinto' (white varieties). Among the red cultivars, higher signals for citrate and GABA in 'Touriga Nacional', succinate and fumarate in 'Aragonês', and malate, ascorbate, fructose and glucose in 'Trincadeira' were observed. Based on the phenolic profile, 'Arinto' was found to have higher levels of phenolics than 'Alvarinho'. 'Trincadeira' showed the lowest phenolics content, while higher levels of flavonoids and phenylpropanoids were found in 'Aragonês' and 'Touriga Nacional', respectively. It is shown that the metabolite composition of the extract is strongly affected by the extraction procedure, and this consideration has to be taken into account in metabolomics studies. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In the general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean estimator suffers from a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used in which multiple MCMC runs are performed with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a case of groundwater modeling with consideration of four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
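
    A minimal sketch of the thermodynamic (power-posterior) idea on a conjugate toy problem is given below: tempered Metropolis chains are run at several heating coefficients, the mean log-likelihood at each coefficient is recorded, and the marginal likelihood follows by integrating over the coefficient. The toy model, proposal scale, and temperature schedule are illustrative assumptions, not the groundwater models of the abstract.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(0)
y = rng.normal(0.5, 1.0, size=5)                # toy data: y_i ~ N(mu, 1)

def log_like(mu):
    return norm.logpdf(y, loc=mu, scale=1.0).sum()

def log_prior(mu):                              # prior: mu ~ N(0, 1)
    return norm.logpdf(mu, loc=0.0, scale=1.0)

def mean_loglike(beta, n_iter=20000, step=0.5):
    """Metropolis sampling of the power posterior p_beta(mu) ~ L(mu)**beta * p(mu);
    returns the mean log-likelihood at this heating coefficient."""
    mu = 0.0
    lp = beta * log_like(mu) + log_prior(mu)
    logs = []
    for _ in range(n_iter):
        prop = mu + step * rng.normal()
        lp_prop = beta * log_like(prop) + log_prior(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            mu, lp = prop, lp_prop
        logs.append(log_like(mu))
    return np.mean(logs[n_iter // 2:])          # discard burn-in

# Thermodynamic integration: log m(y) = integral over beta of E_beta[log L]
betas = np.linspace(0.0, 1.0, 11) ** 3          # schedule denser near beta = 0
means = np.array([mean_loglike(b) for b in betas])
log_ml_ti = np.sum(np.diff(betas) * (means[:-1] + means[1:]) / 2)  # trapezoid rule

# Exact marginal likelihood of this conjugate toy model, for comparison
n = len(y)
log_ml_exact = multivariate_normal.logpdf(y, mean=np.zeros(n),
                                          cov=np.eye(n) + np.ones((n, n)))
print(f"thermodynamic estimate: {log_ml_ti:.3f}, exact: {log_ml_exact:.3f}")
```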

  15. Special Consideration in Post-Secondary Institutions: Trends at a Canadian University

    ERIC Educational Resources Information Center

    Zimmermann, Joelle; Kamenetsky, Stuart B.; Pongracic, Syb

    2015-01-01

    This study examined trends in the practice of granting special consideration for missed tests and late papers in colleges and universities. We analyzed a database of 4,183 special consideration requests at a large Canadian university between 1998 and 2008. Results show a growing rate of requests per enrolment between 2001 and 2007. Although…

  16. Design considerations for multielectron double quantum dot qubits in silicon

    NASA Astrophysics Data System (ADS)

    Nielsen, Erik; Barnes, Edwin; Kestner, Jason

    2014-03-01

    Solid state double quantum dot (DQD) spin qubits can be created by confining two electrons to a DQD potential. We present results showing the viability and potential advantages of creating a DQD spin qubit with greater than two electrons, and which suggest that silicon devices which could realize these advantages are experimentally possible. Our analysis of a six-electron DQD uses full configuration interaction methods and shows an isolated qubit space in regimes which 3D quantum device simulations indicate are accessible experimentally. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  17. Scaling effects in direct shear tests

    USGS Publications Warehouse

    Orlando, A.D.; Hanes, D.M.; Shen, H.H.

    2009-01-01

    Laboratory experiments of the direct shear test were performed on spherical particles of different materials and diameters. Results for bulk friction vs. non-dimensional shear displacement are presented as a function of the non-dimensional particle diameter. Simulations of the direct shear test were performed using the Discrete Element Method (DEM). The simulation results show considerable differences from the physical experiments. Particle-level material properties, such as the coefficients of static friction, restitution and rolling friction, need to be known a priori in order to guarantee that the simulation results are an accurate representation of the physical phenomenon. Furthermore, the laboratory results show a clear size dependency, with smaller particles exhibiting a higher bulk friction than larger ones. © 2009 American Institute of Physics.

  18. Optimal Protocols and Optimal Transport in Stochastic Thermodynamics

    NASA Astrophysics Data System (ADS)

    Aurell, Erik; Mejía-Monasterio, Carlos; Muratore-Ginanneschi, Paolo

    2011-06-01

    Thermodynamics of small systems has become an important field of statistical physics. Such systems are driven out of equilibrium by a control, and the question is naturally posed how such a control can be optimized. We show that optimization problems in small system thermodynamics are solved by (deterministic) optimal transport, for which very efficient numerical methods have been developed, and of which there are applications in cosmology, fluid mechanics, logistics, and many other fields. We show, in particular, that minimizing expected heat released or work done during a nonequilibrium transition in finite time is solved by the Burgers equation and mass transport by the Burgers velocity field. Our contribution hence considerably extends the range of solvable optimization problems in small system thermodynamics.

  19. Optimal protocols and optimal transport in stochastic thermodynamics.

    PubMed

    Aurell, Erik; Mejía-Monasterio, Carlos; Muratore-Ginanneschi, Paolo

    2011-06-24

    Thermodynamics of small systems has become an important field of statistical physics. Such systems are driven out of equilibrium by a control, and the question is naturally posed how such a control can be optimized. We show that optimization problems in small system thermodynamics are solved by (deterministic) optimal transport, for which very efficient numerical methods have been developed, and of which there are applications in cosmology, fluid mechanics, logistics, and many other fields. We show, in particular, that minimizing expected heat released or work done during a nonequilibrium transition in finite time is solved by the Burgers equation and mass transport by the Burgers velocity field. Our contribution hence considerably extends the range of solvable optimization problems in small system thermodynamics.

  20. Assessment of cardiac fibrosis: a morphometric method comparison for collagen quantification.

    PubMed

    Schipke, Julia; Brandenberger, Christina; Rajces, Alexandra; Manninger, Martin; Alogna, Alessio; Post, Heiner; Mühlfeld, Christian

    2017-04-01

    Fibrotic remodeling of the heart is a frequent condition linked to various diseases and cardiac dysfunction. Collagen quantification is an important objective in cardiac fibrosis research; however, a variety of different histological methods are currently used that may differ in accuracy. Here, frequently applied collagen quantification techniques were compared. A porcine model of early stage heart failure with preserved ejection fraction was used as an example. Semiautomated threshold analyses were imprecise, mainly due to inclusion of noncollagen structures or failure to detect certain collagen deposits. In contrast, collagen assessment by automated image analysis and light microscopy (LM)-stereology was more sensitive. Depending on the quantification method, the amount of estimated collagen varied and influenced intergroup comparisons. PicroSirius Red, Masson's trichrome, and Azan staining protocols yielded similar results, whereas the measured collagen area increased with increasing section thickness. Whereas none of the LM-based methods showed significant differences between the groups, electron microscopy (EM)-stereology revealed a significant collagen increase between cardiomyocytes in the experimental group, but not at other localizations. In conclusion, in contrast to the staining protocol, section thickness and the quantification method being used directly influence the estimated collagen content and thus, possibly, intergroup comparisons. EM in combination with stereology is a precise and sensitive method for collagen quantification if certain prerequisites are considered. For subtle fibrotic alterations, consideration of collagen localization may be necessary. Among LM methods, LM-stereology and automated image analysis are appropriate to quantify fibrotic changes, the latter depending on careful control of algorithm and comparable section staining. NEW & NOTEWORTHY Direct comparison of frequently applied histological fibrosis assessment techniques revealed a distinct relation of measured collagen and utilized quantification method as well as section thickness. Besides electron microscopy-stereology, which was precise and sensitive, light microscopy-stereology and automated image analysis proved to be appropriate for collagen quantification. Moreover, consideration of collagen localization might be important in revealing minor fibrotic changes. Copyright © 2017 the American Physiological Society.
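
    For orientation, the sketch below shows the simplest of the compared approaches, a threshold-based area-fraction estimate on a synthetic two-channel image resembling a picrosirius-red stain. The channels, thresholds, and tissue mask are assumptions for illustration; the study's semiautomated, automated, and stereological pipelines are not reproduced.

```python
import numpy as np

def collagen_area_fraction(red, green, red_thresh=0.5, green_thresh=0.4):
    """Threshold-based collagen quantification for a picrosirius-red-like stain:
    pixels count as collagen when the red channel is strong and the green
    (muscle/cytoplasm) channel is weak. Returns collagen area / tissue area."""
    collagen = (red > red_thresh) & (green < green_thresh)
    tissue = (red > 0.1) | (green > 0.1)        # crude mask excluding background
    return collagen.sum() / max(tissue.sum(), 1)

# Synthetic section: mostly muscle with one fibrotic streak.
rng = np.random.default_rng(0)
red = rng.uniform(0.2, 0.4, size=(512, 512))
green = rng.uniform(0.5, 0.8, size=(512, 512))
red[200:230, :] = 0.8                           # collagen-rich band
green[200:230, :] = 0.2
print(f"collagen area fraction: {collagen_area_fraction(red, green):.3f}")
```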

  1. A Systematic Comparison of Linear Regression-Based Statistical Methods to Assess Exposome-Health Associations.

    PubMed

    Agier, Lydiane; Portengen, Lützen; Chadeau-Hyam, Marc; Basagaña, Xavier; Giorgis-Allemand, Lise; Siroux, Valérie; Robinson, Oliver; Vlaanderen, Jelle; González, Juan R; Nieuwenhuijsen, Mark J; Vineis, Paolo; Vrijheid, Martine; Slama, Rémy; Vermeulen, Roel

    2016-12-01

    The exposome constitutes a promising framework to improve understanding of the effects of environmental exposures on health by explicitly considering multiple testing and avoiding selective reporting. However, exposome studies are challenged by the simultaneous consideration of many correlated exposures. We compared the performances of linear regression-based statistical methods in assessing exposome-health associations. In a simulation study, we generated 237 exposure covariates with a realistic correlation structure and with a health outcome linearly related to 0 to 25 of these covariates. Statistical methods were compared primarily in terms of false discovery proportion (FDP) and sensitivity. On average over all simulation settings, the elastic net and sparse partial least-squares regression showed a sensitivity of 76% and an FDP of 44%; Graphical Unit Evolutionary Stochastic Search (GUESS) and the deletion/substitution/addition (DSA) algorithm revealed a sensitivity of 81% and an FDP of 34%. The environment-wide association study (EWAS) underperformed these methods in terms of FDP (average FDP, 86%) despite a higher sensitivity. Performances decreased considerably when assuming an exposome exposure matrix with high levels of correlation between covariates. Correlation between exposures is a challenge for exposome research, and the statistical methods investigated in this study were limited in their ability to efficiently differentiate true predictors from correlated covariates in a realistic exposome context. Although GUESS and DSA provided a marginally better balance between sensitivity and FDP, they did not outperform the other multivariate methods across all scenarios and properties examined, and computational complexity and flexibility should also be considered when choosing between these methods. Citation: Agier L, Portengen L, Chadeau-Hyam M, Basagaña X, Giorgis-Allemand L, Siroux V, Robinson O, Vlaanderen J, González JR, Nieuwenhuijsen MJ, Vineis P, Vrijheid M, Slama R, Vermeulen R. 2016. A systematic comparison of linear regression-based statistical methods to assess exposome-health associations. Environ Health Perspect 124:1848-1856; http://dx.doi.org/10.1289/EHP172.
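
    As a hedged illustration of one of the compared approaches, the sketch below fits a cross-validated elastic net to a small simulated design with correlated exposure blocks and reports sensitivity and the false discovery proportion (FDP). The correlation structure, effect sizes, and dimensions are illustrative and much smaller than the 237-exposure simulation of the study.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(1)
n, p, k = 500, 60, 5                       # subjects, exposures, true predictors

# Correlated "exposome" design: blocks of 10 exposures sharing a common factor.
base = rng.normal(size=(n, p // 10))
X = np.repeat(base, 10, axis=1) * 0.7 + rng.normal(size=(n, p)) * 0.7
beta = np.zeros(p)
beta[:k] = 0.5                             # first k exposures truly affect outcome
y = X @ beta + rng.normal(size=n)

model = ElasticNetCV(l1_ratio=[0.5, 0.9, 1.0], cv=5).fit(X, y)
selected = np.flatnonzero(model.coef_ != 0)

true_pos = np.intersect1d(selected, np.arange(k)).size
sensitivity = true_pos / k
fdp = (selected.size - true_pos) / max(selected.size, 1)
print(f"selected={selected.size}, sensitivity={sensitivity:.2f}, FDP={fdp:.2f}")
```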

  2. A method for smoothing segmented lung boundary in chest CT images

    NASA Astrophysics Data System (ADS)

    Yim, Yeny; Hong, Helen

    2007-03-01

    To segment low-density lung regions in chest CT images, most methods use differences in the gray-level values of pixels. However, radiodense pulmonary vessels and pleural nodules that contact the surrounding anatomy are often excluded from the segmentation result. To smooth the lung boundary segmented by gray-level processing in chest CT images, we propose a new method using a scan line search. Our method consists of three main steps. First, the lung boundary is extracted by our automatic segmentation method. Second, the segmented lung contour is smoothed in each axial CT slice; we propose a scan line search to track the points on the lung contour and efficiently find regions of rapidly changing curvature. Finally, to provide a consistent appearance between lung contours in adjacent axial slices, 2D closing in the coronal plane is applied within a pre-defined subvolume. Our method was evaluated in terms of visual inspection, accuracy, and processing time. The results show that the smoothness of the lung contour was considerably increased by compensating for pulmonary vessels and pleural nodules.

  3. On the multi-reference nature of plutonium oxides: PuO22+, PuO2, PuO3 and PuO2(OH)2.

    PubMed

    Boguslawski, Katharina; Réal, Florent; Tecmer, Paweł; Duperrouzel, Corinne; Gomes, André Severo Pereira; Legeza, Örs; Ayers, Paul W; Vallet, Valérie

    2017-02-08

    Actinide-containing complexes present formidable challenges for electronic structure methods due to the large number of degenerate or quasi-degenerate electronic states arising from partially occupied 5f and 6d shells. Conventional multi-reference methods can treat active spaces that are often at the upper limit of what is required for a proper treatment of species with complex electronic structures, leaving no room for verifying their suitability. In this work we address the issue of properly defining the active spaces in such calculations, and introduce a protocol to determine optimal active spaces based on the use of the Density Matrix Renormalization Group algorithm and concepts of quantum information theory. We apply the protocol to elucidate the electronic structure and bonding mechanism of volatile plutonium oxides (PuO 3 and PuO 2 (OH) 2 ), species associated with nuclear safety issues for which little is known about the electronic structure and energetics. We show how, within a scalar relativistic framework, orbital-pair correlations can be used to guide the definition of optimal active spaces which provide an accurate description of static/non-dynamic electron correlation, as well as to analyse the chemical bonding beyond a simple orbital model. From this bonding analysis we are able to show that the addition of oxo- or hydroxo-groups to the plutonium dioxide species considerably changes the π-bonding mechanism with respect to the bare triatomics, resulting in bent structures with a considerable multi-reference character.

  4. Assessing Seasonal and Inter-Annual Variations of Lake Surface Areas in Mongolia during 2000-2011 Using Minimum Composite MODIS NDVI

    PubMed Central

    Kang, Sinkyu; Hong, Suk Young

    2016-01-01

    A minimum composite method was applied to produce a 15-day interval normalized difference vegetation index (NDVI) dataset from Moderate Resolution Imaging Spectroradiometer (MODIS) daily 250 m reflectance in the red and near-infrared bands. This dataset was applied to determine lake surface areas in Mongolia. A total of 73 lakes greater than 6.25 km2 in area were selected, and 28 of these lakes were used to evaluate detection errors. The minimum composite NDVI showed a better detection performance on lake water pixels than did the official MODIS 16-day 250 m NDVI based on a maximum composite method. The overall lake area detection performance based on the 15-day minimum composite NDVI showed -2.5% error relative to the Landsat-derived lake area for the 28 evaluated lakes. The errors increased with increases in the perimeter-to-area ratio but decreased with lake size over 10 km2. The lake area decreased by -9.3% at an annual rate of -53.7 km2 yr-1 during 2000 to 2011 for the 73 lakes. However, considerable spatial variations, such as slight-to-moderate lake area reductions in semi-arid regions and rapid lake area reductions in arid regions, were also detected. This study demonstrated the applicability of MODIS 250 m reflectance data for biweekly monitoring of lake area change and diagnosed considerable lake area reduction and its spatial variability in arid and semi-arid regions of Mongolia. Future studies are required to explain the reasons for these lake area changes and their spatial variability. PMID:27007233
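
    The compositing step itself is simple to express: compute per-pixel NDVI from the red and near-infrared bands and keep the minimum value within each 15-day window, so that persistently low-NDVI water pixels are retained rather than replaced by the period maximum. The sketch below uses synthetic reflectance arrays and an assumed negative-NDVI water rule; it is a schematic of the compositing idea, not the full lake-delineation workflow.

```python
import numpy as np

def ndvi(red, nir):
    """Per-pixel NDVI from red and near-infrared reflectance."""
    return (nir - red) / (nir + red + 1e-9)

def minimum_composite(red_stack, nir_stack, period=15):
    """Minimum-composite NDVI: for each pixel, keep the minimum NDVI within
    each `period`-day window. Stacks are (days, rows, cols) reflectance arrays."""
    daily = ndvi(red_stack, nir_stack)
    n_windows = daily.shape[0] // period
    windows = daily[:n_windows * period].reshape(n_windows, period, *daily.shape[1:])
    return np.nanmin(windows, axis=1)           # NaNs would mark cloudy observations

# Synthetic example: 30 days of 100 x 100 reflectance with a lake in the centre.
rng = np.random.default_rng(0)
red = rng.uniform(0.05, 0.15, size=(30, 100, 100))
nir = rng.uniform(0.25, 0.45, size=(30, 100, 100))
nir[:, 40:60, 40:60] = 0.02                     # water absorbs strongly in the NIR
composite = minimum_composite(red, nir)         # shape (2, 100, 100)
lake_mask = composite[0] < 0.0                  # negative NDVI taken as open water
print("detected lake pixels:", int(lake_mask.sum()))
```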

  5. Assessing Seasonal and Inter-Annual Variations of Lake Surface Areas in Mongolia during 2000-2011 Using Minimum Composite MODIS NDVI.

    PubMed

    Kang, Sinkyu; Hong, Suk Young

    2016-01-01

    A minimum composite method was applied to produce a 15-day interval normalized difference vegetation index (NDVI) dataset from Moderate Resolution Imaging Spectroradiometer (MODIS) daily 250 m reflectance in the red and near-infrared bands. This dataset was applied to determine lake surface areas in Mongolia. A total of 73 lakes greater than 6.25 km(2) in area were selected, and 28 of these lakes were used to evaluate detection errors. The minimum composite NDVI showed a better detection performance on lake water pixels than did the official MODIS 16-day 250 m NDVI based on a maximum composite method. The overall lake area detection performance based on the 15-day minimum composite NDVI showed -2.5% error relative to the Landsat-derived lake area for the 28 evaluated lakes. The errors increased with increases in the perimeter-to-area ratio but decreased with lake size over 10 km(2). The lake area decreased by -9.3% at an annual rate of -53.7 km(2) yr(-1) during 2000 to 2011 for the 73 lakes. However, considerable spatial variations, such as slight-to-moderate lake area reductions in semi-arid regions and rapid lake area reductions in arid regions, were also detected. This study demonstrated the applicability of MODIS 250 m reflectance data for biweekly monitoring of lake area change and diagnosed considerable lake area reduction and its spatial variability in arid and semi-arid regions of Mongolia. Future studies are required to explain the reasons for these lake area changes and their spatial variability.

  6. Introduction of the trapezoidal thermodynamic technique method for measuring and mapping the efficiency of waste-to-energy plants: A potential replacement to the R1 formula.

    PubMed

    Vakalis, Stergios; Moustakas, Konstantinos; Loizidou, Maria

    2018-06-01

    Waste-to-energy plants have the peculiarity of being considered both as energy production and as waste destruction facilities, and this distinction is important for legislative reasons. The efficiency of waste-to-energy plants must be assessed objectively and consistently, independently of whether the focus is the production of energy, the destruction of waste, or the recovery/upgrading of materials. With the introduction of polygeneration technologies, like gasification, the production of energy and the recovery/upgrading of materials are interconnected. The existing methodology for assessing the efficiency of waste-to-energy plants is the R1 formula, which does not take into consideration the full spectrum of operations that take place in waste-to-energy plants. This study introduces a novel methodology for assessing the efficiency of waste-to-energy plants, defined as the 3T method, which stands for 'trapezoidal thermodynamic technique'. The 3T method is an integrated approach for assessing the efficiency of waste-to-energy plants, which takes into consideration not only the production of energy but also the quality of the products. The value returned by the 3T method can be placed in a ternary diagram, from which a global efficiency map of waste-to-energy plants can be produced. Application of the 3T method showed that waste-to-energy plants with high combined heat and power efficiency and high recovery of materials are favoured; these outcomes are in accordance with the cascade principle and with the high cogeneration standards set by the EU Energy Efficiency Directive.

  7. Production of superparamagnetic nanobiocatalysts for green chemistry applications.

    PubMed

    Gasser, Christoph A; Ammann, Erik M; Schäffer, Andreas; Shahgaldian, Patrick; Corvini, Philippe F-X

    2016-08-01

    Immobilization of enzymes on solid supports is a convenient method for increasing enzymatic stability and enabling enzyme reuse. In the present work, a sorption-assisted surface conjugation method was developed and optimized to immobilize enzymes on the surface of superparamagnetic nanoparticles. An oxidative enzyme, laccase from Trametes versicolor, was used as the model enzyme. The immobilization method consists of the production of superparamagnetic nanoparticles by co-precipitation of FeCl2 and FeCl3. Subsequently, the particle surface is modified with an organosilane containing an amino group. Next, the enzymes are adsorbed on the particle surface before a cross-linking agent, glutaraldehyde, is added, which links the amino groups on the particle surface with the amino groups of the enzymes and also leads to internal cross-linking of the enzymes. The method was optimized using response surface methodology with regard to the optimal enzyme and glutaraldehyde amounts, pH, and reaction times. The results allowed the formulation of biocatalysts with high specific enzymatic activity and improved stability. The biocatalysts showed considerably higher stability compared with the dissolved enzymes over a pH range from 3 to 9 and in the presence of several chemical denaturants. To demonstrate the reusability of the immobilized enzymes, they were applied as catalysts for the production of a phenoxazinone dye. Virtually 100% of the precursor was transformed to the dye in each of the ten reaction cycles conducted, while on average 84.5% of the enzymatic activity present at the beginning of a reaction cycle was retained after each cycle, highlighting the considerable potential of superparamagnetic biocatalysts for application in industrial processes.

  8. Robust design optimization method for centrifugal impellers under surface roughness uncertainties due to blade fouling

    NASA Astrophysics Data System (ADS)

    Ju, Yaping; Zhang, Chuhua

    2016-03-01

    Blade fouling has been proved to be a great threat to compressor performance in the operating stage. Current research on the fouling-induced performance degradation of centrifugal compressors is based mainly on simplified roughness models that do not take into account realistic factors such as the spatial non-uniformity and randomness of the fouling-induced surface roughness. Moreover, little attention has been paid to the robust design optimization of centrifugal compressor impellers with consideration of blade fouling. In this paper, a multi-objective robust design optimization method is developed for centrifugal impellers under surface roughness uncertainties due to blade fouling. A three-dimensional surface roughness map is proposed to describe the non-uniformity and randomness of realistic fouling accumulations on blades. To lower the computational cost of robust design optimization, a support vector regression (SVR) metamodel is combined with the Monte Carlo simulation (MCS) method to conduct the uncertainty analysis of fouled impeller performance. The results show that the critical fouled region associated with impeller performance degradation lies at the leading edge of the blade tip. The SVR metamodel proves to be an efficient and accurate means of detecting impeller performance variations caused by roughness uncertainties. After design optimization, the robust optimal design is found to be more efficient and less sensitive to fouling uncertainties while maintaining good impeller performance in the clean condition. This research proposes a systematic design optimization method for centrifugal compressors with consideration of blade fouling, providing practical guidance for the design of advanced centrifugal compressors.
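
    The surrogate-plus-sampling pattern described here can be sketched independently of any CFD solver: a support vector regression model is trained on a handful of expensive evaluations and then used to propagate roughness uncertainty by Monte Carlo simulation. In the sketch below the "expensive" performance function is a simple stand-in and the roughness distribution is assumed, so the numbers are purely illustrative.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(42)

def expensive_performance(roughness):
    """Stand-in for a costly CFD evaluation of impeller efficiency as a
    function of an equivalent surface-roughness parameter (illustrative only)."""
    return 0.88 - 0.4 * roughness - 2.0 * roughness ** 2

# Step 1: small design of experiments on the expensive model, then fit the SVR surrogate.
x_train = np.linspace(0.0, 0.2, 15).reshape(-1, 1)
y_train = expensive_performance(x_train.ravel())
surrogate = SVR(kernel="rbf", C=100.0, epsilon=1e-4).fit(x_train, y_train)

# Step 2: Monte Carlo simulation of fouling-induced roughness through the cheap surrogate.
roughness_samples = rng.normal(0.08, 0.03, size=100_000).clip(0.0, 0.2).reshape(-1, 1)
eta = surrogate.predict(roughness_samples)
print(f"mean efficiency = {eta.mean():.4f}, std = {eta.std():.4f}")
```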

  9. Isolation and identification of Salmonella from curry samples and its sensitivity to commercial antibiotics and aqueous extracts of Camelia sinensis (L.) and Trachyspermum ammi (L.)

    PubMed Central

    Gunasegaran, Thanes; Rathinam, Xavier; Kasi, Marimuthu; Sathasivam, Kathiresan; Sreenivasan, Sasidharan; Subramaniam, Sreeramanan

    2011-01-01

    Objective: To isolate Salmonella from curry samples and to evaluate the drug sensitivity of the food-borne Salmonella and its susceptibility to specific plant extracts. Methods: Salmonella was isolated from the curry samples by standard microbiological methods and was confirmed by biochemical tests. The antibiotic susceptibility test was conducted by the disc diffusion method using commercially available antibiotics such as ampicillin, tetracycline, chloramphenicol, kanamycin, and penicillin. In addition, the susceptibility of the food-borne Salmonella was also evaluated against aqueous extracts of Camelia sinensis (L.) Theaceae (tea leaves) and Trachyspermum ammi (L.) Apiaceae (ajwain or omum seeds). Results: Out of fifty curry samples, only seven were identified as contaminated with Salmonella. The Salmonella isolates showed a significant drug resistance pattern except for kanamycin. The plant extracts showed considerable antibacterial activity against the isolates, indicating the presence of antimicrobial principles which can be exploited after complete pharmacological investigation. Conclusions: The present study demonstrates the occurrence of Salmonella in curry samples and shows significant drug resistance against most of the commercially available antibiotics, except kanamycin. The antimicrobial effect of the plant extracts against the food-borne Salmonella suggests that dietary inclusion of medicinal herbs would be one strategy to manage food-borne pathogens. PMID:23569772

  10. Recommended Practice for Use of Emissive Probes in Electric Propulsion Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheehan, J. P.; Raitses, Yevgeny; Hershkowitz, Noah

    Here, this article provides recommended methods for building, operating, and taking plasma potential measurements from electron-emitting probes in electric propulsion devices, including Hall thrusters, gridded ion engines, and others. The two major techniques, the floating point technique and the inflection point technique, are described in detail as well as calibration and error-reduction methods. The major heating methods are described as well as the various considerations for emissive probe construction. Lastly, special considerations for electric propulsion plasmas are addressed, including high-energy densities, ion flows, magnetic fields, and potential fluctuations. Recommendations for probe design and operation are provided.

  11. Recommended Practice for Use of Emissive Probes in Electric Propulsion Testing

    DOE PAGES

    Sheehan, J. P.; Raitses, Yevgeny; Hershkowitz, Noah; ...

    2016-11-03

    Here, this article provides recommended methods for building, operating, and taking plasma potential measurements from electron-emitting probes in electric propulsion devices, including Hall thrusters, gridded ion engines, and others. The two major techniques, the floating point technique and the inflection point technique, are described in detail as well as calibration and error-reduction methods. The major heating methods are described as well as the various considerations for emissive probe construction. Lastly, special considerations for electric propulsion plasmas are addressed, including high-energy densities, ion flows, magnetic fields, and potential fluctuations. Recommendations for probe design and operation are provided.

  12. Estimating serial correlation and self-similarity in financial time series-A diversification approach with applications to high frequency data

    NASA Astrophysics Data System (ADS)

    Gerlich, Nikolas; Rostek, Stefan

    2015-09-01

    We derive a heuristic method to estimate the degree of self-similarity and serial correlation in financial time series. In particular, we advocate the use of a tailor-made selection of different estimation techniques that are used in various fields of time series analysis but have so far not consistently found their way into the finance literature. Following the idea of portfolio diversification, we show that considerable improvements with respect to robustness and unbiasedness can be achieved by using a basket of estimation methods. With this methodological toolbox at hand, we investigate real market data to show that noticeable deviations from the assumptions of constant self-similarity and absence of serial correlation occur during certain periods. On the one hand, this may shed new light on seemingly ambiguous scientific findings concerning the serial correlation of financial time series. On the other hand, a proven time-changing degree of self-similarity may help to explain high-volatility clusters of stock price indices.
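
    The "basket" idea can be sketched with two simple, independent Hurst-exponent estimators whose results are averaged; the techniques below (aggregated variance and detrended fluctuation analysis) are common choices but are not claimed to be the paper's specific selection, and the i.i.d. test series is synthetic.

```python
import numpy as np

def hurst_aggvar(x, sizes=(4, 8, 16, 32, 64)):
    """Aggregated-variance estimator: Var of block means scales like m**(2H - 2)."""
    v = [np.var([x[i:i + m].mean() for i in range(0, len(x) - m + 1, m)])
         for m in sizes]
    slope = np.polyfit(np.log(sizes), np.log(v), 1)[0]
    return 1.0 + slope / 2.0

def hurst_dfa(x, sizes=(8, 16, 32, 64, 128)):
    """DFA estimator: RMS fluctuation of the detrended profile scales like n**H."""
    profile = np.cumsum(x - x.mean())
    fluct = []
    for n in sizes:
        rms = []
        for s in range(len(profile) // n):
            seg = profile[s * n:(s + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrending
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        fluct.append(np.mean(rms))
    return np.polyfit(np.log(sizes), np.log(fluct), 1)[0]

rng = np.random.default_rng(0)
returns = rng.normal(size=10_000)        # i.i.d. returns: H should be close to 0.5
estimates = [hurst_aggvar(returns), hurst_dfa(returns)]
print("individual:", np.round(estimates, 3), "basket:", round(np.mean(estimates), 3))
```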

  13. New similarity of triangular fuzzy number and its application.

    PubMed

    Zhang, Xixiang; Ma, Weimin; Chen, Liping

    2014-01-01

    The similarity of triangular fuzzy numbers is an important metric for their application. Several approaches exist to measure the similarity of triangular fuzzy numbers; however, some of them tend to yield values that are too large. To make the similarity well distributed, a new method, SIAM (Shape's Indifferent Area and Midpoint), for measuring the similarity of triangular fuzzy numbers is put forward, which takes the shape's indifferent area and the midpoints of two triangular fuzzy numbers into consideration. Comparison with other similarity measurements shows the effectiveness of the proposed method. It is then applied to collaborative filtering recommendation to measure users' similarity. A collaborative filtering case is used to illustrate users' similarity based on the cloud model and on triangular fuzzy numbers; the result indicates that users' similarity based on triangular fuzzy numbers achieves better discrimination. Finally, a simulated collaborative filtering recommendation system is developed which uses the cloud model and triangular fuzzy numbers to express users' comprehensive evaluation of items, and the result shows that the accuracy of collaborative filtering recommendation based on triangular fuzzy numbers is higher.

  14. Assessing the accuracy of different simplified frictional rolling contact algorithms

    NASA Astrophysics Data System (ADS)

    Vollebregt, E. A. H.; Iwnicki, S. D.; Xie, G.; Shackleton, P.

    2012-01-01

    This paper presents an approach for assessing the accuracy of different frictional rolling contact theories. The main characteristic of the approach is that it takes a statistically oriented view. This yields a better insight into the behaviour of the methods in diverse circumstances (varying contact patch ellipticities; mixed longitudinal, lateral and spin creepages) than is obtained when only a small number of (basic) circumstances are used in the comparison. The range of contact parameters that occurs for realistic vehicles and tracks is assessed using simulations with the Vampire vehicle system dynamics (VSD) package. This shows that larger values of the spin creepage occur rather frequently. Based on this, our approach is applied to typical cases for which railway VSD packages are used. The results show that particularly the USETAB approach but also FASTSIM give considerably better results than the linear theory, Vermeulen-Johnson, Shen-Hedrick-Elkins and Polach methods, when compared with the 'complete theory' of the CONTACT program.

  15. Resistance of dichromated gelatin as photoresist

    NASA Astrophysics Data System (ADS)

    Lin, Pang; Yan, Yingbai; Jin, Guofan; Wu, Minxian

    1999-09-01

    Based on photographic chemistry, a chemical hardening method was selected to enhance the anti-etch capability of gelatin. Considering the hardener and the permeation process, formaldehyde is the most suitable option owing to its small molecule size and covalent cross-linking with gelatin. After hardening in formaldehyde, the resistance of the gelatin was determined by etching in 1% HF solution. The results showed that the anti-etch capability of the gelatin layer increased with tanning time, but the rate of increase gradually reduced and tended toward saturation. Based on the experimental results, a dissolving-flaking hypothesis for chemically hardened gelatin is presented. Sol-gel coatings were etched with 1% HF solution; comparison with the etching rate of the gelatin layer showed that gelatin can be used as a resist to fabricate optical elements in sol-gel coatings. With the cleaving-etch method and hardening of dichromated gelatin (DCG), DCG was used as a photoresist for fabricating sol-gel optical elements. As an application, a sol-gel random phase plate was fabricated.

  16. A noise-immune cryptographic information protection method for facsimile information transmission and the realization algorithms

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Bardachenko, Vitaliy F.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Ogorodnik, Konstantin V.

    2006-04-01

    We analyse existing methods of cryptographic protection for facsimile information transfer, consider their shortcomings, and demonstrate the need for a higher degree of information protection. A method of information protection based on presenting the input data as images is proposed. We offer a new noise-immune algorithm for the realization of this method, which consists in transforming an input frame by transposing its pixels according to an entered key. In decoding mode, the reverse transformation of the image is performed with the same key. The practical realization of the method takes into account noise in the transmission channels and information distortions introduced by scanners, faxes, and similar equipment. We show that these influences reduce to transformations of the input image coordinates. We describe the algorithm in detail and consider its basic steps, and we demonstrate the feasibility of the offered method by means of the developed software. The realized algorithm corrects frame distortions such as rotation, scaling, and pixel dropout. At a low noise level (loss of less than 10 percent of pixel information) it is possible to encode, transfer, and decode any type of image or text with a 12-point font. The software filters for information restoration and noise removal allow fax data to be transferred with 30 percent pixel loss for 18-point text. This tolerable percentage of data loss can be considerably increased by the use of a software character recognition block that can be realized with fuzzy-neural algorithms. Examples of the encoding and decryption of images and texts are shown.
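
    The core of the scheme, a key-seeded permutation of pixel positions with the inverse permutation applied at the receiver, can be sketched as follows. This is a schematic illustration only: the noise-compensation, geometry-correction, and character-recognition stages described above are not shown, and the key handling is deliberately simplistic.

```python
import numpy as np

def transpose_pixels(image, key, inverse=False):
    """Scramble (or, with inverse=True, unscramble) pixel positions using a
    permutation derived from the integer key. The same key must be used on
    both ends; only pixel positions change, not pixel values."""
    flat = image.ravel()
    perm = np.random.default_rng(key).permutation(flat.size)
    if inverse:
        out = np.empty_like(flat)
        out[perm] = flat                  # undo the transposition
    else:
        out = flat[perm]                  # apply the transposition
    return out.reshape(image.shape)

rng = np.random.default_rng(7)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in fax frame
key = 123456
encoded = transpose_pixels(frame, key)
decoded = transpose_pixels(encoded, key, inverse=True)
assert np.array_equal(frame, decoded)     # lossless round trip with the right key
```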

  17. Research on environmental impact of water-based fire extinguishing agents

    NASA Astrophysics Data System (ADS)

    Wang, Shuai

    2018-02-01

    This paper reviews the current status of the application of water-based fire extinguishing agents and the environmental and research considerations that motivate the study of their toxicity. It also offers a systematic review of currently available test methods for the toxicity and environmental impact of water-based fire extinguishing agents, illustrates the main requirements and relevant test methods, and presents findings for future research consideration. The paper also notes the limitations of the current study.

  18. Review of methods for measuring β-cell function: Design considerations from the Restoring Insulin Secretion (RISE) Consortium.

    PubMed

    Hannon, Tamara S; Kahn, Steven E; Utzschneider, Kristina M; Buchanan, Thomas A; Nadeau, Kristen J; Zeitler, Philip S; Ehrmann, David A; Arslanian, Silva A; Caprio, Sonia; Edelstein, Sharon L; Savage, Peter J; Mather, Kieren J

    2018-01-01

    The Restoring Insulin Secretion (RISE) study was initiated to evaluate interventions to slow or reverse the progression of β-cell failure in type 2 diabetes (T2D). To design the RISE study, we undertook an evaluation of methods for measurement of β-cell function and changes in β-cell function in response to interventions. In the present paper, we review approaches for measurement of β-cell function, focusing on methodologic and feasibility considerations. Methodologic considerations included: (1) the utility of each technique for evaluating key aspects of β-cell function (first- and second-phase insulin secretion, maximum insulin secretion, glucose sensitivity, incretin effects) and (2) tactics for incorporating a measurement of insulin sensitivity in order to adjust insulin secretion measures for insulin sensitivity appropriately. Of particular concern were the capacity to measure β-cell function accurately in those with poor function, as is seen in established T2D, and the capacity of each method for demonstrating treatment-induced changes in β-cell function. Feasibility considerations included: staff burden, including time and required methodological expertise; participant burden, including time and number of study visits; and ease of standardizing methods across a multicentre consortium. After this evaluation, we selected a 2-day measurement procedure, combining a 3-hour 75-g oral glucose tolerance test and a 2-stage hyperglycaemic clamp procedure, augmented with arginine. © 2017 John Wiley & Sons Ltd.

  19. Finite Element Analysis of the Maximum Stress at the Joints of the Transmission Tower

    NASA Astrophysics Data System (ADS)

    Itam, Zarina; Beddu, Salmia; Liyana Mohd Kamal, Nur; Bamashmos, Khaled H.

    2016-03-01

    Transmission towers are tall structures, usually steel lattice towers, used to support an overhead power line. Usually, transmission towers are analyzed as frame-truss systems and the members are assumed to be pin-connected, without explicitly considering the effects of joints on the tower behavior. In this research, an engineering example of a joint is analyzed with consideration of the joint detailing to investigate how it affects the tower analysis. A static analysis using STAAD Pro was conducted to identify the joint with the maximum stress. This joint was then explicitly analyzed in ANSYS using the Finite Element Method. Three approaches were used in the software: a simple plate model, bonded contact with no bolts, and beam-element bolts. Results from the joint analysis show that stress values increased when joint details were taken into consideration. This proves that joints and connections play an important role in the distribution of stress within the transmission tower.

  20. Macroporous monoliths for trace metal extraction from seawater

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yue, Yanfeng; Mayes, Richard; Gill, Gary A.

    2015-05-29

    The viability of seawater-based uranium recovery depends on the uranium adsorption rate and capacity, since the concentration of uranium in the oceans is relatively low (3.3 μgL⁻¹). An important consideration for fast adsorption is to maximize the adsorption properties of the adsorbents, such as surface area and pore structure, which can greatly improve the kinetics of uranium extraction and the adsorption capacity simultaneously. Following this consideration, macroporous monolith adsorbents were prepared by copolymerization of acrylonitrile (AN) and N,N′-methylenebis(acrylamide) (MBAAm) based on a cryogel method using both hydrophobic and hydrophilic monomers. The monolithic sorbents were tested with simulated seawater containing a high uranyl concentration (~6 ppm), and the uranium adsorption results showed that the adsorption capacities are strongly influenced by the ratio of monomer to crosslinker, i.e., the density of the amidoxime groups. The preliminary seawater testing indicates that the high salinity of seawater does not hinder the adsorption of uranium.

  1. Macroporous monoliths for trace metal extraction from seawater

    DOE PAGES

    Yue, Yanfeng; Mayes, Richard T.; Gill, Gary; ...

    2015-05-29

    The viability of seawater-based uranium recovery depends on the uranium adsorption rate and capacity, since the concentration of uranium in the oceans is relatively low (3.3 μg L⁻¹). An important consideration for fast adsorption is to maximize the adsorption properties of the adsorbents, such as surface area and pore structure, which can greatly improve the kinetics of uranium extraction and the adsorption capacity simultaneously. Following this consideration, macroporous monolith adsorbents were prepared by copolymerization of acrylonitrile (AN) and N,N′-methylenebis(acrylamide) (MBAAm) based on a cryogel method using both hydrophobic and hydrophilic monomers. The monolithic sorbents were tested with simulated seawater containing a high uranyl concentration (~6 ppm), and the uranium adsorption results showed that the adsorption capacities are strongly influenced by the ratio of monomer to crosslinker, i.e., the density of the amidoxime groups. Furthermore, the preliminary seawater testing indicates that the high salinity of seawater does not hinder the adsorption of uranium.

  2. Identification of the faecal indicator Escherichia coli in wastewater through the β-D-glucuronidase activity: comparison between two enumeration methods, membrane filtration with TBX agar, and Colilert®-18.

    PubMed

    Vergine, P; Salerno, C; Barca, E; Berardi, G; Pollice, A

    2017-04-01

    Escherichia coli (E. coli) is one of the most commonly adopted indicators for the determination of the microbiological quality in water and treated wastewater. Two main types of methods are used for the enumeration of this faecal indicator: membrane filtration (MF) and enzyme substrate tests. For both types, several substrates based on the β-D-glucuronidase activity have been commercialized. The specificity of this enzyme for E. coli bacteria has generated considerable use of methods that identify the β-D-glucuronidase activity as a definite indication of the presence of E. coli, without any further confirmation. This approach has been recently questioned for the application to wastewater. The present study compares two methods belonging to the above-mentioned types for the enumeration of E. coli in wastewater: MF with Tryptone Bile X-glucuronide agar and the Colilert®-18 test. Confirmation tests showed low average percentages of false positives and false negatives for both enumeration methods (between 4 and 11%). Moreover, the counting capabilities of these two methods were compared for a set of 70 samples of wastewater having different origins and degrees of treatment. Statistical analysis showed that the Colilert®-18 test allowed on average for a significantly higher recovery of E. coli.

  3. Considerations for ex vivo thermal tissue testing exemplified using the fresh porcine longissimus muscle model for endometrial ablation

    NASA Astrophysics Data System (ADS)

    Fugett, James H.; Bennett, Haydon E.; Shrout, Joshua L.; Coad, James E.

    2017-02-01

    Expansions in minimally invasive medical devices and technologies with thermal mechanisms of action are continuing to advance the practice of medicine. These expansions have led to an increasing need for appropriate animal models to validate and quantify device performance. The planning of these studies should take into consideration a variety of parameters, including the appropriate animal model (test system - ex vivo or in vivo; species; tissue type), treatment conditions (test conditions), predicate device selection (as appropriate, control article), study timing (Day 0 acute to more than Day 90 chronic survival studies), and methods of tissue analysis (tissue dissection - staining methods). These considerations are discussed and illustrated using the fresh extirpated porcine longissimus muscle model for endometrial ablation.

  4. Some important considerations in the development of stress corrosion cracking test methods.

    NASA Technical Reports Server (NTRS)

    Wei, R. P.; Novak, S. R.; Williams, D. P.

    1972-01-01

    A discussion is presented of some of the precautions that the development of fracture-mechanics-based test methods for studying stress corrosion cracking involves. Following a review of pertinent analytical fracture mechanics considerations and of basic test methods, the implications for stress corrosion cracking studies of the crack growth kinetics that determine time-to-failure and life are examined. It is shown that the basic assumptions of the linear-elastic fracture mechanics analyses must be clearly recognized and satisfied in experimentation, and that the effects of incubation and nonsteady-state crack growth must also be properly taken into account in determining the crack growth kinetics, if valid data are to be obtained from fracture-mechanics-based test methods.

  5. Mathematical modelling of convective processes in a weld pool under electric arc surfacing

    NASA Astrophysics Data System (ADS)

    Sarychev, V. D.; Granovskii, A. Yu; Nevskii, S. A.; Konovalov, S. V.

    2017-01-01

    The authors develop a mathematical model of convective processes in a molten pool under electric arc surfacing with flux-cored wire. The model is based on the idea that convective flows appear due to the temperature gradient and the action of electromagnetic forces. The influence of alloying elements in the molten metal was modeled as a non-linear dependence of surface tension upon temperature. Surface tension and its temperature coefficient were calculated according to the electron density functional method with consideration of the asymmetric electron distribution at the “molten metal / shielding gas” interface. Simultaneous solution of the Navier-Stokes and Maxwell equations by the finite element method, with consideration of the moving heat source at the interface, showed that there is a multi-vortex structure in the molten metal. This structure gives rise to a downward heat flux which, at the heating stage, moves from the centre of the pool and stirs it across its full width. At the cooling stage this flux moves towards the centre of the pool and a single vortex forms near the symmetry centre. The penetration depth of this flux is ~10 mm. Formation of the downward heat flux is determined by a sign reversal of the temperature coefficient of surface tension due to the presence of alloying elements.

  6. Using in vitro/in silico data for consumer safety assessment of feed flavoring additives--A feasibility study using piperine.

    PubMed

    Thiel, A; Etheve, S; Fabian, E; Leeman, W R; Plautz, J R

    2015-10-01

    Consumer health risk assessment for feed additives is based on the estimated human exposure to the additive that may occur in livestock edible tissues, compared to its hazard. We present an approach using alternative methods for consumer health risk assessment. The aim was to use the fewest possible number of animals to estimate the hazard and human exposure without jeopardizing safety upon use. As an example we selected the feed flavoring substance piperine and applied in silico modeling for residue estimation, results from literature surveys, and read-across to assess metabolism in different species. Results were compared to experimental in vitro metabolism data in rat and chicken, and to quantitative analysis of residue levels from the in vivo situation in livestock. In silico residue modeling proved to be a worst-case approach: the modeled residue levels were considerably higher than the measured residue levels. The in vitro evaluation of livestock versus rodent metabolism revealed no major differences in metabolism between the species. We successfully performed a consumer health risk assessment without performing additional animal experiments. As shown, the use and combination of different alternative methods supports animal welfare considerations and provides a future perspective for reducing the number of animals. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  7. Multi-period project portfolio selection under risk considerations and stochastic income

    NASA Astrophysics Data System (ADS)

    Tofighian, Ali Asghar; Moezzi, Hamid; Khakzar Barfuei, Morteza; Shafiee, Mahmood

    2018-02-01

    This paper deals with the multi-period project portfolio selection problem. In this problem, the available budget is invested in the best portfolio of projects in each period such that the net profit is maximized. We also consider more realistic assumptions to cover a wider range of applications than those reported in previous studies. A novel mathematical model is presented to solve the problem, considering risks, stochastic incomes, and the possibility of investing extra budget in each time period. Due to the complexity of the problem, an effective meta-heuristic method hybridized with a local search procedure is presented to solve it. The algorithm is based on the genetic algorithm (GA), a prominent method for this type of problem. The GA is enhanced by a new solution representation and well-selected operators. It is also hybridized with a local search mechanism to obtain better solutions in a shorter time. The performance of the proposed algorithm is then compared with well-known algorithms, such as the basic genetic algorithm (GA), particle swarm optimization (PSO), and the electromagnetism-like algorithm (EM-like), by means of several prominent indicators. The computational results show the superiority of the proposed algorithm in terms of accuracy, robustness and computation time. Finally, the proposed algorithm is combined with PSO to reduce the computing time considerably.
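
    As a rough illustration of the hybridization idea described above, the sketch below runs a plain binary-chromosome GA with a greedy bit-flip local search on a single-period, deterministic, budget-constrained selection problem. The costs, profits, penalty weight and GA settings are all hypothetical, and the paper's multi-period, stochastic, risk-aware model is not reproduced.

```python
# Minimal hybrid GA sketch for budget-constrained project selection (illustrative only).
import random

random.seed(0)
COSTS   = [12, 7, 19, 9, 14, 5, 11, 8]    # hypothetical project costs
PROFITS = [20, 9, 33, 14, 22, 6, 17, 12]  # hypothetical expected net profits
BUDGET  = 40

def fitness(chrom):
    cost = sum(c for c, g in zip(COSTS, chrom) if g)
    prof = sum(p for p, g in zip(PROFITS, chrom) if g)
    # Soft penalty for exceeding the budget.
    return prof if cost <= BUDGET else prof - 10 * (cost - BUDGET)

def local_search(chrom):
    # Greedy single-bit-flip improvement: the hybridization step.
    best, best_fit = chrom[:], fitness(chrom)
    for i in range(len(chrom)):
        cand = best[:]
        cand[i] = 1 - cand[i]
        if fitness(cand) > best_fit:
            best, best_fit = cand, fitness(cand)
    return best

def hybrid_ga(pop_size=30, generations=100, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in COSTS] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            p1 = max(random.sample(pop, 2), key=fitness)   # binary tournament
            p2 = max(random.sample(pop, 2), key=fitness)
            cut = random.randrange(1, len(COSTS))           # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            nxt.append(local_search(child))
        pop = nxt
    return max(pop, key=fitness)

best = hybrid_ga()
print("selected projects:", best, "fitness:", fitness(best))
```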

  8. Review of invasive urodynamics and progress towards non-invasive measurements in the assessment of bladder outlet obstruction

    PubMed Central

    Griffiths, C. J.; Pickard, R. S.

    2009-01-01

    Objective: This article defines the need for objective measurements to help diagnose the cause of lower urinary tract symptoms (LUTS). It describes the conventional techniques available, mainly invasive, and then summarizes the emerging range of non-invasive measurement techniques. Methods: This is a narrative review derived from the clinical and scientific knowledge of the authors together with consideration of selected literature. Results: Consideration of measured bladder pressure and urinary flow rate during voiding in an invasive pressure-flow study is considered the gold standard for categorization of bladder outlet obstruction (BOO). The diagnosis is currently made by plotting the detrusor pressure at maximum flow (pdetQmax) and maximum flow rate (Qmax) on the nomogram approved by the International Continence Society. This plot will categorize the void as obstructed, equivocal or unobstructed. The invasive and relatively complex nature of this investigation has led to a number of inventive techniques to categorize BOO, either by measuring bladder pressure non-invasively or by providing a proxy measure such as bladder weight. Conclusion: Non-invasive methods of diagnosing BOO show great promise and a few have reached the stage of being commercially available. Further studies are however needed to validate the measurement techniques and assess their worth in the assessment of men with LUTS. PMID:19468436

  9. Laser Scanning Systems and Techniques in Rockfall Source Identification and Risk Assessment: A Critical Review

    NASA Astrophysics Data System (ADS)

    Fanos, Ali Mutar; Pradhan, Biswajeet

    2018-04-01

    Rockfall poses risk to people, their properties and to transportation ways in mountainous and hilly regions. This hazard shows various characteristics such as wide distribution, sudden occurrence, variable magnitude, high lethality and randomness. Therefore, prediction of the rockfall phenomenon both spatially and temporally is a challenging task. The Digital Terrain Model (DTM) is one of the most significant elements in rockfall source identification and risk assessment. Light detection and ranging (LiDAR) is the most advanced and effective technique to derive high-resolution and accurate DTMs. This paper presents a critical overview of the rockfall phenomenon (definition, triggering factors, motion modes and modeling) and of the LiDAR technique in terms of data pre-processing, DTM generation and the factors that can be obtained from this technique for rockfall source identification and risk assessment. It also reviews the existing methods that are utilized for the evaluation of rockfall trajectories and their characteristics (frequency, velocity, bouncing height and kinetic energy), probability, susceptibility, hazard and risk. Detailed consideration is given to quantitative methodologies in addition to qualitative ones. Various methods are demonstrated with respect to their application scales (local and regional). Additionally, attention is given to the latest improvements, particularly including the consideration of the intensity of the phenomena and the magnitude of the events at chosen sites.

  10. Assessment of Differential Item Functioning in Health-Related Outcomes: A Simulation and Empirical Analysis with Hierarchical Polytomous Data.

    PubMed

    Sharafi, Zahra; Mousavi, Amin; Ayatollahi, Seyyed Mohammad Taghi; Jafari, Peyman

    2017-01-01

    The purpose of this study was to evaluate the effectiveness of two methods of detecting differential item functioning (DIF) in the presence of multilevel data and polytomously scored items. The assessment of DIF with multilevel data (e.g., patients nested within hospitals, hospitals nested within districts) from large-scale assessment programs has received considerable attention but very few studies evaluated the effect of hierarchical structure of data on DIF detection for polytomously scored items. The ordinal logistic regression (OLR) and hierarchical ordinal logistic regression (HOLR) were utilized to assess DIF in simulated and real multilevel polytomous data. Six factors (DIF magnitude, grouping variable, intraclass correlation coefficient, number of clusters, number of participants per cluster, and item discrimination parameter) with a fully crossed design were considered in the simulation study. Furthermore, data of Pediatric Quality of Life Inventory™ (PedsQL™) 4.0 collected from 576 healthy school children were analyzed. Overall, results indicate that both methods performed equivalently in terms of controlling Type I error and detection power rates. The current study showed negligible difference between OLR and HOLR in detecting DIF with polytomously scored items in a hierarchical structure. Implications and considerations while analyzing real data were also discussed.
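
    For readers unfamiliar with the non-hierarchical variant, the sketch below shows a generic uniform-DIF likelihood-ratio test for a single polytomous item using ordinal logistic regression in statsmodels; the simulated data, column names and single group covariate are illustrative assumptions, and the hierarchical (HOLR) model used in the study is not shown.

```python
# Generic uniform-DIF likelihood-ratio test with ordinal logistic regression (sketch).
import numpy as np
import pandas as pd
from scipy.stats import chi2
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "total": rng.normal(size=n),              # matching/ability proxy
    "group": rng.integers(0, 2, size=n),      # reference vs. focal group
})
# Simulated 4-category item response, mildly shifted for group 1 (uniform DIF).
latent = df["total"] + 0.4 * df["group"] + rng.logistic(size=n)
df["item"] = pd.cut(latent, bins=[-np.inf, -1, 0, 1, np.inf], labels=False)

reduced = OrderedModel(df["item"], df[["total"]], distr="logit").fit(
    method="bfgs", disp=False)
full = OrderedModel(df["item"], df[["total", "group"]], distr="logit").fit(
    method="bfgs", disp=False)

lr = 2 * (full.llf - reduced.llf)     # likelihood-ratio statistic
p = chi2.sf(lr, df=1)                 # one extra parameter (the group term)
print(f"LR = {lr:.2f}, p = {p:.4f}")
```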

  11. Chemical constituents and biological activities of Dianthus elegans var. elegans.

    PubMed

    Mutlu, Kiymet; Sarikahya, Nazli Boke; Nalbantsoy, Ayse; Kirmizigul, Suheyla

    2018-06-01

    Chemical investigation of the aerial parts of Dianthus elegans var. elegans afforded two previously undescribed saponins, named dianosides M-N (1-2), together with four oleanane-type triterpenoid glycosides (3-6). Their structures were elucidated as 3-O-α-L-arabinofuranosyl-16α-hydroxyolean-12-ene-23α, 28β-dioic acid (1) and 3-O-α-L-arabinofuranosyl-(1 → 3)-β-D-glucopyranosyl 16α-hydroxyolean-12-ene-23α-oic acid, 28-O-β-D-glucopyranosyl-(1 → 6)-β-D-glycosyl ester (2) by chemical and extensive spectroscopic methods including IR, 1D, 2D NMR and HRESIMS. Both of the saponins were evaluated for their cytotoxicities against HEK-293, A-549 and HeLa human cancer cells using the MTT method. None of the compounds showed substantial cytotoxic activity against the tested cell lines. However, dianosides M-N and the n-butanol fraction exhibited considerable haemolysis in human erythrocyte cells. The immunomodulatory properties of dianosides M-N were also evaluated in whole blood cells activated by PMA plus ionomycin. Dianosides M-N increased the IL-1β concentration significantly, whereas the n-butanol fraction only slightly augmented IL-1β secretion. None of the compounds changed IL-2 and IFN-γ levels considerably.

  12. Key considerations in designing a patient navigation program for colorectal cancer screening.

    PubMed

    DeGroff, Amy; Coa, Kisha; Morrissey, Kerry Grace; Rohan, Elizabeth; Slotman, Beth

    2014-07-01

    Colorectal cancer is the second leading cause of cancer mortality among those cancers affecting both men and women. Screening is known to reduce mortality by detecting cancer early and through colonoscopy, removing precancerous polyps. Only 58.6% of adults are currently up-to-date with colorectal cancer screening by any method. Patient navigation shows promise in increasing adherence to colorectal cancer screening and reducing health disparities; however, it is a complex intervention that is operationalized differently across institutions. This article describes 10 key considerations in designing a patient navigation intervention for colorectal cancer screening based on a literature review and environmental scan. Factors include (1) identifying a theoretical framework and setting program goals, (2) specifying community characteristics, (3) establishing the point(s) of intervention within the cancer continuum, (4) determining the setting in which navigation services are provided, (5) identifying the range of services offered and patient navigator responsibilities, (6) determining the background and qualifications of navigators, (7) selecting the method of communications between patients and navigators, (8) designing the navigator training, (9) defining oversight and supervision for the navigators, and (10) evaluating patient navigation. Public health practitioners can benefit from the practical perspective offered here for designing patient navigation programs. © 2013 Society for Public Health Education.

  13. Optimizing the response to surveillance alerts in automated surveillance systems.

    PubMed

    Izadi, Masoumeh; Buckeridge, David L

    2011-02-28

    Although much research effort has been directed toward refining algorithms for disease outbreak alerting, considerably less attention has been given to the response to alerts generated from statistical detection algorithms. Given the inherent inaccuracy in alerting, it is imperative to develop methods that help public health personnel identify optimal policies in response to alerts. This study evaluates the application of dynamic decision making models to the problem of responding to outbreak detection methods, using anthrax surveillance as an example. Adaptive optimization through approximate dynamic programming is used to generate a policy for decision making following outbreak detection. We investigate the degree to which the model can tolerate noise theoretically, in order to keep near-optimal behavior. We also evaluate the policy from our model empirically and compare it with current approaches in routine public health practice for investigating alerts. Timeliness of outbreak confirmation and total costs associated with the decisions made are used as performance measures. Using our approach, on average, 80 per cent of outbreaks were confirmed prior to the fifth day post-attack, with considerably less cost compared to response strategies currently in use. Experimental results are also provided to illustrate the robustness of the adaptive optimization approach and to show the realization of the derived error bounds in practice. Copyright © 2011 John Wiley & Sons, Ltd.

  14. Temperature management in haematology patients with febrile neutropenia: a practice survey.

    PubMed

    Weinkove, Robert; Clay, Jennifer; Wood, Catherine

    2013-04-19

    To assess the attitudes of clinicians to temperature management in haematology patients with febrile neutropenia. An online scenario-based survey was circulated to consultant members of the New Zealand branch of the Haematology Society of Australia and New Zealand, to haematology advanced trainees, and to nursing representatives at each haematology department in New Zealand. Eighty-eight responses were obtained, from 34 doctors and 54 nurses. Most respondents would advise a neutropenic patient to take paracetamol as needed for pain. The median temperature intervention threshold for an asymptomatic patient with febrile neutropenia was higher for doctors than for nurses (38.5 versus 38.0 degrees Celsius), despite considerable heterogeneity. Both groups indicated they would intervene at a median 38.0 degrees Celsius for a patient with rigors. Paracetamol was the preferred first-line cooling measure, with physical methods second-line, and pethidine third-line. All respondents favoured oral over intravenous or rectal paracetamol. Most believed a clinical trial of antipyretic treatment for febrile neutropenia was warranted, and indicated willingness to enrol their patients in such a study. This survey documents clinicians' preferred temperature intervention thresholds and methods for haematology patients with neutropenic fever, and shows considerable variation in practice. Most respondents supported a trial of antipyretic management in febrile neutropenia.

  15. Are men well served by family planning programs?

    PubMed

    Hardee, Karen; Croce-Galis, Melanie; Gay, Jill

    2017-01-23

    Although the range of contraceptives includes methods for men, namely condoms, vasectomy and withdrawal that men use directly, and the Standard Days Method (SDM) that requires their participation, family planning programming has primarily focused on women. What is known about reaching men as contraceptive users? This paper draws from a review of 47 interventions that reached men and proposes 10 key considerations for strengthening programming for men as contraceptive users. A review of programming shows that men and boys are not particularly well served by programs. Most programs operate from the perspective that women are contraceptive users and that men should support their partners, with insufficient attention to reaching men as contraceptive users in their own right. The notion that family planning is women's business only is outdated. There is sufficient evidence demonstrating men's desire for information and services, as well as men's positive response to existing programming to warrant further programming for men as FP users. The key considerations focus on getting information and services where men and boys need it; addressing gender norms that affect men's attitudes and use while respecting women's autonomy; reaching adolescent boys; including men as users in policies and guidelines; scaling up successful programming; filling gaps with implementation research and monitoring & evaluation; and creating more contraceptive options for men.

  16. Simple glucose reduction route for one-step synthesis of copper nanofluids

    NASA Astrophysics Data System (ADS)

    Shenoy, U. Sandhya; Shetty, A. Nityananda

    2014-01-01

    A one-step method has been employed in the synthesis of copper nanofluids. Copper nitrate is reduced by glucose in the presence of sodium lauryl sulfate. The synthesized particles are characterized by X-ray diffraction for the phase structure; electron diffraction X-ray analysis for chemical composition; transmission electron microscopy and field emission scanning electron microscopy for the morphology; and Fourier-transform infrared spectroscopy and ultraviolet-visible spectroscopy for the analysis of the ingredients of the solution. Thermal conductivity, sedimentation and rheological measurements have also been carried out. It is found that the reaction parameters have a considerable effect on the size of the particles formed and the rate of the reaction. The techniques confirm that the synthesized particles are copper. The reported method showed a promising increase in the thermal conductivity of the base fluid and is found to be a reliable, simple and cost-effective method for preparing heat transfer fluids with higher stability.

  17. An efficient method for quantum transport simulations in the time domain

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Yam, C.-Y.; Frauenheim, Th.; Chen, G. H.; Niehaus, T. A.

    2011-11-01

    An approximate method based on adiabatic time dependent density functional theory (TDDFT) is presented, that allows for the description of the electron dynamics in nanoscale junctions under arbitrary time dependent external potentials. The density matrix of the device region is propagated according to the Liouville-von Neumann equation. The semi-infinite leads give rise to dissipative terms in the equation of motion which are calculated from first principles in the wide band limit. In contrast to earlier ab initio implementations of this formalism, the Hamiltonian is here approximated in the spirit of the density functional based tight-binding (DFTB) method. Results are presented for two prototypical molecular devices and compared to full TDDFT calculations. The temporal profile of the current traces is qualitatively well captured by the DFTB scheme. Steady state currents show considerable variations, both in comparison of approximate and full TDDFT, but also among TDDFT calculations with different basis sets.

  18. An unscaled quantum mechanical harmonic force field for p-benzoquinone

    NASA Astrophysics Data System (ADS)

    Nonella, Marco; Tavan, Paul

    1995-10-01

    Structure and harmonic vibrational frequencies of p-benzoquinone have been calculated using quantum chemical ab initio and density functional methods. Our calculations show that a satisfactory description of fundamentals and normal mode compositions is achieved upon consideration of correlation effects by means of Møller-Plesset perturbation expansion (MP2) or by density functional theory (DFT). Furthermore, for correct prediction of CO bondlength and force constant, basis sets augmented by polarization functions are required. Applying such basis sets, MP2 and DFT calculations both give results which are generally in reasonable agreement with experimental data. The quantitatively better agreement, however, is achieved with the computationally less demanding DFT method. This method particularly allows very precise prediction of the experimentally important absorptions in the frequency region between 1500 and 1800 cm⁻¹ and of the isotopic shifts of these vibrations due to ¹³C or ¹⁸O substitution.

  19. An image-processing methodology for extracting bloodstain pattern features.

    PubMed

    Arthur, Ravishka M; Humburg, Philomena J; Hoogenboom, Jerry; Baiker, Martin; Taylor, Michael C; de Bruin, Karla G

    2017-08-01

    There is a growing trend in forensic science to develop methods to make forensic pattern comparison tasks more objective. This has generally involved the application of suitable image-processing methods to provide numerical data for identification or comparison. This paper outlines a unique image-processing methodology that can be utilised by analysts to generate reliable pattern data that will assist them in forming objective conclusions about a pattern. A range of features were defined and extracted from a laboratory-generated impact spatter pattern. These features were based in part on bloodstain properties commonly used in the analysis of spatter bloodstain patterns. The values of these features were consistent with properties reported qualitatively for such patterns. The image-processing method developed shows considerable promise as a way to establish measurable discriminating pattern criteria that are lacking in current bloodstain pattern taxonomies. Copyright © 2017 Elsevier B.V. All rights reserved.
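
    A minimal sketch of the kind of per-stain measurements such a pipeline might extract (area, circularity, ellipse orientation) is given below, assuming a binarisable spatter photograph named spatter.png; the thresholds and the feature set are illustrative and are not the paper's methodology.

```python
# Illustrative per-stain feature extraction from a spatter image (sketch only).
import cv2
import numpy as np

img = cv2.imread("spatter.png", cv2.IMREAD_GRAYSCALE)      # hypothetical input image
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
# OpenCV 4.x returns (contours, hierarchy).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

features = []
for c in contours:
    area = cv2.contourArea(c)
    if area < 5:                                  # skip tiny specks
        continue
    perimeter = cv2.arcLength(c, True)
    circularity = 4 * np.pi * area / perimeter ** 2 if perimeter > 0 else 0.0
    if len(c) >= 5:                               # fitEllipse needs at least 5 points
        (_, _), (ax1, ax2), angle = cv2.fitEllipse(c)
        major, minor = max(ax1, ax2), min(ax1, ax2)
    else:
        major = minor = angle = float("nan")
    features.append({
        "area_px": area,
        "circularity": circularity,
        "ellipticity": major / minor if minor else float("nan"),
        "orientation_deg": angle,
    })
print(f"{len(features)} stains measured")
```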

  20. Singular perturbations and time scales in the design of digital flight control systems

    NASA Technical Reports Server (NTRS)

    Naidu, Desineni S.; Price, Douglas B.

    1988-01-01

    The results of applying the methodology of Singular Perturbations and Time Scales (SPATS) to the control of digital flight systems are presented. A block diagonalization method is described to decouple a full-order, two-time-scale (slow and fast) discrete control system into reduced-order slow and fast subsystems. Basic properties and numerical aspects of the method are discussed. A composite, closed-loop, suboptimal control system is constructed as the sum of the slow and fast optimal feedback controls. The application of this technique to an aircraft model shows close agreement between the exact solutions and the decoupled (or composite) solutions. The main advantage of the method is the considerable reduction in the overall computational requirements for the evaluation of optimal guidance and control laws. The significance of the results is that they can be used for real-time, onboard simulation. A brief survey of digital flight systems is also presented.

  1. Oscillator strengths, first-order properties, and nuclear gradients for local ADC(2).

    PubMed

    Schütz, Martin

    2015-06-07

    We describe theory and implementation of oscillator strengths, orbital-relaxed first-order properties, and nuclear gradients for the local algebraic diagrammatic construction scheme through second order. The formalism is derived via time-dependent linear response theory based on a second-order unitary coupled cluster model. The implementation presented here is a modification of our previously developed algorithms for Laplace transform based local time-dependent coupled cluster linear response (CC2LR); the local approximations thus are state specific and adaptive. The symmetry of the Jacobian leads to considerable simplifications relative to the local CC2LR method; as a result, a gradient evaluation is about four times less expensive. Test calculations show that in geometry optimizations, usually very similar geometries are obtained as with the local CC2LR method (provided that a second-order method is applicable). As an exemplary application, we performed geometry optimizations on the low-lying singlet states of chlorophyllide a.

  2. Automated detection of red lesions from digital colour fundus photographs.

    PubMed

    Jaafar, Hussain F; Nandi, Asoke K; Al-Nuaimy, Waleed

    2011-01-01

    The earliest signs of diabetic retinopathy, the major cause of vision loss, are damage to the blood vessels and the formation of lesions in the retina. Early detection of diabetic retinopathy is essential for the prevention of blindness. In this paper we present a computer-aided system to automatically identify red lesions from retinal fundus photographs. After pre-processing, a morphological technique was used to segment red lesion candidates from the background and other retinal structures. Then a rule-based classifier was used to discriminate actual red lesions from artifacts. A novel method for blood vessel detection is also proposed to refine the detection of red lesions. For a standardised test set of 219 images, the proposed method can detect red lesions with a sensitivity of 89.7% and a specificity of 98.6% (at lesion level). The performance of the proposed method shows considerable promise for detection of red lesions as well as other types of lesions.

  3. Application of statistical classification methods for predicting the acceptability of well-water quality

    NASA Astrophysics Data System (ADS)

    Cameron, Enrico; Pilla, Giorgio; Stella, Fabio A.

    2018-06-01

    The application of statistical classification methods is investigated—in comparison also to spatial interpolation methods—for predicting the acceptability of well-water quality in a situation where an effective quantitative model of the hydrogeological system under consideration cannot be developed. In the example area in northern Italy, in particular, the aquifer is locally affected by saline water and the concentration of chloride is the main indicator of both saltwater occurrence and groundwater quality. The goal is to predict if the chloride concentration in a water well will exceed the allowable concentration so that the water is unfit for the intended use. A statistical classification algorithm achieved the best predictive performances and the results of the study show that statistical classification methods provide further tools for dealing with groundwater quality problems concerning hydrogeological systems that are too difficult to describe analytically or to simulate effectively.
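
    The sketch below illustrates the general approach on synthetic data, assuming a classifier is trained on simple well covariates to predict whether chloride exceeds a regulatory threshold; the covariates, threshold value and choice of random forest are assumptions for illustration only.

```python
# Illustrative classification of well-water acceptability from simple covariates.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.uniform(0, 10, n),     # easting (km), hypothetical
    rng.uniform(0, 10, n),     # northing (km), hypothetical
    rng.uniform(20, 150, n),   # well depth (m), hypothetical
])
# Synthetic chloride concentration rising towards one side of the area.
chloride = 50 + 40 * X[:, 0] - 0.3 * X[:, 2] + rng.normal(0, 30, n)
y = (chloride > 250).astype(int)   # 1 = unfit for the intended use (assumed threshold)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy")
print("balanced accuracy:", round(scores.mean(), 3))
```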

  4. A Penalty Method for the Numerical Solution of Hamilton-Jacobi-Bellman (HJB) Equations in Finance

    NASA Astrophysics Data System (ADS)

    Witte, J. H.; Reisinger, C.

    2010-09-01

    We present a simple and easy to implement method for the numerical solution of a rather general class of Hamilton-Jacobi-Bellman (HJB) equations. In many cases, the considered problems have only a viscosity solution, to which, fortunately, many intuitive (e.g. finite difference based) discretisations can be shown to converge. However, especially when using fully implicit time stepping schemes with their desirable stability properties, one is still faced with the considerable task of solving the resulting nonlinear discrete system. In this paper, we introduce a penalty method which approximates the nonlinear discrete system to an order of O(1/ρ), where ρ>0 is the penalty parameter, and we show that an iterative scheme can be used to solve the penalised discrete problem in finitely many steps. We include a number of examples from mathematical finance for which the described approach yields a rigorous numerical scheme and present numerical results.
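
    As a concrete, simplified instance of the idea, the sketch below applies a penalty iteration to a one-dimensional obstacle problem written as a linear complementarity problem min(Au − f, u − g) = 0, which is a special case of the problem class discussed; the grid, the obstacle and the value of the penalty parameter ρ are illustrative, and this is not the paper's general HJB scheme.

```python
# Penalty iteration for a 1-D obstacle problem min(Au - f, u - g) = 0 (sketch).
import numpy as np

n, rho = 200, 1e6                     # interior grid points, penalty parameter
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)        # interior nodes, u = 0 on the boundary
# A u ~ -u'' with homogeneous Dirichlet conditions; f = 0; obstacle g (a parabola).
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h ** 2
f = np.zeros(n)
g = 0.5 - 4.0 * (x - 0.5) ** 2

u = np.maximum(g, 0.0)                               # initial guess
for it in range(50):
    active = (u < g).astype(float)                   # constraint active -> penalty on
    P = np.diag(rho * active)
    u_new = np.linalg.solve(A + P, f + P @ g)
    done = np.array_equal(u_new < g, u < g)          # active set unchanged -> stop
    u = u_new
    if done:
        break
print("iterations:", it + 1, "max constraint violation:", float(np.max(g - u)))
```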

  5. Definition of a new thermal contrast and pulse correction for defect quantification in pulsed thermography

    NASA Astrophysics Data System (ADS)

    Benítez, Hernán D.; Ibarra-Castanedo, Clemente; Bendada, AbdelHakim; Maldague, Xavier; Loaiza, Humberto; Caicedo, Eduardo

    2008-01-01

    It is well known that the methods of thermographic non-destructive testing based on the thermal contrast are strongly affected by non-uniform heating at the surface. Hence, the results obtained from these methods depend considerably on the chosen reference point. The differential absolute contrast (DAC) method was developed to eliminate the need to determine a reference point, by defining the thermal contrast with respect to an ideal sound area. Although very useful at early times, the DAC accuracy decreases when the heat front approaches the sample rear face. We propose a new DAC version that explicitly introduces the sample thickness using thermal quadrupoles theory, and we show that the new DAC range of validity extends to long times while preserving its validity for short times. This new contrast is used for defect quantification in composite, Plexiglas™ and aluminum samples.

  6. Simulation study on discrete charge effects of SiNW biosensors according to bound target position using a 3D TCAD simulator.

    PubMed

    Chung, In-Young; Jang, Hyeri; Lee, Jieun; Moon, Hyunggeun; Seo, Sung Min; Kim, Dae Hwan

    2012-02-17

    We introduce a simulation method for the biosensor environment which treats the semiconductor and the electrolyte region together, using the well-established semiconductor 3D TCAD simulator tool. Using this simulation method, we conduct electrostatic simulations of SiNW biosensors with a more realistic target charge model where the target is described as a charged cube, randomly located across the nanowire surface, and analyze the Coulomb effect on the SiNW FET according to the position and distribution of the target charges. The simulation results show the considerable variation in the SiNW current according to the bound target positions, and also the dependence of conductance modulation on the polarity of target charges. This simulation method and the results can be utilized for analysis of the properties and behavior of the biosensor device, such as the sensing limit or the sensing resolution.

  7. Effects of inductive bias on computational evaluations of ligand-based modeling and on drug discovery

    NASA Astrophysics Data System (ADS)

    Cleves, Ann E.; Jain, Ajay N.

    2008-03-01

    Inductive bias is the set of assumptions that a person or procedure makes in making a prediction based on data. Different methods for ligand-based predictive modeling have different inductive biases, with a particularly sharp contrast between 2D and 3D similarity methods. A unique aspect of ligand design is that the data that exist to test methodology have been largely man-made, and that this process of design involves prediction. By analyzing the molecular similarities of known drugs, we show that the inductive bias of the historic drug discovery process has a very strong 2D bias. In studying the performance of ligand-based modeling methods, it is critical to account for this issue in dataset preparation, use of computational controls, and in the interpretation of results. We propose specific strategies to explicitly address the problems posed by inductive bias considerations.

  8. [Comparison among various software for LMS growth curve fitting methods].

    PubMed

    Han, Lin; Wu, Wenhong; Wei, Qiuxia

    2015-03-01

    To explore methods for realizing LMS (skewness-median-coefficient of variation) growth curve fitting in different software packages, and to identify a suitable growth curve statistical method for grass-roots child and adolescent health staff. Regular physical examination data of head circumference for normal infants aged 3, 6, 9 and 12 months in Baotou City were analyzed. The statistical packages SAS, R, STATA and SPSS were used to fit the LMS growth curve, and the results were evaluated in terms of user convenience, ease of learning, user interface, display of results, software updates and maintenance, and so on. The growth curve fitting results showed the same calculation outcome across packages, and each statistical package had its own advantages and disadvantages. With all the evaluation aspects in consideration, R excelled the others in LMS growth curve fitting and has the advantage over the other packages for grass-roots child and adolescent health staff.

  9. Development of an Upper Limb Power Assist System Using Pneumatic Actuators for Farming Lift-up Motion

    NASA Astrophysics Data System (ADS)

    Yagi, Eiichi; Harada, Daisuke; Kobayashi, Masaaki

    Power assist systems for lifting objects without low back pain have lately attracted considerable attention. We have been developing power assist systems with pneumatic actuators for the elbow and shoulder to support the farming task of lifting a bag of rice weighing 30 kg. This paper describes the mechanism and control method of this power assist system. The pneumatic rotary actuator supports shoulder motion, and the air cylinder supports elbow motion. In this control method, surface electromyogram (EMG) signals are used as the input information of the controller. The joint support torques for the human are calculated based on the antigravity term of the necessary joint torques, which are estimated from the dynamics of an approximated human link model. The experimental results show the effectiveness of the proposed mechanism and control method of the power assist system.

  10. Improved collaborative filtering recommendation algorithm of similarity measure

    NASA Astrophysics Data System (ADS)

    Zhang, Baofu; Yuan, Baoping

    2017-05-01

    The collaborative filtering recommendation algorithm is one of the most widely used algorithms in personalized recommender systems. The key is to find the nearest-neighbour set of the active user by using a similarity measure. However, traditional similarity measures mainly focus on the similarity over the users' common rating items and ignore the relationship between the common rating items and all items the users have rated. Moreover, because the rating matrix is very sparse, the traditional collaborative filtering recommendation algorithm is not highly efficient. In order to obtain better accuracy, based on the consideration of common preference between users, the difference in rating scale and the scores on common items, this paper presents an improved similarity measure, and based on this measure, a collaborative filtering recommendation algorithm with improved similarity is proposed. Experimental results show that the algorithm can effectively improve the quality of recommendation, thus alleviating the impact of data sparseness.
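
    One simple way to realize the idea of penalizing similarities that rest on only a few co-rated items is sketched below: a Pearson-type similarity on the common items is scaled by the ratio of co-rated items to all items either user has rated. This illustrates the general approach, not the paper's exact formula.

```python
# Illustrative "coverage-weighted" user similarity for collaborative filtering.
import numpy as np

def improved_similarity(ru, rv):
    """ru, rv: rating vectors with np.nan for unrated items."""
    common = ~np.isnan(ru) & ~np.isnan(rv)
    if common.sum() < 2:
        return 0.0
    a, b = ru[common], rv[common]
    # Mean-centre per user to compensate for different rating scales.
    a, b = a - np.nanmean(ru), b - np.nanmean(rv)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    pearson = float(a @ b / denom) if denom > 0 else 0.0
    rated_union = (~np.isnan(ru) | ~np.isnan(rv)).sum()
    jaccard = common.sum() / rated_union        # coverage of co-rated items
    return jaccard * pearson

u = np.array([5, 4, np.nan, 3, 1])
v = np.array([4, np.nan, 2, 3, 1])
print(round(improved_similarity(u, v), 3))
```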

  11. Visibility Graph Based Time Series Analysis.

    PubMed

    Stephen, Mutua; Gu, Changgui; Yang, Huijie

    2015-01-01

    Network-based time series analysis has made considerable achievements in recent years. By mapping mono/multivariate time series into networks, one can investigate both their microscopic and macroscopic behaviors. However, most proposed approaches lead to the construction of static networks, consequently providing limited information on evolutionary behaviors. In the present paper we propose a method called visibility graph based time series analysis, in which series segments are mapped to visibility graphs as descriptions of the corresponding states and the successively occurring states are linked. This procedure converts a time series to a temporal network and, at the same time, a network of networks. Findings from empirical records for stock markets in the USA (S&P500 and Nasdaq) and artificial series generated by means of fractional Gaussian motions show that the method can provide rich information benefiting short-term and long-term predictions. Theoretically, we propose a method to investigate time series from the viewpoint of a network of networks.
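
    A minimal sketch of the natural visibility criterion used to map a series segment to a graph is given below: two samples are linked if the straight line between them stays above every intermediate sample. The linking of successive segment graphs into a temporal network ("network of networks") is not shown.

```python
# Natural visibility graph edges for one series segment (O(n^2) brute force sketch).
import numpy as np

def visibility_edges(y):
    n = len(y)
    edges = []
    for i in range(n - 1):
        for j in range(i + 1, n):
            visible = True
            for k in range(i + 1, j):
                # Height of the chord from (i, y[i]) to (j, y[j]) at position k.
                chord = y[j] + (y[i] - y[j]) * (j - k) / (j - i)
                if y[k] >= chord:
                    visible = False
                    break
            if visible:
                edges.append((i, j))
    return edges

y = np.array([0.8, 0.2, 0.9, 0.4, 0.6, 0.1, 0.7])
print(visibility_edges(y))
```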

  12. Reconstructing signals from noisy data with unknown signal and noise covariance.

    PubMed

    Oppermann, Niels; Robbers, Georg; Ensslin, Torsten A

    2011-10-01

    We derive a method to reconstruct Gaussian signals from linear measurements with Gaussian noise. This new algorithm is intended for applications in astrophysics and other sciences. The starting point of our considerations is the principle of minimum Gibbs free energy, which was previously used to derive a signal reconstruction algorithm handling uncertainties in the signal covariance. We extend this algorithm to simultaneously uncertain noise and signal covariances using the same principles in the derivation. The resulting equations are general enough to be applied in many different contexts. We demonstrate the performance of the algorithm by applying it to specific example situations and compare it to algorithms not allowing for uncertainties in the noise covariance. The results show that the method we suggest performs very well under a variety of circumstances and is indeed qualitatively superior to the other methods in cases where uncertainty in the noise covariance is present.

  13. Numerical Calculation Method for Prediction of Ground-borne Vibration near Subway Tunnel

    NASA Astrophysics Data System (ADS)

    Tsuno, Kiwamu; Furuta, Masaru; Abe, Kazuhisa

    This paper describes the development of a prediction method for ground-borne vibration from railway tunnels. Field measurements were carried out in a subway shield tunnel, in the ground and on the ground surface. The vibration generated in the tunnel was calculated by means of the train/track/tunnel interaction model and was compared with the measurement results. On the other hand, wave propagation in the ground was calculated utilizing an empirical model, which was proposed based on the relationship between frequency and the material damping coefficient α, in order to predict the attenuation in the ground in consideration of frequency characteristics. Numerical calculation using 2-dimensional FE analysis was also carried out in this research. The comparison between calculated and measured results shows that the prediction method, including the model for train/track/tunnel interaction and that for wave propagation, is applicable to the prediction of train-induced vibration propagated from railway tunnels.

  14. Assessing the accuracy of globe thermometer method in predicting outdoor mean radiant temperature under Malaysia tropical microclimate

    NASA Astrophysics Data System (ADS)

    Khrit, N. G.; Alghoul, M. A.; Sopian, K.; Lahimer, A. A.; Elayeb, O. K.

    2017-11-01

    Assessing outdoor human thermal comfort and urban climate quality requires experimental investigation of microclimatic conditions and their variations in open urban spaces. For this, it is essential to provide quantitative information on air temperature, humidity, wind velocity and mean radiant temperature. All of these parameters can be quantified directly except the mean radiant temperature (Tmrt). The most accurate method to quantify Tmrt is integral radiation measurement (3-D shortwave and longwave), which requires expensive radiometer instruments. To overcome this limitation, the well-known globe thermometer method has been suggested to calculate Tmrt. The aim of this study was to assess the possibility of using the indoor globe thermometer method to predict outdoor mean radiant temperature under the Malaysian tropical microclimate. The globe thermometer method using small and large black-painted copper globes (50 mm, 150 mm) was used to estimate Tmrt and compare it with the reference Tmrt estimated by the integral radiation method. The results revealed that the globe thermometer method considerably overestimated Tmrt during the middle of the day and slightly underestimated it in the morning and late evening. The difference between the two methods was most obvious when the amount of incoming solar radiation was high. The results also showed that the effect of globe size on the estimated Tmrt is mostly small. However, the Tmrt estimated by the small globe showed a relatively large amount of scatter caused by rapid changes in radiation and wind speed.
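
    For reference, a commonly used forced-convection conversion (the ISO 7726 form) from globe temperature to Tmrt is sketched below; the globe diameter and emissivity defaults assume a standard matt-black globe and may differ from the instrumentation and parameterisation used in the study.

```python
# Forced-convection globe-to-Tmrt conversion (ISO 7726 form), illustrative defaults.
def mean_radiant_temp(tg_c, ta_c, va, diameter_m=0.15, emissivity=0.95):
    """tg_c: globe temp (deg C), ta_c: air temp (deg C), va: air speed (m/s)."""
    tg_k = tg_c + 273.0
    tmrt4 = tg_k ** 4 + (1.1e8 * va ** 0.6) / (emissivity * diameter_m ** 0.4) * (tg_c - ta_c)
    return tmrt4 ** 0.25 - 273.0

# Example: a 150 mm globe reading 42 degC, air at 33 degC, wind 1.2 m/s.
print(round(mean_radiant_temp(42.0, 33.0, 1.2), 1))
```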

  15. Comparative study on gene set and pathway topology-based enrichment methods.

    PubMed

    Bayerlová, Michaela; Jung, Klaus; Kramer, Frank; Klemm, Florian; Bleckmann, Annalen; Beißbarth, Tim

    2015-10-22

    Enrichment analysis is a popular approach to identify pathways or sets of genes which are significantly enriched in the context of differentially expressed genes. The traditional gene set enrichment approach considers a pathway as a simple gene list disregarding any knowledge of gene or protein interactions. In contrast, the new group of so called pathway topology-based methods integrates the topological structure of a pathway into the analysis. We comparatively investigated gene set and pathway topology-based enrichment approaches, considering three gene set and four topological methods. These methods were compared in two extensive simulation studies and on a benchmark of 36 real datasets, providing the same pathway input data for all methods. In the benchmark data analysis both types of methods showed a comparable ability to detect enriched pathways. The first simulation study was conducted with KEGG pathways, which showed considerable gene overlaps between each other. In this study with original KEGG pathways, none of the topology-based methods outperformed the gene set approach. Therefore, a second simulation study was performed on non-overlapping pathways created by unique gene IDs. Here, methods accounting for pathway topology reached higher accuracy than the gene set methods, however their sensitivity was lower. We conducted one of the first comprehensive comparative works on evaluating gene set against pathway topology-based enrichment methods. The topological methods showed better performance in the simulation scenarios with non-overlapping pathways, however, they were not conclusively better in the other scenarios. This suggests that simple gene set approach might be sufficient to detect an enriched pathway under realistic circumstances. Nevertheless, more extensive studies and further benchmark data are needed to systematically evaluate these methods and to assess what gain and cost pathway topology information introduces into enrichment analysis. Both types of methods for enrichment analysis require further improvements in order to deal with the problem of pathway overlaps.
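
    The "simple gene list" approach referred to above is commonly implemented as a one-sided hypergeometric (over-representation) test; a minimal sketch with hypothetical gene sets is given below.

```python
# Over-representation (hypergeometric) test for one pathway treated as a plain gene list.
from scipy.stats import hypergeom

universe = 20000                                   # genes measured
de_genes = set(f"g{i}" for i in range(0, 400))     # differentially expressed genes (hypothetical)
pathway  = set(f"g{i}" for i in range(350, 430))   # pathway as a plain gene list (hypothetical)

overlap = len(de_genes & pathway)
# P(X >= overlap) with population M=universe, n=len(pathway) "successes", N=len(de_genes) draws.
p_value = hypergeom.sf(overlap - 1, universe, len(pathway), len(de_genes))
print(f"overlap = {overlap}, p = {p_value:.3g}")
```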

  16. Searching mixed DNA profiles directly against profile databases.

    PubMed

    Bright, Jo-Anne; Taylor, Duncan; Curran, James; Buckleton, John

    2014-03-01

    DNA databases have revolutionised forensic science. They are a powerful investigative tool as they have the potential to identify persons of interest in criminal investigations. Routinely, a DNA profile generated from a crime sample could only be searched for in a database of individuals if the stain was from single contributor (single source) or if a contributor could unambiguously be determined from a mixed DNA profile. This meant that a significant number of samples were unsuitable for database searching. The advent of continuous methods for the interpretation of DNA profiles offers an advanced way to draw inferential power from the considerable investment made in DNA databases. Using these methods, each profile on the database may be considered a possible contributor to a mixture and a likelihood ratio (LR) can be formed. Those profiles which produce a sufficiently large LR can serve as an investigative lead. In this paper empirical studies are described to determine what constitutes a large LR. We investigate the effect on a database search of complex mixed DNA profiles with contributors in equal proportions with dropout as a consideration, and also the effect of an incorrect assignment of the number of contributors to a profile. In addition, we give, as a demonstration of the method, the results using two crime samples that were previously unsuitable for database comparison. We show that effective management of the selection of samples for searching and the interpretation of the output can be highly informative. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  17. Possibilistic clustering for shape recognition

    NASA Technical Reports Server (NTRS)

    Keller, James M.; Krishnapuram, Raghu

    1992-01-01

    Clustering methods have been used extensively in computer vision and pattern recognition. Fuzzy clustering has been shown to be advantageous over crisp (or traditional) clustering in that total commitment of a vector to a given class is not required at each iteration. Recently fuzzy clustering methods have shown spectacular ability to detect not only hypervolume clusters, but also clusters which are actually 'thin shells', i.e., curves and surfaces. Most analytic fuzzy clustering approaches are derived from Bezdek's Fuzzy C-Means (FCM) algorithm. The FCM uses the probabilistic constraint that the memberships of a data point across classes sum to one. This constraint was used to generate the membership update equations for an iterative algorithm. Unfortunately, the memberships resulting from FCM and its derivatives do not correspond to the intuitive concept of degree of belonging, and moreover, the algorithms have considerable trouble in noisy environments. Recently, we cast the clustering problem into the framework of possibility theory. Our approach was radically different from the existing clustering methods in that the resulting partition of the data can be interpreted as a possibilistic partition, and the membership values may be interpreted as degrees of possibility of the points belonging to the classes. We constructed an appropriate objective function whose minimum will characterize a good possibilistic partition of the data, and we derived the membership and prototype update equations from necessary conditions for minimization of our criterion function. In this paper, we show the ability of this approach to detect linear and quartic curves in the presence of considerable noise.
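
    A minimal sketch of the possibilistic (typicality-based) membership update is given below, with the cluster scale parameters η set by a common heuristic rather than the FCM-based estimate of the original formulation; the data and initial centres are synthetic, and the noisy block illustrates why typicalities, unlike probabilistic memberships, are not forced to sum to one.

```python
# Possibilistic c-means style update: typicality t = 1 / (1 + (d^2/eta)^(1/(m-1))).
import numpy as np

rng = np.random.default_rng(2)
X = np.vstack([rng.normal([0, 0], 0.3, (50, 2)),
               rng.normal([3, 3], 0.3, (50, 2)),
               rng.normal([8, 1], 3.0, (10, 2))])      # last block acts as noise
m = 2.0
centres = np.array([[0.5, 0.5], [2.5, 2.5]], float)    # arbitrary initial prototypes

for _ in range(30):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)   # (n, c) squared distances
    eta = d2.mean(axis=0)                                        # heuristic scale per cluster
    typ = 1.0 / (1.0 + (d2 / eta) ** (1.0 / (m - 1.0)))          # typicalities in (0, 1]
    w = typ ** m
    centres = (w.T @ X) / w.sum(axis=0)[:, None]                 # typicality-weighted means

print(np.round(centres, 2))
```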

  18. Influence of polygonal wear of railway wheels on the wheel set axle stress

    NASA Astrophysics Data System (ADS)

    Wu, Xingwen; Chi, Maoru; Wu, Pingbo

    2015-11-01

    The coupled vehicle/track dynamic model with the flexible wheel set was developed to investigate the effects of polygonal wear on the dynamic stresses of the wheel set axle. In the model, the railway vehicle was modelled by the rigid multibody dynamics. The wheel set was established by the finite element method to analyse the high-frequency oscillation and dynamic stress of wheel set axle induced by the polygonal wear based on the modal stress recovery method. The slab track model was taken into account in which the rail was described by the Timoshenko beam and the three-dimensional solid finite element was employed to establish the concrete slab. Furthermore, the modal superposition method was adopted to calculate the dynamic response of the track. The wheel/rail normal forces and the tangent forces were, respectively, determined by the Hertz nonlinear contact theory and the Shen-Hedrick-Elkins model. Using the coupled vehicle/track dynamic model, the dynamic stresses of wheel set axle with consideration of the ideal polygonal wear and measured polygonal wear were investigated. The results show that the amplitude of wheel/rail normal forces and the dynamic stress of wheel set axle increase as the vehicle speeds rise. Moreover, the impact loads induced by the polygonal wear could excite the resonance of wheel set axle. In the resonance region, the amplitude of the dynamic stress for the wheel set axle would increase considerably comparing with the normal conditions.

  19. Optimization of the pepsin digestion method for anisakids inspection in the fishing industry.

    PubMed

    Llarena-Reino, María; Piñeiro, Carmen; Antonio, José; Outeriño, Luis; Vello, Carlos; González, Ángel F; Pascual, Santiago

    2013-01-31

    During the last 50 years human anisakiasis has been rising, while the parasites have increased their prevalence at certain fisheries, becoming an emerging major public health problem. Although the artificial enzymatic digestion procedure by CODEX (STAN 244-2004: standard for salted Atlantic herring and salted sprat) is the recommended protocol for anisakid inspection, no international agreement has been achieved on veterinary and scientific digestion protocols to regulate this growing source of biological hazard in fish products. The aim of this work was to optimize the current artificial digestion protocol by CODEX with the purpose of offering a faster, more useful and safer procedure for factory workers than the current one for anisakid detection. To achieve these objectives, the existing pepsin chemicals and the conditions of the digestion method were evaluated and assayed in fresh and frozen samples, both in lean and fatty fish species. Results showed that the new digestion procedure considerably reduces the assay time, and that it is handier and more efficient (the quantity of the resulting residue was considerably lower after less time) than the widely used CODEX procedure. In conclusion, the new digestion method herein proposed, based on a liquid pepsin format, is an accurate, reproducible and user-friendly off-site tool that can be useful in the implementation of screening programs for the prevention of human anisakiasis (and associated gastroallergic disorders) due to the consumption of raw or undercooked contaminated seafood products. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. Predicting the optimal process window for the coating of single-crystalline organic films with mobilities exceeding 7 cm²/Vs.

    NASA Astrophysics Data System (ADS)

    Janneck, Robby; Vercesi, Federico; Heremans, Paul; Genoe, Jan; Rolin, Cedric

    2016-09-01

    Organic thin film transistors (OTFTs) based on single-crystalline thin films of organic semiconductors have seen considerable development in recent years. The most successful methods for the fabrication of single-crystalline films are solution-based meniscus-guided coating techniques such as dip-coating, solution shearing or zone casting. These upscalable methods enable rapid and efficient film formation without additional processing steps. The single-crystalline film quality is strongly dependent on solvent choice, substrate temperature and coating speed. So far, however, process optimization has been conducted by trial-and-error methods, involving, for example, the variation of coating speeds over several orders of magnitude. Through a systematic study of solvent phase change dynamics in the meniscus region, we develop a theoretical framework that links the optimal coating speed to the solvent choice and the substrate temperature. In this way, we can accurately predict an optimal processing window, enabling fast process optimization. Our approach is verified through systematic OTFT fabrication based on films grown with different semiconductors, solvents and substrate temperatures. The use of the best predicted coating speeds delivers state-of-the-art devices. In the case of C8BTBT, OTFTs show well-behaved characteristics with mobilities up to 7 cm²/Vs and onset voltages close to 0 V. Our approach also explains well the optimal recipes published in the literature. This route considerably accelerates parameter screening for all meniscus-guided coating techniques and unveils the physics of single-crystalline film formation.

  1. Modeling of heat transfer in a vascular tissue-like medium during an interstitial hyperthermia process.

    PubMed

    Hassanpour, Saeid; Saboonchi, Ahmad

    2016-12-01

    This paper aims to evaluate the role of small vessels in the heat transfer mechanisms of a tissue-like medium during local intensive heating processes, for example, an interstitial hyperthermia treatment. To this purpose, a cylindrical tissue with two co- and counter-current vascular networks and a central heat source is introduced. Next, the energy equations of tissue, supply fluid (arterial blood), and return fluid (venous blood) are derived using a porous media approach. Then, a 2D computer code is developed to predict the temperature of blood (fluid phase) and tissue (solid phase) by the conventional volume averaging method and by a more realistic solution method. In the latter method, despite the volume averaging, the blood of the interconnecting capillaries is separated from the arterial and venous blood phases. It is found that, in addition to the blood perfusion rate, the arrangement of the vascular network has considerable effects on the pattern and magnitude of the achieved temperature. In contrast to the counter-current network, the co-current network of vessels leads to a considerably asymmetric pattern of temperature contours and to relocation of the heat-affected zone along the blood flow direction. However, this relocation can be prevented by changing the site of the hyperthermia heat source. The results show that the cooling effect of co-current blood vessels during interstitial heating is more efficient. Despite many anatomical dissimilarities, these findings can be useful in designing protocols for hyperthermia cancer treatment of living tissue. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Desirability function combining metabolic stability and functionality of peptides.

    PubMed

    Van Dorpe, Sylvia; Adriaens, Antita; Vermeire, Simon; Polis, Ingeborgh; Peremans, Kathelijne; Spiegeleer, Bart De

    2011-05-01

    The evaluation of peptides as potential therapeutic or diagnostic agents requires the consideration of several criteria that are targeted around two axes: functionality and metabolic stability. Most often, a compromise has to be made between these mutually opposing characteristics. In this study, Derringer's desirability function, a multi-criteria decision-making method, was applied to determine the best peptide for opioid studies in a single figure-of-merit. The penetration of the blood-brain barrier (BBB) determines the biological functionality of neuropeptides in the brain target tissue, and consists of an influx and an efflux component. The metabolic stability in the two concerned tissues, i.e. plasma and brain, is taken into consideration as well. The overall selection of the peptide drug candidate having the highest BBB-drugability is difficult due to these conflicting responses as well as the different scalings of the four biological parameters under consideration. The highest desirability, representing the best BBB-drugability, was observed for dermorphin. This peptide is thus the most promising drug candidate from the set of eight opioid peptides that were investigated. The least desirable candidate, with the worst BBB influx and/or metabolic stability, was found to be CTAP. Validation of the desirability function by in vivo medical imaging showed that dermorphin and DAMGO penetrate the BBB, whereas EM-1 and TAPP did not. These results are thus consistent with those obtained with the desirability evaluation. To conclude, the multi-criteria decision method was proven to be useful in biomedical research, where a selection of the best candidate based on opposing characteristics is often required. Copyright © 2011 European Peptide Society and John Wiley & Sons, Ltd.
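
    A minimal numerical sketch (not the authors' implementation) of Derringer's approach: each response is rescaled to a desirability between 0 and 1, and the overall score is their geometric mean. The four criteria and the numerical values below are hypothetical.

        import numpy as np

        def desirability_larger_is_better(y, low, high, s=1.0):
            """Map a response y to [0, 1]; values <= low get 0, values >= high get 1."""
            d = (np.asarray(y, dtype=float) - low) / (high - low)
            return np.clip(d, 0.0, 1.0) ** s

        def overall_desirability(d_values):
            """Derringer's composite score: geometric mean of the individual desirabilities."""
            d = np.asarray(d_values, dtype=float)
            return 0.0 if np.any(d == 0) else float(np.prod(d) ** (1.0 / d.size))

        # Hypothetical peptide with four rescaled criteria:
        # BBB influx, BBB efflux (already inverted), plasma stability, brain stability.
        d_influx = desirability_larger_is_better(0.8, 0.0, 1.0)
        d_efflux = desirability_larger_is_better(0.6, 0.0, 1.0)
        d_plasma = desirability_larger_is_better(0.9, 0.0, 1.0)
        d_brain  = desirability_larger_is_better(0.7, 0.0, 1.0)
        print(overall_desirability([d_influx, d_efflux, d_plasma, d_brain]))  # about 0.74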

  3. Validating the Operational Bias and Hypothesis of Universal Exponent in Landslide Frequency-Area Distribution

    PubMed Central

    Huang, Jr-Chuan; Lee, Tsung-Yu; Teng, Tse-Yang; Chen, Yi-Chin; Huang, Cho-Ying; Lee, Cheing-Tung

    2014-01-01

    The exponent decay in the landslide frequency-area distribution is widely used for assessing the consequences of landslides, and some studies argue that the slope of the exponent decay is universal and independent of mechanisms and environmental settings. However, the documented exponent slopes are diverse, and data processing is hypothesized to be a source of this inconsistency. An elaborated statistical experiment and two actual landslide inventories were used here to demonstrate the influence of data processing on the determination of the exponent. Seven categories with different landslide numbers were generated from a predefined inverse-gamma distribution and then analyzed by three data processing procedures (logarithmic binning, LB; normalized logarithmic binning, NLB; and the cumulative distribution function, CDF). Five different bin widths were also considered while applying LB and NLB. Following that, maximum likelihood estimation was used to estimate the exponent slopes. The results showed that the exponents estimated by CDF were unbiased, while LB and NLB performed poorly. The two binning-based methods led to considerable biases that increased with landslide number and bin width. The standard deviations of the estimated exponents depended not just on the landslide number but also on the binning method and bin width. Both extremely small and extremely large landslide numbers reduced the confidence of the estimated exponents, which could be attributed to limited landslide numbers and considerable operational bias, respectively. The diverse exponents documented in the literature should therefore be adjusted accordingly. Our study strongly suggests that the considerable bias due to data processing and the data quality should be constrained in order to advance the understanding of landslide processes. PMID:24852019
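
    A toy sketch of why the CDF/maximum-likelihood route avoids binning bias, assuming for simplicity a pure power-law tail rather than the paper's inverse-gamma distribution; the estimator and the synthetic data below are illustrative only.

        import numpy as np

        def mle_powerlaw_exponent(areas, a_min):
            """Continuous MLE (Hill estimator) for p(A) ~ A^(-beta), A >= a_min.
            Works directly on the unbinned data, avoiding the bias of log-binned fits."""
            a = np.asarray(areas, dtype=float)
            a = a[a >= a_min]
            beta = 1.0 + a.size / np.sum(np.log(a / a_min))
            stderr = (beta - 1.0) / np.sqrt(a.size)
            return beta, stderr

        rng = np.random.default_rng(0)
        # Synthetic landslide areas with a power-law tail, beta = 2.4 above a_min = 1e3 m^2
        true_beta, a_min = 2.4, 1e3
        areas = a_min * (1.0 - rng.random(5000)) ** (-1.0 / (true_beta - 1.0))
        print(mle_powerlaw_exponent(areas, a_min))   # estimated beta close to 2.4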

  4. Algorithm for Video Summarization of Bronchoscopy Procedures

    PubMed Central

    2011-01-01

    Background The duration of bronchoscopy examinations varies considerably depending on the diagnostic and therapeutic procedures used. It can last more than 20 minutes if a complex diagnostic work-up is included. With wide access to videobronchoscopy, the whole procedure can be recorded as a video sequence. Common practice relies on an active attitude of the bronchoscopist who initiates the recording process and usually chooses to archive only selected views and sequences. However, it may be important to record the full bronchoscopy procedure as documentation when liability issues are at stake. Furthermore, an automatic recording of the whole procedure enables the bronchoscopist to focus solely on the performed procedures. Video recordings registered during bronchoscopies include a considerable number of frames of poor quality due to blurry or unfocused images. It seems that such frames are unavoidable due to the relatively tight endobronchial space, rapid movements of the respiratory tract due to breathing or coughing, and secretions which occur commonly in the bronchi, especially in patients suffering from pulmonary disorders. Methods The use of recorded bronchoscopy video sequences for diagnostic, reference and educational purposes could be considerably extended with efficient, flexible summarization algorithms. Thus, the authors developed a prototype system to create shortcuts (called summaries or abstracts) of bronchoscopy video recordings. Such a system, based on models described in previously published papers, employs image analysis methods to exclude frames or sequences of limited diagnostic or education value. Results The algorithm for the selection or exclusion of specific frames or shots from video sequences recorded during bronchoscopy procedures is based on several criteria, including automatic detection of "non-informative" frames, frames showing the branching of the airways, and frames including pathological lesions. Conclusions The paper focuses on the challenge of generating summaries of bronchoscopy video recordings. PMID:22185344

  5. Microfluidic Chip-Based Detection and Intraspecies Strain Discrimination of Salmonella Serovars Derived from Whole Blood of Septic Mice

    PubMed Central

    Patterson, Adriana S.; Heithoff, Douglas M.; Ferguson, Brian S.; Soh, H. Tom; Mahan, Michael J.

    2013-01-01

    Salmonella is a zoonotic pathogen that poses a considerable public health and economic burden in the United States and worldwide. Resultant human diseases range from enterocolitis to bacteremia to sepsis and are acutely dependent on the particular serovar of Salmonella enterica subsp. enterica, which comprises over 99% of human-pathogenic S. enterica isolates. Point-of-care methods for detection and strain discrimination of Salmonella serovars would thus have considerable benefit to medical, veterinary, and field applications that safeguard public health and reduce industry-associated losses. Here we describe a single, disposable microfluidic chip that supports isothermal amplification and sequence-specific detection and discrimination of Salmonella serovars derived from whole blood of septic mice. The integrated microfluidic electrochemical DNA (IMED) chip consists of an amplification chamber that supports loop-mediated isothermal amplification (LAMP), a rapid, single-temperature amplification method as an alternative to PCR that offers advantages in terms of sensitivity, reaction speed, and amplicon yield. The amplification chamber is connected via a microchannel to a detection chamber containing a reagentless, multiplexed (here biplex) sensing array for sequence-specific electrochemical DNA (E-DNA) detection of the LAMP products. Validation of the IMED device was assessed by the detection and discrimination of S. enterica subsp. enterica serovars Typhimurium and Choleraesuis, the causative agents of enterocolitis and sepsis in humans, respectively. IMED chips conferred rapid (under 2 h) detection and discrimination of these strains at clinically relevant levels (<1,000 CFU/ml) from whole, unprocessed blood collected from septic animals. The IMED-based chip assay shows considerable promise as a rapid, inexpensive, and portable point-of-care diagnostic platform for the detection and strain-specific discrimination of microbial pathogens. PMID:23354710

  6. METHODS FOR INTEGRATING ENVIRONMENTAL CONSIDERATIONS INTO CHEMICAL PROCESS DESIGN DECISIONS

    EPA Science Inventory

    The objective of this cooperative agreement was to postulate a means by which an engineer could routinely include environmental considerations in day-to-day conceptual design problems; a means that could easily integrate with existing design processes, and thus avoid massive retr...

  7. Evolutionary Construction of Block-Based Neural Networks in Consideration of Failure

    NASA Astrophysics Data System (ADS)

    Takamori, Masahito; Koakutsu, Seiichi; Hamagami, Tomoki; Hirata, Hironori

    In this paper we propose a modified gene coding and an evolutionary construction procedure that takes failure into consideration for Block-Based Neural Networks (BBNNs). In the modified gene coding, the weight genes are arranged on a chromosome according to the positional relationship between the weight genes and the structure genes. This arrangement increases the efficiency of the search performed by crossover, and is therefore expected to improve the convergence rate of construction and to shorten the construction time. In the evolutionary construction in consideration of failure, a structure adapted to the failure is built in the state where the failure has occurred, so that the BBNN can be reconstructed in a short time when a failure happens. To evaluate the proposed method, we apply it to pattern classification and autonomous mobile robot control problems. The computational experiments indicate that the proposed method improves the convergence rate of construction and shortens the construction and reconstruction times.

  8. Predicting protein contact map using evolutionary and physical constraints by integer programming.

    PubMed

    Wang, Zhiyong; Xu, Jinbo

    2013-07-01

    Protein contact map describes the pairwise spatial and functional relationship of residues in a protein and contains key information for protein 3D structure prediction. Although studied extensively, it remains challenging to predict the contact map using only sequence information. Most existing methods predict the contact map matrix element-by-element, ignoring the correlation among contacts and the physical feasibility of the whole contact map. A couple of recent methods predict the contact map using mutual information, taking contact correlation into consideration and enforcing a sparsity restraint, but these methods demand a very large number of sequence homologs for the protein under consideration, and the resultant contact map may still be physically infeasible. This article presents a novel method, PhyCMAP, for contact map prediction, integrating both evolutionary and physical restraints by machine learning and integer linear programming. The evolutionary restraints are much more informative than mutual information, and the physical restraints specify more concrete relationships among contacts than the sparsity restraint. As such, our method greatly reduces the solution space of the contact map matrix and, thus, significantly improves prediction accuracy. Experimental results confirm that PhyCMAP outperforms currently popular methods no matter how many sequence homologs are available for the protein under consideration. PhyCMAP is available at http://raptorx.uchicago.edu.

  9. Exact and Metaheuristic Approaches for a Bi-Objective School Bus Scheduling Problem.

    PubMed

    Chen, Xiaopan; Kong, Yunfeng; Dang, Lanxue; Hou, Yane; Ye, Xinyue

    2015-01-01

    As a class of hard combinatorial optimization problems, the school bus routing problem has received considerable attention in the last decades. For a multi-school system, given the bus trips for each school, the school bus scheduling problem aims at optimizing bus schedules to serve all the trips within the school time windows. In this paper, we propose two approaches for solving the bi-objective school bus scheduling problem: an exact method based on mixed integer programming (MIP) and a metaheuristic method which combines simulated annealing with local search. We develop MIP formulations for the homogeneous and heterogeneous fleet problems respectively and solve the models with the MIP solver CPLEX. The bus-type-based formulation for the heterogeneous fleet problem reduces the model complexity in terms of the number of decision variables and constraints. The metaheuristic method is a two-stage framework for minimizing the number of buses to be used as well as the total travel distance of the buses. We evaluate the proposed MIP and the metaheuristic method on two benchmark datasets, showing that on both, our metaheuristic method significantly outperforms the respective state-of-the-art methods.

  10. Self-Developed Testing System for Determining the Temperature Behavior of Concrete.

    PubMed

    Zhu, He; Li, Qingbin; Hu, Yu

    2017-04-16

    Cracking due to temperature and restraint in mass concrete is an important issue. A temperature stress testing machine (TSTM) provides an effective test method for studying the mechanism of temperature cracking. A synchronous closed-loop federated control TSTM system has been developed by adopting the design concepts of closed-loop federated control, a detachable mold design, a direct deformation measurement method, and a temperature deformation compensation method. The results show that the self-developed system is capable of simulating different restraint degrees, multiple temperature and humidity modes, and closed-loop control of multiple TSTMs during one test period. Additionally, the direct deformation measurement method can obtain more accurate deformation and restraint degree results with little local damage. The external temperature deformation affecting the concrete specimen can be eliminated by adopting the temperature deformation compensation method with different considerations of the steel materials. The concrete quality in different TSTMs is guaranteed by vibrating the specimens synchronously on a vibrating stand. The detachable mold design and assembly method largely overcome the difficulties of eccentric force and deformation.

  11. Self-Developed Testing System for Determining the Temperature Behavior of Concrete

    PubMed Central

    Zhu, He; Li, Qingbin; Hu, Yu

    2017-01-01

    Cracking due to temperature and restraint in mass concrete is an important issue. A temperature stress testing machine (TSTM) provides an effective test method for studying the mechanism of temperature cracking. A synchronous closed-loop federated control TSTM system has been developed by adopting the design concepts of closed-loop federated control, a detachable mold design, a direct deformation measurement method, and a temperature deformation compensation method. The results show that the self-developed system is capable of simulating different restraint degrees, multiple temperature and humidity modes, and closed-loop control of multiple TSTMs during one test period. Additionally, the direct deformation measurement method can obtain more accurate deformation and restraint degree results with little local damage. The external temperature deformation affecting the concrete specimen can be eliminated by adopting the temperature deformation compensation method with different considerations of the steel materials. The concrete quality in different TSTMs is guaranteed by vibrating the specimens synchronously on a vibrating stand. The detachable mold design and assembly method largely overcome the difficulties of eccentric force and deformation. PMID:28772778

  12. The effect of evaporative air chilling and storage temperature on quality and shelf life of fresh chicken carcasses.

    PubMed

    Mielnik, M B; Dainty, R H; Lundby, F; Mielnik, J

    1999-07-01

    The effect of evaporative air chilling on the quality of fresh chicken carcasses was compared with air chilling as the reference method. Cooling efficiency and total heat loss were significantly higher for evaporative air chilling. The chilling method was of great importance for weight loss: chicken chilled in cold air lost considerably more weight than chicken cooled by evaporative air chilling, the difference being 1.8%. The chilling method also affected the skin color and the amount of moisture on the skin surface. After evaporative air chilling, the chicken carcasses had a lighter color and more water on the back and under the wings. The moisture content in skin and meat, cooking loss, and pH were not affected by the chilling method. Odor attributes of raw chicken and odor and flavor attributes of cooked chicken did not show any significant differences between the two chilling methods. The shelf life of chicken stored at 4 and -1 °C was not affected significantly by the chilling method. Storage time and temperature appeared to be the decisive factors for the sensory and microbiological quality of fresh chicken carcasses.

  13. A new measuring method for motion accuracy of 3-axis NC equipments based on composite trajectory of circle and non-circle

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Du, Zhengchun; Yang, Jiangguo; Hong, Maisheng

    2011-12-01

    Geometric motion error measurement has been considered an important task for accuracy enhancement and quality assurance of NC machine tools and CMMs. In consideration of the disadvantages of traditional measuring methods, a new measuring method for the motion accuracy of 3-axis NC equipment, based on a composite trajectory including circular and non-circular (straight-line and/or polygonal-line) segments, is proposed. The principles and techniques of the new measuring method are discussed in detail. Eight feasible measuring strategies based on different measuring groupings are summarized and optimized. The experiment for the most preferable strategy is carried out on the 3-axis CNC vertical machining center Cincinnati 750 Arrow using a cross grid encoder. The whole measuring time for the 21 error components of the new method is cut down to 1-2 h because of easy installation, adjustment and operation, and the characteristics of non-contact measurement. Results show that the new method is suitable for on-machine measurement and has good prospects for wide application.

  14. MLACP: machine-learning-based prediction of anticancer peptides

    PubMed Central

    Manavalan, Balachandran; Basith, Shaherin; Shin, Tae Hwan; Choi, Sun; Kim, Myeong Ok; Lee, Gwang

    2017-01-01

    Cancer is the second leading cause of death globally, and use of therapeutic peptides to target and kill cancer cells has received considerable attention in recent years. Identification of anticancer peptides (ACPs) through wet-lab experimentation is expensive and often time consuming; therefore, development of an efficient computational method is essential to identify potential ACP candidates prior to in vitro experimentation. In this study, we developed support vector machine- and random forest-based machine-learning methods for the prediction of ACPs using the features calculated from the amino acid sequence, including amino acid composition, dipeptide composition, atomic composition, and physicochemical properties. We trained our methods using the Tyagi-B dataset and determined the machine parameters by 10-fold cross-validation. Furthermore, we evaluated the performance of our methods on two benchmarking datasets, with our results showing that the random forest-based method outperformed the existing methods with an average accuracy and Matthews correlation coefficient value of 88.7% and 0.78, respectively. To assist the scientific community, we also developed a publicly accessible web server at www.thegleelab.org/MLACP.html. PMID:29100375
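
    A minimal sketch of the composition-plus-random-forest idea, assuming toy peptide sequences and labels; the actual MLACP feature set (dipeptide and atomic composition, physicochemical properties) and the Tyagi-B training data are not reproduced here.

        from collections import Counter
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

        def aa_composition(seq):
            """Fraction of each of the 20 standard amino acids in a peptide."""
            counts = Counter(seq)
            return [counts.get(a, 0) / len(seq) for a in AMINO_ACIDS]

        # Toy peptides and labels (1 = anticancer, 0 = non-anticancer); illustrative only.
        peptides = ["KWKLFKKIEKVGQNIRDGIIKAGPAVAVVGQATQIAK", "GIGKFLHSAKKFGKAFVGEIMNS",
                    "ALWKTMLKKLGTMALHAGKAALGAAADTISQGTQ", "SDEEVDEMLANMDLELQKYAESFRK"] * 10
        labels = [1, 1, 1, 0] * 10

        X = np.array([aa_composition(p) for p in peptides])
        y = np.array(labels)

        clf = RandomForestClassifier(n_estimators=200, random_state=1)
        # The paper uses 10-fold cross-validation; 5 folds here for the tiny toy set.
        print(cross_val_score(clf, X, y, cv=5).mean())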

  15. A Cross-Lingual Similarity Measure for Detecting Biomedical Term Translations

    PubMed Central

    Bollegala, Danushka; Kontonatsios, Georgios; Ananiadou, Sophia

    2015-01-01

    Bilingual dictionaries for technical terms such as biomedical terms are an important resource for machine translation systems as well as for humans who would like to understand a concept described in a foreign language. Often a biomedical term is first proposed in English and later it is manually translated to other languages. Despite the fact that there are large monolingual lexicons of biomedical terms, only a fraction of those term lexicons are translated to other languages. Manually compiling large-scale bilingual dictionaries for technical domains is a challenging task because it is difficult to find a sufficiently large number of bilingual experts. We propose a cross-lingual similarity measure for detecting most similar translation candidates for a biomedical term specified in one language (source) from another language (target). Specifically, a biomedical term in a language is represented using two types of features: (a) intrinsic features that consist of character n-grams extracted from the term under consideration, and (b) extrinsic features that consist of unigrams and bigrams extracted from the contextual windows surrounding the term under consideration. We propose a cross-lingual similarity measure using each of those feature types. First, to reduce the dimensionality of the feature space in each language, we propose prototype vector projection (PVP)—a non-negative lower-dimensional vector projection method. Second, we propose a method to learn a mapping between the feature spaces in the source and target language using partial least squares regression (PLSR). The proposed method requires only a small number of training instances to learn a cross-lingual similarity measure. The proposed PVP method outperforms popular dimensionality reduction methods such as the singular value decomposition (SVD) and non-negative matrix factorization (NMF) in a nearest neighbor prediction task. Moreover, our experimental results covering several language pairs such as English–French, English–Spanish, English–Greek, and English–Japanese show that the proposed method outperforms several other feature projection methods in biomedical term translation prediction tasks. PMID:26030738
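
    A rough sketch of the PLSR mapping step only, using random stand-in feature matrices; the PVP projection and the real character n-gram and contextual features are not reproduced here.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.metrics.pairwise import cosine_similarity

        rng = np.random.default_rng(42)

        # Toy data: 50 known translation pairs, 300-dim source features, 200-dim target features.
        n_pairs, d_src, d_tgt = 50, 300, 200
        X_src = rng.random((n_pairs, d_src))    # e.g. character n-gram counts of source terms
        Y_tgt = rng.random((n_pairs, d_tgt))    # features of their known translations

        pls = PLSRegression(n_components=10)
        pls.fit(X_src, Y_tgt)                   # learn the cross-lingual mapping from few pairs

        # Rank target-language candidates for a new source term by cosine similarity.
        query = rng.random((1, d_src))
        projected = pls.predict(query)          # map the query into the target feature space
        candidates = rng.random((20, d_tgt))    # feature vectors of candidate translations
        ranking = np.argsort(-cosine_similarity(projected, candidates)[0])
        print(ranking[:5])                      # indices of the five most similar candidates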

  16. A New Multiconstraint Method for Determining the Optimal Cable Stresses in Cable-Stayed Bridges

    PubMed Central

    Asgari, B.; Osman, S. A.; Adnan, A.

    2014-01-01

    Cable-stayed bridges are one of the most popular types of long-span bridges. The structural behaviour of cable-stayed bridges is sensitive to the load distribution between the girder, pylons, and cables. The determination of pretensioning cable stresses is critical in the cable-stayed bridge design procedure. By finding the optimum stresses in the cables, the load and moment distribution of the bridge can be improved. In recent years, different research works have studied iterative and modern methods to find the optimum cable stresses. However, most of the proposed methods have limitations in optimising the structural performance of cable-stayed bridges. This paper presents a multiconstraint optimisation method to specify the optimum cable forces in cable-stayed bridges. The proposed optimisation method produces lower bending moments and stresses in the bridge members and requires shorter simulation time than other proposed methods. The results of the comparative study show that the proposed method is more successful in restricting the deck and pylon displacements and providing a uniform deck moment distribution than the unit load method (ULM). The final design of cable-stayed bridges can be optimised considerably through the proposed multiconstraint optimisation method. PMID:25050400

  17. A new multiconstraint method for determining the optimal cable stresses in cable-stayed bridges.

    PubMed

    Asgari, B; Osman, S A; Adnan, A

    2014-01-01

    Cable-stayed bridges are one of the most popular types of long-span bridges. The structural behaviour of cable-stayed bridges is sensitive to the load distribution between the girder, pylons, and cables. The determination of pretensioning cable stresses is critical in the cable-stayed bridge design procedure. By finding the optimum stresses in the cables, the load and moment distribution of the bridge can be improved. In recent years, different research works have studied iterative and modern methods to find the optimum cable stresses. However, most of the proposed methods have limitations in optimising the structural performance of cable-stayed bridges. This paper presents a multiconstraint optimisation method to specify the optimum cable forces in cable-stayed bridges. The proposed optimisation method produces lower bending moments and stresses in the bridge members and requires shorter simulation time than other proposed methods. The results of the comparative study show that the proposed method is more successful in restricting the deck and pylon displacements and providing a uniform deck moment distribution than the unit load method (ULM). The final design of cable-stayed bridges can be optimised considerably through the proposed multiconstraint optimisation method.

  18. Copper Nanowires and Their Applications for Flexible, Transparent Conducting Films: A Review

    PubMed Central

    Nam, Vu Binh; Lee, Daeho

    2016-01-01

    Cu nanowires (NWs) are attracting considerable attention as alternatives to Ag NWs for next-generation transparent conductors, replacing indium tin oxide (ITO) and micro metal grids. Cu NWs hold great promise for low-cost fabrication via a solution-processed route and show excellent optical, electrical, and mechanical properties. In this review, we summarize recent advances in research on Cu NWs, covering their optoelectronic properties, synthesis routes, deposition methods for fabricating flexible transparent conducting films, and their potential applications. This review also examines approaches to protecting Cu NWs from oxidation in ambient air. PMID:28344304

  19. Design of distributed systems of hydrolithosphere processes management. A synthesis of distributed management systems

    NASA Astrophysics Data System (ADS)

    Pershin, I. M.; Pervukhin, D. A.; Ilyushin, Y. V.; Afanaseva, O. V.

    2017-10-01

    The paper considers the important problem of designing distributed systems for the management of hydrolithosphere processes. The control actions on the hydrolithosphere processes under consideration are implemented by a set of extractive wells. The article shows the method of defining the approximation links for describing the dynamic characteristics of hydrolithosphere processes. The structure of the distributed regulators used in the management systems for the considered processes is presented. The paper analyses the results of the synthesis of the distributed management system and the results of modelling the closed-loop control system with respect to the parameters of the hydrolithosphere process.

  20. Crisis as opportunity: international health work during the economic depression.

    PubMed

    Borowy, Iris

    2008-01-01

    The economic depression of the 1930s represented the most important economic and social crisis of its time. Surprisingly, its effect on health did not show in available morbidity and mortality rates. In 1932, the League of Nations Health Organisation embarked on a six-point program addressing statistical methods of measuring the effect and its influence on mental health and nutrition and establishing ways to safeguard public health through more efficient health systems. Some of these studies resulted in considerations of general relevance beyond crisis management. Unexpectedly, the crisis offered an opportunity to reconsider key concepts of individual and public health.

  1. Efficient many-party controlled teleportation of multiqubit quantum information via entanglement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang Chuiping; Department of Chemistry, University of Kansas, and Kansas Center for Advanced Scientific Computing, Lawrence, Kansas 66045; Chu, Shih-I

    2004-08-01

    We present a way to teleport multiqubit quantum information from a sender to a distant receiver via the control of many agents in a network. We show that the original state of each qubit can be restored by the receiver as long as all the agents collaborate. However, even if one agent does not cooperate, the receiver cannot fully recover the original state of each qubit. The method operates essentially through entangling quantum information during teleportation, in such a way that the required auxiliary qubit resources, local operation, and classical communication are considerably reduced for the present purpose.

  2. Equivalent circuit consideration of frequency-shift-type acceleration sensor

    NASA Astrophysics Data System (ADS)

    Sasaki, Yoshifumi; Sugawara, Sumio; Kudo, Subaru

    2018-07-01

    In this paper, an electrical equivalent circuit for the piezoelectrically driven frequency-shift-type acceleration sensor model is presented, and the equivalent circuit constants, including the effect of the axial force, are clarified for the first time. The results calculated by the finite element method are compared with measurements on a trial-production one-axis sensor. The results show that the analyzed values agree well with the measured ones, and that the equivalent circuit representation of the sensor is useful for electrical engineers to easily analyze the characteristics of such sensors.

  3. Numerical modeling of the strain of elastic rubber elements

    NASA Astrophysics Data System (ADS)

    Moskvichev, E. N.; Porokhin, A. V.; Shcherbakov, I. V.

    2017-11-01

    A comparative analysis of the results of experimental investigation of mechanical behavior of the rubber sample during biaxial compression testing and numerical simulation results obtained by the finite element method was carried out to determine the correctness of the model applied in the engineering calculations of elastic structural elements made of the rubber. The governing equation represents the five-parameter Mooney-Rivlin model with the constants determined from experimental data. The investigation results showed that these constants reliably describe the mechanical behavior of the material under consideration. The divergence of experimental and numerical results does not exceed 15%.

  4. Novel image encryption algorithm based on multiple-parameter discrete fractional random transform

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Dong, Taiji; Wu, Jianhua

    2010-08-01

    A new method of digital image encryption is presented by utilizing a new multiple-parameter discrete fractional random transform. Image encryption and decryption are performed based on the index additivity and multiple parameters of the multiple-parameter fractional random transform. The plaintext and ciphertext are respectively in the spatial domain and in the fractional domain determined by the encryption keys. The proposed algorithm can resist statistic analyses effectively. The computer simulation results show that the proposed encryption algorithm is sensitive to the multiple keys, and that it has considerable robustness, noise immunity and security.

  5. Optimization of the time-dependent traveling salesman problem with Monte Carlo methods.

    PubMed

    Bentner, J; Bauer, G; Obermair, G M; Morgenstern, I; Schneider, J

    2001-09-01

    A problem often considered in operations research and computational physics is the traveling salesman problem, in which a traveling salesperson has to find the shortest closed tour between a certain set of cities. This problem has been extended to more realistic scenarios, e.g., the "real" traveling salesperson has to take rush hours into consideration. We will show how this extended problem is treated with physical optimization algorithms. We will present results for a specific instance of Reinelt's library TSPLIB95, in which we define a zone with traffic jams in the afternoon.
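
    A toy sketch of the general approach, not Reinelt's TSPLIB95 instance: the travel time between cities depends on the departure time (a simple "rush hour" penalty), and a 2-opt simulated annealing loop, one of the physical optimization algorithms referred to, searches for a short time-dependent tour. All parameters below are illustrative.

        import math
        import random

        random.seed(0)
        N = 30
        cities = [(random.random(), random.random()) for _ in range(N)]

        def travel_time(i, j, depart):
            """Euclidean travel time, tripled when departing from a congested zone
            (x > 0.5) during the 'afternoon' half of each day; purely illustrative."""
            base = math.dist(cities[i], cities[j])
            rush_hour = (depart % 1.0) > 0.5 and cities[i][0] > 0.5
            return base * (3.0 if rush_hour else 1.0)

        def tour_time(tour):
            t = 0.0
            for a, b in zip(tour, tour[1:] + tour[:1]):
                t += travel_time(a, b, t)
            return t

        # Simulated annealing with 2-opt moves, a standard physical optimization scheme.
        tour = list(range(N))
        cost = tour_time(tour)
        temperature = 0.5
        for _ in range(20000):
            i, j = sorted(random.sample(range(N), 2))
            candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
            cand_cost = tour_time(candidate)
            if cand_cost < cost or random.random() < math.exp((cost - cand_cost) / temperature):
                tour, cost = candidate, cand_cost
            temperature *= 0.9997
        print(cost)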

  6. Eddy-current effect on resonant magnetoelectric coupling in magnetostrictive-piezoelectric laminated composites

    NASA Astrophysics Data System (ADS)

    Liu, Guoxi; Zhang, Chunli; Chen, Weiqiu; Dong, Shuxiang

    2013-07-01

    An analytical model of resonant magnetoelectric (ME) coupling in magnetostrictive (MS)-piezoelectric (PE) laminated composites, taking into account the eddy-current effect in the MS layer, is presented using the equivalent circuit method. Numerical calculations show that: (1) the eddy current has a strong effect on ME coupling in MS-PE laminated composites at the resonant frequency; and (2) the resonant ME coupling is therefore significantly dependent on the size of the ME laminated composite, which was neglected in most previous theoretical analyses. The results provide theoretical guidance for the practical engineering design, manufacture, and application of ME laminated composites and devices.

  7. The muscle biopsy technique. Historical and methodological considerations.

    PubMed

    Ekblom, B

    2017-05-01

    The muscle biopsy method is an important tool for clinical and scientific work. In this study, the two most used instruments, the Bergström needle and the Well-Blakesley conchotome, are described. The technique of using those instruments, risks, and other considerations are discussed. Finally, a few consequences and the error of the method for determining muscle fiber type, fiber area, substrates, and metabolites are presented. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  8. Active learning based segmentation of Crohn's disease from abdominal MRI.

    PubMed

    Mahapatra, Dwarikanath; Vos, Franciscus M; Buhmann, Joachim M

    2016-05-01

    This paper proposes a novel active learning (AL) framework, and combines it with semi-supervised learning (SSL) for segmenting Crohn's disease (CD) tissues from abdominal magnetic resonance (MR) images. Robust fully supervised learning (FSL) based classifiers require large amounts of labeled data covering different disease severities. Obtaining such data is time consuming and requires considerable expertise. SSL methods use a few labeled samples, and leverage the information from many unlabeled samples to train an accurate classifier. AL queries the labels of the most informative samples and maximizes the gain from the labeling effort. Our primary contribution is in designing a query strategy that combines novel context information with classification uncertainty and feature similarity. Combining SSL and AL gives a robust segmentation method that: (1) optimally uses few labeled samples and many unlabeled samples; and (2) requires lower training time. Experimental results show our method achieves higher segmentation accuracy than FSL methods with fewer samples and reduced training effort. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  9. An effective method for thallium bromide purification and research on crystal properties

    NASA Astrophysics Data System (ADS)

    Zheng, Zhiping; Meng, Fang; Gong, Shuping; Quan, Lin; Wang, Jing; Zhou, Dongxiang

    2012-06-01

    Thallium bromide (TlBr) is a promising candidate for room-temperature X- and gamma-ray detectors in view of its excellent intrinsic features. However, material purity and crystal quality concerns still limit the use of TlBr crystals as detectors. In this work, a combination of hydrothermal recrystallization (HR) and vacuum distillation (VD) methods was applied to purify TlBr salts prior to crystal growth. Trace impurities at the ppb/ppm level were determined by inductively coupled plasma mass spectrometry (ICP-MS). The results showed that the impurity concentrations of the TlBr salt decreased significantly after HR and VD purification, and high performance of the resultant TlBr crystal, in terms of electrical and optical properties, was achieved. The combination of HR and VD methods could produce purer material, with an order of magnitude higher resistivity and better optical quality, than either method used separately. The possible technological considerations affecting the parameters of the crystals are investigated.

  10. Clustering Categorical Data Using Community Detection Techniques

    PubMed Central

    2017-01-01

    With the advent of the k-modes algorithm, the toolbox for clustering categorical data has an efficient tool that scales linearly in the number of data items. However, random initialization of cluster centers in k-modes makes it hard to reach a good clustering without resorting to many trials. Recently proposed methods for better initialization are deterministic and reduce the clustering cost considerably. These initialization methods differ in how the heuristic chooses the set of initial centers. In this paper, we address the clustering problem for categorical data from the perspective of community detection. Instead of initializing k modes and running several iterations, our scheme, CD-Clustering, builds an unweighted graph and detects highly cohesive groups of nodes using a fast community detection technique. The top-k detected communities by size define the k modes. Evaluation on ten real categorical datasets shows that our method outperforms the existing initialization methods for k-modes in terms of accuracy, precision, and recall in most of the cases. PMID:29430249
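
    A rough sketch of the idea, assuming a toy dataset and a simple similarity-threshold graph construction (the paper's exact construction may differ); the fast community detection step here uses greedy modularity maximization from networkx.

        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        # Toy categorical dataset: each row is one data item.
        data = [
            ("red",  "small", "metal"),
            ("red",  "small", "wood"),
            ("red",  "large", "metal"),
            ("blue", "large", "plastic"),
            ("blue", "large", "wood"),
            ("blue", "small", "plastic"),
        ]

        # Build an unweighted graph: connect two items if they agree on at least half
        # of their attributes (one of several plausible constructions).
        G = nx.Graph()
        G.add_nodes_from(range(len(data)))
        for i in range(len(data)):
            for j in range(i + 1, len(data)):
                matches = sum(a == b for a, b in zip(data[i], data[j]))
                if matches >= len(data[i]) / 2:
                    G.add_edge(i, j)

        # Detect communities; the k largest ones define the k modes.
        communities = sorted(greedy_modularity_communities(G), key=len, reverse=True)
        k = 2
        for c in communities[:k]:
            print(sorted(c))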

  11. Development of Boundary Condition Independent Reduced Order Thermal Models using Proper Orthogonal Decomposition

    NASA Astrophysics Data System (ADS)

    Raghupathy, Arun; Ghia, Karman; Ghia, Urmila

    2008-11-01

    Compact Thermal Models (CTMs) to represent IC packages have traditionally been developed using the DELPHI-based (DEvelopment of Libraries of PHysical models for an Integrated design) methodology. The drawbacks of this method are presented, and an alternative method is proposed. A reduced-order model that provides the complete thermal information accurately with fewer computational resources can be effectively used in system-level simulations. Proper Orthogonal Decomposition (POD), a statistical method, can be used to reduce the number of degrees of freedom or variables in the computations for such a problem. POD along with the Galerkin projection allows us to create reduced-order models that reproduce the characteristics of the system with a considerable reduction in computational resources while maintaining a high level of accuracy. The goal of this work is to show that this method can be applied to obtain a boundary-condition-independent reduced-order thermal model for complex components. The methodology is applied to the 1D transient heat equation.
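
    A minimal sketch of the POD step on synthetic snapshot data: the dominant left singular vectors of the mean-subtracted snapshot matrix form the reduced basis, onto which the Galerkin projection of the governing equations would then be performed (that projection step is not shown). The sizes and data below are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)

        # Snapshot matrix: each column is a full temperature field (n_nodes values) at one
        # time instant / boundary condition sample. Low-rank toy data stands in for CFD output.
        n_nodes, n_snapshots = 2000, 40
        snapshots = rng.random((n_nodes, 5)) @ rng.random((5, n_snapshots))

        mean_field = snapshots.mean(axis=1, keepdims=True)
        U, s, _ = np.linalg.svd(snapshots - mean_field, full_matrices=False)

        # Keep enough modes to capture 99.9% of the snapshot "energy".
        energy = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(energy, 0.999)) + 1
        basis = U[:, :r]                      # POD modes

        # Any field is now represented by r coefficients instead of n_nodes values.
        new_field = snapshots[:, [0]]
        coeffs = basis.T @ (new_field - mean_field)
        reconstruction = mean_field + basis @ coeffs
        print(r, np.max(np.abs(reconstruction - new_field)))   # few modes, tiny error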

  12. Stability-Derivative Determination from Flight Data

    NASA Technical Reports Server (NTRS)

    Holowicz, Chester H.; Holleman, Euclid C.

    1958-01-01

    A comprehensive discussion of the various factors affecting the determination of stability and control derivatives from flight data is presented based on the experience of the NASA High-Speed Flight Station. Factors relating to test techniques, determination of mass characteristics, instrumentation, and methods of analysis are discussed. For most longitudinal-stability-derivative analyses, simple equations utilizing period and damping have been found to be as satisfactory as more comprehensive methods. The graphical time-vector method has been the basis of lateral-derivative analysis, although simple approximate methods can be useful if applied with caution. Control effectiveness has generally been obtained by relating the peak acceleration to the rapid control input, and consideration must be given to aerodynamic contributions if reasonable accuracy is to be realized. Because of the many factors involved in the determination of stability derivatives, it is believed that the primary stability and control derivatives are probably accurate to within 10 to 25 percent, depending upon the specific derivative. Static-stability derivatives at low angle of attack show the greatest accuracy.

  13. BIMLR: a method for constructing rooted phylogenetic networks from rooted phylogenetic trees.

    PubMed

    Wang, Juan; Guo, Maozu; Xing, Linlin; Che, Kai; Liu, Xiaoyan; Wang, Chunyu

    2013-09-15

    Rooted phylogenetic trees constructed from different datasets (e.g. from different genes) are often conflicting with one another, i.e. they cannot be integrated into a single phylogenetic tree. Phylogenetic networks have become an important tool in molecular evolution, and rooted phylogenetic networks are able to represent conflicting rooted phylogenetic trees. Hence, the development of appropriate methods to compute rooted phylogenetic networks from rooted phylogenetic trees has attracted considerable research interest of late. The CASS algorithm proposed by van Iersel et al. is able to construct much simpler networks than other available methods, but it is extremely slow, and the networks it constructs are dependent on the order of the input data. Here, we introduce an improved CASS algorithm, BIMLR. We show that BIMLR is faster than CASS and less dependent on the input data order. Moreover, BIMLR is able to construct much simpler networks than almost all other methods. BIMLR is available at http://nclab.hit.edu.cn/wangjuan/BIMLR/. © 2013 Elsevier B.V. All rights reserved.

  14. Molecularly imprinted membrane extraction combined with high-performance liquid chromatography for selective analysis of cloxacillin from shrimp samples.

    PubMed

    Du, Wei; Sun, Min; Guo, Pengqi; Chang, Chun; Fu, Qiang

    2018-09-01

    Nowadays, the abuse of antibiotics in aquaculture has generated considerable problems for food safety. Therefore, it is imperative to develop a simple and selective method for monitoring the illegal use of antibiotics in aquatic products. In this study, a method combining molecularly imprinted membrane (MIM) extraction and liquid chromatography was developed for the selective analysis of cloxacillin in shrimp samples. The MIMs were synthesized by UV photopolymerization and characterized by scanning electron microscopy, Fourier transform infrared spectroscopy, thermogravimetric analysis and a swelling test. The results showed that the MIMs exhibited excellent permselectivity, high adsorption capacity and a fast adsorption rate for cloxacillin. Finally, the method was used to determine cloxacillin in shrimp samples, with good accuracy and acceptable relative standard deviation values for precision. The proposed method is a promising alternative for the selective analysis of cloxacillin in shrimp samples, owing to its easy operation and excellent selectivity. Copyright © 2018. Published by Elsevier Ltd.

  15. A hybrid approach EMD-HW for short-term forecasting of daily stock market time series data

    NASA Astrophysics Data System (ADS)

    Awajan, Ahmad Mohd; Ismail, Mohd Tahir

    2017-08-01

    Recently, forecasting time series has attracted considerable attention in the field of analyzing financial time series data, specifically within the stock market index. Moreover, stock market forecasting is a challenging area of financial time-series forecasting. In this study, a hybrid methodology combining Empirical Mode Decomposition with the Holt-Winters method (EMD-HW) is used to improve forecasting performance in financial time series. The strength of EMD-HW lies in its ability to forecast non-stationary and non-linear time series without the need for any transformation method. Moreover, EMD-HW has relatively high accuracy and offers a new forecasting method for time series. Daily stock market time series data from 11 countries are used to show the forecasting performance of the proposed EMD-HW. Based on three forecast accuracy measures, the results indicate that the EMD-HW forecasting performance is superior to the traditional Holt-Winters forecasting method.
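
    A rough sketch of the hybrid idea, assuming the PyEMD (pip package EMD-signal) and statsmodels packages and a synthetic series: each intrinsic mode function and the residue are forecast separately with an exponential smoothing model from the Holt-Winters family, and the component forecasts are summed. Parameter names follow recent statsmodels releases; the exact EMD-HW configuration of the paper is not reproduced.

        import numpy as np
        from PyEMD import EMD                       # pip install EMD-signal
        from statsmodels.tsa.holtwinters import ExponentialSmoothing

        # Toy "daily index" series; a real application would use closing prices.
        t = np.arange(500)
        series = (100 + 0.05 * t + 5 * np.sin(2 * np.pi * t / 60)
                  + np.random.default_rng(7).normal(0, 1, t.size))

        horizon = 10
        emd = EMD()
        emd.emd(series)
        imfs, residue = emd.get_imfs_and_residue()
        components = list(imfs) + [residue]

        forecast = np.zeros(horizon)
        for component in components:
            # Damped additive trend (Holt's variant); a seasonal term could be added
            # for components with a stable period.
            model = ExponentialSmoothing(component, trend="add", damped_trend=True).fit()
            forecast += model.forecast(horizon)     # sum of per-component forecasts

        print(forecast)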

  16. An optimization method for defects reduction in fiber laser keyhole welding

    NASA Astrophysics Data System (ADS)

    Ai, Yuewei; Jiang, Ping; Shao, Xinyu; Wang, Chunming; Li, Peigen; Mi, Gaoyang; Liu, Yang; Liu, Wei

    2016-01-01

    Laser welding has been widely used in the automotive, power, chemical, nuclear and aerospace industries. The quality of welded joints is closely related to the existing defects, which are primarily determined by the welding process parameters. This paper proposes a defect optimization method that takes the formation mechanism of welding defects and weld geometric features into consideration. The analysis of the welding defect formation mechanism aims to investigate the relationship between welding defects and process parameters, and weld features are considered in order to identify the optimal process parameters for the desired welded joints with minimum defects. An improved back-propagation neural network, which is well suited to modeling nonlinear problems, is adopted to establish the mathematical model, and the obtained model is solved by a genetic algorithm. The proposed method is validated by macroweld profile, microstructure and microhardness in the confirmation tests. The results show that the proposed method is effective at reducing welding defects and obtaining high-quality joints for fiber laser keyhole welding in practical production.

  17. CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.

    USGS Publications Warehouse

    Cooley, Richard L.; Vecchia, Aldo V.

    1987-01-01

    A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
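
    A loose illustration of the Monte Carlo quantile step with a stand-in model and hypothetical parameter ranges; it is not the authors' exact sampling scheme, which also uses ordering information within parameter groups.

        import numpy as np

        rng = np.random.default_rng(11)

        def model_prediction(params):
            """Stand-in for a calibrated ground-water flow model output, e.g. head at a well."""
            k, recharge = params
            return recharge / k * 100.0

        # Extreme ranges for each parameter (from calibration knowledge), plus an
        # independent random error on the dependent variable for prediction intervals.
        n_draws = 10000
        k        = rng.uniform(1e-5, 1e-3, n_draws)    # hydraulic conductivity range
        recharge = rng.uniform(1e-4, 5e-4, n_draws)    # recharge range
        obs_err  = rng.normal(0.0, 0.5, n_draws)       # expected measurement error

        outputs = np.array([model_prediction(p) for p in zip(k, recharge)])

        conf_lo, conf_hi = np.percentile(outputs, [2.5, 97.5])            # confidence interval
        pred_lo, pred_hi = np.percentile(outputs + obs_err, [2.5, 97.5])  # prediction interval
        print((conf_lo, conf_hi), (pred_lo, pred_hi))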

  18. A study of extraction process and in vitro antioxidant activity of total phenols from Rhizoma Imperatae.

    PubMed

    Zhou, Xian-rong; Wang, Jian-hua; Jiang, Bo; Shang, Jin; Zhao, Chang-qiong

    2013-01-01

    The study investigated the extraction method for Rhizoma Imperatae and its antioxidant activity, and provided a basis for its rational development. The extraction method for Rhizoma Imperatae was determined using an orthogonal design test and by total phenol content, its hydroxyl radical scavenging ability was measured by the Fenton reaction, and the potassium ferricyanide reduction method was used to determine its reducing power. The results showed that the optimum extraction process for Rhizoma Imperatae was extraction with a 50-fold volume of water at 30 °C, three times for 2 h each. Its IC50 for scavenging of hydroxyl radicals was 0.0948 mg/mL, while the IC50 of ascorbic acid was 0.1096 mg/mL; in the potassium ferricyanide reduction assay, the extract exhibited reducing power comparable to that of ascorbic acid. The study concluded that Rhizoma Imperatae extract contains a relatively large amount of polyphenols and has good antioxidant ability.

  19. Application of Gumbel I and Monte Carlo methods to assess seismic hazard in and around Pakistan

    NASA Astrophysics Data System (ADS)

    Rehman, Khaista; Burton, Paul W.; Weatherill, Graeme A.

    2018-05-01

    A proper assessment of seismic hazard is of considerable importance in order to achieve suitable building construction criteria. This paper presents a probabilistic seismic hazard assessment in and around Pakistan (23° N-39° N; 59° E-80° E) in terms of peak ground acceleration (PGA). Ground motion is calculated in terms of PGA for a return period of 475 years using a seismogenic zone-free method based on Gumbel's first asymptotic distribution of extreme values and Monte Carlo simulation. Appropriate attenuation relations of universal and local types have been used in this study. The results show that for many parts of Pakistan, the expected seismic hazard is broadly comparable with the level specified in the existing PGA maps.
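
    A minimal sketch of the Gumbel I (extreme value type I) step for a single site, using hypothetical annual maxima; a full hazard assessment would convert the fitted extreme value to PGA through an attenuation relation, as done in the paper.

        import numpy as np
        from scipy import stats

        # Hypothetical annual maximum magnitudes for one grid cell (illustrative values).
        annual_max_mag = np.array([4.1, 4.8, 5.2, 4.5, 5.9, 4.3, 5.1, 4.7, 6.2, 4.9,
                                   5.4, 4.6, 5.0, 5.7, 4.4, 5.3, 4.8, 5.6, 4.2, 5.5])

        # Fit Gumbel I (extreme value type I) location and scale parameters.
        loc, scale = stats.gumbel_r.fit(annual_max_mag)

        # Value with a 475-year return period (about 10% exceedance probability in 50 years).
        return_period = 475.0
        p_annual_nonexceed = 1.0 - 1.0 / return_period
        m_475 = stats.gumbel_r.ppf(p_annual_nonexceed, loc=loc, scale=scale)
        print(m_475)   # would then be converted to PGA with an attenuation relation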

  20. How can acute mountain sickness be quantified at moderate altitude?

    PubMed

    Roeggla, G; Roeggla, M; Podolsky, A; Wagner, A; Laggner, A N

    1996-03-01

    Reports of acute mountain sickness (AMS) at moderate altitude show a wide variability, possibly because of different investigation methods. The aim of our study was to investigate the impact of investigation methods on AMS incidence. Hackett's established AMS score (a structured interview and physical examination), the new Lake Louise AMS score (a self-reported questionnaire) and oxygen saturation were determined in 99 alpinists after ascent to 2.94 km altitude. AMS incidence was 8% in Hackett's AMS score and 25% in the Lake Louise AMS score. Oxygen saturation correlated inversely with Hackett's AMS score with no significant correlation with the Lake Louise AMS score. At moderate altitude, the new Lake Louise AMS score overestimates AMS incidence considerably. Hackett's AMS score remains the gold standard for evaluating AMS incidence.

  1. Efficient shortcut techniques in evanescently coupled waveguides

    NASA Astrophysics Data System (ADS)

    Paul, Koushik; Sarma, Amarendra K.

    2016-10-01

    The shortcut to adiabatic passage (SHAPE) technique, in the context of coherent control of atomic systems, has gained considerable attention in the last few years, primarily because of its ability to manipulate population among quantum states arbitrarily fast compared to adiabatic processes. Two methods in this regard have been explored rigorously, namely transitionless quantum driving and the Lewis-Riesenfeld invariant approach. We have applied these two methods to realize SHAPE in an adiabatic waveguide coupler. Waveguide couplers are integral components of photonic circuits, primarily used as switching devices. Our study shows that with appropriate engineering of the coupling coefficient and propagation constants of the coupler it is possible to achieve efficient and complete power switching. We also observed that the coupler length could be reduced significantly without affecting the coupling efficiency of the system.

  2. Effect of ion implantation on the tribology of metal-on-metal hip prostheses.

    PubMed

    Bowsher, John G; Hussain, Azad; Williams, Paul; Nevelos, Jim; Shelton, Julia C

    2004-12-01

    Nitrogen ion implantation (which considerably hardens the bearing surface) may represent one possible method of reducing the wear of metal-on-metal (MOM) hip bearings. Currently there are no ion-implanted MOM bearings in clinical use. Therefore a physiological hip simulator test was undertaken using standard test conditions, and the results were compared to previous studies using the same methods. N2-ion implantation of high-carbon cast Co-Cr-Mo-on-Co-Cr-Mo hip prostheses increased wear 2-fold during the aggressive running-in phase compared to untreated bearing surfaces, and showed no wear reduction during steady-state conditions. Although only two specimens were considered in the current study, it would appear that ion implantation offers no clinical benefit for MOM bearings.

  3. How can acute mountain sickness be quantified at moderate altitude?

    PubMed Central

    Roeggla, G; Roeggla, M; Podolsky, A; Wagner, A; Laggner, A N

    1996-01-01

    Reports of acute mountain sickness (AMS) at moderate altitude show a wide variability, possibly because of different investigation methods. The aim of our study was to investigate the impact of investigation methods on AMS incidence. Hackett's established AMS score (a structured interview and physical examination), the new Lake Louise AMS score (a self-reported questionnaire) and oxygen saturation were determined in 99 alpinists after ascent to 2.94 km altitude. AMS incidence was 8% in Hackett's AMS score and 25% in the Lake Louise AMS score. Oxygen saturation correlated inversely with Hackett's AMS score with no significant correlation with the Lake Louise AMS score. At moderate altitude, the new Lake Louise AMS score overestimates AMS incidence considerably. Hackett's AMS score remains the gold standard for evaluating AMS incidence. PMID:8683517

  4. An experimental study of wall adaptation and interference assessment using Cauchy integral formula

    NASA Technical Reports Server (NTRS)

    Murthy, A. V.

    1991-01-01

    This paper summarizes the results of an experimental study of combined wall adaptation and residual interference assessment using the Cauchy integral formula. The experiments were conducted on a supercritical airfoil model in the solid flexible wall test section of the Langley 0.3-m Transonic Cryogenic Tunnel. The ratio of model chord to test section height was about 0.7. The method worked satisfactorily in reducing the blockage interference and demonstrated the need to correct for blockage effects at high model incidences in order to determine high-lift characteristics correctly. The studies show that the method has the potential to reduce the residual interference to considerably low levels. However, corrections for blockage and upwash velocity gradients may still be required for the final adapted wall shapes.

  5. Aircraft engine pollution reduction.

    NASA Technical Reports Server (NTRS)

    Rudey, R. A.

    1972-01-01

    The effect of engine operation on the types and levels of the major aircraft engine pollutants is described, and the major factors governing the formation of these pollutants during the burning of hydrocarbon fuel are discussed. Methods being explored to reduce these pollutants are discussed, and their application to several experimental research programs is pointed out. Results showing significant reductions in the levels of carbon monoxide, unburned hydrocarbons, and oxides of nitrogen obtained from experimental combustion research programs are presented and discussed to point out their potential application to aircraft engines. An experimental program designed to develop and demonstrate these and other advanced, low-pollution combustor design methods is described. Results obtained to date indicate considerable promise for reducing advanced engine exhaust pollutants to levels significantly below those of current engines.

  6. Analysis and improvements of Adaptive Particle Refinement (APR) through CPU time, accuracy and robustness considerations

    NASA Astrophysics Data System (ADS)

    Chiron, L.; Oger, G.; de Leffe, M.; Le Touzé, D.

    2018-02-01

    While smoothed-particle hydrodynamics (SPH) simulations are usually performed using uniform particle distributions, local particle refinement techniques have been developed to concentrate fine spatial resolutions in identified areas of interest. Although the formalism of this method is relatively easy to implement, its robustness at coarse/fine interfaces can be problematic. Analysis performed in [16] shows that the radius of refined particles should be greater than half the radius of unrefined particles to ensure robustness. In this article, the basics of an Adaptive Particle Refinement (APR) technique, inspired by AMR in mesh-based methods, are presented. This approach ensures robustness with alleviated constraints. Simulations applying the new formalism proposed achieve accuracy comparable to fully refined spatial resolutions, together with robustness, low CPU times and maintained parallel efficiency.

  7. SPH investigation of the thermal effects on the fluid mixing in a microchannel with rotating stirrers

    NASA Astrophysics Data System (ADS)

    Shamsoddini, Rahim

    2018-04-01

    An incompressible smoothed particle hydrodynamics algorithm is proposed to model and investigate the thermal effect on the mixing rate of an active micromixer in which rotating stirrers enhance the mixing rate. In liquids, mass diffusion increases with increasing temperature while viscosity decreases, so the local Schmidt number decreases considerably with increasing temperature. The present study investigates the effect of wall temperature on mixing rate with an improved SPH method. The robust SPH method used in the present work is equipped with a shifting algorithm and renormalization tensors. Using this algorithm, the mass, momentum, energy, and concentration equations are solved. The results, discussed for different temperature ratios, show that the mixing rate increases significantly with increased temperature ratio.

  8. Numerical evaluation of implantable hearing devices using a finite element model of human ear considering viscoelastic properties.

    PubMed

    Zhang, Jing; Tian, Jiabin; Ta, Na; Huang, Xinsheng; Rao, Zhushi

    2016-08-01

    The finite element method was employed in this study to analyze the change in performance of implantable hearing devices when the viscoelasticity of soft tissues is taken into account. An integrated finite element model of the human ear including the external ear, middle ear and inner ear was first developed via reverse engineering and analyzed by acoustic-structure-fluid coupling. Viscoelastic properties of soft tissues in the middle ear were taken into consideration in this model. The model-derived dynamic responses, including middle ear and cochlea functions, showed better agreement with experimental data at frequencies above 3000 Hz than a model with Rayleigh-type damping. On this basis, a coupled finite element model consisting of the human ear and a piezoelectric actuator attached to the long process of the incus was further constructed. Based on the electromechanical coupling analysis, the equivalent sound pressure and power consumption of the actuator corresponding to viscoelasticity and Rayleigh damping were calculated using this model. The analytical results showed that the implant performance of the actuator evaluated with the finite element model considering viscoelastic properties gives a lower output above about 3 kHz than does the Rayleigh damping model. The finite element model considering viscoelastic properties was therefore more accurate for the numerical evaluation of implantable hearing devices. © IMechE 2016.

  9. An example of problems in dose reconstruction from doses formed by electromagnetic irradiation by different energy sources.

    PubMed

    Kirillov, Vladimir; Kuchuro, Joseph; Tolstik, Sergey; Leonova, Tatyana

    2010-02-01

    Dose reconstruction for citizens of Belarus affected by the Chernobyl accident showed an unexpectedly wide range of doses. Using the EPR tooth enamel dosimetry method, it has been demonstrated that when the tooth enamel dose was formed by x-rays with an effective energy of 34 keV and the enamel samples were additionally irradiated by gamma radiation with a mean energy of 1,250 keV, the reconstructed absorbed dose increased considerably compared with the applied dose. Conversely, when the dose was formed by gamma radiation and the additional irradiation was performed with x-rays, the reconstructed dose decreased considerably compared with the applied dose. When the dose formation and the additional irradiation were carried out with external sources of electromagnetic radiation of equal energy, the reconstructed dose was close to the applied dose. The obtained data show that, for adequate reconstruction of individual absorbed doses from EPR tooth enamel spectra, it is necessary to take into account the contribution from diagnostic x-ray examinations of the teeth, jaw, and skull of individuals who were exposed to a combined effect of external gamma radiation and x-rays.

  10. Antioxidant Compounds in Traditional Indian Pickles May Prevent the Process-Induced Formation of Benzene.

    PubMed

    Kharat, Mahesh M; Adiani, Vanshika; Variyar, Prasad; Sharma, Arun; Singhal, Rekha S

    2016-01-01

    Pickles in the Indian market contain ascorbic acid from the raw materials used and benzoate as an added preservative, the two compounds implicated in the formation of benzene in soft drinks. In this work, 24 market pickle samples were surveyed for benzene content, as well as for its precursors and other constituents that influence its formation. The analysis showed that the pickle samples were high in acid content (low pH) and contained significant amounts of ascorbic acid, minerals (Cu and Fe), and benzoic acid. Most samples also exhibited high antioxidant activity, which might be attributed to the ingredients used, such as fruits and spices. A solid-phase microextraction headspace gas chromatography-mass spectrometry method was developed in-house for benzene analysis. Eleven of the 24 samples contained benzene, with the highest concentration of 4.36 ± 0.82 μg of benzene per kg of pickle found in a lime pickle that also had the highest benzoic acid content and considerably lower hydroxyl radical (•OH) scavenging activity. However, benzene levels for all 11 samples were considerably below the World Health Organization regulatory limit of 10 μg/kg for benzene in mineral water. Studies on model systems revealed that the high antioxidant activity of Indian pickles may have had a strong inhibitory effect on benzene formation.

  11. Innovation of natural essential oil-loaded Orabase for local treatment of oral candidiasis

    PubMed Central

    Labib, Gihan S; Aldawsari, Hibah

    2015-01-01

    Purpose Oral candidiasis may be manifested in the oral cavity as either mild or severe oral fungal infection. This infection results from the overgrowth of Candida species, which normally exist in the oral cavity in minute amounts, under various predisposing factors. Several aspects have spurred the search for new strategies in the treatment of oral candidiasis, among which are the limited numbers of new antifungal drugs developed in recent years. Previous studies have shown that thyme and clove oils have antimycotic activities and have suggested their incorporation into pharmaceutical preparations. This study aimed to investigate the possibility of incorporating essential oils or their extracted active ingredients into Orabase formulations and to characterize the resulting products. Methods Orabase formulations loaded with clove oil, thyme oil, eugenol, and thymol were prepared and evaluated for their antifungal activities, pH, viscosity, erosion and water uptake characteristics, mechanical properties, in vitro release behavior, and ex vivo mucoadhesion properties. Results All prepared bases showed considerable antifungal activity and acceptable physical characteristics. The release pattern from the loaded bases was considerably slow for all oils and active ingredients. All bases showed appreciable adhesion in the in vitro and ex vivo studies. Conclusion The incorporation of essential oils in Orabase could help in future drug delivery design, with promising outcomes for patients' well-being. PMID:26170621

  12. Inactivation of Mycobacterium avium subsp. paratuberculosis during cooking of hamburger patties.

    PubMed

    Hammer, Philipp; Walte, Hans-Georg C; Matzen, Sönke; Hensel, Jann; Kiesner, Christian

    2013-07-01

    The role of Mycobacterium avium subsp. paratuberculosis (MAP) in Crohn's disease in humans has been debated for many years. Milk and milk products have been suggested as possible vectors for transmission since the beginning of this debate, whereas recent publications show that slaughtered cattle and their carcasses, meat, and organs can also serve as reservoirs for MAP transmission. The objective of this study was to generate heat-inactivation data for MAP during the cooking of hamburger patties. Hamburger patties of lean ground beef weighing 70 and 50 g, sterilized by irradiation and spiked with three different MAP strains at levels between 10² and 10⁶ CFU/ml, were cooked for 2, 3, 4, 5, and 6 min. Single-sided cooking with one flip was applied, and the temperatures within the patties were recorded by seven thermocouples. Surviving bacteria were counted by direct plating onto Herrold's egg yolk medium and by a three-vial most-probable-number method using modified Dubos medium. There was considerable variability in temperature throughout the patties during frying. In addition, the log reduction in MAP numbers showed strong variations. In patties weighing 70 g, a considerable bacterial reduction of 4 log or more could only be achieved after 6 min of cooking. For all other cooking times, the bacterial reduction was less than 2 log. Patties weighing 50 g showed a 5-log or greater reduction after cooking times of 5 and 6 min. To describe the inactivation kinetics, a log-linear regression model was used, showing a constant decrease of MAP numbers over cooking time.
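
    The inactivation kinetics reported above follow a log-linear model, log10 N(t) = log10 N0 - t/D, fitted by regressing log survivor counts on cooking time. The sketch below illustrates that kind of fit on hypothetical survivor counts (the study's own data are not reproduced in the abstract); the decimal reduction time D is the negative reciprocal of the slope.

```python
import numpy as np
from scipy import stats

# Hypothetical survivor counts (CFU/ml) at each cooking time; stand-ins for the
# plate/MPN counts described in the study, not the actual measurements.
time_min = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
survivors = np.array([3.2e5, 8.1e4, 1.5e4, 2.0e3, 1.1e2])

# Log-linear model: log10 N(t) = log10 N0 - t / D
log_n = np.log10(survivors)
slope, intercept, r, p, se = stats.linregress(time_min, log_n)

d_value = -1.0 / slope             # minutes needed for a 1-log reduction
log_reduction_6min = -slope * 6.0  # predicted log reduction after 6 min of cooking
print(f"D = {d_value:.2f} min, 6-min reduction = {log_reduction_6min:.1f} log")
```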

  13. Study on immobilization of marine oil-degrading bacteria by carrier of algae materials.

    PubMed

    Zhang, Yiran; Gao, Wei; Lin, Faxiang; Han, Bin; He, Changfei; Li, Qian; Gao, Xiangxing; Cui, Zhisong; Sun, Chengjun; Zheng, Li

    2018-05-18

    This study investigated the immobilization of bacteria with two kinds of algal materials, Enteromorpha residue and kelp residue. Their lipophilicity was compared by diesel absorption rates. The immobilization efficiency of Bacillus sp. E3 was measured to evaluate whether these carriers would satisfy the requirements for biodegradation of oil spills. The bacteria were immobilized through adsorption onto sterilized and non-sterilized carriers to compare the two treatments. Oil degradation rates were determined using gravimetric and GC-MS methods. Results showed that the diesel absorption rates of Enteromorpha residue and kelp residue were 411 and 273%, respectively, and remained approximately 105 and 120% after 2 h of erosion in a simulated seawater system. After immobilization of Bacillus sp. E3, oil degradation rates exceeded 65% after 21 days of biodegradation. GC-MS analysis showed that both immobilized preparations degraded more than 70% of the total alkanes and total PAHs, whereas the free bacteria degraded 63% of the total alkanes and 66% of the total PAHs, and the immobilized bacteria degraded more HMW-alkanes and HMW-PAHs than the free bacteria. The bacteria immobilized by non-sterilized kelp residue showed a considerably higher degradation rate than those immobilized by sterilized kelp residue. A considerably higher cell adsorption rate was obtained when using kelp residue, and preparation of the immobilized material was low cost and highly efficient. The experiments show that the two algae materials, especially the kelp residue, have potential application in the bioremediation of marine oil spills.

  14. 5 CFR 302.304 - Order of consideration.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... her right of appeal to the Merit Systems Protection Board. (b) Consideration of other candidates... eligible ratings for professional and scientific positions at the GS-9 level and above, or equivalent, in... of the following orders: (i) By preference status. Under this method, preference eligibles having a...

  15. WETLAND ASSESSMENT–THE QUESTIONS NOT BEING ASKED

    EPA Science Inventory

    Most work on wetland assessment has focused on the development of methods. The effective use of a method involves more than having a method. In a 2004 review of rapid assessment methods, Fennessey et al. recommended key areas for consideration when adopting existing methods or ...

  16. A novel propulsion method for high- Tc superconducting maglev vehicle

    NASA Astrophysics Data System (ADS)

    Ma, Guangtong; Wang, Jiasu; Wang, Suyu; Liu, Minxian; Jing, Hua; Lu, Yiyun; Lin, Qunxu

    2008-01-01

    High-Tc superconducting (HTS) maglev is considered an ideal transportation type because of its unique inherent stability. This paper proposes a direct current (DC) linear motor that uses the permanent magnet guideway (PMG) as the stator and an on-board coil as the rotor, instead of the present inductive or synchronous alternating current (AC) linear motors, which have an economic disadvantage because a primary coil must be laid along the guideway. In order to modulate the magnetic field under the PMG, an inverse-E-shape ferromagnetic device (IESFD) core is designed. Possible winding methods for the on-board coil are listed, and the analytical results show that a considerable net ampere force, and thus propulsion force, can be generated by this special structure. The influence of the concentration effect of the IESFD on the maglev performance of the HTS bulk is studied with a numerical program, and the results show that the levitation force with the IESFD is 90% of that without it. It is also indicated that the load capability and lateral performance of a maglev vehicle combined with this propulsion method can be improved thanks to the attractive effect between the IESFD and the PMG. With this propulsion method, the cost of the HTS maglev vehicle will be remarkably reduced, shortening the distance to practical application.

  17. A neurally inspired musical instrument classification system based upon the sound onset.

    PubMed

    Newton, Michael J; Smith, Leslie S

    2012-06-01

    Physiological evidence suggests that sound onset detection in the auditory system may be performed by specialized neurons as early as the cochlear nucleus. Psychoacoustic evidence shows that the sound onset can be important for the recognition of musical sounds. Here the sound onset is used in isolation to form tone descriptors for a musical instrument classification task. The task involves 2085 isolated musical tones from the McGill dataset across five instrument categories. A neurally inspired tone descriptor is created using a model of the auditory system's response to sound onset. A gammatone filterbank and spiking onset detectors, built from dynamic synapses and leaky integrate-and-fire neurons, create parallel spike trains that emphasize the sound onset. These are coded as a descriptor called the onset fingerprint. Classification uses a time-domain neural network, the echo state network. Reference strategies, based upon mel-frequency cepstral coefficients, evaluated either over the whole tone or only during the sound onset, provide context to the method. Classification success rates for the neurally-inspired method are around 75%. The cepstral methods perform between 73% and 76%. Further testing with tones from the Iowa MIS collection shows that the neurally inspired method is considerably more robust when tested with data from an unrelated dataset.
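
    The reference strategies mentioned above are built from mel-frequency cepstral coefficients evaluated either over the whole tone or only during the sound onset. A minimal sketch of an onset-restricted MFCC descriptor using librosa is given below; the file name and the 80 ms onset window are assumptions for illustration, and the neurally inspired onset fingerprint itself (gammatone filterbank plus spiking onset detectors) is not reproduced here.

```python
import librosa

# Hypothetical isolated tone; "tone.wav" stands in for a McGill dataset sample
# whose onset is assumed to begin at the start of the file.
y, sr = librosa.load("tone.wav", sr=None, mono=True)

# Keep roughly the first 80 ms to approximate an onset-only analysis window.
y_onset = y[: int(0.08 * sr)]

# 13 MFCCs, averaged across frames, form a fixed-length tone descriptor
# that could feed a downstream classifier.
mfcc = librosa.feature.mfcc(y=y_onset, sr=sr, n_mfcc=13)
descriptor = mfcc.mean(axis=1)
print(descriptor.shape)  # (13,)
```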

  18. Anti-inflammatory effect of Naravelia zeylanica DC via suppression of inflammatory mediators in carrageenan-induced abdominal oedema in zebrafish model.

    PubMed

    Ekambaram, Sanmuga Priya; Perumal, Senthamil Selvan; Pavadai, Selvaranjani

    2017-02-01

    Traditional herbal medicines are receiving great importance in the health care sector, especially in the Indian system of medicine, i.e., Ayurveda. The present study focused on the standardization of Naravelia zeylanica (L.) DC in terms of its active phytochemicals and on evaluating the anti-inflammatory activity of an ethanol extract of N. zeylanica (ENZ). An analytical method was developed by high-performance liquid chromatography for the simultaneous determination of β-sitosterol, lupeol and oleanolic acid in ENZ. The effect of ENZ on cell viability was investigated using the MTT assay; the IC50 value of ENZ on cell viability was found to be 653.01 µg/mL. To determine the anti-inflammatory activity of ENZ in vitro, LPS was added to macrophage cells to induce activation, and ENZ was then added to observe the recovery of the inflamed cells. When these cells were treated with ENZ, the percentage of viable cells increased considerably, to 74.68%. The loss of mitochondrial membrane potential on treatment with LPS and its recovery by ENZ were also studied; the number of cells damaged after treatment with ENZ + LPS was lower than after treatment with LPS alone. An in vivo anti-inflammatory study was carried out with the carrageenan-induced abdominal oedema method in adult zebrafish, which revealed inhibition of inflammation at graded doses of ENZ of 23.5% at 100 mg/kg, 62.4% at 200 mg/kg and 87.05% at 350 mg/kg, compared with the standard diclofenac, which showed 85% inhibition at 100 mg/kg. PCR amplification of DNA extracted from adult zebrafish showed that increasing concentrations of ENZ considerably downregulate the expression of TNF-α and iNOS, mediators of inflammation.

  19. [Fragmentary osteotomy of maxilla back parts for dentoalveolar lengthening as preparation stage before dental prosthetics making on implants].

    PubMed

    Seniuk, A N; Mokhirev, M A

    2010-01-01

    Conditions for dental implantation are not always ideal, which limits the possibilities of the method and forces implant surgeons to resort to additional interventions to increase the volume of hard and soft tissues in the region of the planned implantation. Considerably more rarely, an implantologist encounters a surplus of abutment tissue, when marked dentoalveolar lengthening occurs together with a pronounced reduction of the interalveolar distance. Orthognathic surgery, as a method of surgically correcting pronounced dentoalveolar lengthening of a group of teeth, is most effective when such a deformation cannot be eliminated by other methods, orthodontic or prosthetic.

  20. Rationale for windshield glass system specification requirements for shuttle orbiter

    NASA Technical Reports Server (NTRS)

    Hayashida, K.; King, G. L.; Tesinsiky, J.; Wittenburg, D. R.

    1972-01-01

    A preliminary procurement specification for the space shuttle orbiter windshield pane, and some of the design considerations and rationale leading to its development are presented. The windshield designer is given the necessary methods and procedures for assuring glass pane structural integrity by proof test. These methods and procedures are fully developed for annealed and thermally tempered aluminosilicate, borosilicate, and soda lime glass and for annealed fused silica. Application of the method to chemically tempered glass is considered. Other considerations are vision requirements, protection against bird impact, hail, frost, rain, and meteoroids. The functional requirements of the windshield system during landing, ferrying, boost, space flight, and entry are included.

  1. Navier-Stokes and viscous-inviscid interaction

    NASA Technical Reports Server (NTRS)

    Steger, Joseph L.; Vandalsem, William R.

    1989-01-01

    Some considerations toward developing numerical procedures for simulating viscous compressible flows are discussed. Both Navier-Stokes and boundary layer field methods are considered. Because efficient viscous-inviscid interaction methods have been difficult to extend to complex 3-D flow simulations, Navier-Stokes procedures are more frequently being utilized even though they require considerably more work per grid point. It would seem a mistake, however, not to make use of the more efficient approximate methods in those regions in which they are clearly valid. Ideally, a general purpose compressible flow solver that can optionally take advantage of approximate solution methods would suffice, both to improve accuracy and efficiency. Some potentially useful steps toward this goal are described: a generalized 3-D boundary layer formulation and the fortified Navier-Stokes procedure.

  2. Power Distribution System Planning with GIS Consideration

    NASA Astrophysics Data System (ADS)

    Wattanasophon, Sirichai; Eua-Arporn, Bundhit

    This paper proposes a method for solving radial distribution system planning problems taking into account geographical information. The proposed method can automatically determine the appropriate location and size of a substation, the routing of feeders, and the sizes of conductors while satisfying all constraints, i.e. technical constraints (voltage drop and thermal limit) and geographical constraints (obstacles, existing infrastructure, and high-cost passages). Sequential quadratic programming (SQP) and a minimum path algorithm (MPA) are applied to solve the planning problem based on net present value (NPV) considerations. In addition, this method integrates the planner's experience with the optimization process to achieve an appropriate practical solution. The proposed method has been tested with an actual distribution system, and the results indicate that it can provide satisfactory plans.
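
    The feeder-routing step above combines a minimum path algorithm with geographical cost information. As a rough illustration of that idea only (not the authors' implementation), the sketch below encodes obstacles and high-cost passages as edge weights on a small hypothetical grid and finds the cheapest substation-to-load route with Dijkstra's algorithm via networkx.

```python
import networkx as nx

# Hypothetical 5x5 right-of-way grid: cell cost 1 = normal ground,
# 5 = high-cost passage (e.g. road crossing), None = obstacle.
cost = {(r, c): 1.0 for r in range(5) for c in range(5)}
cost[(2, 1)] = None
cost[(2, 2)] = None
cost[(1, 3)] = 5.0
cost[(3, 3)] = 5.0

G = nx.Graph()
for (r, c), w in cost.items():
    if w is None:
        continue
    for dr, dc in ((0, 1), (1, 0)):          # connect to right and lower neighbours
        nb = (r + dr, c + dc)
        if cost.get(nb) is not None:
            # edge weight = average of the two cell costs
            G.add_edge((r, c), nb, weight=(w + cost[nb]) / 2)

substation, load = (0, 0), (4, 4)
route = nx.shortest_path(G, substation, load, weight="weight")  # Dijkstra by default
print(route)
```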

  3. Challenges and opportunities in bioanalytical support for gene therapy medicinal product development.

    PubMed

    Ma, Mark; Balasubramanian, Nanda; Dodge, Robert; Zhang, Yan

    2017-09-01

    Gene and nucleic acid therapies have demonstrated patient benefits in addressing unmet medical needs. Besides considerations regarding the biological nature of the gene therapy, the quality of bioanalytical methods plays an important role in ensuring the success of these novel therapies. Inconsistent approaches among bioanalytical labs during preclinical and clinical phases have been observed, and there are many underlying reasons for this inconsistency: the various platforms and reagents used in quantitative methods, the lack of detailed regulatory guidance on method validation, and uncertainty about immunogenicity strategy in supporting gene therapy may all be influential. This review summarizes recent practices and considerations in bioanalytical support of pharmacokinetics/pharmacodynamics and immunogenicity evaluations in gene therapy development, with insight into method design, development and validation.

  4. Ultimate disposal of scrubber wastes

    NASA Technical Reports Server (NTRS)

    Cohenour, B. C.

    1978-01-01

    Part of the initial concern with using wet scrubbers on the hypergolic propellants was the subsequent disposal of the liquid wastes. To address this, consideration was given to all possible methods of reducing the volume of the wastes while staying within the guidelines established by the state and federal environmental protection agencies. One proposed method was the use of water hyacinths in disposal ponds to reduce the waste concentration in the effluent to below EPA tolerable levels. This method was under consideration, and even in use, by private industry, municipal governments, and NASA for upgrading existing wastewater treatment facilities to a tertiary system. The use of water hyacinths in disposal ponds appears to be a very cost-effective method for the reduction and disposal of hypergolic propellants.

  5. The Cox proportional Hazard model on duration of birth process

    NASA Astrophysics Data System (ADS)

    Wuryandari, Triastuti; Haryatmi Kartiko, Sri; Danardono

    2018-05-01

    The duration of the birth process, measured from the first birth sign until the baby is born, is an important factor in the overall outcome of delivery. Gentlebirth is a method of giving birth that provides relaxing and gentle treatment to the mother; it combines brain science, birth science and technology to empower positive birth without pain. However, the effect of this method on the duration of the birth process still needs empirical investigation. The objective of this paper is therefore to analyze the duration of the birth process using statistical methods appropriate for durational, survival or time-to-event data. Since many variables or factors may affect the duration, a regression model is considered. The flexibility of the Cox proportional hazards model, in the sense that no distributional assumption is required, makes it the appropriate model for analyzing the duration of the birth process. It is concluded that the gentlebirth method affects the duration of the birth process, with a hazard ratio of 2.073, showing that the birth process with the gentlebirth method is faster than with other methods.
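
    A Cox proportional hazards fit of the kind described above can be sketched with the lifelines package. The data frame below is hypothetical (durations in hours, a delivery event indicator and a gentlebirth dummy variable), not the study's data; the reported hazard ratio of 2.073 corresponds to exp(coef) for the method covariate.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Tiny hypothetical sample: duration of the birth process (hours), event indicator
# (1 = delivered, 0 = censored) and method covariate (1 = gentlebirth).
df = pd.DataFrame({
    "duration_h":  [6.5, 9.0, 4.2, 12.1, 5.5, 8.3, 3.9, 10.4],
    "delivered":   [1,   1,   1,   0,    1,   1,   1,   1],
    "gentlebirth": [1,   0,   1,   0,    1,   0,   1,   0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration_h", event_col="delivered")
cph.print_summary()  # exp(coef) for "gentlebirth" is the hazard ratio
```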

  6. Color reproduction and processing algorithm based on real-time mapping for endoscopic images.

    PubMed

    Khan, Tareq H; Mohammed, Shahed K; Imtiaz, Mohammad S; Wahid, Khan A

    2016-01-01

    In this paper, we present a real-time preprocessing algorithm for enhancement of endoscopic images. A novel dictionary-based color mapping algorithm is used for reproducing color information from a theme image. The theme image is selected from a nearby anatomical location; a database of color endoscopy images for different locations is prepared for this purpose. The color map is dynamic, as its contents change with the change of the theme image. This method is used on low-contrast grayscale white light images and raw narrow band images to highlight the vascular and mucosa structures and to colorize the images. It can also be applied to enhance the tone of color images. Statistical visual representations and universal image quality measures show that the proposed method highlights the mucosa structure better than other methods. The color similarity has been verified using the Delta E color difference, the structure similarity index, the mean structure similarity index, and structure and hue similarity. Color enhancement was measured using the color enhancement factor, which shows considerable improvements. The proposed algorithm has low and linear time complexity, which results in higher execution speed than other related works.

  7. Environmental Impact of Ionic Liquids: Automated Evaluation of the Chemical Oxygen Demand of Photochemically Degraded Compounds.

    PubMed

    Costa, Susana P F; Pereira, Sarah A P; Pinto, Paula C A G; Araujo, André R T S; Passos, Marieta L C; Saraiva, M Lúcia M F S

    2017-05-19

    A novel automated fluorimetric technique was developed for the assessment of the chemical oxygen demand (COD) of ionic liquids (ILs) and combined with a photodegradation step to promote IL degradation. The method was implemented on a sequential injection analysis (SIA) system and is based on the reduction of cerium(IV) in the presence of irradiated ILs. Compounds incorporating the chloride anion were found to exhibit higher COD values, and 1-butyl-3-methylimidazolium chloride ([bmim]⁺[Cl]⁻), 1-butyl-1-methylpyrrolidinium chloride ([bmpyr]⁺[Cl]⁻), and 1-hexyl-3-methylimidazolium chloride ([hmim]⁺[Cl]⁻) also exhibited considerable photodegradability, whereas the cholinium cation and the methanesulfonate and tetrafluoroborate anions showed resistance to photolysis. The developed methodology proved to be a simple, affordable, and robust method, showing good repeatability under the tested conditions (rsd <3.5%, n=10). Therefore, it is expected that the developed approach can be used as a screening method for the preliminary evaluation of compounds' potential impact in the aquatic field. Additionally, the photolysis step presents an attractive option to promote degradation of ILs prior to their release into wastewater. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Study of structural and optical properties of PbS thin films

    NASA Astrophysics Data System (ADS)

    Homraruen, T.; Sudswasd, Y.; Sorod, R.; Kayunkid, N.; Yindeesuk, W.

    2018-03-01

    This research aimed to synthesize lead sulfide (PbS) thin films on glass slides using the successive ionic layer adsorption and reaction (SILAR) method. We studied the optical properties and structure of the PbS thin films by changing the number of dipping cycles and the concentration of the precursor solution. The results of this experiment show that different conditions have a considerable influence on the thickness and absorbance of the films. When the number of dipping cycles and the concentration of the solution are increased, film thickness and absorbance tend to become higher. The X-ray diffraction pattern showed all the diffraction peaks, confirming the face-centered cubic structure of the PbS films. Grain size computations were used to assess how strongly these conditions affected the films.

  9. Studies on Five Senses Treatment

    NASA Astrophysics Data System (ADS)

    Sato, Sadaka; Miao, Tiejun; Oyama-Higa, Mayumi

    2011-06-01

    This study proposed a therapy from complementary and alternative medicine to treat mental disorders through interactions of the five senses between therapist and patient. In this method, the sounding of six particular voices plays an important role in healing and recovery. First, we studied the effects of speaking using scalp-EEG measurement. Chaos analysis of the EEG showed a largely enhanced largest Lyapunov exponent (LLE) during speaking. In addition, the EEG power spectrum showed an increase over most frequencies. Second, we performed case studies on mental disorders using the therapy. The running power spectrum of the patients' EEG showed decreasing power at the end of treatment, implying that the five senses therapy induced relaxation and lowered energy in the central nervous system. The results agreed with the patients' reports of a considerable decline in anxiety and improvements in mood.

  10. Counselors' Role in Preventing Abuse of Older Adults: Clinical, Ethical, and Legal Considerations

    ERIC Educational Resources Information Center

    Forman, Julia M.; McBride, Rebecca G.

    2010-01-01

    Mistreatment of older adults is commonplace. These individuals are subjected to abuse, financial exploitation, and neglect. The authors present an overview of the literature concerning mistreatment, with an emphasis on clinical, ethical, and legal considerations. Methods are proposed for prevention, including counselor education, advocacy, and…

  11. Child Custody Decisions: Content Analysis of a Judicial Survey.

    ERIC Educational Resources Information Center

    Settle, Shirley A; Lowery, Carol R.

    1982-01-01

    Surveyed judges and trial commissioners (N=80) regarding child custody decisions in divorce. The content analysis described the respondents' comments, which clarified their reasons for attaching greater or lesser importance to a particular consideration or the method used in assessing a particular consideration during a court proceeding. (JAC)

  12. [The SWOT analysis and strategic considerations for the present medical devices' procurement].

    PubMed

    Li, Bin; He, Meng-qiao; Cao, Jian-wen

    2006-05-01

    In this paper, the SWOT analysis method is used to find out the internal strength, weakness, exterior opportunities and threats of the present medical devices' procurements in hospitals and some strategic considerations are suggested as "one direction, two expansions, three changes and four countermeasures".

  13. Non-woody weed control in pine plantations

    Treesearch

    Phillip M. Dougherty; Bob Lowery

    1986-01-01

    The costs and benefits derived from controlling non-woody competitors in pine plantations were reviewed. Cost considerations included both the capital cost and the biological cost that may be incurred when weed control treatments are applied. Several methods for reducing the cost of herbicide treatments were explored. Cost reduction considerations included adjustments in...

  14. Statistical considerations in the analysis of data from replicated bioassays

    USDA-ARS?s Scientific Manuscript database

    Multiple-dose bioassay is generally the preferred method for characterizing virulence of insect pathogens. Linear regression of probit mortality on log dose enables estimation of LD50/LC50 and slope, the latter having substantial effect on LD90/95s (doses of considerable interest in pest management)...
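
    The probit-on-log-dose regression described above can be sketched with statsmodels; the dose-mortality counts below are hypothetical, and LD50 is recovered as the dose at which the fitted linear predictor crosses zero (50% mortality).

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

# Hypothetical bioassay: dose, number of insects treated, number dead.
dose = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
n = np.array([30, 30, 30, 30, 30])
dead = np.array([3, 8, 16, 24, 29])

X = sm.add_constant(np.log10(dose))
endog = np.column_stack([dead, n - dead])          # successes, failures
probit = sm.families.links.Probit()
res = sm.GLM(endog, X, family=sm.families.Binomial(link=probit)).fit()

b0, b1 = res.params
ld50 = 10 ** (-b0 / b1)                            # linear predictor = 0 at 50% mortality
ld90 = 10 ** ((norm.ppf(0.9) - b0) / b1)           # probit of 0.9 on the log-dose scale
print(f"LD50 = {ld50:.2f}, LD90 = {ld90:.2f}")
```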

  15. Developing and validating a nutrition knowledge questionnaire: key methods and considerations.

    PubMed

    Trakman, Gina Louise; Forsyth, Adrienne; Hoye, Russell; Belski, Regina

    2017-10-01

    To outline key statistical considerations and detailed methodologies for the development and evaluation of a valid and reliable nutrition knowledge questionnaire. Literature on questionnaire development in a range of fields was reviewed and a set of evidence-based guidelines specific to the creation of a nutrition knowledge questionnaire have been developed. The recommendations describe key qualitative methods and statistical considerations, and include relevant examples from previous papers and existing nutrition knowledge questionnaires. Where details have been omitted for the sake of brevity, the reader has been directed to suitable references. We recommend an eight-step methodology for nutrition knowledge questionnaire development as follows: (i) definition of the construct and development of a test plan; (ii) generation of the item pool; (iii) choice of the scoring system and response format; (iv) assessment of content validity; (v) assessment of face validity; (vi) purification of the scale using item analysis, including item characteristics, difficulty and discrimination; (vii) evaluation of the scale including its factor structure and internal reliability, or Rasch analysis, including assessment of dimensionality and internal reliability; and (viii) gathering of data to re-examine the questionnaire's properties, assess temporal stability and confirm construct validity. Several of these methods have previously been overlooked. The measurement of nutrition knowledge is an important consideration for individuals working in the nutrition field. Improved methods in the development of nutrition knowledge questionnaires, such as the use of factor analysis or Rasch analysis, will enable more confidence in reported measures of nutrition knowledge.

  16. Size Estimation of Groups at High Risk of HIV/AIDS using Network Scale Up in Kerman, Iran

    PubMed Central

    Shokoohi, Mostafa; Baneshi, Mohammad Reza; Haghdoost, Ali-Akbar

    2012-01-01

    Objective: To estimate the size of groups at high risk of HIV, the Network Scale Up (NSU) method, an indirect method, was used. Methods: 500 Kermanian males aged 18 to 45 were recruited. Eight groups at high risk of HIV were defined: users of opium, unknown drugs, ecstasy, and alcohol; intravenous drug users (IDUs); males who have extra-marital sex with females (MSF); males who have sex with female sex workers (MFSW); and males who have sex with other males (MSM). We asked respondents whether they know anybody (probability method), and if yes, how many people (frequency method) in our target groups. Results: Estimates derived with the probability method were higher than with the frequency method. Based on the probability method, 13.7% (95% CI: 11.3%, 16.1%) of males used alcohol at least once in the last year; the corresponding percentage for opium was 13.1% (95% CI: 10.9%, 15.3%). In addition, 12% had extra-marital sex in the last year (95% CI: 10%, 14%), while 7% (95% CI: 5.8%, 8.2%) had sex with a female sex worker. Conclusion: We showed that drug use is more common among young and middle-aged males, although their sexual contacts were also considerable. These percentages show that a special prevention program is needed to control HIV transmission. Estimates derived from the probability method were comparable with data from external sources. The underestimation in the frequency method might be due to the fact that respondents are not aware of the sensitive characteristics of all those in their network, so underreporting is likely to occur. PMID:22891148
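
    The "frequency method" referred to above pools, over respondents, the number of people they report knowing in the target group against their total personal network sizes. A minimal sketch on hypothetical responses is shown below; in a full NSU study the network sizes would themselves be estimated from known-population questions rather than taken as given.

```python
# Hypothetical answers from five respondents: how many people they know in the
# target group, and the estimated size of their personal network.
known_in_group = [2, 0, 1, 3, 0]
network_size = [120, 95, 150, 200, 80]

# Frequency-method proportion: total reported alters / total network size.
p_hat = sum(known_in_group) / sum(network_size)

males_18_45 = 100_000        # hypothetical size of the male population of interest
group_size = p_hat * males_18_45
print(f"estimated prevalence {100 * p_hat:.1f}%, group size ~{group_size:.0f}")
```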

  17. Combining Structural Modeling with Ensemble Machine Learning to Accurately Predict Protein Fold Stability and Binding Affinity Effects upon Mutation

    PubMed Central

    Garcia Lopez, Sebastian; Kim, Philip M.

    2014-01-01

    Advances in sequencing have led to a rapid accumulation of mutations, some of which are associated with diseases. However, to draw mechanistic conclusions, a biochemical understanding of these mutations is necessary. For coding mutations, accurate prediction of significant changes in either the stability of proteins or their affinity to their binding partners is required. Traditional methods have used semi-empirical force fields, while newer methods employ machine learning of sequence and structural features. Here, we show how combining both of these approaches leads to a marked boost in accuracy. We introduce ELASPIC, a novel ensemble machine learning approach that is able to predict stability effects upon mutation in both domain cores and domain-domain interfaces. We combine semi-empirical energy terms, sequence conservation, and a wide variety of molecular details with a Stochastic Gradient Boosting of Decision Trees (SGB-DT) algorithm. The accuracy of our predictions surpasses existing methods by a considerable margin, achieving correlation coefficients of 0.77 for stability and 0.75 for affinity predictions. Notably, we integrated homology modeling to enable proteome-wide prediction and show that accurate prediction on modeled structures is possible. Lastly, ELASPIC showed significant differences between various types of disease-associated mutations, as well as between disease and common neutral mutations. Unlike pure sequence-based prediction methods that try to predict phenotypic effects of mutations, our predictions unravel the molecular details governing the protein instability, and help us better understand the molecular causes of diseases. PMID:25243403

  18. A STRICTLY CONTRACTIVE PEACEMAN-RACHFORD SPLITTING METHOD FOR CONVEX PROGRAMMING.

    PubMed

    Bingsheng, He; Liu, Han; Wang, Zhaoran; Yuan, Xiaoming

    2014-07-01

    In this paper, we focus on the application of the Peaceman-Rachford splitting method (PRSM) to a convex minimization model with linear constraints and a separable objective function. Compared to the Douglas-Rachford splitting method (DRSM), another splitting method from which the alternating direction method of multipliers originates, PRSM requires more restrictive assumptions to ensure its convergence, while it is always faster whenever it is convergent. We first illustrate that the reason for this difference is that the iterative sequence generated by DRSM is strictly contractive, while that generated by PRSM is only contractive with respect to the solution set of the model. With only the convexity assumption on the objective function of the model under consideration, the convergence of PRSM is not guaranteed. But for this case, we show that the first t iterations of PRSM still enable us to find an approximate solution with an accuracy of O(1/t). A worst-case O(1/t) convergence rate of PRSM in the ergodic sense is thus established under mild assumptions. After that, we suggest attaching an underdetermined relaxation factor with PRSM to guarantee the strict contraction of its iterative sequence and thus propose a strictly contractive PRSM. A worst-case O(1/t) convergence rate of this strictly contractive PRSM in a nonergodic sense is established. We show the numerical efficiency of the strictly contractive PRSM by some applications in statistical learning and image processing.
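
    For the separable model min{ f(x) + g(y) : Ax + By = b } with penalty parameter β > 0, the strictly contractive variant described above attaches a relaxation factor α ∈ (0, 1) to both multiplier updates. The scheme below is a sketch reconstructed from the standard PRSM literature, not copied from the paper.

```latex
\begin{aligned}
x^{k+1}         &= \operatorname*{arg\,min}_{x}\; f(x) - (\lambda^{k})^{\top} A x
                   + \tfrac{\beta}{2}\,\lVert A x + B y^{k} - b \rVert^{2},\\
\lambda^{k+1/2} &= \lambda^{k} - \alpha\beta\,\bigl(A x^{k+1} + B y^{k} - b\bigr),\\
y^{k+1}         &= \operatorname*{arg\,min}_{y}\; g(y) - (\lambda^{k+1/2})^{\top} B y
                   + \tfrac{\beta}{2}\,\lVert A x^{k+1} + B y - b \rVert^{2},\\
\lambda^{k+1}   &= \lambda^{k+1/2} - \alpha\beta\,\bigl(A x^{k+1} + B y^{k+1} - b\bigr).
\end{aligned}
```

    Setting α = 1 recovers the original PRSM, while dropping the intermediate multiplier update gives the DRSM/ADMM-type iteration.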

  19. Automatic sleep stage classification of single-channel EEG by using complex-valued convolutional neural network.

    PubMed

    Zhang, Junming; Wu, Yan

    2018-03-28

    Many systems have been developed for automatic sleep stage classification. However, nearly all models are based on handcrafted features. Because the feature space is large, feature selection is usually required. Meanwhile, designing handcrafted features is a difficult and time-consuming task, because feature design requires the domain knowledge of experienced experts. Results vary when different sets of features are chosen to identify sleep stages. Additionally, many features that we may be unaware of exist, and these features may be important for sleep stage classification. Therefore, a new sleep stage classification system based on a complex-valued convolutional neural network (CCNN) is proposed in this study. Unlike existing sleep stage methods, our method can automatically extract features from raw electroencephalography data and then classify sleep stages based on the learned features. Additionally, we also prove that the decision boundaries for the real and imaginary parts of a complex-valued convolutional neuron intersect orthogonally. The classification performance of handcrafted features is compared with that of features learned via the CCNN. Experimental results show that the proposed method is comparable to existing methods, and that the CCNN obtains better classification performance and a considerably faster convergence speed than a conventional convolutional neural network. Experimental results also show that the proposed method is a useful decision-support tool for automatic sleep stage classification.

  20. Information filtering in evolving online networks

    NASA Astrophysics Data System (ADS)

    Chen, Bo-Lun; Li, Fen-Fen; Zhang, Yong-Jun; Ma, Jia-Lin

    2018-02-01

    Recommender systems use records of users' activities and profiles of both users and products to predict users' future preferences. Considerable work on recommendation algorithms has been published to address problems such as accuracy, diversity, congestion, cold-start, novelty, and coverage. However, most of this research has not considered the temporal effects of the information contained in users' historical data. For example, the segmentation of the training set and test set is usually completely random, which is entirely different from the real scenario in recommender systems. More seriously, all objects are treated as the same, regardless of whether products are new, popular or obsolete, and likewise for users. These data-processing methods lose useful information and mislead the understanding of the system's state. In this paper, we analyzed in detail the difference in network structure between the traditional random division method and the temporal division method on two benchmark data sets, Netflix and MovieLens. Then three classical recommendation algorithms, the Global Ranking method, Collaborative Filtering and the Mass Diffusion method, were employed. The results show that all these algorithms became worse on all four key indicators, ranking score, precision, popularity and diversity, in the temporal scenario. Finally, we design a new recommendation algorithm based on both users' and objects' first appearance time in the system. Experimental results showed that the new algorithm can greatly improve the accuracy and other metrics.
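
    The central data-processing point above is that a temporal train/test split behaves very differently from the conventional random split. The sketch below contrasts the two segmentations on a hypothetical interaction log with timestamps; the column names are assumptions, not the Netflix/MovieLens schema used by the authors.

```python
import pandas as pd

# Hypothetical user-item interaction log with timestamps.
ratings = pd.DataFrame({
    "user": [1, 1, 2, 2, 3, 3, 3, 4],
    "item": [10, 11, 10, 12, 11, 12, 13, 10],
    "ts": pd.to_datetime([
        "2024-01-03", "2024-02-14", "2024-01-20", "2024-03-01",
        "2024-01-05", "2024-02-02", "2024-03-10", "2024-02-25",
    ]),
})

# Random division: 10% of records held out regardless of time.
test_random = ratings.sample(frac=0.1, random_state=0)
train_random = ratings.drop(test_random.index)

# Temporal division: train on everything before a cutoff, test on what follows,
# mirroring how a deployed recommender actually encounters data.
cutoff = ratings["ts"].quantile(0.9)
train_time = ratings[ratings["ts"] <= cutoff]
test_time = ratings[ratings["ts"] > cutoff]
print(len(train_time), len(test_time))
```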

  1. Optimization of diffusion-weighted single-refocused spin-echo EPI by reducing eddy-current artifacts and shortening the echo time.

    PubMed

    Shrestha, Manoj; Hok, Pavel; Nöth, Ulrike; Lienerth, Bianca; Deichmann, Ralf

    2018-03-30

    The purpose of this work was to optimize the acquisition of diffusion-weighted (DW) single-refocused spin-echo (srSE) data without intrinsic eddy-current compensation (ECC) for an improved performance of ECC postprocessing. The rationale is that srSE sequences without ECC may yield shorter echo times (TE) and thus higher signal-to-noise ratios (SNR) than srSE or twice-refocused spin-echo (trSE) schemes with intrinsic ECC. The proposed method employs dummy scans with DW gradients to drive eddy currents into a steady state before data acquisition. Parameters of the ECC postprocessing algorithm were also optimized. Simulations were performed to obtain minimum TE values for the proposed sequence and sequences with intrinsic ECC. Experimentally, the proposed method was compared with standard DW-trSE imaging, both in vitro and in vivo. Simulations showed substantially shorter TE for the proposed method than for methods with intrinsic ECC when using shortened echo readouts. Data of the proposed method showed a marked increase in SNR. A dummy scan duration of at least 1.5 s improved performance of the ECC postprocessing algorithm. Changes proposed for the DW-srSE sequence and for the parameter setting of the postprocessing ECC algorithm considerably reduced eddy-current artifacts and provided a higher SNR.

  2. Method of euthanasia affects amygdala plasticity in horizontal brain slices from mice.

    PubMed

    Kulisch, C; Eckers, N; Albrecht, D

    2011-10-15

    An important consideration in any terminal experiment is the method used for euthanizing animals. Although the prime consideration is that the method is humane, some methods can have a dramatic impact on experimental outcomes. The standard inhalant anesthetic for experiments in brain slices is isoflurane, which replaced the flammable ethers used in the pioneer days of surgery. To our knowledge, there are no data available evaluating the effects of the method of euthanasia on plasticity changes in brain slices. Here, we compare the magnitude of long-term potentiation (LTP) and long-term depression (LTD) in the lateral nucleus of the amygdala (LA) after euthanasia following either ether or isoflurane anesthesia, as well as in mice decapitated without anesthesia. We found no differences in input-output curves using different methods of euthanasia. The LTP magnitude did not differ between ether and normal isoflurane anesthesia. After deep isoflurane anesthesia LTP induced by high frequency stimulation of cortical or intranuclear afferents was significantly reduced compared to ether anesthesia. In contrast to ether anesthesia and decapitation without anesthesia, the low frequency stimulation of cortical afferents induced a reliable LA-LTD after deep isoflurane anesthesia. Low frequency stimulation of intranuclear afferents only caused LTD after pretreatment with ether anesthesia. The results demonstrate that the method of euthanasia can influence brain plasticity for hours at least in the interface chamber. Therefore, the method of euthanasia is an important consideration when brain plasticity will be evaluated. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. REGIONAL ASSESSMENT OF WETLAND ECOLOGICAL CONDITION -- ISSUES AND CHALLENGES

    EPA Science Inventory

    Most work on wetland assessment has focused on the development of methods, however, effective assessment involves more than having a method. In a 2004 review of rapid assessment methods, Fennessey et al. recommended key considerations when adopting existing methods or developing...

  4. Endodontic and Clinical Considerations in the Management of Variable Anatomy in Mandibular Premolars: A Literature Review

    PubMed Central

    Hammo, Mohammad

    2014-01-01

    Mandibular premolars are known to have numerous anatomic variations of their roots and root canals, which are a challenge to treat endodontically. The paper reviews literature to detail the various clinically relevant anatomic considerations with detailed techniques and methods to successfully manage these anomalies. An emphasis and detailed description of every step of treatment including preoperative diagnosis, intraoperative identification and management, and surgical endodontic considerations for the successful management of these complex cases have been included. PMID:24895584

  5. The Effectiveness of Aromatherapy for Depressive Symptoms: A Systematic Review.

    PubMed

    Sánchez-Vidaña, Dalinda Isabel; Ngai, Shirley Pui-Ching; He, Wanjia; Chow, Jason Ka-Wing; Lau, Benson Wui-Man; Tsang, Hector Wing-Hong

    2017-01-01

    Background. Depression is one of the greatest health concerns, affecting 350 million people globally. Aromatherapy is a popular CAM intervention chosen by people with depression. Due to the growing popularity of aromatherapy for alleviating depressive symptoms, in-depth evaluation of the evidence-based clinical efficacy of aromatherapy is urgently needed. Purpose. This systematic review aims to provide an analysis of the clinical evidence on the efficacy of aromatherapy for depressive symptoms in any type of patient. Methods. A systematic database search was carried out using predefined search terms in 5 databases: AMED, CINHAL, CCRCT, MEDLINE, and PsycINFO. Outcome measures included scales measuring depressive symptom levels. Results. Twelve randomized controlled trials were included, and two administration methods for the aromatherapy intervention, inhaled aromatherapy (5 studies) and massage aromatherapy (7 studies), were identified. Seven studies showed improvement in depressive symptoms. Limitations. The quality of half of the included studies is low, and the administration protocols among the studies varied considerably. Different assessment tools were also employed among the studies. Conclusions. Aromatherapy showed potential to be used as an effective therapeutic option for the relief of depressive symptoms in a wide variety of subjects. In particular, aromatherapy massage showed more beneficial effects than inhalation aromatherapy.

  6. The Complex Dynamics of Sponsored Search Markets

    NASA Astrophysics Data System (ADS)

    Robu, Valentin; La Poutré, Han; Bohte, Sander

    This paper provides a comprehensive study of the structure and dynamics of online advertising markets, mostly based on techniques from the emergent discipline of complex systems analysis. First, we look at how the display rank of a URL link influences its click frequency, for both sponsored search and organic search. Second, we study the market structure that emerges from these queries, especially the market share distribution of different advertisers. We show that the sponsored search market is highly concentrated, with less than 5% of all advertisers receiving over 2/3 of the clicks in the market. Furthermore, we show that both the number of ad impressions and the number of clicks follow power law distributions of approximately the same coefficient. However, we find this result does not hold when studying the same distribution of clicks per rank position, which shows considerable variance, most likely due to the way advertisers divide their budget on different keywords. Finally, we turn our attention to how such sponsored search data could be used to provide decision support tools for bidding for combinations of keywords. We provide a method to visualize keywords of interest in graphical form, as well as a method to partition these graphs to obtain desirable subsets of search terms.

  7. Final Report - Regulatory Considerations for Adaptive Systems

    NASA Technical Reports Server (NTRS)

    Wilkinson, Chris; Lynch, Jonathan; Bharadwaj, Raj

    2013-01-01

    This report documents the findings of a preliminary research study into new approaches to the software design assurance of adaptive systems. We suggest a methodology to overcome the software validation and verification difficulties posed by the underlying assumption of non-adaptive software in the requirements-based testing verification methods in RTCA/DO-178B and C. An analysis of the relevant RTCA/DO-178B and C objectives is presented showing the reasons for the difficulties that arise in showing satisfaction of the objectives and suggested additional means by which they could be satisfied. We suggest that the software design assurance problem for adaptive systems is principally one of developing correct and complete high level requirements and system level constraints that define the necessary system functional and safety properties to assure the safe use of adaptive systems. We show how analytical techniques such as model based design, mathematical modeling and formal or formal-like methods can be used to both validate the high level functional and safety requirements, establish necessary constraints and provide the verification evidence for the satisfaction of requirements and constraints that supplements conventional testing. Finally the report identifies the follow-on research topics needed to implement this methodology.

  8. Quantifying the ozone "weekend effect" at various locations in Phoenix, Arizona

    NASA Astrophysics Data System (ADS)

    Atkinson-Palombo, Carol M.; Miller, James A.; Balling, Robert C.

    Analysis of pollution data from a network of monitors in Maricopa County, Arizona, reveals considerable variation in the magnitude of the ozone "weekend effect" depending on how and where it is measured. We used four separate methods to calculate the weekend effect, all of which showed that the phenomenon is stronger in the urban core, where ozone is produced. Spatial linear regressions show that the magnitude of the weekend effect and the goodness of fit of weekly harmonic cycles in ozone are functions of urbanization, described quantitatively using an index of traffic counts, population, and employment within a 4 km buffer zone of monitoring sites. Analysis of diurnal patterns of ozone as well as oxides of nitrogen (NOx) at a representative site in the urban core supports the hypothesis that lower levels of NOx on Sundays reduce the degree to which ozone is titrated, resulting in a higher minimum and hence mean for that day of the week (DOW). Fringe sites, where ozone concentrations are higher in absolute terms than in the urban core, show almost no "weekend effect," regardless of which of the four individual methods we used. Alternative quantification methods show statistically significant DOW differences in ozone levels in urban fringe locations, albeit out of phase with the weekly cycling of ozone in the urban core. Our findings suggest that multiple metrics need to be used to test for the weekend effect and that the causes of DOW differences in ozone concentrations may be location specific.
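
    One of the simplest metrics for a weekend effect of the kind compared above is the difference between mean weekend and mean weekday ozone at a monitor. The sketch below computes that single metric on hypothetical hourly data; the study itself applies four different methods, which are not reproduced here.

```python
import numpy as np
import pandas as pd

# Hypothetical hourly ozone series (ppb) for one monitor over eight weeks.
idx = pd.date_range("2004-06-01", periods=8 * 7 * 24, freq="h")
rng = np.random.default_rng(0)
ozone = pd.Series(50 + 10 * rng.standard_normal(len(idx)), index=idx)

is_weekend = ozone.index.dayofweek >= 5          # Saturday = 5, Sunday = 6
weekend_effect = ozone[is_weekend].mean() - ozone[~is_weekend].mean()
print(f"weekend minus weekday mean ozone: {weekend_effect:.2f} ppb")
```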

  9. Comparison of Relative Bias, Precision, and Efficiency of Sampling Methods for Natural Enemies of Soybean Aphid (Hemiptera: Aphididae).

    PubMed

    Bannerman, J A; Costamagna, A C; McCornack, B P; Ragsdale, D W

    2015-06-01

    Generalist natural enemies play an important role in controlling soybean aphid, Aphis glycines (Hemiptera: Aphididae), in North America. Several sampling methods are used to monitor natural enemy populations in soybean, but there has been little work investigating their relative bias, precision, and efficiency. We compare five sampling methods: quadrats, whole-plant counts, sweep-netting, walking transects, and yellow sticky cards to determine the most practical methods for sampling the three most prominent species, which included Harmonia axyridis (Pallas), Coccinella septempunctata L. (Coleoptera: Coccinellidae), and Orius insidiosus (Say) (Hemiptera: Anthocoridae). We show an important time by sampling method interaction indicated by diverging community similarities within and between sampling methods as the growing season progressed. Similarly, correlations between sampling methods for the three most abundant species over multiple time periods indicated differences in relative bias between sampling methods and suggests that bias is not consistent throughout the growing season, particularly for sticky cards and whole-plant samples. Furthermore, we show that sticky cards produce strongly biased capture rates relative to the other four sampling methods. Precision and efficiency differed between sampling methods and sticky cards produced the most precise (but highly biased) results for adult natural enemies, while walking transects and whole-plant counts were the most efficient methods for detecting coccinellids and O. insidiosus, respectively. Based on bias, precision, and efficiency considerations, the most practical sampling methods for monitoring in soybean include walking transects for coccinellid detection and whole-plant counts for detection of small predators like O. insidiosus. Sweep-netting and quadrat samples are also useful for some applications, when efficiency is not paramount. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. Systematic exploration of unsupervised methods for mapping behavior

    NASA Astrophysics Data System (ADS)

    Todd, Jeremy G.; Kain, Jamey S.; de Bivort, Benjamin L.

    2017-02-01

    To fully understand the mechanisms giving rise to behavior, we need to be able to precisely measure it. When coupled with large behavioral data sets, unsupervised clustering methods offer the potential of unbiased mapping of behavioral spaces. However, unsupervised techniques to map behavioral spaces are in their infancy, and there have been few systematic considerations of all the methodological options. We compared the performance of seven distinct mapping methods in clustering a wavelet-transformed data set consisting of the x- and y-positions of the six legs of individual flies. Legs were automatically tracked by small pieces of fluorescent dye, while the fly was tethered and walking on an air-suspended ball. We find that there is considerable variation in the performance of these mapping methods, and that better performance is attained when clustering is done in higher dimensional spaces (which are otherwise less preferable because they are hard to visualize). High dimensionality means that some algorithms, including the non-parametric watershed cluster assignment algorithm, cannot be used. We developed an alternative watershed algorithm which can be used in high-dimensional spaces when a probability density estimate can be computed directly. With these tools in hand, we examined the behavioral space of fly leg postural dynamics and locomotion. We find a striking division of behavior into modes involving the fore legs and modes involving the hind legs, with few direct transitions between them. By computing behavioral clusters using the data from all flies simultaneously, we show that this division appears to be common to all flies. We also identify individual-to-individual differences in behavior and behavioral transitions. Lastly, we suggest a computational pipeline that can achieve satisfactory levels of performance without the taxing computational demands of a systematic combinatorial approach.

  11. Extending Vulnerability Assessment to Include Life Stages Considerations

    PubMed Central

    Hodgson, Emma E.; Essington, Timothy E.; Kaplan, Isaac C.

    2016-01-01

    Species are experiencing a suite of novel stressors from anthropogenic activities that have impacts at multiple scales. Vulnerability assessment is one tool to evaluate the likely impacts that these stressors pose to species so that high-vulnerability cases can be identified and prioritized for monitoring, protection, or mitigation. Commonly used semi-quantitative methods lack a framework to explicitly account for differences in exposure to stressors and organism responses across life stages. Here we propose a modification to commonly used spatial vulnerability assessment methods that includes such an approach, using ocean acidification in the California Current as an illustrative case study. Life stage considerations were included by assessing vulnerability of each life stage to ocean acidification and were used to estimate population vulnerability in two ways. We set population vulnerability equal to: (1) the maximum stage vulnerability and (2) a weighted mean across all stages, with weights calculated using Lefkovitch matrix models. Vulnerability was found to vary across life stages for the six species explored in this case study: two krill–Euphausia pacifica and Thysanoessa spinifera, pteropod–Limacina helicina, pink shrimp–Pandalus jordani, Dungeness crab–Metacarcinus magister and Pacific hake–Merluccius productus. The maximum vulnerability estimates ranged from larval to subadult and adult stages with no consistent stage having maximum vulnerability across species. Similarly, integrated vulnerability metrics varied greatly across species. A comparison showed that some species had vulnerabilities that were similar between the two metrics, while other species’ vulnerabilities varied substantially between the two metrics. These differences primarily resulted from cases where the most vulnerable stage had a low relative weight. We compare these methods and explore circumstances where each method may be appropriate. PMID:27416031
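
    The two aggregation rules described above reduce to a one-line calculation each; the sketch below uses made-up stage vulnerabilities and weights purely to show the arithmetic, not values from the study.

```python
# Rule (1): maximum stage vulnerability; rule (2): weighted mean across stages,
# with weights that would come from a Lefkovitch matrix model (illustrative here).
import numpy as np

stage_vuln = np.array([0.8, 0.4, 0.3, 0.6])    # e.g. larva, juvenile, subadult, adult
stage_weight = np.array([0.1, 0.2, 0.3, 0.4])  # hypothetical matrix-model weights

v_max = stage_vuln.max()
v_weighted = np.average(stage_vuln, weights=stage_weight)
print(v_max, round(v_weighted, 3))
```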

  12. Extending Vulnerability Assessment to Include Life Stages Considerations.

    PubMed

    Hodgson, Emma E; Essington, Timothy E; Kaplan, Isaac C

    2016-01-01

    Species are experiencing a suite of novel stressors from anthropogenic activities that have impacts at multiple scales. Vulnerability assessment is one tool to evaluate the likely impacts that these stressors pose to species so that high-vulnerability cases can be identified and prioritized for monitoring, protection, or mitigation. Commonly used semi-quantitative methods lack a framework to explicitly account for differences in exposure to stressors and organism responses across life stages. Here we propose a modification to commonly used spatial vulnerability assessment methods that includes such an approach, using ocean acidification in the California Current as an illustrative case study. Life stage considerations were included by assessing vulnerability of each life stage to ocean acidification and were used to estimate population vulnerability in two ways. We set population vulnerability equal to: (1) the maximum stage vulnerability and (2) a weighted mean across all stages, with weights calculated using Lefkovitch matrix models. Vulnerability was found to vary across life stages for the six species explored in this case study: two krill-Euphausia pacifica and Thysanoessa spinifera, pteropod-Limacina helicina, pink shrimp-Pandalus jordani, Dungeness crab-Metacarcinus magister and Pacific hake-Merluccius productus. The maximum vulnerability estimates ranged from larval to subadult and adult stages with no consistent stage having maximum vulnerability across species. Similarly, integrated vulnerability metrics varied greatly across species. A comparison showed that some species had vulnerabilities that were similar between the two metrics, while other species' vulnerabilities varied substantially between the two metrics. These differences primarily resulted from cases where the most vulnerable stage had a low relative weight. We compare these methods and explore circumstances where each method may be appropriate.

  13. Fast algorithms for Quadrature by Expansion I: Globally valid expansions

    NASA Astrophysics Data System (ADS)

    Rachh, Manas; Klöckner, Andreas; O'Neil, Michael

    2017-09-01

    The use of integral equation methods for the efficient numerical solution of PDE boundary value problems requires two main tools: quadrature rules for the evaluation of layer potential integral operators with singular kernels, and fast algorithms for solving the resulting dense linear systems. Classically, these tools were developed separately. In this work, we present a unified numerical scheme based on coupling Quadrature by Expansion, a recent quadrature method, to a customized Fast Multipole Method (FMM) for the Helmholtz equation in two dimensions. The method allows the evaluation of layer potentials in linear-time complexity, anywhere in space, with a uniform, user-chosen level of accuracy as a black-box computational method. Providing this capability requires geometric and algorithmic considerations beyond the needs of standard FMMs as well as careful consideration of the accuracy of multipole translations. We illustrate the speed and accuracy of our method with various numerical examples.

  14. HLA-DR2-associated DRB1 and DRB5 alleles and haplotypes in Koreans.

    PubMed

    Song, E Y; Kang, S J; Lee, Y J; Park, M H

    2000-09-01

    There are considerable racial differences in the distribution of HLA-DR2-associated DRB1 and DRB5 alleles and the characteristics of linkage disequilibrium between these alleles. In this study, the frequencies of DR2-associated DRB1 and DRB5 alleles and related haplotypes were analyzed in 186 DR2-positive individuals out of 800 normal Koreans registered for unrelated bone marrow donors. HLA class I antigen typing was performed by the serological method and DRB1 and DRB5 genotyping by the PCR-single strand conformational polymorphism method. Only 3 alleles were detected for DR2-associated DRB1 and DRB5 genes, respectively: DRB1*1501 (gene frequency 8.0%), *1502 (3.2%), *1602 (0.9%); DRB5*0101 (8.0%), *0102 (3.2%), and *0202 (0.9%). DRB1-DRB5 haplotype analysis showed an exclusive association between these alleles: DRB1*1501-DRB5*0101 (haplotype frequency 8.0%), DRB1*1502-DRB5*0102 (3.2%), and DRB1*1602-DRB5*0202 (0.9%). The 5 most common DR2-associated A-B-DRB1 haplotypes occurring at frequencies of ≥ 0.5% were A24-B52-DRB1*1502 (1.8%), A2-B62-DRB1*1501, A2-B54-DRB1*1501, A26-B61-DRB1*1501, and A24-B51-DRB1*1501. The remarkable homogeneity in the haplotypic associations between DR2-associated DRB1 and DRB5 alleles in Koreans would be advantageous for organ transplantation compared with other ethnic groups showing considerable heterogeneity in the distribution of DRB1-DRB5 haplotypes.

  15. The association between prostatitis and prostate cancer. Systematic review and meta-analysis.

    PubMed

    Perletti, Gianpaolo; Monti, Elena; Magri, Vittorio; Cai, Tommaso; Cleves, Anne; Trinchieri, Alberto; Montanari, Emanuele

    2017-12-31

    The main outcome of this review was the association between a history of clinical chronic prostatitis (NIH category II or III) and a histologically confirmed diagnosis of prostate cancer. Crude odds ratios and 95% confidence intervals (CI) were calculated to analyze dichotomous data. For analysis of pooled data we adopted a random-effects model and the inverse variance weighting method. Heterogeneity was assessed by calculating the I² value. Out of 2794 screened records, we retrieved 16 full-text articles written in English, reporting the data of 15 case-control studies involving 422,943 patients. Pooled analysis resulted in a significant crude odds ratio of 1.83 (95% CI: 1.43 to 2.35; P < 0.00001). The total set of data showed considerable heterogeneity (I² = 91%). Both Egger's test and Begg's test for funnel plot asymmetry did not reach statistical significance. The 'trim and fill' method applied to the funnel plot imputed 3 missing studies and the resulting adjusted estimate of the odds ratio was 2.12 (95% CI: 1.38 to 3.22). According to GRADE criteria, the overall quality of the meta-analysis data is low, mainly due to the presence of bias, confounders and extreme effect size outliers. Five of the included studies reported data assessed in 8015 African-American subjects. Pooled analysis resulted in a non-significant crude odds ratio of 1.59 (95% CI: 0.71 to 3.57; P = 0.26), and considerable heterogeneity (I² = 90%). Meta-analysis of 15 case-control studies shows that a history of clinical chronic prostatitis can significantly increase the odds for prostate cancer in the general population, whereas such association in African-American individuals remains uncertain.
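
    For readers unfamiliar with the pooling machinery mentioned above, the following sketch implements inverse-variance weighting with a DerSimonian-Laird random-effects adjustment and the I² statistic on placeholder 2x2 tables; the counts are invented and the routine is a generic textbook version, not the review's analysis code.

```python
# Generic random-effects pooling of odds ratios (DerSimonian-Laird), with I².
import numpy as np

def pool_odds_ratios(tables):
    """tables: list of (a, b, c, d) = case-exposed, control-exposed,
    case-unexposed, control-unexposed counts."""
    log_or = np.array([np.log((a * d) / (b * c)) for a, b, c, d in tables])
    var = np.array([1/a + 1/b + 1/c + 1/d for a, b, c, d in tables])
    w = 1.0 / var                                     # fixed-effect weights
    theta_fe = np.sum(w * log_or) / w.sum()
    q = np.sum(w * (log_or - theta_fe) ** 2)          # Cochran's Q
    df = len(tables) - 1
    i2 = 100.0 * max(0.0, (q - df) / q)               # heterogeneity
    tau2 = max(0.0, (q - df) / (w.sum() - np.sum(w**2) / w.sum()))
    w_re = 1.0 / (var + tau2)                         # random-effects weights
    pooled = np.sum(w_re * log_or) / w_re.sum()
    se = np.sqrt(1.0 / w_re.sum())
    return np.exp(pooled), np.exp([pooled - 1.96 * se, pooled + 1.96 * se]), i2

tables = [(40, 60, 30, 70), (55, 45, 35, 65), (25, 75, 20, 80)]
print(pool_odds_ratios(tables))
```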

  16. Estimation of biomedical optical properties by simultaneous use of diffuse reflectometry and photothermal radiometry: investigation of light propagation models

    NASA Astrophysics Data System (ADS)

    Fonseca, E. S. R.; de Jesus, M. E. P.

    2007-07-01

    The estimation of optical properties of highly turbid and opaque biological tissue is a difficult task since conventional purely optical methods rapidly lose sensitivity as the mean photon path length decreases. Photothermal methods, such as pulsed or frequency domain photothermal radiometry (FD-PTR), on the other hand, show remarkable sensitivity in experimental conditions that produce very feeble optical signals. Photothermal radiometry is primarily sensitive to the absorption coefficient, yielding considerably higher estimation errors for scattering coefficients. Conversely, purely optical methods such as Local Diffuse Reflectance (LDR) depend mainly on the scattering coefficient and yield much better estimates of this parameter. Therefore, at moderate transport albedos, the combination of photothermal and reflectance methods can considerably improve the sensitivity of detection of tissue optical properties. The authors have recently proposed a novel method that combines FD-PTR with LDR, aimed at improving sensitivity in the determination of both optical properties. Signal analysis was performed by globally fitting the experimental data to forward models based on Monte-Carlo simulations. Although this approach is accurate, the associated computational burden often limits its use as a forward model. Therefore, the application of analytical models based on the diffusion approximation offers a faster alternative. In this work, we propose the calculation of the diffuse reflectance and the fluence rate profiles under the δ-P1 approximation. This approach is known to approximate fluence rate expressions close to collimated sources and boundaries better than the standard diffusion approximation (SDA). We extend this study to the calculation of the diffuse reflectance profiles. The ability of the δ-P1-based model to provide good estimates of the absorption, scattering and anisotropy coefficients is tested against Monte-Carlo simulations over a wide range of scattering to absorption ratios. Experimental validation of the proposed method is accomplished by a set of measurements on solid absorbing and scattering phantoms.

  17. EPA Method 1615. Measurement of Enterovirus and Norovirus Occurrence in Water by Culture and RT-qPCR. I. Collection of Virus Samples

    PubMed Central

    Fout, G. Shay; Cashdollar, Jennifer L.; Varughese, Eunice A.; Parshionikar, Sandhya U.; Grimm, Ann C.

    2015-01-01

    EPA Method 1615 was developed with a goal of providing a standard method for measuring enteroviruses and noroviruses in environmental and drinking waters. The standardized sampling component of the method concentrates viruses that may be present in water by passage of a minimum specified volume of water through an electropositive cartridge filter. The minimum specified volumes for surface and finished/ground water are 300 L and 1,500 L, respectively. A major method limitation is the tendency for the filters to clog before meeting the sample volume requirement. Studies using two different, but equivalent, cartridge filter options showed that filter clogging was a problem with 10% of the samples with one of the filter types compared to 6% with the other filter type. Clogging tends to increase with turbidity, but cannot be predicted based on turbidity measurements only. From a cost standpoint one of the filter options is preferable over the other, but the water quality and experience with the water system to be sampled should be taken into consideration in making filter selections. PMID:25867928

  18. Machine learning approaches for estimation of prediction interval for the model output.

    PubMed

    Shrestha, Durga L; Solomatine, Dimitri P

    2006-03-01

    A novel method for estimating prediction uncertainty using machine learning techniques is presented. Uncertainty is expressed in the form of the two quantiles (constituting the prediction interval) of the underlying distribution of prediction errors. The idea is to partition the input space into different zones or clusters having similar model errors using fuzzy c-means clustering. The prediction interval is constructed for each cluster on the basis of the empirical distributions of the errors associated with all instances belonging to the cluster under consideration and propagated from each cluster to the examples according to their membership grades in each cluster. Then a regression model is built for in-sample data using the computed prediction limits as targets, and finally, this model is applied to estimate the prediction intervals (limits) for out-of-sample data. The method was tested on artificial and real hydrologic data sets using various machine learning techniques. Preliminary results show that the method is superior to other methods for estimating the prediction interval. A new method for evaluating the performance of prediction interval estimation is proposed as well.
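
    A stripped-down version of the clustered-interval idea is sketched below: model errors are grouped by crisp k-means (standing in for the fuzzy c-means step the authors use), and each cluster's interval is taken from empirical error quantiles; the data and parameter choices are illustrative only.

```python
# Simplified sketch: per-cluster empirical prediction intervals.
import numpy as np
from sklearn.cluster import KMeans

def cluster_prediction_intervals(X, errors, n_clusters=3, alpha=0.1, seed=0):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    limits = {}
    for c in range(n_clusters):
        e = errors[km.labels_ == c]
        limits[c] = (np.quantile(e, alpha / 2), np.quantile(e, 1 - alpha / 2))
    return km, limits

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
errors = rng.normal(scale=1 + np.abs(X[:, 0]))   # heteroscedastic errors
km, limits = cluster_prediction_intervals(X, errors)
print(limits)
```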

  19. Application of closed-form solutions to a mesh point field in silicon solar cells

    NASA Technical Reports Server (NTRS)

    Lamorte, M. F.

    1985-01-01

    A computer simulation method is discussed that provides for equivalent simulation accuracy, but that exhibits significantly lower CPU running time per bias point compared to other techniques. This new method is applied to a mesh point field as is customary in numerical integration (NI) techniques. The assumption of a linear approximation for the dependent variable, which is typically used in the finite difference and finite element NI methods, is not required. Instead, the set of device transport equations is applied to, and the closed-form solutions obtained for, each mesh point. The mesh point field is generated so that the coefficients in the set of transport equations exhibit small changes between adjacent mesh points. Application of this method to high-efficiency silicon solar cells is described, along with the treatment of Auger recombination, ambipolar considerations, built-in and induced electric fields, bandgap narrowing, carrier confinement, and carrier diffusivities. Bandgap narrowing has been investigated using Fermi-Dirac statistics, and these results show that bandgap narrowing is more pronounced and that it is temperature-dependent, in contrast to the results based on Boltzmann statistics.

  20. Crystallization mosaic effect generation by superpixels

    NASA Astrophysics Data System (ADS)

    Xie, Yuqi; Bo, Pengbo; Yuan, Ye; Wang, Kuanquan

    2015-03-01

    Art effect generation from digital images using computational tools has been a hot research topic in recent years. We propose a new method for generating crystallization mosaic effects from color images. Two key problems in generating a pleasant mosaic effect are studied: grouping pixels into mosaic tiles and the arrangement of mosaic tiles adapting to image features. To give a visually pleasant mosaic effect, we propose to create mosaic tiles by clustering pixels in the feature space of color information, taking the compactness of tiles into consideration as well. Moreover, we propose a method for processing feature boundaries in images which gives guidance for arranging mosaic tiles near image features. This method gives nearly uniform mosaic tile shapes, adapting to feature lines in an esthetic way. The new approach considers both color distance and Euclidean distance of pixels, and thus is capable of giving mosaic tiles in a more pleasing manner. Some experiments are included to demonstrate the computational efficiency of the present method and its capability of generating visually pleasant mosaic tiles. Comparisons with existing approaches are also included to show the superiority of the new method.
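
    A crude analogue of the pipeline can be put together with off-the-shelf superpixels, which also cluster pixels jointly in color and spatial coordinates; the sketch below fills each SLIC superpixel with its mean color to mimic a mosaic tile, but it omits the paper's feature-boundary handling and is not the authors' algorithm.

```python
# Crude mosaic effect: SLIC superpixels filled with their mean color.
from skimage import data, segmentation, color

image = data.astronaut()                     # any RGB image
segments = segmentation.slic(image, n_segments=800, compactness=20)
mosaic = color.label2rgb(segments, image, kind="avg")   # mean color per tile
print(mosaic.shape, int(segments.max()) + 1, "tiles")
```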

  1. Orbital-Optimized MP3 and MP2.5 with Density-Fitting and Cholesky Decomposition Approximations.

    PubMed

    Bozkaya, Uğur

    2016-03-08

    Efficient implementations of the orbital-optimized MP3 and MP2.5 methods with the density-fitting (DF-OMP3 and DF-OMP2.5) and Cholesky decomposition (CD-OMP3 and CD-OMP2.5) approaches are presented. The DF/CD-OMP3 and DF/CD-OMP2.5 methods are applied to a set of alkanes to compare the computational cost with the conventional orbital-optimized MP3 (OMP3) [Bozkaya J. Chem. Phys. 2011, 135, 224103] and the orbital-optimized MP2.5 (OMP2.5) [Bozkaya and Sherrill J. Chem. Phys. 2014, 141, 204105]. Our results demonstrate that the DF-OMP3 and DF-OMP2.5 methods provide considerably lower computational costs than OMP3 and OMP2.5. Further application results show that the orbital-optimized methods are very helpful for the study of open-shell noncovalent interactions, aromatic bond dissociation energies, and hydrogen transfer reactions. We conclude that the DF-OMP3 and DF-OMP2.5 methods are very promising for molecular systems with challenging electronic structures.

  2. Data Delivery Method Based on Neighbor Nodes' Information in a Mobile Ad Hoc Network

    PubMed Central

    Hayashi, Takuma; Taenaka, Yuzo; Okuda, Takeshi; Yamaguchi, Suguru

    2014-01-01

    This paper proposes a data delivery method based on neighbor nodes' information to achieve reliable communication in a mobile ad hoc network (MANET). In a MANET, it is difficult to deliver data reliably due to instabilities in network topology and wireless network condition which result from node movement. To overcome such unstable communication, opportunistic routing and network coding schemes have lately attracted considerable attention. Although an existing method that employs such schemes, MAC-independent opportunistic routing and encoding (MORE), Chachulski et al. (2007), improves the efficiency of data delivery in an unstable wireless mesh network, it does not address node movement. To efficiently deliver data in a MANET, the method proposed in this paper thus first employs the same opportunistic routing and network coding used in MORE and also uses the location information and transmission probabilities of neighbor nodes to adapt to changeable network topology and wireless network condition. The simulation experiments showed that the proposed method can achieve efficient data delivery with low network load when the movement speed is relatively slow. PMID:24672371

  3. Sparse feature learning for instrument identification: Effects of sampling and pooling methods.

    PubMed

    Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu

    2016-05-01

    Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification and, in particular, focuses on the effects of the frame sampling techniques for dictionary learning and the pooling methods for feature aggregation. To this end, two frame sampling techniques are examined: fixed and proportional random sampling. Furthermore, the effect of using onset frames was analyzed for both of the proposed sampling methods. Regarding summarization of the feature activations, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47 000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are examined, including the analysis frame size, the dictionary size, and the type of frequency scaling, as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieves the best overall performance of 95.62%, while the optimal parameter set varies among the instrument classes.
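
    The three pooling options compared above differ only in the statistic taken over the frame axis; the toy sketch below makes that explicit on a random (frames x features) activation matrix.

```python
# Max-, average- and standard-deviation pooling over frame-wise activations.
import numpy as np

def pool(activations, method="std"):
    if method == "max":
        return activations.max(axis=0)
    if method == "avg":
        return activations.mean(axis=0)
    if method == "std":
        return activations.std(axis=0)       # standard-deviation pooling
    raise ValueError(method)

rng = np.random.default_rng(0)
activations = rng.random((200, 64))          # e.g. 200 frames, 64 sparse features
print(pool(activations, "std").shape)        # one fixed-length vector per clip
```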

  4. Multi-body modeling method for rollover using MADYMO

    NASA Astrophysics Data System (ADS)

    Liu, Changye; Lin, Zhigui; Lv, Juncheng; Luo, Qinyue; Qin, Zhenyao; Zhang, Pu; Chen, Tao

    2017-04-01

    Rollovers are complex road accidents that cause a large number of fatalities. An FE model for rollover studies requires too much computation time because of the long duration of the event. A new multi-body modeling method is proposed in this paper which saves a great deal of time while retaining high fidelity. The following work was carried out to validate this new method. First, a small van was tested following the FMVSS 208 protocol for the validation of the proposed modeling method. Second, a MADYMO model of this small van was reconstructed. The vehicle body was divided into two main parts, the deformable upper body and the rigid lower body, modeled in different ways based on an FE model. The specific modeling method is described in this paper. Finally, the trajectories of the vehicle from test and simulation were compared and the match was very good. The acceleration of the left B pillar was also considered, which fit the test result well over the duration of the event. The final deformation of the vehicle in test and simulation showed a similar trend. This validated model provides a reliable basis for further research on occupant injuries during rollovers.

  5. Data delivery method based on neighbor nodes' information in a mobile ad hoc network.

    PubMed

    Kashihara, Shigeru; Hayashi, Takuma; Taenaka, Yuzo; Okuda, Takeshi; Yamaguchi, Suguru

    2014-01-01

    This paper proposes a data delivery method based on neighbor nodes' information to achieve reliable communication in a mobile ad hoc network (MANET). In a MANET, it is difficult to deliver data reliably due to instabilities in network topology and wireless network condition which result from node movement. To overcome such unstable communication, opportunistic routing and network coding schemes have lately attracted considerable attention. Although an existing method that employs such schemes, MAC-independent opportunistic routing and encoding (MORE), Chachulski et al. (2007), improves the efficiency of data delivery in an unstable wireless mesh network, it does not address node movement. To efficiently deliver data in a MANET, the method proposed in this paper thus first employs the same opportunistic routing and network coding used in MORE and also uses the location information and transmission probabilities of neighbor nodes to adapt to changeable network topology and wireless network condition. The simulation experiments showed that the proposed method can achieve efficient data delivery with low network load when the movement speed is relatively slow.

  6. Silicon solar cell process development, fabrication and analysis

    NASA Technical Reports Server (NTRS)

    Yoo, H. I.; Iles, P. A.; Leung, D. C.

    1981-01-01

    Solar cells were fabricated from EFG ribbons, dendritic webs, cast ingots grown by the heat exchanger method (HEM), and cast ingots grown by the ubiquitous crystallization process (UCP). Baseline and other process variations were applied to fabricate solar cells. EFG ribbons grown in a carbon-containing gas atmosphere showed significant improvement in silicon quality. Baseline solar cells from dendritic webs of various runs indicated that the quality of the webs under investigation was not as good as conventional CZ silicon, showing an average minority carrier diffusion length of about 60 μm versus 120 μm for CZ wafers. Detailed evaluation of large cast ingots by HEM showed ingot reproducibility problems from run to run and uniformity problems of sheet quality within an ingot. Initial evaluation of the wafers prepared from the cast polycrystalline ingots by UCP suggested that the quality of the wafers from this process is considerably lower than that of conventional CZ wafers. Overall performance was relatively uniform, except for a few cells which showed shunting problems caused by inclusions.

  7. Identification of robust statistical downscaling methods based on a comprehensive suite of performance metrics for South Korea

    NASA Astrophysics Data System (ADS)

    Eum, H. I.; Cannon, A. J.

    2015-12-01

    Climate models are a key provider for investigating the impacts of projected future climate conditions on regional hydrologic systems. However, there is a considerable mismatch in spatial resolution between GCMs and regional applications, in particular in a region characterized by complex terrain such as the Korean peninsula. Therefore, a downscaling procedure is essential for assessing regional impacts of climate change. Numerous statistical downscaling methods have been used, mainly due to their computational efficiency and simplicity. In this study, four statistical downscaling methods [Bias-Correction/Spatial Disaggregation (BCSD), Bias-Correction/Constructed Analogue (BCCA), Multivariate Adaptive Constructed Analogs (MACA), and Bias-Correction/Climate Imprint (BCCI)] are applied to downscale the latest Climate Forecast System Reanalysis data to stations for precipitation, maximum temperature, and minimum temperature over South Korea. Using a split-sampling scheme, all methods are calibrated with observational station data for the 19 years from 1973 to 1991 and tested on the recent 19 years from 1992 to 2010. To assess the skill of the downscaling methods, we construct a comprehensive suite of performance metrics that measure the ability to reproduce temporal correlation, distribution, spatial correlation, and extreme events. In addition, we employ the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to identify robust statistical downscaling methods based on the performance metrics for each season. The results show that downscaling skill is considerably affected by the skill of CFSR and that all methods lead to large improvements in representing all performance metrics. According to the seasonal performance metrics evaluated, when TOPSIS is applied, MACA is identified as the most reliable and robust method for all variables and seasons. Note that this result is derived from CFSR output, which is recognized as near-perfect climate data in climate studies. Therefore, the ranking in this study may change when various GCMs are downscaled and evaluated. Nevertheless, it may be informative for end-users (i.e. modelers or water resources managers) in understanding and selecting more suitable downscaling methods corresponding to priorities in regional applications.
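
    The TOPSIS step mentioned above is a compact calculation once a (methods x metrics) score matrix is in hand; the sketch below uses invented scores and weights purely to show the mechanics, not results from the study.

```python
# Plain TOPSIS ranking over a (methods x metrics) score matrix.
import numpy as np

def topsis(scores, weights, benefit):
    """benefit[j] is True if larger values of metric j are better."""
    v = scores / np.linalg.norm(scores, axis=0) * weights   # normalize and weight
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)                     # closeness to ideal

scores = np.array([[0.80, 0.10], [0.70, 0.15], [0.90, 0.08], [0.75, 0.12]])
closeness = topsis(scores, weights=np.array([0.6, 0.4]),
                   benefit=np.array([True, False]))          # e.g. correlation, RMSE
print(closeness.argsort()[::-1])   # method indices ranked from best to worst
```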

  8. The Base 32 Method: An Improved Method for Coding Sibling Constellations.

    ERIC Educational Resources Information Center

    Perfetti, Lawrence J. Carpenter

    1990-01-01

    Offers new sibling constellation coding method (Base 32) for genograms using binary and base 32 numbers that saves considerable microcomputer memory. Points out that new method will result in greater ability to store and analyze larger amounts of family data. (Author/CM)

  9. Multicriteria Personnel Selection by the Modified Fuzzy VIKOR Method

    PubMed Central

    Alguliyev, Rasim M.; Aliguliyev, Ramiz M.; Mahmudova, Rasmiyya S.

    2015-01-01

    Personnel evaluation is an important process in human resource management. Its multicriteria nature and the presence of both qualitative and quantitative factors make it considerably more complex. In this study, a fuzzy hybrid multicriteria decision-making (MCDM) model is proposed for personnel evaluation. This model solves the personnel evaluation problem in a fuzzy environment where both criteria and weights can be fuzzy sets. Triangular fuzzy numbers are used to evaluate the suitability of personnel and the approximate reasoning of linguistic values. For evaluation, we have selected five information culture criteria. The weights of the criteria were calculated using the worst-case method. After that, a modified fuzzy VIKOR is proposed to rank the alternatives. The outcome of this research is the ranking and selection of the best alternative with the help of the fuzzy VIKOR and modified fuzzy VIKOR techniques. A comparative analysis of results by the fuzzy VIKOR and modified fuzzy VIKOR methods is presented. Experiments showed that the proposed modified fuzzy VIKOR method has some advantages over the fuzzy VIKOR method. Firstly, from a computational complexity point of view, the presented model is effective. Secondly, it offers a clear advantage in acceptability compared to the fuzzy VIKOR method. PMID:26516634
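
    For orientation, the crisp (non-fuzzy) VIKOR ranking underlying the approach is sketched below with placeholder scores; the paper itself works with triangular fuzzy numbers, which this plain-number version does not capture.

```python
# Crisp VIKOR ranking sketch (lower Q is better).
import numpy as np

def vikor(scores, weights, benefit, v=0.5):
    best = np.where(benefit, scores.max(axis=0), scores.min(axis=0))
    worst = np.where(benefit, scores.min(axis=0), scores.max(axis=0))
    d = weights * (best - scores) / (best - worst)   # normalized regret per criterion
    S, R = d.sum(axis=1), d.max(axis=1)              # group utility and individual regret
    return v * (S - S.min()) / (S.max() - S.min()) + \
           (1 - v) * (R - R.min()) / (R.max() - R.min())

scores = np.array([[7.0, 8.0, 6.5], [8.5, 6.0, 7.0], [6.0, 7.5, 8.0]])
Q = vikor(scores, weights=np.array([0.4, 0.3, 0.3]),
          benefit=np.array([True, True, True]))
print(Q.argsort())   # candidate indices ranked from best to worst
```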

  10. Estimation of parameters in rational reaction rates of molecular biological systems via weighted least squares

    NASA Astrophysics Data System (ADS)

    Wu, Fang-Xiang; Mu, Lei; Shi, Zhong-Ke

    2010-01-01

    The models of gene regulatory networks are often derived from the statistical thermodynamics principle or the Michaelis-Menten kinetics equation. As a result, the models contain rational reaction rates which are nonlinear in both parameters and states. It is challenging to estimate parameters that enter a model nonlinearly, although there are many traditional nonlinear parameter estimation methods such as the Gauss-Newton iteration method and its variants. In this article, we develop a two-step method to estimate the parameters in rational reaction rates of gene regulatory networks via weighted linear least squares. This method takes the special structure of rational reaction rates into consideration. That is, in the rational reaction rates, the numerator and the denominator are linear in the parameters. By designing a special weight matrix for the linear least squares, parameters in the numerator and the denominator can be estimated by solving two linear least squares problems. The main advantage of the developed method is that it can produce analytical solutions for the estimation of parameters in rational reaction rates, which is originally a nonlinear parameter estimation problem. The developed method is applied to a couple of gene regulatory networks. The simulation results show superior performance over the Gauss-Newton method.
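
    The key observation that makes the two-step estimation possible is that a rational rate becomes linear in its parameters after multiplying through by the denominator. The sketch below shows this for a Michaelis-Menten-type rate with plain least squares on simulated data; it illustrates the linearization only and does not reproduce the authors' specific weight-matrix construction.

```python
# Linearizing v = a*x/(b + x): v*b + v*x = a*x, so v = (a/b)*x - (1/b)*(x*v).
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 2.0, 0.5
x = np.linspace(0.1, 5.0, 50)
v = a_true * x / (b_true + x) + rng.normal(scale=0.01, size=x.size)

A = np.column_stack([x, x * v])              # design matrix of the linearized form
theta, *_ = np.linalg.lstsq(A, v, rcond=None)
b_hat = -1.0 / theta[1]
a_hat = theta[0] * b_hat
print(round(a_hat, 3), round(b_hat, 3))      # recovers roughly (2.0, 0.5)
```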

  11. Efficient, graph-based white matter connectivity from orientation distribution functions via multi-directional graph propagation

    NASA Astrophysics Data System (ADS)

    Boucharin, Alexis; Oguz, Ipek; Vachet, Clement; Shi, Yundi; Sanchez, Mar; Styner, Martin

    2011-03-01

    The use of regional connectivity measurements derived from diffusion imaging datasets has become of considerable interest in the neuroimaging community in order to better understand cortical and subcortical white matter connectivity. Current connectivity assessment methods are based on streamline fiber tractography, usually applied in a Monte-Carlo fashion. In this work we present a novel, graph-based method that performs a fully deterministic, efficient and stable connectivity computation. The method handles crossing fibers and deals well with multiple seed regions. The computation is based on a multi-directional graph propagation method applied to the sampled orientation distribution function (ODF), which can be computed directly from the original diffusion imaging data. We show early results of our method on synthetic and real datasets. The results illustrate the potential of our method towards subject-specific connectivity measurements that are performed in an efficient, stable and reproducible manner. Such individual connectivity measurements would be well suited for application in population studies of neuropathology, such as Autism, Huntington's Disease, Multiple Sclerosis or leukodystrophies. The proposed method is generic and could easily be applied to non-diffusion data as long as local directional data can be derived.

  12. Multiple imputation of rainfall missing data in the Iberian Mediterranean context

    NASA Astrophysics Data System (ADS)

    Miró, Juan Javier; Caselles, Vicente; Estrela, María José

    2017-11-01

    Given the increasing need for complete rainfall data networks, diverse methods for filling gaps in observed precipitation series have been proposed in recent years, progressively more advanced than traditional approaches. The present study validates 10 methods (6 linear, 2 non-linear and 2 hybrid) that allow multiple imputation, i.e., filling at the same time the missing data of multiple incomplete series in a dense network of neighboring stations. These were applied to daily and monthly rainfall in two sectors in the Júcar River Basin Authority (east Iberian Peninsula), which is characterized by high spatial irregularity and difficulty of rainfall estimation. A classification of precipitation according to its genetic origin was applied as pre-processing, and quantile-mapping adjustment as a post-processing technique. The results showed in general a better performance for the non-linear and hybrid methods, highlighting that the non-linear PCA (NLPCA) method considerably outperforms the Self Organizing Maps (SOM) method among non-linear approaches. Among linear methods, the Regularized Expectation Maximization method (RegEM) was the best, but far from NLPCA. Applying EOF filtering as post-processing of NLPCA (hybrid approach) yielded the best results.

  13. Novel ratio difference at coabsorptive point spectrophotometric method for determination of components with wide variation in their absorptivities.

    PubMed

    Saad, Ahmed S; Abo-Talib, Nisreen F; El-Ghobashy, Mohamed R

    2016-01-05

    Different methods have been introduced to enhance the selectivity of UV-spectrophotometry, thus enabling accurate determination of co-formulated components; however, mixtures whose components exhibit wide variation in absorptivities have been an obstacle to the application of UV-spectrophotometry. The developed ratio difference at coabsorptive point method (RDC) represents a simple, effective solution for the mentioned problem, where the additive property of light absorbance enabled the consideration of the two components as multiples of the lower-absorptivity component at a certain wavelength (coabsorptive point), at which their total concentration multiples could be determined, whereas the other component was selectively determined by applying the ratio difference method in a single step. The mixture of perindopril arginine (PA) and amlodipine besylate (AM) exemplifies this problem, where the low absorptivity of PA relative to AM hinders selective spectrophotometric determination of PA. The developed method successfully determined both components in the overlapped region of their spectra with accuracies of 99.39±1.60 and 100.51±1.21 for PA and AM, respectively. The method was validated as per the USP guidelines and showed no significant difference upon statistical comparison with a reported chromatographic method. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Guidelines for human epidermal growth factor receptor 2 testing: biologic and methodologic considerations.

    PubMed

    Sauter, Guido; Lee, James; Bartlett, John M S; Slamon, Dennis J; Press, Michael F

    2009-03-10

    The goal of this review is to systematically address a number of issues raised in the American Society of Clinical Oncology-College of American Pathologists (ASCO-CAP) guidelines on testing for the human epidermal growth factor receptor 2 (HER-2) alteration. A group of investigators who are experienced in the conduct and interpretation of HER-2 assay methods reviewed the ASCO-CAP guidelines and address several areas of the HER-2 testing guidelines with a particular emphasis on biologic and methodologic considerations. Although HER-2 status determined by immunohistochemistry (IHC) and the status determined by fluorescent in situ hybridization (FISH) are significantly correlated, we feel that standard considerations of laboratory testing, including test accuracy, reproducibility, and precision, as well as the current data favor FISH over IHC assay methods for determining HER-2 status. These considerations are clearly important in clinical practice because HER2 amplification is directly linked to protein expression levels in breast cancer. However, this protein is not consistently analyzed in formalin-fixed tissues as a result of variability in fixation methods and times and the impact of fixation on HER-2 protein antigenicity. Conversely, gene amplification and FISH are significantly less dependent on tissue fixation methods, making this assay more reproducible between central and peripheral laboratories than IHC. Moreover, review of the existing data demonstrate that FISH is more strongly correlated with responsiveness to either trastuzumab or lapatinib treatment. Until other methods achieve similar test accuracy, reproducibility, and predictive value, we suggest FISH as the primary HER-2 testing modality for women with breast cancer who are candidates for HER-2-targeted therapies.

  15. Agglomeration of Celecoxib by Quasi Emulsion Solvent Diffusion Method: Effect of Stabilizer.

    PubMed

    Maghsoodi, Maryam; Nokhodchi, Ali

    2016-12-01

    Purpose: The quasi-emulsion solvent diffusion (QESD) method has evolved into an effective technique to manufacture agglomerates of API crystals. Although the technique offers benefits such as cost effectiveness, it is considerably sensitive to the choice of stabilizer, an aspect that suffers from a lack of systematic understanding. In the present study, combinations of different solvents and stabilizers were compared to investigate any connections between the solvents and stabilizers. Methods: Agglomerates of celecoxib were prepared by the QESD method using four different stabilizers (Tween 80, HPMC, PVP and SLS) and three different solvents (methyl acetate, ethyl acetate and isopropyl acetate). The solid state of the obtained particles was investigated by differential scanning calorimetry (DSC) and Fourier transform infrared (FT-IR) spectroscopy. The agglomerates were also evaluated in terms of production yield, particle distribution and dissolution behavior. Results: The results showed that the effectiveness of a stabilizer in terms of particle size and particle size distribution is specific to each solvent candidate. A stabilizer with a lower HLB value is preferred, and its effectiveness increased with the solvent candidates of higher lipophilicity. HPMC appeared to be the most versatile stabilizer because it showed a better stabilizing effect than the other stabilizers in all solvents used. Conclusion: This study demonstrated that the efficiency of stabilizers in forming celecoxib agglomerates by QESD was influenced by the HLB of the stabilizer and the lipophilicity of the solvents.

  16. Progress in performance enhancement methods for capacitive silicon resonators

    NASA Astrophysics Data System (ADS)

    Van Toan, Nguyen; Ono, Takahito

    2017-11-01

    In this paper, we review the progress in recent studies on the performance enhancement methods for capacitive silicon resonators. We provide information on various fabrication technologies and design considerations that can be employed to improve the performance of capacitive silicon resonators, including low motional resistance, small insertion loss, and high quality factor (Q). This paper contains an overview of device structures and working principles, fabrication technologies consisting of hermetic packaging, deep reactive-ion etching and neutral beam etching, and design considerations including mechanically coupled, movable electrode structures and piezoresistive heat engines.

  17. Cold Pad-Batch dyeing method for cotton fabric dyeing with reactive dyes using ultrasonic energy.

    PubMed

    Khatri, Zeeshan; Memon, Muhammad Hanif; Khatri, Awais; Tanwari, Anwaruddin

    2011-11-01

    Reactive dyes are widely used in the dyeing and printing of cotton fibre. These dyes have a distinctive reactive nature due to active groups which form covalent bonds with the -OH groups of cotton through substitution and/or addition mechanisms. Among the many methods used for dyeing cotton with reactive dyes, the Cold Pad Batch (CPB) method is relatively more environmentally friendly due to high dye fixation and no requirement for thermal energy. The dyed fabric production rate is low, however, because at least twelve hours of batching time is required for dye fixation. The proposed CPB method for dyeing cotton involves ultrasonic energy, resulting in a one-third decrease in batching time. The dyeing of cotton fibre was carried out with CI reactive red 195 and CI reactive black 5 by the conventional and the ultrasonic (US) method. The study showed that the use of ultrasonic energy not only shortens the batching time but also allows the alkali concentrations to be considerably reduced. In this case, the colour strength (K/S) and dye fixation (%F) also improve without any adverse effect on the colour fastness of the dyed fabric. Examination of the dyed fibre surface using a scanning electron microscope (SEM) showed relative straightening of fibre convolutions and significant swelling of the fibre upon ultrasonic application. The total colour difference values ΔE (CMC) for the proposed method were found to be in close proximity to the conventionally dyed sample. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Quantifying natural delta variability using a multiple-point geostatistics prior uncertainty model

    NASA Astrophysics Data System (ADS)

    Scheidt, Céline; Fernandes, Anjali M.; Paola, Chris; Caers, Jef

    2016-10-01

    We address the question of quantifying uncertainty associated with autogenic pattern variability in a channelized transport system by means of a modern geostatistical method. This question has considerable relevance for practical subsurface applications as well, particularly those related to uncertainty quantification relying on Bayesian approaches. Specifically, we show how the autogenic variability in a laboratory experiment can be represented and reproduced by a multiple-point geostatistical prior uncertainty model. The latter geostatistical method requires selection of a limited set of training images from which a possibly infinite set of geostatistical model realizations, mimicking the training image patterns, can be generated. To that end, we investigate two methods to determine how many training images and what training images should be provided to reproduce natural autogenic variability. The first method relies on distance-based clustering of overhead snapshots of the experiment; the second method relies on a rate of change quantification by means of a computer vision algorithm termed the demon algorithm. We show quantitatively that with either training image selection method, we can statistically reproduce the natural variability of the delta formed in the experiment. In addition, we study the nature of the patterns represented in the set of training images as a representation of the "eigenpatterns" of the natural system. The eigenpatterns in the training image sets display patterns consistent with previous physical interpretations of the fundamental modes of this type of delta system: a highly channelized, incisional mode; a poorly channelized, depositional mode; and an intermediate mode between the two.

  19. 26 CFR 1.482-4T - Methods to determine taxable income in connection with a transfer of intangible property...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... such an arrangement, consideration of the principles, methods, comparability, and reliability... method, under this section, as appropriately adjusted in light of the differences in the facts and...

  20. Engine design considerations for 2nd generation supersonic transports

    NASA Technical Reports Server (NTRS)

    Howlett, R. A.

    1975-01-01

    The environmental and economic goals projected for advanced supersonic transports will require revolutionary improvements in propulsion systems. Variable cycle engine concepts that incorporate unique components and advanced technologies show promise in meeting these goals. Pratt & Whitney Aircraft is conducting conceptual design studies of variable cycle engine concepts under NASA sponsorship. This paper reviews some of the design considerations for these engine concepts. Emphasis is placed on jet noise abatement, reduction of emissions, performance improvements, installation considerations, hot-section characteristics and control system requirements. Two representative variable cycle engine concepts that incorporate these basic design considerations are described.

  1. A Mixed Methods Study of Participant Reaction to Domestic Violence Research in Jordan

    ERIC Educational Resources Information Center

    Clark, Cari Jo; Shahrouri, Manal; Halasa, Louma; Khalaf, Inaam; Spencer, Rachael; Everson-Rose, Susan

    2012-01-01

    Research on domestic violence against women has increased considerably over the past few decades. Most participants in such studies find the exercise worthwhile and of greater benefit than emotional cost; however, systematic examination of participant reaction to research on violence is considerably lacking, especially in the Middle East region.…

  2. The potential influence of rain on airfoil performance

    NASA Technical Reports Server (NTRS)

    Dunham, R. Earl, Jr.

    1987-01-01

    The potential influence of heavy rain on airfoil performance is discussed. Experimental methods for evaluating rain effects are reviewed. Important scaling considerations for extrapolating model data are presented. It is shown that considerable additional effort, both analytical and experimental, is necessary to understand the degree of hazard associated with flight operations in rain.

  3. Investigating Cognitive Effort and Response Quality of Question Formats in Web Surveys Using Paradata

    ERIC Educational Resources Information Center

    Höhne, Jan Karem; Schlosser, Stephan; Krebs, Dagmar

    2017-01-01

    Measuring attitudes and opinions employing agree/disagree (A/D) questions is a common method in social research because it appears to be possible to measure different constructs with identical response scales. However, theoretical considerations suggest that A/D questions require a considerable cognitive processing. Item-specific (IS) questions,…

  4. Realization of the Energy Saving of the Environmental Examination Device Temperature Control System in Consideration of Temperature Characteristics

    NASA Astrophysics Data System (ADS)

    Onogaki, Hitoshi; Yokoyama, Shuichi

    Temperature control of an environmental examination device wastes energy when the system cools while it is also heating. This paper proposes an energy-saving temperature control method for the environmental examination device that avoids the use of cooling by taking the temperature characteristics into consideration.

  5. Service Providers' Perceptions of Active Ageing among Older Adults with Lifelong Intellectual Disabilities

    ERIC Educational Resources Information Center

    Buys, L.; Aird, R.; Miller, E.

    2012-01-01

    Background: Considerable attention is currently being directed towards both active ageing and the revising of standards for disability services within Australia and internationally. Yet, to date, no consideration appears to have been given to ways to promote active ageing among older adults with intellectual disabilities (IDs). Methods:…

  6. Food-packaging interactions influencing quality and safety.

    PubMed

    Hotchkiss, J H

    1997-01-01

    Interactions between foods and packaging can be detrimental to quality and/or safety. Changes in product flavour due to aroma sorption and the transfer of undesirable flavours from packaging to foods are important mechanisms of deterioration when foods are packaged in polymer-based materials. Careful consideration must be given to those factors affecting such interactions when selecting packaging materials in order to maximize product quality, safety, and shelf-life while minimizing undesirable changes. Product considerations include sensitivity to flavour and related deteriorations, colour changes, vitamin loss, microbial activity, and amount of flavour available. Storage considerations include temperature, time, and processing method. Polymer considerations include type of polymer and processing method, volume or mass of polymer to product ratio, and whether the interaction is Fickian or non-Fickian. Methodology to determine the extent of such interactions must be developed. Direct interactions between food and packaging are not necessarily detrimental. The same principles governing undesirable interactions can be used to affect desirable outcomes. Examples include films which directly intercept or absorb oxygen, inhibit microorganisms, remove undesirable flavours by sorption, or indicate safety and product shelf-life.

  7. Clinical application of antidepressant pharmacogenetics: considerations for the design of future studies.

    PubMed

    Fabbri, Chiara; Serretti, Alessandro

    2018-06-12

    A frustrating inertia has affected the development of clinical applications of antidepressant pharmacogenetics and personalized treatments of depression are still lacking 20 years after the first findings. Candidate gene studies provided replicated findings for some polymorphisms, but each of them shows at best a small effect on antidepressant efficacy and the cumulative effect of different polymorphisms is unclear. Further, no candidate was immune by at least some negative studies. These considerations give rise to some concerns about the clinical benefits of currently available pharmacogenetic tests since they are based on the results of candidate gene studies. Clinical guidelines in fact suggest that only polymorphisms that alter cytochrome 2D6 or 2C19 enzymatic activity probably provide useful clinical indications, while variants in genes involved in antidepressant pharmacodynamics have no recommended clinical applications. The present review discusses possible strategies to facilitate the identification of genetic biomarkers with clinical usefulness in guiding antidepressant treatments. These include analysis methods for the study of the polygenic/omnigenic nature of antidepressant response, the prioritization of polymorphisms on the basis of functional considerations, the incorporation of clinical-demographic predictors in pharmacogenetic studies (e.g. mixed polygenic and clinical risk scores), the application of methodological improvements to the design of future studies in order to maximize the comparability of results and improve power. Copyright © 2018. Published by Elsevier B.V.

  8. Antioxidant activity, polyphenolic contents and essential oil composition of Pimpinella anisum L. as affected by zinc fertilizer.

    PubMed

    Tavallali, Vahid; Rahmati, Sadegh; Bahmanzadegan, Atefeh

    2017-11-01

    The antioxidant activity and essential oil content of plants may vary considerably with respect to environmental conditions, especially nutrient availability. Among micronutrients, zinc (Zn) is needed by plants in only small amounts but is crucial to plant development. This study aimed to evaluate the effects of Zn fertilization on the antioxidant activity, polyphenolic contents and essential oil composition of Pimpinella anisum fruit. Foliar application of Zn fertilizer considerably increased the number of detected essential oil components from 27 to 45. Zinc application at a rate of 0.2% (w/v) significantly enhanced the levels of β-bisabolene, germacrene D, n-decane and α-zingiberene, whereas the opposite trend was observed for (E)-anethole and geijerene. Application of 0.2% Zn considerably increased the levels of phenolic compounds, with chlorogenic acid showing the highest content among eight phenolic compounds detected in treated plants. The maximum antioxidant activity was achieved through application of 0.2% Zn fertilizer. These findings indicated that the quality and quantity of anise fruit essential oil components were significantly altered by application of low levels of Zn. After foliar application of Zn, polyphenolic contents as well as antioxidant activity of anise fruit increased. Using Zn fertilizer is an efficient method to improve the pharmaceutical and food properties of anise fruit. © 2017 Society of Chemical Industry.

  9. Commentary: Writing and Evaluating Qualitative Research Reports

    PubMed Central

    Thompson, Deborah; Aroian, Karen J.; McQuaid, Elizabeth L.; Deatrick, Janet A.

    2016-01-01

    Objective To provide an overview of qualitative methods, particularly for reviewers and authors who may be less familiar with qualitative research. Methods A question and answer format is used to address considerations for writing and evaluating qualitative research. Results and Conclusions When producing qualitative research, individuals are encouraged to address the qualitative research considerations raised and to explicitly identify the systematic strategies used to ensure rigor in study design and methods, analysis, and presentation of findings. Increasing capacity for review and publication of qualitative research within pediatric psychology will advance the field’s ability to gain a better understanding of the specific needs of pediatric populations, tailor interventions more effectively, and promote optimal health. PMID:27118271

  10. Preshaping command inputs to reduce telerobotic system oscillations

    NASA Technical Reports Server (NTRS)

    Singer, Neil C.; Seering, Warren P.

    1989-01-01

    The results of using a new technique for shaping inputs to a model of the space shuttle Remote Manipulator System (RMS) are presented. The shaped inputs move the system to the same location that was originally commanded; however, the oscillations of the machine are considerably reduced. An overview of the new shaping method is presented. A description of the RMS model is provided. The problem of slow joint servo rates on the RMS is accommodated with an extension of the shaping method. Results and sample data are also presented for both joint and three-dimensional Cartesian motions. The results demonstrate that the new shaping method performs well on large, telerobotic systems which exhibit significant structural vibration. The new method is shown to also result in considerable energy savings during operations of the RMS manipulator.
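
    The flavor of the technique can be conveyed with the classical two-impulse zero-vibration shaper: the command is convolved with two impulses whose amplitudes and spacing depend on the mode's natural frequency and damping, so the residual oscillation of that mode cancels. The sketch below is a generic textbook shaper, not the specific implementation used for the RMS.

```python
# Two-impulse zero-vibration (ZV) input shaper applied to a step command.
import numpy as np

def zv_shaper(wn, zeta, dt):
    K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
    td = np.pi / (wn * np.sqrt(1 - zeta**2))        # half damped period
    shaper = np.zeros(int(round(td / dt)) + 1)
    shaper[0] = 1.0 / (1.0 + K)                     # first impulse at t = 0
    shaper[-1] = K / (1.0 + K)                      # second impulse at t = td
    return shaper

dt = 0.001
command = np.ones(2000)                             # unit step command
shaped = np.convolve(command, zv_shaper(wn=2 * np.pi * 0.5, zeta=0.05, dt=dt))
print(shaped[:3], shaped[-3:])
```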

  11. Dual-scale Galerkin methods for Darcy flow

    NASA Astrophysics Data System (ADS)

    Wang, Guoyin; Scovazzi, Guglielmo; Nouveau, Léo; Kees, Christopher E.; Rossi, Simone; Colomés, Oriol; Main, Alex

    2018-02-01

    The discontinuous Galerkin (DG) method has found widespread application in elliptic problems with rough coefficients, of which the Darcy flow equations are a prototypical example. One of the long-standing issues of DG approximations is the overall computational cost, and many different strategies have been proposed, such as the variational multiscale DG method, the hybridizable DG method, the multiscale DG method, the embedded DG method, and the Enriched Galerkin method. In this work, we propose a mixed dual-scale Galerkin method, in which the degrees-of-freedom of a less computationally expensive coarse-scale approximation are linked to the degrees-of-freedom of a base DG approximation. We show that the proposed approach has always similar or improved accuracy with respect to the base DG method, with a considerable reduction in computational cost. For the specific definition of the coarse-scale space, we consider Raviart-Thomas finite elements for the mass flux and piecewise-linear continuous finite elements for the pressure. We provide a complete analysis of stability and convergence of the proposed method, in addition to a study on its conservation and consistency properties. We also present a battery of numerical tests to verify the results of the analysis, and evaluate a number of possible variations, such as using piecewise-linear continuous finite elements for the coarse-scale mass fluxes.

  12. Conditional analysis of mixed Poisson processes with baseline counts: implications for trial design and analysis.

    PubMed

    Cook, Richard J; Wei, Wei

    2003-07-01

    The design of clinical trials is typically based on marginal comparisons of a primary response under two or more treatments. The considerable gains in efficiency afforded by models conditional on one or more baseline responses have been studied extensively for Gaussian models. The purpose of this article is to present methods for the design and analysis of clinical trials in which the response is a count or a point process and a corresponding baseline count is available prior to randomization. The methods are based on a conditional negative binomial model for the response given the baseline count and can be used to examine the effect of introducing selection criteria on power and sample size requirements. We show that designs based on this approach are more efficient than those proposed by McMahon et al. (1994).
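
    The conditional model itself is not reproduced in the abstract. As a rough illustration of the general idea (adjusting the post-randomization count for the pre-randomization baseline count under a negative binomial error structure), a regression of the response on treatment with the log baseline count as a covariate could look like the following sketch; the simulated data and variable names are illustrative, not the authors' design calculations.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 200
        baseline = rng.poisson(5.0, size=n)          # counts observed before randomization
        treat = rng.integers(0, 2, size=n)           # 1 = active arm, 0 = control
        mu = np.exp(0.3 + 0.15 * np.log(baseline + 1) - 0.4 * treat)
        y = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))   # simulated follow-up counts

        # Conditional analysis: model the response given treatment and baseline.
        X = sm.add_constant(np.column_stack([treat, np.log(baseline + 1)]))
        fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
        print(fit.summary())   # the treatment coefficient is a log rate ratio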

  13. Analysis of Generator Oscillation Characteristics Based on Multiple Synchronized Phasor Measurements

    NASA Astrophysics Data System (ADS)

    Hashiguchi, Takuhei; Yoshimoto, Masamichi; Mitani, Yasunori; Saeki, Osamu; Tsuji, Kiichiro

    In recent years there has been considerable interest in on-line measurement, such as the observation of power system dynamics and the evaluation of machine parameters. On-line methods are particularly attractive because the machine's service need not be interrupted: parameter estimation is performed by processing measurements obtained during normal operation of the machine. The authors placed PMUs (Phasor Measurement Units) connected to 100 V outlets at several universities in the 60 Hz power system and examined oscillation characteristics of the power system. Each PMU is synchronized via the Global Positioning System (GPS), and the measured data are transmitted over the Internet. This paper describes an application of PMUs to generator oscillation analysis. Its purpose is to show methods for processing the phase difference and to estimate the damping coefficient and natural angular frequency from phase-difference data at steady state.
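
    The estimation step is not spelled out in the abstract. One common way to extract a damping coefficient and natural angular frequency from an oscillating phase-difference record is to fit a decaying sinusoid to the detrended signal; the sketch below uses synthetic data in place of the PMU measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        def damped_oscillation(t, A, sigma, omega, phi, c):
            """Power-swing component of the phase difference: decaying sinusoid plus offset."""
            return A * np.exp(-sigma * t) * np.cos(omega * t + phi) + c

        # Placeholders for the GPS-synchronized phase-difference samples.
        t = np.linspace(0.0, 30.0, 900)
        delta_phase = 0.5 * np.exp(-0.08 * t) * np.cos(2.5 * t + 0.3) \
                      + 0.01 * np.random.randn(t.size)

        p0 = [0.5, 0.1, 2.5, 0.0, 0.0]    # rough initial guesses
        (A, sigma, omega, phi, c), _ = curve_fit(damped_oscillation, t, delta_phase, p0=p0)
        omega_n = np.hypot(omega, sigma)  # undamped natural angular frequency
        print(f"damping coefficient ~ {sigma:.3f} 1/s, "
              f"natural angular frequency ~ {omega_n:.3f} rad/s")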

  14. Considerations on methodological challenges for water footprint calculations.

    PubMed

    Thaler, S; Zessner, M; De Lis, F Bertran; Kreuzinger, N; Fehringer, R

    2012-01-01

    We have investigated how different approaches to water footprint (WF) calculation lead to different results, taking sugar beet production and sugar refining as examples. To a large extent, the results obtained from any WF calculation reflect the method used and the assumptions made. Real irrigation data for 59 European sugar beet growing areas showed that a widely used simple approach estimates irrigation water inadequately: it overestimates blue water and underestimates green water usage. Depending on the chosen (available) water quality standard, the final grey WF can differ by a factor of 10 or more. We conclude that further development and standardisation of the WF is needed to reach comparable and reliable results. A special focus should be on standardisation of the grey WF methodology based on receiving water quality standards.
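
    The sensitivity to the quality standard follows directly from the usual grey WF definition, in which the pollutant load is divided by the assimilation capacity of the receiving water, grey WF = L / (c_max - c_nat). A small numerical illustration follows; the load and standards are invented numbers, not the paper's data.

        def grey_water_footprint(load_kg, c_max_mg_l, c_nat_mg_l=0.0):
            """Grey WF in m^3: pollutant load divided by (quality standard minus natural
            background concentration); 1 mg/L equals 1 g/m^3."""
            return (load_kg * 1000.0) / (c_max_mg_l - c_nat_mg_l)

        load = 50.0                        # kg of nitrate leached per hectare (illustrative)
        for c_max in (50.0, 25.0, 5.0):    # candidate water quality standards in mg/L
            print(c_max, grey_water_footprint(load, c_max), "m^3/ha")
        # Moving the standard from 50 to 5 mg/L changes the grey WF by a factor of 10,
        # which is the order of sensitivity the authors report.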

  15. An approach for estimating the magnetization direction of magnetic anomalies

    NASA Astrophysics Data System (ADS)

    Li, Jinpeng; Zhang, Yingtang; Yin, Gang; Fan, Hongbo; Li, Zhining

    2017-02-01

    An approach for estimating the magnetization direction of magnetic anomalies in the presence of remanent magnetization, based on the correlation between the normalized source strength (NSS) and the reduced-to-the-pole (RTP) field, is proposed. The observation region was divided into several calculation areas, and the RTP field was computed for different assumed magnetization directions. The cross-correlation between the NSS and the RTP field was then calculated, and the correct magnetization direction was found to be the one corresponding to the maximum cross-correlation value. The approach was tested on both simulated and real magnetic data. The results showed that it was effective in a variety of situations and considerably reduced the effect of remanent magnetization. The method based on the NSS and RTP is thus more effective than methods using, for example, the total magnitude anomaly and RTP.
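
    The RTP and NSS transforms themselves are standard potential-field operations and are not reproduced here. The outer search implied by the abstract can be sketched as a grid search over candidate directions; rtp_transform and normalized_source_strength below are hypothetical helper functions (not from the paper) acting on a gridded total-field anomaly.

        import numpy as np

        def estimate_magnetization_direction(anomaly_grid, rtp_transform,
                                             normalized_source_strength,
                                             incl_range=range(-90, 91, 5),
                                             decl_range=range(0, 360, 5)):
            """Pick the (inclination, declination) whose RTP field correlates best with the NSS."""
            nss = normalized_source_strength(anomaly_grid)
            best = (None, None, -np.inf)
            for incl in incl_range:
                for decl in decl_range:
                    rtp = rtp_transform(anomaly_grid, incl, decl)
                    r = np.corrcoef(nss.ravel(), rtp.ravel())[0, 1]   # cross-correlation value
                    if r > best[2]:
                        best = (incl, decl, r)
            return best   # estimated inclination, declination and the correlation achieved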

  16. Heart Rate Variability Dynamics for the Prognosis of Cardiovascular Risk

    PubMed Central

    Ramirez-Villegas, Juan F.; Lam-Espinosa, Eric; Ramirez-Moreno, David F.; Calvo-Echeverry, Paulo C.; Agredo-Rodriguez, Wilfredo

    2011-01-01

    Statistical, spectral, multi-resolution and non-linear methods were applied to heart rate variability (HRV) series and linked with classification schemes for the prognosis of cardiovascular risk. A total of 90 HRV records were analyzed: 45 from healthy subjects and 45 from cardiovascular risk patients. A total of 52 features from all the analysis methods were evaluated using the standard two-sample Kolmogorov-Smirnov test (KS-test). The results of this statistical screening provided the input to multi-layer perceptron (MLP) neural networks, radial basis function (RBF) neural networks and support vector machines (SVM) for data classification. These schemes showed high performance on both training and test sets for many combinations of features (with a maximum accuracy of 96.67%). In addition, breathing frequency emerged as a strong candidate for a relevant feature in HRV analysis. PMID:21386966
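
    As an illustration of the screening-plus-classifier pipeline described (not the authors' exact implementation), the KS-test feature screen and one of the classifiers, an SVM, could be combined as follows; hrv_features and labels stand in for the 52-feature matrix and the healthy / at-risk labels.

        import numpy as np
        from scipy.stats import ks_2samp
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import cross_val_score

        # Placeholder data: 90 records x 52 HRV features, 0 = healthy, 1 = at risk.
        rng = np.random.default_rng(1)
        hrv_features = rng.normal(size=(90, 52))
        labels = np.repeat([0, 1], 45)

        # Keep features whose class distributions differ under the two-sample KS test.
        keep = [j for j in range(hrv_features.shape[1])
                if ks_2samp(hrv_features[labels == 0, j],
                            hrv_features[labels == 1, j]).pvalue < 0.05]

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        if keep:   # with real data a subset of features passes the screen
            scores = cross_val_score(clf, hrv_features[:, keep], labels, cv=10)
            print("mean cross-validated accuracy:", scores.mean())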

  17. Separating and recycling metals from mixed metallic particles of crushed electronic wastes by vacuum metallurgy.

    PubMed

    Zhan, Lu; Xu, Zhenming

    2009-09-15

    During the treatment of electronic wastes, a crushing process is usually used to strip metals from various base plates. Several methods have been applied to separate metals from nonmetals. However, the mixed metallic particles obtained from these processes are still a mixture of various metals, including toxic heavy metals such as lead and cadmium. Because the emphasis has been on recovering copper and other precious metals, there have hitherto been no satisfactory methods for recovering these toxic metals. In this paper, a criterion for separating metals from mixed metallic particles by vacuum metallurgy is established. The results show that the metals with high vapor pressure are recovered almost completely, leading to a considerable reduction of environmental pollution. In addition, the purity of copper in the mixed particles is improved from about 80 wt % to over 98 wt %.
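
    The separation criterion itself is not given in the abstract. In the idealized (Raoult's law) picture that such vacuum-distillation criteria build on, the vapor above a binary A-B melt is enriched according to

        \frac{y_A}{y_B} \;\approx\; \frac{p_A^{*}(T)}{p_B^{*}(T)}\cdot\frac{x_A}{x_B},

    where x and y are the mole fractions in the melt and in the vapor and p*(T) are the saturated vapor pressures at the operating temperature; a large vapor-pressure ratio under vacuum therefore indicates that one metal can be volatilized away from the other. The paper's specific criterion for real alloys may differ and is not reproduced here.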

  18. A Novel Harmony Search Algorithm Based on Teaching-Learning Strategies for 0-1 Knapsack Problems

    PubMed Central

    Tuo, Shouheng; Yong, Longquan; Deng, Fang'an

    2014-01-01

    To enhance the performance of the harmony search (HS) algorithm on discrete optimization problems, this paper proposes a novel harmony search algorithm based on teaching-learning (HSTL) strategies to solve 0-1 knapsack problems. In the HSTL algorithm, a method is first presented to dynamically adjust the dimension of the selected harmony vector during the optimization procedure. In addition, four strategies (harmony memory consideration, teaching-learning strategy, local pitch adjusting, and random mutation) are employed to improve the performance of the HS algorithm. A further improvement in the HSTL method is that dynamic strategies are adopted to change the parameters, which effectively maintains a proper balance between global exploration and local exploitation. Finally, simulation experiments with 13 knapsack problems show that the HSTL algorithm can be an efficient alternative for solving 0-1 knapsack problems. PMID:24574905

  19. A novel harmony search algorithm based on teaching-learning strategies for 0-1 knapsack problems.

    PubMed

    Tuo, Shouheng; Yong, Longquan; Deng, Fang'an

    2014-01-01

    To enhance the performance of the harmony search (HS) algorithm on discrete optimization problems, this paper proposes a novel harmony search algorithm based on teaching-learning (HSTL) strategies to solve 0-1 knapsack problems. In the HSTL algorithm, a method is first presented to dynamically adjust the dimension of the selected harmony vector during the optimization procedure. In addition, four strategies (harmony memory consideration, teaching-learning strategy, local pitch adjusting, and random mutation) are employed to improve the performance of the HS algorithm. A further improvement in the HSTL method is that dynamic strategies are adopted to change the parameters, which effectively maintains a proper balance between global exploration and local exploitation. Finally, simulation experiments with 13 knapsack problems show that the HSTL algorithm can be an efficient alternative for solving 0-1 knapsack problems.
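
    To make the harmony-memory-consideration, pitch-adjustment, and random-selection steps concrete, the following is a minimal baseline binary HS for a 0-1 knapsack instance, without the teaching-learning strategy, dynamic dimension adjustment, or dynamic parameters of HSTL; the instance data are illustrative.

        import numpy as np

        def harmony_search_knapsack(values, weights, capacity,
                                    hms=20, hmcr=0.9, par=0.3, iters=5000, seed=0):
            """Basic binary harmony search for the 0-1 knapsack problem."""
            rng = np.random.default_rng(seed)
            n = len(values)

            def fitness(x):
                return np.dot(values, x) if np.dot(weights, x) <= capacity else 0.0

            hm = rng.integers(0, 2, size=(hms, n))            # harmony memory
            fit = np.array([fitness(x) for x in hm])

            for _ in range(iters):
                new = np.empty(n, dtype=int)
                for j in range(n):
                    if rng.random() < hmcr:
                        new[j] = hm[rng.integers(hms), j]     # harmony memory consideration
                        if rng.random() < par:
                            new[j] = 1 - new[j]               # pitch adjustment = bit flip
                    else:
                        new[j] = rng.integers(0, 2)           # random selection
                f = fitness(new)
                worst = int(np.argmin(fit))
                if f > fit[worst]:                            # replace the worst harmony
                    hm[worst], fit[worst] = new, f

            best = int(np.argmax(fit))
            return hm[best], fit[best]

        values = [60, 100, 120, 30, 70]
        weights = [10, 20, 30, 5, 15]
        solution, total_value = harmony_search_knapsack(values, weights, capacity=50)
        print(solution, total_value)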

  20. Computer-aided classification of optical images for diagnosis of osteoarthritis in the finger joints.

    PubMed

    Zhang, Jiang; Wang, James Z; Yuan, Zhen; Sobel, Eric S; Jiang, Huabei

    2011-01-01

    This study presents a computer-aided classification method to distinguish osteoarthritis finger joints from healthy ones based on the functional images captured by x-ray guided diffuse optical tomography. Three imaging features, joint space width, optical absorption, and scattering coefficients, are employed to train a Least Squares Support Vector Machine (LS-SVM) classifier for osteoarthritis classification. The 10-fold validation results show that all osteoarthritis joints are clearly identified and all healthy joints are ruled out by the LS-SVM classifier. The best sensitivity, specificity, and overall accuracy of the classification by experienced technicians based on manual calculation of optical properties and visual examination of optical images are only 85%, 93%, and 90%, respectively. Therefore, our LS-SVM based computer-aided classification is a considerably improved method for osteoarthritis diagnosis.
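
    To make the LS-SVM step concrete: in its least-squares formulation, training reduces to solving a single linear system rather than a quadratic program. The sketch below is a generic RBF-kernel LS-SVM with invented values for the three imaging features, not the study's tuned classifier.

        import numpy as np

        def rbf_kernel(A, B, gamma=0.5):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def lssvm_train(X, y, gam=10.0, gamma=0.5):
            """LS-SVM training: one (n+1)x(n+1) linear system; y coded as -1 / +1."""
            n = len(y)
            K = rbf_kernel(X, X, gamma)
            A = np.zeros((n + 1, n + 1))
            A[0, 1:] = 1.0
            A[1:, 0] = 1.0
            A[1:, 1:] = K + np.eye(n) / gam          # regularized kernel block
            sol = np.linalg.solve(A, np.concatenate(([0.0], y.astype(float))))
            return sol[0], sol[1:]                   # bias b, coefficients alpha

        def lssvm_predict(Xtrain, alpha, b, Xnew, gamma=0.5):
            return np.sign(rbf_kernel(Xnew, Xtrain, gamma) @ alpha + b)

        # Invented feature vectors: [joint space width, absorption, scattering coefficient].
        X = np.array([[2.1, 0.010, 8.0], [1.2, 0.022, 11.0],
                      [2.3, 0.009, 7.5], [1.0, 0.025, 12.0]])
        y = np.array([-1, 1, -1, 1])                 # -1 = healthy, +1 = osteoarthritis
        b, alpha = lssvm_train(X, y)
        print(lssvm_predict(X, alpha, b, X))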
