2009-06-01
…development within each aviation community. Kirkpatrick's (1976) hierarchy of training evaluation was applied to examine three levels of… Applying methods and techniques used in previous CRM evaluation research, this thesis provided an updated evaluation of the Naval CRM program to fill…
The Effects of Translanguaging on the Bi-Literate Inferencing Strategies of Fourth Grade Learners
ERIC Educational Resources Information Center
Mgijima, Vukile Desmond; Makalela, Leketi
2016-01-01
Previous research suggests that enhanced cognitive and metacognitive skills are achieved when translanguaging techniques are applied in a multilingual classroom. This paper presents findings on the effects of translanguaging techniques on teaching grade 4 learners how to apply relevant background knowledge when drawing inferences during reading.…
Top down, bottom up structured programming and program structuring
NASA Technical Reports Server (NTRS)
Hamilton, M.; Zeldin, S.
1972-01-01
New design and programming techniques for shuttle software. Based on previous Apollo experience, recommendations are made to apply top-down structured programming techniques to shuttle software. New software verification techniques for large software systems are recommended. HAL, the higher order language selected for the shuttle flight code, is discussed and found to be adequate for implementing these techniques. Recommendations are made to apply the workable combination of top-down, bottom-up methods in the management of shuttle software. Program structuring is discussed relevant to both programming and management techniques.
Direct measurement of carbon-14 in carbon dioxide by liquid scintillation counting
NASA Technical Reports Server (NTRS)
Horrocks, D. L.
1969-01-01
Liquid scintillation counting technique is applied to the direct measurement of carbon-14 in carbon dioxide. This method has high counting efficiency and eliminates many of the basic problems encountered with previous techniques. The technique can be used to achieve a percent substitution reaction and is of interest as an analytical technique.
Alternative Constraint Handling Technique for Four-Bar Linkage Path Generation
NASA Astrophysics Data System (ADS)
Sleesongsom, S.; Bureerat, S.
2018-03-01
This paper extends a new concept for path generation from our previous work by adding a new constraint handling technique. The proposed technique was initially designed for problems without prescribed timing by avoiding the timing constraint, while the remaining constraints are handled with a new constraint handling technique, which is a kind of penalty technique. In a comparative study, path generation optimisation problems are solved using self-adaptive population size teaching-learning based optimization (SAP-TLBO) and the original TLBO. Two traditional path generation test problems are used to test the proposed technique. The results show that the new technique can be applied to path generation problems without prescribed timing and gives better results than the previous technique. Furthermore, SAP-TLBO outperforms the original TLBO.
Cognitive Support in Teaching Football Techniques
ERIC Educational Resources Information Center
Duda, Henryk
2009-01-01
Study aim: To improve the teaching of football techniques by applying cognitive and imagery techniques. Material and methods: Four groups of subjects, n = 32 each, were studied: male and female physical education students aged 20-21 years, not engaged previously in football training; male juniors and minors, aged 16 and 13 years, respectively,…
3D temporal subtraction on multislice CT images using nonlinear warping technique
NASA Astrophysics Data System (ADS)
Ishida, Takayuki; Katsuragawa, Shigehiko; Kawashita, Ikuo; Kim, Hyounseop; Itai, Yoshinori; Awai, Kazuo; Li, Qiang; Doi, Kunio
2007-03-01
The detection of very subtle lesions and/or lesions overlapped with vessels on CT images is a time-consuming and difficult task for radiologists. In this study, we have developed a 3D temporal subtraction method to enhance interval changes between previous and current multislice CT images based on a nonlinear image warping technique. Our method provides a subtraction CT image which is obtained by subtraction of a previous CT image from a current CT image. Reduction of misregistration artifacts is important in the temporal subtraction method. Therefore, our computerized method includes global and local image matching techniques for accurate registration of current and previous CT images. For global image matching, we selected the corresponding previous section image for each current section image by using 2D cross-correlation between a blurred low-resolution current CT image and a blurred previous CT image. For local image matching, we applied the 3D template matching technique with translation and rotation of volumes of interest (VOIs) which were selected in the current and the previous CT images. The local shift vector for each VOI pair was determined when the cross-correlation value became the maximum in the 3D template matching. The local shift vectors at all voxels were determined by interpolation of shift vectors of VOIs, and then the previous CT image was nonlinearly warped according to the shift vector for each voxel. Finally, the warped previous CT image was subtracted from the current CT image. The 3D temporal subtraction method was applied to 19 clinical cases. The normal background structures such as vessels, ribs, and heart were removed without large misregistration artifacts. Thus, interval changes due to lung diseases were clearly enhanced as white shadows on subtraction CT images.
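The local-matching and warp-subtract steps described above can be illustrated compactly. The sketch below is hypothetical code, not the authors' implementation: it uses synthetic volumes, a plain argmax cross-correlation search, and a single shift in place of their full field of interpolated VOI shift vectors.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

rng = np.random.default_rng(0)

# Synthetic "previous" volume and a "current" volume translated by a known shift.
previous = rng.normal(size=(40, 64, 64))
true_shift = (1, -2, 3)
current = nd_shift(previous, true_shift, order=1, mode="nearest")

def local_shift(curr, prev, center, half=8, search=4):
    """Estimate the local shift of a VOI by maximizing 3D cross-correlation."""
    z, y, x = center
    voi = prev[z-half:z+half, y-half:y+half, x-half:x+half]
    best, best_cc = (0, 0, 0), -np.inf
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = curr[z+dz-half:z+dz+half, y+dy-half:y+dy+half, x+dx-half:x+dx+half]
                cc = np.corrcoef(voi.ravel(), cand.ravel())[0, 1]
                if cc > best_cc:
                    best_cc, best = cc, (dz, dy, dx)
    return best

est = local_shift(current, previous, center=(20, 32, 32))
# Warp the previous volume by the estimated shift and subtract.
warped = nd_shift(previous, est, order=1, mode="nearest")
subtraction = current - warped
print("estimated shift:", est)   # expect (1, -2, 3)
```

In the actual method, shift vectors estimated at many VOIs are interpolated to every voxel before the nonlinear warp; here one global shift stands in for that vector field.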
NASA reliability preferred practices for design and test
NASA Technical Reports Server (NTRS)
1991-01-01
Given here is a manual that was produced to communicate within the aerospace community design practices that have contributed to NASA mission success. The information represents the best technical advice that NASA has to offer on reliability design and test practices. Topics covered include reliability practices, including design criteria, test procedures, and analytical techniques that have been applied to previous space flight programs; and reliability guidelines, including techniques currently applied to space flight projects, where sufficient information exists to certify that the technique will contribute to mission success.
Sensor Data Qualification Technique Applied to Gas Turbine Engines
NASA Technical Reports Server (NTRS)
Csank, Jeffrey T.; Simon, Donald L.
2013-01-01
This paper applies a previously developed sensor data qualification technique to a commercial aircraft engine simulation known as the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k). The sensor data qualification technique is designed to detect, isolate, and accommodate faulty sensor measurements. It features sensor networks, which group various sensors together and relies on an empirically derived analytical model to relate the sensor measurements. Relationships between all member sensors of the network are analyzed to detect and isolate any faulty sensor within the network.
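As a rough illustration of the sensor-network idea, the following sketch (hypothetical; neither the C-MAPSS40k model nor the paper's actual analytical relationships are reproduced) learns a linear model relating each sensor to the other members of its network from fault-free data, then flags the sensor with the largest residual:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fault-free training data for a 4-sensor network with correlated readings.
t = rng.normal(size=(500, 1))
healthy = t @ np.array([[1.0, 0.8, 1.2, -0.5]]) + 0.01 * rng.normal(size=(500, 4))

# Empirical model: predict each sensor from the remaining network members.
coefs = []
for i in range(4):
    others = np.delete(healthy, i, axis=1)
    beta, *_ = np.linalg.lstsq(others, healthy[:, i], rcond=None)
    coefs.append(beta)

def qualify(sample, thresh=0.1):
    """Return per-sensor residuals and the index of a suspected faulty sensor."""
    residuals = np.array([abs(sample[i] - np.delete(sample, i) @ coefs[i])
                          for i in range(4)])
    fault = int(np.argmax(residuals)) if residuals.max() > thresh else None
    return residuals, fault

sample = healthy[0].copy()
sample[2] += 0.5                    # inject a bias fault on sensor 2
res, fault = qualify(sample)
print(fault, res.round(3))          # sensor 2 shows the largest residual
```

Accommodation would then replace the flagged measurement with its model-based estimate so the control system keeps a full set of inputs.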
Liu, Yu; Jiang, Lanlan; Song, Yongchen; Zhao, Yuechao; Zhang, Yi; Wang, Dayong
2016-02-01
Minimum miscibility pressure (MMP) of a gas-oil system is a key parameter for the injection system design of CO2 miscible flooding. Industry-standard approaches such as the rising bubble apparatus (RBA) experiment, the slim tube test (STT), and the pressure-density diagram (PDD) have been applied for decades to determine the MMP of gas and oil. Theoretical or empirical calculations of the MMP have also been applied to gas-oil miscible systems. In the present work, an improved technique, based on our previous research, for estimating the MMP by using magnetic resonance imaging (MRI) was proposed. This technique was then applied to CO2 and n-alkane binary and ternary systems to observe the mixing process and to study miscibility. MRI signal intensities, which represent the proton concentration of n-alkane in both the hydrocarbon-rich phase and the CO2-rich phase, were plotted as a reference for determining the MMP. The accuracy of the MMP obtained by this improved technique was enhanced compared with the data obtained from our previous works. The results also show good agreement with other established techniques (such as the STT) in previously published works. They demonstrate increases of the MMP as the temperature rises from 20 °C to 37.8 °C. The MMPs of CO2 and n-alkane systems are also found to be proportional to the carbon number in the range of C10 to C14. Copyright © 2015 Elsevier Inc. All rights reserved.
Ferreira, F J O; Crispim, V R; Silva, A X
2010-06-01
This study describes the development of a methodology to detect illicit drugs and plastic explosives, with the objective of application in public security. To this end, non-destructive assay with neutrons was used, applying real-time neutron radiography together with computerized tomography. The system provides automatic responses based upon an artificial intelligence technique. In previous tests using real samples, the system proved capable of identifying 97% of the inspected materials. Copyright 2010 Elsevier Ltd. All rights reserved.
Pulsed-field-gradient measurements of time-dependent gas diffusion
NASA Technical Reports Server (NTRS)
Mair, R. W.; Cory, D. G.; Peled, S.; Tseng, C. H.; Patz, S.; Walsworth, R. L.
1998-01-01
Pulsed-field-gradient NMR techniques are demonstrated for measurements of time-dependent gas diffusion. The standard PGSE technique and variants, applied to a free gas mixture of thermally polarized xenon and O2, are found to provide a reproducible measure of the xenon diffusion coefficient (5.71 × 10⁻⁶ m² s⁻¹ for 1 atm of pure xenon), in excellent agreement with previous, non-NMR measurements. The utility of pulsed-field-gradient NMR techniques is demonstrated by the first measurement of time-dependent (i.e., restricted) gas diffusion inside a porous medium (a random pack of glass beads), with results that agree well with theory. Two modified NMR pulse sequences derived from the PGSE technique (named the Pulsed Gradient Echo, or PGE, and the Pulsed Gradient Multiple Spin Echo, or PGMSE) are also applied to measurements of time-dependent diffusion of laser-polarized xenon gas, with results in good agreement with previous measurements on thermally polarized gas. The PGMSE technique is found to be superior to the PGE method, and to standard PGSE techniques and variants, for efficiently measuring laser-polarized noble gas diffusion over a wide range of diffusion times. Copyright 1998 Academic Press.
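For reference, the standard Stejskal-Tanner relation underlying such PGSE diffusion measurements (a textbook result quoted here, not taken from the paper) links the echo attenuation to the gradient amplitude g, pulse duration δ, pulse separation Δ, gyromagnetic ratio γ, and the possibly time-dependent diffusion coefficient D:

```latex
\frac{S(g)}{S(0)} = \exp\!\left[-\gamma^{2} g^{2} \delta^{2}\left(\Delta - \frac{\delta}{3}\right) D(\Delta)\right]
```

Fitting ln(S/S0) against g² at a series of diffusion times Δ yields D(Δ), whose decrease with increasing Δ is the signature of restriction by the pore walls.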
NASA Astrophysics Data System (ADS)
Vidya Sagar, R.; Raghu Prasad, B. K.
2012-03-01
This article presents a review of recent developments in parameter-based acoustic emission (AE) techniques applied to concrete structures. It recapitulates the significant milestones achieved by previous researchers, including various methods and models developed in AE testing of concrete structures. The aim is to provide an overview of the specific features of parameter-based AE techniques for concrete structures developed over the years. Emphasis is given to traditional parameter-based AE techniques applied to concrete structures. A significant amount of research on AE techniques applied to concrete structures has already been published, and considerable attention has been given to those publications. Some recent studies, such as AE energy analysis and b-value analysis used to assess damage of concrete bridge beams, are also discussed. The formation of the fracture process zone and the AE energy released during the fracture process in concrete beam specimens are summarised. A large body of experimental data on AE characteristics of concrete has accumulated over the last three decades. This review of parameter-based AE techniques applied to concrete structures may help researchers and engineers to better understand the failure mechanism of concrete and to evolve more useful methods and approaches for diagnostic inspection of structural elements and failure prediction/prevention of concrete structures.
Updated generalized biomass equations for North American tree species
David C. Chojnacky; Linda S. Heath; Jennifer C. Jenkins
2014-01-01
Historically, tree biomass at large scales has been estimated by applying dimensional analysis techniques and field measurements such as diameter at breast height (dbh) in allometric regression equations. Equations often have been developed using differing methods and applied only to certain species or isolated areas. We previously had compiled and combined (in meta-...
Pinpointing chiral structures with front-back polarized neutron reflectometry.
O'Donovan, K V; Borchers, J A; Majkrzak, C F; Hellwig, O; Fullerton, E E
2002-02-11
A new development in spin-polarized neutron reflectometry enables us to more fully characterize the nucleation and growth of buried domain walls in layered magnetic materials. We applied this technique to a thin-film exchange-spring magnet. After first measuring the reflectivity with the neutrons striking the front, we measure with the neutrons striking the back. Simultaneous fits are sensitive to the presence of spiral spin structures. The technique reveals previously unresolved features of field-dependent domain walls in exchange-spring systems and has sufficient generality to apply to a variety of magnetic systems.
Modular multiapertures for light sensors
NASA Technical Reports Server (NTRS)
Rizzo, A. A.
1977-01-01
Process involves electroplating multiaperture masks as a unit, eliminating alignment and assembly difficulties previously encountered. Technique may be applied to masks in automated and surveillance light systems when a precise, wide-angle field of view is needed.
NASA Technical Reports Server (NTRS)
Ostroff, A. J.
1973-01-01
Some of the major difficulties associated with large orbiting astronomical telescopes are the cost of manufacturing the primary mirror to precise tolerances and the maintaining of diffraction-limited tolerances while in orbit. One successfully demonstrated approach for minimizing these problem areas is the technique of actively deforming the primary mirror by applying discrete forces to the rear of the mirror. A modal control technique, as applied to active optics, has previously been developed and analyzed. The modal control technique represents the plant to be controlled in terms of its eigenvalues and eigenfunctions which are estimated via numerical approximation techniques. The report includes an extension of previous work using the modal control technique and also describes an optimal feedback controller. The equations for both control laws are developed in state-space differential form and include such considerations as stability, controllability, and observability. These equations are general and allow the incorporation of various mode-analyzer designs; two design approaches are presented. The report also includes a technique for placing actuator and sensor locations at points on the mirror based upon the flexibility matrix of the uncontrolled or unobserved modes of the structure. The locations selected by this technique are used in the computer runs which are described. The results are based upon three different initial error distributions, two mode-analyzer designs, and both the modal and optimal control laws.
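A minimal sketch of the optimal-feedback idea on a modal model can be written with standard tools: the continuous algebraic Riccati equation yields the optimal gain. The frequencies, damping, and actuator influence below are illustrative values, not the report's mirror model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Two structural modes of the mirror: x = [q1, q2, q1dot, q2dot].
w = np.array([10.0, 25.0])            # modal frequencies (rad/s), illustrative
zeta = 0.02                           # light structural damping
A = np.block([
    [np.zeros((2, 2)), np.eye(2)],
    [-np.diag(w**2), -2 * zeta * np.diag(w)],
])
B = np.vstack([np.zeros((2, 2)), np.eye(2)])  # actuator forces drive mode rates

# Optimal (LQR) feedback: minimize the integral of x'Qx + u'Ru.
Q = np.eye(4)
R = 0.1 * np.eye(2)
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)       # u = -K x

closed = A - B @ K
print("open-loop eigenvalues: ", np.linalg.eigvals(A))
print("closed-loop eigenvalues:", np.linalg.eigvals(closed))  # damped, left half-plane
```

The modal control law in the report works on the same state-space form, with the mode-analyzer supplying estimates of the modal states from the mirror figure sensors.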
Applied Computational Electromagnetics Society Journal. Volume 7, Number 1, Summer 1992
1992-01-01
…previously-solved computational problem in electrical engineering, physics, or related fields of study. The technical activities promoted by this… in solution technique or in data input/output; identification of new applications for electromagnetics modeling codes and techniques; integration of… papers will represent the computational electromagnetics aspects of research in electrical engineering, physics, or related disciplines. However, papers…
NASA Astrophysics Data System (ADS)
de La Cal, E. A.; Fernández, E. M.; Quiroga, R.; Villar, J. R.; Sedano, J.
In previous works a methodology was defined, based on the design of a genetic algorithm GAP and an incremental training technique adapted to learning series of stock market values. The GAP technique consists of a fusion of GP and GA. The GAP algorithm implements an automatic search for crisp trading rules, taking as training objectives both the optimization of the return obtained and the minimization of the assumed risk. Applying the proposed methodology, rules have been obtained for an eight-year period of the S&P500 index. The achieved adjustment of the return-risk relation has generated rules whose returns in the testing period are far superior to those obtained with the usual methodologies, and even clearly superior to Buy&Hold. This work shows that the proposed methodology is valid for different assets in a different market than in previous work.
The Use of a Context-Based Information Retrieval Technique
2009-07-01
…provided in context. Latent Semantic Analysis (LSA) is a statistical technique for inferring contextual and structural information, and previous studies… WAIS). LSA, which is also known as latent semantic indexing (LSI), uses a statistical and… In contrast, natural language models apply algorithms that combine statistical information with semantic information. Semantic…
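A minimal LSA sketch (toy data and plain numpy, not the report's retrieval system) shows how truncated SVD infers contextual similarity between documents that share no terms:

```python
import numpy as np

# Tiny term-document count matrix (rows = terms, columns = documents).
# Terms: ship, boat, ocean, vote, election. Docs 0 and 1 share no terms,
# but both co-occur with doc 2's nautical vocabulary.
X = np.array([
    [1, 0, 1, 0, 0],   # ship
    [0, 1, 1, 0, 0],   # boat
    [0, 0, 1, 0, 0],   # ocean
    [0, 0, 0, 1, 1],   # vote
    [0, 0, 0, 1, 0],   # election
], dtype=float)

# LSA: truncated SVD keeps the k strongest latent "concepts".
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
docs = (np.diag(s[:k]) @ Vt[:k]).T       # documents in latent space

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cos(docs[0], docs[1]))   # ~1.0: latent nautical concept links them
print(cos(docs[0], docs[3]))   # ~0.0: unrelated topics
```

This is the core mechanism by which LSA provides the "contextual and structural information" the report describes.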
ERIC Educational Resources Information Center
Carrasco, Robert L.
The case study of the use of a classroom observation technique to evaluate the abilities and performance of a bilingual kindergarten student previously assessed as a low achiever is described. There are three objectives: to show the validity of the ethnographic monitoring technique, to show the value of teachers as collaborating researchers, and…
A novel analytical technique suitable for the identification of plastics.
Nečemer, Marijan; Kump, Peter; Sket, Primož; Plavec, Janez; Grdadolnik, Jože; Zvanut, Maja
2013-01-01
The enormous development and production of plastic materials in the last century has resulted in increasing numbers of such objects. A simple and fast technique to classify different types of plastics could be used in many activities dealing with plastic materials, such as food packaging and the sorting of used plastics, and also, if the technique is non-destructive, for the conservation of plastic artifacts in museum collections, a relatively new field of interest since 1990. In our previous paper we introduced a non-destructive technique for fast identification of unknown plastics based on EDXRF spectrometry [1], using as a case study some plastic artifacts archived in the museum, in order to show the advantages of non-destructive identification of plastic materials. To validate our technique, it was necessary to compare its analyses with those of analytical techniques that are more established and so far rather widely applied to identifying the most common sorts of plastic materials.
Research on Upgrading Structures for Host and Risk Area Shelters
1982-09-01
both "as-built" and upgraded configurations. These analysis and prediction techniques have been applied to floors and roofs constructed of many...scale program and were previously applied to full-scale wood floor tests (Ref. 2). TEST ELEMENTS AND PROCEDURES Three tests were conducted on 8-inch...weights. A 14,000-lb crane counterweight was used for the preload, applying a load of 7,000 lb to each one-third point on the plank. The drop weight was
How accurately can we estimate energetic costs in a marine top predator, the king penguin?
Halsey, Lewis G; Fahlman, Andreas; Handrich, Yves; Schmidt, Alexander; Woakes, Anthony J; Butler, Patrick J
2007-01-01
King penguins (Aptenodytes patagonicus) are one of the greatest consumers of marine resources. However, while their influence on the marine ecosystem is likely to be significant, only an accurate knowledge of their energy demands will indicate their true food requirements. Energy consumption has been estimated for many marine species using the heart rate-rate of oxygen consumption (fH-VO2) technique, and the technique has been applied successfully to answer eco-physiological questions. However, previous studies on the energetics of king penguins, based on developing or applying this technique, have raised a number of issues about the degree of validity of the technique for this species. These include the predictive validity of the present fH-VO2 equations across different seasons and individuals and during different modes of locomotion. In many cases, these issues also apply to other species for which the fH-VO2 technique has been applied. In the present study, the accuracy of three prediction equations for king penguins was investigated based on validity studies and on estimates of VO2 from published, field fH data. The major conclusions from the present study are: (1) in contrast to that for walking, the fH-VO2 relationship for swimming king penguins is not affected by body mass; (2) prediction equation (1), log(VO2) = -0.279 + 1.24·log(fH) + 0.0237·t - 0.0157·log(fH)·t, derived in a previous study, is the most suitable equation presently available for estimating VO2 in king penguins for all locomotory and nutritional states. A number of possible problems associated with producing an fH-VO2 relationship are discussed in the present study. Finally, a statistical method to include easy-to-measure morphometric characteristics, which may improve the accuracy of fH-VO2 prediction equations, is explained.
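Prediction equation (1) as quoted in the abstract can be transcribed directly as a function. Two caveats: the abstract does not restate the definition or units of the covariate t, and the base of the logarithm is not given, so the base-10 choice below is an assumption.

```python
import math

def predict_vo2(f_h, t):
    """Equation (1) from the abstract: estimate VO2 from heart rate f_H and
    the covariate t (definition and units as in the original study).
    Logs are assumed to be base 10."""
    log_vo2 = (-0.279 + 1.24 * math.log10(f_h)
               + 0.0237 * t - 0.0157 * math.log10(f_h) * t)
    return 10 ** log_vo2
```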
Application of response surface techniques to helicopter rotor blade optimization procedure
NASA Technical Reports Server (NTRS)
Henderson, Joseph Lynn; Walsh, Joanne L.; Young, Katherine C.
1995-01-01
In multidisciplinary optimization problems, response surface techniques can be used to replace the complex analyses that define the objective function and/or constraints with simple functions, typically polynomials. In this work a response surface is applied to the design optimization of a helicopter rotor blade. In previous work, this problem has been formulated with a multilevel approach. Here, the response surface takes advantage of this decomposition and is used to replace the lower level, a structural optimization of the blade. Problems that were encountered and important considerations in applying the response surface are discussed. Preliminary results are also presented that illustrate the benefits of using the response surface.
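The idea of replacing the expensive lower level with a cheap polynomial surface can be sketched in a few lines. The objective below is a hypothetical stand-in, not the rotor-blade structural analysis.

```python
import numpy as np
from scipy.optimize import minimize

def expensive_structural_opt(x):
    """Stand-in for the costly lower-level analysis (illustrative only)."""
    return (x[0] - 1.0) ** 2 + 0.5 * (x[1] + 0.5) ** 2 + 0.1 * x[0] * x[1]

# Sample the expensive function at a handful of design points.
rng = np.random.default_rng(2)
pts = rng.uniform(-2, 2, size=(30, 2))
vals = np.array([expensive_structural_opt(p) for p in pts])

# Fit a full quadratic response surface by least squares:
# f(x) ~ c0 + c1*x1 + c2*x2 + c3*x1^2 + c4*x2^2 + c5*x1*x2
basis = lambda p: np.array([1, p[0], p[1], p[0]**2, p[1]**2, p[0]*p[1]])
A = np.array([basis(p) for p in pts])
c, *_ = np.linalg.lstsq(A, vals, rcond=None)

# Optimize the cheap surrogate instead of the expensive analysis.
res = minimize(lambda p: basis(p) @ c, x0=[0.0, 0.0])
print(res.x)   # close to the true optimum of the expensive function
```

Because the stand-in objective is itself quadratic, the surrogate here is exact; in the rotor-blade problem the payoff is that each upper-level iteration queries the polynomial instead of rerunning a structural optimization.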
Image-Subtraction Photometry of Variable Stars in the Globular Clusters NGC 6388 and NGC 6441
NASA Technical Reports Server (NTRS)
Corwin, Michael T.; Sumerel, Andrew N.; Pritzl, Barton J.; Smith, Horace A.; Catelan, M.; Sweigart, Allen V.; Stetson, Peter B.
2006-01-01
We have applied Alard's image subtraction method (ISIS v2.1) to the observations of the globular clusters NGC 6388 and NGC 6441 previously analyzed using standard photometric techniques (DAOPHOT, ALLFRAME). In this reanalysis of observations obtained at CTIO, besides recovering the variables previously detected on the basis of our ground-based images, we have also been able to recover most of the RR Lyrae variables previously detected only in the analysis of Hubble Space Telescope WFPC2 observations of the inner region of NGC 6441. In addition, we report five possible new variables not found in the analysis of the HST observations of NGC 6441. This dramatically illustrates the capabilities of image subtraction techniques applied to ground-based data to recover variables in extremely crowded fields. We have also detected twelve new variables and six possible variables in NGC 6388 not found in our previous ground-based studies. Revised mean periods for RRab stars in NGC 6388 and NGC 6441 are 0.676 day and 0.756 day, respectively. These values are among the largest known for any galactic globular cluster. Additional probable type II Cepheids were identified in NGC 6388, confirming its status as a metal-rich globular cluster rich in Cepheids.
Software Aids for radiologists: Part 1, Useful Photoshop skills.
Gross, Joel A; Thapa, Mahesh M
2012-12-01
The purpose of this review is to describe the use of several essential techniques and tools in Adobe Photoshop image-editing software. The techniques shown expand on those previously described in the radiologic literature. Radiologists, especially those with minimal experience with image-editing software, can quickly apply a few essential Photoshop tools to minimize the frustration that can result from attempting to navigate a complex user interface.
Access to destinations : annual accessibility measure for the Twin Cities Metropolitan Region.
DOT National Transportation Integrated Search
2012-11-01
This report summarizes previous phases of the Access to Destinations project and applies the techniques developed : over the course of the project to conduct an evaluation of accessibility in the Twin Cities metropolitan region for : 2010. It describ...
OGLE II Eclipsing Binaries In The LMC: Analysis With Class
NASA Astrophysics Data System (ADS)
Devinney, Edward J.; Prsa, A.; Guinan, E. F.; DeGeorge, M.
2011-01-01
The Eclipsing Binaries (EBs) via Artificial Intelligence (EBAI) Project is applying machine learning techniques to elucidate the nature of EBs. Previously, Prsa et al. applied artificial neural networks (ANNs) trained on physically realistic Wilson-Devinney models to solve the light curves of the 1882 detached EBs in the LMC discovered by the OGLE II Project (Wyrzykowski et al.) fully automatically, bypassing the need for manually derived starting solutions. A curious result is the non-monotonic distribution of the temperature ratio parameter T2/T1, featuring a subsidiary peak noted previously by Mazeh et al. in an independent analysis using the EBOP EB solution code (Tamuz et al.). To explore this and to gain a fuller understanding of the multivariate EBAI LMC observational plus solutions data, we have employed automatic clustering and advanced visualization (CAV) techniques. Clustering the OGLE II data aggregates objects that are similar with respect to many parameter dimensions. Measures of similarity, for example, could include the multidimensional Euclidean distance between data objects, although other measures may be appropriate. Applying clustering, we find good evidence that the T2/T1 subsidiary peak is due to evolved binaries, in support of Mazeh et al.'s speculation. Further, clustering suggests that the LMC detached EBs occupying the main sequence region belong to two distinct classes. Also identified as a separate cluster in the multivariate data are stars having a Period-I band relation. Derekas et al. had previously found a Period-K band relation for LMC EBs discovered by the MACHO Project (Alcock et al.). We suggest such CAV techniques will prove increasingly useful for understanding the large, multivariate datasets increasingly being produced in astronomy. We are grateful for the support of this research from NSF/RUI Grant AST-05-75042.
NASA Technical Reports Server (NTRS)
Mellstrom, J. A.; Smyth, P.
1991-01-01
The results of applying pattern recognition techniques to diagnose fault conditions in the pointing system of one of the Deep Space Network's large antennas, the DSS 13 34-meter structure, are discussed. A previous article described an experiment whereby a neural network technique was used to identify fault classes using data obtained from a simulation model of the Deep Space Network (DSN) 70-meter antenna system. Described here is the extension of these classification techniques to the analysis of real data from the field. The general architecture and philosophy of an autonomous monitoring paradigm is described, and classification results are discussed and analyzed in this context. Key features of this approach include a probabilistic time-varying context model, the effective integration of signal processing and system identification techniques with pattern recognition algorithms, and the ability to calibrate the system given limited amounts of training data. Reported here are recognition accuracies in the 97 to 98 percent range for the particular fault classes included in the experiments.
NASA Astrophysics Data System (ADS)
Bi, Chuan-Xing; Hu, Ding-Yu; Zhang, Yong-Bin; Jing, Wen-Qian
2015-06-01
In previous studies, an equivalent source method (ESM)-based technique for recovering the free sound field in a noisy environment has been successfully applied to exterior problems. In order to evaluate its performance when applied to a more general noisy environment, that technique is used to identify active sources inside cavities where the sound field is composed of the field radiated by active sources and that reflected by walls. A patch approach with two semi-closed surfaces covering the target active sources is presented to perform the measurements, and the field that would be radiated by these target active sources into free space is extracted from the mixed field by using the proposed technique, which will be further used as the input of nearfield acoustic holography for source identification. Simulation and experimental results validate the effectiveness of the proposed technique for source identification in cavities, and show the feasibility of performing the measurements with a double layer planar array.
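A toy version of the equivalent source method (hypothetical geometry and frequency, and a single-layer solve rather than the paper's double-layer patch approach) illustrates the underlying inverse step: solving a Green's-function transfer matrix for equivalent source strengths.

```python
import numpy as np

rng = np.random.default_rng(7)

def G(r, k):
    """Free-space Green's function of a monopole: exp(ikr) / (4*pi*r)."""
    return np.exp(1j * k * r) / (4 * np.pi * r)

k = 2 * np.pi * 3000 / 343.0      # wavenumber at 3 kHz in air (illustrative)

# Hypothetical geometry: equivalent sources just behind the source surface,
# microphones some distance in front of it.
sources = rng.uniform([-0.15, -0.15, -0.05], [0.15, 0.15, 0.0], size=(8, 3))
mics = rng.uniform([-0.25, -0.25, 0.10], [0.25, 0.25, 0.20], size=(40, 3))

# Transfer matrix from source strengths to microphone pressures.
R = np.linalg.norm(mics[:, None, :] - sources[None, :, :], axis=2)
A = G(R, k)

# Synthesize "measured" pressures from known strengths, then invert by
# least squares to recover them (regularization is needed with real noise).
q_true = rng.normal(size=8) + 1j * rng.normal(size=8)
p = A @ q_true
q_est = np.linalg.lstsq(A, p, rcond=None)[0]
print(np.linalg.norm(q_est - q_true) / np.linalg.norm(q_true))  # ~0
```

Once the strengths are known, the free field radiated by the target sources can be re-synthesized at any point, which is what the paper feeds into near-field acoustic holography.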
VEG: An intelligent workbench for analysing spectral reflectance data
NASA Technical Reports Server (NTRS)
Harrison, P. Ann; Harrison, Patrick R.; Kimes, Daniel S.
1994-01-01
An Intelligent Workbench (VEG) was developed for the systematic study of remotely sensed optical data from vegetation. A goal of the remote sensing community is to infer the physical and biological properties of vegetation cover (e.g. cover type, hemispherical reflectance, ground cover, leaf area index, biomass, and photosynthetic capacity) using directional spectral data. VEG collects together, in a common format, techniques previously available from many different sources in a variety of formats. The decision as to when a particular technique should be applied is nonalgorithmic and requires expert knowledge. VEG has codified this expert knowledge into a rule-based decision component for determining which technique to use. VEG provides a comprehensive interface that makes applying the techniques simple and aids a researcher in developing and testing new techniques. VEG also provides a classification algorithm that can learn new classes of surface features. The learning system uses the database of historical cover types to learn class descriptions of one or more classes of cover types.
Analysis of distortion data from TF30-P-3 mixed compression inlet test
NASA Technical Reports Server (NTRS)
King, R. W.; Schuerman, J. A.; Muller, R. G.
1976-01-01
A program was conducted to reduce and analyze inlet and engine data obtained during testing of a TF30-P-3 engine operating behind a mixed compression inlet. Previously developed distortion analysis techniques were applied to the data to assist in the development of a new distortion methodology. Instantaneous distortion techniques were refined as part of the distortion methodology development. A technique for estimating maximum levels of instantaneous distortion from steady state and average turbulence data was also developed as part of the program.
Advanced Feedback Methods in Information Retrieval.
ERIC Educational Resources Information Center
Salton, G.; And Others
1985-01-01
In this study, automatic feedback techniques are applied to Boolean query statements in online information retrieval to generate improved query statements based on information contained in previously retrieved documents. Feedback operations are carried out using conventional Boolean logic and extended logic. Experimental output is included to…
Runtime support for parallelizing data mining algorithms
NASA Astrophysics Data System (ADS)
Jin, Ruoming; Agrawal, Gagan
2002-03-01
With recent technological advances, shared memory parallel machines have become more scalable, and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed starting from a common specification of the algorithm.
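The full-replication technique with a reduction-object-style interface can be sketched as follows. This uses Python threads for clarity; CPython's GIL prevents real speedup here, so the sketch illustrates the pattern, not the paper's C-level runtime.

```python
import threading
from collections import Counter

# Reduction-object pattern: each thread updates a private replica of the
# reduction object (full replication), so no locking is needed; the
# replicas are merged once at the end.
transactions = [["bread", "milk"], ["bread", "beer"], ["milk", "beer"],
                ["bread", "milk"]] * 1000

def count_items(chunk, local: Counter):
    for txn in chunk:
        for item in txn:          # the "reduction" update
            local[item] += 1

n_threads = 4
locals_ = [Counter() for _ in range(n_threads)]
chunks = [transactions[i::n_threads] for i in range(n_threads)]
threads = [threading.Thread(target=count_items, args=(c, l))
           for c, l in zip(chunks, locals_)]
for t in threads: t.start()
for t in threads: t.join()

# Final merge of the replicated reduction objects.
global_counts = sum(locals_, Counter())
print(global_counts.most_common(2))
```

The locking variants trade this memory overhead for shared counters protected at different granularities, which is the design space the paper explores.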
[Pterygium surgery and fibrin glue: avoiding dehiscence].
Pérez-Silguero, D; Díaz-Ginory, A; Santana-Rodríguez, C; Pérez-Silguero, M A
2014-01-01
The purpose of the study is to evaluate those cases of pterygium surgery with fibrin sealant that produced dehiscence of the graft, and then to apply and evaluate the efficacy of a different surgical technique in an attempt to eliminate this complication in previously identified high-risk cases. The first phase is a retrospective study of 42 cases of pterygium surgery. In the second phase, the variation in the surgical technique was prospectively used in 14 cases of pterygium surgery. Cases of recurrent pterygium, broad pterygium, and complicated surgery were identified as the groups at risk of suffering dehiscence of the graft. No dehiscence occurred when the variation in the surgical technique was applied, and there were no added complications. Copyright © 2012 Sociedad Española de Oftalmología. Published by Elsevier Espana. All rights reserved.
Group decision-making techniques for natural resource management applications
Coughlan, Beth A.K.; Armour, Carl L.
1992-01-01
This report is an introduction to decision analysis and problem-solving techniques for professionals in natural resource management. Although these managers are often called upon to make complex decisions, their training in the natural sciences seldom provides exposure to the decision-making tools developed in management science. Our purpose is to begin to fill this gap. We present a general analysis of the pitfalls of group problem solving and suggestions for improved interactions, followed by the specific techniques. Selected techniques are illustrated. The material is easy to understand and apply without previous training or excessive study, and is applicable to natural resource management issues.
NASA Astrophysics Data System (ADS)
Cherry, M.; Dierken, J.; Boehnlein, T.; Pilchak, A.; Sathish, S.; Grandhi, R.
2018-01-01
A new technique for performing quantitative scanning acoustic microscopy imaging of Rayleigh surface wave (RSW) velocity was developed based on b-scan processing. In this technique, the focused acoustic beam is moved through many defocus distances over the sample and excited with an impulse excitation, and advanced algorithms based on frequency filtering and the Hilbert transform are used to post-process the b-scans to estimate the Rayleigh surface wave velocity. The new method was used to estimate the RSW velocity on an optically flat E6 glass sample; the velocity was measured to within ±2 m/s and the scanning time per point was on the order of 1.0 s, both improvements over the previous two-point defocus method. The new method was also applied to the analysis of two titanium samples, and the velocity was estimated with very low standard deviation in certain large grains on the sample. A new behavior was observed with the b-scan analysis technique, where the amplitude of the surface wave decayed dramatically on certain crystallographic orientations. The new technique was also compared with previous results and has been found to be much more reliable and to have higher contrast than previously possible with impulse excitation.
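The envelope-extraction step at the heart of such b-scan post-processing can be sketched with the Hilbert transform. The A-scan below is synthetic and the parameters are illustrative, not the paper's instrument settings.

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic A-scan: a tone burst (an echo) arriving at t0, buried in noise.
fs = 100e6                        # 100 MHz sampling (illustrative)
t = np.arange(2000) / fs
f0, t0 = 10e6, 8e-6               # 10 MHz burst arriving at 8 microseconds
burst = np.sin(2*np.pi*f0*(t - t0)) * np.exp(-((t - t0) / 0.3e-6) ** 2)
trace = burst + 0.05 * np.random.default_rng(3).normal(size=t.size)

# The Hilbert transform gives the analytic signal; its magnitude is the
# envelope, whose peak locates the echo arrival time robustly.
envelope = np.abs(hilbert(trace))
arrival = t[np.argmax(envelope)]
print(f"estimated arrival: {arrival * 1e6:.2f} us")   # ~8.00 us

# Arrival times extracted at a series of defocus distances z then feed a
# velocity fit, as in the b-scan method described above.
```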
NMR studies of multiphase flows II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altobelli, S.A.; Caprihan, A.; Fukushima, E.
NMR techniques for measurements of spatial distribution of material phase, velocity and velocity fluctuation are being developed and refined. Versions of these techniques which provide time average liquid fraction and fluid phase velocity have been applied to several concentrated suspension systems which will not be discussed extensively here. Technical developments required to further extend the use of NMR to the multi-phase flow arena and to provide measurements of previously unobtainable parameters are the focus of this report.
The efficacy of the 'mind map' study technique.
Farrand, Paul; Hussain, Fearzana; Hennessy, Enid
2002-05-01
To examine the effectiveness of using the 'mind map' study technique to improve factual recall from written information. To obtain baseline data, subjects completed a short test based on a 600-word passage of text prior to being randomly allocated to two groups: 'self-selected study technique' and 'mind map'. After a 30-minute interval the self-selected study technique group were exposed to the same passage of text previously seen and told to apply existing study techniques. Subjects in the mind map group were trained in the mind map technique and told to apply it to the passage of text. Recall was measured after an interfering task and a week later. Measures of motivation were taken. The setting was Barts and The London School of Medicine and Dentistry, University of London; participants were 50 second- and third-year medical students. Recall of factual material improved for both the mind map and self-selected study technique groups at immediate test compared with baseline. However, this improvement was only robust after a week for those in the mind map group. At 1 week, the factual knowledge in the mind map group was greater by 10% (adjusting for baseline) (95% CI -1% to 22%). However, motivation for the technique used was lower in the mind map group; if motivation could have been made equal in the groups, the improvement with mind mapping would have been 15% (95% CI 3% to 27%). Mind maps provide an effective study technique when applied to written material. However, before mind maps are generally adopted as a study technique, consideration has to be given to ways of improving motivation amongst users.
The Long-Term Sustainability of Different Item Response Theory Scaling Methods
ERIC Educational Resources Information Center
Keller, Lisa A.; Keller, Robert R.
2011-01-01
This article investigates the accuracy of examinee classification into performance categories and the estimation of the theta parameter for several item response theory (IRT) scaling techniques when applied to six administrations of a test. Previous research has investigated only two administrations; however, many testing programs equate tests…
Camacho, Morgana; Pessanha, Thaíla; Leles, Daniela; Dutra, Juliana M F; Silva, Rosângela; Souza, Sheila Mendonça de; Araujo, Adauto
2013-04-01
Parasite findings in sambaquis (shell mounds) are scarce. Although the 121 shell mound samples were previously analysed in our laboratory, we only recently obtained the first positive results. In the sambaqui of Guapi, Rio de Janeiro, Brazil, paleoparasitological analysis was performed on sediment samples collected from various archaeological layers, including the superficial layer as a control. Eggs of Acanthocephala, Ascaridoidea and Heterakoidea were found in the archaeological layers. We applied various techniques and concluded that Lutz's spontaneous sedimentation technique is effective for concentrating parasite eggs in sambaqui soil for microscopic analysis.
Applications of Deep Learning and Reinforcement Learning to Biological Data.
Mahmud, Mufti; Kaiser, Mohammed Shamim; Hussain, Amir; Vassanelli, Stefano
2018-06-01
Rapid advances in hardware-based technologies during the past decades have opened up new possibilities for life scientists to gather multimodal data in various application domains, such as omics, bioimaging, medical imaging, and (brain/body)-machine interfaces. These have generated novel opportunities for development of dedicated data-intensive machine learning techniques. In particular, recent research in deep learning (DL), reinforcement learning (RL), and their combination (deep RL) promise to revolutionize the future of artificial intelligence. The growth in computational power accompanied by faster and increased data storage, and declining computing costs have already allowed scientists in various fields to apply these techniques on data sets that were previously intractable owing to their size and complexity. This paper provides a comprehensive survey on the application of DL, RL, and deep RL techniques in mining biological data. In addition, we compare the performances of DL techniques when applied to different data sets across various application domains. Finally, we outline open issues in this challenging research area and discuss future development perspectives.
Local regression type methods applied to the study of geophysics and high frequency financial data
NASA Astrophysics Data System (ADS)
Mariani, M. C.; Basu, K.
2014-09-01
In this work we applied locally weighted scatterplot smoothing techniques (Lowess/Loess) to geophysical and high-frequency financial data. We first analyze and apply this technique to the California earthquake geological data. A spatial analysis was performed to show that the estimation of the earthquake magnitude at a fixed location is accurate to within a relative error of 0.01%. We also applied the same method to a high-frequency data set arising in the financial sector and obtained similarly satisfactory results. The application of this approach to the two different data sets demonstrates that the overall method is accurate and efficient, and that the Lowess approach is much more desirable than the Loess method. Previous works studied time series analysis; in this paper our local regression models perform a spatial analysis of the geophysics data, providing different information. For the high-frequency data, our models estimate the curve of best fit, where data are dependent on time.
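A minimal Lowess example (synthetic data; statsmodels' implementation, not the authors' code):

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 10, 300))
y = np.sin(x) + 0.3 * rng.normal(size=x.size)   # noisy observations

# Lowess: at each point, fit a weighted local linear regression over the
# nearest frac*N neighbours (tricube weights), robustified by `it` passes.
fitted = lowess(y, x, frac=0.15, it=3, return_sorted=True)
x_hat, y_hat = fitted[:, 0], fitted[:, 1]
print(np.max(np.abs(y_hat - np.sin(x_hat))))    # modest smoothing error
```

For the spatial analysis described above, the same local weighting idea is applied over spatial coordinates rather than time.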
Percolation analysis of nonlinear structures in scale-free two-dimensional simulations
NASA Technical Reports Server (NTRS)
Dominik, Kurt G.; Shandarin, Sergei F.
1992-01-01
Results are presented of applying percolation analysis to several two-dimensional N-body models which simulate the formation of large-scale structure. Three parameters are estimated: total area (a_c), total mass (M_c), and percolation density (rho_c) of the percolating structure at the percolation threshold, for both unsmoothed and smoothed (with different scales L_s) nonlinear structures; the percolating structures are filamentary, confirming early speculations that this type of model has several features of filamentary-type distributions. Also, it is shown that, by properly applying smoothing techniques, many problems previously considered detrimental can be dealt with and overcome. Possible difficulties and prospects of the use of this method are discussed, specifically relating to techniques and methods already applied to CfA deep sky surveys. The success of this test in two dimensions and the potential for extrapolation to three dimensions are also discussed.
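The percolation test itself is simple to sketch. The 2D field below is hypothetical, and scipy's connected-component labelling stands in for the authors' cluster analysis.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(5)
# Smoothed random field as a stand-in for a simulated density field
# (the smoothing scale plays the role of L_s above).
density = ndimage.gaussian_filter(rng.normal(size=(256, 256)), 3)

def percolates(field, threshold):
    """True if a cluster above `threshold` spans the grid left to right."""
    labels, n = ndimage.label(field > threshold)
    left, right = set(labels[:, 0]) - {0}, set(labels[:, -1]) - {0}
    return bool(left & right)

# Scan thresholds downward to bracket the percolation threshold rho_c.
for thr in np.linspace(density.max(), density.min(), 20):
    if percolates(density, thr):
        print(f"percolation sets in near threshold {thr:.3f}")
        break
```

The area and mass of the spanning cluster at this threshold correspond to the a_c and M_c statistics quoted in the abstract.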
Advances in 6d diffraction contrast tomography
NASA Astrophysics Data System (ADS)
Viganò, N.; Ludwig, W.
2018-04-01
The ability to measure 3D orientation fields and to determine grain boundary character plays a key role in understanding many material science processes, including: crack formation and propagation, grain coarsening, and corrosion processes. X-ray diffraction imaging techniques offer the ability to retrieve such information in a non-destructive manner. Among them, Diffraction Contrast Tomography (DCT) is a monochromatic beam, near-field technique, that uses an extended beam and offers fast mapping of 3D sample volumes. It was previously shown that the six-dimensional extension of DCT can be applied to moderately deformed samples (<= 5% total strain), made from materials that exhibit low levels of elastic deformation of the unit cell (<= 1%). In this article, we improved over the previously proposed 6D-DCT reconstruction method, through the introduction of both a more advanced forward model and reconstruction algorithm. The results obtained with the proposed improvements are compared against the reconstructions previously published in [1], using Electron Backscatter Diffraction (EBSD) measurements as a reference. The result was a noticeably higher quality reconstruction of the grain boundary positions and local orientation fields. The achieved reconstruction quality, together with the low acquisition times, render DCT a valuable tool for the stop-motion study of polycrystalline microstructures, evolving as a function of applied strain or thermal annealing treatments, for selected materials.
NASA Astrophysics Data System (ADS)
Modegi, Toshio
We are developing audio watermarking techniques which enable extraction of embedded data by cell phones. For that purpose we have to embed data in frequency ranges where the auditory response is prominent, so data embedding causes considerable audible noise. Previously we proposed applying a two-channel stereo playback scheme, in which the noise generated by the data-embedded left-channel signal is reduced by the other, right-channel signal. However, this proposal has the practical problem of restricting the location of the extracting terminal. In this paper, we propose synthesizing the noise-reducing right-channel signal with the left-channel signal, reducing the noise completely by inducing an auditory stream segregation phenomenon in the listener. This new proposal makes the separate noise-reducing right-channel signal unnecessary and supports monaural playback. Moreover, we propose a wide-band embedding method causing dual auditory stream segregation phenomena, which enables data embedding over the whole public telephone frequency range and stable extraction with 3G mobile phones. With these proposals, extraction precision becomes higher than with the previously proposed method, while the quality damage to the embedded signal becomes smaller. In this paper we present an overview of our newly proposed method and experimental results compared with those of the previously proposed method.
NASA Astrophysics Data System (ADS)
Kempema, Nathan J.; Ma, Bin; Long, Marshall B.
2016-09-01
Soot optical properties are essential to the noninvasive study of the in-flame evolution of soot particles since they allow quantitative interpretation of optical diagnostics. Such experimental data are critical for comparison to results from computational models and soot sub-models. In this study, the thermophoretic sampling particle diagnostic (TSPD) technique is applied along with data from a previous spectrally resolved line-of-sight light attenuation experiment to determine the soot volume fraction and absorption function. The TSPD technique is applied in a flame stabilized on the Yale burner, and the soot scattering-to-absorption ratio is calculated using the Rayleigh-Debye-Gans theory for fractal aggregates and morphology information from a previous sampling experiment. The soot absorption function is determined as a function of wavelength and found to be in excellent agreement with previous in-flame measurements of the soot absorption function in coflow laminar diffusion flames. Two-dimensional maps of the soot dispersion exponent are calculated and show that the soot absorption function may have a positive or negative exponential wavelength dependence depending on the in-flame location. Finally, the wavelength dependence of the soot absorption function is related to the ratio of soot absorption functions, as would be found using two-excitation-wavelength laser-induced incandescence.
Optimization of the tungsten oxide technique for measurement of atmospheric ammonia
NASA Technical Reports Server (NTRS)
Brown, Kenneth G.
1987-01-01
Hollow tubes coated with tungstic acid have been shown to be of value in the determination of ammonia and nitric acid in ambient air. Practical application of this technique was demonstrated utilizing an automated sampling system for in-flight collection and analysis of atmospheric samples. Due to time constraints these previous measurements were performed on tubes that had not been well characterized in the laboratory. As a result the experimental precision could not be accurately estimated. Since the technique was being compared to other techniques for measuring these compounds, it became necessary to perform laboratory tests which would establish the reliability of the technique. This report is a summary of these laboratory experiments as they are applied to the determination of ambient ammonia concentration.
Composeable Chat over Low-Bandwidth Intermittent Communication Links
2007-04-01
…Compression (STC), introduced in this report, is a data compression algorithm intended to compress alphanumeric… Ziv-Lempel coding, the grandfather of most modern general-purpose file compression programs, watches for input symbol sequences that have previously… data. This section applies these techniques to create a new compression algorithm called Small Text Compression. Various sequence compression…
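The report's STC algorithm is not reproduced in this excerpt, but the Ziv-Lempel idea it builds on, emitting back-references to previously seen symbol sequences, can be illustrated with a deliberately minimal LZ77-style sketch:

```python
def lz77_compress(data: str, window: int = 255):
    """Minimal LZ77-style encoder: emit (offset, length, next_char) triples
    pointing back at previously seen sequences."""
    out, i = [], 0
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):
            l = 0
            while i + l < len(data) - 1 and data[j + l] == data[i + l]:
                l += 1
            if l > best_len:
                best_off, best_len = i - j, l
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz77_decompress(triples):
    s = ""
    for off, length, ch in triples:
        for _ in range(length):
            s += s[-off]      # copy from the already-decoded window
        s += ch
    return s

msg = "abracadabra abracadabra"
assert lz77_decompress(lz77_compress(msg)) == msg
```

Practical schemes add entropy coding of the triples and tuned window sizes, which is where a text-specialized design like STC would differentiate itself.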
Applying the Mixed Rasch Model to the Runco Ideational Behavior Scale
ERIC Educational Resources Information Center
Sen, Sedat
2016-01-01
Previous research using creativity assessments has used latent class models and identified multiple classes (a 3-class solution) associated with various domains. This study explored the latent class structure of the Runco Ideational Behavior Scale, which was designed to quantify ideational capacity. A robust state-of the-art technique called the…
NASA Astrophysics Data System (ADS)
Zulfakriza, Z.; Saygin, E.; Cummins, P. R.; Widiyantoro, S.; Nugraha, A. D.; Lühr, B.-G.; Bodin, T.
2014-04-01
Delineating the crustal structure of central Java is crucial for understanding its complex tectonic setting. However, seismic imaging of the strong heterogeneity typical of such a tectonically active region can be challenging, particularly in the upper crust where velocity contrasts are strongest and steep body wave ray paths provide poor resolution. To overcome these difficulties, we apply the technique of ambient noise tomography (ANT) to data collected during the Merapi Amphibious Experiment (MERAMEX), which covered central Java with a temporary deployment of over 120 seismometers during 2004 May-October. More than 5000 Rayleigh wave Green's functions were extracted by cross-correlating the noise simultaneously recorded at available station pairs. We applied a fully non-linear 2-D Bayesian probabilistic inversion technique to the retrieved traveltimes. Features in the derived tomographic images correlate well with previous studies, and some shallow structures that were not evident in previous studies are clearly imaged with ANT. The Kendeng Basin and several active volcanoes appear with very low group velocities, and anomalies with relatively high velocities can be interpreted in terms of crustal sutures and/or surface geological features.
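The noise cross-correlation step can be illustrated with synthetic records (an idealized single noise source; real ANT stacks many time windows and uses both causal and acausal lags):

```python
import numpy as np

rng = np.random.default_rng(6)
fs = 100.0                       # Hz
n = 100_000
source = rng.normal(size=n)      # diffuse noise wavefield (idealized)

# Two stations record the same noise with a 1.5 s propagation delay.
delay = int(1.5 * fs)
sta_a = source + 0.5 * rng.normal(size=n)
sta_b = np.roll(source, delay) + 0.5 * rng.normal(size=n)

# Cross-correlating long noise records retrieves the inter-station
# Green's function; the correlation peak gives the traveltime.
maxlag = int(5 * fs)
lags = np.arange(-maxlag, maxlag + 1)
xc = np.array([np.dot(sta_a[max(0, -l):n - max(0, l)],
                      sta_b[max(0, l):n - max(0, -l)]) for l in lags])
print("traveltime estimate:", lags[np.argmax(xc)] / fs, "s")   # ~1.5 s
```

Traveltimes measured this way between all station pairs are what feed the Bayesian group-velocity inversion described in the abstract.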
Girard, Romuald; Zeineddine, Hussein A; Orsbon, Courtney; Tan, Huan; Moore, Thomas; Hobson, Nick; Shenkar, Robert; Lightle, Rhonda; Shi, Changbin; Fam, Maged D; Cao, Ying; Shen, Le; Neander, April I; Rorrer, Autumn; Gallione, Carol; Tang, Alan T; Kahn, Mark L; Marchuk, Douglas A; Luo, Zhe-Xi; Awad, Issam A
2016-09-15
Cerebral cavernous malformations (CCMs) are hemorrhagic brain lesions for which murine models have enabled major mechanistic discoveries, genetic manipulations, and preclinical assessment of therapies. Histology for lesion counting and morphometry is essential yet tedious and time-consuming. We herein describe the application and validation of X-ray micro-computed tomography (micro-CT), a non-destructive technique allowing three-dimensional CCM lesion counts and volumetric measurements, in transgenic murine brains. We describe a new contrast-soaking technique not previously applied to murine models of CCM disease. The volumetric segmentation and image-processing paradigm allowed histologic correlations and quantitative validations not previously reported with the micro-CT technique in brain vascular disease. Twenty-two hyper-dense areas on micro-CT images, identified as CCM lesions, were matched by histology. The inter-rater reliability analysis showed strong consistency in CCM lesion identification and staging (K=0.89, p<0.0001) between the two techniques. Micro-CT revealed a 29% greater CCM lesion detection efficiency and an 80% improvement in time efficiency. Serial integrated lesional area by histology showed a strong positive correlation with micro-CT estimated volume (r²=0.84, p<0.0001). Micro-CT allows high-throughput assessment of lesion count and volume in preclinical murine models of CCM. This approach complements histology with improved accuracy and efficiency, and can be applied for lesion burden assessment in other brain diseases. Copyright © 2016 Elsevier B.V. All rights reserved.
Innovation and behavioral flexibility in wild redfronted lemurs (Eulemur rufifrons).
Huebner, Franziska; Fichtel, Claudia
2015-05-01
Innovations and problem-solving abilities can provide animals with important ecological advantages as they allow individuals to deal with novel social and ecological challenges. Innovation is a solution to a novel problem or a novel solution to an old problem, with the latter being especially difficult. Finding a new solution to an old problem requires individuals to inhibit previously applied solutions to invent new strategies and to behave flexibly. We examined the role of experience on cognitive flexibility to innovate and to find new problem-solving solutions with an artificial feeding task in wild redfronted lemurs (Eulemur rufifrons). Four groups of lemurs were tested with feeding boxes, each offering three different techniques to extract food, with only one technique being available at a time. After the subjects learned a technique, this solution was no longer successful and subjects had to invent a new technique. For the first transition between task 1 and 2, subjects had to rely on their experience of the previous technique to solve task 2. For the second transition, subjects had to inhibit the previously learned technique to learn the new task 3. Tasks 1 and 2 were solved by most subjects, whereas task 3 was solved by only a few subjects. In this task, besides behavioral flexibility, especially persistence, i.e., constant trying, was important for individual success during innovation. Thus, wild strepsirrhine primates are able to innovate flexibly, suggesting a general ecological relevance of behavioral flexibility and persistence during innovation and problem solving across all primates.
Colorimetry Technique for Scalable Characterization of Suspended Graphene.
Cartamil-Bueno, Santiago J; Steeneken, Peter G; Centeno, Alba; Zurutuza, Amaia; van der Zant, Herre S J; Houri, Samer
2016-11-09
Previous statistical studies on the mechanical properties of chemical-vapor-deposited (CVD) suspended graphene membranes have been performed by means of measuring individual devices or with techniques that affect the material. Here, we present a colorimetry technique as a parallel, noninvasive, and affordable way of characterizing suspended graphene devices. We exploit Newton's rings interference patterns to study the deformation of a double-layer graphene drum 13.2 μm in diameter when a pressure step is applied. By studying the time evolution of the deformation, we find that filling the drum cavity with air is 2-5 times slower than when it is purged.
Optical digital chaos cryptography
NASA Astrophysics Data System (ADS)
Arenas-Pingarrón, Álvaro; González-Marcos, Ana P.; Rivas-Moscoso, José M.; Martín-Pereda, José A.
2007-10-01
In this work we present a new way to mask the data in a one-user communication system when direct-sequence code division multiple access (DS-CDMA) techniques are used. The code is generated by a digital chaotic generator, originally proposed by us and previously reported for a chaos cryptographic system. It is demonstrated that if the user's data signal is encoded with a bipolar phase-shift keying (BPSK) technique, usual in DS-CDMA, it can be easily recovered from a time-frequency domain representation. To avoid this situation, a new system is presented in which a previous dispersive stage is applied to the data signal. A time-frequency domain analysis is performed, and the devices required at the transmitter and receiver end, both user-independent, are presented for the optical domain.
Rodgers, Kiri J; Hursthouse, Andrew; Cuthbert, Simon
2015-09-18
As waste management regulations become more stringent, yet demand for resources continues to increase, there is a pressing need for innovative management techniques and more sophisticated supporting analysis techniques. Sequential extraction (SE) analysis, a technique previously applied to soils and sediments, offers the potential to gain a better understanding of the composition of solid wastes. SE attempts to classify potentially toxic elements (PTEs) by their associations with phases or fractions in waste, with the aim of improving resource use and reducing negative environmental impacts. In this review we explain how SE can be applied to steel wastes. These present challenges due to differences in sample characteristics compared with materials to which SE has been traditionally applied, specifically chemical composition, particle size and pH buffering capacity, which are critical when identifying a suitable SE method. We highlight the importance of delineating iron-rich phases, and find that the commonly applied BCR (The community Bureau of reference) extraction method is problematic due to difficulties with zinc speciation (a critical steel waste constituent), hence a substantially modified SEP is necessary to deal with particular characteristics of steel wastes. Successful development of SE for steel wastes could have wider implications, e.g., for the sustainable management of fly ash and mining wastes.
NASA Technical Reports Server (NTRS)
Choe, C. Y.; Tapley, B. D.
1975-01-01
A method proposed by Potter for applying the Kalman-Bucy filter to the problem of estimating the state of a dynamic system is described, in which the square root of the state error covariance matrix is used to process the observations. A new technique which propagates the covariance square root matrix in lower triangular form is given for the discrete observation case. The technique is faster than previously proposed algorithms and is well adapted for use with the Carlson square root measurement algorithm.
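The scalar-measurement form of Potter's square-root update is compact enough to sketch. The following minimal numpy sketch is illustrative only: it assumes a scalar observation z = Hx + v with noise variance R, and shows how the gain and the covariance square root S (with P = S Sᵀ) are updated without ever forming P, which is the numerical advantage square-root filters trade on. Carlson's variant additionally keeps S triangular; that refinement is omitted here.

```python
import numpy as np

def potter_update(x, S, z, H, R):
    """Potter-style square-root measurement update for a scalar observation.

    x : state estimate (n,); S : covariance square root, P = S @ S.T
    z : scalar measurement; H : (n,) measurement row; R : scalar noise variance.
    """
    a = S.T @ H                          # phi = S^T H^T
    b = 1.0 / (a @ a + R)                # inverse of the innovation variance
    gamma = b / (1.0 + np.sqrt(R * b))   # Potter's scalar factor
    K = b * (S @ a)                      # Kalman gain
    x_new = x + K * (z - H @ x)
    S_new = S - gamma * np.outer(S @ a, a)   # rank-one update of the square root
    return x_new, S_new
```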
Multiclass Bayes error estimation by a feature space sampling technique
NASA Technical Reports Server (NTRS)
Mobasseri, B. G.; Mcgillem, C. D.
1979-01-01
A general Gaussian M-class, N-feature classification problem is defined. An algorithm is developed that requires the class statistics as its only input and computes the minimum probability of error through use of a combined analytical and numerical integration over a sequence of simplifying transformations of the feature space. The results are compared with those obtained by conventional techniques applied to a 2-class, 4-feature discrimination problem with previously reported results, and to 4-class, 4-feature multispectral scanner Landsat data classified by training and testing of the available data.
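For readers who want a baseline to compare against, the minimum probability of error for fully specified Gaussian classes can also be estimated by plain Monte Carlo: sample from the mixture and score the Bayes rule. The sketch below is a generic stand-in, not the paper's analytic-numeric integration, and the class statistics are invented for illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Hypothetical class statistics for a 2-class, 4-feature problem, equal priors.
means = [np.zeros(4), np.ones(4) * 1.5]
covs = [np.eye(4), np.diag([2.0, 1.0, 1.0, 0.5])]
priors = [0.5, 0.5]

def bayes_error_mc(n=200_000):
    # Draw samples class by class and count Bayes-rule misclassifications.
    errors = 0
    for k in range(2):
        nk = int(n * priors[k])
        x = rng.multivariate_normal(means[k], covs[k], size=nk)
        post = np.stack([priors[j] * multivariate_normal.pdf(x, means[j], covs[j])
                         for j in range(2)])
        errors += np.sum(post.argmax(axis=0) != k)
    return errors / n

print(f"Estimated Bayes error: {bayes_error_mc():.4f}")
```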
NASA Astrophysics Data System (ADS)
Kleshnin, Mikhail; Orlova, Anna; Kirillin, Mikhail; Golubiatnikov, German; Turchin, Ilya
2017-07-01
A new approach to the optical measurement of blood oxygen saturation was developed and implemented. This technique is based on an original three-stage algorithm for reconstructing the relative concentrations of biological chromophores (hemoglobin, water, lipids) from the measured spectra of diffusely scattered light at different distances from the probing radiation source. Numerical experiments and validation of the proposed technique on a biological phantom have shown high reconstruction accuracy and the possibility of correctly calculating hemoglobin oxygenation in the presence of additive noise and calibration errors. The results of animal studies agree with previously published results of other research groups and demonstrate the possibility of applying the developed technique to monitor oxygen saturation in tumor tissue.
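In its simplest form, the chromophore-unmixing step alluded to above reduces to a constrained least-squares fit of measured absorption against tabulated extinction spectra, after which oxygenation follows from the hemoglobin fractions. A minimal sketch, with hypothetical extinction values standing in for real tables:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical extinction coefficients (columns: HbO2, Hb, water) at the
# measured wavelengths; real values would come from tabulated spectra.
E = np.array([[1.10, 0.30, 0.01],
              [0.80, 0.75, 0.02],
              [0.29, 1.05, 0.02],
              [0.50, 0.40, 0.30]])

mu_a = np.array([0.95, 0.80, 0.55, 0.52])  # absorption recovered from diffuse spectra

c, _ = nnls(E, mu_a)          # non-negative chromophore concentrations
so2 = c[0] / (c[0] + c[1])    # oxygen saturation from the hemoglobin fractions
print(f"StO2 = {so2:.2%}")
```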
NASA Astrophysics Data System (ADS)
Gao, J.; Nishida, K.
2010-10-01
This paper describes an Ultraviolet-Visible Laser Absorption-Scattering (UV-Vis LAS) imaging technique applied to asymmetric fuel sprays. Continuing from the previous studies, the detailed measurement principle was derived. It is demonstrated that, by means of this technique, cumulative masses and mass distributions of vapor/liquid phases can be quantitatively measured no matter what shape the spray is. A systematic uncertainty analysis was performed, and the measurement accuracy was also verified through a series of experiments on the completely vaporized fuel spray. The results show that the Molar Absorption Coefficient (MAC) of the test fuel, which is typically pressure and temperature dependent, is the major error source. The measurement error in the vapor determination has been shown to be approximately 18% under the assumption of constant MAC of the test fuel. Two application examples of the extended LAS technique were presented for exploring the dynamics and physical insight of the evaporating fuel sprays: diesel sprays injected by group-hole nozzles and gasoline sprays impinging on an inclined wall.
NASA Astrophysics Data System (ADS)
Sidery, T.; Aylott, B.; Christensen, N.; Farr, B.; Farr, W.; Feroz, F.; Gair, J.; Grover, K.; Graff, P.; Hanna, C.; Kalogera, V.; Mandel, I.; O'Shaughnessy, R.; Pitkin, M.; Price, L.; Raymond, V.; Röver, C.; Singer, L.; van der Sluys, M.; Smith, R. J. E.; Vecchio, A.; Veitch, J.; Vitale, S.
2014-04-01
The problem of reconstructing the sky position of compact binary coalescences detected via gravitational waves is a central one for future observations with the ground-based network of gravitational-wave laser interferometers, such as Advanced LIGO and Advanced Virgo. Different techniques for sky localization have been independently developed. They can be divided into two broad categories: fully coherent Bayesian techniques, which are high latency and aimed at in-depth studies of all the parameters of a source, including sky position, and "triangulation-based" techniques, which exploit the data products from the search stage of the analysis to provide an almost real-time approximation of the posterior probability density function of the sky location of a detection candidate. These techniques have previously been applied to data collected during the last science runs of gravitational-wave detectors operating in the so-called initial configuration. Here, we develop and analyze methods for assessing the self-consistency of parameter estimation methods and carrying out fair comparisons between different algorithms, addressing issues of efficiency and optimality. These methods are general, and can be applied to parameter estimation problems other than sky localization. We apply these methods to two existing sky localization techniques representing the two above-mentioned categories, using a set of simulated inspiral-only signals from compact binary systems with a total mass of ≤20M⊙ and nonspinning components. We compare the relative advantages and costs of the two techniques and show that sky location uncertainties are on average a factor ≈20 smaller for fully coherent techniques than for the specific variant of the triangulation-based technique used during the last science runs, at the expense of a factor ≈1000 longer processing time.
Proving Stabilization of Biological Systems
NASA Astrophysics Data System (ADS)
Cook, Byron; Fisher, Jasmin; Krepska, Elzbieta; Piterman, Nir
We describe an efficient procedure for proving stabilization of biological systems modeled as qualitative networks or genetic regulatory networks. For scalability, our procedure uses modular proof techniques, where state-space exploration is applied only locally to small pieces of the system rather than the entire system as a whole. Our procedure exploits the observation that, in practice, the form of modular proofs can be restricted to a very limited set. For completeness, our technique falls back on a non-compositional counterexample search. Using our new procedure, we have solved a number of challenging published examples, including: a 3-D model of the mammalian epidermis; a model of metabolic networks operating in type-2 diabetes; a model of fate determination of vulval precursor cells in the C. elegans worm; and a model of pair-rule regulation during segmentation in the Drosophila embryo. Our results show many orders of magnitude speedup in cases where previous stabilization proving techniques were known to succeed, and new results in cases where tools had previously failed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gillen, Kenneth Todd; Minier, Leanna M. G.; Celina, Mathias C.
Chemiluminescence (CL) has been applied as a condition monitoring technique to assess aging related changes in a hydroxyl-terminated-polybutadiene based polyurethane elastomer. Initial thermal aging of this polymer was conducted between 110 and 50 °C. Two CL methods were applied to examine the degradative changes that had occurred in these aged samples: isothermal 'wear-out' experiments under oxygen yielding initial CL intensity and 'wear-out' time data, and temperature ramp experiments under inert conditions as a measure of previously accumulated hydroperoxides or other reactive species. The sensitivities of these CL features to prior aging exposure of the polymer were evaluated on the basis of qualifying this method as a quick screening technique for quantification of degradation levels. Both techniques yielded data representing the aging trends in this material via correlation with mechanical property changes. Initial CL rates from the isothermal experiments are the most sensitive and suitable approach for documenting material changes during the early part of thermal aging.
Constraint-based integration of planning and scheduling for space-based observatory management
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Smith, Steven F.
1994-01-01
Progress toward the development of effective, practical solutions to space-based observatory scheduling problems within the HSTS scheduling framework is reported. HSTS was developed and originally applied in the context of the Hubble Space Telescope (HST) short-term observation scheduling problem. The work was motivated by the limitations of the current solution and, more generally, by the insufficiency of classical planning and scheduling approaches in this problem context. HSTS has subsequently been used to develop improved heuristic solution techniques in related scheduling domains and is currently being applied to develop a scheduling tool for the upcoming Submillimeter Wave Astronomy Satellite (SWAS) mission. The salient architectural characteristics of HSTS and their relationship to previous scheduling and AI planning research are summarized. Then, some key problem decomposition techniques underlying the integrated planning and scheduling approach to the HST problem are described; research results indicate that these techniques provide leverage in solving space-based observatory scheduling problems. Finally, more recently developed constraint-posting scheduling procedures and the current SWAS application focus are summarized.
Weinschenk, Stefan; Hollmann, Markus W; Strowitzki, Thomas
2016-04-01
Pudendal nerve injection is used as a diagnostic procedure in the vulvar region and for therapeutic purposes, such as in vulvodynia. Here, we provide a new, easy-to-perform perineal injection technique. We analyzed 105 perineal injections of a local anesthetic (LA; procaine) into the pudendal nerve in 20 patients. A 0.4 × 40 mm needle was handled using a stop-and-go technique while monitoring the patient's discomfort. The needle was placed 1-2 cm laterally to the dorsal introitus. After aspiration, a small amount of LA was applied. After subcutaneous anesthesia, the needle was further advanced step-by-step. Thus, 5 ml could be applied with little discomfort to the patient. Anesthesia in the pudendal target region was the primary endpoint of our analysis. In 93 of 105 injections (88.6 %), complete perineal anesthesia was achieved with a single injection. 12 injections were repeated; these were excluded from the analysis. Severity of injection pain, on a visual analog scale (VAS) from 0 to 100, was 26.8 (95 % CI 7.2-46.4). Age (β = 0.33, p < 0.01) and the number of previous injections (β = 0.35, p < 0.01) inversely correlated with injection pain. Injection pain and anesthesia were not affected by BMI, the number and side of previous injections, or order of injection. A reversible vasovagal reaction was common, but no serious adverse effects occurred. Perineal pudendal injection is an effective and safe technique for anesthesia in diagnostic (vulva biopsy) and therapeutic indications (pudendal neuralgia), and for regional anesthesia in perinatal settings.
Time series modeling of human operator dynamics in manual control tasks
NASA Technical Reports Server (NTRS)
Biezad, D. J.; Schmidt, D. K.
1984-01-01
A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency responses of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that has not been previously modeled to demonstrate the strengths of the method.
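Identification of this kind is commonly phrased as fitting a discrete-time ARX model by least squares and then reading off the frequency response. The sketch below illustrates that generic recipe, not the paper's specific multi-channel algorithm; the model orders are placeholders.

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares ARX fit: y[k] = sum a_i y[k-i] + sum b_j u[k-j] + e[k]."""
    u, y = np.asarray(u, float), np.asarray(y, float)
    n = max(na, nb)
    rows = [np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])
            for k in range(n, len(y))]
    theta, *_ = np.linalg.lstsq(np.array(rows), y[n:], rcond=None)
    return theta[:na], theta[na:]

def frequency_response(a, b, w, dt):
    """Evaluate the identified transfer function B(z)/A(z) at z = exp(jw*dt)."""
    z = np.exp(1j * np.array(w) * dt)
    A = 1 - sum(ai * z ** -(i + 1) for i, ai in enumerate(a))
    B = sum(bj * z ** -(j + 1) for j, bj in enumerate(b))
    return B / A
```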
Transition and turbulence measurements in hypersonic flows
NASA Technical Reports Server (NTRS)
Owen, F. K.
1990-01-01
This paper reviews techniques for transitional- and turbulent-flow measurements and describes current research in support of turbulence modeling. Special attention is given to the potential of applying hot-wire and laser-velocimeter techniques to the measurement of turbulent fluctuations in hypersonic flow fields. The results of recent experiments conducted in two hypersonic wind tunnels are presented and compared with previous hot-wire turbulence measurements.
F. Mauro; Vicente Monleon; H. Temesgen
2015-01-01
Small area estimation (SAE) techniques have been successfully applied in forest inventories to provide reliable estimates for domains where the sample size is small (i.e. small areas). Previous studies have explored the use of either Area Level or Unit Level Empirical Best Linear Unbiased Predictors (EBLUPs) in a univariate framework, modeling each variable of interest...
The Crustal Structure of the Central Anatolia (Turkey) Using Receiver Functions
NASA Astrophysics Data System (ADS)
Yelkenci, S.; Benoit, M.; Kuleli, H.; Gurbuz, C.
2005-12-01
Central Anatolia lies in a transitional region between the extensional tectonics of western Anatolia and the complex transpressional tectonics of Eastern Anatolia, and has a complicated thermal and structural history. Few studies of the crustal structure of Anatolia have been performed; however, studies of the crustal structure of Eastern Anatolia showed that crustal thicknesses were thinner than previously thought. To further investigate the crustal structure of Central Anatolia, we present results from receiver function analysis using new data from broad-band instruments. The stations were equipped with 7 broadband three-component STS-2 and 13 short-period three-component S-13 sensors. These stations operated for a period of one and a half months between October and November 2002, and yielded data for ~40 high-quality receiver functions. Additionally, receiver functions were computed using data from permanent stations MALT, ISP, and ANTO. We applied the H-k stacking technique of Zhu and Kanamori (2000) to the receiver functions to obtain crustal thickness and Vp/Vs ratios. Furthermore, we applied a waveform modeling technique to investigate mid-crustal discontinuities previously imaged in the region. Our results compare well with refraction-based crustal thicknesses in overlapping areas.
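The H-k stack itself is a simple grid search over crustal thickness H and Vp/Vs ratio k. A minimal single-station sketch, assuming a radial receiver function rf sampled at times t, a ray parameter p, an assumed crustal Vp, and the standard phase weights (all of which are assumptions here):

```python
import numpy as np

def hk_stack(rf, t, p, vp=6.3,
             H=np.arange(25.0, 55.0, 0.25),
             k=np.arange(1.6, 2.0, 0.01),
             w=(0.7, 0.2, 0.1)):
    """Zhu & Kanamori (2000) style H-k stack of one radial receiver function.

    rf : amplitudes sampled at times t (s); p : ray parameter (s/km);
    vp : assumed crustal P velocity (km/s). Returns (thickness km, Vp/Vs).
    """
    def amp(tt):
        return np.interp(tt, t, rf)

    best, out = -np.inf, (None, None)
    for h in H:
        for kk in k:
            vs = vp / kk
            qs = np.sqrt(1.0 / vs**2 - p**2)
            qp = np.sqrt(1.0 / vp**2 - p**2)
            t_ps = h * (qs - qp)        # Moho Ps conversion
            t_ppps = h * (qs + qp)      # first crustal multiple
            t_ppss = 2.0 * h * qs       # second multiple (negative polarity)
            s = w[0] * amp(t_ps) + w[1] * amp(t_ppps) - w[2] * amp(t_ppss)
            if s > best:
                best, out = s, (h, kk)
    return out
```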
Magnetic Analysis Techniques Applied to Desert Varnish
NASA Technical Reports Server (NTRS)
Schmidgall, E. R.; Moskowitz, B. M.; Dahlberg, E. D.; Kuhlman, K. R.
2003-01-01
Desert varnish is a black or reddish coating commonly found on rock samples from arid regions. Typically, the coating is very thin, less than half a millimeter thick. Previous research has shown that the primary components of desert varnish are silicon oxide clay minerals (60%), manganese and iron oxides (20-30%), and trace amounts of other compounds [1]. Desert varnish is thought to originate when windborne particles containing iron and manganese oxides are deposited onto rock surfaces where manganese oxidizing bacteria concentrate the manganese and form the varnish [4,5]. If desert varnish is indeed biogenic, then the presence of desert varnish on rock surfaces could serve as a biomarker, indicating the presence of microorganisms. This idea has considerable appeal, especially for Martian exploration [6]. Magnetic analysis techniques have not been extensively applied to desert varnish. The only previous magnetic study reported that, based on room temperature demagnetization experiments, there were noticeable differences in magnetic properties between a sample of desert varnish and the substrate sandstone [7]. Based upon the results of the demagnetization experiments, the authors concluded that the primary magnetic component of desert varnish was either magnetite (Fe3O4) or maghemite (γ-Fe2O3).
Application of Six Sigma towards improving surgical outcomes.
Shukla, P J; Barreto, S G; Nadkarni, M S
2008-01-01
Six Sigma is a 'process excellence' tool targeting continuous improvement, achieved by providing a methodology for improving key steps of a process. It is ripe for application in health care, since almost all health care processes require a near-zero tolerance for mistakes. The aim of this study is to apply the Six Sigma methodology to a clinical surgical process and to assess the improvement (if any) in outcomes and patient care. The guiding principles of Six Sigma, namely DMAIC (Define, Measure, Analyze, Improve, Control), were used to analyze the impact of the double stapling technique (DST) on improving sphincter preservation rates for rectal cancer. The analysis using the Six Sigma methodology revealed a Sigma score of 2.10 in relation to successful sphincter preservation. This score demonstrates an improvement over the previous technique (73% versus the previous 54%). This study represents one of the first clinical applications of Six Sigma in the surgical field. By understanding, accepting, and applying the principles of Six Sigma, we have an opportunity to transfer a very successful management philosophy to facilitate the identification of key steps that can improve outcomes and ultimately patient safety and the quality of surgical care provided.
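As a check on the reported figure: the conventional short-term sigma level is the normal quantile of the process yield plus the customary 1.5-sigma long-term shift. Applied to the reported 73% sphincter-preservation rate it reproduces the quoted score; the 1.5 shift is the usual Six Sigma convention, not something stated in the abstract.

```python
from scipy.stats import norm

def sigma_level(yield_fraction, shift=1.5):
    # Short-term sigma level: z-score of the yield plus the 1.5-sigma shift.
    return norm.ppf(yield_fraction) + shift

print(f"{sigma_level(0.73):.2f}")  # ~2.11, consistent with the reported 2.10
print(f"{sigma_level(0.54):.2f}")  # ~1.60 for the earlier technique
```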
Flight test experience with high-alpha control system techniques on the F-14 airplane
NASA Technical Reports Server (NTRS)
Gera, J.; Wilson, R. J.; Enevoldson, E. K.; Nguyen, L. T.
1981-01-01
Improved handling qualities of fighter aircraft at high angles of attack can be provided by various stability and control augmentation techniques. NASA and the U.S. Navy are conducting a joint flight demonstration of these techniques on an F-14 airplane. This paper reports on the flight test experience with a newly designed lateral-directional control system which suppresses such high angle of attack handling qualities problems as roll reversal, wing rock, and directional divergence while simultaneously improving departure/spin resistance. The technique of integrating a piloted simulation into the flight program was used extensively in this program. This technique had not been applied previously to high angle of attack testing and required the development of a valid model to simulate the test airplane at extremely high angles of attack.
Improved importance sampling technique for efficient simulation of digital communication systems
NASA Technical Reports Server (NTRS)
Lu, Dingqing; Yao, Kung
1988-01-01
A new, improved importance sampling (IIS) approach to simulation is considered. Some basic concepts of IS are introduced, and detailed evaluations of simulation estimation variances for Monte Carlo (MC) and IS simulations are given. The general results obtained from these evaluations are applied to the specific previously known conventional importance sampling (CIS) technique and the new IIS technique. The derivation for a linear system with no signal random memory is considered in some detail. For the CIS technique, the optimum input scaling parameter is found, while for the IIS technique, the optimum translation parameter is found. The results are generalized to a linear system with memory and signals. Specific numerical and simulation results are given which show the advantages of CIS over MC and of IIS over CIS for simulations of digital communication systems.
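The variance advantage of IS over MC is easiest to see on a toy tail-probability problem. The sketch below contrasts plain MC with a mean-translated biasing density, which is the translation idea underlying IIS; the threshold and sample size are arbitrary, and the likelihood-ratio weight f/g is computed in closed form for Gaussian noise.

```python
import numpy as np

rng = np.random.default_rng(1)
t, n = 4.0, 100_000   # error threshold and number of trials; P(X > 4) ~ 3.17e-5

# Plain Monte Carlo: very few samples exceed t, so the estimate is noisy.
x = rng.normal(size=n)
p_mc = np.mean(x > t)

# Translated importance sampling: bias the noise density toward the error
# region (mean shifted to t) and reweight by the likelihood ratio
# f(y)/g(y) = exp(-t*y + t^2/2) for unit-variance Gaussians.
y = rng.normal(loc=t, size=n)
w = np.exp(-t * y + t**2 / 2.0)
p_is = np.mean((y > t) * w)

print(p_mc, p_is)
```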
Parser Combinators: a Practical Application for Generating Parsers for NMR Data
Fenwick, Matthew; Weatherby, Gerard; Ellis, Heidi JC; Gryk, Michael R.
2013-01-01
Nuclear Magnetic Resonance (NMR) spectroscopy is a technique for acquiring protein data at atomic resolution and determining the three-dimensional structure of large protein molecules. A typical structure determination process results in the deposition of large data sets to the BMRB (Bio-Magnetic Resonance Data Bank). These data are stored and shared in a file format called NMR-Star. This format is syntactically and semantically complex, making it challenging to parse. Nevertheless, parsing these files is crucial to applying the vast amounts of biological information stored in NMR-Star files, allowing researchers to harness the results of previous studies to direct and validate future work. One powerful approach for parsing files is to apply a Backus-Naur Form (BNF) grammar, which is a high-level model of a file format. Translation of the grammatical model to an executable parser may be accomplished automatically. This paper shows how we applied a model BNF grammar of the NMR-Star format to create a free, open-source parser, using a method that originated in the functional programming world known as "parser combinators". This paper demonstrates the effectiveness of a principled approach to file specification and parsing. It also builds upon our previous work [1], in that 1) it applies concepts from Functional Programming (which is relevant even though the implementation language, Java, is more mainstream than Functional Programming), and 2) all work and accomplishments from this project will be made available under standard open source licenses to provide the community with the opportunity to learn from our techniques and methods.
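The essence of parser combinators is that small parsing functions are composed with higher-order functions such as seq, alt, and many, so the parser's shape mirrors the grammar's shape. A minimal Python sketch on a toy, STAR-flavored tag/value syntax; this is not the real NMR-Star grammar and not the authors' Java implementation, just the composition idea.

```python
import re

def token(pattern):
    rx = re.compile(pattern)
    def parse(s, i):
        m = rx.match(s, i)
        return (m.group(0), m.end()) if m else None
    return parse

def seq(*parsers):
    def parse(s, i):
        values = []
        for p in parsers:
            r = p(s, i)
            if r is None:
                return None        # fail without consuming input
            v, i = r
            values.append(v)
        return values, i
    return parse

def alt(*parsers):
    def parse(s, i):
        for p in parsers:
            r = p(s, i)
            if r is not None:
                return r
        return None
    return parse

def many(p):
    def parse(s, i):
        values = []
        while (r := p(s, i)) is not None:
            v, i = r
            values.append(v)
        return values, i
    return parse

# A toy, STAR-flavored slice: a save frame holding tag/value pairs.
ws = token(r"\s*")
tag = token(r"_[A-Za-z0-9_.]+")
value = token(r"\S+")
pair = seq(ws, tag, ws, value)
frame = seq(token(r"save_\S+"), many(pair), ws, token(r"save_"))

print(frame("save_entry _Entry.ID 1 _Entry.Title demo save_", 0))
```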
Maeda, Kiminori; Storey, Jonathan G; Liddell, Paul A; Gust, Devens; Hore, P J; Wedge, C J; Timmel, Christiane R
2015-02-07
We present a study of a carotenoid-porphyrin-fullerene triad previously shown to function as a chemical compass: the photogenerated carotenoid-fullerene radical pair recombines at a rate sensitive to the orientation of an applied magnetic field. To characterize the system we develop a time-resolved Low-Frequency Reaction Yield Detected Magnetic Resonance (tr-LF-RYDMR) technique; the effect of varying the relative orientation of applied static and 36 MHz oscillating magnetic fields is shown to be strongly dependent on the strength of the oscillating magnetic field. RYDMR is a diagnostic test for involvement of the radical pair mechanism in the magnetic field sensitivity of reaction rates or yields, and has previously been applied in animal behavioural experiments to verify the involvement of radical-pair-based intermediates in the magnetic compass sense of migratory birds. The spectroscopic selection rules governing RYDMR are well understood at microwave frequencies for which the so-called 'high-field approximation' is valid, but at lower frequencies different models are required. For example, the breakdown of the rotating frame approximation has recently been investigated, but less attention has so far been given to orientation effects. Here we gain physical insights into the interplay of the different magnetic interactions affecting low-frequency RYDMR experiments performed in the challenging regime in which static and oscillating applied magnetic fields as well as internal electron-nuclear hyperfine interactions are of comparable magnitude. Our observations aid the interpretation of existing RYDMR-based animal behavioural studies and will inform future applications of the technique to verify and characterize further the biological receptors involved in avian magnetoreception.
NASA Technical Reports Server (NTRS)
1974-01-01
This report presents the derivation, description, and operating instructions for a computer program (TEKVAL) which measures the economic value of advanced technology features applied to long range commercial passenger aircraft. The program consists of three modules: an airplane sizing routine, a direct operating cost routine, and an airline return-on-investment routine. These modules are linked such that they may be operated sequentially or individually, with one routine generating the input for the next or with the option of externally specifying the input for either of the economic routines. A very simple airplane sizing technique was previously developed, based on the Breguet range equation. For this program, that sizing technique has been greatly expanded and combined with the formerly separate DOC and ROI programs to produce TEKVAL.
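The Breguet range equation R = (V/c)(L/D) ln(Wi/Wf) inverts directly to the fuel fraction a mission requires, which is the core of such a simple sizing loop. A sketch with illustrative numbers only, not the values used by TEKVAL:

```python
import math

def breguet_fuel_fraction(range_km, velocity_kmh, sfc_per_h, lift_to_drag):
    """Weight ratio Wi/Wf from the Breguet range equation
    R = (V / c) * (L/D) * ln(Wi / Wf)."""
    return math.exp(range_km * sfc_per_h / (velocity_kmh * lift_to_drag))

# Illustrative inputs: 9000 km range, ~900 km/h cruise, SFC 0.6 1/h, L/D 17.
wi_over_wf = breguet_fuel_fraction(9000, 900, 0.6, 17.0)
print(f"Wi/Wf = {wi_over_wf:.3f}, fuel fraction = {1 - 1/wi_over_wf:.1%}")
```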
Soft magnetic tweezers: a proof of principle.
Mosconi, Francesco; Allemand, Jean François; Croquette, Vincent
2011-03-01
We present here the principle of soft magnetic tweezers which improve the traditional magnetic tweezers allowing the simultaneous application and measurement of an arbitrary torque to a deoxyribonucleic acid (DNA) molecule. They take advantage of a nonlinear coupling regime that appears when a fast rotating magnetic field is applied to a superparamagnetic bead immersed in a viscous fluid. In this work, we present the development of the technique and we compare it with other techniques capable of measuring the torque applied to the DNA molecule. In this proof of principle, we use standard electromagnets to achieve our experiments. Despite technical difficulties related to the present implementation of these electromagnets, the agreement of measurements with previous experiments is remarkable. Finally, we propose a simple way to modify the experimental design of electromagnets that should bring the performances of the device to a competitive level.
An Alternate Method for Estimating Dynamic Height from XBT Profiles Using Empirical Vertical Modes
NASA Technical Reports Server (NTRS)
Lagerloef, Gary S. E.
1994-01-01
A technique is presented that applies modal decomposition to estimate dynamic height (0-450 db) from Expendable BathyThermograph (XBT) temperature profiles. Salinity-Temperature-Depth (STD) data are used to establish empirical relationships between vertically integrated temperature profiles and empirical dynamic height modes. These are then applied to XBT data to estimate dynamic height. A standard error of 0.028 dynamic meters is obtained for the waters of the Gulf of Alaska, an ocean region subject to substantial freshwater buoyancy forcing and with a T-S relationship that has considerable scatter. The residual error is a substantial improvement relative to the conventional T-S correlation technique when applied to this region. Systematic errors between estimated and true dynamic height were evaluated. The 20-year-long time series at Ocean Station P (50 deg N, 145 deg W) indicated weak variations in the error interannually, but not seasonally. There were no evident systematic alongshore variations in the error in the ocean boundary current regime near the perimeter of the Alaska gyre. The results prove satisfactory for the purpose of this work, which is to generate dynamic height from XBT data for co-analysis with satellite altimeter data, given that the altimeter height precision is likewise on the order of 2-3 cm. While the technique has not been applied to other ocean regions where the T-S relation has less scatter, it is suggested that it could provide some improvement over previously applied methods as well.
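One plausible reading of the procedure is: fit empirical (EOF-like) modes on STD training casts, regress dynamic height on the mode amplitudes, then apply the regression to XBT-derived temperature. The sketch below follows that reading; the actual modal decomposition in the paper may differ in detail.

```python
import numpy as np

def fit_empirical_modes(T_std, dynamic_height, n_modes=3):
    """From STD training data: regress dynamic height onto the leading
    empirical modes of vertically integrated temperature profiles."""
    T = np.asarray(T_std, float)            # (n_casts, n_depths)
    mean = T.mean(axis=0)
    _, _, Vt = np.linalg.svd(T - mean, full_matrices=False)
    scores = (T - mean) @ Vt[:n_modes].T    # mode amplitudes per cast
    G = np.column_stack([np.ones(len(T)), scores])
    coef, *_ = np.linalg.lstsq(G, dynamic_height, rcond=None)
    return mean, Vt[:n_modes], coef

def estimate_dynamic_height(T_xbt, mean, modes, coef):
    # Project XBT-derived integrated temperature onto the trained modes.
    scores = (np.asarray(T_xbt, float) - mean) @ modes.T
    return np.column_stack([np.ones(len(scores)), scores]) @ coef
```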
A Pragmatic Smoothing Method for Improving the Quality of the Results in Atomic Spectroscopy
NASA Astrophysics Data System (ADS)
Bennun, Leonardo
2017-07-01
A new smoothing method is presented that improves the identification and quantification of spectral functions, based on prior knowledge of the signals that are expected to be quantified. These signals are used as weighting coefficients in the smoothing algorithm. This smoothing method was conceived to be applied in atomic and nuclear spectroscopies, preferably in techniques where net counts are proportional to acquisition time, such as particle induced X-ray emission (PIXE) and other X-ray fluorescence spectroscopic methods. This algorithm, when properly applied, does not distort the form or the intensity of the signal, so it is well suited for all kinds of spectroscopic techniques. The method is extremely effective at reducing high-frequency noise in the signal, much more so than a single rectangular smooth of the same width. As with all smoothing techniques, the proposed method improves the precision of the results, but in this case we also found a systematic improvement in the accuracy of the results. The improvement in the quality of the results when the method is applied to real experimental data remains to be evaluated; we expect better characterization of the net-area quantification of the peaks, and smaller detection and quantification limits. We have applied this method to signals that obey Poisson statistics, but with the same ideas and criteria it could be applied to time series. In the general case, when this algorithm is applied to experimental results, the sought characteristic functions required for this weighted smoothing method should be obtained from a system with strong stability. If the sought signals are not perfectly clean, this method should be applied carefully.
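Taking the abstract at its word, the expected signal shape serves as the weighting kernel of the smoother. A minimal sketch of that idea, with a Gaussian standing in for the expected peak profile and a Poisson-noise trace as input; the exact weighting scheme of the paper may be more elaborate.

```python
import numpy as np

def signal_weighted_smooth(y, peak_shape):
    """Smooth a spectrum using the expected signal shape as the weight kernel,
    so features matching the sought peak are preserved while high-frequency
    noise is suppressed."""
    kernel = np.asarray(peak_shape, dtype=float)
    kernel /= kernel.sum()                  # unit area: no intensity distortion
    return np.convolve(y, kernel, mode="same")

# Example: a Gaussian profile as the assumed detector response.
x = np.arange(-5, 6)
gauss = np.exp(-0.5 * (x / 2.0) ** 2)
noisy = np.random.default_rng(2).poisson(50, size=200).astype(float)
smooth = signal_weighted_smooth(noisy, gauss)
```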
Development of a model for predicting NASA/MSFC program success
NASA Technical Reports Server (NTRS)
Riggs, Jeffrey; Miller, Tracy; Finley, Rosemary
1990-01-01
Research conducted during the execution of a previous contract (NAS8-36955/0039) firmly established the feasibility of developing a tool to aid decision makers in predicting the potential success of proposed projects. The final report from that investigation contains an outline of the method to be applied in developing this Project Success Predictor Model. As a follow-on to the previous study, this report describes in detail the development of this model and includes full explanation of the data-gathering techniques used to poll expert opinion. The report includes the presentation of the model code itself.
Detecting perceptual groupings in textures by continuity considerations
NASA Technical Reports Server (NTRS)
Greene, Richard J.
1990-01-01
A generalization is presented for the second derivative of a Gaussian (D²G) operator to apply to problems of perceptual organization involving textures. Extensions to other problems of perceptual organization are evident and a new research direction can be established. The technique presented is theoretically pleasing since it has the potential of unifying the entire area of image segmentation under the mathematical notion of continuity and presents a single algorithm to form perceptual groupings where many algorithms existed previously. The eventual impact on both the approach and technique of image processing segmentation operations could be significant.
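A hedged sketch of the general idea: apply a Laplacian-of-Gaussian (D²G) operator to a local texture-energy image rather than to raw intensity, so that zero crossings mark discontinuities in texture instead of brightness. The specific energy measure below is an assumption for illustration, not the paper's operator.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

def texture_boundary_map(image, sigma_texture=8.0, sigma_edge=4.0):
    """D2G-style texture segmentation sketch: build a smoothed local
    texture-energy image, then take its Laplacian of Gaussian; zero
    crossings of the result delineate texture regions."""
    # Local high-frequency energy as a crude texture measure (assumed).
    energy = np.abs(image - gaussian_filter(image, 2.0))
    energy = gaussian_filter(energy, sigma_texture)
    return gaussian_laplace(energy, sigma_edge)
```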
History Matters: Incremental Ontology Reasoning Using Modules
NASA Astrophysics Data System (ADS)
Cuenca Grau, Bernardo; Halaschek-Wiener, Christian; Kazakov, Yevgeny
The development of ontologies involves continuous but relatively small modifications. Existing ontology reasoners, however, do not take advantage of the similarities between different versions of an ontology. In this paper, we propose a technique for incremental reasoning—that is, reasoning that reuses information obtained from previous versions of an ontology—based on the notion of a module. Our technique does not depend on a particular reasoning calculus and thus can be used in combination with any reasoner. We have applied our results to incremental classification of OWL DL ontologies and found significant improvement over regular classification time on a set of real-world ontologies.
Applications of asynoptic space-time Fourier transform methods to scanning satellite measurements
NASA Technical Reports Server (NTRS)
Lait, Leslie R.; Stanford, John L.
1988-01-01
A method proposed by Salby (1982) for computing the zonal space-time Fourier transform of asynoptically acquired satellite data is discussed. The method and its relationship to other techniques are briefly described, and possible problems in applying it to real data are outlined. Examples of results obtained using this technique are given which demonstrate its sensitivity to small-amplitude signals. A number of waves are found which have previously been observed as well as two not heretofore reported. A possible extension of the method which could increase temporal and longitudinal resolution is described.
1987-03-31
… processors. The symmetry-breaking algorithms give efficient ways to convert probabilistic algorithms to deterministic algorithms. Some of the … techniques have been applied to construct several efficient linear-processor algorithms for graph problems, including an O(lg* n)-time algorithm for (Δ + 1)-coloring. … On n-node graphs, the algorithm works in O(log² n) time using only n processors, in contrast to the previous best algorithm, which used about n³.
Space vehicle engine and heat shield environment review. Volume 1: Engineering analysis
NASA Technical Reports Server (NTRS)
Mcanelly, W. B.; Young, C. T. K.
1973-01-01
Methods for predicting the base heating characteristics of a multiple rocket engine installation are discussed. The environmental data are applied to the design of an adequate protection system for the engine components. The methods for predicting the base region thermal environment are categorized as: (1) scale model testing, (2) extrapolation of previous and related flight test results, and (3) semiempirical analytical techniques.
A minimal multiconfigurational technique.
Fernández Rico, J; Paniagua, M; GarcíA De La Vega, J M; Fernández-Alonso, J I; Fantucci, P
1986-04-01
A direct minimization method previously presented by the authors is applied here to biconfigurational wave functions. A very moderate increase in the time per iteration with respect to the one-determinant calculation and good convergence properties have been found. Thus, qualitatively correct studies on singlet systems with strong biradical character can be performed at a cost similar to that required by Hartree-Fock calculations.
Multiobjective Resource-Constrained Project Scheduling with a Time-Varying Number of Tasks
Abello, Manuel Blanco
2014-01-01
In resource-constrained project scheduling (RCPS) problems, ongoing tasks are restricted to utilizing a fixed number of resources. This paper investigates a dynamic version of the RCPS problem where the number of tasks varies in time. Our previous work investigated a technique called mapping of task IDs for centroid-based approach with random immigrants (McBAR) that was used to solve the dynamic problem. However, the solution-searching ability of McBAR was investigated over only a few instances of the dynamic problem. As a consequence, only a small number of characteristics of McBAR, under the dynamics of the RCPS problem, were found. Further, only a few techniques were compared to McBAR with respect to its solution-searching ability for solving the dynamic problem. In this paper, (a) the significance of the subalgorithms of McBAR is investigated by comparing McBAR to several other techniques; and (b) the scope of investigation in the previous work is extended. In particular, McBAR is compared to a technique called Estimation of Distribution Algorithm (EDA). As with McBAR, EDA is applied to solve the dynamic problem, an application that is unique in the literature.
Probing transmembrane mechanical coupling and cytomechanics using magnetic twisting cytometry
NASA Technical Reports Server (NTRS)
Wang, N.; Ingber, D. E.
1995-01-01
We recently developed a magnetic twisting cytometry technique that allows us to apply controlled mechanical stresses to specific cell surface receptors using ligand-coated ferromagnetic microbeads and to simultaneously measure the mechanical response in living cells. Using this technique, we have previously shown the following: (i) beta 1 integrin receptors mediate mechanical force transfer across the cell surface and to the cytoskeleton, whereas other transmembrane receptors (e.g., scavenger receptors) do not; (ii) cytoskeletal stiffness increases in direct proportion to the level of stress applied to integrins; and (iii) the slope of this linear stiffening response differs depending on the shape of the cell. We now show that different integrins (beta 1, alpha V beta 3, alpha V, alpha 5, alpha 2) and other transmembrane receptors (scavenger receptor, platelet endothelial cell adhesion molecule) differ in their ability to mediate force transfer across the cell surface. In addition, the linear stiffening behavior previously observed in endothelial cells was found to be shared by other cell types. Finally, we demonstrate that dynamic changes in cell shape that occur during both cell spreading and retraction are accompanied by coordinate changes in cytoskeletal stiffness. Taken together, these results suggest that the magnetic twisting cytometry technique may be a powerful and versatile tool for studies analyzing the molecular basis of transmembrane mechanical coupling to the cytoskeleton as well as dynamic relations between changes in cytoskeletal structure and alterations in cell form and function.
Calahorra, Yonatan; Smith, Michael; Datta, Anuja; Benisty, Hadas; Kar-Narayan, Sohini
2017-12-14
There has been tremendous interest in piezoelectricity at the nanoscale, for example in nanowires and nanofibers where piezoelectric properties may be enhanced or controllably tuned, thus necessitating robust characterization techniques of piezoelectric response in nanomaterials. Piezo-response force microscopy (PFM) is a well-established scanning probe technique routinely used to image piezoelectric/ferroelectric domains in thin films, however, its applicability to nanoscale objects is limited due to the requirement for physical contact with an atomic force microscope (AFM) tip that may cause dislocation or damage, particularly to soft materials, during scanning. Here we report a non-destructive PFM (ND-PFM) technique wherein the tip is oscillated into "discontinuous" contact during scanning, while applying an AC bias between tip and sample and extracting the piezoelectric response for each contact point by monitoring the resulting localized deformation at the AC frequency. ND-PFM is successfully applied to soft polymeric (poly-l-lactic acid) nanowires, as well as hard ceramic (barium zirconate titanate-barium calcium titanate) nanowires, both previously inaccessible by conventional PFM. Our ND-PFM technique is versatile and compatible with commercial AFMs, and can be used to correlate piezoelectric properties of nanomaterials with their microstructural features thus overcoming key characterisation challenges in the field.
Márquez, Cristina; López, M Isabel; Ruisánchez, Itziar; Callao, M Pilar
2016-12-01
Two data fusion strategies (high- and mid-level) combined with a multivariate classification approach (Soft Independent Modelling of Class Analogy, SIMCA) have been applied to take advantage of the synergistic effect of the information obtained from two spectroscopic techniques: FT-Raman and NIR. Mid-level data fusion consists of merging some of the previously selected variables from the spectra obtained from each spectroscopic technique and then applying the classification technique. High-level data fusion combines the SIMCA classification results obtained individually from each spectroscopic technique. Of the possible ways to make the necessary combinations, we decided to use fuzzy aggregation connective operators. As a case study, we considered the possible adulteration of hazelnut paste with almond. Using the two-class SIMCA approach, class 1 consisted of unadulterated hazelnut samples and class 2 of samples adulterated with almond. Model performance was also studied with samples adulterated with chickpea. The results show that data fusion is an effective strategy, since the performance parameters are better than the individual ones: sensitivity and specificity values between 75% and 100% for the individual techniques, and between 96-100% and 88-100% for the mid- and high-level data fusion strategies, respectively.
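SIMCA reduces to one principal-component model per class, with class membership judged by the residual distance to each model. A minimal two-class sketch; the component counts, scaling, and threshold are placeholders, and the high-level fusion of two per-technique distances (e.g. with a fuzzy aggregation operator such as min or product) is only indicated in comments.

```python
import numpy as np
from sklearn.decomposition import PCA

class SimcaClass:
    """One-class PCA model; distance to the model is the reconstruction residual."""
    def __init__(self, n_components=3):
        self.pca = PCA(n_components=n_components)

    def fit(self, X):
        self.mean_ = X.mean(axis=0)
        Xc = X - self.mean_
        self.pca.fit(Xc)
        resid = Xc - self.pca.inverse_transform(self.pca.transform(Xc))
        self.scale_ = np.sqrt((resid ** 2).sum(axis=1)).mean()  # typical residual
        return self

    def distance(self, X):
        Xc = X - self.mean_
        resid = Xc - self.pca.inverse_transform(self.pca.transform(Xc))
        return np.sqrt((resid ** 2).sum(axis=1)) / self.scale_

# Mid-level fusion: concatenate the selected NIR and Raman variables into one
# matrix before fitting; high-level fusion would instead aggregate the two
# per-technique distances with a fuzzy connective operator.
def classify(x_fused, model_pure, model_adulterated, threshold=3.0):
    d1 = model_pure.distance(x_fused)
    d2 = model_adulterated.distance(x_fused)
    return np.where((d1 < threshold) & (d1 < d2), "hazelnut", "adulterated")
```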
Kataoka, Tomoya; Hinata, Hirofumi; Kako, Shin'ichiro
2012-09-01
We have developed a technique for detecting the pixels of colored macro plastic debris (plastic pixels) using photographs taken by a webcam installed on Sodenohama beach, Tobishima Island, Japan. The technique involves generating color references using a uniform color space (CIELUV) to detect plastic pixels and removing misdetected pixels by applying a composite image method. This technique demonstrated superior performance in terms of detecting plastic pixels of various colors compared to the previous method which used the lightness values in the CIELUV color space. We also obtained a 10-month time series of the quantity of plastic debris by combining a projective transformation with this technique. By sequential monitoring of plastic debris quantity using webcams, it is possible to clean up beaches systematically, to clarify the transportation processes of plastic debris in oceans and coastal seas and to estimate accumulation rates on beaches.
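In outline, the detection step thresholds the CIELUV color difference between each pixel and a set of debris color references, and the composite step suppresses transient misdetections (waves, people) across frames. A simplified sketch of both steps, with the tolerance and voting fraction as assumed parameters:

```python
import numpy as np
from skimage.color import rgb2luv

def detect_plastic_pixels(image_rgb, references_luv, tol=15.0):
    """Flag pixels whose CIELUV coordinates fall within a color-difference
    tolerance of any debris color reference (a simplified reading of the
    paper's color-reference idea)."""
    luv = rgb2luv(image_rgb)                 # H x W x 3 in CIELUV
    mask = np.zeros(luv.shape[:2], dtype=bool)
    for ref in references_luv:
        de = np.linalg.norm(luv - np.asarray(ref), axis=2)  # Euclidean dE
        mask |= de < tol
    return mask

def composite(masks, min_fraction=0.8):
    # Keep only pixels flagged in most frames; transients fall away.
    stack = np.stack(masks).mean(axis=0)
    return stack >= min_fraction
```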
Evaluation of ultrasonics and optimized radiography for 2219-T87 aluminum weldments
NASA Technical Reports Server (NTRS)
Clotfelter, W. N.; Hoop, J. M.; Duren, P. C.
1975-01-01
Ultrasonic studies are described which are specifically directed toward the quantitative measurement of randomly located defects previously found in aluminum welds with radiography or with dye penetrants. Experimental radiographic studies were also made to optimize techniques for welds of the thickness range to be used in fabricating the External Tank of the Space Shuttle. Conventional and innovative ultrasonic techniques were applied to the flaw size measurement problem. Advantages and disadvantages of each method are discussed. Flaw size data obtained ultrasonically were compared to radiographic data and to real flaw sizes determined by destructive measurements. Considerable success was achieved with pulse echo techniques and with 'pitch and catch' techniques. The radiographic work described demonstrates that careful selection of film exposure parameters for a particular application must be made to obtain optimized flaw detectability. Thus, film exposure techniques can be improved even though radiography is an old weld inspection method.
Analysis of combustion spectra containing organ pipe tone by cepstral techniques
NASA Technical Reports Server (NTRS)
Miles, J. H.; Wasserbauer, C. A.
1982-01-01
Signal reinforcements and cancellations due to standing waves may distort constant bandwidth combustion spectra. Cepstral techniques previously applied to the ground reflection echo problem are used to obtain smooth broadband data and information on combustion noise propagation. Internal fluctuating pressure measurements made using a J47 combustor attached to a 6.44 m long duct are analyzed. Measurements made with Jet A and hydrogen fuels are compared. The acoustic power levels inferred from the measurements are presented for a range of low heat release rate operating conditions near atmospheric pressure. For these cases, the variation with operating condition of the overall acoustic broadband power level for both hydrogen and Jet A fuels is consistent with previous results showing it was proportional to the square of the heat release rate. However, the overall acoustic broadband power level generally is greater for hydrogen than for Jet A.
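Echo removal by cepstral processing amounts to short-pass liftering: transform the log spectrum to the quefrency domain, zero the region carrying the standing-wave ripple, and transform back. A minimal sketch of that generic operation (the cutoff is an assumed parameter, not a value from the paper):

```python
import numpy as np

def cepstral_smooth(psd_onesided, lifter_cut):
    """Short-pass liftering of the power cepstrum to recover the smooth
    broadband spectrum underneath standing-wave (echo) ripple."""
    log_s = np.log(psd_onesided)
    ceps = np.fft.irfft(log_s)          # power cepstrum (quefrency domain)
    n = len(ceps)
    lifter = np.zeros(n)
    lifter[:lifter_cut] = 1.0           # keep low quefrencies...
    lifter[n - lifter_cut:] = 1.0       # ...including the mirrored half
    smooth_log = np.fft.rfft(ceps * lifter).real
    return np.exp(smooth_log)
```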
Nonlinear tuning techniques of plasmonic nano-filters
NASA Astrophysics Data System (ADS)
Kotb, Rehab; Ismail, Yehea; Swillam, Mohamed A.
2015-02-01
In this paper, a fitting model for the propagation constant and losses of a Metal-Insulator-Metal (MIM) plasmonic waveguide is proposed. Using this model, the modal characteristics of the MIM plasmonic waveguide can be obtained directly, without solving Maxwell's equations from scratch. As a consequence, the simulation time and computational cost needed to predict the response of different plasmonic structures can be reduced significantly. This fitting model is used to develop a closed-form model that describes the behavior of a plasmonic nano-filter. Easy and accurate mechanisms to tune the filter are investigated and analyzed. The filter tunability is based on using a nonlinear dielectric material with a Pockels or Kerr effect, and is achieved by applying an external voltage or by controlling the input light intensity. The proposed nano-filter supports both red and blue shifts in the resonance response, depending on the type of nonlinear material used. A new approach to controlling the input light intensity by applying an external voltage to a previous stage is investigated: the tunability of a stage that has a Kerr material can thus be achieved by applying a voltage to a previous stage that has a Pockels material. Using this method, the Kerr effect can be exploited electrically instead of by varying the intensity of the input source. This technique enhances the suitability of the device for on-chip integration. Tuning of the resonance wavelength with high accuracy, minimum insertion loss, and high quality factor is obtained using these approaches.
Umakanthan, Ramanan; Haglund, Nicholas A; Stulak, John M; Joyce, Lyle D; Ahmad, Rashid; Keebler, Mary E; Maltais, Simon
2013-01-01
Advances in mechanical circulatory support have been critical in bridging patients awaiting heart transplantation. In addition, improvements in device durability have enabled left ventricular assist device therapy to be applied as destination therapy in those not felt to be transplant candidates. Because of the increasing complexity of patients, there continues to be a need for alternative strategies for device implantation to bridge high-risk patients awaiting heart transplantation in whom the risks of numerous previous sternotomies may be prohibitive. We present a unique technique for placement of the HeartWare ventricular assist device via left anterior thoracotomy to the descending aorta in a patient awaiting heart transplantation with a history of multiple previous sternotomies.
Preparing Colorful Astronomical Images II
NASA Astrophysics Data System (ADS)
Levay, Z. G.; Frattare, L. M.
2002-12-01
We present additional techniques for using mainstream graphics software (Adobe Photoshop and Illustrator) to produce composite color images and illustrations from astronomical data. These techniques have been used on numerous images from the Hubble Space Telescope to produce photographic, print and web-based products for news, education and public presentation as well as illustrations for technical publication. We expand on a previous paper to present more detail and additional techniques, taking advantage of new or improved features available in the latest software versions. While Photoshop is not intended for quantitative analysis of full dynamic range data (as are IRAF or IDL, for example), we have had much success applying Photoshop's numerous, versatile tools to work with scaled images, masks, text and graphics in multiple semi-transparent layers and channels.
Chemodynamical Clustering Applied to APOGEE Data: Rediscovering Globular Clusters
NASA Astrophysics Data System (ADS)
Chen, Boquan; D’Onghia, Elena; Pardy, Stephen A.; Pasquali, Anna; Bertelli Motta, Clio; Hanlon, Bret; Grebel, Eva K.
2018-06-01
We have developed a novel technique based on a clustering algorithm that searches for kinematically and chemically clustered stars in the APOGEE DR12 Cannon data. As compared to classical chemical tagging, the kinematic information included in our methodology allows us to identify stars that are members of known globular clusters with greater confidence. We apply our algorithm to the entire APOGEE catalog of 150,615 stars whose chemical abundances are derived by the Cannon. Our methodology found anticorrelations between the elements Al and Mg, Na and O, and C and N previously identified in the optical spectra in globular clusters, even though we omit these elements in our algorithm. Our algorithm identifies globular clusters without a priori knowledge of their locations in the sky. Thus, not only does this technique promise to discover new globular clusters, but it also allows us to identify candidate streams of kinematically and chemically clustered stars in the Milky Way.
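The paper's clustering algorithm is its own; as a generic illustration of chemo-kinematic clustering, one can scale abundance ratios and velocities to a common footing and run a density-based clusterer such as DBSCAN. The sketch below is a stand-in with placeholder parameters, not the authors' method.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

def chemo_kinematic_clusters(abundances, velocities, eps=0.5, min_samples=10):
    """Cluster stars in a joint chemical-kinematic space.

    abundances : (n_stars, n_elements) of [X/Fe]-style ratios
    velocities : (n_stars, 3) space velocities
    Feature scaling puts chemistry and kinematics on a common footing.
    """
    X = np.hstack([abundances, velocities])
    Xs = StandardScaler().fit_transform(X)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(Xs)
    return labels  # -1 marks field stars; other labels are candidate clusters
```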
Optical Sensing of the Fatigue Damage State of CFRP under Realistic Aeronautical Load Sequences
Zuluaga-Ramírez, Pablo; Arconada, Álvaro; Frövel, Malte; Belenguer, Tomás; Salazar, Félix
2015-01-01
We present an optical sensing methodology to estimate the fatigue damage state of structures made of carbon fiber reinforced polymer (CFRP) by measuring variations in surface roughness. Variable amplitude loads (VAL), which represent realistic loads during aeronautical missions of fighter aircraft (FALSTAFF), were applied to coupons until failure. Stiffness degradation and surface roughness variations were measured during the life of the coupons, yielding a Pearson correlation of 0.75 between both variables. The data were compared with a previous study for Constant Amplitude Load (CAL), obtaining similar results. The conclusions suggest that surface roughness measured in strategic zones is a useful technique for structural health monitoring of CFRP structures, and that it is independent of the type of load applied. Surface roughness can be measured in the field by optical techniques such as speckle, confocal profilometers and interferometry, among others.
Extrapolation techniques applied to matrix methods in neutron diffusion problems
NASA Technical Reports Server (NTRS)
Mccready, Robert R
1956-01-01
A general matrix method is developed for the solution of characteristic-value problems of the type arising in many physical applications. The scheme employed is essentially that of Gauss and Seidel, with appropriate modifications to make it applicable to characteristic-value problems. An iterative procedure produces a sequence of estimates to the answer, and extrapolation techniques, based upon the previous behavior of iterants, are utilized to speed convergence. Theoretically sound limits are placed on the magnitude of the extrapolation that may be tolerated. The matrix method is applied to the problem of finding criticality and neutron fluxes in a nuclear reactor with control rods. The two-dimensional finite-difference approximation to the two-group neutron-diffusion equations is treated. Results for this example are indicated.
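As a concrete illustration of the extrapolation idea described above, the sketch below applies Aitken's delta-squared acceleration to the eigenvalue estimates of a plain power iteration. This is a deliberate simplification: the report's actual scheme is a modified Gauss-Seidel sweep with safeguarded extrapolation limits, and the matrix here is a stand-in.

```python
import numpy as np

def extrapolated_power_iteration(A, tol=1e-10, max_iter=500):
    """Power iteration for the dominant eigenpair, with Aitken
    delta-squared extrapolation applied to successive eigenvalue
    estimates to speed convergence (illustrative; not the report's
    exact Gauss-Seidel scheme)."""
    x = np.ones(A.shape[0])
    estimates = []
    for _ in range(max_iter):
        y = A @ x
        lam = np.linalg.norm(y) / np.linalg.norm(x)
        x = y / np.linalg.norm(y)
        estimates.append(lam)
        if len(estimates) >= 3:
            l0, l1, l2 = estimates[-3:]
            denom = l2 - 2.0 * l1 + l0
            if abs(denom) > 1e-15:
                accel = l2 - (l2 - l1) ** 2 / denom  # extrapolated estimate
                if abs(accel - lam) < tol:
                    return accel, x
    return estimates[-1], x

# Example: dominant eigenvalue of a small test matrix (~4.618).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
lam, vec = extrapolated_power_iteration(A)
print(lam)
```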
Characterization of agricultural land using singular value decomposition
NASA Astrophysics Data System (ADS)
Herries, Graham M.; Danaher, Sean; Selige, Thomas
1995-11-01
A method is defined and tested for the characterization of agricultural land from multi-spectral imagery, based on singular value decomposition (SVD) and key vector analysis. The SVD technique, which bears a close resemblance to multivariate statistical techniques, has previously been applied successfully to problems of signal extraction for marine data and forestry species classification. In this study the SVD technique is used as a classifier for agricultural regions, using airborne Daedalus ATM data with 1 m resolution. The specific region chosen is an experimental research farm in Bavaria, Germany. This farm grows a large number of crops within a very small region and hence is not amenable to existing techniques. There are a number of other significant factors that render existing techniques, such as the maximum likelihood algorithm, less suitable for this area. These include very dynamic terrain and a tessellated pattern of soil differences, which together cause large variations in the growth characteristics of the crops. The SVD technique is applied to this data set using a multi-stage classification approach, removing unwanted land-cover classes one step at a time. Typical classification accuracies for SVD are of the order of 85-100%. Preliminary results indicate that it is a fast and efficient classifier with the ability to differentiate between crop types such as wheat, rye, potatoes and clover. The results of characterizing 3 sub-classes of Winter Wheat are also shown.
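One way to read the SVD/key-vector approach is as a subspace classifier: each class gets a low-rank spectral subspace spanned by its leading singular vectors, and a pixel is assigned to the class whose subspace reconstructs it best. The sketch below is a generic rendering of that idea under assumed data shapes, not the authors' exact key vector analysis or multi-stage class removal.

```python
import numpy as np

def fit_class_subspaces(train, k=3):
    """train maps class label -> (n_samples, n_bands) training spectra.
    Keep the k leading right singular vectors as each class's basis."""
    bases = {}
    for label, X in train.items():
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        bases[label] = Vt[:k].T                      # (n_bands, k), orthonormal
    return bases

def classify(pixels, bases):
    """Assign each pixel spectrum (rows of `pixels`) to the class whose
    subspace reconstructs it with the smallest residual."""
    labels = list(bases)
    resid = np.stack(
        [np.linalg.norm(pixels - pixels @ bases[l] @ bases[l].T, axis=1)
         for l in labels])
    return np.array(labels)[np.argmin(resid, axis=0)]
```

A multi-stage variant, in the spirit of the paper, would apply `classify` repeatedly, removing confidently labelled land-cover classes from the scene at each pass.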
Alonso-Perez, Jose Luis; Lopez-Lopez, Almudena; La Touche, Roy; Lerma-Lara, Sergio; Suarez, Emilio; Rojas, Javier; Bishop, Mark D; Villafañe, Jorge Hugo; Fernández-Carnero, Josué
2017-10-01
The purpose of this study was to evaluate the extent to which psychological factors interact with a particular manual therapy (MT) technique to induce hypoalgesia in healthy subjects. Seventy-five healthy volunteers (36 female, 39 male) were recruited into this double-blind, controlled, parallel study. Subjects were randomly assigned to receive a high velocity low amplitude technique (HVLA), joint mobilization, or cervical lateral glide mobilization (CLGM). Pressure pain thresholds (PPT) were measured over C7 unilaterally and over the trapezius muscle and lateral epicondyle bilaterally, before a single MT technique was applied and immediately afterwards. Pain catastrophizing, depression, anxiety and kinesiophobia were evaluated before treatment. The results indicate that hypoalgesia was observed in all groups after treatment in the neck and elbow regions (P < 0.05), but mobilization induced greater hypoalgesic effects. Catastrophizing interacted with the change over time in PPT at C7 in the manipulation group. All the MT techniques studied produced local and segmental hypoalgesic effects, supporting the results of previous studies of the individual interventions. The interaction between catastrophizing and the HVLA technique suggests that when the catastrophizing level is low or medium the chance of success is high, but high levels of catastrophizing may result in a poor outcome after HVLA intervention. ClinicalTrials.gov Registration Number: NCT02782585. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Perrin, Marshall D.; Ghez, A. M.
2009-05-01
Learner-centered interactive instruction methods now have a proven track record in improving learning in "Astro 101" courses for non-majors, but have rarely been applied to higher-level astronomy courses. Can we hope for similar gains in classes aimed at astrophysics majors, or is the subject matter too fundamentally different for those techniques to apply? We present here an initial report on an updated calculus-based Introduction to Astrophysics class at UCLA that suggests such techniques can indeed result in increased learning for major students. We augmented the traditional blackboard-derivation lectures and challenging weekly problem sets by adding online questions on pre-reading assignments ("just-in-time teaching") and frequent multiple-choice questions in class ("Think-Pair-Share"). We describe our approach, and present examples of the new Think-Pair-Share questions developed for this more sophisticated material. Our informal observations after one term are that with this approach students are more engaged and alert, and score higher on exams than is typical of previous years. This is anecdotal evidence, not hard data yet, and there is clearly a vast amount of work to be done in this area. But our first impressions strongly encourage us that interactive methods should be able to improve the astrophysics major just as they have improved Astro 101.
Health Risk-Based Assessment and Management of Heavy Metals-Contaminated Soil Sites in Taiwan
Lai, Hung-Yu; Hseu, Zeng-Yei; Chen, Ting-Chien; Chen, Bo-Ching; Guo, Horng-Yuh; Chen, Zueng-Sang
2010-01-01
Risk-based assessment is a way to evaluate the potential hazards of contaminated sites, based on considering the linkages between pollution sources, pathways, and receptors. These linkages can be broken by source reduction, pathway management, and modifying exposure of the receptors. In Taiwan, the Soil and Groundwater Pollution Remediation Act (SGWPR Act) uses one target regulation to evaluate the contamination status of soil and groundwater pollution. More than 600 sites contaminated with heavy metals (HMs) have been remediated, and the costs of this process are always high. Besides using soil remediation techniques to remove contaminants from these sites, the selection of possible remediation methods to obtain rapid risk reduction is permissible and of increasing interest. This paper discusses previous soil remediation techniques applied to different sites in Taiwan and also clarifies the differences in risk assessment before and after soil remediation, as obtained by applying different risk assessment models. This paper also includes case studies on: (1) food safety risk assessment for brown rice growing on a HMs-contaminated site; (2) a tiered approach to health risk assessment for a contaminated site; (3) risk assessment for phytoremediation techniques applied to HMs-contaminated sites; and (4) soil remediation cost analysis for contaminated sites in Taiwan. PMID:21139851
Asteroseismic inversions in the Kepler era: application to the Kepler Legacy sample
NASA Astrophysics Data System (ADS)
Buldgen, Gaël; Reese, Daniel; Dupret, Marc-Antoine
2017-10-01
In the past few years, the CoRoT and Kepler missions have carried out what is now called the space photometry revolution. This revolution is still ongoing thanks to K2 and will be continued by the TESS and PLATO 2.0 missions. However, the photometry revolution must also be followed by progress in stellar modelling, in order to lead to more precise and accurate determinations of fundamental stellar parameters such as masses, radii and ages. In this context, the long-standing problems related to mixing processes in stellar interiors are the main obstacle to further improvement of stellar modelling. In this contribution, we will apply structural asteroseismic inversion techniques to targets from the Kepler Legacy sample and analyse how these can help us constrain the fundamental parameters and mixing processes in these stars. Our approach is based on previous studies using the SOLA inversion technique [1] to determine integrated quantities such as the mean density [2], the acoustic radius, and core conditions indicators [3], and has already been successfully applied to the 16Cyg binary system [4]. We will show how this technique can be applied to the Kepler Legacy sample and how new indicators can help us further constrain the chemical composition profiles of stars as well as provide stringent constraints on stellar ages.
Extracting attosecond delays from spectrally overlapping interferograms
NASA Astrophysics Data System (ADS)
Jordan, Inga; Wörner, Hans Jakob
2018-02-01
Attosecond interferometry is becoming an increasingly popular technique for measuring the dynamics of photoionization in real time. Whereas early measurements focused on atomic systems with very simple photoelectron spectra, the technique is now being applied to more complex systems including isolated molecules and solids. The increase in complexity translates into augmented spectral congestion, unavoidably resulting in spectral overlap in attosecond interferograms. Here, we discuss currently used methods for phase retrieval and introduce two new approaches for determining attosecond photoemission delays from spectrally overlapping photoelectron spectra. We show that the previously used technique, which consists of spectrally integrating the areas of interest, does not in general provide reliable results. Our methods resolve this problem, thereby opening the technique of attosecond interferometry to complex systems and fully exploiting its specific advantages in terms of spectral resolution compared to attosecond streaking.
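For context, the conventional analysis the abstract critiques fits each spectrally integrated sideband with an oscillation at twice the infrared frequency and reads the delay off the fitted phase. A minimal sketch of that baseline follows; the frequency value, signal names and two-channel setup are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

OMEGA = 2.35e15  # assumed IR angular frequency (rad/s), ~800 nm driver

def sideband(tau, A, B, phi):
    # Standard sideband oscillation at twice the IR frequency.
    return A + B * np.cos(2 * OMEGA * tau - phi)

def extract_delay(tau, signal_a, signal_b):
    """Fit the oscillation phase of two integrated sideband signals and
    convert the phase difference into a relative delay (seconds)."""
    p0 = [signal_a.mean(), np.ptp(signal_a) / 2, 0.0]
    phi_a = curve_fit(sideband, tau, signal_a, p0=p0)[0][2]
    p0 = [signal_b.mean(), np.ptp(signal_b) / 2, 0.0]
    phi_b = curve_fit(sideband, tau, signal_b, p0=p0)[0][2]
    dphi = np.angle(np.exp(1j * (phi_a - phi_b)))  # wrap to (-pi, pi]
    return dphi / (2 * OMEGA)
```

The paper's point is precisely that when two photoelectron channels overlap within the integration window, the single-cosine fit above mixes their phases, which is what the newly proposed methods avoid.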
Bardus, Marco; van Beurden, Samantha B; Smith, Jane R; Abraham, Charles
2016-03-10
There are thousands of apps promoting dietary improvement, increased physical activity (PA) and weight management. Despite a growing number of reviews in this area, popular apps have not been comprehensively analysed in terms of features related to engagement, functionality, aesthetics, information quality, and content, including the types of change techniques employed. The databases containing information about all Health and Fitness apps on Google Play and iTunes (7,954 and 25,491 apps, respectively) were downloaded in April 2015. Database filters were applied to select the most popular apps available in both stores. Two researchers screened the descriptions, selecting only weight management apps. Features, app quality and content were independently assessed using the Mobile App Rating Scale (MARS) and previously defined categories of techniques relevant to behaviour change. Inter-coder reliabilities were calculated, and correlations between features explored. Of the 23 popular apps included in the review, 16 were free (70%) and 15 (65%) addressed weight control, diet and PA combined; 19 (83%) allowed behavioural tracking. On 5-point MARS scales, apps were of average quality (Md = 3.2, IQR = 1.4); "functionality" (Md = 4.0, IQR = 1.1) was the highest and "information quality" (Md = 2.0, IQR = 1.1) the lowest domain. On average, 10 techniques were identified per app (range: 1-17), and of the 34 categories applied, goal setting and self-monitoring techniques were most frequently identified. App quality was positively correlated with the number of techniques included (rho = .58, p < .01) and the number of "technical" features (rho = .48, p < .05), which was in turn associated with the number of techniques included (rho = .61, p < .01). Apps that provided tracking used significantly more techniques than those that did not. Apps with automated tracking scored significantly higher in engagement, aesthetics, and overall MARS scores. Those that used change techniques previously associated with effectiveness (i.e., goal setting, self-monitoring and feedback) also had better "information quality". The popular apps assessed are of overall moderate quality and include behavioural tracking features and a range of change techniques associated with behaviour change. These apps may influence behaviour, although more attention to information quality and evidence-based content is warranted to improve their quality.
Feasibility study for automatic reduction of phase change imagery
NASA Technical Reports Server (NTRS)
Nossaman, G. O.
1971-01-01
The feasibility of automatically reducing a form of pictorial aerodynamic heating data is discussed. The imagery, depicting the melting history of a thin coat of fusible temperature indicator painted on an aerodynamically heated model, was previously reduced by manual methods. Careful examination of various lighting theories and approaches led to an experimentally verified illumination concept capable of yielding high-quality imagery. Both digital and video image processing techniques were applied to reduction of the data, and it was demonstrated that either method can be used to develop superimposed contours. Mathematical techniques were developed to find the model-to-image and the inverse image-to-model transformation using six conjugate points, and methods were developed using these transformations to determine heating rates on the model surface. A video system was designed which is able to reduce the imagery rapidly, economically and accurately. Costs for this system were estimated. A study plan was outlined whereby the mathematical transformation techniques developed to produce model coordinate heating data could be applied to operational software, and methods were discussed and costs estimated for obtaining the digital information necessary for this software.
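The model-to-image mapping from six conjugate points described above is naturally posed as a direct linear transform (DLT): each point pair contributes two linear equations in the entries of a 3x4 projection matrix, and six pairs over-determine its 11 degrees of freedom. A textbook least-squares sketch follows; the report's exact formulation is not given in the abstract.

```python
import numpy as np

def dlt_projection(model_pts, image_pts):
    """Recover a 3x4 projection matrix from n >= 6 conjugate
    (model, image) point pairs via a homogeneous least-squares fit."""
    rows = []
    for (X, Y, Z), (u, v) in zip(model_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(rows, float)
    # Null-space solution: right singular vector of the smallest singular value.
    P = np.linalg.svd(A)[2][-1].reshape(3, 4)
    return P / P[-1, -1]

def project(P, pt):
    """Map a model point through P, dividing by the homogeneous coordinate."""
    u, v, w = P @ np.append(pt, 1.0)
    return u / w, v / w
```

The inverse image-to-model direction mentioned in the abstract follows the same pattern with the roles of the point sets exchanged, subject to a constraint (such as a known model surface) that removes the lost depth dimension.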
NASA Astrophysics Data System (ADS)
Zhou, Xiang
Using an innovative portable holographic inspection and testing system (PHITS) developed at the Australian Defence Force Academy, fatigue cracks in riveted lap joints can be detected by visually inspecting the abnormal fringe changes recorded on holographic interferograms. In this thesis, for automatic crack detection, some modern digital image processing techniques are investigated and applied to holographic interferogram evaluation. Fringe analysis algorithms are developed for identification of the crack-induced fringe changes. Theoretical analysis of PHITS and riveted lap joints and two typical experiments demonstrate that fatigue cracks in lightly-clamped joints induce two characteristic fringe changes: local fringe discontinuities at the cracking sites, and a global crescent fringe distribution near the edge of the rivet hole. Both fringe features are used for crack detection in this thesis. As a basis of the fringe feature extraction, an algorithm for local fringe orientation calculation is proposed. For high orientation accuracy and computational efficiency, Gaussian gradient filtering and neighboring direction averaging are used to minimize the effects of image background variations and random noise. The neighboring direction averaging is also used to approximate the fringe directions at the centerlines of bright and dark fringes. Experimental results indicate that for high orientation accuracy the scales of the Gaussian filter and the neighboring direction averaging should be chosen according to the local fringe spacings. The orientation histogram technique is applied to detect the local fringe discontinuities due to the fatigue cracks. The Fourier descriptor technique is used to characterize the change of the global fringe distribution from a circular to a crescent shape with fatigue crack growth. Experiments and computer simulations are conducted to analyze the detectability and reliability of crack detection using the two techniques. Results demonstrate that the Fourier descriptor technique is more promising in the detection of short cracks near the edge of the rivet head. However, it is not as reliable as the fringe orientation technique for detection of long through cracks. For reliability, both techniques should be used in practical crack detection. Neither the Fourier descriptor technique nor the orientation histogram technique has previously been applied to holographic interferometry. While this work relates primarily to interferograms of cracked rivets, the techniques could be readily applied to other areas of fringe pattern analysis.
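The orientation step described above can be sketched with standard tools: Gaussian-derivative filtering for the gradients, and averaging of doubled angles to emulate neighboring direction averaging (doubling removes the 180-degree ambiguity of fringe directions). The filter scales and the weighting are assumptions; the thesis's specific algorithm may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fringe_orientation(img, sigma_grad=2.0, sigma_avg=8.0):
    """Local fringe orientation (radians, in [0, pi)) from
    Gaussian-smoothed image gradients."""
    gx = gaussian_filter(img, sigma_grad, order=(0, 1))  # d/dx
    gy = gaussian_filter(img, sigma_grad, order=(1, 0))  # d/dy
    # Represent each gradient direction as a vector at twice the angle,
    # weighted by gradient magnitude, then smooth (direction averaging).
    theta2 = 2.0 * np.arctan2(gy, gx)
    w = np.hypot(gx, gy)
    c = gaussian_filter(w * np.cos(theta2), sigma_avg)
    s = gaussian_filter(w * np.sin(theta2), sigma_avg)
    # Fringes run perpendicular to the gradient; rotate by 90 degrees.
    return (0.5 * np.arctan2(s, c) + np.pi / 2) % np.pi
```

A per-region histogram of these orientations is then the input to the orientation histogram technique, whose local disruptions flag crack-induced fringe discontinuities.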
Photoactivated methods for enabling cartilage-to-cartilage tissue fixation
NASA Astrophysics Data System (ADS)
Sitterle, Valerie B.; Roberts, David W.
2003-06-01
The present study investigates whether photoactivated attachment of cartilage can provide a viable method for more effective repair of damaged articular surfaces by providing an alternative to sutures, barbs, or fibrin glues for initial fixation. Unlike artificial materials, biological constructs do not possess the initial strength for press-fitting and are instead sutured or pinned in place, typically inducing even more tissue trauma. A possible alternative involves the application of a photosensitive material, which is then photoactivated with a laser source to attach the implant and host tissues together in either a photothermal or photochemical process. The photothermal version of this method shows potential, but has been almost entirely applied to vascularized tissues. Cartilage, however, exhibits several characteristics that produce appreciable differences between applying and refining these techniques when compared to previous efforts involving vascularized tissues. Preliminary investigations involving photochemical photosensitizers based on singlet oxygen and electron transfer mechanisms are discussed, and characterization of the photodynamic effects on bulk collagen gels as a simplified model system using FTIR is performed. Previous efforts using photothermal welding applied to cartilaginous tissues are reviewed.
Mishiro, Tsuyoshi; Shibagaki, Kotaro; Matsuda, Kayo; Fukuyama, Chika; Okada, Mayumi; Mikami, Hironobu; Izumi, Daisuke; Yamashita, Noritsugu; Okimoto, Eiko; Fukuda, Naoki; Aimi, Masahito; Fukuba, Nobuhiko; Oshima, Naoki; Takanashi, Toshihiro; Matsubara, Takeshi; Ishimura, Norihisa; Ishihara, Shunji; Kinoshita, Yoshikazu
2016-08-01
In recent years, treatment techniques in which polyglycolic acid sheets are applied to various situations with fibrin glue have exhibited great clinical potential, and previous studies have reported safety and efficacy. We describe closure of a non-healing perforated duodenal ulcer with the use of a polyglycolic acid sheet and fibrin glue in an elderly patient who was not a candidate for surgery.
Human Splice-Site Prediction with Deep Neural Networks.
Naito, Tatsuhiko
2018-04-18
Accurate splice-site prediction is essential to delineate gene structures from sequence data. Several computational techniques have been applied to create systems that predict canonical splice sites. For classification tasks, deep neural networks (DNNs) have achieved record-breaking results and often outperform other supervised learning techniques. In this study, a new method of splice-site prediction using DNNs is proposed. The proposed system receives an input sequence and returns an answer as to whether it is a splice site. The length of the input is 140 nucleotides, with the consensus sequence (i.e., "GT" and "AG" for the donor and acceptor sites, respectively) in the middle. Each input sequence is applied to the pretrained DNN model, which determines the probability that the input is a splice site. The model consists of convolutional layers and bidirectional long short-term memory network layers. The pretraining and validation were conducted using the data set tested in previously reported methods. The performance evaluation results showed that the proposed method can outperform the previous methods. In addition, the patterns learned by the DNNs were visualized as position frequency matrices (PFMs). Some of the PFMs were very similar to the consensus sequence. The trained DNN model and the brief source code for the prediction system have been made available. Further improvement will be achieved following the further development of DNNs.
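The described architecture, a convolution over the one-hot encoded 140-nt window feeding a bidirectional LSTM with a probability read-out, can be sketched as below. All layer sizes, kernel widths and pooling choices are illustrative assumptions; the paper specifies its own hyperparameters.

```python
import torch
import torch.nn as nn

class SpliceNet(nn.Module):
    """Sketch of a conv + BiLSTM splice-site classifier over one-hot DNA.
    Hypothetical layer sizes, not the paper's configuration."""
    def __init__(self, n_filters=64, hidden=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(4, n_filters, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(n_filters, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                         # x: (batch, 4, 140) one-hot
        h = self.conv(x)                          # (batch, n_filters, 70)
        h, _ = self.lstm(h.transpose(1, 2))       # (batch, 70, 2*hidden)
        return torch.sigmoid(self.head(h[:, -1]))  # splice-site probability

model = SpliceNet()
prob = model(torch.zeros(1, 4, 140))  # dummy 140-nt input
```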
Lightweight and Statistical Techniques for Petascale Debugging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Barton
2014-06-30
This project investigated novel techniques for debugging scientific applications on petascale architectures. In particular, we developed lightweight tools that narrow the problem space when bugs are encountered. We also developed techniques that either limit the number of tasks and the code regions to which a developer must apply a traditional debugger or that apply statistical techniques to provide direct suggestions of the location and type of error. We extended previous work on the Stack Trace Analysis Tool (STAT), which has already demonstrated scalability to over one hundred thousand MPI tasks. We also extended statistical techniques developed to isolate programming errors in widely used sequential or threaded applications in the Cooperative Bug Isolation (CBI) project to large-scale parallel applications. Overall, our research substantially improved productivity on petascale platforms through a tool set for debugging that complements existing commercial tools. Previously, Office of Science application developers relied either on primitive manual debugging techniques based on printf or on tools, such as TotalView, that do not scale beyond a few thousand processors. However, bugs often arise at scale, and substantial effort and computation cycles are wasted either in reproducing the problem in a smaller run that can be analyzed with the traditional tools or in repeated runs at scale that use the primitive techniques. New techniques that work at scale and automate the process of identifying the root cause of errors were needed. These techniques significantly reduced the time spent debugging petascale applications, thus leaving a greater overall amount of time for application scientists to pursue the scientific objectives for which the systems were purchased. We developed a new paradigm for debugging at scale: techniques that reduce the debugging scenario to a scale suitable for traditional debuggers, e.g., by narrowing the search for the root-cause analysis to a small set of nodes or by identifying equivalence classes of nodes and sampling our debug targets from them. We implemented these techniques as lightweight tools that efficiently work at the full scale of the target machine. We explored four lightweight debugging refinements: generic classification parameters, such as stack traces; application-specific classification parameters, such as global variables; statistical data acquisition techniques; and machine learning based approaches to perform root cause analysis. Work done under this project can be divided into two categories: new algorithms and techniques for scalable debugging, and foundation infrastructure work on our MRNet multicast-reduction framework for scalability and the Dyninst binary analysis and instrumentation toolkits.
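The equivalence-class idea is simple to illustrate: group tasks by an identical call-path signature and attach a heavyweight debugger to one representative per group. The sketch below is schematic, with hypothetical trace data, and is not STAT's actual MRNet-based implementation.

```python
from collections import defaultdict
import random

def equivalence_classes(traces):
    """Group MPI ranks whose stack traces match, then sample one
    representative rank per class as a debug target."""
    classes = defaultdict(list)
    for rank, trace in traces.items():
        classes[tuple(trace)].append(rank)
    return {sig: (ranks, random.choice(ranks))
            for sig, ranks in classes.items()}

# Hypothetical traces: two ranks stuck in mpi_wait, one flushing I/O.
traces = {0: ["main", "solve", "mpi_wait"],
          1: ["main", "solve", "mpi_wait"],
          2: ["main", "io_flush"]}
for sig, (ranks, rep) in equivalence_classes(traces).items():
    print(len(ranks), "ranks at", " > ".join(sig), "-> attach to rank", rep)
```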
Monte Carlo simulation of a noisy quantum channel with memory.
Akhalwaya, Ismail; Moodley, Mervlyn; Petruccione, Francesco
2015-10-01
The classical capacity of quantum channels is well understood for channels with uncorrelated noise. For the case of correlated noise, however, there are still open questions. We calculate the classical capacity of a forgetful channel constructed by Markov switching between two depolarizing channels. Techniques have previously been applied to approximate the output entropy of this channel and thus its capacity. In this paper, we use a Metropolis-Hastings Monte Carlo approach to numerically calculate the entropy. The algorithm is implemented in parallel and its performance is studied and optimized. The effects of memory on the capacity are explored and previous results are confirmed to higher precision.
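As a reminder of the core numerical tool, a random-walk Metropolis-Hastings sampler estimates expectations under a distribution known only up to normalisation. The sketch below targets a toy one-dimensional density rather than the paper's channel-output distribution, which requires a problem-specific state space and proposal.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_hastings(log_p, x0, n_steps, step=0.5):
    """Random-walk Metropolis sampler for an unnormalised log-density.
    Generic sketch; the paper's channel-specific sampler differs."""
    x, samples = x0, []
    lp = log_p(x)
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal()
        lp_prop = log_p(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
            x, lp = prop, lp_prop
        samples.append(x)
    return np.array(samples)

# Example: estimate E[x^2] under a standard normal target.
chain = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 50_000)
print(chain[5_000:].var())  # burn-in discarded; expect ~1.0
```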
Characterization of hydrogel printer for direct cell-laden scaffolds
NASA Astrophysics Data System (ADS)
Whulanza, Yudan; Arsyan, Rendria; Saragih, Agung Shamsuddin
2018-02-01
Additive manufacturing technology has developed massively over the last decade. The technology was previously known as rapid prototyping, a set of techniques aimed at producing prototype products in a fast and economical way. Currently, this technique is also applied to fabricate microstructures used in tissue engineering. Here, we introduce a 3D printer that uses gelatin hydrogel to realize cell-laden scaffolds with dimensions around 50-100 µm. However, in order to fabricate at such precise dimensions, optimal working parameters are required to control the physical properties of the gelatin. At the end of our study, we formulated the parameters that best produced the desired product.
Fundamentals and techniques of nonimaging optics for solar energy concentration
NASA Astrophysics Data System (ADS)
Winston, R.; Gallagher, J. J.
1980-05-01
The properties of a variety of new and previously known nonimaging optical configurations were investigated. A thermodynamic model was developed which explains quantitatively the enhancement of the effective absorptance of gray-body receivers through cavity effects. The classic method of Liu and Jordan, which allows one to predict diffuse sunlight levels through correlation with the total and direct fractions, was revised, updated, and applied to predict the performance of nonimaging solar collectors. The conceptual design of an optimized solar collector which integrates the techniques of nonimaging concentration with evacuated-tube collector technology was carried out and is presently the basis for a separately funded hardware development project.
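The thermodynamic reasoning behind nonimaging concentrators is usually summarized by the sine limit on concentration for a given acceptance half-angle; this is a standard result of the field, stated here for context rather than taken from the abstract:

```latex
C_{2\mathrm{D}}^{\max} \;=\; \frac{n}{\sin\theta_a},
\qquad
C_{3\mathrm{D}}^{\max} \;=\; \frac{n^{2}}{\sin^{2}\theta_a},
```

where \(\theta_a\) is the acceptance half-angle and \(n\) the refractive index of the medium at the absorber. Compound parabolic concentrators, the archetypal nonimaging design, approach these limits without forming an image of the source.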
A rapid technique for the histological examination of large ovarian follicles.
Driancourt, M A; Mariana, J C; Palmer, E
1981-01-01
A rapid technique for counting and classifying large ovarian follicles of domestic animals is described. Using a cryostat, 250-µm-thick sections were cut from the frozen ovary, and an image of the surface of each ovarian section was recorded on videotape. By replaying the videotape, the largest profile of each follicle larger than 1 mm in diameter was readily identified and measured. The presence or absence of atresia was determined by applying standard histological methods to fragments of individual follicles taken from the frozen sections. The results obtained are similar to those found using previous methods and demand only one-quarter of the time.
Ovonic switching in tin selenide thin films. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Baxter, C. R.
1974-01-01
Amorphous tin selenide thin films which possess Ovonic switching properties were fabricated using vacuum deposition techniques. The results obtained indicate that memory-type Ovonic switching does occur in these films; the energy density required for switching from a high-impedance to a low-impedance state depends on the spacing between the electrodes of the device. The switching is also a function of the magnitude of the applied voltage pulse. A completely automated, computer-controlled testing procedure was developed which allows precise control over the shape of the applied voltage switching pulse. A survey of previous experimental and theoretical work in the area of Ovonic switching is also presented.
NASA Astrophysics Data System (ADS)
Sultanov, R. A.; Guster, D.; Adhukari, S. K.
2011-05-01
The possibility of correctly describing non-symmetrical HD+H2 collisions at low temperatures (T ≤ 300 K) by applying a symmetrical H2-H2 potential energy surface (PES) [Diep, P. & Johnson, K. 2000, J. Chem. Phys. 113, 3480 (DJ PES)] is considered. Using a special mathematical transformation technique applied to this surface, together with a quantum dynamical method, we obtained quite satisfactory agreement with previous results computed on another H2-H2 PES [Boothroyd, A. I. et al. 2002, J. Chem. Phys. 116, 666 (BMKP PES)].
Electro-pumped whispering gallery mode ZnO microlaser array
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, G. Y.; State Key Laboratory of Bioelectronics, School of Electronic Science and Engineering, Southeast University, Nanjing 210096; Li, J. T.
2015-01-12
By employing a vapor-phase transport method, ZnO microrods are fabricated and directly assembled on a p-GaN substrate to form a heterostructural microlaser array, which avoids the relatively complicated etching process required in previous work. Under applied forward bias, whispering gallery mode ZnO ultraviolet lasing is obtained from the as-fabricated heterostructural microlaser array. The device's electroluminescence originates from three distinct electron-hole recombination processes at the heterojunction interface, and whispering gallery mode ultraviolet lasing is obtained when the applied voltage is beyond the lasing threshold. This work may represent a significant step towards a facile fabrication technique for future micro/nanolasers.
Nanostructure studies of strongly correlated materials.
Wei, Jiang; Natelson, Douglas
2011-09-01
Strongly correlated materials exhibit an amazing variety of phenomena, including metal-insulator transitions, colossal magnetoresistance, and high temperature superconductivity, as strong electron-electron and electron-phonon couplings lead to competing correlated ground states. Recently, researchers have begun to apply nanostructure-based techniques to this class of materials, examining electronic transport properties on previously inaccessible length scales, and applying perturbations to drive systems out of equilibrium. We review progress in this area, particularly emphasizing work in transition metal oxides (Fe(3)O(4), VO(2)), manganites, and high temperature cuprate superconductors. We conclude that such nanostructure-based studies have strong potential to reveal new information about the rich physics at work in these materials.
A human performance modelling approach to intelligent decision support systems
NASA Technical Reports Server (NTRS)
Mccoy, Michael S.; Boys, Randy M.
1987-01-01
Manned space operations require that the many automated subsystems of a space platform be controllable by a limited number of personnel. To minimize the interaction required of these operators, artificial intelligence techniques may be applied to embed a human performance model within the automated, or semi-automated, systems, thereby allowing the derivation of operator intent. A similar application has previously been proposed in the domain of fighter piloting, where the demand for pilot intent derivation is primarily a function of limited time and high workload rather than limited operators. The derivation and propagation of pilot intent is presented as it might be applied to some programs.
Critical Evaluation of Soil Pore Water Extraction Methods on a Natural Soil
NASA Astrophysics Data System (ADS)
Orlowski, Natalie; Pratt, Dyan; Breuer, Lutz; McDonnell, Jeffrey
2017-04-01
Soil pore water extraction is an important component of ecohydrological studies for the measurement of δ2H and δ18O. The effect of the pore water extraction technique on the resultant isotopic signature is poorly understood. Here we present results of an intercomparison of commonly applied lab-based soil water extraction techniques on a natural soil: high pressure mechanical squeezing, centrifugation, direct vapor equilibration, microwave extraction, and two types of cryogenic extraction systems. We applied these extraction methods to a natural summer-dry (gravimetric water contents ranging from 8% to 15%) glacio-lacustrine, moderately fine textured clayey soil, excavated in 10 cm sampling increments to a depth of 1 meter. Isotope results were analyzed via OA-ICOS and compared for each extraction technique that produced liquid water. From our previous intercomparison study of the same extraction techniques, but with standard soils, we discovered that extraction methods are not comparable. We therefore tested the null hypothesis that all extraction techniques would be able to replicate, in a comparable manner, the natural evaporation front occurring in a summer-dry soil. Our results showed that the extraction technique utilized had a significant effect on the soil water isotopic composition. High pressure mechanical squeezing and vapor equilibration produced similar results, with similarly sloped evaporation lines. Due to the nature of the soil properties and dryness, centrifugation was unsuccessful in obtaining pore water for isotopic analysis. The two tested cryogenic extraction techniques produced results similar to each other, on a similarly sloping evaporation line, but dissimilar with depth.
Stress and blood donation: effects of music and previous donation experience.
Ferguson, E; Singh, A P; Cunningham-Snell, N
1997-05-01
Making a blood donation, especially for first-time donors, can be a stressful experience. These feelings of stress may inhibit donors from returning. This paper applies stress theory to this particular problem. The effects of a stress management intervention (the provision of music) and previous donor experience were examined in relation to pre- and post-donation mood, environmental appraisals and coping behaviour. Results indicated that the provision of music had detrimental effects on environmental appraisals for those who have donated up to two times previously, but beneficial effects for those who had donated three times before. These effects were, to an extent, moderated by coping processes but not perceived control. It is recommended that the provision of music is not used as a stress management technique in the context of blood donation.
Optimal cooperative control synthesis of active displays
NASA Technical Reports Server (NTRS)
Garg, S.; Schmidt, D. K.
1985-01-01
A technique is developed that is intended to provide a systematic approach to synthesizing display augmentation for optimal manual control in complex, closed-loop tasks. A cooperative control synthesis technique, previously developed to design pilot-optimal control augmentation for the plant, is extended to incorporate the simultaneous design of performance-enhancing displays. The technique utilizes an optimal control model of the man in the loop. It is applied to the design of a quickening control law for a display and a simple K/s² plant, and then to an F-15 type aircraft in a multi-channel task. Utilizing the closed-loop modeling and analysis procedures, the results from the display design algorithm are evaluated and an analytical validation is performed. Experimental validation is recommended for future efforts.
An Algebra-Based Introductory Computational Neuroscience Course with Lab.
Fink, Christian G
2017-01-01
A course in computational neuroscience has been developed at Ohio Wesleyan University which requires no previous experience with calculus or computer programming, and which exposes students to theoretical models of neural information processing and techniques for analyzing neural data. The exploration of theoretical models of neural processes is conducted in the classroom portion of the course, while data analysis techniques are covered in lab. Students learn to program in MATLAB and are offered the opportunity to conclude the course with a final project in which they explore a topic of their choice within computational neuroscience. Results from a questionnaire administered at the beginning and end of the course indicate significant gains in student facility with core concepts in computational neuroscience, as well as with analysis techniques applied to neural data.
NASA Astrophysics Data System (ADS)
Martínez-Murillo, Juan F.; Remond, Ricardo; Ruiz-Sinoga, José D.
2015-04-01
The study aim was to characterize the vegetation cover in an area burned 22 years ago, considering both the situation prior to the 1991 wildfire and the current one in 2013. The objectives were to: (i) compare the current vegetation cover with that prior to the wildfire; (ii) evaluate whether the vegetation has recovered its pre-fire cover; and (iii) determine the spatial variability of vegetation recovery 22 years after the wildfire. The study area is located in Sierra de las Nieves, in the south of Spain, and corresponds to an area affected by a wildfire on August 8th, 1991. The burned area was 8156 ha, and burn severity was spatially very high. The main geographic features of the burned area are: mountainous topography (altitudes ranging from 250 m to 1500 m; slope gradient >25%; mainly south-facing exposure); igneous (peridotites), metamorphic (gneiss) and calcareous (limestones) rocks; and predominantly forest land use (Pinus pinaster woodlands, 10%; open pine forest + shrubland, 40%; shrubland, 35%; and bare soil + grassland, 15%). Remote sensing techniques and GIS analysis were applied to achieve the objectives. Landsat 5 and Landsat 8 images (July 13th, 1991 and July 1st, 2013) were used for the pre-wildfire situation and 22 years after, respectively. The 1990 CORINE land cover was also used to map the 1991 land uses prior to the wildfire, and the Andalucía Regional Government wildfire historic records were used to select the burned area and its geographical limits. The 1991 and 2013 land cover maps were obtained by means of object-oriented classifications. The NDVI and PVI1 vegetation indexes were calculated and mapped for both years. Finally, image transformations and kernel density surfaces were applied to determine the most recovered areas and to map the spatial concentration of bare soil and pine cover in 1991 and 2013, respectively. According to the results, the combination of remote sensing and GIS analysis made it possible to map the most recovered areas affected by the 1991 wildfire. The vegetation indexes indicated that, after 22 years, the 2013 vegetation cover was still lower than that mapped just before the 1991 wildfire in most of the burned area, a result also confirmed by the other techniques applied. Finally, the kernel density surfaces identified and located the most recovered areas of pine cover as well as those areas that still remain totally or partially uncovered (bare soil).
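Both indexes used above are simple band arithmetic on the red and near-infrared channels. A minimal sketch follows; the PVI1 soil-line coefficients are placeholders that must be fitted to each scene, and the band arrays are assumed to be atmospherically corrected reflectances.

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index from red/NIR bands."""
    return (nir - red) / (nir + red + 1e-12)  # epsilon avoids divide-by-zero

def pvi1(nir, red, a=0.97, b=5.77):
    """Perpendicular Vegetation Index: distance to an assumed bare-soil
    line nir = a*red + b. The coefficients here are illustrative only."""
    return (nir - a * red - b) / np.sqrt(1.0 + a * a)
```

Differencing the 1991 and 2013 index maps pixel by pixel then yields the recovery surface that the kernel density analysis summarizes.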
NASA Astrophysics Data System (ADS)
Adler, Ronald S.; Swanson, Scott D.; Yeung, Hong N.
1996-01-01
A projection-operator (PO) technique is applied to a general three-component model for magnetization transfer, extending our previous two-component model [R. S. Adler and H. N. Yeung, J. Magn. Reson. A 104, 321 (1993); H. N. Yeung, R. S. Adler, and S. D. Swanson, J. Magn. Reson. A 106, 37 (1994)]. The PO technique provides an elegant means of deriving a simple, effective rate equation in which there is a natural separation of relaxation and source terms, and allows incorporation of Redfield-Provotorov theory without any additional assumptions or restrictive conditions. The PO technique is extended to incorporate more general, multicomponent models. The three-component model is used to fit experimental data from samples of human hyaline cartilage and fibrocartilage. The fits of the three-component model are compared to the fits of the two-component model.
Usability-driven pruning of large ontologies: the case of SNOMED CT.
López-García, Pablo; Boeker, Martin; Illarramendi, Arantza; Schulz, Stefan
2012-06-01
To study ontology modularization techniques when applied to SNOMED CT in a scenario in which no previous corpus of information exists and to examine if frequency-based filtering using MEDLINE can reduce subset size without discarding relevant concepts. Subsets were first extracted using four graph-traversal heuristics and one logic-based technique, and were subsequently filtered with frequency information from MEDLINE. Twenty manually coded discharge summaries from cardiology patients were used as signatures and test sets. The coverage, size, and precision of extracted subsets were measured. Graph-traversal heuristics provided high coverage (71-96% of terms in the test sets of discharge summaries) at the expense of subset size (17-51% of the size of SNOMED CT). Pre-computed subsets and logic-based techniques extracted small subsets (1%), but coverage was limited (24-55%). Filtering reduced the size of large subsets to 10% while still providing 80% coverage. Extracting subsets to annotate discharge summaries is challenging when no previous corpus exists. Ontology modularization provides valuable techniques, but the resulting modules grow as signatures spread across subhierarchies, yielding a very low precision. Graph-traversal strategies and frequency data from an authoritative source can prune large biomedical ontologies and produce useful subsets that still exhibit acceptable coverage. However, a clinical corpus closer to the specific use case is preferred when available.
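To make the two stages concrete, the sketch below pairs one simple graph-traversal heuristic (an upward breadth-first walk from the signature concepts) with MEDLINE-style frequency filtering. The graph encoding, depth bound and `term_freq` lookup are hypothetical simplifications of the techniques the paper compares.

```python
from collections import deque

def extract_module(graph, signature, max_depth=3):
    """Breadth-first traversal over is-a/part-of links starting from the
    signature concepts; graph maps concept -> iterable of parents."""
    subset = set(signature)
    frontier = deque((c, 0) for c in signature)
    while frontier:
        concept, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for parent in graph.get(concept, ()):
            if parent not in subset:
                subset.add(parent)
                frontier.append((parent, depth + 1))
    return subset

def frequency_filter(subset, term_freq, min_count=1):
    """Keep only concepts whose preferred terms occur in the corpus
    (e.g. MEDLINE counts); term_freq is a hypothetical lookup table."""
    return {c for c in subset if term_freq.get(c, 0) >= min_count}
```

The paper's finding maps directly onto this pipeline: `extract_module` alone yields large, low-precision subsets, while `frequency_filter` prunes them to roughly a tenth of the size with 80% coverage retained.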
Aircraft applications of fault detection and isolation techniques
NASA Astrophysics Data System (ADS)
Marcos Esteban, Andres
In this thesis the problems of fault detection & isolation and fault-tolerant systems are studied from the perspective of LTI frequency-domain, model-based techniques. Emphasis is placed on the applicability of these LTI techniques to nonlinear models, especially to aerospace systems. Two applications of H∞ LTI fault diagnosis are given using an open-loop (no controller) design approach: one for the longitudinal motion of a Boeing 747-100/200 aircraft, the other for a turbofan jet engine. An algorithm formalizing a robust identification approach based on model validation ideas is also given and applied to the previous jet engine. A general linear fractional transformation formulation is given in terms of the Youla and dual Youla parameterizations for the integrated (control and diagnosis filter) approach. This formulation provides better insight into the trade-off between the control and the diagnosis objectives. It also provides the basic groundwork towards the development of nested schemes for the integrated approach. These nested structures allow iterative improvements of the control/filter Youla parameters based on successive identification of the system uncertainty (as given by the dual Youla parameter). The thesis concludes with an application of H∞ LTI techniques to the integrated design for the longitudinal motion of the previous Boeing 747-100/200 model.
Towards an automatic wind speed and direction profiler for Wide Field adaptive optics systems
NASA Astrophysics Data System (ADS)
Sivo, G.; Turchi, A.; Masciadri, E.; Guesalaga, A.; Neichel, B.
2018-05-01
Wide Field Adaptive Optics (WFAO) systems are among the most sophisticated adaptive optics (AO) systems available today on large telescopes. Knowledge of the vertical spatio-temporal distribution of wind speed (WS) and direction (WD) is fundamental to optimize the performance of such systems. Previous studies already proved that the Gemini Multi-Conjugated AO system (GeMS) is able to retrieve measurements of the WS and WD stratification using the SLOpe Detection And Ranging (SLODAR) technique and to store measurements in the telemetry data. In order to assess the reliability of these estimates and of the SLODAR technique applied to such complex AO systems, in this study we compared WS and WD values retrieved from GeMS with those obtained with the atmospheric model Meso-NH on a rich statistical sample of nights. It has previously been proved that the latter technique provided excellent agreement with a large sample of radiosoundings, both in statistical terms and on individual flights. It can be considered, therefore, as an independent reference. The excellent agreement between GeMS measurements and the model that we find in this study proves the robustness of the SLODAR approach. To bypass the complex procedures necessary to achieve automatic measurements of the wind with GeMS, we propose a simple automatic method to monitor nightly WS and WD using Meso-NH model estimates. Such a method can be applied to whatever present or new-generation facilities are supported by WFAO systems. The interest of this study is, therefore, well beyond the optimization of GeMS performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kulisek, Jonathan A.; Schweppe, John E.; Stave, Sean C.
2015-06-01
Helicopter-mounted gamma-ray detectors can provide law enforcement officials the means to quickly and accurately detect, identify, and locate radiological threats over a wide geographical area. The ability to accurately distinguish radiological threat-generated gamma-ray signatures from background gamma radiation in real time is essential in order to realize this potential. This problem is non-trivial, especially in urban environments, for which the background may change very rapidly during flight. This exacerbates the challenge of estimating background due to the poor counting statistics inherent in real-time airborne gamma-ray spectroscopy measurements. To address this, we have developed a new technique for real-time estimation of background gamma radiation from aerial measurements. This method is built upon the noise-adjusted singular value decomposition (NASVD) technique that was previously developed for estimating the potassium (K), uranium (U), and thorium (T) concentrations in soil post-flight. The method can be calibrated using K, U, and T spectra determined from radiation transport simulations, along with basis functions which may be determined empirically by applying maximum likelihood estimation (MLE) to previously measured airborne gamma-ray spectra. The method was applied to both measured and simulated airborne gamma-ray spectra, with and without man-made radiological source injections. Compared to schemes based on simple averaging, this technique was less sensitive to background contamination from the injected man-made sources and may be particularly useful when the gamma-ray background changes frequently during the course of the flight.
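The NASVD step at the heart of the method can be sketched briefly: spectra are rescaled so that Poisson noise is approximately uniform across channels, a singular value decomposition separates structured signal from noise, and only the leading components are kept. The component count and scaling details below are illustrative assumptions.

```python
import numpy as np

def nasvd_denoise(spectra, n_components=4):
    """Noise-adjusted SVD smoothing of airborne gamma-ray spectra.
    spectra: (n_records, n_channels) raw counts. Channels are scaled by
    the square root of the mean spectrum so Poisson noise is roughly
    uniform, then only the leading singular components are retained."""
    mean_spec = spectra.mean(axis=0)
    scale = np.sqrt(np.maximum(mean_spec, 1e-9))
    U, s, Vt = np.linalg.svd(spectra / scale, full_matrices=False)
    smooth = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    return smooth * scale  # back to count space
```

In the real-time variant described above, the empirical basis functions play the role of the retained singular components, with MLE fitting each incoming spectrum against them.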
Application of separable parameter space techniques to multi-tracer PET compartment modeling.
Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J
2016-02-07
Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
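The separable parameter space idea can be shown on a toy problem: for fixed nonlinear parameters, the amplitudes enter linearly and are solved exactly by least squares (variable projection), so the outer search runs only over the nonlinear parameters. The two-exponential model below is a stand-in for a compartment-model time-activity curve, not the paper's multi-tracer formulation.

```python
import numpy as np
from scipy.optimize import minimize

def varpro_residual(ks, t, y):
    """Profile out the linear amplitudes for fixed nonlinear rates ks:
    solve the linear subproblem exactly, return the residual norm."""
    basis = np.exp(-np.outer(t, ks))           # columns: exp(-k_i * t)
    amps, *_ = np.linalg.lstsq(basis, y, rcond=None)
    return np.linalg.norm(basis @ amps - y)

# Toy two-exponential "time-activity curve" (illustrative, not PET data).
t = np.linspace(0, 10, 200)
y = 3.0 * np.exp(-0.3 * t) + 1.5 * np.exp(-2.0 * t)
fit = minimize(varpro_residual, x0=[0.1, 1.0], args=(t, y),
               method="Nelder-Mead")
print(fit.x)  # recovered nonlinear rates; amplitudes come from lstsq
```

Reducing the search to the nonlinear rates is what makes the exhaustive-search fits mentioned in the abstract tractable.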
Robert B. Thomas
1990-01-01
Using a previously treated basin as a control in subsequent paired watershed studies requires the control to be stable. Basin stability can be assessed in many ways, some of which are investigated for the South Fork of Caspar Creek in northern California. This basin is recovering from logging and road building in the early 1970s. Three storm-based discharge...
Image restoration by the method of convex projections: part 2 applications and numerical results.
Sezan, M I; Stark, H
1982-01-01
The image restoration theory discussed in a previous paper by Youla and Webb [1] is applied to a simulated image, and the results are compared with those of the well-known Gerchberg-Papoulis algorithm. The results show that the method of image restoration by projection onto convex sets, by providing a convenient technique for utilizing a priori information, performs significantly better than the Gerchberg-Papoulis method.
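Projection onto convex sets (POCS) alternates projections onto constraint sets until they agree. The sketch below restores a band-limited 1-D signal from partial samples using two classic convex sets (data consistency and band limitation); the constraint choices are illustrative, and the paper's comparison concerns images with richer a priori sets.

```python
import numpy as np

def pocs_restore(known, mask, band, n_iter=200):
    """Alternate projections onto two convex sets: signals agreeing with
    the observed samples (mask == True) and band-limited signals
    (band is a boolean low-pass mask in the FFT domain)."""
    x = np.where(mask, known, 0.0)
    for _ in range(n_iter):
        X = np.fft.fft(x)
        x = np.real(np.fft.ifft(X * band))  # project onto band-limited set
        x[mask] = known[mask]               # project onto data-consistent set
    return x

# Example: recover a low-pass signal from ~40% of its samples.
n = 256
rng = np.random.default_rng(1)
band = np.zeros(n, bool)
band[:9] = band[-8:] = True                 # symmetric low-pass mask
truth = np.real(np.fft.ifft(np.fft.fft(rng.standard_normal(n)) * band))
mask = rng.random(n) < 0.4
print(np.abs(pocs_restore(truth, mask, band, 2000) - truth).max())
```

The Gerchberg-Papoulis algorithm is the special case using exactly these two sets; the strength of POCS noted in the abstract is that any additional closed convex constraint (positivity, known support, bounded energy) drops in as one more projection inside the loop.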
Analysis of transitional separation bubbles on infinite swept wings
NASA Technical Reports Server (NTRS)
Davis, R. L.; Carter, J. E.
1986-01-01
A previously developed two-dimensional local inviscid-viscous interaction technique for the analysis of airfoil transitional separation bubbles, ALESEP (Airfoil Leading Edge Separation), has been extended for the calculation of transitional separation bubbles over infinite swept wings. As part of this effort, Roberts' empirical correlation, which is interpreted as a separated flow empirical extension of Mack's stability theory for attached flows, has been incorporated into the ALESEP procedure for the prediction of the transition location within the separation bubble. In addition, the viscous procedure used in the ALESEP techniques has been modified to allow for wall suction. A series of two-dimensional calculations is presented as a verification of the prediction capability of the interaction techniques with the Roberts' transition model. Numerical tests have shown that this two-dimensional natural transition correlation may also be applied to transitional separation bubbles over infinite swept wings. Results of the interaction procedure are compared with Horton's detailed experimental data for separated flow over a swept plate which demonstrates the accuracy of the present technique. Wall suction has been applied to a similar interaction calculation to demonstrate its effect on the separation bubble. The principal conclusion of this paper is that the prediction of transitional separation bubbles over two-dimensional or infinite swept geometries is now possible using the present interacting boundary layer approach.
Deriving Function-failure Similarity Information for Failure-free Rotorcraft Component Design
NASA Technical Reports Server (NTRS)
Roberts, Rory A.; Stone, Robert B.; Tumer, Irem Y.; Clancy, Daniel (Technical Monitor)
2002-01-01
Performance and safety are the top concerns of high-risk aerospace applications at NASA. Eliminating or reducing performance and safety problems can be achieved with a thorough understanding of the potential failure modes in a design that lead to these problems. The majority of techniques use prior knowledge and experience, as well as Failure Modes and Effects Analyses (FMEA), to determine the potential failure modes of aircraft. The aircraft design needs to be passed through a general technique to ensure that every potential failure mode is considered, while avoiding spending time on improbable failure modes. In this work, this is accomplished by mapping failure modes to certain components, which are described by their functionality. In turn, the failure modes are then linked to the basic functions carried out within the components of the aircraft. Using the technique proposed in this paper, designers can examine the basic functions and select appropriate analyses to eliminate or design out the potential failure modes. This method was previously applied to a simple rotating machine test rig with basic functions that are common to a rotorcraft. In this paper, the technique is applied to the engine and power train of a rotorcraft, using failures and functions obtained from accident reports and engineering drawings.
Guidance of Nonlinear Nonminimum-Phase Dynamic Systems
NASA Technical Reports Server (NTRS)
Devasia, Santosh
1996-01-01
The research work has advanced the inversion-based guidance theory for: systems with non-hyperbolic internal dynamics; systems with parameter jumps; and systems where a redesign of the output trajectory is desired. A technique to achieve output tracking for nonminimum phase linear systems with non-hyperbolic and near non-hyperbolic internal dynamics was developed. This approach integrated stable inversion techniques, that achieve exact-tracking, with approximation techniques, that modify the internal dynamics to achieve desirable performance. Such modification of the internal dynamics was used (a) to remove non-hyperbolicity which is an obstruction to applying stable inversion techniques and (b) to reduce large preactuation times needed to apply stable inversion for near non-hyperbolic cases. The method was applied to an example helicopter hover control problem with near non-hyperbolic internal dynamics for illustrating the trade-off between exact tracking and reduction of preactuation time. Future work will extend these results to guidance of nonlinear non-hyperbolic systems. The exact output tracking problem for systems with parameter jumps was considered. Necessary and sufficient conditions were derived for the elimination of switching-introduced output transient. While previous works had studied this problem by developing a regulator that maintains exact tracking through parameter jumps (switches), such techniques are, however, only applicable to minimum-phase systems. In contrast, our approach is also applicable to nonminimum-phase systems and leads to bounded but possibly non-causal solutions. In addition, for the case when the reference trajectories are generated by an exosystem, we developed an exact-tracking controller which could be written in a feedback form. As in standard regulator theory, we also obtained a linear map from the states of the exosystem to the desired system state, which was defined via a matrix differential equation.
Palkowski, Marek; Bielecki, Wlodzimierz
2017-06-02
RNA secondary structure prediction is a compute-intensive task that lies at the core of several search algorithms in bioinformatics. Fortunately, RNA folding approaches such as Nussinov base pair maximization involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. Polyhedral compilation techniques have proven to be a powerful tool for the optimization of dense array codes. However, the classical affine loop nest transformations used with these techniques do not effectively optimize the dynamic programming codes of RNA structure prediction. The purpose of this paper is to present a novel approach for generating a parallel tiled Nussinov RNA loop nest with significantly higher performance than known related codes. This is achieved by improving code locality and parallelizing computation. To improve code locality, we apply our previously published technique of automatic loop nest tiling to all three loops of the Nussinov loop nest. This approach first forms original rectangular 3D tiles and then corrects them to establish their validity by applying the transitive closure of a dependence graph. To produce parallel code, we apply the loop skewing technique to the tiled Nussinov loop nest. The technique is implemented as part of the publicly available polyhedral source-to-source TRACO compiler. Generated code was run on modern Intel multi-core processors and coprocessors. We present the speed-up factor of the generated parallel Nussinov RNA code and demonstrate that it is considerably faster than related codes in which only the two outer loops of the Nussinov loop nest are tiled.
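For reference, the Nussinov base-pair maximization recurrence behind this loop nest can be written as a plain O(n^3) triple loop. The untiled Python sketch below only illustrates the affine loop structure that the polyhedral transformations operate on; it is not the optimized code generated by TRACO:

```python
def nussinov(seq):
    """Untiled Nussinov base-pair maximization: an O(n^3) affine loop nest."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}
    n = len(seq)
    N = [[0] * n for _ in range(n)]
    for j in range(1, n):                  # columns, left to right
        for i in range(j - 1, -1, -1):     # rows, bottom to top
            best = N[i + 1][j - 1] + ((seq[i], seq[j]) in pairs)
            for k in range(i, j):          # bifurcation: the third loop
                best = max(best, N[i][k] + N[k + 1][j])
            N[i][j] = best
    return N[0][n - 1]

print(nussinov("GGGAAAUCC"))               # maximum number of base pairs
```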
Sim, Kok Swee; NorHisham, Syafiq
2016-11-01
A technique based on a linear Least Squares Regression (LSR) model is applied to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. In order to test the accuracy of this technique for SNR estimation, a number of SEM images are initially corrupted with white noise. The autocorrelation functions (ACFs) of the original and corrupted SEM images are formed to serve as the reference for estimating the SNR value of the corrupted image. The LSR technique is then compared with three existing techniques: nearest neighbourhood, first-order interpolation, and the combination of nearest neighbourhood and first-order interpolation. The actual and estimated SNR values of all these techniques are then calculated for comparison purposes. It is shown that the LSR technique attains the highest accuracy of the four techniques, as the absolute difference between the actual and estimated SNR values is relatively small. SCANNING 38:771-782, 2016. © 2016 Wiley Periodicals, Inc.
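A rough sketch of the underlying idea, assuming the common formulation in which white noise contributes only to the zero-lag value of the ACF, so the noise-free zero-lag value can be extrapolated from neighbouring lags by a linear least-squares fit; the lag window and row-wise averaging here are illustrative choices, not those of the paper:

```python
import numpy as np

def estimate_snr_lsr(image, fit_lags=np.arange(1, 6)):
    """SNR estimate from the image autocorrelation function (ACF).

    White noise contributes only to the zero-lag ACF value, so the
    noise-free zero-lag value is extrapolated from nonzero lags with a
    linear least-squares regression (LSR) and compared with the
    measured zero-lag value.
    """
    x = image.astype(float) - image.mean()
    nrow, ncol = x.shape
    acf = np.zeros(fit_lags.max() + 1)
    for row in x:                          # row-wise ACF, averaged
        full = np.correlate(row, row, mode="full")
        acf += full[ncol - 1 : ncol - 1 + acf.size]
    acf /= x.size

    slope, intercept = np.polyfit(fit_lags, acf[fit_lags], 1)
    signal_var = intercept                 # extrapolated lag-0 ACF value
    noise_var = acf[0] - signal_var        # zero-lag excess = noise power
    return signal_var / noise_var

rng = np.random.default_rng(0)
clean = np.cumsum(rng.normal(size=(64, 256)), axis=1)   # correlated rows
noisy = clean + rng.normal(scale=5.0, size=clean.shape)
print("estimated SNR:", round(estimate_snr_lsr(noisy), 2))
```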
Gleberzon, Brian J
2002-01-01
In a previous article, the author reported on recommendations gathered from student projects conducted between 1996 and 1999 investigating student preferences for including certain chiropractic name technique systems in the curriculum at the Canadian Memorial Chiropractic College (CMCC). Those results were found to be congruent with the treatment techniques used professionally by Canadian chiropractors. This article reports on the data obtained during the 2000 and 2001 academic years, comparing these results to those previously gathered. In addition, because a new curriculum was implemented during this time period, there was a unique opportunity to observe whether student perceptions differed between students in the 'old' curricular program and those in the 'new' curricular program. The results indicate that students in both curricular programs show an interest in learning Thompson Terminal Point, Activator Methods, Gonstead, and Active Release Therapy techniques in the core curriculum, as an elective, or during continuing educational programs provided by the college. Students continue to show less interest in learning CranioSacral Therapy, SacroOccipital Technique, Logan Basic, Applied Kinesiology and Chiropractic BioPhysics. Over time, student interest has moved away from Palmer HIO and other upper cervical techniques, and students show a declining interest in being offered instruction in either Network Spinal Analysis or Torque Release Technique. Since these findings reflect the practice activities of Canadian chiropractors, they may have implications not only for pedagogical decision-making processes at CMCC but may also influence professional standards of care.
Non-contact thrust stand calibration method for repetitively pulsed electric thrusters.
Wong, Andrea R; Toftul, Alexandra; Polzin, Kurt A; Pearson, J Boise
2012-02-01
A thrust stand calibration technique for use in testing repetitively pulsed electric thrusters for in-space propulsion has been developed and tested using a modified hanging pendulum thrust stand. In the implementation of this technique, current pulses are applied to a solenoid to produce a pulsed magnetic field that acts against a permanent magnet mounted to the thrust stand pendulum arm. The force on the magnet is applied in this non-contact manner, with the entire pulsed force transferred to the pendulum arm through a piezoelectric force transducer to provide a time-accurate force measurement. Modeling of the pendulum arm dynamics reveals that after an initial transient in thrust stand motion the quasi-steady average deflection of the thrust stand arm away from the unforced or "zero" position can be related to the average applied force through a simple linear Hooke's law relationship. Modeling demonstrates that this technique is universally applicable except when the pulsing period is increased to the point where it approaches the period of natural thrust stand motion. Calibration data were obtained using a modified hanging pendulum thrust stand previously used for steady-state thrust measurements. Data were obtained for varying impulse bit at constant pulse frequency and for varying pulse frequency. The two data sets exhibit excellent quantitative agreement with each other. The overall error on the linear regression fit used to determine the calibration coefficient was roughly 1%.
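Because the quasi-steady relationship is linear, the calibration itself reduces to a straight-line fit of average applied force against average deflection. A minimal illustration with invented numbers (units and values are placeholders, not data from the paper):

```python
import numpy as np

# Invented calibration data: quasi-steady average deflection (mm) of the
# pendulum arm vs. average applied force (mN) from the force transducer.
avg_deflection = np.array([0.102, 0.199, 0.305, 0.398, 0.503])
avg_force = np.array([0.5, 1.0, 1.5, 2.0, 2.5])

# Hooke's-law behavior, F_avg = k * x_avg, makes the calibration
# coefficient k the slope of a linear regression through the data.
k, offset = np.polyfit(avg_deflection, avg_force, 1)
print(f"calibration coefficient k = {k:.3f} mN/mm (offset {offset:+.4f} mN)")

# The average thrust of an unknown pulsed source then follows directly
# from its measured quasi-steady deflection:
print(f"inferred thrust at 0.250 mm: {k * 0.250 + offset:.3f} mN")
```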
McLaskey, Gregory C.; Lockner, David A.; Kilgore, Brian D.; Beeler, Nicholas M.
2015-01-01
We describe a technique to estimate the seismic moment of acoustic emissions and other extremely small seismic events. Unlike previous calibration techniques, it does not require modeling of the wave propagation, sensor response, or signal conditioning. Rather, this technique calibrates the recording system as a whole and uses a ball impact as a reference source or empirical Green's function. To correctly apply this technique, we develop mathematical expressions that link the seismic moment $M_{0}$ of internal seismic sources (i.e., earthquakes and acoustic emissions) to the impulse, or change in momentum $\Delta p$, of externally applied seismic sources (i.e., meteor impacts or, in this case, ball impact). We find that, at low frequencies, moment and impulse are linked by a constant, which we call the force-moment-rate scale factor $C_{F\dot{M}} = M_{0}/\Delta p$. This constant is equal to twice the speed of sound in the material from which the seismic sources were generated. Next, we demonstrate the calibration technique on two different experimental rock mechanics facilities. The first example is a saw-cut cylindrical granite sample that is loaded in a triaxial apparatus at 40 MPa confining pressure. The second example is a 2 m long fault cut in a granite sample and deformed in a large biaxial apparatus at lower stress levels. Using the empirical calibration technique, we are able to determine absolute source parameters including the seismic moment, corner frequency, stress drop, and radiated energy of these magnitude −2.5 to −7 seismic events.
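Since the abstract states the low-frequency link explicitly ($M_{0} = C_{F\dot{M}}\,\Delta p$ with $C_{F\dot{M}}$ equal to twice the speed of sound in the source material), absolute calibration reduces to a one-line computation once the ball's change in momentum is measured. A sketch with illustrative numbers:

```python
import math

# Seismic moment from the impulse of a reference ball impact, using
# M0 = C * dp with C = 2 * c (twice the sound speed in the material).
# All numbers below are illustrative, not values from the paper.
c = 5500.0                   # assumed P-wave speed in granite [m/s]
mass = 1.0e-3                # ball mass [kg]
v_in, v_out = 1.2, -0.9      # impact and rebound velocities [m/s]

dp = mass * (v_in - v_out)   # change in momentum of the ball [N s]
M0 = 2.0 * c * dp            # seismic moment [N m]
Mw = (2.0 / 3.0) * (math.log10(M0) - 9.1)   # moment magnitude
print(f"dp = {dp:.4f} N s, M0 = {M0:.1f} N m, Mw = {Mw:.1f}")
```

With these placeholder numbers the result lands around magnitude −5, within the −2.5 to −7 range of events discussed in the abstract.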
An extended stochastic method for seismic hazard estimation
NASA Astrophysics Data System (ADS)
Abd el-aal, A. K.; El-Eraki, M. A.; Mostafa, S. I.
2015-12-01
In this contribution, we developed an extended stochastic technique for seismic hazard assessment purposes. This technique builds on the stochastic method of Boore (2003, "Simulation of ground motion using the stochastic method," Pure Appl. Geophys. 160:635-676). The essential aim of the extended stochastic technique is to simulate ground motion in order to mitigate the consequences of future earthquakes. The first step of this technique is defining the seismic sources that most affect the study area. Then the maximum expected magnitude is defined for each of these seismic sources, followed by estimation of the ground motion using an empirical attenuation relationship. Finally, site amplification is incorporated in calculating the peak ground acceleration (PGA) at each site of interest. We tested and applied the developed technique at the cities of Cairo, Suez, Port Said, Ismailia, Zagazig and Damietta to predict the ground motion. It was also applied at Cairo, Zagazig and Damietta to estimate the maximum peak ground acceleration under actual soil conditions. In addition, median response spectra at 0.5, 1, 5, 10 and 20% damping are estimated using the extended stochastic simulation technique. The highest calculated acceleration value at bedrock conditions is found at Suez city, with a value of 44 cm s-2. These acceleration values decrease towards the north of the study area, reaching 14.1 cm s-2 at Damietta city. This agrees with, and is comparable to, the results of previous seismic hazard studies in northern Egypt. This work can be used for seismic risk mitigation and earthquake engineering purposes.
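The stepwise procedure (define sources, assign maximum magnitudes, apply an empirical attenuation relationship, add site amplification) can be caricatured in a few lines. The ground-motion relation and all coefficients below are generic placeholders, not the relationships actually used in the study:

```python
import math

# Hypothetical seismic sources affecting one site; maximum expected
# magnitudes and distances (km) are invented for illustration.
sources = [
    {"name": "source A", "Mmax": 6.1, "dist_km": 35.0},
    {"name": "source B", "Mmax": 5.4, "dist_km": 12.0},
]

def attenuation_pga(M, R):
    """Placeholder ground-motion relation ln(PGA) = a + b*M - c*ln(R).

    The functional form and coefficients are generic stand-ins for the
    empirical attenuation relationship used in the study.
    """
    a, b, c = -3.5, 1.0, 1.2
    return math.exp(a + b * M - c * math.log(R))   # PGA in g (placeholder)

site_amplification = 1.8   # placeholder soil response factor for the site

for s in sources:
    pga_rock = attenuation_pga(s["Mmax"], s["dist_km"])
    pga_soil = site_amplification * pga_rock
    print(f"{s['name']}: bedrock {pga_rock:.4f} g, soil {pga_soil:.4f} g")
```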
Russell, Joanna; Singer, Brian W; Perry, Justin J; Bacon, Anne
2011-05-01
A collection of more than 70 synthetic organic pigments were analysed using pyrolysis-gas chromatography-mass spectrometry (Py-GC-MS). We report on the analysis of diketo-pyrrolo-pyrrole, isoindolinone and perylene pigments which are classes not previously reported as being analysed by this technique. We also report on a number of azo pigments (2-naphthol, naphthol AS, arylide, diarylide, benzimidazolone and disazo condensation pigments) and phthalocyanine pigments, the Py-GC-MS analysis of which has not been previously reported. The members of each class were found to fragment in a consistent way and the pyrolysis products are reported. The technique was successfully applied to the analysis of paints used by the artist Francis Bacon (1909-1992), to simultaneously identify synthetic organic pigments and synthetic binding media in two samples of paint taken from Bacon's studio and micro-samples taken from three of his paintings and one painting attributed to him.
50 CFR 224.101 - Enumeration of endangered marine and anadromous species.
Code of Federal Regulations, 2012 CFR
2012-10-01
... institutions) and which are identified as fish belonging to the NYB DPS based on genetics analyses, previously... genetics analyses, previously applied tags, previously applied marks, or documentation to verify that the... Carolina DPS based on genetics analyses, previously applied tags, previously applied marks, or...
50 CFR 224.101 - Enumeration of endangered marine and anadromous species.
Code of Federal Regulations, 2013 CFR
2013-10-01
... institutions) and which are identified as fish belonging to the NYB DPS based on genetics analyses, previously... genetics analyses, previously applied tags, previously applied marks, or documentation to verify that the... Carolina DPS based on genetics analyses, previously applied tags, previously applied marks, or...
Analysis of aircraft longitudinal handling qualities
NASA Technical Reports Server (NTRS)
Hess, R. A.
1981-01-01
The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.
Prediction of aircraft handling qualities using analytical models of the human pilot
NASA Technical Reports Server (NTRS)
Hess, R. A.
1982-01-01
The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot-induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.
Robb, Meigan
2014-01-11
Engaging nursing students in the classroom environment positively influences their ability to learn and apply course content to clinical practice. Students are motivated to engage in learning if their learning preferences are being met. The methods nurse educators have used with previous students in the classroom may not address the educational needs of Millennials. This manuscript presents the findings of a pilot study that used the Critical Incident Technique. The purpose of this study was to gain insight into the teaching methods that help the Millennial generation of nursing students feel engaged in the learning process. Students' perceptions of effective instructional approaches are presented in three themes. Implications for nurse educators are discussed.
An analytical approach for predicting pilot induced oscillations
NASA Technical Reports Server (NTRS)
Hess, R. A.
1981-01-01
The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot-induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.
Non-null full field X-ray mirror metrology using SCOTS: a reflection deflectometry approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su P.; Kaznatcheev K.; Wang, Y.
In a previous paper, the University of Arizona (UA) developed a measurement technique called the Software Configurable Optical Test System (SCOTS), based on the principle of reflection deflectometry. In this paper, we present results of this very efficient optical metrology method applied to the metrology of X-ray mirrors. We used this technique to measure surface slope errors with precision and accuracy better than 100 nrad (rms) and approximately 200 nrad (rms), respectively, with a lateral resolution of a few mm or less. We present results of the calibration of the metrology systems, discuss their accuracy, and address the precision in measuring a spherical mirror.
Time-resolved gamma spectroscopy of single events
NASA Astrophysics Data System (ADS)
Wolszczak, W.; Dorenbos, P.
2018-04-01
In this article we present a method of characterizing scintillating materials by digitizing each individual scintillation pulse, followed by digital signal processing. With this technique it is possible to measure the pulse shape and the energy of an absorbed gamma photon on an event-by-event basis. In contrast to the time-correlated single-photon counting technique, the digital approach provides faster measurement and active noise suppression, and enables characterization of scintillation pulses simultaneously in two domains: time and energy. We applied this method to study how the pulse shape of a CsI(Tl) scintillator changes with the energy of gamma excitation. We confirmed previously published results and revealed new details of the phenomenon.
Spatially variant apodization for squinted synthetic aperture radar images.
Castillo-Rubio, Carlos F; Llorente-Romano, Sergio; Burgos-García, Mateo
2007-08-01
Spatially variant apodization (SVA) is a nonlinear sidelobe reduction technique that improves the sidelobe level while preserving resolution. The method implements a two-dimensional finite impulse response filter with adaptive taps that depend on image information. Previously published papers analyze SVA at the Nyquist rate or at higher rates, focusing on stripmap synthetic aperture radar (SAR). This paper shows that traditional SVA techniques are useless when the sensor operates with a squint angle. The reasons for this behavior are analyzed, and a new implementation that largely improves the results is presented. The algorithm is applied to simulated SAR images to demonstrate the good quality achieved along with efficient computation.
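For contrast with the squinted case, the conventional Nyquist-rate SVA kernel that the paper generalizes can be sketched in one dimension: per pixel it picks the raised-cosine window weight, clamped between the uniform and Hanning windows, that minimizes the output magnitude. In practice the filter runs separately on the I and Q channels, along rows and columns:

```python
import numpy as np

def sva_1d(x):
    """Conventional 1D spatially variant apodization at the Nyquist rate."""
    y = x.copy()
    for n in range(1, len(x) - 1):
        s = x[n - 1] + x[n + 1]
        if s == 0.0:
            continue
        w = -x[n] / s                # unconstrained minimizer of |y[n]|
        if w <= 0.0:
            y[n] = x[n]              # uniform window is optimal: keep
        elif w >= 0.5:
            y[n] = x[n] + 0.5 * s    # Hanning-window limit
        else:
            y[n] = 0.0               # interior optimum nulls the sample
    return y

# A point-target response sampled off-grid, so sidelobes are visible:
n = np.arange(-16, 17)
x = np.sinc(n - 0.5)
print(np.round(sva_1d(x), 3))        # sidelobes suppressed, mainlobe kept
```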
HPLC assisted Raman spectroscopic studies on bladder cancer
NASA Astrophysics Data System (ADS)
Zha, W. L.; Cheng, Y.; Yu, W.; Zhang, X. B.; Shen, A. G.; Hu, J. M.
2015-04-01
We applied confocal Raman spectroscopy to investigate 12 normal bladder tissues and 30 tumor tissues, and then characterized the spectral differences between the normal and tumor tissues, and the potential canceration mechanism, with the aid of the high-performance liquid chromatography (HPLC) technique. Normal tissues were demonstrated to contain higher tryptophan, cholesterol and lipid content, while bladder tumor tissues were rich in nucleic acids, collagen and carotenoids. In particular, β-carotene, one of the major types of carotenoids, was found through HPLC analysis of the extract of bladder tissues. The statistical software SPSS was applied to classify the spectra of the two types of tissues according to their differences; a sensitivity of 96.7% and a specificity of 66.7% were obtained. In addition, different layers of the bladder wall, including mucosa (lumps), muscle and adipose bladder tissue, were analyzed by the Raman mapping technique, complementing previous Raman studies of bladder tissues. These results could serve as a directive tool for the future in vivo diagnosis of bladder cancer.
Lahmann, B; Milanese, L M; Han, W; Gatu Johnson, M; Séguin, F H; Frenje, J A; Petrasso, R D; Hahn, K D; Jones, B
2016-11-01
A compact neutron spectrometer, based on a CH foil for the production of recoil protons and CR-39 detection, is being developed for the measurements of the DD-neutron spectrum at the NIF, OMEGA, and Z facilities. As a CR-39 detector will be used in the spectrometer, the principal sources of background are neutron-induced tracks and intrinsic tracks (defects in the CR-39). To reject the background to the required level for measurements of the down-scattered and primary DD-neutron components in the spectrum, the Coincidence Counting Technique (CCT) must be applied to the data. Using a piece of CR-39 exposed to 2.5-MeV protons at the MIT HEDP accelerator facility and DD-neutrons at Z, a significant improvement of a DD-neutron signal-to-background level has been demonstrated for the first time using the CCT. These results are in excellent agreement with previous work applied to DT neutrons.
Acute effects of triazolam on false recognition.
Mintzer, M Z; Griffiths, R R
2000-12-01
Neuropsychological, neuroimaging, and electrophysiological techniques have been applied to the study of false recognition; however, psychopharmacological techniques have not been applied. Benzodiazepine sedative/anxiolytic drugs produce memory deficits similar to those observed in organic amnesia and may be useful tools for studying normal and abnormal memory mechanisms. The present double-blind, placebo-controlled repeated measures study examined the acute effects of orally administered triazolam (Halcion; 0.125 and 0.25 mg/70 kg), a benzodiazepine hypnotic, on performance in the Deese (1959)/Roediger-McDermott (1995) false recognition paradigm in 24 healthy volunteers. Paralleling previous demonstrations in amnesic patients, triazolam produced significant dose-related reductions in false recognition rates to nonstudied words associatively related to studied words, suggesting that false recognition relies on normal memory mechanisms impaired in benzodiazepine-induced amnesia. The results also suggested that relative to placebo, triazolam reduced participants' reliance on memory for item-specific versus list-common semantic information and reduced participants' use of remember versus know responses.
NASA Astrophysics Data System (ADS)
Muslim, M. A.; Herowati, A. J.; Sugiharti, E.; Prasetiyo, B.
2018-03-01
Data mining is the process of extracting valuable, previously unknown patterns hidden in large data collections. It has been applied in the healthcare industry. One data mining technique is classification; the decision tree is a classification method, and C4.5 is a well-known decision tree algorithm. A classifier is designed by applying pessimistic pruning to the C4.5 algorithm for diagnosing chronic kidney disease. Pessimistic pruning identifies and removes branches that are not needed, which avoids overfitting the decision tree generated by the C4.5 algorithm. In this paper, the results obtained using these classifiers are presented and discussed. Pessimistic pruning increases the accuracy of the C4.5 algorithm by 1.5%, from 95% to 96.5%, in diagnosing chronic kidney disease.
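A rough scikit-learn analogue on synthetic data is sketched below. Note that scikit-learn's trees offer cost-complexity pruning rather than C4.5's pessimistic (error-based) pruning, so this only illustrates the pruning-versus-overfitting trade-off, not the paper's exact classifier:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the chronic kidney disease dataset.
X, y = make_classification(n_samples=400, n_features=24, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

unpruned = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
# ccp_alpha > 0 enables cost-complexity pruning: not C4.5's pessimistic
# pruning, but it removes unneeded branches in the same spirit.
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=0.01).fit(X_tr, y_tr)

print("unpruned accuracy:", unpruned.score(X_te, y_te))
print("pruned accuracy:  ", pruned.score(X_te, y_te))
print("leaves:", unpruned.get_n_leaves(), "->", pruned.get_n_leaves())
```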
Valtierra, Robert D; Glynn Holt, R; Cholewiak, Danielle; Van Parijs, Sofie M
2013-09-01
Multipath localization techniques have not previously been applied to baleen whale vocalizations due to difficulties in applying them to tonal vocalizations. Here it is shown that an autocorrelation method coupled with the direct-reflected time-difference-of-arrival localization technique can successfully resolve location information. A derivation was made to model the autocorrelation of a direct signal and its overlapping reflections, illustrating that an autocorrelation may be used to extract reflection information from longer-duration signals containing a frequency sweep, such as some calls produced by baleen whales. An analysis was performed to characterize how the behavior of the autocorrelation differs when applied to call types with varying parameters (sweep rate, call duration). The method's feasibility was tested using data from playback transmissions to localize an acoustic transducer at a known depth and location. The method was then used to estimate the depth and range of a single North Atlantic right whale (Eubalaena glacialis) and humpback whale (Megaptera novaeangliae) from two separate experiments.
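The core step, recovering the direct-reflected delay from the autocorrelation of a sweep-containing call, can be sketched as follows; the chirp parameters, reflection coefficient, and delay are invented for illustration:

```python
import numpy as np

fs = 2000.0                           # sample rate [Hz]
t = np.arange(0, 1.0, 1 / fs)
# A downswept tonal call (arbitrary parameters, not a measured whale call):
# instantaneous frequency sweeps from 200 Hz down to 80 Hz over 1 s.
sweep = np.sin(2 * np.pi * (200 * t - 60 * t**2))

delay_s, reflect = 0.080, 0.5         # assumed surface-reflected path
d = int(delay_s * fs)
received = sweep.copy()
received[d:] += reflect * sweep[:-d]  # direct signal + delayed reflection

# The autocorrelation of (direct + echo) has a secondary peak at the
# direct-reflected time difference of arrival (TDOA).
ac = np.correlate(received, received, mode="full")[received.size - 1:]
lag_min = int(0.02 * fs)              # skip the zero-lag mainlobe region
tdoa = (lag_min + np.argmax(ac[lag_min:])) / fs
print(f"recovered TDOA = {tdoa * 1e3:.1f} ms (true {delay_s * 1e3:.0f} ms)")
```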
Gloger, Oliver; Kühn, Jens; Stanski, Adam; Völzke, Henry; Puls, Ralf
2010-07-01
Automatic 3D liver segmentation in magnetic resonance (MR) data sets has proven to be a very challenging task in the domain of medical image analysis. There exist numerous approaches for automatic 3D liver segmentation on computed tomography data sets that have influenced the segmentation of MR images. In contrast to previous approaches to liver segmentation in MR data sets, we use all available MR channel information from different weightings and formulate liver tissue and position probabilities in a probabilistic framework. We apply multiclass linear discriminant analysis as a fast and efficient dimensionality reduction technique and generate probability maps that are then used for segmentation. We develop a fully automatic three-step 3D segmentation approach based upon a modified region growing approach and a further threshold technique. Finally, we incorporate characteristic prior knowledge to improve the segmentation results. This novel 3D segmentation approach is modularized and can be applied to both normal and fat-accumulated liver tissue. Copyright 2010 Elsevier Inc. All rights reserved.
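A toy version of the probability-map step, using scikit-learn's LinearDiscriminantAnalysis on synthetic multi-channel "voxel" samples; the actual pipeline's training data, channel weightings, and subsequent region growing are beyond this sketch:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Synthetic multi-channel MR data: 3 weightings per voxel, two tissue
# classes (liver vs. background) with different channel statistics.
n = 5000
liver = rng.normal(loc=[1.2, 0.4, 0.9], scale=0.3, size=(n, 3))
other = rng.normal(loc=[0.5, 0.8, 0.3], scale=0.4, size=(n, 3))
X = np.vstack([liver, other])
y = np.repeat([1, 0], n)

# LDA acts as a fast supervised dimensionality reduction; its posterior
# probabilities form the per-voxel liver probability map.
lda = LinearDiscriminantAnalysis().fit(X, y)

patch = rng.normal(loc=[1.2, 0.4, 0.9], scale=0.3, size=(32, 32, 3))
prob_map = lda.predict_proba(patch.reshape(-1, 3))[:, 1].reshape(32, 32)
print("mean liver probability in synthetic patch:", prob_map.mean().round(3))
```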
NASA Astrophysics Data System (ADS)
Fernandez, Carlos; Platero, Carlos; Campoy, Pascual; Aracil, Rafael
1994-11-01
This paper describes texture-based techniques that can be applied to quality assessment of continuously produced flat products (metal strips, wooden surfaces, cork, textile products, ...). Since the most difficult task is that of inspecting product appearance, human-like inspection ability is required. A feature common to all these products is the presence of non-deterministic texture on their surfaces. Two main subjects are discussed: statistical techniques for both surface finish determination and surface defect analysis, and real-time implementation for on-line inspection in high-speed applications. For surface finish determination, a Gray Level Difference technique is presented that operates on low-resolution, that is, non-zoomed, images. Defect analysis is performed by means of statistical texture analysis over defective portions of the surface. On-line implementation is accomplished by means of neural networks. When a defect arises, textural analysis is applied, resulting in a data vector that acts as the input of a neural net previously trained in a supervised way. This approach aims to achieve on-line performance in automated visual inspection of textured flat product surfaces.
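The gray level difference statistics can be computed very simply. The feature set below (mean, contrast, and entropy of the absolute-difference histogram) is one common choice and is only assumed here; in the described system such a vector would feed the supervised neural net:

```python
import numpy as np

def gray_level_difference_features(img, dx=1, dy=0, levels=256):
    """Texture features from the gray-level difference histogram.

    The histogram of |I(x, y) - I(x+dx, y+dy)| is summarized by a few
    statistics (an assumed feature set; the paper's exact features may
    differ) forming an input vector for a neural classifier.
    """
    a = img[:img.shape[0] - dy, :img.shape[1] - dx].astype(int)
    b = img[dy:, dx:].astype(int)
    diff = np.abs(a - b).ravel()
    hist = np.bincount(diff, minlength=levels) / diff.size

    k = np.arange(levels)
    mean = (k * hist).sum()
    contrast = (k**2 * hist).sum()
    nz = hist[hist > 0]
    entropy = -(nz * np.log2(nz)).sum()
    return np.array([mean, contrast, entropy])

rng = np.random.default_rng(1)
smooth = rng.integers(100, 120, size=(64, 64))   # fine surface finish
coarse = rng.integers(0, 256, size=(64, 64))     # rough / defective
print("smooth:", gray_level_difference_features(smooth).round(2))
print("coarse:", gray_level_difference_features(coarse).round(2))
```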
Reducing the orientation influence of Mueller matrix measurements for anisotropic scattering media
NASA Astrophysics Data System (ADS)
Sun, Minghao; He, Honghui; Zeng, Nan; Du, E.; He, Yonghong; Ma, Hui
2014-09-01
Mueller matrix polarimetry techniques provide rich micro-structural information about samples, such as the sizes and refractive indices of scatterers. Recently, Mueller matrix imaging methods have shown great potential as powerful tools for biomedical diagnosis. However, the orientations of anisotropic fibrous structures in tissues have a prominent influence on Mueller matrix measurements, making it difficult to extract micro-structural information effectively. In this paper, we apply the backscattering Mueller matrix imaging technique to biological samples with different microstructures, such as chicken heart muscle, bovine skeletal muscle, porcine liver and fat tissues. Experimental results show that the directions of the muscle fibers have a prominent influence on the Mueller matrix elements. In order to reduce the orientation influence, we apply the rotation-independent MMT and RLPI parameters, proposed in our previous studies, to the tissue samples. Preliminary results in this paper show that the orientation-independent parameters and their statistical features are helpful for analyzing the tissues to obtain their micro-structural properties. Since micro-structural variations are often related to pathological changes, the method can be applied to microscope imaging techniques and used to detect abnormal tissues such as cancers and other lesions for diagnostic purposes.
Investigating Montara platform oil spill accident by implementing RST-OIL approach.
NASA Astrophysics Data System (ADS)
Satriano, Valeria; Ciancia, Emanuele; Coviello, Irina; Di Polito, Carmine; Lacava, Teodosio; Pergola, Nicola; Tramutoli, Valerio
2016-04-01
Oil spills are among the most harmful events for marine ecosystems, and their timely detection is crucial for their mitigation and management. The potential of satellite data for oil spill detection and monitoring has been widely investigated. Traditional satellite techniques usually identify oil spills by applying a fixed threshold scheme only after an event has occurred, which makes them poorly suited to prompt identification. The Robust Satellite Technique (RST) approach, in its oil spill detection version (RST-OIL), is based on comparing the latest satellite acquisition with previously identified historical values, and thus allows automatic, near real-time detection of events. The technique has already been successfully applied to data from different sensors (the Advanced Very High Resolution Radiometer, AVHRR, and the Moderate Resolution Imaging Spectroradiometer, MODIS), showing excellent performance in detecting oil spills in both day- and night-time conditions, with a high level of sensitivity (detecting even low-intensity events) and reliability (no false alarms on scene). In this paper, RST-OIL has been implemented on MODIS thermal infrared data for the analysis of the Montara platform (Timor Sea, Australia) oil spill disaster that occurred in August 2009. Preliminary results are presented and discussed.
Lee, Jihoon; Fredriksson, David W.; DeCew, Judson; Drach, Andrew; Yim, Solomon C.
2018-01-01
This study provides an engineering approach for designing an aquaculture cage system for use in constructed channel flow environments. As sustainable aquaculture has grown globally, many novel techniques have been introduced such as those implemented in the global Atlantic salmon industry. The advent of several highly sophisticated analysis software systems enables the development of such novel engineering techniques. These software systems commonly include three-dimensional (3D) drafting, computational fluid dynamics, and finite element analysis. In this study, a combination of these analysis tools is applied to evaluate a conceptual aquaculture system for potential deployment in a power plant effluent channel. The channel is supposedly clean; however, it includes elevated water temperatures and strong currents. The first portion of the analysis includes the design of a fish cage system with specific net solidities using 3D drafting techniques. Computational fluid dynamics is then applied to evaluate the flow reduction through the system from the previously generated solid models. Implementing the same solid models, a finite element analysis is performed on the critical components to assess the material stresses produced by the drag force loads that are calculated from the fluid velocities.
Kashiwayanagi, M; Shimano, K; Kurihara, K
1996-11-04
The responses of single bullfrog olfactory neurons to various odorants were measured with the whole-cell patch clamp, which offers direct information on cellular events, and with the ciliary recording technique, to obtain stable quantitative data from many neurons. A large portion of single olfactory neurons (about 64% and 79% in the whole-cell recording and the ciliary recording, respectively) responded to many odorants with quite diverse molecular structures, including both odorants previously indicated to be cAMP-dependent (cAMP-increasing) and cAMP-independent odorants. Single odorants elicited responses in many cells; e.g., hedione and citralva elicited responses in 100% and 92%, respectively, of the neurons examined with the ciliary recording technique. To confirm that a single neuron carries different receptors or transduction pathways, the cross-adaptation technique was applied to single neurons. Application of hedione to a single neuron after desensitization of the current in response to lyral or citralva induced an inward current with a magnitude similar to that of hedione applied alone. It was suggested that most single olfactory neurons carry multiple receptors and at least two transduction pathways.
What about the Misgav-Ladach surgical technique in patients with previous cesarean sections?
Bolze, Pierre-Adrien; Massoud, Mona; Gaucherand, Pascal; Doret, Muriel
2013-03-01
The Misgav-Ladach technique is recommended worldwide for performing cesarean sections, but there is no consensus about the appropriate technique to use in patients with previous cesarean sections. This study evaluated the feasibility of the Misgav-Ladach technique in patients with previous cesarean sections. This prospective cohort study included all women undergoing cesarean section after 36 weeks of gestation over a 5-month period, with the Misgav-Ladach technique as first choice, regardless of the number of previous cesarean sections. Among the 204 patients included, the Misgav-Ladach technique was successful in 100%, 80%, and 65.6% of patients with no, one, and multiple previous cesarean sections, respectively. When successful, the Misgav-Ladach technique was associated with a shorter incision-to-birth interval in patients with no previous cesarean section compared with patients with one or multiple previous cesarean sections. Fibrosis of the anterior rectus aponeurosis and severe peritoneal adhesions were the two main reasons for failure of the Misgav-Ladach technique. The Misgav-Ladach technique is possible in over three-fourths of patients with previous cesarean sections, with a slight increase in the incision-to-birth interval compared with patients without previous cesarean sections. Further studies comparing the Misgav-Ladach and Pfannenstiel techniques in women with previous cesarean sections should be done.
Use of tannin anticorrosive reaction primer to improve traditional coating systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matamala, G.; Droguett, G.; Smeltzer, W.
1994-04-01
Different anticorrosive schemes applied over plain or previously shot-blasted surfaces of AISI 1010 (UNS G10100) steel plates were compared. Plates were painted with alkydic, vinylic, and epoxy anticorrosive schemes over metal treated previously with pine tannin reaction primer and over the same schemes without previous primer treatment. Anticorrosive tests were conducted in a salt fog chamber according to ASTM B 117-73. Rusting, blistering, and adhesion were assessed over time. The survey was complemented with potentiodynamic scanning tests in sodium chloride (NaCl) solution with a concentration equivalent to seawater. Corrosion currents were determined using Tafel and polarization resistance techniques. Results showed the reaction primer inhibited corrosion by improving adherence. Advantages over traditional conversion primers formulated in a base of zinc chromate in phosphoric medium were evident.
Automatic evaluation of skin histopathological images for melanocytic features
NASA Astrophysics Data System (ADS)
Koosha, Mohaddeseh; Hoseini Alinodehi, S. Pourya; Nicolescu, Mircea; Safaei Naraghi, Zahra
2017-03-01
Successfully detecting melanocytes in the skin epidermis has great significance in skin histopathology. Because of the existence of cells with a similar appearance to melanocytes in hematoxylin and eosin (HE) images of the epidermis, detecting melanocytes is a challenging task. This paper proposes a novel technique for the detection of melanocytes in HE images of the epidermis, based on melanocyte color features in the HSI color domain. Initially, an effective soft morphological filter is applied to the HE images in the HSI color domain to remove noise. Then a novel threshold-based technique is applied to distinguish the candidate melanocyte nuclei. Similarly, the method is applied to find the candidate surrounding halos of the melanocytes. The candidate nuclei are associated with their surrounding halos using the suggested logical and statistical inferences. Finally, a fuzzy inference system is proposed, based on the HSI color information of a typical melanocyte in the epidermis, to calculate the similarity ratio of each candidate cell to a melanocyte. As our review of the literature shows, this is the first method to evaluate epidermis cells for a melanocyte similarity ratio. Experimental results on various images with different zooming factors show that the proposed method improves on the results of previous works.
Dynamic test input generation for multiple-fault isolation
NASA Technical Reports Server (NTRS)
Schaefer, Phil
1990-01-01
Recent work in Causal Reasoning has provided practical techniques for multiple fault diagnosis. These techniques provide a hypothesis/measurement diagnosis cycle. Using probabilistic methods, they choose the best measurements to make, then update fault hypotheses in response. For many applications such as computers and spacecraft, few measurement points may be accessible, or values may change quickly as the system under diagnosis operates. In these cases, a hypothesis/measurement cycle is insufficient. A technique is presented for a hypothesis/test-input/measurement diagnosis cycle. In contrast to generating tests a priori for determining device functionality, it dynamically generates tests in response to current knowledge about fault probabilities. It is shown how the mathematics previously used for measurement specification can be applied to the test input generation process. An example from an efficient implementation called Multi-Purpose Causal (MPC) is presented.
Expert system verification and validation study
NASA Technical Reports Server (NTRS)
French, Scott W.; Hamilton, David
1992-01-01
Five workshops on verification and validation (V&V) of expert systems (ES) were taught during this recent period of performance. Two key activities, previously performed under this contract, supported these recent workshops: (1) a survey of the state of the practice of V&V of ES, and (2) development of workshop material and the first class. The first activity involved performing an extensive survey of ES developers in order to answer several questions regarding the state of the practice in V&V of ES. These questions related to the amount and type of V&V done and the success of this V&V. The next key activity involved developing an intensive hands-on workshop in V&V of ES. This activity involved surveying a large number of V&V techniques, conventional as well as ES-specific ones. In addition to explaining the techniques, we showed how each technique could be applied to a sample problem. References were included in the workshop material and cross-referenced to techniques, so that students would know where to go to find additional information about each technique. In addition to teaching specific techniques, we included an extensive amount of material on V&V concepts and how to develop a V&V plan for an ES project. We felt this material was necessary so that developers would be prepared to develop an orderly and structured approach to V&V; that is, they would have a process that supported the use of the specific techniques. Finally, to provide hands-on experience, we developed a set of case study exercises. These exercises provide an opportunity for the students to apply all the material (concepts, techniques, and planning material) to a realistic problem.
Calculating intensities using effective Hamiltonians in terms of Coriolis-adapted normal modes.
Karthikeyan, S; Krishnan, Mangala Sunder; Carrington, Tucker
2005-01-15
The calculation of rovibrational transition energies and intensities is often hampered by the fact that vibrational states are strongly coupled by Coriolis terms. Because it invalidates the use of perturbation theory for the purpose of decoupling these states, the coupling makes it difficult to analyze spectra and to extract information from them. One either ignores the problem and hopes that the effect of the coupling is minimal, or one is forced to diagonalize effective rovibrational matrices (rather than effective rotational matrices). In this paper we apply a procedure, based on a quantum mechanical canonical transformation, for deriving decoupled effective rotational Hamiltonians. In previous papers we have used this technique to compute energy levels. In this paper we show that it can also be applied to determine intensities. The ideas are applied to the ethylene molecule.
Robust approximate optimal guidance strategies for aeroassisted orbital transfer missions
NASA Astrophysics Data System (ADS)
Ilgen, Marc R.
This thesis presents the application of game theoretic and regular perturbation methods to the problem of determining robust approximate optimal guidance laws for aeroassisted orbital transfer missions with atmospheric density and navigated state uncertainties. The optimal guidance problem is reformulated as a differential game problem with the guidance law designer and Nature as opposing players. The resulting equations comprise the necessary conditions for the optimal closed loop guidance strategy in the presence of worst case parameter variations. While these equations are nonlinear and cannot be solved analytically, the presence of a small parameter in the equations of motion allows the method of regular perturbations to be used to solve the equations approximately. This thesis is divided into five parts. The first part introduces the class of problems to be considered and presents results of previous research. The second part then presents explicit semianalytical guidance law techniques for the aerodynamically dominated region of flight. These guidance techniques are applied to unconstrained and control constrained aeroassisted plane change missions and Mars aerocapture missions, all subject to significant atmospheric density variations. The third part presents a guidance technique for aeroassisted orbital transfer problems in the gravitationally dominated region of flight. Regular perturbations are used to design an implicit guidance technique similar to the second variation technique but that removes the need for numerically computing an optimal trajectory prior to flight. This methodology is then applied to a set of aeroassisted inclination change missions. In the fourth part, the explicit regular perturbation solution technique is extended to include the class of guidance laws with partial state information. This methodology is then applied to an aeroassisted plane change mission using inertial measurements and subject to uncertainties in the initial value of the flight path angle. A summary of performance results for all these guidance laws is presented in the fifth part of this thesis along with recommendations for further research.
Refinement of Methods for Evaluation of Near-Hypersingular Integrals in BEM Formulations
NASA Technical Reports Server (NTRS)
Fink, Patricia W.; Khayat, Michael A.; Wilton, Donald R.
2006-01-01
In this paper, we present advances in singularity cancellation techniques applied to integrals in BEM formulations that are nearly hypersingular. Significant advances have been made recently in singularity cancellation techniques applied to 1/R-type kernels [M. Khayat, D. Wilton, IEEE Trans. Antennas and Prop., 53, pp. 3180-3190, 2005], as well as to the gradients of these kernels [P. Fink, D. Wilton, and M. Khayat, Proc. ICEAA, pp. 861-864, Torino, Italy, 2005] on curved subdomains. In these approaches, the source triangle is divided into three tangent subtriangles with a common vertex at the normal projection of the observation point onto the source element or the extended surface containing it. The geometry of a typical tangent subtriangle and its local rectangular coordinate system with origin at the projected observation point is shown in Fig. 1. Whereas singularity cancellation techniques for 1/R-type kernels are now nearing maturity, the efficient handling of near-hypersingular kernels still needs attention. For example, in the gradient reference above, techniques are presented for computing the normal component of the gradient relative to the plane containing the tangent subtriangle. These techniques, summarized in the transformations in Table 1, are applied at the sub-triangle level and correspond particularly to the case in which the normal projection of the observation point lies within the boundary of the source element. They are found to be highly efficient as z approaches zero. Here, we extend the approach to cover two instances not previously addressed. First, we consider the case in which the normal projection of the observation point lies external to the source element. For such cases, we find that simple modifications to the transformations of Table 1 permit significant savings in computational cost. Second, we present techniques that permit accurate computation of the tangential components of the gradient; i.e., tangent to the plane containing the source element.
The analytical representation of viscoelastic material properties using optimization techniques
NASA Technical Reports Server (NTRS)
Hill, S. A.
1993-01-01
This report presents a technique to model viscoelastic material properties with a function of the form of the Prony series. Generally, the method employed to determine the function constants requires assuming values for the exponential constants of the function and then resolving the remaining constants through linear least-squares techniques. The technique presented here allows all the constants to be determined analytically through optimization techniques. This technique is employed in a computer program named PRONY and makes use of a commercially available optimization tool developed by VMA Engineering, Inc. The PRONY program was utilized to compare the technique against previously determined models for solid rocket motor TP-H1148 propellant and V747-75 Viton fluoroelastomer. In both cases, the optimization technique generated functions that modeled the test data with at least an order of magnitude better correlation. This technique has demonstrated the capability to use small or large data sets and to use data sets that have uniformly or nonuniformly spaced data pairs. The reduction of experimental data to accurate mathematical models is a vital part of most scientific and engineering research. This technique of regression through optimization can be applied to other mathematical models that are difficult to fit to experimental data through traditional regression techniques.
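The report's central idea, letting the optimizer determine the exponential time constants together with the linear coefficients rather than fixing the exponents and solving only a linear least-squares problem, can be sketched with scipy in place of the VMA Engineering tool; the Prony form and data below are synthetic:

```python
import numpy as np
from scipy.optimize import least_squares

def prony(t, p):
    """Prony series E(t) = E_inf + sum_i E_i * exp(-t / tau_i)."""
    e_inf = p[0]
    terms = np.asarray(p[1:]).reshape(-1, 2)   # rows of (E_i, tau_i)
    return e_inf + sum(E * np.exp(-t / tau) for E, tau in terms)

# Synthetic relaxation data standing in for propellant test data.
t = np.logspace(-2, 3, 60)
true = np.array([1.0, 5.0, 0.1, 2.0, 10.0])    # E_inf, E1, tau1, E2, tau2
rng = np.random.default_rng(0)
data = prony(t, true) * (1 + 0.01 * rng.normal(size=t.size))

# All constants, including the exponential time constants tau_i, are
# free variables of the optimizer -- the point of the report's approach.
fit = least_squares(lambda p: prony(t, p) - data,
                    x0=[1.0, 1.0, 1.0, 1.0, 100.0],
                    bounds=(1e-6, np.inf))
print("fitted parameters:", np.round(fit.x, 3))
```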
Estimation of submarine mass failure probability from a sequence of deposits with age dates
Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.
2013-01-01
The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
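For the simplest candidate model (a Poisson process with exponential return times), the role of the open intervals is easy to show: they enter the likelihood as survival terms, lengthening the estimated mean return time relative to a simple arithmetic mean of the closed intervals. A sketch under these assumptions, with invented ages and age-dating uncertainty ignored:

```python
import numpy as np

# Illustrative deposit ages (ka) plus the open interval from the
# youngest deposit to the present and the age of the oldest datable
# horizon; the numbers are invented, not the IODP Ursa Basin data.
ages = np.array([12.0, 19.5, 31.0, 44.0, 60.0])
open_recent = ages.min()           # youngest deposit to present
open_old = 75.0 - ages.max()       # oldest deposit to bottom of record

closed = np.diff(np.sort(ages))    # closed inter-event times

# Exponential-model MLE: each open interval T contributes a survival
# term exp(-T/mu), so mu = (total elapsed time) / (closed intervals).
mu_naive = closed.mean()
mu_open = (closed.sum() + open_recent + open_old) / closed.size
print(f"mean return time, closed intervals only: {mu_naive:.1f} ka")
print(f"mean return time with open intervals:    {mu_open:.1f} ka")
```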
Lynch, Chip M; Abdollahi, Behnaz; Fuqua, Joshua D; de Carlo, Alexandra R; Bartholomai, James A; Balgemann, Rayeanne N; van Berkel, Victor H; Frieboes, Hermann B
2017-12-01
Outcomes for cancer patients have previously been estimated by applying various machine learning techniques to large datasets such as the Surveillance, Epidemiology, and End Results (SEER) program database. In particular for lung cancer, it is not well understood which types of techniques yield more predictive information, and which data attributes should be used to determine this information. In this study, a number of supervised learning techniques are applied to the SEER database to classify lung cancer patients in terms of survival, including linear regression, Decision Trees, Gradient Boosting Machines (GBM), Support Vector Machines (SVM), and a custom ensemble. Key data attributes used in applying these methods include tumor grade, tumor size, gender, age, stage, and number of primaries, with the goal of enabling comparison of predictive power between the various methods. The prediction is treated as a continuous target, rather than a classification into categories, as a first step towards improving survival prediction. The results show that the predicted values agree with actual values for low to moderate survival times, which constitute the majority of the data. The best performing technique was the custom ensemble, with a Root Mean Square Error (RMSE) value of 15.05. The most influential model within the custom ensemble was GBM, while Decision Trees may be inapplicable as they had too few discrete outputs. The results further show that among the five individual models generated, the most accurate was GBM, with an RMSE value of 15.32. Although SVM underperformed with an RMSE value of 15.82, statistical analysis singles out the SVM as the only model that generated a distinctive output. The results of the models are consistent with a classical Cox proportional hazards model used as a reference technique. We conclude that application of these supervised learning techniques to lung cancer data in the SEER database may be of use for estimating patient survival time, with the ultimate goal of informing patient care decisions, and that the performance of these techniques with this particular dataset may be on par with that of classical methods. Copyright © 2017 Elsevier B.V. All rights reserved.
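A schematic version of one of the individual models, gradient boosting regression of survival months on the attributes listed above, using synthetic records in place of SEER data (which requires a data-use agreement); the feature effects are invented:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic stand-ins for SEER attributes: grade, size, gender, age,
# stage, number of primaries (values and effect sizes are invented).
X = np.column_stack([
    rng.integers(1, 5, n),        # tumor grade
    rng.uniform(5, 90, n),        # tumor size (mm)
    rng.integers(0, 2, n),        # gender
    rng.uniform(40, 90, n),       # age
    rng.integers(1, 5, n),        # stage
    rng.integers(1, 4, n),        # number of primaries
])
survival_months = np.clip(
    80 - 12 * X[:, 4] - 0.4 * X[:, 3] + rng.normal(0, 10, n), 0, None)

X_tr, X_te, y_tr, y_te = train_test_split(X, survival_months,
                                          random_state=0)
gbm = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, gbm.predict(X_te)) ** 0.5
print(f"GBM RMSE on held-out synthetic records: {rmse:.2f} months")
```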
A Review of the Match Technique as Applied to AASE-2/EASOE and SOLVE/THESEO 2000
NASA Technical Reports Server (NTRS)
Morris, Gary A.; Bojkov, Bojan R.; Lait, Leslie R.; Schoeberl, Mark R.; Rex, Markus
2004-01-01
We apply the GSFC trajectory model with a series of ozonesondes to derive ozone loss rates in the lower stratosphere for the AASE-2/EASOE mission (January - March 1992) and for the SOLVE/THESEO 2000 mission (January - March 2000) in an approach similar to Match. Ozone loss rates are computed by comparing the ozone concentrations provided by ozonesondes launched at the beginning and end of the trajectories connecting the launches. We investigate the sensitivity of the Match results to the various parameters used to reject potential matches in the original Match technique and conclude that only a filter based on potential vorticity changes along the calculated back trajectory seems necessary. Our study also demonstrates that calculated ozone loss rates can vary by up to a factor of two depending upon the precise trajectory paths calculated for each trajectory. As a result, an additional systematic error might need to be added to the statistical uncertainties published with previous Match results. The sensitivity to the trajectory path is particularly pronounced in the month of January, the month during which the largest ozone loss rate discrepancies between photochemical models and Match are found. For most of the two study periods, our ozone loss rates agree with those previously published. Notable exceptions are found for January 1992 at 475 K and late February/early March 2000 at 450 K, both periods during which we find less loss than the previous studies. Integrated ozone loss rates in both years compare well with those found in numerous other studies and with a potential vorticity/potential temperature approach shown previously and in this paper. Finally, we suggest an alternate approach to Match using trajectory mapping that appears to more accurately reflect the true uncertainties associated with Match and reduces the dependence upon filters that may bias the results of Match through the rejection of ≥80% of the matched sonde pairs and >99% of matched observations.
A volumetric technique for fossil body mass estimation applied to Australopithecus afarensis.
Brassey, Charlotte A; O'Mahoney, Thomas G; Chamberlain, Andrew T; Sellers, William I
2018-02-01
Fossil body mass estimation is a well-established practice within the field of physical anthropology. Previous studies have relied upon traditional allometric approaches, in which the relationship between one/several skeletal dimensions and body mass in a range of modern taxa is used in a predictive capacity. The lack of relatively complete skeletons has thus far limited the potential application of alternative mass estimation techniques, such as volumetric reconstruction, to fossil hominins. Yet across vertebrate paleontology more broadly, novel volumetric approaches are resulting in predicted values for fossil body mass very different to those estimated by traditional allometry. Here we present a new digital reconstruction of Australopithecus afarensis (A.L. 288-1; 'Lucy') and a convex hull-based volumetric estimate of body mass. The technique relies upon identifying a predictable relationship between the 'shrink-wrapped' volume of the skeleton and known body mass in a range of modern taxa, and subsequent application to an articulated model of the fossil taxon of interest. Our calibration dataset comprises whole body computed tomography (CT) scans of 15 species of modern primate. The resulting predictive model is characterized by a high correlation coefficient (r2 = 0.988) and a percentage standard error of 20%, and performs well when applied to modern individuals of known body mass. Application of the convex hull technique to A. afarensis results in a relatively low body mass estimate of 20.4 kg (95% prediction interval 13.5-30.9 kg). A sensitivity analysis on the articulation of the chest region highlights the sensitivity of our approach to the reconstruction of the trunk, and the incomplete nature of the preserved ribcage may explain the low values for predicted body mass here. We suggest that the heaviest of previous estimates would require the thorax to be expanded to an unlikely extent, yet this can only be properly tested when more complete fossils are available. Copyright © 2017 Elsevier Ltd. All rights reserved.
Application of separable parameter space techniques to multi-tracer PET compartment modeling
Zhang, Jeff L; Morey, A Michael; Kadrmas, Dan J
2016-01-01
Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg–Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models. PMID:26788888
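The separable least-squares idea at the heart of this work can be illustrated with a generic variable projection fit: the amplitudes enter linearly and are solved in closed form inside the nonlinear search over rate constants, so the optimizer only explores the nonlinear dimensions. The sketch below uses a synthetic bi-exponential curve, not the paper's multi-tracer model.

```python
# Sketch of separable least squares (variable projection) for a model
# y(t) ≈ Σ_k a_k·exp(-θ_k·t): the linear amplitudes a_k are solved in
# closed form inside the nonlinear search over the rate constants θ_k,
# reducing the dimensionality of the nonlinear fit. Synthetic data.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.1, 60.0, 120)                 # time grid (min), synthetic
rng = np.random.default_rng(0)
y = 3.0 * np.exp(-0.05 * t) + 1.5 * np.exp(-0.6 * t)
y += 0.02 * rng.standard_normal(t.size)

def projected_residual(theta):
    B = np.exp(-np.outer(t, theta))             # basis matrix, one column per rate
    a, *_ = np.linalg.lstsq(B, y, rcond=None)   # amplitudes in closed form
    return np.sum((y - B @ a) ** 2)             # residual after projecting out a

res = minimize(projected_residual, x0=[0.1, 1.0], method="Nelder-Mead")
print("fitted rate constants:", res.x)          # ≈ [0.05, 0.6]
```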
Applications of neutron radiography for the nuclear power industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craft, Aaron E.; Barton, John P.
The World Conference on Neutron Radiography (WCNR) and International Topical Meeting on Neutron Radiography (ITMNR) series have been running for over 35 years. The most recent event, ITMNR-8, focused on industrial applications and was the first time this series was hosted in China. In China, more than twenty new nuclear power plants are in construction and plans have been announced to increase the nuclear capacity further by a factor of three within fifteen years. There are additional prospects in many other nations. Neutron tests were vital during previous developments of materials and components for nuclear power applications, as reported in this conference series. For example, a majority of the 140 papers in the Proceedings of the First WCNR are for the benefit of the nuclear power industry. Included are reviews of the diverse techniques being applied in Europe, Japan, the United States, and at many other centers. Many of those techniques are being utilized and advanced to the present time. Neutron radiography of irradiated nuclear fuel provides more comprehensive information about the internal condition of irradiated nuclear fuel than any other non-destructive technique to date. Applications include examination of nuclear waste, nuclear fuels, cladding, control elements, and other critical components. In this paper, the techniques developed and applied internationally for the nuclear power industry since the earliest years are reviewed, and the question is asked whether neutron test techniques can be of value in development of the present and future generations of nuclear power plants world-wide.
Paramasivan, Sangeetha; Strong, Sean; Wilson, Caroline; Campbell, Bruce; Blazeby, Jane M; Donovan, Jenny L
2015-03-11
Recruitment to pragmatic randomised controlled trials (RCTs) is acknowledged to be difficult, and few interventions have proved to be effective. Previous qualitative research has consistently revealed that recruiters provide imbalanced information about RCT treatments. However, qualitative research can be time-consuming to apply. Within a programme of research to optimise recruitment and informed consent in challenging RCTs, we developed a simple technique, Q-QAT (Quanti-Qualitative Appointment Timing), to systematically investigate and quantify the imbalance to help identify and address recruitment difficulties. The Q-QAT technique comprised: 1) quantification of time spent discussing the RCT and its treatments using transcripts of audio-recorded recruitment appointments, 2) targeted qualitative research to understand the obstacles to recruitment and 3) feedback to recruiters on opportunities for improvement. This was applied to two RCTs with different clinical contexts and recruitment processes. Comparisons were made across clinical centres, recruiters and specialties. In both RCTs, the Q-QAT technique first identified considerable variations in the time spent by recruiters discussing the RCT and its treatments. The patterns emerging from this initial quantification of recruitment appointments then enabled targeted qualitative research to understand the issues and make suggestions to improve recruitment. In RCT1, presentation of the treatments was balanced, but little time was devoted to describing the RCT. Qualitative research revealed patients would have considered participation, but lacked awareness of the RCT. In RCT2, the balance of treatment presentation varied by specialists and centres. Qualitative research revealed difficulties with equipoise and confidence among recruiters presenting the RCT. The quantitative and qualitative findings were well-received by recruiters and opportunities to improve information provision were discussed. A blind coding exercise across three researchers led to the development of guidelines that can be used to apply the Q-QAT technique to other difficult RCTs. The Q-QAT technique was easy to apply and rapidly identified obstacles to recruitment that could be understood through targeted qualitative research and addressed through feedback. The technique's combination of quantitative and qualitative findings enabled the presentation of a holistic picture of recruitment challenges and added credibility to the feedback process. Note: both RCTs in this manuscript asked to be anonymised, so no trial registration details are provided.
Three novel approaches to structural identifiability analysis in mixed-effects models.
Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D
2016-05-06
Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the system's transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have been presented in this paper. As method development of structural identifiability techniques for mixed-effects models has been given very little attention, despite mixed-effects models being widely used, the methods presented in this paper provide a way of handling structural identifiability in mixed-effects models that was previously not possible. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Applying Sequential Analytic Methods to Self-Reported Information to Anticipate Care Needs.
Bayliss, Elizabeth A; Powers, J David; Ellis, Jennifer L; Barrow, Jennifer C; Strobel, MaryJo; Beck, Arne
2016-01-01
Identifying care needs for newly enrolled or newly insured individuals is important under the Affordable Care Act. Systematically collected patient-reported information can potentially identify subgroups with specific care needs prior to service use. We conducted a retrospective cohort investigation of 6,047 individuals who completed a 10-question needs assessment upon initial enrollment in Kaiser Permanente Colorado (KPCO), a not-for-profit integrated delivery system, through the Colorado State Individual Exchange. We used responses from the Brief Health Questionnaire (BHQ) to develop a predictive model for being in the top 25 percent of cost for receiving care, then applied cluster-analytic techniques to identify different high-cost subpopulations. Per-member, per-month cost was measured from 6 to 12 months following BHQ response. BHQ responses significantly predictive of high-cost care included self-reported health status, functional limitations, medication use, presence of 0-4 chronic conditions, self-reported emergency department (ED) use during the prior year, and lack of prior insurance. Age, gender, and deductible-based insurance product were also predictive. The largest possible range of predicted probabilities of being in the top 25 percent of cost was 3.5 percent to 96.4 percent. Within the top cost quartile, examples of potentially actionable clusters of patients included those with high morbidity, prior utilization, depression risk, and financial constraints; previously uninsured individuals with high morbidity but few financial constraints; and relatively healthy, previously insured individuals with medication needs. Applying sequential predictive modeling and cluster-analytic techniques to patient-reported information can identify subgroups of individuals within heterogeneous populations who may benefit from specific interventions to optimize initial care delivery.
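A minimal sketch of the two-stage sequential approach: a predictive model for top-quartile cost from questionnaire responses, followed by clustering within the predicted high-cost group. The file and feature names are hypothetical stand-ins for BHQ items, not the actual instrument's fields.

```python
# Sketch: logistic model for top-quartile cost, then k-means clustering
# within the predicted high-cost group. All names are illustrative.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("bhq_responses.csv")      # hypothetical extract
features = ["health_status", "functional_limits", "med_count",
            "chronic_conditions", "prior_ed_use", "prior_insurance", "age"]
X, y = df[features], df["top_quartile_cost"]

clf = LogisticRegression(max_iter=1000).fit(X, y)
p_high = clf.predict_proba(X)[:, 1]        # predicted probability of high cost

high = df.loc[p_high > 0.5, features]      # predicted high-cost subgroup
scaled = StandardScaler().fit_transform(high)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)
print(pd.Series(labels).value_counts())    # candidate actionable subgroups
```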
Optic disc segmentation for glaucoma screening system using fundus images.
Almazroa, Ahmed; Sun, Weiwei; Alodhayb, Sami; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan
2017-01-01
Segmenting the optic disc (OD) is an important and essential step in creating a frame of reference for diagnosing optic nerve head pathologies such as glaucoma. Therefore, a reliable OD segmentation technique is necessary for automatic screening of optic nerve head abnormalities. The main contribution of this paper is in presenting a novel OD segmentation algorithm based on applying a level set method on a localized OD image. To prevent the blood vessels from interfering with the level set process, an inpainting technique was applied. A further important contribution was to incorporate the variation in opinion among ophthalmologists in detecting the disc boundaries and diagnosing glaucoma. Most previous studies were trained and tested based on only one opinion, which can be assumed to be biased toward that ophthalmologist. In addition, the accuracy was calculated based on the number of images that coincided with the ophthalmologists' agreed-upon images, and not only on the overlapping images as in previous studies. The ultimate goal of this project is to develop an automated image processing system for glaucoma screening. The disc algorithm is evaluated using a new retinal fundus image dataset called RIGA (retinal images for glaucoma analysis). In the case of low-quality images, a double level set was applied, in which the first level set was considered to be a localization for the OD. Five hundred and fifty images were used to test the algorithm accuracy as well as the agreement among the manual markings of six ophthalmologists. The accuracy of the algorithm in marking the optic disc area and centroid was 83.9%, and the best agreement was observed between the results of the algorithm and manual markings in 379 images.
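A rough sketch of such a pipeline is shown below, with two substitutions that are assumptions on my part: OpenCV's Telea inpainting stands in for the paper's vessel-inpainting step, and scikit-image's morphological Chan-Vese stands in for the specific level-set formulation used.

```python
# Sketch only: inpaint vessels, then evolve a level set on the result.
import cv2
import numpy as np
from skimage.segmentation import morphological_chan_vese

fundus = cv2.imread("fundus_od_roi.png", cv2.IMREAD_GRAYSCALE)  # localized OD image

# Crude vessel mask: dark elongated structures via black-hat morphology.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
vessels = cv2.morphologyEx(fundus, cv2.MORPH_BLACKHAT, kernel)
mask = (vessels > 20).astype(np.uint8)

# Remove the vessels so they do not attract the evolving contour.
inpainted = cv2.inpaint(fundus, mask, 5, cv2.INPAINT_TELEA)

# Level-set evolution on the vessel-free image (100 iterations).
seg = morphological_chan_vese(inpainted.astype(float), 100, smoothing=3)
print("disc pixels:", int(seg.sum()))
```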
NASA Astrophysics Data System (ADS)
Tai, YuHeng; Chang, ChungPai
2015-04-01
Taiwan is one of the most active landslide areas in the world because of its high precipitation and active tectonics. Landslides, which destroy buildings and take human lives, have caused considerable hazard and economic loss in recent years. Jiufen, which previous studies have identified as a creeping area, is one of the most famous tourist places in northern Taiwan. Detection and monitoring of landslides and creep therefore play an important role in risk management and help us decrease the damage from such mass movements. In this study, we apply Interferometric Synthetic Aperture Radar (InSAR) techniques in the Jiufen area to monitor the creeping of the slope. InSAR observations were obtained from ERS and ENVISAT, which were launched by the European Space Agency, spanning 1994 to 2008. The Persistent Scatterer InSAR (PSInSAR) method is also applied to reduce the phase contributions from the atmosphere and topography and to obtain more precise measurements. We compare the results with previous studies carried out by fieldwork to confirm the applicability of InSAR techniques to landslide monitoring. Moreover, the time-series analysis helps us to understand the motion of the creeping area over time. After completion of amelioration measures, the time series can illustrate the effect of these structures. The results, combined with fieldwork surveys, will give good guidance for future remediation works. Furthermore, we estimate the measurement error and the possible factors, such as slope direction and dip angle, affecting the InSAR results. This helps us to verify the reliability of the method and gives us a clearer deformation pattern of the creeping area.
Usability-driven pruning of large ontologies: the case of SNOMED CT
Boeker, Martin; Illarramendi, Arantza; Schulz, Stefan
2012-01-01
Objectives: To study ontology modularization techniques when applied to SNOMED CT in a scenario in which no previous corpus of information exists and to examine if frequency-based filtering using MEDLINE can reduce subset size without discarding relevant concepts. Materials and Methods: Subsets were first extracted using four graph-traversal heuristics and one logic-based technique, and were subsequently filtered with frequency information from MEDLINE. Twenty manually coded discharge summaries from cardiology patients were used as signatures and test sets. The coverage, size, and precision of extracted subsets were measured. Results: Graph-traversal heuristics provided high coverage (71–96% of terms in the test sets of discharge summaries) at the expense of subset size (17–51% of the size of SNOMED CT). Pre-computed subsets and logic-based techniques extracted small subsets (1%), but coverage was limited (24–55%). Filtering reduced the size of large subsets to 10% while still providing 80% coverage. Discussion: Extracting subsets to annotate discharge summaries is challenging when no previous corpus exists. Ontology modularization provides valuable techniques, but the resulting modules grow as signatures spread across subhierarchies, yielding a very low precision. Conclusion: Graph-traversal strategies and frequency data from an authoritative source can prune large biomedical ontologies and produce useful subsets that still exhibit acceptable coverage. However, a clinical corpus closer to the specific use case is preferred when available. PMID:22268217
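A toy sketch of the graph-traversal extraction followed by frequency-based filtering: the ontology is modeled as a child-to-parent dictionary and concept frequencies stand in for MEDLINE counts. Both structures are illustrative, not actual SNOMED CT content.

```python
# Toy sketch: upward graph traversal from a signature of concepts, then
# frequency-based pruning. Relation dict and counts are illustrative.
from collections import deque

parents = {  # child -> parents (toy fragment, not real SNOMED CT)
    "myocardial infarction": ["heart disease"],
    "heart disease": ["disorder of cardiovascular system"],
    "disorder of cardiovascular system": ["clinical finding"],
}
term_freq = {"myocardial infarction": 120000, "heart disease": 450000,
             "disorder of cardiovascular system": 900, "clinical finding": 50}

def upward_closure(signature):
    """All ancestors reachable from the signature concepts."""
    seen, queue = set(signature), deque(signature)
    while queue:
        for p in parents.get(queue.popleft(), []):
            if p not in seen:
                seen.add(p)
                queue.append(p)
    return seen

subset = upward_closure({"myocardial infarction"})
filtered = {c for c in subset if term_freq.get(c, 0) >= 1000}  # prune rare concepts
print(sorted(filtered))
```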
Muhogora, Wilbroad E; Msaki, Peter; Padovani, Renato
2015-03-08
The objective of this study was to improve the visibility of anatomical details by applying off-line post-image processing in chest computed radiography (CR). Four spatial domain-based external image processing techniques were developed by using MATLAB software version 7.0.0.19920 (R14) and image processing tools. The developed techniques were applied to sample images and their visual appearances confirmed by two consultant radiologists to be clinically adequate. The techniques were then applied to 200 chest clinical images and randomized with another 100 images previously processed online. These 300 images were presented to three experienced radiologists for image quality assessment using standard quality criteria. The mean and ranges of the average scores for the three radiologists were characterized for each of the developed techniques and imaging systems. The Mann-Whitney U-test was used to test the difference in detail visibility between the images processed using each of the developed techniques and the corresponding images processed using default algorithms. The results show that the visibility of anatomical features improved significantly (0.005 ≤ p ≤ 0.02) with combinations of intensity value adjustment and/or spatial linear filtering techniques for images acquired using 60 ≤ kVp ≤ 70. However, there was no improvement for images acquired using 102 ≤ kVp ≤ 107 (0.127 ≤ p ≤ 0.48). In conclusion, the use of external image processing for optimization can be effective in chest CR, but should be implemented in consultation with the radiologists.
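The two operations found effective above can be sketched generically in Python (the study itself used MATLAB): a percentile-based intensity adjustment followed by a spatial linear (unsharp-masking) filter. Parameter values are illustrative, not the study's settings.

```python
# Sketch of the two enhancement families named above; values illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def adjust_intensity(img, lo_pct=1, hi_pct=99, gamma=0.8):
    """Stretch a percentile window to [0, 1] with gamma correction."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    out = np.clip((img - lo) / (hi - lo + 1e-9), 0, 1)
    return out ** gamma

def unsharp(img, sigma=3.0, amount=1.2):
    """Spatial linear filtering: add back high-pass detail."""
    return np.clip(img + amount * (img - gaussian_filter(img, sigma)), 0, 1)

chest = np.random.rand(512, 512)            # stand-in for a CR image
enhanced = unsharp(adjust_intensity(chest))
print(enhanced.min(), enhanced.max())
```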
Imaging of stellar surfaces with the Occamian approach and the least-squares deconvolution technique
NASA Astrophysics Data System (ADS)
Järvinen, S. P.; Berdyugina, S. V.
2010-10-01
Context. We present in this paper a new technique for the indirect imaging of stellar surfaces (Doppler imaging, DI), in which low signal-to-noise spectral data have been improved by the least-squares deconvolution (LSD) method and inverted into temperature maps with the Occamian approach. We apply this technique to both simulated and real data and investigate its applicability for different stellar rotation rates and noise levels in data. Aims: Our goal is to boost the signal of spots in spectral lines and to reduce the effect of photon noise without losing the temperature information in the lines. Methods: We simulated data from a test star, to which we added different amounts of noise, and employed the inversion technique based on the Occamian approach with and without LSD. In order to be able to infer a temperature map from LSD profiles, we applied the LSD technique for the first time to both the simulated observations and theoretical local line profiles, which remain dependent on temperature and limb angles. We also investigated how the excitation energy of individual lines affects the obtained solution by using three submasks that have lines with low, medium, and high excitation energy levels. Results: We show that our novel approach enables us to overcome the limitations of the two-temperature approximation, which was previously employed for LSD profiles, and to obtain true temperature maps with stellar atmosphere models. The resulting maps agree well with those obtained using the inversion code without LSD, provided the data are noiseless. However, using LSD is only advisable for poor signal-to-noise data. Further, we show that the Occamian technique, both with and without LSD, approaches the surface temperature distribution reasonably well for an adequate spatial resolution. Thus, the stellar rotation rate has a great influence on the result. For instance, in a slowly rotating star, closely situated spots are usually recovered blurred and unresolved, which affects the obtained temperature range of the map. This limitation is critical for small unresolved cool spots and is common to all DI techniques. Finally, the LSD method was carried out for high signal-to-noise observations of the young active star V889 Her: the maps obtained with and without LSD are found to be consistent. Conclusions: Our new technique provides meaningful information on the temperature distribution on stellar surfaces, which was previously inaccessible in DI with LSD. Our approach can be easily adopted for any other multi-line technique.
UK audit of glomerular filtration rate measurement from plasma sampling in 2013.
Murray, Anthony W; Lawson, Richard S; Cade, Sarah C; Hall, David O; Kenny, Bob; O'Shaughnessy, Emma; Taylor, Jon; Towey, David; White, Duncan; Carson, Kathryn
2014-11-01
An audit was carried out into UK glomerular filtration rate (GFR) calculation. The results were compared with an identical 2001 audit. Participants used their routine method to calculate GFR for 20 data sets (four plasma samples) in millilitres per minute and also the GFR normalized for body surface area. Some unsound data sets were included to analyse the applied quality control (QC) methods. Variability between centres was assessed for each data set, compared with the national median and a reference value calculated using the method recommended in the British Nuclear Medicine Society guidelines. The influence of the number of samples on variability was studied. Supplementary data were requested on workload and methodology. The 59 returns showed widespread standardization. The applied early exponential clearance correction was the main contributor to the observed variability. These corrections were applied by 97% of centres (50% in 2001) with 80% using the recommended averaged Brochner-Mortensen correction. Approximately 75% applied the recommended Haycock body surface area formula for adults (78% for children). The effect of the number of samples used was not significant. There was wide variability in the applied QC techniques, especially in terms of the use of the volume of distribution. The widespread adoption of the guidelines has harmonized national GFR calculation compared with the previous audit. Further standardization could reduce variability still further. This audit has highlighted the need to address the national standardization of QC methods. Radionuclide techniques are confirmed as the preferred method for GFR measurement when an unequivocal result is required.
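For orientation, here is a sketch of the slope-intercept GFR calculation with the averaged Brochner-Mortensen correction and Haycock body surface area normalization mentioned above. The coefficients are the commonly published ones and should be verified against the BNMS guidelines; the sample data are synthetic.

```python
# Sketch: slope-intercept clearance from four late plasma samples, the
# averaged Brochner-Mortensen correction, and Haycock BSA normalization.
import numpy as np

t = np.array([120.0, 150.0, 180.0, 240.0])       # sample times (min)
c = np.array([8.1, 6.9, 5.9, 4.3])               # plasma activity (counts/mL)
dose = 2.0e5                                     # injected activity (counts)

slope, ln_c0 = np.polyfit(t, np.log(c), 1)       # fit the late exponential
k = -slope                                       # terminal rate constant (1/min)
cl = dose * k / np.exp(ln_c0)                    # uncorrected clearance (mL/min)

gfr = 0.990778 * cl - 0.001218 * cl**2           # Brochner-Mortensen correction

def haycock_bsa(weight_kg, height_cm):
    return 0.024265 * weight_kg**0.5378 * height_cm**0.3964

gfr_norm = gfr * 1.73 / haycock_bsa(75.0, 175.0)  # mL/min/1.73 m^2
print(f"GFR = {gfr:.1f} mL/min, normalized = {gfr_norm:.1f} mL/min/1.73 m^2")
```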
NASA Technical Reports Server (NTRS)
Balasubramanian, R.; Norrie, D. H.; De Vries, G.
1979-01-01
Abel's integral equation is the governing equation for certain problems in physics and engineering, such as radiation from distributed sources. The finite element method for the solution of this non-linear equation is presented for problems with cylindrical symmetry, and the extension to more general integral equations is indicated. The technique was applied to an axisymmetric glow discharge problem and the results show excellent agreement with previously obtained solutions.
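For reference, the generalized Abel integral equation and its classical closed-form inversion (a textbook result, not taken from this paper) are:

```latex
% Generalized Abel integral equation (0 < \alpha < 1) and its inversion:
f(x) = \int_0^x \frac{\phi(t)}{(x - t)^{\alpha}}\,dt
\qquad\Longrightarrow\qquad
\phi(t) = \frac{\sin(\pi\alpha)}{\pi}\,\frac{d}{dt}\int_0^t \frac{f(x)}{(t - x)^{1-\alpha}}\,dx .
```

Here α = 1/2 gives the classical case that arises in axisymmetric problems such as the glow discharge application above.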
1983-07-01
transverse lines, all of which overlapped test areas previously investigated by Technos. The lines were chosen to be representative of cavity areas and...cavities and may be considered as competent rock for this site. It is interesting to note that amplitude perturbations do appear in the zone 95 to 100 ft...tunnels are man-made (regular in shape) and are in reasonably competent rock (not heavily fractured), the tunnel signature will be quite evident
Finite-temperature Gutzwiller approximation from the time-dependent variational principle
NASA Astrophysics Data System (ADS)
Lanatà, Nicola; Deng, Xiaoyu; Kotliar, Gabriel
2015-08-01
We develop an extension of the Gutzwiller approximation to finite temperatures based on the Dirac-Frenkel variational principle. Our method does not rely on any entropy inequality, and is substantially more accurate than the approaches proposed in previous works. We apply our theory to the single-band Hubbard model at different fillings, and show that our results compare quantitatively well with dynamical mean field theory in the metallic phase. We discuss potential applications of our technique within the framework of first-principle calculations.
Some practical observations on the predictor jump method for solving the Laplace equation
NASA Astrophysics Data System (ADS)
Duque-Carrillo, J. F.; Vega-Fernández, J. M.; Peña-Bernal, J. J.; Rossell-Bueno, M. A.
1986-01-01
The best conditions for the application of the predictor jump (PJ) method in the solution of the Laplace equation are discussed, and some practical considerations for applying this new iterative technique are presented. The PJ method was introduced in a previous article entitled ``A new way for solving Laplace's problem (the predictor jump method)'' [J. M. Vega-Fernández, J. F. Duque-Carrillo, and J. J. Peña-Bernal, J. Math. Phys. 26, 416 (1985)].
Adjustment of Jacobs' formulation to the case of Mercury
NASA Astrophysics Data System (ADS)
Chiappini, M.; de Santis, A.
1991-04-01
Magnetic investigations play an important role in studies on the constitution of planetary interiors. One of these techniques (the so-called Jacobs' formulation), appropriately modified, has been applied to the case of Mercury. According to the results found, the planet, supposed to be divided internally as the earth (crust-mantle-core), would have a core/planet volume ratio of 28 percent, much greater than the earth's core percentage (16 percent). This result is in agreement with previous work which used other independent methods.
Diagnostic Molecular Microbiology: A 2018 Snapshot.
Fairfax, Marilynn Ransom; Bluth, Martin H; Salimnia, Hossein
2018-06-01
Molecular biological techniques have evolved expeditiously and in turn have been applied to the detection of infectious disease. Maturation of these technologies and their coupling with related technological advancement in fluorescence, electronics, digitization, nanodynamics, and sensors among others have afforded clinical medicine additional tools toward expedient identification of infectious organisms at concentrations and sensitivities previously unattainable. These advancements have been adapted in select settings toward addressing clinical demands for more timely and effective patient management. Copyright © 2018 Elsevier Inc. All rights reserved.
Out, Astrid A; van Minderhout, Ivonne J H M; van der Stoep, Nienke; van Bommel, Lysette S R; Kluijt, Irma; Aalfs, Cora; Voorendt, Marsha; Vossen, Rolf H A M; Nielsen, Maartje; Vasen, Hans F A; Morreau, Hans; Devilee, Peter; Tops, Carli M J; Hes, Frederik J
2015-06-01
Familial adenomatous polyposis is most frequently caused by pathogenic variants in either the APC gene or the MUTYH gene. The detection rate of pathogenic variants depends on the severity of the phenotype and sensitivity of the screening method, including sensitivity for mosaic variants. For 171 patients with multiple colorectal polyps without previously detectable pathogenic variant, APC was reanalyzed in leukocyte DNA by one uniform technique: high-resolution melting (HRM) analysis. Serial dilution of heterozygous DNA resulted in a lowest detectable allelic fraction of 6% for the majority of variants. HRM analysis and subsequent sequencing detected pathogenic fully heterozygous APC variants in 10 (6%) of the patients and pathogenic mosaic variants in 2 (1%). All these variants were previously missed by various conventional scanning methods. In parallel, HRM APC scanning was applied to DNA isolated from polyp tissue of two additional patients with apparently sporadic polyposis and without detectable pathogenic APC variant in leukocyte DNA. In both patients a pathogenic mosaic APC variant was present in multiple polyps. The detection of pathogenic APC variants in 7% of the patients, including mosaics, illustrates the usefulness of a complete APC gene reanalysis of previously tested patients, by a supplementary scanning method. HRM is a sensitive and fast pre-screening method for reliable detection of heterozygous and mosaic variants, which can be applied to leukocyte and polyp derived DNA.
A variable-gain output feedback control design methodology
NASA Technical Reports Server (NTRS)
Halyo, Nesim; Moerder, Daniel D.; Broussard, John R.; Taylor, Deborah B.
1989-01-01
A digital control system design technique is developed in which the control system gain matrix varies with the plant operating point parameters. The design technique is obtained by formulating the problem as an optimal stochastic output feedback control law with variable gains. This approach provides a control theory framework within which the operating range of a control law can be significantly extended. Furthermore, the approach avoids the major shortcomings of the conventional gain-scheduling techniques. The optimal variable gain output feedback control problem is solved by embedding the Multi-Configuration Control (MCC) problem, previously solved at ICS. An algorithm to compute the optimal variable gain output feedback control gain matrices is developed. The algorithm is a modified version of the MCC algorithm improved so as to handle the large dimensionality which arises particularly in variable-gain control problems. The design methodology developed is applied to a reconfigurable aircraft control problem. A variable-gain output feedback control problem was formulated to design a flight control law for an AFTI F-16 aircraft which can automatically reconfigure its control strategy to accommodate failures in the horizontal tail control surface. Simulations of the closed-loop reconfigurable system show that the approach produces a control design which can accommodate such failures with relative ease. The technique can be applied to many other problems including sensor failure accommodation, mode switching control laws and super agility.
Li, Bai; Lin, Mu; Liu, Qiao; Li, Ya; Zhou, Changjun
2015-10-01
Protein folding is a fundamental topic in molecular biology. Conventional experimental techniques for protein structure identification or protein folding recognition require strict laboratory requirements and heavy operating burdens, which have largely limited their applications. Alternatively, computer-aided techniques have been developed to optimize protein structures or to predict the protein folding process. In this paper, we utilize a 3D off-lattice model to describe the original protein folding scheme as a simplified energy-optimal numerical problem, where all types of amino acid residues are binarized into hydrophobic and hydrophilic ones. We apply a balance-evolution artificial bee colony (BE-ABC) algorithm as the minimization solver, which features adaptive adjustment of search intensity to cater to the varying needs during the entire optimization process. In this work, we establish a benchmark case set with 13 real protein sequences from the Protein Data Bank database and evaluate the convergence performance of the BE-ABC algorithm through strict comparisons with several state-of-the-art ABC variants in short-term numerical experiments. Besides that, our obtained best-so-far protein structures are compared to those in comprehensive previous literature. This study also provides preliminary insights into how artificial intelligence techniques can be applied to reveal the dynamics of protein folding. Graphical Abstract: Protein folding optimization using a 3D off-lattice model and advanced optimization techniques.
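The abstract does not spell out its energy function, but the standard 3D AB off-lattice energy used in this line of work combines a bond-angle bending term with a Lennard-Jones-like pair term whose depth depends on the hydrophobic/hydrophilic pairing. The sketch below uses the usual Stillinger-style constants (1, 0.5, -0.5), which may differ from the paper's exact parameterization.

```python
# Sketch of the standard 3D AB off-lattice energy (Stillinger-style):
# bending term over consecutive bond angles plus an LJ-like pair term
# with depth C = 1 (AA), 0.5 (BB), -0.5 (AB). Literature constants.
import numpy as np

def ab_energy(coords, seq):
    """coords: (N, 3) chain coordinates; seq: string of 'A'/'B'."""
    e = 0.0
    for i in range(1, len(seq) - 1):                 # bending term
        u = coords[i] - coords[i - 1]
        v = coords[i + 1] - coords[i]
        cos_th = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        e += 0.25 * (1.0 - cos_th)
    for i in range(len(seq) - 2):                    # non-bonded pair term
        for j in range(i + 2, len(seq)):
            r = np.linalg.norm(coords[i] - coords[j])
            c = 1.0 if seq[i] == seq[j] == "A" else (0.5 if seq[i] == seq[j] else -0.5)
            e += 4.0 * (r**-12 - c * r**-6)
    return e

rng = np.random.default_rng(1)
conf = np.cumsum(rng.standard_normal((8, 3)), axis=0)  # toy conformation
print(ab_energy(conf, "ABBABBAB"))
```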
The application of a shift theorem analysis technique to multipoint measurements
NASA Astrophysics Data System (ADS)
Dieckmann, M. E.; Chapman, S. C.
1999-03-01
A Fourier domain technique has been proposed previously which, in principle, quantifies the extent to which multipoint in-situ measurements can identify whether or not an observed structure is time stationary in its rest frame. Once a structure, sampled for example by four spacecraft, is shown to be quasi-stationary in its rest frame, the structure's velocity vector can be determined with respect to the sampling spacecraft. We investigate the properties of this technique, which we will refer to as a stationarity test, by applying it to two point measurements of a simulated boundary layer. The boundary layer was evolved using a PIC (particle in cell) electromagnetic code. Initial and boundary conditions were chosen such that two cases could be considered, i.e. a spacecraft pair moving through (1) a time stationary boundary structure and (2) a boundary structure which is evolving (expanding) in time. The code also introduces noise in the simulated data time series which is uncorrelated between the two spacecraft. We demonstrate that, provided that the time series is Hanning windowed, the test is effective in determining the relative velocity between the boundary layer and spacecraft and in determining the range of frequencies over which the data can be treated as time stationary or time evolving. This work presents a first step towards understanding the effectiveness of this technique, as required in order for it to be applied to multispacecraft data.
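The shift-theorem idea behind the test can be sketched directly: for a structure convecting unchanged past two spacecraft, the cross-spectral phase is linear in frequency with slope set by the time lag, and deviations from linearity flag non-stationarity. Hanning windowing is applied as recommended above; the two-point data here are synthetic, not PIC output.

```python
# Sketch: cross-spectral phase between two Hanning-windowed time series.
# For a time-stationary convecting structure the phase is linear in f
# with slope -2*pi*tau; curvature or scatter flags non-stationarity.
import numpy as np

fs, tau = 100.0, 0.1                         # sample rate (Hz), true lag (s)
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(2)
s = np.sin(2 * np.pi * 1.3 * t) + 0.5 * np.sin(2 * np.pi * 3.1 * t)
s1 = s + 0.05 * rng.standard_normal(t.size)
s2 = np.interp(t - tau, t, s) + 0.05 * rng.standard_normal(t.size)

w = np.hanning(t.size)                       # windowing, as recommended
X1, X2 = np.fft.rfft(s1 * w), np.fft.rfft(s2 * w)
f = np.fft.rfftfreq(t.size, 1 / fs)

cross = X2 * np.conj(X1)
power = np.abs(cross)
band = power > 0.05 * power.max()            # keep only bins with real signal
slope = np.polyfit(f[band], np.unwrap(np.angle(cross[band])), 1, w=power[band])[0]
print("estimated lag (s):", -slope / (2 * np.pi))   # ≈ tau if stationary
```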
Image processing analysis on the air-water slug two-phase flow in a horizontal pipe
NASA Astrophysics Data System (ADS)
Dinaryanto, Okto; Widyatama, Arif; Majid, Akmal Irfan; Deendarlianto, Indarto
2016-06-01
Slug flow is a type of intermittent flow which is avoided in industrial applications because of its irregularity and high pressure fluctuations. Those characteristics cause problems such as internal corrosion and damage to pipeline constructions. In order to understand slug characteristics, several measurement techniques can be applied, such as wire-mesh sensors, CECM, and high-speed cameras. The present study aimed to determine slug characteristics by using image processing techniques. The experiment was carried out in a 26 mm i.d., 9 m long horizontal acrylic pipe. The air-water flow was recorded 5 m downstream of the air-water mixer using a high-speed video camera. Each image sequence was processed using MATLAB. The algorithm comprises several steps, including image complement, background subtraction, and image filtering, to produce binary images. Special treatments were also applied to reduce the disturbing effect of dispersed bubbles around the gas slug. Furthermore, the binary images were used to describe the bubble contour and to calculate slug parameters such as gas slug length, gas slug velocity, and slug frequency. As a result, the effects of superficial gas velocity and superficial liquid velocity on the fundamental parameters can be understood. Comparison with previous experimental results shows that image processing is a useful and promising technique for explaining slug characteristics.
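A minimal sketch of the processing chain named above (complement, background subtraction, filtering, binarization), followed by a slug-length measurement from the largest connected region. The frame is synthetic and the thresholds are illustrative.

```python
# Sketch of the chain: complement -> background subtraction -> median
# filtering -> binarization -> largest connected region as the gas slug.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)
frame = np.full((240, 1200), 0.8)               # bright liquid-filled pipe
frame[60:180, 300:900] = 0.2                    # dark elongated gas slug
frame += 0.05 * rng.standard_normal(frame.shape)
background = np.full_like(frame, 0.8)           # empty-pipe reference image

inv = 1.0 - frame                               # image complement
sub = np.clip(inv - (1.0 - background), 0, 1)   # background subtraction
smooth = ndimage.median_filter(sub, size=5)     # noise filtering
binary = smooth > 0.3                           # binarization

labels, n = ndimage.label(binary)               # connected gas regions
sizes = ndimage.sum(binary, labels, range(1, n + 1))
slug = labels == (np.argmax(sizes) + 1)         # largest region = gas slug
cols = np.where(slug.any(axis=0))[0]
print("gas slug length (px):", cols[-1] - cols[0] + 1)
```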
An automatic taxonomy of galaxy morphology using unsupervised machine learning
NASA Astrophysics Data System (ADS)
Hocking, Alex; Geach, James E.; Sun, Yi; Davey, Neil
2018-01-01
We present an unsupervised machine learning technique that automatically segments and labels galaxies in astronomical imaging surveys using only pixel data. Distinct from previous unsupervised machine learning approaches used in astronomy, we use no pre-selection or pre-filtering of target galaxy type to identify galaxies that are similar. We demonstrate the technique on the Hubble Space Telescope (HST) Frontier Fields. By training the algorithm using galaxies from one field (Abell 2744) and applying the result to another (MACS 0416.1-2403), we show how the algorithm can cleanly separate early and late type galaxies without any form of pre-directed training for what an 'early' or 'late' type galaxy is. We then apply the technique to the HST Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) fields, creating a catalogue of approximately 60 000 classifications. We show how the automatic classification groups galaxies of similar morphological (and photometric) type and make the classifications public via a catalogue, a visual catalogue and galaxy similarity search. We compare the CANDELS machine-based classifications to human classifications from the Galaxy Zoo: CANDELS project. Although there is not a direct mapping between Galaxy Zoo and our hierarchical labelling, we demonstrate a good level of concordance between human and machine classifications. Finally, we show how the technique can be used to identify rarer objects and present lensed galaxy candidates from the CANDELS imaging.
Dodds, James N; May, Jody C; McLean, John A
2017-11-21
Here we examine the relationship among resolving power (Rp), resolution (Rpp), and collision cross section (CCS) for compounds analyzed in previous ion mobility (IM) experiments representing a wide variety of instrument platforms and IM techniques. Our previous work indicated these three variables effectively describe and predict separation efficiency for drift tube ion mobility spectrometry experiments. In this work, we seek to determine if our previous findings are a general reflection of IM behavior that can be applied to various instrument platforms and mobility techniques. Results suggest IM distributions are well characterized by a Gaussian model and separation efficiency can be predicted on the basis of the empirical difference in the gas-phase CCS and a CCS-based resolving power definition (CCS/ΔCCS). Notably traveling wave (TWIMS) was found to operate at resolutions substantially higher than a single-peak resolving power suggested. When a CCS-based Rp definition was utilized, TWIMS was found to operate at a resolving power between 40 and 50, confirming the previous observations by Giles and co-workers. After the separation axis (and corresponding resolving power) is converted to cross section space, it is possible to effectively predict separation behavior for all mobility techniques evaluated (i.e., uniform field, trapped ion mobility, traveling wave, cyclic, and overtone instruments) using the equations described in this work. Finally, we are able to establish for the first time that the current state-of-the-art ion mobility separations benchmark at a CCS-based resolving power of >300 that is sufficient to differentiate analyte ions with CCS differences as small as 0.5%.
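The CCS-based definitions discussed above reduce to two short formulas: single-peak resolving power Rp = CCS/FWHM(CCS), and the condition that two Gaussian peaks separate when their relative CCS difference exceeds 1/Rp. A minimal sketch:

```python
# Sketch of the CCS-based definitions: Rp = CCS / FWHM(CCS), and the
# two-peak separation criterion that follows for Gaussian peaks.
def ccs_resolving_power(ccs, fwhm_ccs):
    return ccs / fwhm_ccs

def separable(ccs1, ccs2, rp):
    """True if the relative CCS difference exceeds 1/Rp."""
    delta = abs(ccs2 - ccs1) / ((ccs1 + ccs2) / 2.0)
    return delta >= 1.0 / rp

print(ccs_resolving_power(200.0, 0.65))   # 200 A^2 peak, 0.65 A^2 FWHM: Rp ≈ 308
print(separable(200.0, 201.0, 300))       # ~0.5% apart at Rp 300 -> True
```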
Reductive Augmentation of the Breast.
Chasan, Paul E
2018-06-01
Although breast reduction surgery plays an invaluable role in the correction of macromastia, it almost always results in a breast lacking in upper pole fullness and/or roundness. We present a technique of breast reduction combined with augmentation termed "reductive augmentation" to solve this problem. The technique is also extremely useful for correcting breast asymmetry, as well as revising significant pseudoptosis in the patient who has previously undergone breast augmentation with or without mastopexy. An evolution of techniques has been used to create a breast with more upper pole fullness and anterior projection in those patients desiring a more round, higher-profile appearance. Reductive augmentation is a one-stage procedure in which a breast augmentation is immediately followed by a modified superomedial pedicle breast reduction. Often, the excision of breast tissue is greater than would normally be performed with breast reduction alone. Thirty-five patients underwent reductive augmentation, of which 12 were primary surgeries and 23 were revisions. There was an average tissue removal of 255 and 227 g per breast, respectively, for the primary and revision groups. Six of the reductive augmentations were performed for gross asymmetry. Fourteen patients had a previous mastopexy, and 3 patients had a previous breast reduction. The average follow-up was 26 months. Reductive augmentation is an effective one-stage method for achieving a more round-appearing breast with upper pole fullness both in primary breast reduction candidates and in revisionary breast surgery. This technique can also be applied to those patients with significant asymmetry.
NASA Technical Reports Server (NTRS)
Glass, Christopher E.
2000-01-01
An uncoupled Computational Fluid Dynamics-Direct Simulation Monte Carlo (CFD-DSMC) technique is developed and applied to provide solutions for continuum jets interacting with rarefied external flows. The technique is based on a correlation of the appropriate Bird breakdown parameter for a transitional-rarefied condition that defines a surface within which the continuum solution is unaffected by the external flow-jet interaction. The method is applied to two problems to assess and demonstrate its validity; one of a jet interaction in the transitional-rarefied flow regime and the other in the moderately rarefied regime. Results show that the appropriate Bird breakdown surface for uncoupling the continuum and non-continuum solutions is a function of a non-dimensional parameter relating the momentum flux and collisionality between the two interacting flows. The correlation is exploited for the simulation of a jet interaction modeled for an experimental condition in the transitional-rarefied flow regime and the validity of the correlation is demonstrated. The uncoupled technique is also applied to an aerobraking flight condition for the Mars Global Surveyor spacecraft with attitude control system jet interaction. Aerodynamic yawing moment coefficients for cases without and with jet interaction at various angles-of-attack were predicted, and results from the present method compare well with values published previously. The flow field and surface properties are analyzed in some detail to describe the mechanism by which the jet interaction affects the aerodynamics.
Brassey, Charlotte A.; Gardiner, James D.
2015-01-01
Body mass is a fundamental physical property of an individual and has enormous bearing upon ecology and physiology. Generating reliable estimates for body mass is therefore a necessary step in many palaeontological studies. Whilst early reconstructions of mass in extinct species relied upon isolated skeletal elements, volumetric techniques are increasingly applied to fossils when skeletal completeness allows. We apply a new ‘alpha shapes’ (α-shapes) algorithm to volumetric mass estimation in quadrupedal mammals. α-shapes are defined by: (i) the underlying skeletal structure to which they are fitted; and (ii) the value α, determining the refinement of fit. For a given skeleton, a range of α-shapes may be fitted around the individual, spanning from very coarse to very fine. We fit α-shapes to three-dimensional models of extant mammals and calculate volumes, which are regressed against mass to generate predictive equations. Our optimal model is characterized by a high correlation coefficient and mean square error (r2=0.975, m.s.e.=0.025). When applied to the woolly mammoth (Mammuthus primigenius) and giant ground sloth (Megatherium americanum), we reconstruct masses of 3635 and 3706 kg, respectively. We consider α-shapes an improvement upon previous techniques as resulting volumes are less sensitive to uncertainties in skeletal reconstructions, and do not require manual separation of body segments from skeletons. PMID:26361559
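The volumetric workflow generalizes naturally to code: compute a shrink-wrapped volume per calibration skeleton, regress log mass on log volume, and apply the fitted equation to the fossil. The sketch below substitutes a convex hull (the coarsest α-shape) for the paper's refined α-shapes, and uses synthetic point clouds and masses throughout.

```python
# Sketch: hull volume per calibration "skeleton", log-log regression of
# mass on volume, prediction for a fossil. All data are synthetic.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(3)
log_vol, log_mass = [], []
for mass in [5, 12, 30, 70, 150, 400, 1200]:        # calibration taxa (kg)
    pts = rng.standard_normal((500, 3)) * mass ** (1 / 3)
    log_vol.append(np.log(ConvexHull(pts).volume))
    log_mass.append(np.log(mass))

slope, intercept = np.polyfit(log_vol, log_mass, 1)  # predictive equation

fossil = rng.standard_normal((500, 3)) * 40 ** (1 / 3)  # articulated model stand-in
v = ConvexHull(fossil).volume
print("predicted mass (kg):", np.exp(intercept + slope * np.log(v)))
```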
Lin, Risa J; Jaeger, Dieter
2011-05-01
In previous studies we used the technique of dynamic clamp to study how temporal modulation of inhibitory and excitatory inputs control the frequency and precise timing of spikes in neurons of the deep cerebellar nuclei (DCN). Although this technique is now widely used, it is limited to interpreting conductance inputs as being location independent; i.e., all inputs that are biologically distributed across the dendritic tree are applied to the soma. We used computer simulations of a morphologically realistic model of DCN neurons to compare the effects of purely somatic vs. distributed dendritic inputs in this cell type. We applied the same conductance stimuli used in our published experiments to the model. To simulate variability in neuronal responses to repeated stimuli, we added a somatic white current noise to reproduce subthreshold fluctuations in the membrane potential. We were able to replicate our dynamic clamp results with respect to spike rates and spike precision for different patterns of background synaptic activity. We found only minor differences in the spike pattern generation between focal or distributed input in this cell type even when strong inhibitory or excitatory bursts were applied. However, the location dependence of dynamic clamp stimuli is likely to be different for each cell type examined, and the simulation approach developed in the present study will allow a careful assessment of location dependence in all cell types.
NASA Astrophysics Data System (ADS)
Siddeq, M. M.; Rodrigues, M. A.
2015-09-01
Image compression techniques are widely used on 2D images, 2D video, 3D images, and 3D video. There are many types of compression techniques, and among the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps. (1) Transform an image by a two-level DWT followed by a DCT to produce two matrices: DC- and AC-Matrix, or low and high frequency matrix, respectively, (2) apply a second level DCT on the DC-Matrix to generate two arrays, namely nonzero-array and zero-array, (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high-frequencies generated by the second level DWT, (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search algorithm (FMS), is used to reconstruct all high-frequency matrices. The FMS-algorithm computes all compressed data probabilities by using a table of data, and then uses a binary search algorithm to find decompressed data inside the table. Thereafter, all decoded DC-values are combined with the decoded AC-coefficients in one matrix, followed by an inverse two-level DCT with two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
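Step (1) of the pipeline can be sketched with standard libraries; the later coding stages (Minimize-Matrix-Size, arithmetic coding, FMS decoding) are specific to the paper and not reproduced here.

```python
# Sketch of step (1): two-level DWT, then a DCT on the low band, giving
# a DC-matrix and the high-frequency detail bands for later coding.
import numpy as np
import pywt
from scipy.fft import dctn

image = np.random.rand(256, 256)                   # stand-in for an image

coeffs = pywt.wavedec2(image, "db2", level=2)      # two-level DWT
ll2, details = coeffs[0], coeffs[1:]               # low band + detail bands

dc_matrix = dctn(ll2, norm="ortho")                # DCT of the low band
ac_l1 = sum(float(np.abs(b).sum()) for lvl in details for b in lvl)
print(dc_matrix.shape, f"detail-band L1 mass: {ac_l1:.1f}")
```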
NASA Astrophysics Data System (ADS)
San Roman, I.; Cenarro, A. J.; Díaz-García, L. A.; López-Sanjuan, C.; Varela, J.; González Delgado, R. M.; Sánchez-Blázquez, P.; Alfaro, E. J.; Ascaso, B.; Bonoli, S.; Borlaff, A.; Castander, F. J.; Cerviño, M.; Fernández-Soto, A.; Márquez, I.; Masegosa, J.; Muniesa, D.; Pović, M.; Viironen, K.; Aguerri, J. A. L.; Benítez, N.; Broadhurst, T.; Cabrera-Caño, J.; Cepa, J.; Cristóbal-Hornillos, D.; Infante, L.; Martínez, V. J.; Moles, M.; del Olmo, A.; Perea, J.; Prada, F.; Quintana, J. M.
2018-01-01
We present a technique that permits the analysis of stellar population gradients in a relatively low-cost way compared to integral field unit (IFU) surveys. We developed a technique to analyze unresolved stellar populations of spatially resolved galaxies based on photometric multi-filter surveys. This technique allows the analysis of vastly larger samples and out to larger galactic radii. We derived spatially resolved stellar population properties and radial gradients by applying a centroidal Voronoi tessellation and performing a multicolor photometry spectral energy distribution fitting. This technique has been successfully applied to a sample of 29 massive (M⋆ > 10^10.5 M⊙) early-type galaxies at z < 0.3 from the ALHAMBRA survey. We produced detailed 2D maps of stellar population properties (age, metallicity, and extinction), which allow us to identify galactic features. Radial structures were studied, and luminosity-weighted and mass-weighted gradients were derived out to 2-3.5 Reff. We find that the spatially resolved stellar population mass, age, and metallicity are well represented by their integrated values. We find the gradients of early-type galaxies to be on average flat in age (∇log AgeL = 0.02 ± 0.06 dex/Reff) and negative in metallicity (∇[Fe/H]L = -0.09 ± 0.06 dex/Reff). Overall, the extinction gradients are flat (∇Av = -0.03 ± 0.09 mag/Reff) with a wide spread. These results are in agreement with previous studies that used standard long-slit spectroscopy, and with the most recent IFU studies. According to recent simulations, these results are consistent with a scenario where early-type galaxies were formed through major mergers and where their final gradients are driven by the older ages and higher metallicity of the accreted systems. We demonstrate the scientific potential of multi-filter photometry to explore the spatially resolved stellar populations of local galaxies and confirm previous spectroscopic trends from a complementary technique. Based on observations collected at the German-Spanish Astronomical Center, Calar Alto, jointly operated by the Max-Planck-Institut für Astronomie (MPIA) at Heidelberg and the Instituto de Astrofísica de Andalucía (CSIC).
Zimmerman, Dawn M; Dew, Terry; Douglass, Michael; Perez, Edward
2010-02-01
To report successful femoral fracture repair in a polar bear. Case report. Female polar bear (Ursus maritimus) 5 years and approximately 250 kg. A closed, complete, comminuted fracture of the distal midshaft femur was successfully reduced and stabilized using a compression plating technique with 2 specialized human femur plates offering axial, rotational, and bending support, and allowing the bone to share loads with the implant. Postoperative radiographs were obtained at 11.5 weeks, 11 months, and 24 months. Bone healing characterized by marked periosteal reaction was evident at 11 months with extensive remodeling evident at 24 months. No complications were noted. Distal mid shaft femoral fracture was reduced, stabilized, and healed in an adult polar bear with a locking plate technique using 2 plates. Previously, femoral fractures in polar bears were considered irreparable. Use of 2 plates applied with a locking plate technique can result in successful fracture repair despite large body weight and inability to restrict postoperative activity.
A spatially adaptive spectral re-ordering technique for lossless coding of hyper-spectral images
NASA Technical Reports Server (NTRS)
Memon, Nasir D.; Galatsanos, Nikolas
1995-01-01
In this paper, we propose a new approach, applicable to lossless compression of hyper-spectral images, that alleviates some limitations of linear prediction as applied to this problem. According to this approach, an adaptive re-ordering of the spectral components of each pixel is performed prior to prediction and encoding. This re-ordering adaptively exploits, on a pixel-by pixel basis, the presence of inter-band correlations for prediction. Furthermore, the proposed approach takes advantage of spatial correlations, and does not introduce any coding overhead to transmit the order of the spectral bands. This is accomplished by using the assumption that two spatially adjacent pixels are expected to have similar spectral relationships. We thus have a simple technique to exploit spectral and spatial correlations in hyper-spectral data sets, leading to compression performance improvements as compared to our previously reported techniques for lossless compression. We also look at some simple error modeling techniques for further exploiting any structure that remains in the prediction residuals prior to entropy coding.
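The core re-ordering idea can be sketched in a few lines: order the bands by their values at the already-coded neighboring pixel (so the decoder can derive the same order without side information) and predict each band from its predecessor in that order. The predictor here is deliberately simplified relative to the paper's.

```python
# Sketch: per-pixel spectral re-ordering guided by the already-coded
# left neighbor; no ordering side information needs to be transmitted.
import numpy as np

cube = np.random.randint(0, 4096, size=(64, 64, 32))  # rows, cols, bands

def residuals_for_pixel(cube, r, c):
    prev, cur = cube[r, c - 1], cube[r, c]        # left neighbor guides the order
    order = np.argsort(prev)                      # derivable by the decoder too
    res = np.empty_like(cur)
    res[order[0]] = cur[order[0]]                 # first band in order: sent as-is
    res[order[1:]] = cur[order[1:]] - cur[order[:-1]]  # predict from predecessor
    return res

res = residuals_for_pixel(cube, 10, 10)
print("sum of |residuals|:", int(np.abs(res).sum()))
```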
Kaykhaii, Massoud; Linford, Matthew R
2017-03-04
Here, we discuss the newly developed micro and solventless sample preparation techniques SPME (Solid Phase Microextraction) and MESI (Membrane Extraction with a Sorbent Interface) as applied to the qualitative and quantitative analysis of thermal oxidative degradation products of polymers and their stabilizers. The coupling of these systems to analytical instruments is also described. Our comprehensive literature search revealed that there is no previously published review article on this topic. It is shown that these extraction techniques are valuable sample preparation tools for identifying complex series of degradation products in polymers. In general, the number of products identified by traditional headspace (HS-GC-MS) is much lower than with SPME-GC-MS. MESI is particularly well suited to the detection of non-polar compounds; the number of products identified by this technique therefore does not reach that identified by SPME. Its main advantage, however, is its capability for (semi-)continuous monitoring, although it is more expensive and not yet commercialized.
NASA Technical Reports Server (NTRS)
Landgrebe, D.
1974-01-01
A broad study is described to evaluate a set of machine analysis and processing techniques applied to ERTS-1 data. Based on the analysis results in urban land use analysis and soil association mapping, together with previously reported results in general earth surface feature identification and crop species classification, a profile of the general applicability of this procedure is beginning to emerge. In the hands of a user who knows the information needed from the data and is familiar with the region to be analyzed, these methods appear capable of generating significantly useful information. When supported by preprocessing techniques such as geometric correction and temporal registration, final products readily usable by user agencies appear possible. In parallel with application, further research holds much potential for developing these techniques, both toward higher performance and in new situations not yet studied.
100 Most Influential Publications in Scoliosis Surgery.
Zhou, James Jun; Koltz, Michael T; Agarwal, Nitin; Tempel, Zachary J; Kanter, Adam S; Okonkwo, David O; Hamilton, D Kojo
2017-03-01
Bibliometric analysis. To apply the established technique of citation analysis to identify the 100 most influential articles in scoliosis surgery research published between 1900 and 2015. Previous studies have applied the technique of citation analysis to other areas of study; this is the first article to apply it to the field of scoliosis surgery. A two-step search of the Thomson Reuters Web of Science was conducted to identify all articles relevant to the field of scoliosis surgery. The top 100 articles with the most citations were identified based on analysis of titles and abstracts. Further statistical analysis was conducted to determine whether measures of author reputation and overall publication influence affected the rate at which publications were recognized and incorporated by other researchers in the field. Total citations for the final 100 publications included in the list ranged from 82 to 509. Publication dates ranged from 1954 to 2010. Most studies were published in the journal Spine (n = 63). The most frequently published topics of study were surgical techniques (n = 35) and outcomes (n = 35). Measures of author reputation (number of total studies in the top 100, number of first-author studies in the top 100) were found to have no effect on the rate at which studies were adopted by other researchers (number of years until first citation, and number of years until maximum citations). The number of citations/year a publication received was found to be negatively correlated with the delay until adoption by other researchers, indicating that more influential manuscripts attained more rapid recognition by the scientific community at large. In assembling this publication, we have strived to identify and recognize the 100 most influential articles in scoliosis surgery research from 1900 to 2015.
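A minimal sketch of the kind of statistic reported above, using made-up records rather than the study's Web of Science data:

```python
import numpy as np

# Hypothetical (year_published, total_citations, year_first_cited) records;
# the study's actual data are not reproduced here.
records = [(1954, 509, 1956), (1975, 310, 1976), (1992, 140, 1994), (2010, 82, 2011)]

cites_per_year = np.array([c / (2015 - y + 1) for y, c, _ in records])
years_to_first = np.array([f - y for y, _, f in records])

r = np.corrcoef(cites_per_year, years_to_first)[0, 1]
print(f"correlation: {r:.2f}")  # negative r: influential papers are cited sooner
```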
Symmetrical Taylor impact of glass bars
NASA Astrophysics Data System (ADS)
Murray, N. H.; Bourne, N. K.; Field, J. E.; Rosenberg, Z.
1998-07-01
Brar and Bless pioneered the use of plate impact upon bars as a technique for investigating the 1D stress loading of glass, but limited their studies to relatively modest stresses (1). We wish to extend this technique by applying VISAR and embedded stress gauge measurements to a symmetrical version of the test in which two rods impact one upon the other. Previous work in the laboratory has characterised the glass types (soda-lime and borosilicate) (2). These experiments identify the failure mechanisms from high-speed photography, and the stress and particle velocity histories are interpreted in the light of these results. The differences in response of the glasses and the relation of the fracture to the failure wave in uniaxial strain are discussed.
NASA Astrophysics Data System (ADS)
Zhou, Hongfu; Gang, Yadong; Chen, Shenghua; Wang, Yu; Xiong, Yumiao; Li, Longhui; Yin, Fangfang; Liu, Yue; Liu, Xiuli; Zeng, Shaoqun
2017-10-01
Plastic embedding is widely applied in light microscopy analyses. Previous studies have shown that embedding agents and related techniques can greatly affect the quality of biological tissue embedding and fluorescent imaging. Specifically, it is difficult to preserve endogenous fluorescence directly using currently available acidic commercial embedding resins and related embedding techniques. Here, we developed a neutral embedding resin that improved the fluorescence intensity of green fluorescent protein (GFP), yellow fluorescent protein (YFP), and DsRed without adjusting the pH value of the monomers or reactivating fluorescence in lye. The embedding resin had a high degree of polymerization, and its fluorescence preservation ratios for GFP, YFP, and DsRed were 126.5%, 155.8%, and 218.4%, respectively.
Arc-Welding Spectroscopic Monitoring based on Feature Selection and Neural Networks.
Garcia-Allende, P Beatriz; Mirapeix, Jesus; Conde, Olga M; Cobo, Adolfo; Lopez-Higuera, Jose M
2008-10-21
A new spectral processing technique designed for application in the on-line detection and classification of arc-welding defects is presented in this paper. A noninvasive fiber sensor embedded within a TIG torch collects the plasma radiation originating during the welding process. The spectral information is then processed in two consecutive stages. A compression algorithm is first applied to the data, allowing real-time analysis. The selected spectral bands are then used to feed a classification algorithm, which is demonstrated to provide efficient weld defect detection and classification. The results obtained with the proposed technique are compared to a similar processing scheme presented in previous works, showing an improvement in the performance of the monitoring system.
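A hedged sketch of the two-stage scheme, substituting generic scikit-learn components (univariate band selection and a small neural network) for the paper's particular compression and classification algorithms; the data are synthetic placeholders:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical data: rows are plasma spectra, labels mark weld defects.
rng = np.random.default_rng(0)
spectra = rng.random((200, 1024))   # 1024 spectral bands per acquisition
labels = rng.integers(0, 2, 200)    # 0 = sound weld, 1 = defect

# Stage 1: compress to a few informative bands; stage 2: classify.
model = make_pipeline(SelectKBest(f_classif, k=16),
                      MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000))
model.fit(spectra, labels)
print(model.predict(spectra[:5]))   # on-line decisions per acquisition
```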
Longenecker, R J; Galazyuk, A V
2012-11-16
Recently, prepulse inhibition of the acoustic startle reflex (ASR) became a popular technique for tinnitus assessment in laboratory animals. This method confers a significant advantage over the previously used time-consuming behavioral approaches utilizing basic mechanisms of conditioning. Although this technique has been successfully used to assess tinnitus in different laboratory animals, many of the finer details of the methodology have not been described in enough detail to be replicated, yet are critical for tinnitus assessment. Here we provide a detailed description of key procedures and methodological issues, offering guidance to newcomers learning to correctly apply gap detection techniques for tinnitus assessment in laboratory animals. The major categories of these issues include: refinement of hardware for best performance, optimization of stimulus parameters, behavioral considerations, and identification of optimal strategies for data analysis. This article is part of a Special Issue entitled: Tinnitus Neuroscience.
Portable XRF and principal component analysis for bill characterization in forensic science.
Appoloni, C R; Melquiades, F L
2014-02-01
Several modern techniques have been applied to prevent counterfeiting of money bills. The objective of this study was to demonstrate the potential of the Portable X-ray Fluorescence (PXRF) technique and the multivariate analysis method of Principal Component Analysis (PCA) for the classification of bills for use in forensic science. Bills of Dollar, Euro and Real (Brazilian currency) were measured directly at different colored regions, without any previous preparation. Spectra interpretation allowed the identification of Ca, Ti, Fe, Cu, Sr, Y, Zr and Pb. PCA analysis separated the bills into three groups and subgroups among the Brazilian currency. In conclusion, the samples were classified according to their origin, identifying the elements responsible for differentiation and the basic pigment composition. PXRF allied to multivariate discriminant methods is a promising technique for rapid and non-destructive identification of false bills in forensic science.
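A minimal sketch of the PXRF-plus-PCA workflow, assuming synthetic spectra in place of the measured bill data; projecting onto the first two principal components is what separates the currency groups:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical PXRF spectra (counts per energy channel) from bill regions.
rng = np.random.default_rng(1)
spectra = rng.random((30, 512))
currency = ['dollar'] * 10 + ['euro'] * 10 + ['real'] * 10

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(spectra))
# Plotting PC1 vs PC2 would show the currency groups as clusters.
for label, (pc1, pc2) in zip(currency, scores):
    print(label, round(pc1, 2), round(pc2, 2))
```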
DOE Office of Scientific and Technical Information (OSTI.GOV)
Counselman, C.C. III
1973-09-01
Very-long-baseline interferometry (VLBI) techniques have already been used to determine the vector separations between antennas thousands of kilometers apart to within 2 m and the directions of extragalactic radio sources to 0.1'', and to track an artificial satellite of the earth and the Apollo Lunar Rover on the surface of the Moon. The relative locations of the Apollo Lunar Surface Experiment Package (ALSEP) transmitters on the lunar surface are being measured to within 1 m, and the Moon's libration is being measured to 1'' of selenocentric arc. Attempts are under way to measure the solar gravitational deflection of radio waves more accurately than previously possible, by means of VLBI. A wide variety of scientific problems is being attacked by VLBI techniques, which may soon be two orders of magnitude more accurate than at present.
Wire Crimp Connectors Verification using Ultrasonic Inspection
NASA Technical Reports Server (NTRS)
Cramer, K. Elliott; Perey, Daniel F.; Yost, William T.
2007-01-01
The development of a new ultrasonic measurement technique to quantitatively assess wire crimp connections is discussed. The amplitude change of a compressional ultrasonic wave propagating through the junction of a crimp connector and wire is shown to correlate with the results of a destructive pull test, which previously has been used to assess crimp wire junction quality. Various crimp junction pathologies (missing wire strands, incorrect wire gauge, incomplete wire insertion in connector) are ultrasonically tested, and their results are correlated with pull tests. Results show that the ultrasonic measurement technique consistently (as evidenced by pull-testing data) predicts good crimps when ultrasonic transmission is above a certain threshold amplitude level. A physics-based model, solved by finite element analysis, describes the compressional ultrasonic wave propagation through the junction during the crimping process. This model agrees with the ultrasonic measurements to within 6%. A prototype instrument for applying the technique while wire crimps are installed is also presented.
NASA Technical Reports Server (NTRS)
Moses, J. Daniel
1989-01-01
Three improvements in photographic x-ray imaging techniques for solar astronomy are presented. The testing and calibration of a new film processor were conducted; the resulting product will allow photometric development of sounding rocket flight film immediately upon recovery at the missile range. Two fine-grained photographic films were calibrated and flight tested to provide alternative detector choices when the need for high resolution is greater than the need for high sensitivity. An analysis technique used to obtain the characteristic curve directly from photographs of UV solar spectra was applied to the analysis of soft x-ray photographic images. The resulting procedure provides a more complete and straightforward determination of the parameters describing the x-ray characteristic curve than previous techniques. These improvements fall into the category of refinements rather than revolutions, indicating the fundamental suitability of the photographic process for x-ray imaging in solar astronomy.
Optimization of chiral structures for microscale propulsion.
Keaveny, Eric E; Walker, Shawn W; Shelley, Michael J
2013-02-13
Recent advances in micro- and nanoscale fabrication techniques allow for the construction of rigid, helically shaped microswimmers that can be actuated using applied magnetic fields. These swimmers represent the first steps toward the development of microrobots for targeted drug delivery and minimally invasive surgical procedures. To assess the performance of these devices and improve on their design, we perform shape optimization computations to determine swimmer geometries that maximize speed in the direction of a given applied magnetic torque. We directly assess aspects of swimmer shapes that have been developed in previous experimental studies, including helical propellers with elongated cross sections and attached payloads. From these optimizations, we identify key improvements to existing designs that result in swimming speeds that are 70-470% of their original values.
Vibration suppression in flexible structures via the sliding-mode control approach
NASA Technical Reports Server (NTRS)
Drakunov, S.; Oezguener, Uemit
1994-01-01
Sliding mode control has recently become very popular because it makes the closed-loop system highly insensitive to external disturbances and parameter variations. Sliding algorithms for flexible structures have been used previously, but these were based on finite-dimensional models. An extension of this approach to differential-difference systems is obtained. That makes it possible to apply sliding-mode control algorithms to the variety of nondispersive flexible structures which can be described as differential-difference systems. The main idea of using this technique for dispersive structures is to reduce the order of the controlled part of the system by applying an integral transformation. We can say that the transformation 'absorbs' the dispersive properties of the flexible structure as the controlled part becomes nondispersive.
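A minimal sliding-mode sketch for a finite-dimensional double integrator, not the paper's differential-difference formulation; the gains c, K and the boundary layer eps are illustrative assumptions:

```python
import numpy as np

def smc_step(x, x_dot, c=2.0, K=5.0, eps=0.05):
    """Drive the state toward the sliding surface s = c*x + x_dot;
    on the surface, x decays exponentially regardless of (bounded)
    disturbances. A boundary layer eps replaces sign() to limit
    chattering."""
    s = c * x + x_dot
    return -K * np.clip(s / eps, -1.0, 1.0)

# Simulate x_ddot = u with Euler steps; state converges to the origin.
x, v, dt = 1.0, 0.0, 1e-3
for _ in range(5000):
    u = smc_step(x, v)
    v += u * dt
    x += v * dt
print(round(x, 4), round(v, 4))  # both approach 0
```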
Time-Series Analysis: A Cautionary Tale
NASA Technical Reports Server (NTRS)
Damadeo, Robert
2015-01-01
Time-series analysis has often been a useful tool in atmospheric science for deriving long-term trends in various atmospherically important parameters (e.g., temperature or the concentration of trace gas species). In particular, time-series analysis has been repeatedly applied to satellite datasets in order to derive the long-term trends in stratospheric ozone, which is a critical atmospheric constituent. However, many of the potential pitfalls relating to the non-uniform sampling of the datasets were often ignored and the results presented by the scientific community have been unknowingly biased. A newly developed and more robust application of this technique is applied to the Stratospheric Aerosol and Gas Experiment (SAGE) II version 7.0 ozone dataset and the previous biases and newly derived trends are presented.
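A hedged sketch of the underlying regression idea: fitting an offset, a linear trend, and an annual cycle to irregularly sampled data. The numbers are synthetic, and the sketch omits the robustness machinery the paper adds for non-uniform sampling:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 20, 300))                     # years, irregular
y = 300 - 0.5 * t + 5 * np.sin(2 * np.pi * t) + rng.normal(0, 2, t.size)

# Design matrix: offset, linear trend, annual sine/cosine terms.
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"fitted trend: {coef[1]:.2f} per year")           # close to -0.5
```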
Applications of Advanced, Waveform Based AE Techniques for Testing Composite Materials
NASA Technical Reports Server (NTRS)
Prosser, William H.
1996-01-01
Advanced, waveform based acoustic emission (AE) techniques have been previously used to evaluate damage progression in laboratory tests of composite coupons. In these tests, broad band, high fidelity acoustic sensors were used to detect signals which were then digitized and stored for analysis. Analysis techniques were based on plate mode wave propagation characteristics. This approach, more recently referred to as Modal AE, provides an enhanced capability to discriminate and eliminate noise signals from those generated by damage mechanisms. This technique also allows much more precise source location than conventional, threshold crossing arrival time determination techniques. To apply Modal AE concepts to the interpretation of AE on larger composite structures, the effects of wave propagation over larger distances and through structural complexities must be well characterized and understood. In this research, measurements were made of the attenuation of the extensional and flexural plate mode components of broad band simulated AE signals in large composite panels. As these materials have applications in a cryogenic environment, the effects of cryogenic insulation on the attenuation of plate mode AE signals were also documented.
Conceptual recurrence plots: revealing patterns in human discourse.
Angus, Daniel; Smith, Andrew; Wiles, Janet
2012-06-01
Human discourse contains a rich mixture of conceptual information. Visualization of the global and local patterns within this data stream is a complex and challenging problem. Recurrence plots are an information visualization technique that can reveal trends and features in complex time series data. The recurrence plot technique works by measuring the similarity of points in a time series to all other points in the same time series and plotting the results in two dimensions. Previous studies have applied recurrence plotting techniques to textual data; however, these approaches plot recurrence using term-based similarity rather than conceptual similarity of the text. We introduce conceptual recurrence plots, which use a model of language to measure similarity between pairs of text utterances, and the similarity of all utterances is measured and displayed. In this paper, we explore how the descriptive power of the recurrence plotting technique can be used to discover patterns of interaction across a series of conversation transcripts. The results suggest that the conceptual recurrence plotting technique is a useful tool for exploring the structure of human discourse.
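A minimal sketch of a conceptual recurrence matrix, with TF-IDF standing in for the paper's language model of conceptual similarity; the utterances are invented:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

utterances = ["how is the patient feeling today",
              "the patient feels much better",
              "let us review the test results",
              "these results look encouraging"]
vectors = TfidfVectorizer().fit_transform(utterances)
plot = cosine_similarity(vectors)    # cell (i, j): similarity of utterances i, j
print(np.round(plot, 2))             # high off-diagonal values = recurring concepts
```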
An efficient and accurate molecular alignment and docking technique using ab initio quality scoring
Füsti-Molnár, László; Merz, Kenneth M.
2008-01-01
An accurate and efficient molecular alignment technique is presented based on first principle electronic structure calculations. This new scheme maximizes quantum similarity matrices in the relative orientation of the molecules and uses Fourier transform techniques for two purposes. First, building up the numerical representation of true ab initio electronic densities and their Coulomb potentials is accelerated by the previously described Fourier transform Coulomb method. Second, the Fourier convolution technique is applied for accelerating optimizations in the translational coordinates. In order to avoid any interpolation error, the necessary analytical formulas are derived for the transformation of the ab initio wavefunctions in rotational coordinates. The results of our first implementation for a small test set are analyzed in detail and compared with published results of the literature. A new way of refinement of existing shape based alignments is also proposed by using Fourier convolutions of ab initio or other approximate electron densities. This new alignment technique is generally applicable for overlap, Coulomb, kinetic energy, etc., quantum similarity measures and can be extended to a genuine docking solution with ab initio scoring. PMID:18624561
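A hedged sketch of the Fourier-convolution step for the translational search only (rotations omitted), using a toy density rather than an ab initio one:

```python
import numpy as np

def best_translation(rho_a, rho_b):
    """The overlap of rho_b with rho_a at every integer translation is a
    circular cross-correlation, computed at once via FFTs in O(N log N)
    instead of re-scoring each shift separately. Returns the shift that
    maps rho_b back onto rho_a (modulo the grid size)."""
    corr = np.fft.ifftn(np.fft.fftn(rho_a) * np.conj(np.fft.fftn(rho_b))).real
    return np.unravel_index(np.argmax(corr), corr.shape)

a = np.zeros((32, 32, 32)); a[10:14, 8:12, 16:20] = 1.0   # toy "density"
b = np.roll(a, (3, -2, 5), axis=(0, 1, 2))                # displaced copy
print(best_translation(a, b))   # (29, 2, 27) == (-3, 2, -5) mod 32
```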
A Lagrangian meshfree method applied to linear and nonlinear elasticity.
Walker, Wade A
2017-01-01
The repeated replacement method (RRM) is a Lagrangian meshfree method which we have previously applied to the Euler equations for compressible fluid flow. In this paper we present new enhancements to RRM, and we apply the enhanced method to both linear and nonlinear elasticity. We compare the results of ten test problems to those of analytic solvers, to demonstrate that RRM can successfully simulate these elastic systems without many of the requirements of traditional numerical methods such as numerical derivatives, equation system solvers, or Riemann solvers. We also show the relationship between error and computational effort for RRM on these systems, and compare RRM to other methods to highlight its strengths and weaknesses. And to further explain the two elastic equations used in the paper, we demonstrate the mathematical procedure used to create Riemann and Sedov-Taylor solvers for them, and detail the numerical techniques needed to embody those solvers in code.
A processing architecture for associative short-term memory in electronic noses
NASA Astrophysics Data System (ADS)
Pioggia, G.; Ferro, M.; Di Francesco, F.; DeRossi, D.
2006-11-01
Electronic nose (e-nose) architectures usually consist of several modules that process various tasks such as control, data acquisition, data filtering, feature selection and pattern analysis. The heterogeneous techniques derived from chemometrics, neural networks, and fuzzy rules that are used to implement such tasks may lead to issues concerning module interconnection and cooperation. Moreover, a new learning phase is mandatory once new measurements have been added to the dataset, thus causing changes in the previously derived model. Consequently, if a loss of the previous learning occurs (catastrophic interference), real-time applications of e-noses are limited. To overcome these problems this paper presents an architecture for dynamic and efficient management of multi-transducer data processing techniques and for saving an associative short-term memory of the previously learned model. The architecture implements an artificial model of a hippocampus-based working memory, enabling the system to be ready for real-time applications. Starting from the base models available in the architecture core, dedicated models for neurons, maps and connections were tailored to an artificial olfactory system devoted to analysing olive oil. In order to verify the ability of the processing architecture in associative and short-term memory, a paired-associate learning test was applied. The avoidance of catastrophic interference was observed.
Rifai, Damhuji; Abdalla, Ahmed N.; Ali, Kharudin; Razali, Ramdan
2016-01-01
Non-destructive eddy current testing (ECT) is widely used to examine structural defects in ferromagnetic pipe in the oil and gas industry. The implementation of giant magnetoresistance (GMR) sensors as magnetic field sensors to detect changes in magnetic field continuity has increased the sensitivity of eddy current techniques in detecting material defect profiles. However, not many researchers have described in detail the structure and issues of GMR sensors and their application in eddy current techniques for nondestructive testing. This paper describes the implementation of GMR sensors in non-destructive eddy current testing. The first part of this paper describes the structure and principles of GMR sensors. The second part outlines the principles and types of eddy current testing probe that have been studied and developed by previous researchers. The influence of various parameters on the GMR measurement and the factors affecting eddy current testing are described in detail in the third part of this paper. Finally, this paper discusses the limitations of coil probes and the compensation techniques that researchers have applied in eddy current testing probes. A comprehensive review of previous studies on the application of GMR sensors in non-destructive eddy current testing is also given at the end of this paper. PMID:26927123
Neural mechanisms of cue-approach training
Bakkour, Akram; Lewis-Peacock, Jarrod A.; Poldrack, Russell A.; Schonberg, Tom
2016-01-01
Biasing choices may prove a useful way to implement behavior change. Previous work has shown that a simple training task (the cue-approach task), which does not rely on external reinforcement, can robustly influence choice behavior by biasing choice toward items that were targeted during training. In the current study, we replicate previous behavioral findings and explore the neural mechanisms underlying the shift in preferences following cue-approach training. Given recent successes in the development and application of machine learning techniques to task-based fMRI data, which have advanced understanding of the neural substrates of cognition, we sought to leverage the power of these techniques to better understand neural changes during cue-approach training that subsequently led to a shift in choice behavior. Contrary to our expectations, we found that machine learning techniques applied to fMRI data during non-reinforced training were unsuccessful in elucidating the neural mechanism underlying the behavioral effect. However, univariate analyses during training revealed that the relationship between BOLD and choices for Go items increases as training progresses compared to choices of NoGo items primarily in lateral prefrontal cortical areas. This new imaging finding suggests that preferences are shifted via differential engagement of task control networks that interact with value networks during cue-approach training. PMID:27677231
All-optical technique for measuring thermal properties of materials at static high pressure
NASA Astrophysics Data System (ADS)
Pangilinan, G. I.; Ladouceur, H. D.; Russell, T. P.
2000-10-01
The development and implementation of an all-optical technique for measuring thermal transport properties of materials at high pressure in a gem anvil cell are reported. Thermal transport properties are determined by propagating a thermal wave in a material subjected to high pressures, and measuring the temperature as a function of time using an optical sensor embedded downstream in the material. Optical beams are used to deposit energy and to measure the sensor temperature, replacing the resistive heat source and the thermocouples of previous methods. This overcomes the problems introduced by pressure-induced resistance changes and the spatial limitations inherent in previous high-pressure experimentation. Consistent with the heat conduction equation, the material's specific heat, thermal conductivity, and thermal diffusivity (κ) determine the sensor's temperature rise and its temporal profile. The all-optical technique described focuses on room-temperature thermal properties but can easily be applied to a wide temperature range (77-600 K). Measurements of thermal transport properties at pressures up to 2.0 GPa are reported, although extension to much higher pressures is feasible. The thermal properties of NaCl, a commonly used material for high-pressure experiments, are measured and shown to be consistent with those obtained using traditional methods.
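A minimal sketch of how the sensor's temporal profile constrains diffusivity, assuming an idealized 1-D semi-infinite geometry with a step surface temperature; the depth, noise level, and κ value are illustrative, not the paper's:

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

# For a step surface temperature, the rise at depth L follows
# T(t) ∝ erfc(L / (2*sqrt(kappa*t))), so the time profile fixes kappa.
L = 200e-6                                     # assumed sensor depth, m
def rise(t, amp, kappa):
    return amp * erfc(L / (2.0 * np.sqrt(kappa * t)))

t = np.linspace(1e-3, 1.0, 200)                # s
data = rise(t, 1.0, 2e-7) + np.random.default_rng(3).normal(0, 0.01, t.size)
(amp, kappa), _ = curve_fit(rise, t, data, p0=(1.0, 1e-7),
                            bounds=([0.0, 1e-9], [10.0, 1e-4]))
print(f"kappa ≈ {kappa:.2e} m^2/s")            # recovers ~2e-7
```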
Sogutmaz Ozdemir, Bahar; Budak, Hikmet
2018-01-01
Brachypodium distachyon has recently emerged as a model plant species for the grass family (Poaceae) that includes major cereal crops and forage grasses. One of the important traits of a model species is its capacity to be transformed and its ease of growth both in tissue culture and in greenhouse conditions. Hence, plant transformation technology is crucial for improvements in agricultural studies, both for the study of new genes and for the production of new transgenic plant species. In this chapter, we review an efficient tissue culture system and two different transformation systems for Brachypodium using the gene transfer techniques most commonly preferred in plant species: the microprojectile bombardment method (biolistics) and Agrobacterium-mediated transformation. In plant transformation studies, the most frequently used explant materials are immature embryos, due to their higher transformation efficiencies and regeneration capacity. However, mature embryos are available throughout the year, in contrast to immature embryos. We explain a tissue culture protocol for Brachypodium using mature embryos of selected inbred lines from our collection. Embryogenic calluses obtained from mature embryos are used to transform Brachypodium with both plant transformation techniques, which are revised from protocols previously applied in the grasses, for example by applying vacuum infiltration, introducing different wounding effects, modifying the inoculation and cocultivation steps, or optimizing bombardment parameters.
Hiromitsu, Shirasawa; Jin, Kumagai; Emiko, Sato; Katsuya, Kabashima; Yukiyo, Kumazawa; Wataru, Sato; Hiroshi, Miura; Ryuta, Nakamura; Hiroshi, Nanjo; Yoshihiro, Minamiya; Yoichi, Akagami; Yukihiro, Terada
2015-01-01
Recently, a new technique was developed for non-catalytically mixing microdroplets. In this method, an alternating-current (AC) electric field is used to promote the antigen–antibody reaction within the microdroplet. Previously, this technique had only been applied to histological examinations of flat structures, such as surgical specimens. In this study, we applied the technique for the first time to immunofluorescence staining of three-dimensional structures, specifically mammalian eggs. We diluted an antibody against microtubules from 1:1,000 to 1:16,000, and compared the chromatic degree and extent of fading across dilutions. In addition, we varied the frequency of AC electric-field mixing from 5 Hz to 46 Hz and evaluated the effect on microtubule staining. Microtubules were more strongly stained after AC electric-field mixing for only 5 minutes, even when the concentration of primary antibody was 10 times lower than in conventional methods. AC electric-field mixing also alleviated microtubule fading. At all frequencies tested, AC electric-field mixing resulted in stronger microtubule staining than in controls. There was no clear difference in microtubule staining between frequencies. These results suggest that the novel method could reduce antibody consumption and shorten immunofluorescence staining time. PMID:26477850
Higher Signal-to-Noise Measurements of Alpha-element Abundances in the M31 System
NASA Astrophysics Data System (ADS)
Escala, Ivanna; Kirby, Evan N.
2018-06-01
The stellar halo and tidal streams of M31 provide an essential counterpoint to the same structures around the Milky Way (MW). While measurements of [Fe/H] and [$\alpha$/Fe] have been made in the MW, little is known about the detailed chemical abundances of the M31 system. To make progress with existing telescopes, we expand upon the technique first presented by Kirby et al., applying spectral synthesis to medium-resolution spectroscopy at lower spectral resolution (R $\sim$ 1800) across an optical range (4100~\AA \ $<$ $\lambda$ $<$ 9100~\AA) that extends down to the blue. We have obtained deep spectra of red giants in the tidal streams, smooth halo, and disk of M31 using the DEIMOS 600ZD grating, resulting in higher signal-to-noise per spectral resolution element (S/N $\sim$ 30 \AA$^{-1}$). By applying our technique to red giant stars in MW globular clusters with higher-resolution ($R$ $\sim$ 6000) spectra in the blue (4100 - 6300 \AA), we demonstrate that our technique reproduces previous measurements derived from the red side of the optical (6300 - 9100 \AA). For the first time, we present measurements of [Fe/H] and [$\alpha$/Fe] of sufficient quality and sample size to construct quantitative models of galactic chemical evolution in the M31 system.
Quantum state sharing against the controller's cheating
NASA Astrophysics Data System (ADS)
Shi, Run-hua; Zhong, Hong; Huang, Liu-sheng
2013-08-01
Most existing QSTS schemes are equivalent to controlled teleportation, in which a designated agent (i.e., the recoverer) can recover the teleported state with the help of the controllers. However, the controller may attempt to cheat the recoverer during the phase of recovering the secret state. How can we detect this cheating? In this paper, we consider the problem of detecting the controller's cheating in Quantum State Sharing, and further propose an effective Quantum State Sharing scheme against the controller's cheating. We cleverly use Quantum Secret Sharing, Multiple Quantum States Sharing and decoy-particle techniques. In our scheme, via a previously shared entangled state, Alice can teleport multiple arbitrary multi-qubit states to Bob with the help of Charlie. Furthermore, by means of the classical information shared previously, Alice and Bob can check whether there is any cheating by Charlie. In addition, our scheme only needs to perform Bell-state and single-particle measurements, and to apply the C-NOT gate and other single-particle unitary operations. With present techniques, it is feasible to implement these necessary measurements and operations.
Motion-adaptive model-assisted compatible coding with spatiotemporal scalability
NASA Astrophysics Data System (ADS)
Lee, JaeBeom; Eleftheriadis, Alexandros
1997-01-01
We introduce the concept of motion-adaptive spatio-temporal model-assisted compatible (MA-STMAC) coding, a technique to selectively encode areas of different importance to the human eye, in terms of space and time, in moving images with the consideration of object motion. Previous STMAC was proposed based on the fact that human 'eye contact' and 'lip synchronization' are very important in person-to-person communication. Several areas, including the eyes and lips, need different types of quality, since different areas have different perceptual significance to human observers. The approach provides a better rate-distortion tradeoff than conventional image coding techniques based on MPEG-1, MPEG-2, H.261, as well as H.263. STMAC coding is applied on top of an encoder, taking full advantage of its core design. Model motion tracking in our previous STMAC approach was not automatic. The proposed MA-STMAC coding considers the motion of the human face within the STMAC concept using automatic area detection. Experimental results are given using ITU-T H.263, addressing very low bit-rate compression.
Transition zone structure beneath Ethiopia from 3-D fast marching pseudo-migration stacking
NASA Astrophysics Data System (ADS)
Benoit, M. H.; Lopez, A.; Levin, V.
2008-12-01
Several models for the origin of the Afar hotspot have been put forth over the last decade, but much ambiguity remains as to whether the hotspot tectonism found there is due to a shallow or deeply seated feature. Additionally, there has been much debate as to whether the hotspot owes its existence to a 'classic' mantle plume feature or is part of the African Superplume complex. To further understand the origin of the hotspot, we employ a new receiver function stacking method that incorporates a fast-marching three-dimensional ray tracing algorithm to improve upon existing studies of mantle transition zone structure. Using teleseismic data from the Ethiopia Broadband Seismic Experiment and the EAGLE (Ethiopia Afar Grand Lithospheric Experiment) experiment, we stack receiver functions using a three-dimensional pseudo-migration technique to examine topography on the 410 and 660 km discontinuities. Previous methods of receiver function pseudo-migration incorporated ray tracing methods that were not able to ray trace through highly complicated 3-D structure, or the ray tracing techniques only produced 3-D time perturbations associated with 1-D rays in a 3-D velocity medium. These previous techniques yielded confusing and incomplete results when applied to the exceedingly complicated mantle structure beneath Ethiopia. Indeed, comparisons of the 1-D versus 3-D ray tracing techniques show that the 1-D technique mislocated structure laterally in the mantle by over 100 km. Preliminary results using our new technique show a shallower than average 410 km discontinuity and a deeper than average 660 km discontinuity over much of the region, suggesting that the hotspot has a deep-seated origin.
Maltseva, Elena; Shapovalov, Vladimir L; Möhwald, Helmuth; Brezesinski, Gerald
2006-01-19
Phosphatidylglycerols are components of biological membranes. The phase behavior of these phospholipids has been extensively investigated. However, there is still no definite picture of the dependence of the ionization state and monolayer structure on subphase composition. The major problem of previous investigations is that none of the methods used allows the ionization degree to be obtained directly. In the present work we apply techniques developed in the past decades for Langmuir monolayers: infrared reflection absorption spectroscopy (IRRAS) as well as X-ray diffraction and reflectivity techniques, which provide straightforward information about the structure and ionization state of a L-1,2-dipalmitoylphosphatidylglycerol (DPPG) monolayer. The Gouy-Chapman model is applied to evaluate the intrinsic pKa. Therewith, the ionization degree can be determined even at low pH values. The experimental titration curves are in good agreement with theoretical curves based on the Gouy-Chapman model. The obtained intrinsic pKa amounts to 1. The ionization degree of a DPPG monolayer is independent of the monovalent cation size. In contrast, the structure of a DPPG monolayer is strongly affected by the type of divalent cations.
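A hedged sketch of the Gouy-Chapman titration calculation, solving self-consistently for the ionization degree; the area per lipid and salt concentration are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np
from scipy.optimize import brentq

kT = 4.11e-21; e = 1.602e-19; eps = 78 * 8.854e-12   # SI units, 25 C
A = 0.45e-18          # assumed area per lipid, m^2
c = 0.1 * 6.022e26    # 0.1 M monovalent salt, ions/m^3
pKa_int = 1.0         # intrinsic pKa reported in the abstract

def alpha(pH):
    """Ionization degree from Henderson-Hasselbalch evaluated at the
    surface pH, with the surface potential from the Grahame equation."""
    def residual(a):
        sigma = -a * e / A                                   # surface charge
        psi0 = (2 * kT / e) * np.arcsinh(sigma / np.sqrt(8 * eps * kT * c))
        pH_s = pH + e * psi0 / (2.303 * kT)                  # surface pH
        return a - 1.0 / (1.0 + 10 ** (pKa_int - pH_s))
    return brentq(residual, 1e-9, 1.0 - 1e-9)

for pH in (1, 2, 4, 7):
    print(pH, round(alpha(pH), 3))   # a theoretical titration curve
```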
Breast tumor malignancy modelling using evolutionary neural logic networks.
Tsakonas, Athanasios; Dounias, Georgios; Panagi, Georgia; Panourgias, Evangelia
2006-01-01
The present work proposes a computer-assisted methodology for the effective modelling of the diagnostic decision for breast tumor malignancy. The suggested approach is based on innovative hybrid computational intelligence algorithms applied to related cytological data contained in past medical records. The experimental data used in this study were gathered in the early 1990s at the University of Wisconsin, based on post-diagnostic cytological observations performed by expert medical staff. Data were properly encoded in a computer database and, accordingly, various alternative modelling techniques were applied to them in an attempt to form diagnostic models. Previous methods included standard optimisation techniques as well as artificial intelligence approaches, and a variety of related publications exists in the modern literature on the subject. In this report, a hybrid computational intelligence approach is suggested, which effectively combines modern mathematical logic principles, neural computation and genetic programming. The approach proves promising both in terms of diagnostic accuracy and generalization capabilities, and in terms of comprehensibility and practical importance for the related medical staff.
Painless needle insertion in regional anesthesia of the eye.
Vaalamo, M O; Paloheimo, M P; Nikki, P H
1995-04-01
We examined a new technique of applying topical anesthetic with cotton tip sticks to the conjunctiva before needle insertion in regional anesthesia of the eye. Oxybuprocaine 0.4% and lidocaine 4% were compared with balanced salt solution (BSS) as topical anesthetics of the conjunctiva in Study 1. Ninety patients were randomly assigned into three groups (n = 30) to receive one of the three topical anesthetics in a double-blind manner. Pain of the needle insertions was measured with a visual analog scale (VAS) score and quantitative surface electromyography (qEMG). Both oxybuprocaine and lidocaine reduced pain significantly when compared to BSS. In Study 2, with healthy volunteers, we compared our previous practice of merely applying three consecutive drops of oxybuprocaine on the conjunctiva before needle insertion to the new technique of placing additional cotton tip sticks soaked in oxybuprocaine on the conjunctiva. We found the needle insertion virtually pain free when the cotton tip sticks were added to the topical anesthesia. The use of this simple method of topical anesthesia before the eye block increases patient comfort significantly.
Visual enhancement of unmixed multispectral imagery using adaptive smoothing
Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.
2004-01-01
Adaptive smoothing (AS) has previously been proposed as a method to smooth uniform regions of an image, retain contrast edges, and enhance edge boundaries. The method is an implementation of the anisotropic diffusion process which results in a gray scale image. This paper discusses modifications to the AS method for application to multi-band data, which result in a color segmented image. The process was used to visually enhance the three most distinct abundance fraction images produced by the Lagrange constraint neural network learning-based unmixing of Landsat 7 Enhanced Thematic Mapper Plus multispectral sensor data. A mutual information-based method was applied to select the three most distinct fraction images for subsequent visualization as a red, green, and blue composite. A reported image restoration technique (partial restoration) was applied to the multispectral data to reduce unmixing error, although evaluation of the performance of this technique was beyond the scope of this paper. The modified smoothing process resulted in a color segmented image with homogeneous regions separated by sharpened, coregistered multiband edges. There was improved class separation with the segmented image, which is important for subsequent operations involving data classification.
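A minimal Perona-Malik-style sketch of the anisotropic diffusion process underlying AS, for a single band; the conductance function and constants are illustrative, and the paper's multi-band modifications are not reproduced:

```python
import numpy as np

def anisotropic_smooth(img, n_iter=20, kappa=0.1, lam=0.2):
    """Diffuse within uniform regions while the conductance g() shuts
    diffusion down across strong edges, preserving their contrast.
    lam <= 0.25 keeps the explicit scheme stable."""
    u = img.astype(float)
    for _ in range(n_iter):
        # One-sided differences toward the four neighbors.
        dn = np.roll(u, -1, 0) - u; ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u; dw = np.roll(u, 1, 1) - u
        g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```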
Evolutionary Optimization of a Geometrically Refined Truss
NASA Technical Reports Server (NTRS)
Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem, predominantly by applying traditional optimization theory. The cross-sectional area of each member is optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has previously been applied to compliant mechanism design. This technique demonstrates a method that combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation, genetic algorithms and differential evolution, to successfully optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, a multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.
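A bare-bones differential evolution sketch of the evolutionary component described above; the sphere cost function merely stands in for a truss objective (e.g. mass plus stress/deflection penalties), and all parameter values are illustrative:

```python
import numpy as np

def differential_evolution(cost, bounds, pop=30, gens=200, F=0.7, CR=0.9):
    """DE/rand/1/bin sketch: mutate with scaled difference vectors,
    crossover per coordinate, and keep a trial only if it improves."""
    rng = np.random.default_rng(0)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (pop, len(lo)))
    f = np.array([cost(v) for v in x])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = x[rng.choice([j for j in range(pop) if j != i], 3,
                                   replace=False)]
            trial = np.where(rng.random(len(lo)) < CR, a + F * (b - c), x[i])
            trial = np.clip(trial, lo, hi)
            ft = cost(trial)
            if ft < f[i]:            # greedy selection preserves the best
                x[i], f[i] = trial, ft
    return x[np.argmin(f)], f.min()

best, val = differential_evolution(lambda v: np.sum(v ** 2),
                                   np.array([[-5.0, 5.0]] * 4))
print(np.round(best, 3), round(val, 6))   # converges toward the origin
```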
Computational intelligence techniques for biological data mining: An overview
NASA Astrophysics Data System (ADS)
Faye, Ibrahima; Iqbal, Muhammad Javed; Said, Abas Md; Samir, Brahim Belhaouari
2014-10-01
Computational techniques have been successfully utilized for highly accurate analysis and modeling of the multifaceted raw biological data gathered from various genome sequencing projects. These techniques are proving much more effective in overcoming the limitations of traditional in-vitro experiments on the constantly increasing sequence data. The most critical problems that have caught the attention of researchers include, but are not limited to: accurate structure and function prediction of unknown proteins, protein subcellular localization prediction, finding protein-protein interactions, protein fold recognition, and analysis of microarray gene expression data. To solve these problems, various classification and clustering techniques using machine learning have been extensively used in the published literature. These techniques include neural network algorithms, genetic algorithms, fuzzy ARTMAP, K-Means, K-NN, SVM, rough set classifiers, decision trees and HMM-based algorithms. Major difficulties in applying the above algorithms include the limitations of previous feature encoding and selection methods in extracting the best features, increasing classification accuracy, and decreasing the running time overheads of the learning algorithms. The application of this research would be potentially useful in drug design and in the diagnosis of some diseases. This paper presents a concise overview of the well-known protein classification techniques.
NASA Astrophysics Data System (ADS)
Hall Barbosa, C.
2004-06-01
A technique had previously been developed, based on magnetic field measurements using a superconducting quantum interference device sensor, to localize steel needles lost in the human body in three dimensions. In all six cases treated so far, the technique allowed easy surgical localization of the needles with high accuracy. The technique decreases, by a large factor, the surgery time for foreign body extraction, and also reduces the generally high odds of failure. The method is accurate, noninvasive, and innocuous, and of clear clinical importance. Despite the importance of needle localization, the most prevalent foreign body in modern society is the firearm projectile (bullet), generally composed of lead, a paramagnetic material, and thus not presenting a remanent magnetic field as steel needles do. On the other hand, since lead is a good conductor, eddy current detection techniques can be employed, by applying an alternating magnetic field with the aid of excitation coils. The primary field induces eddy currents in the lead, which in turn generate a secondary magnetic field that can be detected by a magnetometer and gives information about the position and volume of the conducting foreign body. In this article we present a theoretical study for the development of a localization technique for lead bullets inside the human body. Initially, we present a model for the secondary magnetic field generated by the bullet, given a known applied field. After that, we study possible excitation systems, and propose a localization algorithm based on the detected magnetic field.
Ağır, İsmail; Aytekin, Mahmut Nedim; Başçı, Onur; Çaypınar, Barış; Erol, Bülent
2014-01-01
Background: Two main factors determine the strength of a tendon repair: the tensile strength of the material and the gripping capacity of the suture configuration. Different repair techniques and suture materials have been developed to increase the strength of repairs, but none of the techniques and suture materials seems to provide enough tensile strength with safety margins for early active mobilization. In order to overcome this problem, tendon suturing implants are being developed. We designed two different suturing implants. The aim of this study was to measure the tendon-holding capacities of these implants biomechanically and to compare them with frequently used suture techniques. Materials and Methods: In this study we used 64 sheep flexor digitorum profundus tendons. Four study groups were formed and each group had 16 tendons. We applied the model 1 and model 2 implants to the first 2 groups and the Bunnell and locking-loop techniques to the 3rd and 4th groups, respectively, using 5 Ticron sutures. Results: In 13 tendons in group 1, 15 tendons in group 2, and all tendons in groups 3 and 4, implants and sutures pulled out of the tendon in the longitudinal axis at the point of maximum load. The mean tensile strengths were the largest in group 1 and smallest in group 3. Conclusion: In conclusion, the new stainless steel tendon suturing implants applied from outside the tendons using steel wires enable a biomechanically stronger repair with less tendon trauma when compared to previously developed tendon repair implants and the traditional suturing techniques. PMID:25067965
Refinement of pressure calibration for multi-anvil press experiments
NASA Astrophysics Data System (ADS)
Ono, S.
2016-12-01
Accurate characterization of the pressure and temperature environment in high-pressure apparatuses is of essential importance when we apply laboratory data to the study of the Earth's interior. Recently, synchrotron X-ray sources have become available for high-pressure experiments, and in situ pressure calibration has become a common technique. However, this technique cannot be used in laboratory-based experiments, so even now the conventional pressure calibration is of great interest for understanding the Earth's interior. Several high-pressure phase transitions used as pressure calibrants in laboratory-based multi-anvil experiments have been investigated. Precise determinations of the phase boundaries of CaGeO3 [1], Fe2SiO4 [2], SiO2, and Zr [3] were performed with multi-anvil press or diamond anvil cell apparatuses combined with the synchrotron X-ray diffraction technique. The transition pressures in CaGeO3 (garnet-perovskite), Fe2SiO4 (alpha-gamma), and SiO2 (coesite-stishovite) were in general agreement with those reported by previous studies. However, significant discrepancies in the slopes, dP/dT, of these transitions between our and previous studies were confirmed. In the case of the Zr study [3], our experimental results elucidate the inconsistency in the transition pressure between the omega and beta phases of Zr observed in previous studies. [1] Ono et al. (2011) Phys. Chem. Minerals, 38, 735-740. [2] Ono et al. (2013) Phys. Chem. Minerals, 40, 811-816. [3] Ono & Kikegawa (2015) J. Solid State Chem., 225, 110-113.
Respiratory Artefact Removal in Forced Oscillation Measurements: A Machine Learning Approach.
Pham, Thuy T; Thamrin, Cindy; Robinson, Paul D; McEwan, Alistair L; Leong, Philip H W
2017-08-01
Respiratory artefact removal for the forced oscillation technique can be treated as an anomaly detection problem. Manual removal is currently considered the gold standard, but this approach is laborious and subjective. Most existing automated techniques used simple statistics and/or rejected anomalous data points. Unfortunately, simple statistics are insensitive to numerous artefacts, leading to low reproducibility of results. Furthermore, rejecting anomalous data points causes an imbalance between the inspiratory and expiratory contributions. From a machine learning perspective, such methods are unsupervised and can be considered simple feature extraction. We hypothesize that supervised techniques can be used to find improved features that are more discriminative and more highly correlated with the desired output. Features thus found are then used for anomaly detection by applying quartile thresholding, which rejects complete breaths if one of their features is out of range. The thresholds are determined by both saliency and performance metrics rather than the qualitative assumptions of previous works. Feature ranking indicates that our new landmark features are among the highest-scoring candidates regardless of age, across saliency criteria. F1-scores, receiver operating characteristic curves, and variability of the mean resistance metrics show that the proposed scheme outperforms previous simple feature extraction approaches. Our subject-independent detector, 1IQR-SU, demonstrated approval rates of 80.6% for adults and 98% for children, higher than existing methods. Our new features are more relevant, and our removal is objective and comparable to the manual method. This work is a critical step toward automating forced oscillation technique quality control.
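A minimal sketch of quartile thresholding as the detector name 1IQR suggests it might work; the exact rule and the multiplier k are assumptions, not the paper's specification:

```python
import numpy as np

def accept_breaths(features, k=1.0):
    """Reject a complete breath (a row) if any of its features falls
    outside [Q1 - k*IQR, Q3 + k*IQR], with quartiles computed per
    feature across all breaths. Whole-breath rejection avoids the
    inspiratory/expiratory imbalance of point-wise rejection."""
    q1, q3 = np.percentile(features, [25, 75], axis=0)
    lo, hi = q1 - k * (q3 - q1), q3 + k * (q3 - q1)
    return np.all((features >= lo) & (features <= hi), axis=1)

# Usage: keep resistance data only from accepted breaths.
feats = np.array([[5.1, 0.20], [5.3, 0.25], [5.0, 0.22], [9.8, 1.40]])
print(accept_breaths(feats))   # the outlier breath is rejected
```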
NASA Astrophysics Data System (ADS)
Sahu, Indra D.; Hustedt, Eric J.; Ghimire, Harishchandra; Inbaraj, Johnson J.; McCarrick, Robert M.; Lorigan, Gary A.
2014-12-01
An EPR membrane alignment technique was applied to measure distance and relative orientations between two spin labels on a protein oriented along the surface of the membrane. Previously we demonstrated an EPR membrane alignment technique for measuring distances and relative orientations between two spin labels using a dual TOAC-labeled integral transmembrane peptide (M2δ segment of Acetylcholine receptor) as a test system. In this study we further utilized this technique and successfully measured the distance and relative orientations between two spin labels on a membrane peripheral peptide (antimicrobial peptide magainin-2). The TOAC-labeled magainin-2 peptides were mechanically aligned using DMPC lipids on a planar quartz support, and CW-EPR spectra were recorded at specific orientations. Global analysis in combination with rigorous spectral simulation was used to simultaneously analyze data from two different sample orientations for both single- and double-labeled peptides. We measured an internitroxide distance of 15.3 Å from a dual TOAC-labeled magainin-2 peptide at positions 8 and 14 that closely matches with the 13.3 Å distance obtained from a model of the labeled magainin peptide. In addition, the angles determining the relative orientations of the two nitroxides have been determined, and the results compare favorably with molecular modeling. This study demonstrates the utility of the technique for proteins oriented along the surface of the membrane in addition to the previous results for proteins situated within the membrane bilayer.
Rusterholz, Thomas; Achermann, Peter; Dürr, Roland; Koenig, Thomas; Tarokh, Leila
2017-06-01
Investigating functional connectivity between brain networks has become an area of interest in neuroscience. Several methods for investigating connectivity have recently been developed; however, these techniques need to be applied with care. We demonstrate that global field synchronization (GFS), a global measure of phase alignment in the EEG as a function of frequency, must be applied with signal processing principles in mind in order to yield valid results. Multichannel EEG (27 derivations) was analyzed for GFS based on the complex spectrum derived by the fast Fourier transform (FFT). We examined the effect of window functions on GFS, in particular of non-rectangular windows. Applying a rectangular window when calculating the FFT revealed high GFS values for high frequencies (>15 Hz) that were highly correlated (r=0.9) with spectral power in the lower frequency range (0.75-4.5 Hz) and tracked the depth of sleep. This turned out to be spurious synchronization. With a non-rectangular window (Tukey or Hanning window) this high-frequency synchronization vanished. Both the GFS and power density spectra differed significantly between rectangular and non-rectangular windows. Previous papers using GFS typically did not specify the applied window and may have used a rectangular window function; the demonstrated impact of the window function thus raises the question of the validity of some previous findings at higher frequencies. We demonstrated that it is crucial to apply an appropriate window function when determining synchronization measures based on a spectral approach, to avoid spurious synchronization in the beta/gamma range.
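A small demonstration of the artefact, assuming a synthetic slow-wave-like signal rather than real EEG: spectral leakage from a strong low-frequency component reaches beyond 15 Hz with a rectangular window but not with a Hanning window:

```python
import numpy as np

fs, T = 128, 4                                  # Hz, seconds
t = np.arange(0, T, 1 / fs)
eeg = np.sin(2 * np.pi * 1.3 * t)               # non-integer cycles -> leakage

for name, w in [("rectangular", np.ones(t.size)),
                ("hanning", np.hanning(t.size))]:
    spec = np.abs(np.fft.rfft(eeg * w)) / t.size
    f = np.fft.rfftfreq(t.size, 1 / fs)
    leak = spec[f > 15].max() / spec.max()      # relative leakage above 15 Hz
    print(f"{name:12s} leakage above 15 Hz: {leak:.1e}")
```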
Statokinesigram normalization method.
de Oliveira, José Magalhães
2017-02-01
Stabilometry is a technique that aims to study the body sway of human subjects using a force platform. The signal obtained from this technique refers to the position of the foot-base ground-reaction vector, known as the center of pressure (CoP). The parameters calculated from the signal are used to quantify the displacement of the CoP over time; there is a large variability, both between and within subjects, which prevents the definition of normative values. The intersubject variability is related to differences between subjects in terms of their anthropometry, in conjunction with their muscle activation patterns (biomechanics); the intrasubject variability can be caused by a learning effect or fatigue. Age and foot placement on the platform are also known to influence variability. Normalization is the main method used to decrease this variability and to bring distributions of adjusted values into alignment. In 1996, O'Malley proposed three normalization techniques to eliminate the effect of age and anthropometric factors from temporal-distance parameters of gait, and these techniques were later adopted by some authors to normalize the stabilometric signal. This paper proposes a new method of normalization of stabilometric signals to be applied in balance studies. The method was applied to a data set collected in a previous study, and the results of normalized and nonnormalized signals were compared. The results showed that the new method, if used in a well-designed experiment, can eliminate undesirable correlations between the analyzed parameters and the subjects' characteristics and show only the effects of the experimental conditions.
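A hedged sketch of regression-based normalization in the spirit of the techniques discussed above (neither the exact O'Malley formulation nor the paper's new method is reproduced): the part of a parameter explained by subject covariates is removed, leaving residuals:

```python
import numpy as np

def regression_normalize(param, covariates):
    """Remove the part of a stabilometric parameter that a linear model
    explains from subject covariates (e.g. age, height), leaving
    recentered residuals that reflect experimental conditions only."""
    X = np.column_stack([np.ones(len(param)), covariates])
    beta, *_ = np.linalg.lstsq(X, param, rcond=None)
    return param - X @ beta + param.mean()

rng = np.random.default_rng(4)
height = rng.normal(1.7, 0.1, 50)
sway = 2.0 * height + rng.normal(0, 0.1, 50)      # height-driven CoP range
norm = regression_normalize(sway, height[:, None])
print(round(np.corrcoef(norm, height)[0, 1], 3))  # near 0 after normalization
```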
Analysis of the Impedance Resonance of Piezoelectric Multi-Fiber Composite Stacks
NASA Technical Reports Server (NTRS)
Sherrit, S.; Djrbashian, A.; Bradford, S C
2013-01-01
Multi-Fiber Composites™ (MFCs) produced by Smart Materials Corp. behave essentially like thin planar stacks in which each piezoelectric layer is composed of a multitude of fibers. We investigate the suitability of applying previously published inversion techniques for the impedance resonances of monolithic co-fired piezoelectric stacks to the MFC™ in order to determine the complex material constants from the impedance data. The impedance equations examined in this paper are those based on a previously published derivation. The utility of resonance techniques for inverting the impedance data to determine the small-signal complex material constants is presented for a series of MFCs. The technique was applied to actuators with different geometries, and the real coefficients were found to be similar to within the changes in boundary conditions caused by the change of geometry. The scatter in the imaginary coefficients was found to be larger. The technique was also applied to the same actuator type manufactured in different batches with some design changes in the non-active portion of the actuator, and differences in the dielectric and electromechanical coupling between the two batches were easily measurable. It is interesting to note that the strain predicted by small-signal impedance analysis is much lower than high-field strains. Since the model is based on material properties rather than circuit constants, it could be used for the direct evaluation of specific aging or degradation mechanisms in the actuator as well as for batch sorting and adjustment of manufacturing processes.
Rapid Structured Volume Grid Smoothing and Adaption Technique
NASA Technical Reports Server (NTRS)
Alter, Stephen J.
2006-01-01
A rapid, structured volume grid smoothing and adaption technique, based on signal processing methods, was developed and applied to the Shuttle Orbiter at hypervelocity flight conditions in support of the Columbia Accident Investigation. Because of the fast pace of the investigation, computational aerothermodynamicists, applying hypersonic viscous flow solving computational fluid dynamic (CFD) codes, refined and enhanced a grid for an undamaged baseline vehicle to assess a variety of damage scenarios. Of the many methods available to modify a structured grid, most are time-consuming and require significant user interaction. By casting the grid data into different coordinate systems, specifically two computational coordinates with arclength as the third coordinate, signal processing methods are used for filtering the data [Taubin, CG v/29 1995]. Using a reverse transformation, the processed data are used to smooth the Cartesian coordinates of the structured grids. By coupling the signal processing method with existing grid operations within the Volume Grid Manipulator tool, problems related to grid smoothing are solved efficiently and with minimal user interaction. Examples of these smoothing operations are illustrated for reductions in grid stretching and volume grid adaptation. In each of these examples, other techniques existed at the time of the Columbia accident, but the incorporation of signal processing techniques reduced the time to perform the corrections by nearly 60%. This reduction in time to perform the corrections therefore enabled the assessment of approximately twice the number of damage scenarios than previously possible during the allocated investigation time.
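The cited filter [Taubin, CG v/29 1995] alternates a smoothing (shrinking) Laplacian pass with an inflating pass, so features are smoothed without overall shrinkage; a minimal 1-D Python sketch of that idea, applied to a single grid-line coordinate signal (an illustration of the filtering step only, not the Volume Grid Manipulator implementation):

import numpy as np

def taubin_smooth(x, lam=0.33, mu=-0.34, n_iter=50):
    # alternate lambda (smooth) and mu (inflate) Laplacian steps; endpoints fixed
    x = x.astype(float)
    for _ in range(n_iter):
        for f in (lam, mu):
            lap = np.zeros_like(x)
            lap[1:-1] = 0.5*(x[:-2] + x[2:]) - x[1:-1]   # discrete Laplacian
            x = x + f * lap
    return x

s = np.linspace(0.0, 1.0, 101)
noisy = s**2 + 0.01*np.random.default_rng(0).standard_normal(101)
smooth = taubin_smooth(noisy)

Choosing mu slightly larger in magnitude than lam is what distinguishes this from plain Laplacian smoothing, which would shrink the curve after many iterations.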
Discriminative Projection Selection Based Face Image Hashing
NASA Astrophysics Data System (ADS)
Karabat, Cagatay; Erdogan, Hakan
Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
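A minimal sketch of the row-selection idea (synthetic features; the dimensions, the simple two-class Fisher ratio, and the top-k rule are assumptions, not the authors' exact formulation):

import numpy as np

rng = np.random.default_rng(0)
d, n_rows, k = 128, 256, 64
X_user = rng.normal(0.5, 1.0, (20, d))       # enrolled user's face features (synthetic)
X_others = rng.normal(0.0, 1.0, (200, d))    # other users' features (synthetic)
P = rng.standard_normal((n_rows, d))         # candidate random projection rows

def fisher_ratio(zu, zo):
    # between-class separation over within-class spread for one projection
    return (zu.mean() - zo.mean())**2 / (zu.var() + zo.var() + 1e-12)

scores = np.array([fisher_ratio(X_user @ p, X_others @ p) for p in P])
P_user = P[np.argsort(scores)[-k:]]          # user-dependent discriminative row subset
projections = X_user @ P_user.T              # quantization (e.g., the bimodal GMM step) follows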
Sakata, Shinichiro; Hallett, Kerrod B; Brandon, Matthew S; McBride, Craig A
2009-11-01
Endotracheal tube stabilization in patients with facial burns is crucial and often challenging. We present a simple method of securing an endotracheal tube using two orthodontic brackets bonded to the maxillary central incisor teeth and a 0.08'' stainless steel ligature wire. Our technique is less traumatic than previously described methods and makes oral hygiene easier to maintain. This anchorage system takes 5 min to apply and can be removed on the ward without the need for a general anaesthetic.
Concurrent engineering: Spacecraft and mission operations system design
NASA Technical Reports Server (NTRS)
Landshof, J. A.; Harvey, R. J.; Marshall, M. H.
1994-01-01
Despite our awareness of the mission design process, spacecraft historically have been designed and developed by one team and then turned over as a system to the Mission Operations organization to operate on-orbit. By applying concurrent engineering techniques and envisioning operability as an essential characteristic of spacecraft design, tradeoffs can be made in the overall mission design to minimize mission lifetime cost. Lessons learned from previous spacecraft missions will be described, as well as the implementation of concurrent mission operations and spacecraft engineering for the Near Earth Asteroid Rendezvous (NEAR) program.
Component pattern analysis of chemicals using multispectral THz imaging system
NASA Astrophysics Data System (ADS)
Kawase, Kodo; Ogawa, Yuichi; Watanabe, Yuki
2004-04-01
We have developed a novel basic technology for terahertz (THz) imaging that allows detection and identification of chemicals by introducing component spatial pattern analysis. The spatial distributions of the chemicals were obtained from terahertz multispectral transillumination images, using absorption spectra previously measured with a widely tunable THz-wave parametric oscillator. Furthermore, we applied this technique to the detection and identification of illicit drugs concealed in envelopes. The samples we used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.
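Component spatial pattern analysis amounts to per-pixel linear unmixing of the multispectral absorbance images against the previously measured reference spectra; a minimal sketch (the shapes, simulated data, and non-negative least-squares solver are illustrative assumptions):

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_freq, n_chem, H, W = 6, 3, 32, 32
E = np.abs(rng.normal(1.0, 0.3, (n_freq, n_chem)))    # reference absorption spectra
truth = np.abs(rng.normal(0.5, 0.2, (n_chem, H, W)))  # hidden component maps
A = np.einsum('fc,chw->fhw', E, truth)                # simulated absorbance images

maps = np.zeros((n_chem, H, W))
for i in range(H):
    for j in range(W):
        maps[:, i, j], _ = nnls(E, A[:, i, j])        # per-pixel component amounts

The recovered maps give one spatial image per chemical, which is how a concealed substance can be located and identified at the same time.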
An implicit flux-split algorithm to calculate hypersonic flowfields in chemical equilibrium
NASA Technical Reports Server (NTRS)
Palmer, Grant
1987-01-01
An implicit, finite-difference, shock-capturing algorithm that calculates inviscid, hypersonic flows in chemical equilibrium is presented. The flux vectors and flux Jacobians are differenced using a first-order, flux-split technique. The equilibrium composition of the gas is determined by minimizing the Gibbs free energy at every node point. The code is validated by comparing results over an axisymmetric hemisphere against previously published results. The algorithm is also applied to more practical configurations, and its accuracy, stability, and versatility have been promising.
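The differencing idea behind flux splitting can be seen in a scalar analogue: split the wave speed into non-negative and non-positive parts and difference each in its upwind direction. This simplification is mine, not the paper's (which splits the full flux Jacobians of the equilibrium-gas equations):

import numpy as np

def flux_split_step(u, a, dx, dt):
    # first-order upwind flux splitting for u_t + a u_x = 0
    ap, am = max(a, 0.0), min(a, 0.0)
    un = u.copy()
    un[1:-1] -= dt/dx * (ap*(u[1:-1] - u[:-2]) + am*(u[2:] - u[1:-1]))
    return un

x = np.linspace(0.0, 1.0, 101)
u = np.exp(-200.0*(x - 0.3)**2)      # initial pulse
for _ in range(40):
    u = flux_split_step(u, a=1.0, dx=x[1]-x[0], dt=0.005)   # CFL = 0.5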
NASA Astrophysics Data System (ADS)
Murray, Natalie; Bourne, Neil; Field, John
1997-07-01
Brar and Bless pioneered the use of plate impact upon bars as a technique for investigating the 1D stress loading of glass. We wish to extend this technique by applying VISAR and embedded stress gauge measurements to a symmetrical version of the test. In this configuration two rods impact one upon the other in a symmetrical version of the Taylor test geometry, in which the impact is perfectly rigid in the centre-of-mass frame. Previous work in the laboratory has characterised the three glass types (float, borosilicate and a high-density lead glass). These experiments will identify the 1D stress failure mechanisms from high-speed photography, and the stress and particle velocity histories will be interpreted in the light of these results. The differences in response of the three glasses will be highlighted.
Logistic regression applied to natural hazards: rare event logistic regression with replications
NASA Astrophysics Data System (ADS)
Guns, M.; Vanacker, V.
2012-06-01
Statistical analysis of natural hazards needs particular attention, as most of these phenomena are rare events. This study shows that ordinary rare event logistic regression, as it is now commonly used in geomorphologic studies, does not always lead to a robust detection of controlling factors, as the results can be strongly sample-dependent. In this paper, we introduce some concepts of Monte Carlo simulation into rare event logistic regression. This technique, so-called rare event logistic regression with replications, combines the strengths of probabilistic and statistical methods and overcomes some of the limitations of previous developments through robust variable selection. The technique was developed here for the analysis of landslide controlling factors, but the concept is widely applicable to statistical analyses of natural hazards.
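A minimal sketch of the replication idea (synthetic data; the control-subsampling scheme and the sign-stability rule are assumptions, not the authors' exact procedure): fit many rare-event logistic regressions on resampled data and keep only predictors whose effect is stable across replications.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 5000, 6
X = rng.standard_normal((n, p))
logit = -4.0 + 1.2*X[:, 0] - 0.8*X[:, 1]            # only two true controlling factors
y = rng.random(n) < 1.0/(1.0 + np.exp(-logit))      # rare events (a few percent)

events, controls = np.where(y)[0], np.where(~y)[0]
signs = []
for _ in range(200):                                # replications
    sub = rng.choice(controls, size=4*len(events), replace=False)
    idx = np.concatenate([events, sub])
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    signs.append(np.sign(m.coef_[0]))
stability = np.abs(np.mean(signs, axis=0))          # 1.0 = same sign in every replication
print(np.round(stability, 2))                       # robust factors score near 1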
Preparing Colorful Astronomical Images III: Cosmetic Cleaning
NASA Astrophysics Data System (ADS)
Frattare, L. M.; Levay, Z. G.
2003-12-01
We present cosmetic cleaning techniques for use with mainstream graphics software (Adobe Photoshop) to produce presentation-quality images and illustrations from astronomical data. These techniques have been used on numerous images from the Hubble Space Telescope when producing photographic, print and web-based products for news, education and public presentation as well as illustrations for technical publication. We expand on a previous paper to discuss the treatment of various detector-attributed artifacts such as cosmic rays, chip seams, gaps, optical ghosts, diffraction spikes and the like. While Photoshop is not intended for quantitative analysis of full dynamic range data (as are IRAF or IDL, for example), we have had much success applying Photoshop's numerous, versatile tools to final presentation images. Other pixel-to-pixel applications such as filter smoothing and global noise reduction will be discussed.
Mofid, Omid; Mobayen, Saleh
2018-01-01
Adaptive control methods are developed for stability and tracking control of flight systems in the presence of parametric uncertainties. This paper offers a design technique of adaptive sliding mode control (ASMC) for finite-time stabilization of unmanned aerial vehicle (UAV) systems with parametric uncertainties. Applying the Lyapunov stability concept and the idea of finite-time convergence, the proposed control method guarantees that the states of the quad-rotor UAV converge to the origin in finite time. Furthermore, an adaptive tuning scheme is proposed to estimate the unknown parameters of the quad-rotor UAV online. Finally, simulation results are presented to demonstrate the effectiveness of the proposed technique compared to previous methods. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Non-Contact Thrust Stand Calibration Method for Repetitively-Pulsed Electric Thrusters
NASA Technical Reports Server (NTRS)
Wong, Andrea R.; Toftul, Alexandra; Polzin, Kurt A.; Pearson, J. Boise
2011-01-01
A thrust stand calibration technique for use in testing repetitively-pulsed electric thrusters for in-space propulsion has been developed and tested using a modified hanging pendulum thrust stand. In the implementation of this technique, current pulses are applied to a solenoidal coil to produce a pulsed magnetic field that acts against the magnetic field produced by a permanent magnet mounted to the thrust stand pendulum arm. The force on the magnet is applied in this non-contact manner, with the entire pulsed force transferred to the pendulum arm through a piezoelectric force transducer to provide a time-accurate force measurement. Modeling of the pendulum arm dynamics reveals that, after an initial transient in thrust stand motion, the quasisteady average deflection of the thrust stand arm away from the unforced or zero position can be related to the average applied force through a simple linear Hooke's law relationship. Modeling demonstrates that this technique is universally applicable except when the pulsing period is increased to the point where it approaches the period of natural thrust stand motion. Calibration data were obtained using a modified hanging pendulum thrust stand previously used for steady-state thrust measurements. Data were obtained for varying impulse bit at constant pulse frequency and for varying pulse frequency. The two data sets exhibit excellent quantitative agreement with each other, as the constants relating average deflection and average thrust match within the errors on the linear regression curve fits of the data. Quantitatively, the error on the calibration coefficient is roughly 1% of the coefficient value.
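The reduction from calibration data to a single coefficient is a straight-line fit of average deflection against average applied force; a small sketch with synthetic numbers (values and units are illustrative only):

import numpy as np

rng = np.random.default_rng(1)
avg_force = np.linspace(1.0, 10.0, 8)                   # mN, averaged pulsed force
deflection = 0.80*avg_force + rng.normal(0, 0.05, 8)    # mm, measured deflection

coeffs, cov = np.polyfit(avg_force, deflection, 1, cov=True)
k, k_err = coeffs[0], np.sqrt(cov[0, 0])                # slope and its standard error
print(f"calibration coefficient: {k:.3f} +/- {k_err:.3f} mm/mN")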
Phospholipid Fatty Acid Analysis: Past, Present and Future
NASA Astrophysics Data System (ADS)
Findlay, R. H.
2008-12-01
With their 1980 publication, Bobbie and White initiated the use of phospholipid fatty acids for the study of microbial communities. This method, integrated with a previously published biomass assay based on the colorimetric detection of orthophosphate liberated from phospholipids, provided the first quantitative method for determining microbial community structure. The method is based on a quantitative extraction of lipids from the sample matrix, isolation of the phospholipids, conversion of the phospholipid fatty acids to their corresponding fatty acid methyl esters (known by the acronym FAME) and the separation, identification and quantification of the FAME by gas chromatography. Early laboratory and field studies focused on correlating individual fatty acids to particular groups of microorganisms. Subsequent improvements to the methodology include reduced solvent volumes for extractions, improved sensitivity in the detection of orthophosphate and the use of solid phase extraction technology. Improvements in the field of gas chromatography also increased accessibility of the technique, and it has been widely applied to water, sediment, soil and aerosol samples. Whole cell fatty acid analysis, a related but not equivalent technique, is currently used for phenotypic characterization in bacterial species descriptions and is the basis for a commercial, rapid bacterial identification system. In the early 1990s, the application of multivariate statistical analysis, first cluster analysis and then principal component analysis, further improved the usefulness of the technique and allowed the development of a functional group approach to the interpretation of phospholipid fatty acid profiles. Statistical techniques currently applied to the analysis of phospholipid fatty acid profiles include constrained ordinations and neural networks. Using redundancy analysis, a form of constrained ordination, we have recently shown that both cation concentration and dissolved organic matter (DOM) quality are determinants of microbial community structure in forested headwater streams. One of the most exciting recent developments in phospholipid fatty acid analysis is the application of compound specific stable isotope analysis. We are currently applying this technique to stream sediments to help determine which microorganisms are involved in the initial processing of DOM, and the technique promises to be a useful tool for assigning ecological function to microbial populations.
Spatial-temporal event detection in climate parameter imagery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKenna, Sean Andrew; Gutierrez, Karen A.
Previously developed techniques that comprise statistical parametric mapping, with applications focused on human brain imaging, are examined and tested here for new applications in anomaly detection within remotely-sensed imagery. Two approaches to analysis are developed: online, regression-based anomaly detection and conditional differences. These approaches are applied to two example spatial-temporal data sets: data simulated with a Gaussian field deformation approach and weekly NDVI images derived from global satellite coverage. Results indicate that anomalies can be identified in spatial-temporal data with the regression-based approach. Additionally, La Niña and El Niño climatic conditions are used as different stimuli applied to the earth, and this comparison shows that El Niño conditions lead to significant decreases in NDVI in both the Amazon Basin and in Southern India.
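A minimal sketch of the regression-based detection idea (synthetic image stack; the per-pixel linear trend model and the z-score threshold are assumptions, not the exact statistical parametric mapping formulation):

import numpy as np

rng = np.random.default_rng(0)
T, H, W = 52, 20, 20
t = np.arange(T, dtype=float)
stack = 0.02*t[:, None, None] + rng.normal(0, 0.1, (T, H, W))  # weekly images
stack[40, 5, 5] += 1.0                                         # implanted anomaly

A = np.column_stack([t, np.ones_like(t)])                      # per-pixel linear model
flat = stack.reshape(T, -1)
coef, *_ = np.linalg.lstsq(A, flat, rcond=None)
resid = flat - A @ coef
z = resid / (resid.std(axis=0, ddof=2) + 1e-12)                # standardized residuals
print(np.argwhere(np.abs(z) > 4.0))                            # (week, pixel) detections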
Cataife, Guido
2014-03-01
We propose the use of previously developed small area estimation techniques to monitor obesity and dietary habits in developing countries and apply the model to the city of Rio de Janeiro. We estimate obesity prevalence rates at the census tract level through a combinatorial optimization spatial microsimulation model that matches body mass index and socio-demographic data in Brazil's 2008-9 family expenditure survey with Census 2010 socio-demographic data. Obesity ranges from 8% to 25% in most areas and affects the poor almost as much as the rich. Male and female obesity rates are uncorrelated at the small area level. The model is an effective tool for understanding the complexity of the problem and for aiding policy design. © 2013 Published by Elsevier Ltd.
VizieR Online Data Catalog: HARPS timeseries data for HD41248 (Jenkins+, 2014)
NASA Astrophysics Data System (ADS)
Jenkins, J. S.; Tuomi, M.
2017-05-01
We modeled the HARPS radial velocities of HD 41248 by adopting the analysis techniques and the statistical model applied in Tuomi et al. (2014, arXiv:1405.2016). This model contains Keplerian signals, a linear trend, a moving average component with exponential smoothing, and linear correlations with activity indices, namely BIS, FWHM, and the chromospheric activity S index. We applied our statistical model outlined above to the full data set of radial velocities for HD 41248, combining the previously published data in Jenkins et al. (2013ApJ...771...41J) with the newly published data in Santos et al. (2014, J/A+A/566/A35), giving rise to a total time series of 223 HARPS (Mayor et al. 2003Msngr.114...20M) velocities. (1 data file).
On quasi-thermal fluctuations near the plasma frequency in the outer plasmasphere: A case study
NASA Technical Reports Server (NTRS)
Lund, E. J.; Labelle, J.; Treumann, R. A.
1994-01-01
We present a derivation of the quasi-thermal electrostatic fluctuation power spectrum in a multi-Maxwellian plasma and show sample calculated spectra. We then apply this theory, which has been successfully applied in other regions of space, to spectra from two Active Magnetospheric Particle Tracer Explorer/Ion Release Module (AMPTE/IRM) passes through the duskside plasmasphere. We show that the plasma line that is often seen in this region is usually quasi-thermal in origin. We obtain a refined estimate of the plasma frequency and infer a cold electron temperature which is consistent within a factor of 2 with both models and previous measurements by other techniques, but closer investigation reveals that details of the plasma line cannot be explained with the usual model of two isotropic Maxwellians.
Bujar, Magdalena; McAuslane, Neil; Walker, Stuart R; Salek, Sam
2017-01-01
Introduction: Although pharmaceutical companies, regulatory authorities, and health technology assessment (HTA) agencies have been increasingly using decision-making frameworks, it is not certain whether these enable better quality decision making. This could be addressed by formally evaluating the quality of the decision-making process within those organizations. The aim of this literature review was to identify current techniques (tools, questionnaires, surveys, and studies) for measuring the quality of the decision-making process across the three stakeholders. Methods: Using MEDLINE, Web of Knowledge, and other Internet-based search engines, a literature review was performed to systematically identify techniques for assessing quality of decision making in medicines development, regulatory review, and HTA. A structured search was applied using key words, and a secondary review was carried out. In addition, the measurement properties of each technique were assessed and compared. Ten Quality Decision-Making Practices (QDMPs) developed previously were then used as a framework for the evaluation of the techniques identified in the review. Due to the variation in the studies identified, meta-analysis was inappropriate. Results: This review identified 13 techniques, of which 7 were developed specifically to assess decision making in medicines development, regulatory review, or HTA; 2 examined corporate decision making, and 4 general decision making. Regarding how closely each technique conformed to the 10 QDMPs, the 13 techniques assessed a median of 6 QDMPs, with a mode of 3 QDMPs. Only 2 techniques evaluated all 10 QDMPs, namely the Organizational IQ and the Quality of Decision Making Orientation Scheme (QoDoS), of which only one, QoDoS, could be applied to assess decision making by both individuals and organizations, and it possessed the generalizability to capture issues relevant to companies as well as regulatory authorities. Conclusion: This review confirmed a general paucity of research in this area, particularly regarding the development and systematic application of techniques for evaluating quality decision making, with no consensus around a gold standard. This review identified QoDoS as the most promising available technique for assessing decision making in the lifecycle of medicines, and the next steps would be to further test its validity, sensitivity, and reliability.
Microfluidic perfusion culture system for multilayer artery tissue models.
Yamagishi, Yuka; Masuda, Taisuke; Matsusaki, Michiya; Akashi, Mitsuru; Yokoyama, Utako; Arai, Fumihito
2014-11-01
We described an assembly technique and perfusion culture system for constructing artery tissue models. This technique differs from previous studies in that it does not require a solid biodegradable scaffold; using sheet-like tissues, it therefore allows the facile fabrication of tubular tissues that can be used as models. The fabricated artery tissue models had a multilayer structure. The assembly technique and perfusion culture system were applicable to many different sizes of fabricated arteries. The shape of the fabricated artery tissue models was maintained by the perfusion culture system; furthermore, the system reproduced the in vivo environment and allowed mechanical stimulation of the arteries. The multilayer structure of the artery tissue model was observed using fluorescent dyes. The equivalent Young's modulus was measured by applying internal pressure to the multilayer tubular tissues. The aim of this study was to determine whether the fabricated artery tissue models maintained their mechanical properties as they developed. We demonstrated both the rapid fabrication of multilayer tubular tissues that can be used as model arteries and the measurement of their equivalent Young's modulus in a suitable perfusion culture environment.
Submucosal surgery: novel interventions in the third space.
Teitelbaum, Ezra N; Swanstrom, Lee L
2018-02-01
Traditional surgeries involve accessing body cavities, such as the abdomen and thorax, via incisions that divide skin and muscle. These operations result in postoperative pain and convalescence, and a risk of complications such as wound infection and hernia. The development of flexible endoscopy allowed diseases as varied as gastrointestinal bleeding and colon adenomas to be treated without incisions, but this technique is restricted by its endoluminal nature. A novel category of surgical endoscopic procedures has recently been developed that uses flexible endoscopic techniques to enter and access the submucosa of the gastrointestinal tract. Through this approach, the advantages of incisionless endoscopy can be applied to areas of the body that previously could only be reached with surgery. This Review introduces this new class of interventions by describing two examples of such submucosal surgeries for the treatment of benign gastrointestinal disease: per-oral endoscopic myotomy and per-oral pyloromyotomy. The approach to pre-procedure patient evaluation, operative technique, and the published outcomes are discussed, as well as potential future applications of similar techniques and procedures in this so-called third space. Copyright © 2018 Elsevier Ltd. All rights reserved.
Detection of symmetric homoclinic orbits to saddle-centres in reversible systems
NASA Astrophysics Data System (ADS)
Yagasaki, Kazuyuki; Wagenknecht, Thomas
2006-02-01
We present a perturbation technique for the detection of symmetric homoclinic orbits to saddle-centre equilibria in reversible systems of ordinary differential equations. We assume that the unperturbed system has primary, symmetric homoclinic orbits, which may be either isolated or appear in a family, and use an idea similar to that of Melnikov’s method to detect homoclinic orbits in their neighbourhood. This technique also allows us to identify bifurcations of unperturbed or perturbed, symmetric homoclinic orbits. Our technique is of importance in applications such as nonlinear optics and water waves since homoclinic orbits to saddle-centre equilibria describe embedded solitons (ESs) in systems of partial differential equations representing physical models, and except for special cases their existence has been previously studied only numerically using shooting methods and continuation techniques. We apply the general theory to two examples, a four-dimensional system describing ESs in nonlinear optical media and a six-dimensional system which can possess a one-parameter family of symmetric homoclinic orbits in the unperturbed case. For these examples, the analysis is compared with numerical computations and an excellent agreement between both results is found.
Assessing Spontaneous Combustion Instability with Nonlinear Time Series Analysis
NASA Technical Reports Server (NTRS)
Eberhart, C. J.; Casiano, M. J.
2015-01-01
Considerable interest lies in the ability to characterize the onset of spontaneous instabilities within liquid propellant rocket engine (LPRE) combustion devices. Linear techniques, such as fast Fourier transforms, various correlation parameters, and critical damping parameters, have been used at great length for over fifty years. Recently, nonlinear time series methods have been applied to deduce information pertaining to instability incipiency hidden in seemingly stochastic combustion noise. A technique commonly used in the biological sciences, known as Multifractal Detrended Fluctuation Analysis, has been extended to the combustion dynamics field and is introduced here as a data analysis approach complementary to linear ones. Building on this, a modified technique is leveraged to extract artifacts of impending combustion instability that present themselves prior to growth to limit-cycle amplitudes. The analysis is demonstrated on data from J-2X gas generator testing during which a distinct spontaneous instability was observed. Comparisons are made to previous work wherein the data were characterized using linear approaches. Verification of the technique is performed by examining idealized signals and comparing two separate, independently developed tools.
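A minimal sketch of the core MFDFA computation (the standard algorithm; scale and q choices below are illustrative): the signal is integrated into a profile, segment-wise polynomial detrending gives fluctuation functions, and their log-log slopes yield the generalized Hurst exponents whose spread indicates multifractality.

import numpy as np

def mfdfa(x, scales, qs, order=1):
    y = np.cumsum(x - np.mean(x))                       # profile
    Fq = np.zeros((len(qs), len(scales)))
    for si, s in enumerate(scales):
        f2 = []
        for v in range(len(y) // s):                    # non-overlapping segments
            seg = y[v*s:(v+1)*s]
            t = np.arange(s)
            fit = np.polyval(np.polyfit(t, seg, order), t)
            f2.append(np.mean((seg - fit)**2))          # detrended variance
        f2 = np.asarray(f2)
        for qi, q in enumerate(qs):
            if q == 0:
                Fq[qi, si] = np.exp(0.5*np.mean(np.log(f2 + 1e-30)))
            else:
                Fq[qi, si] = np.mean(f2**(q/2.0))**(1.0/q)
    # generalized Hurst exponents h(q) from log-log slopes
    return np.array([np.polyfit(np.log(scales), np.log(Fq[qi]), 1)[0]
                     for qi in range(len(qs))])

noise = np.random.default_rng(0).standard_normal(8192)
h = mfdfa(noise, scales=[16, 32, 64, 128, 256], qs=[-4, -2, 2, 4])
print(np.round(h, 2))   # near 0.5 for all q: monofractal white noise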
Techniques for High Contrast Imaging in Multi-Star Systems II: Multi-Star Wavefront Control
NASA Technical Reports Server (NTRS)
Sirbu, D.; Thomas, S.; Belikov, R.
2017-01-01
Direct imaging of exoplanets represents a challenge for astronomical instrumentation due to the high contrast ratio and small angular separation between the host star and the faint planet. Multi-star systems pose additional challenges for coronagraphic instruments because of the diffraction and aberration leakage introduced by the additional stars, and as a result are not planned to be on direct imaging target lists. Multi-star wavefront control (MSWC) is a technique that uses a coronagraphic instrument's deformable mirror (DM) to create high-contrast regions in the focal plane in the presence of multiple stars. Our previous paper introduced the Super-Nyquist Wavefront Control (SNWC) technique that uses a diffraction grating to enable the DM to generate high-contrast regions beyond the nominal controllable region. These two techniques can be combined to generate high-contrast regions for multi-star systems at any angular separation. As a case study, a high-contrast wavefront control (WC) simulation that applies these techniques shows that the habitable region of the Alpha Centauri system can be imaged, reaching 8×10⁻⁹ mean contrast in 10% broadband light in one-sided dark holes spanning 1.6-5.5 λ/D.
Vibrato in Singing Voice: The Link between Source-Filter and Sinusoidal Models
NASA Astrophysics Data System (ADS)
Arroabarren, Ixone; Carlosena, Alfonso
2004-12-01
The application of inverse filtering techniques for high-quality singing voice analysis/synthesis is discussed. In the context of source-filter models, inverse filtering provides a noninvasive method to extract the voice source, and thus to study voice quality. Although this approach is widely used in speech synthesis, this is not the case in singing voice. Several studies have proved that inverse filtering techniques fail in the case of singing voice, the reasons being unclear. In order to shed light on this problem, we will consider here an additional feature of singing voice, not present in speech: the vibrato. Vibrato has been traditionally studied by sinusoidal modeling. As an alternative, we will introduce here a novel noninteractive source filter model that incorporates the mechanisms of vibrato generation. This model will also allow the comparison of the results produced by inverse filtering techniques and by sinusoidal modeling, as they apply to singing voice and not to speech. In this way, the limitations of these conventional techniques, described in previous literature, will be explained. Both synthetic signals and singer recordings are used to validate and compare the techniques presented in the paper.
Damage identification in beams using speckle shearography and an optimal spatial sampling
NASA Astrophysics Data System (ADS)
Mininni, M.; Gabriele, S.; Lopes, H.; Araújo dos Santos, J. V.
2016-10-01
Over the years, the derivatives of modal displacement and rotation fields have been used to localize damage in beams. Usually, the derivatives are computed by applying finite differences. The finite differences propagate and amplify the errors that exist in real measurements, and thus, it is necessary to minimize this problem in order to get reliable damage localizations. A way to decrease the propagation and amplification of the errors is to select an optimal spatial sampling. This paper presents a technique where an optimal spatial sampling of modal rotation fields is computed and used to obtain the modal curvatures. Experimental measurements of modal rotation fields of a beam with single and multiple damages are obtained with shearography, which is an optical technique allowing the measurement of full-fields. These measurements are used to test the validity of the optimal sampling technique for the improvement of damage localization in real structures. An investigation on the ability of a model updating technique to quantify the damage is also reported. The model updating technique is defined by the variations of measured natural frequencies and measured modal rotations and aims at calibrating the values of the second moment of area in the damaged areas, which were previously localized.
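The trade-off that the optimal sampling resolves can be seen in a small sketch: differentiating the measured rotations with finite differences amplifies measurement noise as the sampling step shrinks, while truncation error grows as the step widens, so an intermediate spatial sampling minimizes the total error (synthetic rotation field; the noise level is illustrative):

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 1001)
rotation = np.sin(2*np.pi*x) + rng.normal(0, 1e-3, x.size)   # noisy modal rotations
exact = lambda xs: 2*np.pi*np.cos(2*np.pi*xs)                # exact derivative (curvature)

for step in (1, 5, 25, 125):
    xs, rs = x[::step], rotation[::step]
    curv = np.gradient(rs, xs)                               # finite-difference curvature
    rmse = np.sqrt(np.mean((curv - exact(xs))**2))
    print(step, round(rmse, 4))                              # smallest error at a mid-size step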
Tendon 'turnover lengthening' technique.
Cerovac, S; Miranda, B H
2013-11-01
Tendon defect reconstruction is amongst the most technically challenging areas in hand surgery. Techniques for reconstructing tendon substance deficiency include lengthening, grafting, two-stage reconstruction and tendon transfers; however, each is associated with unique challenges over and above direct repair. We describe a novel 'turnover lengthening' technique for hand tendons that has been successfully applied in several cases, including the attritional flexor and traumatic extensor tendon ruptures of the two presented patients, in whom primary tenorrhaphy was not possible. In both cases a good post-operative outcome was achieved: the patients returned to normal activities of daily living and were discharged 12 weeks post-operatively. Our technique avoids the additional morbidity and complications associated with grafting, transfers and two-stage reconstructions. It is quick, simple and reproducible for defects not exceeding 3-4 cm, provides a means of immediate one-stage reconstruction with no secondary donor site morbidity, and does not compromise salvage by tendon transfer and/or two-stage reconstruction in cases of failure. To our knowledge, no such technique has previously been described for reconstructing such hand tendon defects. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
Canine transurethral laser prostatectomy using a rotational technique
NASA Astrophysics Data System (ADS)
Cromeens, Douglas M.; Johnson, Douglas E.
1995-05-01
Conventional radical prostatectomy in the dog has historically been attended by an unacceptably high incidence of urinary incontinence (80-100%). Ablation of the prostate can be accomplished in the dog by transurethral irradiation of the prostate with the Nd:YAG laser and a laterally deflecting fiber. Exposure has ranged between 40 and 60 watts for 60 seconds at 4 fixed locations. Although prostatectomies performed with the above-described technique offer a significant advantage over conventional prostatectomies, the high power density at each location can result in small submucosal explosions ('popcorn effect') that increase the potential for bleeding and rupture of the prostatic capsule. We describe a new technique in which the energy is applied continuously by a laser fiber rotating around a central point. Delivering 40 watts of Nd:YAG energy for 4 minutes using a new angle-delivery device (Urotek™), we produced results comparable to those of other previously reported techniques in the canine model, with two added advantages: (1) a more even application of heat, resulting in no 'popcorn' effect, and (2) a more reliably predictable area of coagulative necrosis within a given axial plane. This technique should provide additional safety for the veterinary surgeon performing visual laser ablation of the prostate in the dog.
Sheppard, P S; Stevenson, J M; Graham, R B
2016-05-01
The objective of the present study was to determine if there is a sex-based difference in lifting technique across increasing-load conditions. Eleven male and 14 female participants (n = 25) with no previous history of low back disorder participated in the study. Participants completed freestyle, symmetric lifts of a box with handles from the floor to a table positioned at 50% of their height for five trials under three load conditions (10%, 20%, and 30% of their individual maximum isometric back strength). Joint kinematic data for the ankle, knee, hip, and lumbar and thoracic spine were collected using a two-camera Optotrak motion capture system. Joint angles were calculated using a three-dimensional Euler rotation sequence. Principal component analysis (PCA) and single component reconstruction were applied to assess differences in lifting technique across the entire waveforms. Thirty-two PCs were retained from the five joints and three axes in accordance with the 90% trace criterion. Repeated-measures ANOVA with a mixed design revealed no significant effect of sex for any of the PCs. This is contrary to previous research that used discrete points on the lifting curve to analyze sex-based differences, but agrees with more recent research using more complex analysis techniques. There was a significant effect of load on lifting technique for five PCs of the lower limb (PC1 of ankle flexion, knee flexion, and knee adduction, as well as PC2 and PC3 of hip flexion) (p < 0.005). However, there was no significant effect of load on the thoracic and lumbar spine. It was concluded that when load is standardized to individual back strength characteristics, males and females adopted a similar lifting technique. In addition, as load increased male and female participants changed their lifting technique in a similar manner. Copyright © 2016. Published by Elsevier Ltd.
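A minimal sketch of the waveform-level PCA with a 90% trace criterion and single component reconstruction (synthetic joint-angle curves; the shapes and sizes are illustrative):

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_trials, n_samples = 75, 101
t = np.linspace(0.0, 1.0, n_samples)
X = (np.outer(rng.normal(1.0, 0.2, n_trials), np.sin(np.pi*t))   # trial-varying waveform
     + rng.normal(0, 0.05, (n_trials, n_samples)))               # measurement noise

pca = PCA().fit(X)
n_keep = np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.90) + 1
scores = pca.transform(X)                                        # PC scores per trial

# single component reconstruction: the mean curve perturbed along one PC only,
# which visualizes what that PC captures across the whole lifting waveform
k = 0
recon = pca.mean_ + np.outer(scores[:, k], pca.components_[k])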
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, S.H.; Klinzing, G.E.; Cheng, Y.S.
1984-12-01
An in-situ technique for measuring hydrogen concentration (partial pressure) had been previously used to measure static properties (hydrogen solubilities, vapor pressures of hydrocarbons, etc.). Because of its good precision (2% relative error) and relatively short response time (9.7 to 2.0 seconds at 589 to 728K), the technique was successfully applied to a dynamic study of hydrogenation reactions in this work. Furthermore, the technique is to be tested for industrial uses. The hydrogen/1-methylnaphthalene system was experimentally investigated in a one-liter autoclave equipped with a magnetically driven stirrer and temperature controlling devices. Catalytic hydrogenation of 1-methylnaphthalene was studied in the presence of a sulfided Co-Mo/Al2O3 catalyst. In addition, the vapor/liquid equilibrium relationship was determined using this technique. Hydrogenation reaction runs were performed at temperatures of 644.1, 658.0 and 672.0K and pressures up to 9.0 MPa. The ring hydrogenation, resulting in 1- and 5-methyltetralin, was found to be the dominant reaction. This is in agreement with the cited literature. Effects of hydrogen partial pressure, operating temperature, as well as presulfided catalyst are also investigated and discussed in this work. The vapor pressure of 1-methylnaphthalene was measured over a temperature range of 555.2 to 672.0K. The results are in good agreement with literature data. Measurements of hydrogen solubility in 1-methylnaphthalene were conducted over temperature and pressure ranges of 598 to 670K and 5.2 to 8.8 MPa, respectively. Similar to previously reported results, the hydrogen solubility increases with increasing temperature when total pressure is held constant. A linear relation is found between the hydrogen solubility and hydrogen partial pressure. 21 refs., 13 figs., 10 tabs.
NASA Astrophysics Data System (ADS)
Pérez Ramos, A.; Robleda Prieto, G.
2016-06-01
An indoor Gothic apse provides a complex environment for virtualization using imaging techniques due to its light conditions and architecture. Light entering through large windows, in combination with the apse shape, makes it difficult to find proper conditions for photo capture for reconstruction purposes. Thus, documentation techniques based on images are usually replaced by scanning techniques inside churches. Nevertheless, the need to use Terrestrial Laser Scanning (TLS) for indoor virtualization means a significant increase in the final surveying cost. So, in most cases, scanning techniques are used to generate dense point clouds. However, many Terrestrial Laser Scanner (TLS) internal cameras are not able to provide colour images or cannot reach the image quality that can be obtained using an external camera. Therefore, external quality images are often used to build high resolution textures of these models. This paper aims to solve the problem posed by virtualizing indoor Gothic churches, making that task more affordable using exclusively image-based techniques. It reviews a previously proposed methodology using a DSLR camera with an 18-135 mm lens commonly used for close range photogrammetry, and adds another using an HDR 360° camera with four lenses that makes the task easier and faster in comparison with the previous one. Fieldwork and office-work are simplified. The proposed methodology provides photographs in good enough conditions for building point clouds and textured meshes. Furthermore, the same imaging resources can be used to generate more deliverables without extra time spent in the field, for instance, immersive virtual tours. In order to verify the usefulness of the method, it was applied to the apse, since the apse is considered one of the most complex elements of Gothic churches, and it could be extended to the whole building.
Combining heuristic and statistical techniques in landslide hazard assessments
NASA Astrophysics Data System (ADS)
Cepeda, Jose; Schwendtner, Barbara; Quan, Byron; Nadim, Farrokh; Diaz, Manuel; Molina, Giovanni
2014-05-01
As a contribution to the Global Assessment Report 2013 - GAR2013, coordinated by the United Nations International Strategy for Disaster Reduction - UNISDR, a drill-down exercise for landslide hazard assessment was carried out by entering the results of both heuristic and statistical techniques into a new but simple combination rule. The data available for this evaluation included landslide inventories, both historical and event-based. In addition to the application of a heuristic method used in the previous editions of GAR, the availability of inventories motivated the use of statistical methods. The heuristic technique is largely based on the Mora & Vahrson method, which estimates hazard as the product of susceptibility and triggering factors, where classes are weighted based on expert judgment and experience. Two statistical methods were also applied: the landslide index method, which estimates weights of the classes for the susceptibility and triggering factors based on the evidence provided by the density of landslides in each class of the factors; and the weights of evidence method, which extends the previous technique to include both positive and negative evidence of landslide occurrence in the estimation of weights for the classes. One key aspect during the hazard evaluation was the decision on the methodology to be chosen for the final assessment. Instead of opting for a single methodology, it was decided to combine the results of the three implemented techniques using a combination rule based on a normalization of the results of each method. The hazard evaluation was performed for both earthquake- and rainfall-induced landslides. The country chosen for the drill-down exercise was El Salvador. The results indicate that highest hazard levels are concentrated along the central volcanic chain and at the centre of the northern mountains.
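The abstract describes the combination only as a normalization-based rule; one plausible reading, sketched here (the min-max normalization and equal weighting are assumptions), rescales each method's hazard map to a common 0-1 range before averaging:

import numpy as np

def combine(hazard_maps):
    # hazard_maps: 2-D arrays on a common grid, one per method
    # (heuristic, landslide index, weights of evidence)
    normed = [(m - m.min()) / (m.max() - m.min() + 1e-12) for m in hazard_maps]
    return np.mean(normed, axis=0)

rng = np.random.default_rng(0)
maps = [rng.random((50, 50)) * s for s in (10.0, 1.0, 0.2)]   # different native scales
combined = combine(maps)    # each method now contributes on a comparable scale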
Alshamlan, Hala; Badr, Ghada; Alohali, Yousef
2015-01-01
An artificial bee colony (ABC) is a relatively recent swarm intelligence optimization approach. In this paper, we propose the first attempt at applying the ABC algorithm to the analysis of a microarray gene expression profile. In addition, we propose an innovative feature selection algorithm, minimum redundancy maximum relevance (mRMR), and combine it with an ABC algorithm, mRMR-ABC, to select informative genes from microarray profiles. The new approach is based on a support vector machine (SVM) algorithm to measure the classification accuracy for selected genes. We evaluate the performance of the proposed mRMR-ABC algorithm by conducting extensive experiments on six binary and multiclass gene expression microarray datasets. Furthermore, we compare our proposed mRMR-ABC algorithm with previously known techniques. We reimplemented two of these techniques for the sake of a fair comparison using the same parameters. These two techniques are mRMR combined with a genetic algorithm (mRMR-GA) and mRMR combined with a particle swarm optimization algorithm (mRMR-PSO). The experimental results prove that the proposed mRMR-ABC algorithm achieves accurate classification performance using a small number of predictive genes when tested on all datasets and compared to previously suggested methods. This shows that mRMR-ABC is a promising approach for solving gene selection and cancer classification problems. PMID:25961028
Mapping Topological Magnetization and Magnetic Skyrmions
NASA Astrophysics Data System (ADS)
Chess, Jordan J.
A 2014 study by the US Department of Energy conducted at Lawrence Berkeley National Laboratory estimated that U.S. data centers consumed 70 billion kWh of electricity. This represents about 1.8% of the total U.S. electricity consumption. Putting this in perspective, 70 billion kWh of electricity is the equivalent of roughly 8 big nuclear reactors, or around double the nation's solar panel output. Developing new memory technologies capable of reducing this power consumption would be greatly beneficial as our demand for connectivity increases in the future. One newly emerging candidate for an information carrier in low power memory devices is the magnetic skyrmion. This magnetic texture is characterized by its specific non-trivial topology, giving it particle-like characteristics. Recent experimental work has shown that these skyrmions can be stabilized at room temperature and moved with extremely low electrical current densities. This rapidly developing field requires new measurement techniques capable of determining the topology of these textures at greater speed than previous approaches. In this dissertation, I give a brief introduction to the magnetic structures found in Fe/Gd multilayered systems. I then present newly developed techniques that streamline the analysis of Lorentz Transmission Electron Microscopy (LTEM) data. These techniques are then applied to further the understanding of the magnetic properties of these Fe/Gd based multilayered systems. This dissertation includes previously published and unpublished co-authored material.
Poulson, S.R.; Sullivan, A.B.
2009-01-01
The upper Klamath River experiences a cyanobacterial algal bloom and poor water quality during the summer. Diel chemical and isotopic techniques have been employed in order to investigate the rates of biogeochemical processes. Four diel measurements of field parameters (temperature, pH, dissolved oxygen concentrations, and alkalinity) and stable isotope compositions (dissolved oxygen δ18O and dissolved inorganic carbon δ13C) have been performed between June 2007 and August 2008. Significant diel variations of pH, dissolved oxygen (DO) concentration, and DO-δ18O were observed, due to varying rates of primary productivity vs. respiration vs. gas exchange with air. Diel cycles are generally similar to those previously observed in river systems, although there are also differences compared to previous studies. In large part, these different diel signatures are the result of the low turbulence of the upper Klamath River. Observed changes in the diel signatures vs. sampling date reflect the evolution of the status of the algal bloom over the course of the summer. Results indicate the potential utility of applying diel chemical and stable isotope techniques to investigate the rates of biogeochemical cycles in slow-moving rivers, lakes, and reservoirs, but also illustrate the increased complexity of stable isotope dynamics in these low-turbulence systems compared to well-mixed aquatic systems. © 2009 Elsevier B.V.
D'Agnese, F. A.; Faunt, C.C.; Keith, Turner A.
1996-01-01
The recharge and discharge components of the Death Valley regional groundwater flow system were defined by remote sensing and GIS techniques that integrated disparate data types to develop a spatially complex representation of near-surface hydrological processes. Image classification methods were applied to multispectral satellite data to produce a vegetation map. This map provided a basis for subsequent evapotranspiration and infiltration estimations. The vegetation map was combined with ancillary data in a GIS to delineate different types of wetlands, phreatophytes and wet playa areas. Existing evapotranspiration-rate estimates were then used to calculate discharge volumes for these areas. A previously used empirical method of groundwater recharge estimation was modified by GIS methods to incorporate data describing soil-moisture conditions, and a recharge potential map was produced. These discharge and recharge maps were readily converted to data arrays for numerical modelling codes. Inverse parameter estimation techniques also used these data to evaluate the reliability and sensitivity of estimated values.
A Review of Computational Methods for Finding Non-Coding RNA Genes
Abbas, Qaisar; Raza, Syed Mansoor; Biyabani, Azizuddin Ahmed; Jaffar, Muhammad Arfan
2016-01-01
Finding non-coding RNA (ncRNA) genes has emerged over the past few years as a cutting-edge trend in bioinformatics. There are numerous computational intelligence (CI) challenges in the annotation and interpretation of ncRNAs because the work requires domain-related expert knowledge of CI techniques. Moreover, there are many classes predicted yet not experimentally verified by researchers. Recently, researchers have applied many CI methods to predict the classes of ncRNAs. However, the diverse CI approaches lack a definitive classification framework to take advantage of past studies. A few review papers have attempted to summarize CI approaches, but focused on particular methodological viewpoints. Accordingly, in this article, we summarize, in greater detail than previously available, the CI techniques for finding ncRNA genes. We differentiate from the existing bodies of research and discuss concisely the technical merits of the various techniques. Lastly, we review the limitations of ncRNA gene-finding CI methods with a point of view towards the development of new computational tools. PMID:27918472
Stratoudaki, Theodosia; Ellwood, Robert; Sharples, Steve; Clark, Matthew; Somekh, Michael G; Collison, Ian J
2011-04-01
A dual frequency mixing technique has been developed for measuring velocity changes caused by material nonlinearity. The technique is based on the parametric interaction between two surface acoustic waves (SAWs): a low frequency pump SAW generated by a transducer and a high frequency probe SAW generated and detected using laser ultrasonics. The pump SAW stresses the material under the probe SAW. The stress (typically <5 MPa) is controlled by varying the timing between the pump and probe waves. The nonlinear interaction is measured as a phase modulation of the probe SAW and equated to a velocity change. The velocity-stress relationship is used as a measure of material nonlinearity. Experiments were conducted to observe the pump-probe interaction by changing the pump frequency and to compare the nonlinear responses of aluminum and fused silica. The experiments showed that these two materials have opposite nonlinear responses, consistent with previously published data. The technique could be applied to lifetime prediction of engineered components by measuring changes in nonlinear response caused by fatigue.
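The conversion from measured phase modulation to a velocity change follows from the phase accumulated over the propagation path, phi = 2*pi*f*L/v; a small sketch (the frequency, path length, and velocity values below are illustrative only, not the experimental parameters):

import numpy as np

def dv_from_dphi(dphi_rad, freq_hz, path_m, v0):
    phi0 = 2*np.pi*freq_hz*path_m/v0    # total phase over the path at velocity v0
    return -dphi_rad/phi0 * v0          # first-order: dv/v = -dphi/phi0

# e.g. an 80 MHz probe SAW over 5 mm of aluminum (Rayleigh speed ~2.9 km/s)
print(dv_from_dphi(0.05, 80e6, 5e-3, 2900.0))   # velocity change in m/s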
NASA Astrophysics Data System (ADS)
Fabian, K.
2012-10-01
In a recent article, Mitra et al. (2011) propose a modified IRM technique to identify the symmetry of magnetic anisotropy in single domain particle ensembles. They apply this technique to support an earlier suggestion that single domain grains in young mid-ocean ridge basalts (MORB) exhibit multiaxial anisotropy. Here it is shown that the design of their measurement is flawed, in that they do not take into account that the outcome essentially depends on the initial demagnetization state of the sample before the experiment, and on the coercivity distribution of the sample. Because all MORB specimens measured by Mitra et al. (2011) carried their original NRM, which closely resembles a thermally demagnetized state, their measurements first of all reflect the coercivity distributions and domain states of the samples, and contain little or no information about the symmetry of the magnetic anisotropy. All arguments previously put forward in favour of a dominant uniaxial anisotropy in MORB are therefore still valid.
Height-selective etching for regrowth of self-aligned contacts using MBE
NASA Astrophysics Data System (ADS)
Burek, G. J.; Wistey, M. A.; Singisetti, U.; Nelson, A.; Thibeault, B. J.; Bank, S. R.; Rodwell, M. J. W.; Gossard, A. C.
2009-03-01
Advanced III-V transistors require unprecedented low-resistance contacts in order to simultaneously scale bandwidth, f_max and f_t with the physical active region [M.J.W. Rodwell, M. Le, B. Brar, in: Proceedings of the IEEE, 96, 2008, p. 748]. Low-resistance contacts have been previously demonstrated using molecular beam epitaxy (MBE), which provides active doping above 4×10¹⁹ cm⁻³ and permits in-situ metal deposition for the lowest resistances [U. Singisetti, M.A. Wistey, J.D. Zimmerman, B.J. Thibeault, M.J.W. Rodwell, A.C. Gossard, S.R. Bank, Appl. Phys. Lett., submitted]. But MBE is a blanket deposition technique, and applying MBE regrowth to deep-submicron lateral device dimensions is difficult even with advanced lithography techniques. We present a simple method for selectively etching undesired regrowth from the gate or mesa of a III-V MOSFET or laser, resulting in self-aligned source/drain contacts regardless of the device dimensions. This turns MBE into an effectively selective area growth technique.
Introducing TreeCollapse: a novel greedy algorithm to solve the cophylogeny reconstruction problem.
Drinkwater, Benjamin; Charleston, Michael A
2014-01-01
Cophylogeny mapping is used to uncover deep coevolutionary associations between two or more phylogenetic histories at a macro coevolutionary scale. As cophylogeny mapping is NP-hard, this technique relies heavily on heuristics to solve all but the most trivial cases. One notable approach utilises a metaheuristic to search only a subset of the exponential number of fixed node orderings possible for the phylogenetic histories in question. This is of particular interest as it is the only known heuristic that guarantees biologically feasible solutions. This has enabled research to focus on larger coevolutionary systems, such as coevolutionary associations between figs and their pollinator wasps, including over 200 taxa. Although able to converge on solutions for problem instances of this size, a reduction from the current cubic running time is required to handle larger systems, such as Wolbachia and their insect hosts. Rather than solving this underlying problem optimally, this work presents a greedy algorithm called TreeCollapse, which uses common topological patterns to recover an approximation of the coevolutionary history where the internal node ordering is fixed. This approach offers a significant speed-up compared to previous methods, running in linear time. This algorithm has been applied to over 100 well-known coevolutionary systems, converging on Pareto optimal solutions in over 68% of test cases, even where in some cases the Pareto optimal solution has not previously been recoverable. Further, while TreeCollapse applies a local search technique, it can guarantee solutions are biologically feasible, making this the fastest method that can provide such a guarantee. As a result, we argue that the newly proposed algorithm is a valuable addition to the field of coevolutionary research. Not only does it offer a significantly faster method to estimate the cost of cophylogeny mappings, but by using this approach in conjunction with existing heuristics, it can assist in recovering a larger subset of the Pareto front than has previously been possible.
Airglow studies using observations made with the GLO instrument on the Space Shuttle
NASA Astrophysics Data System (ADS)
Alfaro Suzan, Ana Luisa
2009-12-01
Our understanding of Earth's upper atmosphere has advanced tremendously over the last few decades due to our enhanced capacity for making remote observations from space. Space based observations of Earth's daytime and nighttime airglow emissions are very good examples of such enhancements to our knowledge. The terrestrial nighttime airglow, or nightglow, is barely discernible to the naked eye as viewed from Earth's surface. However, it is clearly visible from space, as most astronauts have been amazed to report. The nightglow consists of emissions of ultraviolet, visible and near-infrared radiation from electronically excited oxygen molecules and atoms and vibrationally excited OH molecules. It mostly emanates from a 10 km thick layer located about 100 km above Earth's surface. Various photochemical models have been proposed to explain the production of the emitting species. In this study some unique observations of Earth's nightglow made with the GLO instrument on NASA's Space Shuttle are analyzed to assess the proposed excitation models. Previous analyses of these observations by Broadfoot and Gardner (2001), performed using a 1-D inversion technique, have indicated significant spatial structures and have raised serious questions about the proposed nightglow excitation models. However, the observation of such strong spatial structures calls into serious question the appropriateness of the adopted 1-D inversion technique and, therefore, the validity of the conclusions. In this study a more rigorous 2-D tomographic inversion technique is developed and applied to the available GLO data to determine if some of the apparent discrepancies can be explained by the limitations of the previously applied 1-D inversion approach. The results of this study still reveal some potentially serious inadequacies in the proposed photochemical models. However, alternative explanations for the discrepancies between the GLO observations and the model expectations are suggested. These include upper atmospheric tidal effects and possible errors in the pointing of the GLO instrument.
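The core of any tomographic inversion of limb-scan brightnesses is a regularized linear solve relating line-of-sight integrals to volume emission rates on a 2-D grid. The sketch below is a generic Tikhonov-regularized inversion, not the GLO pipeline; the geometry matrix and data are synthetic stand-ins.

```python
import numpy as np

def tomographic_inversion(A, b, lam=1e-2):
    """Tikhonov-regularized least-squares inversion.

    A   : (n_lines, n_cells) path-length matrix through the 2-D grid
    b   : (n_lines,) observed line-of-sight brightnesses
    lam : regularization weight suppressing unphysical oscillations
    """
    n = A.shape[1]
    # Solve (A^T A + lam*I) x = A^T b for the volume emission rates
    lhs = A.T @ A + lam * np.eye(n)
    rhs = A.T @ b
    return np.linalg.solve(lhs, rhs)

# Tiny synthetic example: 3 grid cells observed along 4 lines of sight
A = np.array([[1.0, 0.5, 0.0],
              [0.2, 1.0, 0.3],
              [0.0, 0.7, 1.0],
              [0.5, 0.5, 0.5]])
x_true = np.array([2.0, 1.0, 3.0])
b = A @ x_true
print(tomographic_inversion(A, b))  # recovers approximately x_true
```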
NASA Astrophysics Data System (ADS)
Schwehr, K.; Driscoll, N.; Tauxe, L.
2004-12-01
Categorizing sediment history using Anisotropy of Magnetic Susceptibility (AMS) has been a long-standing challenge for the paleomagnetic community. The goal is to have a robust test of shape fabrics that allows workers to classify sediments in terms of being primary depositional fabric, deposition influenced by currents, or altered fabrics. Additionally, it is important to be able to distinguish altered fabrics into such classes as slumps, crypto-slumps, drilling deformation (such as fluidization from drilling mud and flow-in), and so forth. To try to bring a unified test scheme to AMS interpretation, we are using three example test cases. First is the Owens Lake OL92 core, which has provided previous workers with a long core example in a lacustrine environment. OL92 was classified into five zones based on visual observations of the core photographs. Using these groupings, Rosenbaum et al. (2000) were able to use the deflection of the minimum eigenvector from vertical to classify each individual AMS sample. Second is the Ardath Shale location, which provides a clear case of a lithified outcrop-scale problem that showed success with the bootstrap eigenvalue test. Finally is the Gaviota Slide in the Santa Barbara Basin, which makes use of 1-2 meter gravity cores. Previous work has focused on Flinn, Jelinek, and bootstrap plots of eigenvalues. In supporting the shape characterization we have also used a 95% confidence F-Test based on Hext's statistics. We have extended the F-Test into a promising new plot of the F12 and F23 confidence values, which shows good clustering in early tests. We have applied all of the available techniques to the above three test cases and will present how each technique either succeeds or fails. Since each method has its own strengths and weaknesses, it is clear that the community needs to carefully evaluate which technique should be applied to any particular problem.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farmand, Maryam
2013-05-19
The development of better energy conversion and storage devices, such as fuel cells and batteries, is crucial for reduction of our global carbon footprint and improving the quality of the air we breathe. However, both of these technologies face important challenges. The development of lower cost and better electrode materials, which are more durable and allow more control over the electrochemical reactions occurring at the electrode/electrolyte interface, is perhaps most important for meeting these challenges. Hence, full characterization of the electrochemical processes that occur at the electrodes is vital for intelligent design of more energy efficient electrodes. X-ray absorption spectroscopy (XAS) is a short-range order, element specific technique that can be utilized to probe the processes occurring at operating electrode surfaces, as well as for studying the amorphous materials and nano-particles making up the electrodes. It has been increasingly used in recent years to study fuel cell catalysts through application of the Δμ XANES technique, in combination with the more traditional X-ray Absorption Near Edge Structure (XANES) and Extended X-ray Absorption Fine Structure (EXAFS) techniques. The Δμ XANES data analysis technique, previously developed and applied to heterogeneous catalysts and fuel cell electrocatalysts by the GWU group, was extended in this work to provide for the first time space resolved adsorbate coverages on both electrodes of a direct methanol fuel cell. Even more importantly, the Δμ technique was applied for the first time to battery relevant materials, where bulk properties such as the oxidation state and local geometry of a cathode are followed.
Computer-assisted expert case definition in electronic health records.
Walker, Alexander M; Zhou, Xiaofeng; Ananthakrishnan, Ashwin N; Weiss, Lisa S; Shen, Rongjun; Sobel, Rachel E; Bate, Andrew; Reynolds, Robert F
2016-02-01
To describe how computer-assisted presentation of case data can lead experts to infer machine-implementable rules for case definition in electronic health records. As an illustration, the technique has been applied to obtain a definition of acute liver dysfunction (ALD) in persons with inflammatory bowel disease (IBD). The technique consists of repeatedly sampling new batches of case candidates from an enriched pool of persons meeting presumed minimal inclusion criteria, classifying the candidates by a machine-implementable candidate rule and by a human expert, and then updating the rule so that it captures new distinctions introduced by the expert. Iteration continues until an update results in an acceptably small number of changes to form a final case definition. The technique was applied to structured data and terms derived by natural language processing from text records in 29,336 adults with IBD. Over three rounds, the technique led to rules with increasing predictive value, as the experts identified exceptions, and increasing sensitivity, as the experts identified missing inclusion criteria. In the final rule, inclusion and exclusion terms were often keyed to an ALD onset date. When compared against clinical review in an independent test round, the derived final case definition had a sensitivity of 92% and a positive predictive value of 79%. An iterative technique of machine-supported expert review can yield a case definition that accommodates available data, incorporates pre-existing medical knowledge, is transparent and is open to continuous improvement. The expert updates to rules may be informative in themselves. In this limited setting, the final case definition for ALD performed better than previous, published attempts using expert definitions.
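The iterative sample-classify-update loop described above can be sketched in a few lines. This is a schematic outline under assumed interfaces, not the authors' implementation: the `rule`, `expert_label`, and `update_rule` callables, the batch size, and the stopping tolerance are all hypothetical placeholders.

```python
import random

def refine_case_definition(pool, rule, expert_label, update_rule,
                           batch_size=50, tolerance=3):
    """Iterative machine-supported expert review (sketch of the scheme above).

    pool         : list of candidate records meeting minimal inclusion criteria
    rule         : callable record -> bool, the machine-implementable rule
    expert_label : callable record -> bool, the expert's classification
    update_rule  : callable (rule, disagreements) -> refined rule
    Iterates until a batch produces at most `tolerance` rule/expert disagreements.
    """
    while True:
        batch = random.sample(pool, batch_size)
        disagreements = [r for r in batch if rule(r) != expert_label(r)]
        if len(disagreements) <= tolerance:
            return rule  # accepted as the final case definition
        rule = update_rule(rule, disagreements)

# Toy demo: a threshold rule converging toward a hypothetical expert's cutoff
expert = lambda r: r > 73
make_rule = lambda cut: (lambda r: r > cut)
update = lambda rule, dis: make_rule(sum(dis) / len(dis))
final = refine_case_definition(list(range(200)), make_rule(50), expert, update)
print([r for r in range(60, 90) if final(r)][:5])
```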
NASA Astrophysics Data System (ADS)
Gibergans-Báguena, J.; Llasat, M. C.
2007-12-01
The objective of this paper is to present an improvement in the quantitative forecasting of daily rainfall in Catalonia (NE Spain) from an analogues technique, taking into account synoptic and local data. The method is based on an analogue sorting technique: meteorological situations similar to the current one, in terms of the 700 and 1000 hPa geopotential fields at 00 UTC, are retrieved from a historical data file and complemented with the inclusion of some thermodynamic parameters. Thermodynamic analysis acts as a highly discriminating feature for situations in which the synoptic situation fails to explain either atmospheric phenomena or rainfall distribution. This is the case in heavy rainfall situations, where the existence of instability and high water vapor content is essential. With the objective of including these vertical thermodynamic features, information provided by the Palma de Mallorca radiosounding (Spain) has been used. Previously, a selection of the most discriminating thermodynamic parameters for daily rainfall was made, and the analogues technique was then applied to them. Finally, three analog forecasting methods were applied for the quantitative daily rainfall forecasting in Catalonia. The first is based on analogies of geopotential fields at the synoptic scale; the second is exclusively based on the search for similarity in local thermodynamic information; and the third combines the other two methods. The results show that this last method provides a substantial improvement in quantitative rainfall estimation.
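A minimal sketch of the analog idea: find the k most similar historical days (here by standardized Euclidean distance over flattened geopotential fields, optionally extended with thermodynamic indices) and average their observed rainfall. This is a generic illustration with synthetic data, not the paper's specific similarity measure.

```python
import numpy as np

def analog_forecast(current, history_fields, history_rain, k=10):
    """current        : (d,) feature vector for the current day
    history_fields : (n_days, d) historical feature vectors
    history_rain   : (n_days,) observed daily rainfall for those days"""
    # Standardize features so geopotential and thermodynamic units are comparable
    mu = history_fields.mean(axis=0)
    sd = history_fields.std(axis=0) + 1e-12
    H = (history_fields - mu) / sd
    c = (current - mu) / sd
    # Euclidean similarity; pick the k closest analogs and average their rainfall
    dist = np.linalg.norm(H - c, axis=1)
    nearest = np.argsort(dist)[:k]
    return history_rain[nearest].mean()

# Synthetic demo
rng = np.random.default_rng(0)
fields = rng.normal(size=(500, 40))
rain = rng.gamma(2.0, 5.0, size=500)
print(analog_forecast(fields[0], fields[1:], rain[1:]))
```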
NASA Astrophysics Data System (ADS)
Capriotti, Margherita; Sternini, Simone; Lanza di Scalea, Francesco; Mariani, Stefano
2016-04-01
In the field of non-destructive evaluation, defect detection and visualization can be performed exploiting different techniques relying either on an active or a passive approach. In this paper the passive technique is investigated, owing to its numerous advantages, and its application to thermography is explored. In previous works, it has been shown that it is possible to reconstruct the Green's function between any pair of points of a sensing grid by using noise originated from diffuse fields in acoustic environments. The extraction of the Green's function can be achieved by cross-correlating these randomly recorded waves. Averaging, filtering and the length of the measured signals play an important role in this process. This concept is here applied in an NDE perspective utilizing thermal fluctuations present on structural materials. Temperature variations interacting with thermal properties of the specimen allow for the characterization of the material and its health condition. The exploitation of the thermographic image resolution as a dense grid of sensors constitutes the basic idea underlying passive thermography. Particular attention will be placed on the creation of a proper diffuse thermal field, studying the number, placement and excitation signal of heat sources. Results from numerical simulations will be presented to assess the capabilities and performances of the passive thermal technique devoted to defect detection and imaging of structural components.
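The cross-correlation step at the heart of this approach can be illustrated with a synthetic one-dimensional example: stacking cross-correlations of diffuse-noise recordings at two points recovers the travel time between them. This is a generic sketch, not the thermal setup of the paper; the delay and amplitudes are hypothetical, and np.roll is used only as a crude delay model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_records, delay = 2048, 200, 37  # delay in samples (hypothetical)

acc = np.zeros(2 * n_samples - 1)
for _ in range(n_records):
    # Diffuse random field recorded at point A ...
    a = rng.normal(size=n_samples)
    # ... and at point B, modeled as a delayed, attenuated copy plus local noise
    b = 0.6 * np.roll(a, delay) + 0.3 * rng.normal(size=n_samples)
    acc += np.correlate(b, a, mode="full")  # cross-correlate and stack

xcorr = acc / n_records
lag = np.argmax(xcorr) - (n_samples - 1)
print(lag)  # peaks near the travel time (37 samples) between the two points
```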
NASA Astrophysics Data System (ADS)
Polimeridis, Athanasios G.; Reid, M. T. H.; Jin, Weiliang; Johnson, Steven G.; White, Jacob K.; Rodriguez, Alejandro W.
2015-10-01
We describe a fluctuating volume-current formulation of electromagnetic fluctuations that extends our recent work on heat exchange and Casimir interactions between arbitrarily shaped homogeneous bodies [A. W. Rodriguez, M. T. H. Reid, and S. G. Johnson, Phys. Rev. B 88, 054305 (2013), 10.1103/PhysRevB.88.054305] to situations involving incandescence and luminescence problems, including thermal radiation, heat transfer, Casimir forces, spontaneous emission, fluorescence, and Raman scattering, in inhomogeneous media. Unlike previous scattering formulations based on field and/or surface unknowns, our work exploits powerful techniques from the volume-integral equation (VIE) method, in which electromagnetic scattering is described in terms of volumetric current unknowns throughout the bodies. The resulting trace formulas (boxed equations) involve products of well-studied VIE matrices and describe power and momentum transfer between objects with spatially varying material properties and fluctuation characteristics. We demonstrate that thanks to the low-rank properties of the associated matrices, these formulas are susceptible to fast-trace computations based on iterative methods, making practical calculations tractable. We apply our techniques to study thermal radiation, heat transfer, and fluorescence in complicated geometries, checking our method against established techniques best suited for homogeneous bodies as well as applying it to obtain predictions of radiation from complex bodies with spatially varying permittivities and/or temperature profiles.
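Fast-trace computations of the kind mentioned above typically replace an explicit trace by a stochastic estimate built from matrix-vector products alone. The sketch below shows a generic Hutchinson-type estimator; it illustrates the general principle, not the authors' specific VIE implementation.

```python
import numpy as np

def hutchinson_trace(matvec, n, n_probes=200, rng=None):
    """Estimate Tr(M) using only products M @ v (Hutchinson estimator).

    matvec   : callable v -> M @ v (only matrix-vector products are needed,
               which is what makes low-rank or iterative settings efficient)
    n        : dimension of M
    n_probes : number of random probe vectors
    """
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
        total += v @ matvec(v)
    return total / n_probes

# Demo on an explicit matrix with a known trace
M = np.diag(np.arange(1.0, 101.0))
print(hutchinson_trace(lambda v: M @ v, 100), np.trace(M))
```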
Aeroshell Design Techniques for Aerocapture Entry Vehicles
NASA Technical Reports Server (NTRS)
Dyke, R. Eric; Hrinda, Glenn A.
2004-01-01
A major goal of NASA s In-Space Propulsion Program is to shorten trip times for scientific planetary missions. To meet this challenge arrival speeds will increase, requiring significant braking for orbit insertion, and thus increased deceleration propellant mass that may exceed launch lift capabilities. A technology called aerocapture has been developed to expand the mission potential of exploratory probes destined for planets with suitable atmospheres. Aerocapture inserts a probe into planetary orbit via a single pass through the atmosphere using the probe s aeroshell drag to reduce velocity. The benefit of an aerocapture maneuver is a large reduction in propellant mass that may result in smaller, less costly missions and reduced mission cruise times. The methodology used to design rigid aerocapture aeroshells will be presented with an emphasis on a new systems tool under development. Current methods for fast, efficient evaluations of structural systems for exploratory vehicles to planets and moons within our solar system have been under development within NASA having limited success. Many systems tools that have been attempted applied structural mass estimation techniques based on historical data and curve fitting techniques that are difficult and cumbersome to apply to new vehicle concepts and missions. The resulting vehicle aeroshell mass may be incorrectly estimated or have high margins included to account for uncertainty. This new tool will reduce the guesswork previously found in conceptual aeroshell mass estimations.
Graff, Mario; Poli, Riccardo; Flores, Juan J
2013-01-01
Modeling the behavior of algorithms is the realm of evolutionary algorithm theory. From a practitioner's point of view, theory must provide some guidelines regarding which algorithm/parameters to use in order to solve a particular problem. Unfortunately, most theoretical models of evolutionary algorithms are difficult to apply to realistic situations. However, in recent work (Graff and Poli, 2008, 2010), where we developed a method to practically estimate the performance of evolutionary program-induction algorithms (EPAs), we started addressing this issue. The method was quite general; however, it suffered from some limitations: it required the identification of a set of reference problems, it required hand picking a distance measure in each particular domain, and the resulting models were opaque, typically being linear combinations of 100 features or more. In this paper, we propose a significant improvement of this technique that overcomes the three limitations of our previous method. We achieve this through the use of a novel set of features for assessing problem difficulty for EPAs which are very general, essentially based on the notion of finite difference. To show the capabilities or our technique and to compare it with our previous performance models, we create models for the same two important classes of problems-symbolic regression on rational functions and Boolean function induction-used in our previous work. We model a variety of EPAs. The comparison showed that for the majority of the algorithms and problem classes, the new method produced much simpler and more accurate models than before. To further illustrate the practicality of the technique and its generality (beyond EPAs), we have also used it to predict the performance of both autoregressive models and EPAs on the problem of wind speed forecasting, obtaining simpler and more accurate models that outperform in all cases our previous performance models.
Liley, James; Wallace, Chris
2015-02-01
Genome-wide association studies (GWAS) have been successful in identifying single nucleotide polymorphisms (SNPs) associated with many traits and diseases. However, at existing sample sizes, these variants explain only part of the estimated heritability. Leverage of GWAS results from related phenotypes may improve detection without the need for larger datasets. The Bayesian conditional false discovery rate (cFDR) constitutes an upper bound on the expected false discovery rate (FDR) across a set of SNPs whose p values for two diseases are both less than two disease-specific thresholds. Calculation of the cFDR requires only summary statistics and has several advantages over traditional GWAS analysis. However, existing methods require distinct control samples between studies. Here, we extend the technique to allow for some or all controls to be shared, increasing applicability. Several different SNP sets can be defined with the same cFDR value, and we show that the expected FDR across the union of these sets may exceed the expected FDR in any single set. We describe a procedure to establish an upper bound for the expected FDR among the union of such sets of SNPs. We apply our technique to pairwise analysis of p values from ten autoimmune diseases with variable sharing of controls, enabling discovery of 59 SNP-disease associations which do not reach GWAS significance after genomic control in individual datasets. Most of the SNPs we highlight have previously been confirmed using replication studies or larger GWAS, a useful validation of our technique; we report eight SNP-disease associations across five diseases not previously declared. Our technique extends and strengthens the previous algorithm, and establishes robust limits on the expected FDR. This approach can improve SNP detection in GWAS, and give insight into shared aetiology between phenotypically related conditions.
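For orientation, the cFDR in this literature is commonly estimated empirically as cFDR(p1 | p2) ≈ p1 · #{P2 ≤ p2} / #{P1 ≤ p1, P2 ≤ p2}. The sketch below implements that basic estimator on synthetic p values; it does not reproduce the paper's shared-control adjustment or the union-of-sets FDR bound.

```python
import numpy as np

def cfdr(p1, p2):
    """Empirical conditional FDR per SNP: estimates
    P(H0 for trait 1 | P1 <= p1[i], P2 <= p2[i])."""
    n = len(p1)
    out = np.empty(n)
    for i in range(n):
        n2 = np.sum(p2 <= p2[i])                     # #{P2 <= p2[i]}
        n12 = np.sum((p1 <= p1[i]) & (p2 <= p2[i]))  # joint count
        out[i] = p1[i] * n2 / max(n12, 1)
    return np.minimum(out, 1.0)

# Synthetic null demo: under independence, cFDR tracks the raw p value
rng = np.random.default_rng(2)
p1, p2 = rng.uniform(size=2000), rng.uniform(size=2000)
print(cfdr(p1, p2)[:5])
```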
Belmon, Laura S; Middelweerd, Anouk; Te Velde, Saskia J; Brug, Johannes
2015-11-12
Interventions delivered through new device technology, including mobile phone apps, appear to be an effective method to reach young adults. Previous research indicates that self-efficacy and social support for physical activity and self-regulation behavior change techniques (BCTs), such as goal setting, feedback, and self-monitoring, are important for promoting physical activity; however, little is known about how the target population evaluates BCTs applied in physical activity apps and whether these preferences are associated with individual personality characteristics. This study aimed to explore young adults' opinions regarding BCTs (including self-regulation techniques) applied in mobile phone physical activity apps, and to examine associations between personality characteristics and ratings of BCTs applied in physical activity apps. We conducted a cross-sectional online survey among healthy 18- to 30-year-old adults (N=179). Data on participants' gender, age, height, weight, current education level, living situation, mobile phone use, personality traits, exercise self-efficacy, exercise self-identity, total physical activity level, and whether participants met Dutch physical activity guidelines were collected. Items for rating BCTs applied in physical activity apps were selected from a hierarchical taxonomy for BCTs, and were clustered into three BCT categories according to factor analysis: "goal setting and goal reviewing," "feedback and self-monitoring," and "social support and social comparison." Most participants were female (n=146), highly educated (n=169), physically active, and had high levels of self-efficacy. In general, we observed high ratings of BCTs aimed at increasing "goal setting and goal reviewing" and "feedback and self-monitoring," but not for BCTs addressing "social support and social comparison." Only 3 (out of 16 tested) significant associations between personality characteristics and BCTs were observed: "agreeableness" was related to more positive ratings of BCTs addressing "goal setting and goal reviewing" (OR 1.61, 95% CI 1.06-2.41), "neuroticism" was related to BCTs addressing "feedback and self-monitoring" (OR 0.76, 95% CI 0.58-1.00), and "exercise self-efficacy" was related to a high rating of BCTs addressing "feedback and self-monitoring" (OR 1.06, 95% CI 1.02-1.11). No associations were observed between personality characteristics (i.e., personality, exercise self-efficacy, exercise self-identity) and participants' ratings of BCTs addressing "social support and social comparison." Young Dutch physically active adults rate self-regulation techniques as most positive and techniques addressing social support as less positive among mobile phone apps that aim to promote physical activity. Such ratings of BCTs differ according to personality traits and exercise self-efficacy. Future research should focus on which behavior change techniques in app-based interventions are most effective at increasing physical activity.
Thygesen, Uffe Høgsbro
2016-03-01
We consider organisms which use a renewal strategy such as run-tumble when moving in space, for example to perform chemotaxis in chemical gradients. We derive a diffusion approximation for the motion, applying a central limit theorem due to Anscombe for renewal-reward processes; this theorem has not previously been applied in this context. Our results extend previous work, which has established the mean drift but not the diffusivity. For a classical model of tumble rates applied to chemotaxis, we find that the resulting chemotactic drift saturates to the swimming velocity of the organism when the chemical gradients grow increasingly steep. The dispersal becomes anisotropic in steep gradients, with larger dispersal across the gradient than along the gradient. In contrast to one-dimensional settings, strong bias increases dispersal. We next include Brownian rotation in the model and find that, in the limit of high chemotactic sensitivity, the chemotactic drift is 64% of the swimming velocity, independent of the magnitude of the Brownian rotation. We finally derive characteristic timescales of the motion that can be used to assess whether the diffusion limit is justified in a given situation. The proposed technique for obtaining diffusion approximations is conceptually and computationally simple, and applicable also when the statistics of the motion are obtained empirically or through Monte Carlo simulation of the motion.
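Since the abstract notes the technique also applies to Monte Carlo statistics, the following sketch simulates a one-dimensional run-tumble walker whose tumble rate is lower when swimming up the gradient (a classical tumble-rate model), then estimates drift and diffusivity empirically. All parameters are hypothetical and the 1-D geometry is a simplification of the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(3)
v, lam0, beta = 1.0, 1.0, 0.5   # speed, base tumble rate, gradient sensitivity
T, n_walkers = 200.0, 2000

X = np.zeros(n_walkers)
for w in range(n_walkers):
    t, x, s = 0.0, 0.0, 1.0      # time, position, direction (+1 or -1)
    while t < T:
        # Tumble rate is reduced when swimming up the gradient (toward +x)
        lam = max(lam0 * (1.0 - beta * s), 1e-3)
        dt = min(rng.exponential(1.0 / lam), T - t)
        x += v * s * dt
        t += dt
        s = rng.choice([-1.0, 1.0])  # new random direction after a tumble
    X[w] = x

drift = X.mean() / T              # chemotactic drift estimate
diffusivity = X.var() / (2 * T)   # effective diffusivity estimate
print(drift, diffusivity)
```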
The Next Era: Deep Learning in Pharmaceutical Research.
Ekins, Sean
2016-11-01
Over the past decade we have witnessed the increasing sophistication of machine learning algorithms applied in daily use, from internet searches, voice recognition and social network software to machine vision software in cameras, phones, robots and self-driving cars. Pharmaceutical research has also seen its fair share of machine learning developments. For example, applying such methods to mine the growing datasets that are created in drug discovery not only enables us to learn from the past but also to predict a molecule's properties and behavior in the future. The latest machine learning algorithm garnering significant attention is deep learning, which is an artificial neural network with multiple hidden layers. Publications over the last 3 years suggest that this algorithm may have advantages over previous machine learning methods and offer a slight but discernible edge in predictive performance. The time has come for a balanced review of this technique, but also to apply machine learning methods such as deep learning across a wider array of endpoints relevant to pharmaceutical research for which the datasets are growing, such as physicochemical property prediction, formulation prediction, absorption, distribution, metabolism, excretion and toxicity (ADME/Tox), target prediction and skin permeation, etc. We also show that there are many potential applications of deep learning beyond cheminformatics. It will be important to perform prospective testing (which has been carried out rarely to date) in order to convince skeptics that there will be benefits from investing in this technique.
Multiphysics superensemble forecast applied to Mediterranean heavy precipitation situations
NASA Astrophysics Data System (ADS)
Vich, M.; Romero, R.
2010-11-01
The high-impact precipitation events that regularly affect the western Mediterranean coastal regions are still difficult to predict with the current prediction systems. Bearing this in mind, this paper focuses on the superensemble technique applied to the precipitation field. Encouraged by the skill shown by a previous multiphysics ensemble prediction system applied to western Mediterranean precipitation events, the superensemble is fed with this ensemble. The training phase of the superensemble contributes to the actual forecast with weights obtained by comparing the past performance of the ensemble members and the corresponding observed states. The non-hydrostatic MM5 mesoscale model is used to run the multiphysics ensemble. Simulations are performed with a 22.5 km resolution domain (Domain 1 in http://mm5forecasts.uib.es) nested in the ECMWF forecast fields. The period between September and December 2001 is used to train the superensemble and a collection of 19 MEDEX cyclones is used to test it. The verification procedure involves testing the superensemble performance and comparing it with that of the poor-man and bias-corrected ensemble mean and the multiphysics EPS control member. The results emphasize the need for a well-behaved training phase to obtain good results with the superensemble technique. A strategy to obtain this improved training phase is already outlined.
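A standard least-squares realization of the superensemble idea is sketched below: during training, regress observations on member forecasts (with an intercept absorbing mean bias), then apply the fitted weights in the forecast phase. This is a generic illustration with synthetic data, not the paper's exact weighting procedure.

```python
import numpy as np

def train_superensemble(F_train, obs_train):
    """F_train  : (n_times, n_members) member forecasts during training
    obs_train: (n_times,) observed precipitation
    Returns least-squares weights, including an intercept term."""
    A = np.column_stack([np.ones(len(obs_train)), F_train])
    w, *_ = np.linalg.lstsq(A, obs_train, rcond=None)
    return w

def superensemble_forecast(F_new, w):
    A = np.column_stack([np.ones(len(F_new)), F_new])
    return A @ w

# Synthetic demo: three biased, noisy members around a "truth" series
rng = np.random.default_rng(4)
truth = rng.gamma(2.0, 5.0, size=120)
members = truth[:, None] + rng.normal([1.0, -2.0, 0.5], 3.0, size=(120, 3))
w = train_superensemble(members[:90], truth[:90])       # training phase
print(superensemble_forecast(members[90:], w)[:5])      # forecast phase
print(truth[90:95])
```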
A short review on a complication of lumbar spine surgery: CSF leak.
Menon, Sajesh K; Onyia, Chiazor U
2015-12-01
Cerebrospinal fluid (CSF) leak is a common complication of surgery involving the lumbar spine. Over the past decades, there has been significant advancement in understanding the basis, management and techniques of treatment for post-operative CSF leak following lumbar spine surgery. In this article, we review previous work in the literature on the various factors and technical errors during or after lumbar spine surgery that may lead to this feared complication, and the available options of management with a focus on the various techniques employed and their outcomes, and also highlight current trends. We also discuss the presentation, factors contributing to its development, and basic concepts and practical aspects of the management, with emphasis on the different techniques of treatment. Different outcomes following various techniques of managing post-operative CSF leak after lumbar spine surgery have been well described in the literature. However, there is currently no single ideal technique among the available options. The choice of which technique to apply in each case is dependent on each surgeon's cumulative experience as well as a clear understanding of the contributory underlying factors in each patient, the nature and site of the leak, and the available facilities and equipment.
Towards the estimation of effect measures in studies using respondent-driven sampling.
Rotondi, Michael A
2014-06-01
Respondent-driven sampling (RDS) is an increasingly common sampling technique to recruit hidden populations. Statistical methods for RDS are not straightforward due to the correlation between individual outcomes and subject weighting; thus, analyses are typically limited to estimation of population proportions. This manuscript applies the method of variance estimates recovery (MOVER) to construct confidence intervals for effect measures such as risk difference (difference of proportions) or relative risk in studies using RDS. To illustrate the approach, MOVER is used to construct confidence intervals for differences in the prevalence of demographic characteristics between an RDS study and a convenience study of injection drug users. MOVER is then applied to obtain confidence intervals for the relative risk between education levels and HIV seropositivity and current infection with syphilis, respectively. This approach provides a simple method to construct confidence intervals for effect measures in RDS studies. Since it only relies on a proportion and appropriate confidence limits, it can also be applied to previously published manuscripts.
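The MOVER construction for a difference of proportions combines the point estimates with the individual confidence limits of each proportion. The sketch below uses simple Wilson score limits for illustration; in an actual RDS analysis the proportions and limits would come from RDS-adjusted estimators, which are not reproduced here, and all counts shown are hypothetical.

```python
import math

def wilson_ci(x, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p = x / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, center - half, center + half

def mover_diff(x1, n1, x2, n2):
    """MOVER confidence interval for the risk difference p1 - p2."""
    p1, l1, u1 = wilson_ci(x1, n1)
    p2, l2, u2 = wilson_ci(x2, n2)
    d = p1 - p2
    # Recover variance estimates from the distances to the confidence limits
    lower = d - math.sqrt((p1 - l1)**2 + (u2 - p2)**2)
    upper = d + math.sqrt((u1 - p1)**2 + (p2 - l2)**2)
    return d, lower, upper

print(mover_diff(45, 150, 30, 140))  # hypothetical counts from two studies
```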
Adaptive Wiener filter super-resolution of color filter array images.
Karch, Barry K; Hardie, Russell C
2013-08-12
Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform interpolation based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data.
On the computation of molecular surface correlations for protein docking using fourier techniques.
Sakk, Eric
2007-08-01
The computation of surface correlations using a variety of molecular models has been applied to the unbound protein docking problem. Because of the computational complexity involved in examining all possible molecular orientations, the fast Fourier transform (FFT) (a fast numerical implementation of the discrete Fourier transform (DFT)) is generally applied to minimize the number of calculations. This approach is rooted in the convolution theorem which allows one to inverse transform the product of two DFTs in order to perform the correlation calculation. However, such a DFT calculation results in a cyclic or "circular" correlation which, in general, does not lead to the same result as the linear correlation desired for the docking problem. In this work, we provide computational bounds for constructing molecular models used in the molecular surface correlation problem. The derived bounds are then shown to be consistent with various intuitive guidelines previously reported in the protein docking literature. Finally, these bounds are applied to different molecular models in order to investigate their effect on the correlation calculation.
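The circular-versus-linear distinction discussed above is the standard zero-padding requirement: to recover a linear correlation of two length-N signals from DFTs, the transforms must be taken over at least 2N-1 points. The short demonstration below shows the discrepancy on a toy signal; it illustrates the general DSP fact, not the paper's specific molecular-model bounds.

```python
import numpy as np

def linear_correlation_fft(a, b):
    """Linear cross-correlation via FFT, zero-padded to length >= 2N-1
    so that the circular wrap-around is pushed out of the result."""
    n = len(a) + len(b) - 1
    A = np.fft.fft(a, n)
    B = np.fft.fft(b, n)
    return np.real(np.fft.ifft(A * np.conj(B)))

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([0.0, 1.0, 0.5, 0.0])

# Circular correlation (no padding) wraps around and differs from linear:
circ = np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))
lin = linear_correlation_fft(a, b)
print(circ)
# Same values as np.correlate, but arranged circularly by lag
# (zero/positive lags first, negative lags at the end):
print(lin)
print(np.correlate(a, b, mode="full"))
```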
Low-dimensional and Data Fusion Techniques Applied to a Rectangular Supersonic Multi-stream Jet
NASA Astrophysics Data System (ADS)
Berry, Matthew; Stack, Cory; Magstadt, Andrew; Ali, Mohd; Gaitonde, Datta; Glauser, Mark
2017-11-01
Low-dimensional models of experimental and simulation data for a complex supersonic jet were fused to reconstruct time-dependent proper orthogonal decomposition (POD) coefficients. The jet consists of a multi-stream rectangular single expansion ramp nozzle, containing a core stream operating at Mj,1 = 1.6, and a bypass stream at Mj,3 = 1.0 with an underlying deck. POD was applied to schlieren and PIV data to acquire the spatial basis functions. These eigenfunctions were projected onto their corresponding time-dependent large eddy simulation (LES) fields to reconstruct the temporal POD coefficients. This reconstruction was able to resolve spectral peaks that were previously aliased due to the slower sampling rates of the experiments. Additionally, dynamic mode decomposition (DMD) was applied to the experimental and LES datasets, and the spatio-temporal characteristics were compared to POD. The authors would like to acknowledge AFOSR, program manager Dr. Doug Smith, for funding this research, Grant No. FA9550-15-1-0435.
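The fusion step can be sketched generically: extract spatial POD modes from the (lower-rate) experimental snapshot matrix via SVD, then project the time-resolved LES fields onto those modes to obtain temporal coefficients at the simulation's sampling rate. The sketch below uses random stand-in data on a shared grid; it is an outline of the method class, not the authors' processing chain.

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """snapshots: (n_space, n_times) mean-subtracted data matrix.
    Returns the leading spatial POD modes, shape (n_space, n_modes)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :n_modes]

rng = np.random.default_rng(5)
exp_data = rng.normal(size=(1000, 60))    # e.g., PIV snapshots (slow rate)
les_data = rng.normal(size=(1000, 5000))  # time-resolved LES on the same grid

exp_data -= exp_data.mean(axis=1, keepdims=True)
les_data -= les_data.mean(axis=1, keepdims=True)

phi = pod_modes(exp_data, n_modes=5)
# Project LES fields onto the experimental modes: temporal coefficients a(t)
a = phi.T @ les_data          # shape (n_modes, n_les_times)
print(a.shape)
```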
Bethe-Boltzmann hydrodynamics and spin transport in the XXZ chain
NASA Astrophysics Data System (ADS)
Bulchandani, Vir B.; Vasseur, Romain; Karrasch, Christoph; Moore, Joel E.
2018-01-01
Quantum integrable systems, such as the interacting Bose gas in one dimension and the XXZ quantum spin chain, have an extensive number of local conserved quantities that endow them with exotic thermalization and transport properties. We discuss recently introduced hydrodynamic approaches for such integrable systems from the viewpoint of kinetic theory and extend the previous works by proposing a numerical scheme to solve the hydrodynamic equations for finite times and arbitrary locally equilibrated initial conditions. We then discuss how such methods can be applied to describe nonequilibrium steady states involving ballistic heat and spin currents. In particular, we show that the spin Drude weight in the XXZ chain, previously accessible only by rigorous techniques of limited scope or controversial thermodynamic Bethe ansatz arguments, may be evaluated from hydrodynamics in very good agreement with density-matrix renormalization group calculations.
A Technique of Treating Negative Weights in WENO Schemes
NASA Technical Reports Server (NTRS)
Shi, Jing; Hu, Changqing; Shu, Chi-Wang
2000-01-01
High order accurate weighted essentially non-oscillatory (WENO) schemes have recently been developed for finite difference and finite volume methods, both on structured and on unstructured meshes. A key idea in WENO schemes is a linear combination of lower order fluxes or reconstructions to obtain a high order approximation. The combination coefficients, also called linear weights, are determined by the local geometry of the mesh and the order of accuracy, and may become negative. WENO procedures cannot be applied directly to obtain a stable scheme if negative linear weights are present. The previous strategy for handling this difficulty was either regrouping of stencils or reducing the order of accuracy to get rid of the negative linear weights. In this paper we present a simple and effective technique for handling negative linear weights without a need to get rid of them.
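A common form of the splitting used in this line of work decomposes each linear weight into two positive parts and performs two standard WENO reconstructions; the sketch below follows the form familiar from the WENO literature (θ = 3 is a typical choice), and the exact constants in the paper may differ.

```latex
% Split every linear weight \gamma_i into positive parts:
\tilde{\gamma}_i^{+} = \tfrac{1}{2}\left(\gamma_i + \theta\,|\gamma_i|\right),
\qquad
\tilde{\gamma}_i^{-} = \tilde{\gamma}_i^{+} - \gamma_i,
\qquad
\sigma^{\pm} = \sum_i \tilde{\gamma}_i^{\pm}.
% Two standard WENO reconstructions u^{\pm} are formed with the normalized
% positive weights \tilde{\gamma}_i^{\pm}/\sigma^{\pm}, and the final
% high-order approximation is recovered as
u = \sigma^{+} u^{+} - \sigma^{-} u^{-}.
```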
Image processing on the image with pixel noise bits removed
NASA Astrophysics Data System (ADS)
Chuang, Keh-Shih; Wu, Christine
1992-06-01
Our previous studies used statistical methods to assess the noise level in digital images of various radiological modalities. We separated the pixel data into signal bits and noise bits and demonstrated visually that the removal of the noise bits does not affect the image quality. In this paper we apply image enhancement techniques to noise-bits-removed images and demonstrate that the removal of noise bits has no effect on the image properties. The image processing techniques used are gray-level look-up table transformation, the Sobel edge detector, and 3-D surface display. Preliminary results show no noticeable difference between the original image and the noise-bits-removed image using the look-up table operation and Sobel edge enhancement. There is a slight enhancement of the slicing artifact in the 3-D surface display of the noise-bits-removed image.
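A minimal sketch of the pipeline: zero the identified noise bits with a bitmask, then compare edge maps before and after. The number of noise bits and the image here are hypothetical stand-ins; the comparison only illustrates that the edge response changes little relative to the edge signal.

```python
import numpy as np
from scipy import ndimage

def remove_noise_bits(image, n_noise_bits):
    """Zero the lowest n_noise_bits of each pixel (the 'noise bits')."""
    mask = ~np.uint16((1 << n_noise_bits) - 1)
    return image & mask

rng = np.random.default_rng(6)
img = rng.integers(0, 4096, size=(256, 256), dtype=np.uint16)  # 12-bit image

cleaned = remove_noise_bits(img, n_noise_bits=3)  # hypothetical noise level

# Sobel edge maps before and after noise-bit removal
edges_orig = ndimage.sobel(img.astype(float))
edges_clean = ndimage.sobel(cleaned.astype(float))
# Maximum change is bounded by the masked bits and stays small
# relative to the full edge-response range
print(np.abs(edges_orig - edges_clean).max())
```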
Samiee, K. T.; Foquet, M.; Guo, L.; Cox, E. C.; Craighead, H. G.
2005-01-01
Fluorescence correlation spectroscopy (FCS) has demonstrated its utility for measuring transport properties and kinetics at low fluorophore concentrations. In this article, we demonstrate that simple optical nanostructures, known as zero-mode waveguides, can be used to significantly reduce the FCS observation volume. This, in turn, allows FCS to be applied to solutions with significantly higher fluorophore concentrations. We derive an empirical FCS model accounting for one-dimensional diffusion in a finite tube with a simple exponential observation profile. This technique is used to measure the oligomerization of the bacteriophage λ repressor protein at micromolar concentrations. The results agree with previous studies utilizing conventional techniques. Additionally, we demonstrate that the zero-mode waveguides can be used to assay biological activity by measuring changes in diffusion constant as a result of ligand binding. PMID:15613638
Fe Oxides on Ag Surfaces: Structure and Reactivity
Shipilin, M.; Lundgren, E.; Gustafson, J.; ...
2016-09-09
One-layer-thick iron oxide films are attractive from both applied and fundamental science perspectives. The structural and chemical properties of these systems can be tuned by changing the substrate, making them promising materials for heterogeneous catalysis. In the present work, we investigate the structure of FeO(111) monolayer films grown on Ag(100) and Ag(111) substrates by means of microscopy and diffraction techniques and compare it with the structure of FeO(111) grown on other substrates reported in the literature. We also study the NO adsorption properties of FeO(111)/Ag(100) and FeO(111)/Ag(111) systems utilizing different spectroscopic techniques. Finally, we discuss similarities and differences in the data obtained from the adsorption experiments and compare them with previous results for FeO(111)/Pt(111).
NASA Astrophysics Data System (ADS)
Mirapeix, J.; García-Allende, P. B.; Cobo, A.; Conde, O.; López-Higuera, J. M.
2007-07-01
A new spectral processing technique designed for application to the on-line detection and classification of arc-welding defects is presented in this paper. A non-invasive fiber sensor embedded within a TIG torch collects the plasma radiation generated during the welding process. The spectral information is then processed in two consecutive stages. A compression algorithm is first applied to the data, allowing real-time analysis. The selected spectral bands are then used to feed a classification algorithm, which is demonstrated to provide efficient weld defect detection and classification. The results obtained with the proposed technique are compared to a similar processing scheme presented in a previous paper, giving rise to an improvement in the performance of the monitoring system.
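The two-stage scheme (compression, then classification) can be illustrated generically with PCA for the compression stage and a small neural-network classifier, as sketched below on synthetic stand-in spectra. The paper's specific compression and classification algorithms are not reproduced; every dataset detail here is hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for plasma spectra: 400 spectra x 1024 wavelength bins,
# labeled 0 = sound weld, 1 = defect (all data hypothetical)
rng = np.random.default_rng(7)
X = rng.normal(size=(400, 1024))
y = rng.integers(0, 2, size=400)
X[y == 1, 100:110] += 2.0   # defect class gets an extra emission feature

# Stage 1: compress spectra to a few components; stage 2: classify
model = make_pipeline(PCA(n_components=10), MLPClassifier(max_iter=500))
model.fit(X[:300], y[:300])
print(model.score(X[300:], y[300:]))  # held-out classification accuracy
```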
Local repair of stoma prolapse: Case report of an in vivo application of linear stapler devices.
Monette, Margaret M; Harney, Rodney T; Morris, Melanie S; Chu, Daniel I
2016-11-01
One of the most common late complications following stoma construction is prolapse. Although the majority of prolapses can be managed conservatively, surgical revision is required with incarceration/strangulation, and in certain cases laparotomy and/or stoma reversal are not appropriate. This report will inform surgeons on safe and effective approaches to revising prolapsed stomas using local techniques. A 58-year-old female with an obstructing rectal cancer had previously received a diverting transverse loop colostomy. On completion of neoadjuvant treatment, re-staging found new lung metastases. She was scheduled for further chemotherapy but developed an incarcerated prolapsed segment of her loop colostomy. As there was no plan to resect her primary rectal tumor at the time, a local revision was preferred. Linear staplers were applied to the prolapsed stoma in step-wise fashion to locally revise the incarcerated prolapse. Post-operative recovery was satisfactory with no complications or recurrence of prolapse. We detail in step-wise fashion a technique using linear stapler devices that can be used to locally revise prolapsed stoma segments and therefore avoid a laparotomy. The procedure is technically easy to perform with satisfactory post-operative outcomes. We additionally review all previous reports of local repairs and show the evolution of local prolapse repair to the currently reported technique. This report offers surgeons an alternative, efficient and effective option for addressing the complications of stoma prolapse. While future studies are needed to assess long-term outcomes, in the short-term, our report confirms the safety and effectiveness of this local technique.
A Rigorous Attempt to Verify Interstellar Glycine
NASA Technical Reports Server (NTRS)
Snyder, L. E.; Lovas, F. J.; Hollis, J. M.; Friedel, D. N.; Jewell, P. R.; Remijan, A.; Ilyushin, V. V.; Alekseev, E. A.; Dyubko, S. F.
2004-01-01
In 2003, Kuan, Charnley, and co-workers reported the detection of interstellar glycine (NH2CH2COOH) based on observations of 27 lines in 19 different spectral bands in one or more of the sources Sgr B2(N-LMH), Orion KL, and W51 e1/e2. They supported their detection report with rotational temperature diagrams for all three sources. In this paper, we present essential criteria which can be used in a straightforward analysis technique to confirm the identity of an interstellar asymmetric rotor such as glycine. We use new laboratory measurements of glycine as a basis for applying this analysis technique, both to our previously unpublished 12 m telescope data and to the previously published SEST data of Nummelin and colleagues. We conclude that key lines necessary for an interstellar glycine identification have not yet been found. We identify several common molecular candidates that should be examined further as more likely carriers of the lines reported as glycine. Finally, we illustrate that rotational temperature diagrams used without the support of correct spectroscopic assignments are not a reliable tool for the identification of interstellar molecules. Subject headings: ISM: abundances - ISM: clouds - ISM: individual (Sagittarius B2[N-LMH])
A thin, dense crust for Mercury
NASA Astrophysics Data System (ADS)
Sori, Michael M.
2018-05-01
Crustal thickness is a crucial geophysical parameter in understanding the geology and geochemistry of terrestrial planets. Recent development of mathematical techniques suggests that previous studies based on assumptions of isostasy overestimated crustal thickness on some of the solid bodies of the solar system, leading to a need to revisit those analyses. Here, I apply these techniques to Mercury. Using MESSENGER-derived elemental abundances, I calculate a map of grain density (average 2974 ± 89 kg/m3) which shows that Pratt isostasy is unlikely to be a major compensation mechanism of Mercury's topography. Assuming Airy isostasy, I find the best fit value for Mercury's mean crustal thickness is 26 ± 11 km, 25% lower than the thinnest previously reported value. Several geological implications follow from this relatively low value for crustal thickness, including showing that the largest impacts very likely excavated mantle material onto Mercury's surface. The new results also show that Mercury and the Moon have a similar proportion of their rocky silicates composing their crusts, and thus Mercury is not uniquely efficient at crustal production amongst terrestrial bodies. Higher resolution topography and gravity data, especially for the southern hemisphere, will be necessary to refine Mercury's crustal parameters further.
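For reference, the classical Airy relation underlying such thickness estimates, stated with commonly used symbols (not taken from the paper; the mantle density value is an assumption that must be supplied):

```latex
% Airy compensation: topography of height h and crustal density \rho_c is
% supported by a crustal root of thickness r at the crust-mantle boundary,
r = \frac{\rho_c}{\rho_m - \rho_c}\, h ,
% so surface relief maps into crust-mantle boundary relief, and the mean
% crustal thickness follows from fitting this relief to the gravity field.
% Here \rho_c \approx 2974\ \mathrm{kg\,m^{-3}} is the mean grain density
% quoted above, and \rho_m is an assumed mantle density.
```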
Control of Flow Structure in Square Cross-Sectioned U Bend using Numerical Modeling
NASA Astrophysics Data System (ADS)
Yavuz, Mehmet Metin; Guden, Yigitcan
2014-11-01
Due to the curvature in U-bends, the flow development involves complex flow structures, including Dean vortices and high levels of turbulence, that are critical with respect to noise problems and structural failure of the ducts. Computational fluid dynamics (CFD) models are developed using ANSYS Fluent to analyze and to control the flow structure in a square cross-sectioned U-bend with a radius of curvature Rc/D = 0.65. The predictions of velocity profiles at different angular positions of the U-bend are compared against the experimental results available in the literature and the previous numerical studies. The performances of different turbulence models are evaluated to propose the best numerical approach that has high accuracy with reduced computation time. The numerical results of the present study indicate improvements with respect to the previous numerical predictions and very good agreement with the available experimental results. In addition, a flow control technique is utilized to regulate the flow inside the bend. The elimination of Dean vortices along with significant reduction in turbulence levels in different cross flow planes are successfully achieved when the flow control technique is applied. The project is supported by Meteksan Defense Industries, Inc.
Pang, Yonggang; Tsigkou, Olga; Spencer, Joel A; Lin, Charles P; Neville, Craig; Grottkau, Brian
2015-10-01
Vascularization is a key challenge in tissue engineering. Three-dimensional structure and microcirculation are two fundamental parameters for evaluating vascularization. Microscopic techniques with cellular level resolution, fast continuous observation, and robust 3D postimage processing are essential for evaluation, but have not been applied previously because of technical difficulties. In this study, we report novel video-rate confocal microscopy and 3D postimage processing techniques to accomplish this goal. In an immune-deficient mouse model, vascularized bone tissue was successfully engineered using human bone marrow mesenchymal stem cells (hMSCs) and human umbilical vein endothelial cells (HUVECs) in a poly (D,L-lactide-co-glycolide) (PLGA) scaffold. Video-rate (30 FPS) intravital confocal microscopy was applied in vitro and in vivo to visualize the vascular structure in the engineered bone and the microcirculation of the blood cells. Postimage processing was applied to perform 3D image reconstruction, by analyzing microvascular networks and calculating blood cell viscosity. The 3D volume reconstructed images show that the hMSCs served as pericytes stabilizing the microvascular network formed by HUVECs. Using orthogonal imaging reconstruction and transparency adjustment, both the vessel structure and blood cells within the vessel lumen were visualized. Network length, network intersections, and intersection densities were successfully computed using our custom-developed software. Viscosity analysis of the blood cells provided functional evaluation of the microcirculation. These results show that by 8 weeks, the blood vessels in peripheral areas function quite similarly to the host vessels. However, the viscosity drops about fourfold at only 0.8 mm from the host. In summary, we developed novel techniques combining intravital microscopy and 3D image processing to analyze the vascularization in engineered bone. These techniques have broad applicability for evaluating vascularization in other engineered tissues as well.
Preprocessing of 2-Dimensional Gel Electrophoresis Images Applied to Proteomic Analysis: A Review.
Goez, Manuel Mauricio; Torres-Madroñero, Maria Constanza; Röthlisberger, Sarah; Delgado-Trejos, Edilson
2018-02-01
Various methods and specialized software programs are available for processing two-dimensional gel electrophoresis (2-DGE) images. However, due to the anomalies present in these images, a reliable, automated, and highly reproducible system for 2-DGE image analysis has still not been achieved. The most common anomalies found in 2-DGE images include vertical and horizontal streaking, fuzzy spots, and background noise, which greatly complicate computational analysis. In this paper, we review the preprocessing techniques applied to 2-DGE images for noise reduction, intensity normalization, and background correction. We also present a quantitative comparison of non-linear filtering techniques applied to synthetic gel images, through analyzing the performance of the filters under specific conditions. Synthetic proteins were modeled into a two-dimensional Gaussian distribution with adjustable parameters for changing the size, intensity, and degradation. Three types of noise were added to the images: Gaussian, Rayleigh, and exponential, with signal-to-noise ratios (SNRs) ranging from 8 to 20 decibels (dB). We compared the performance of wavelet, contourlet, total variation (TV), and wavelet-total variation (WTTV) techniques using the parameters SNR and spot efficiency. In terms of spot efficiency, contourlet and TV were more sensitive to noise than wavelet and WTTV. Wavelet worked the best for images with SNR ranging from 10 to 20 dB, whereas WTTV performed better with high noise levels. Wavelet also presented the best performance with any level of Gaussian noise and low levels (20-14 dB) of Rayleigh and exponential noise in terms of SNR. Finally, the performance of the non-linear filtering techniques was evaluated using a real 2-DGE image with previously identified proteins marked. Wavelet achieved the best detection rate for the real image.
Virtual fringe projection system with nonparallel illumination based on iteration
NASA Astrophysics Data System (ADS)
Zhou, Duo; Wang, Zhangying; Gao, Nan; Zhang, Zonghua; Jiang, Xiangqian
2017-06-01
Fringe projection profilometry has been widely applied in many fields. To set up an ideal measuring system, a virtual fringe projection technique has been studied to assist in the design of hardware configurations. However, existing virtual fringe projection systems use parallel illumination and have a fixed optical framework. This paper presents a virtual fringe projection system with nonparallel illumination. Using an iterative method to calculate intersection points between rays and reference planes or object surfaces, the proposed system can simulate projected fringe patterns and captured images. A new explicit calibration method has been presented to validate the precision of the system. Simulated results indicate that the proposed iterative method outperforms previous systems. Our virtual system can be applied to error analysis, algorithm optimization, and help operators to find ideal system parameter settings for actual measurements.
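The iterative intersection step for nonparallel rays can be sketched as a fixed-point iteration: alternately project the ray to the current height estimate and update the height from the surface until the two agree. This is a generic outline for gently sloped surfaces, not the paper's algorithm; the surface function and all numbers are hypothetical.

```python
import numpy as np

def intersect_ray_surface(origin, direction, surface_z, tol=1e-9, max_iter=50):
    """Intersect a ray with a surface z = surface_z(x, y) by fixed-point
    iteration (converges for gently sloped surfaces).

    origin, direction : 3-vectors (direction need not be normalized)
    surface_z         : callable (x, y) -> z
    """
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    z = surface_z(o[0], o[1])          # initial height guess
    for _ in range(max_iter):
        t = (z - o[2]) / d[2]          # ray parameter reaching height z
        p = o + t * d                  # point on the ray at that height
        z_new = surface_z(p[0], p[1])  # surface height at that (x, y)
        if abs(z_new - z) < tol:
            return p
        z = z_new
    return p

# Example: a shallow paraboloid as the measured object (hypothetical)
surface = lambda x, y: 0.01 * (x**2 + y**2)
print(intersect_ray_surface([0.0, 0.0, 100.0], [0.1, 0.05, -1.0], surface))
```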
Signal processing methods for in-situ creep specimen monitoring
NASA Astrophysics Data System (ADS)
Guers, Manton J.; Tittmann, Bernhard R.
2018-04-01
Previous work investigated using guided waves for monitoring creep deformation during accelerated life testing. The basic objective was to relate observed changes in the time-of-flight to changes in the environmental temperature and specimen gage length. The work presented in this paper investigated several signal processing strategies for possible application in the in-situ monitoring system. Signal processing methods for both group velocity (wave-packet envelope) and phase velocity (peak tracking) time-of-flight were considered. Although the Analytic Envelope found via the Hilbert transform is commonly applied for group velocity measurements, erratic behavior in the indicated time-of-flight was observed when this technique was applied to the in-situ data. The peak tracking strategies tested had generally linear trends, and tracking local minima in the raw waveform ultimately showed the most consistent results.
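The two time-of-flight strategies contrasted above can be demonstrated on a synthetic wave packet: the group arrival from the analytic envelope (via the Hilbert transform) versus tracking an individual peak in the raw waveform. All signal parameters below are hypothetical.

```python
import numpy as np
from scipy.signal import hilbert

fs = 10e6                                  # sampling rate, Hz
t = np.arange(0, 200e-6, 1 / fs)
# Synthetic guided-wave packet arriving at 80 microseconds (hypothetical)
arrival = 80e-6
sig = np.exp(-((t - arrival) / 8e-6) ** 2) * np.sin(2 * np.pi * 500e3 * t)

# Group time-of-flight: peak of the analytic envelope
envelope = np.abs(hilbert(sig))
tof_group = t[np.argmax(envelope)]

# Peak tracking: locate one raw-waveform peak near the envelope maximum
i0 = np.argmax(envelope)
win = slice(max(i0 - 20, 0), i0 + 20)
tof_peak = t[win][np.argmax(sig[win])]

print(tof_group, tof_peak)
```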
Robust spin-current injection in lateral spin valves with two-terminal Co2FeSi spin injectors
NASA Astrophysics Data System (ADS)
Oki, S.; Kurokawa, T.; Honda, S.; Yamada, S.; Kanashima, T.; Itoh, H.; Hamaya, K.
2017-05-01
We demonstrate generation and detection of pure spin currents by combining a two-terminal spin-injection technique and Co2FeSi (CFS) spin injectors in lateral spin valves (LSVs). We find that the two-terminal spin injection with CFS has the robust dependence of the nonlocal spin signals on the applied bias currents, markedly superior to the four-terminal spin injection with permalloy reported previously. In our LSVs, since the spin transfer torque from one CFS injector to another CFS one is large, the nonlocal magnetoresistance with respect to applied magnetic fields shows large asymmetry in high bias-current conditions. For utilizing multi-terminal spin injection with CFS as a method for magnetization reversals, the terminal arrangement of CFS spin injectors should be taken into account.
Gearbox damage identification and quantification using stochastic resonance
NASA Astrophysics Data System (ADS)
Mba, Clement U.; Marchesiello, Stefano; Fasana, Alessandro; Garibaldi, Luigi
2018-03-01
Amongst the many new tools used for vibration based mechanical fault diagnosis in rotating machineries, stochastic resonance (SR) has been shown to be able to identify as well as quantify gearbox damage via numerical simulations. To validate the numerical simulation results that were obtained in a previous work by the authors, SR is applied in the present study to data from an experimental gearbox that is representative of an industrial gearbox. Both spur and helical gears are used in the gearbox setup. While the results of the direct application of SR to experimental data do not exactly corroborate the numerical simulation results, applying SR to experimental data in pre-processed form is shown to be quite effective. In addition, it is demonstrated that traditional statistical techniques used for gearbox diagnosis can be used as a reference to check how well SR performs.
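For background, the classical SR mechanism exploited here passes a weak periodic (fault-related) component plus noise through an overdamped bistable system, dx/dt = ax - bx³ + s(t); with a well-chosen noise level, the output spectral line at the fault frequency is amplified. The sketch below is a crude Euler-Maruyama illustration of that mechanism with hypothetical parameters, not the authors' processing chain; in practice the noise level sigma is swept to locate the resonance.

```python
import numpy as np

rng = np.random.default_rng(8)
a, b = 1.0, 1.0          # bistable potential U(x) = -a x^2/2 + b x^4/4
f0, A = 0.05, 0.3        # weak periodic "fault" component (subthreshold)
sigma = 0.8              # noise level; swept in practice to find the resonance
dt, n = 0.01, 200_000

t = np.arange(n) * dt
x = np.zeros(n)
for i in range(1, n):
    drift = a * x[i-1] - b * x[i-1]**3 + A * np.sin(2 * np.pi * f0 * t[i-1])
    x[i] = x[i-1] + dt * drift + sigma * np.sqrt(dt) * rng.normal()

# Output spectrum: with a well-chosen sigma, the line at f0 stands out
X = np.abs(np.fft.rfft(x))**2
freqs = np.fft.rfftfreq(n, dt)
k = np.argmin(np.abs(freqs - f0))
print(X[k] / np.median(X))   # crude SNR proxy at the fault frequency
```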
NASA Technical Reports Server (NTRS)
Stolzer, Alan J.; Halford, Carl
2007-01-01
In a previous study, multiple regression techniques were applied to Flight Operations Quality Assurance-derived data to develop parsimonious model(s) for fuel consumption on the Boeing 757 airplane. The present study examined several data mining algorithms, including neural networks, on the fuel consumption problem and compared them to the multiple regression results obtained earlier. Using regression methods, parsimonious models were obtained that explained approximately 85% of the variation in fuel flow. In general, data mining methods were more effective in predicting fuel consumption. Classification and Regression Tree methods reported correlation coefficients of .91 to .92, and General Linear Models and Multilayer Perceptron neural networks reported correlation coefficients of about .99. These data mining models show great promise for use in further examining large FOQA databases for operational and safety improvements.
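For readers wanting to reproduce the flavor of this comparison, here is a minimal sketch on hypothetical data (not the FOQA dataset) pitting multiple regression against a regression tree and a multilayer perceptron, reporting the same correlation-coefficient figure of merit:

```python
# Illustrative regression vs. data-mining comparison on synthetic stand-in
# data; model choices mirror the methods named in the abstract above.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))                  # stand-in flight parameters
y = 50 + 10 * X[:, 0] + 5 * X[:, 1] ** 2 + rng.normal(scale=2, size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (LinearRegression(),
              DecisionTreeRegressor(max_depth=8),
              MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)):
    model.fit(X_tr, y_tr)
    r = np.corrcoef(model.predict(X_te), y_te)[0, 1]
    print(type(model).__name__, "correlation:", round(r, 3))
```

The nonlinear term in the synthetic target is what lets the tree and the neural network outperform the purely linear fit, echoing the pattern reported in the study.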
A Maneuvering Flight Noise Model for Helicopter Mission Planning
NASA Technical Reports Server (NTRS)
Greenwood, Eric; Rau, Robert; May, Benjamin; Hobbs, Christopher
2015-01-01
A new model for estimating the noise radiation during maneuvering flight is developed in this paper. The model applies the Quasi-Static Acoustic Mapping (Q-SAM) method to a database of acoustic spheres generated using the Fundamental Rotorcraft Acoustics Modeling from Experiments (FRAME) technique. A method is developed to generate a realistic flight trajectory from a limited set of waypoints and is used to calculate the quasi-static operating condition and corresponding acoustic sphere for the vehicle throughout the maneuver. By using a previously computed database of acoustic spheres, the acoustic impact of proposed helicopter operations can be rapidly predicted for use in mission-planning. The resulting FRAME-QS model is applied to near-horizon noise measurements collected for the Bell 430 helicopter undergoing transient pitch up and roll maneuvers, with good agreement between the measured data and the FRAME-QS model.
Room temperature synthesis of agarose/sol-gel glass pieces with tailored interconnected porosity.
Cabañas, M V; Peña, J; Román, J; Vallet-Regí, M
2006-09-01
An original shaping technique has been applied to prepare porous bodies at room temperature. Agarose, a biodegradable polysaccharide, was added as a binder to a sol-gel glass in powder form, yielding an easy-to-mold paste. Porous bodies with tailored, interconnected porosity can be straightforwardly prepared by pouring the slurry into a polymeric scaffold, previously designed by stereolithography, which is subsequently eliminated by alkaline dissolution at room temperature. The resulting pieces behave like a hydrogel with an enhanced consistency that makes them machinable and easy to manipulate. These materials generate an apatite-like layer when immersed in a simulated body fluid, indicating potential in vivo bioactivity. The proposed method can be applied to different powdered materials to produce pieces, at room temperature, with various shapes and sizes and with tailored interconnected porosity.
The use of the nominal group technique as an evaluative tool in medical undergraduate education.
Lloyd-Jones, G; Fowell, S; Bligh, J G
1999-01-01
In the present state of flux affecting UK medical undergraduate education, there is a pressing need for evaluative methods which will identify relevant outcomes, both expected and unanticipated. The student perspective is now legitimately accepted to form part of any evaluative exercise, but the qualitative methods commonly used for this purpose are expensive in time and analytical skills. The nominal group technique (NGT) has been used for various purposes, including course evaluation, and appears well suited to this application. It combines qualitative and quantitative components in a structured interaction which minimizes the influence of the researcher and of group dynamics. The sequence and mechanics of the NGT process are described as applied to an end-of-first-year evaluation in a novel undergraduate course. Doubts have been raised as to whether the results of NGT can be generalized to the larger group. In this paper, this problem is overcome by compiling a questionnaire based on the NGT items, which was distributed throughout the class. Design: nominal group technique with questionnaire development. Setting: the medical school at The University of Liverpool. Participants: medical students. Previous claims made on behalf of the NGT, such as the focus on the student voice, the minimizing of leadership influence and the richness of the data, are upheld in this report. Broad agreement was found with the NGT items, but two items (10%) did not display any consensus. The questionnaire extension of the NGT provides back-up evidence of the reliability of the data derived from the technique and enables it to be applied to the larger groups typical of undergraduate medicine.
Kernel-Phase Interferometry for Super-Resolution Detection of Faint Companions
NASA Astrophysics Data System (ADS)
Factor, Samuel M.; Kraus, Adam L.
2017-01-01
Direct detection of close-in companions (exoplanets or binary systems) is notoriously difficult. While coronagraphs and point spread function (PSF) subtraction can be used to reduce contrast and dig out signals of companions under the PSF, there are still significant limitations in separation and contrast. Non-redundant aperture masking (NRM) interferometry can be used to detect companions well inside the PSF of a diffraction limited image, though the mask discards ˜95% of the light gathered by the telescope and thus the technique is severely flux limited. Kernel-phase analysis applies interferometric techniques similar to NRM to a diffraction limited image utilizing the full aperture. Instead of non-redundant closure-phases, kernel-phases are constructed from a grid of points on the full aperture, simulating a redundant interferometer. I have developed my own faint companion detection pipeline which utilizes a Bayesian analysis of kernel-phases. I have used this pipeline to search for new companions in archival images from HST/NICMOS in order to constrain planet and binary formation models at separations inaccessible to previous techniques. Using this method, it is possible to detect a companion well within the classical λ/D Rayleigh diffraction limit using a fraction of the telescope time required for NRM. This technique can easily be applied to archival data, as no mask is needed, and will thus make the detection of close-in companions cheap and simple as no additional observations are needed. Since the James Webb Space Telescope (JWST) will be able to perform NRM observations, further development and characterization of kernel-phase analysis will allow efficient use of highly competitive JWST telescope time.
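The algebra behind kernel phases can be demonstrated compactly. The sketch below (my illustration, not the author's pipeline) builds a toy transfer matrix A mapping pupil-plane phase errors to baseline phases, takes the left null space of A via SVD as the kernel K, and verifies that K rejects instrumental errors while preserving an object signal:

```python
# Schematic of the kernel-phase idea: any row vector K with K @ A = 0 yields
# an observable immune, to first order, to pupil-plane phase errors.
import numpy as np

n_pupil, n_baselines = 12, 40
rng = np.random.default_rng(1)
A = np.zeros((n_baselines, n_pupil))
for b in range(n_baselines):          # each baseline differences two pupil points
    i, j = rng.choice(n_pupil, size=2, replace=False)
    A[b, i], A[b, j] = 1.0, -1.0

U, s, Vt = np.linalg.svd(A)           # kernel = left null space of A
rank = np.sum(s > 1e-10)
K = U[:, rank:].T                     # rows satisfy K @ A ≈ 0

pupil_errors = rng.normal(size=n_pupil)               # instrumental phases
object_phases = 0.01 * rng.normal(size=n_baselines)   # companion signal
measured = A @ pupil_errors + object_phases

print(np.allclose(K @ (A @ pupil_errors), 0))         # True: errors rejected
print(np.allclose(K @ measured, K @ object_phases))   # True: signal preserved
```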
Kernel-Phase Interferometry for Super-Resolution Detection of Faint Companions
NASA Astrophysics Data System (ADS)
Factor, Samuel
2016-10-01
Direct detection of close-in companions (binary systems or exoplanets) is notoriously difficult. While coronagraphs and point spread function (PSF) subtraction can be used to reduce contrast and dig out signals of companions under the PSF, there are still significant limitations in separation and contrast. While non-redundant aperture masking (NRM) interferometry can be used to detect companions well inside the PSF of a diffraction limited image, the mask discards 95% of the light gathered by the telescope and thus the technique is severely flux limited. Kernel-phase analysis applies interferometric techniques similar to NRM while utilizing the full aperture. Instead of closure-phases, kernel-phases are constructed from a grid of points on the full aperture, simulating a redundant interferometer. I propose to develop my own faint companion detection pipeline which utilizes an MCMC analysis of kernel-phases. I will search for new companions in archival images from NIC1 and ACS/HRC in order to constrain binary and planet formation models at separations inaccessible to previous techniques. Using this method, it is possible to detect a companion well within the classical λ/D Rayleigh diffraction limit using a fraction of the telescope time required for NRM. This technique can easily be applied to archival data, as no mask is needed, and will thus make the detection of close-in companions cheap and simple as no additional observations are needed. Since the James Webb Space Telescope (JWST) will be able to perform NRM observations, further development and characterization of kernel-phase analysis will allow efficient use of highly competitive JWST telescope time.
NASA Astrophysics Data System (ADS)
Eaton, Adam; Vincely, Vinoin; Lloyd, Paige; Hugenberg, Kurt; Vishwanath, Karthik
2017-03-01
Video photoplethysmography (VPPG) is a numerical technique for processing standard RGB video data of exposed human skin and extracting the heart rate (HR) from the skin areas. As a non-contact, sensor-free technique, VPPG can potentially provide estimates of a subject's heart rate, respiratory rate, and even heart-rate variability, with applications ranging from infant monitors to remote healthcare and psychological experiments. Though several previous studies have reported successful correlations between HR obtained using VPPG algorithms and HR measured using the gold-standard electrocardiograph, others have reported that these correlations depend on controlling for the duration of the video data analyzed, subject motion, and ambient lighting. Here, we investigate the ability of two commonly used VPPG algorithms to extract human heart rates under three different laboratory conditions. We compare the VPPG HR values extracted across these three sets of experiments to the gold-standard values acquired using an electrocardiogram or a commercially available pulse oximeter. The two VPPG algorithms were applied with and without KLT facial feature tracking and detection algorithms from the Computer Vision MATLAB® toolbox. Results indicate that VPPG-based numerical approaches can provide robust estimates of subject HR values and are relatively insensitive to the devices used to record the video data. However, they are highly sensitive to the conditions of video acquisition, including subject motion, the location, size and averaging techniques applied to regions-of-interest, and the number of video frames used for data processing.
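A bare-bones version of the VPPG idea is easy to state in code. The sketch below (an illustration under simplifying assumptions, not either of the tested algorithms) averages the green channel over a skin region of interest per frame and reads the heart rate off the dominant spectral peak in the physiological band:

```python
# Minimal VPPG sketch: green-channel mean over an ROI, then an FFT peak pick.
import numpy as np

def vppg_heart_rate(frames, fps, roi):
    """frames: array (n_frames, H, W, 3); roi: (row0, row1, col0, col1)."""
    r0, r1, c0, c1 = roi
    trace = frames[:, r0:r1, c0:c1, 1].mean(axis=(1, 2))  # green channel mean
    trace = trace - trace.mean()
    spectrum = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(trace.size, d=1.0 / fps)
    band = (freqs > 0.7) & (freqs < 4.0)          # ~42-240 beats per minute
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic test: a 1.2 Hz (72 bpm) pulsation buried in noise.
fps, n = 30, 600
t = np.arange(n) / fps
frames = 0.5 + 0.01 * np.sin(2 * np.pi * 1.2 * t)[:, None, None, None]
frames = np.repeat(np.repeat(np.repeat(frames, 8, 1), 8, 2), 3, 3)
frames += 0.005 * np.random.randn(*frames.shape)
print(vppg_heart_rate(frames, fps, (0, 8, 0, 8)))   # ≈ 72.0
```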
Ambient Noise Tomography of central Java, with Transdimensional Bayesian Inversion
NASA Astrophysics Data System (ADS)
Zulhan, Zulfakriza; Saygin, Erdinc; Cummins, Phil; Widiyantoro, Sri; Nugraha, Andri Dian; Luehr, Birger-G.; Bodin, Thomas
2014-05-01
Delineating the crustal structure of central Java is crucial for understanding its complex tectonic setting. However, seismic imaging of the strong heterogeneity typical of such a tectonically active region can be challenging, particularly in the upper crust where velocity contrasts are strongest and steep body wave ray-paths provide poor resolution. We have applied ambient noise cross-correlation to station pairs in central Java, Indonesia, using the MERapi Amphibious EXperiment (MERAMEX) dataset. The data were collected between May and October 2004. We used 120 of the 134 temporary seismic stations, covering central Java, for about 150 days of observation. More than 5000 Rayleigh wave Green's functions were extracted by cross-correlating the noise simultaneously recorded at available station pairs. We applied a fully nonlinear 2D Bayesian inversion technique to the retrieved travel times. Features in the derived tomographic images correlate well with previous studies, and some shallow structures that were not evident in previous studies are clearly imaged with ambient noise tomography. The Kendeng Basin and several active volcanoes appear with very low group velocities, and anomalies with relatively high velocities can be interpreted in terms of crustal sutures and/or surface geological features.
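The core of the ambient noise method is the cross-correlation step, which the following toy example illustrates (synthetic traces with an assumed 3 s travel time, not the MERAMEX processing chain): energy in the correlation concentrates at the inter-station travel time, approximating the Green's function:

```python
# Toy ambient-noise cross-correlation: a common diffuse source recorded at
# two "stations", one delayed; the correlation peaks at the travel time.
import numpy as np
from scipy.signal import correlate

fs, n = 100.0, 20_000             # 100 Hz sampling, ~200 s of "noise"
delay = int(3.0 * fs)             # assumed 3 s inter-station travel time
rng = np.random.default_rng(0)
source = rng.normal(size=n)
station_a = source + 0.5 * rng.normal(size=n)
station_b = np.roll(source, delay) + 0.5 * rng.normal(size=n)

xcorr = correlate(station_b, station_a, mode="full", method="fft")
lags = (np.arange(xcorr.size) - (n - 1)) / fs
print("peak lag (s):", lags[np.argmax(xcorr)])    # ≈ 3.0, the travel time
```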
Retreatment Predictions in Odontology by means of CBR Systems.
Campo, Livia; Aliaga, Ignacio J; De Paz, Juan F; García, Alvaro Enrique; Bajo, Javier; Villarubia, Gabriel; Corchado, Juan M
2016-01-01
The field of odontology requires an appropriate adjustment of treatments according to the circumstances of each patient. A follow-up treatment for a patient experiencing problems from a previous procedure such as endodontic therapy, for example, may not necessarily preclude the possibility of extraction. It is therefore necessary to investigate new solutions aimed at analyzing data and, with regard to the given values, determine whether dental retreatment is required. In this work, we present a decision support system which applies the case-based reasoning (CBR) paradigm, specifically designed to predict the practicality of performing or not performing a retreatment. Thus, the system uses previous experiences to provide new predictions, which is completely innovative in the field of odontology. The proposed prediction technique includes an innovative combination of methods that minimizes false negatives to the greatest possible extent. False negatives refer to a prediction favoring a retreatment when in fact it would be ineffective. The combination of methods is performed by applying an optimization problem to reduce incorrect classifications and takes into account different parameters, such as precision, recall, and statistical probabilities. The proposed system was tested in a real environment and the results obtained are promising.
On-road and wind-tunnel measurement of motorcycle helmet noise.
Kennedy, J; Carley, M; Walker, I; Holt, N
2013-09-01
The noise source mechanisms involved in motorcycling include various aerodynamic sources and engine noise. The problem of noise source identification requires extensive data acquisition of a type and level that has not previously been applied. Data acquisition on track and on road is problematic due to rider safety constraints and the portability of appropriate instrumentation. One way to address this problem is the use of data from wind tunnel tests, but the validity of these measurements for noise source identification must first be demonstrated. To achieve this, extensive wind tunnel tests were conducted and compared with the results from on-track measurements. Sound pressure levels as a function of speed were compared between on-track and wind tunnel tests and were found to be comparable. Spectral conditioning techniques were applied to separate engine and wind tunnel noise from aerodynamic noise and showed that the aerodynamic components were equivalent in both cases. The spectral conditioning of on-track data showed that the contribution of engine noise to the overall noise is a function of speed and is more significant than had previously been thought. These procedures form a basis for accurate experimental measurements of motorcycle noise.
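Spectral conditioning of this kind can be sketched with the ordinary coherence function. The example below assumes an engine reference channel (e.g. a tachometer or engine-bay microphone, an assumption on my part) alongside the helmet microphone, and conditions the coherent engine contribution out of the measured auto-spectrum via G_aero(f) = G_yy(f)·(1 − γ²(f)):

```python
# Coherence-based spectral conditioning: remove the part of the helmet-mic
# spectrum that is coherent with an engine reference channel.
import numpy as np
from scipy.signal import coherence, welch

fs = 8192
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)
engine = np.sin(2 * np.pi * 120 * t) + 0.1 * rng.normal(size=t.size)
aero = rng.normal(size=t.size)            # broadband "aerodynamic" noise
helmet = 0.8 * engine + aero              # helmet mic hears both

f, coh = coherence(engine, helmet, fs=fs, nperseg=4096)
_, Gyy = welch(helmet, fs=fs, nperseg=4096)
G_aero = Gyy * (1 - coh)                  # engine part conditioned out
k = np.argmin(abs(f - 120))
print("suppression factor at the 120 Hz engine line:", Gyy[k] / G_aero[k])
```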
The application of mean field theory to image motion estimation.
Zhang, J; Hanauer, G G
1995-01-01
Previously, Markov random field (MRF) model-based techniques have been proposed for image motion estimation. Since motion estimation is usually an ill-posed problem, various constraints are needed to obtain a unique and stable solution. The main advantage of the MRF approach is its capacity to incorporate such constraints, for instance, motion continuity within an object and motion discontinuity at the boundaries between objects. In the MRF approach, motion estimation is often formulated as an optimization problem, and two frequently used optimization methods are simulated annealing (SA) and iterated conditional modes (ICM). Although SA is theoretically optimal in the sense of finding the global optimum, it usually takes many iterations to converge. ICM, on the other hand, converges quickly, but its results are often unsatisfactory due to its "hard decision" nature. Previously, the authors applied mean field theory to image segmentation and image restoration problems. It provides results nearly as good as SA but with much faster convergence. The present paper shows how mean field theory can be applied to MRF model-based motion estimation. This approach is demonstrated on both synthetic and real-world images, where it produced good motion estimates.
Advanced Background Subtraction Applied to Aeroacoustic Wind Tunnel Testing
NASA Technical Reports Server (NTRS)
Bahr, Christopher J.; Horne, William C.
2015-01-01
An advanced form of background subtraction is presented and applied to aeroacoustic wind tunnel data. A variant of this method has seen use in other fields such as climatology and medical imaging. The technique, based on an eigenvalue decomposition of the background noise cross-spectral matrix, is robust against situations where isolated background auto-spectral levels are measured to be higher than levels of combined source and background signals. It also provides an alternate estimate of the cross-spectrum, which previously might have poor definition for low signal-to-noise ratio measurements. Simulated results indicate similar performance to conventional background subtraction when the subtracted spectra are weaker than the true contaminating background levels. Superior performance is observed when the subtracted spectra are stronger than the true contaminating background levels. Experimental results show limited success in recovering signal behavior for data where conventional background subtraction fails. They also demonstrate the new subtraction technique's ability to maintain a proper coherence relationship in the modified cross-spectral matrix. Beam-forming and de-convolution results indicate the method can successfully separate sources. Results also show a reduced need for the use of diagonal removal in phased array processing, at least for the limited data sets considered.
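One simplified variant of eigenvalue-based subtraction (my sketch, not necessarily the paper's exact formulation, which decomposes the background cross-spectral matrix itself) is to eigen-decompose the subtracted matrix and discard negative eigenvalues, so the estimate stays positive semi-definite even where the background estimate is too strong:

```python
# Eigenvalue-decomposition background subtraction sketch: the subtracted
# cross-spectral matrix (CSM) is projected onto the positive semi-definite
# cone so no auto-spectrum goes negative.
import numpy as np

def psd_background_subtract(G_total, G_background):
    w, V = np.linalg.eigh(G_total - G_background)
    w = np.clip(w, 0.0, None)              # drop unphysical negative powers
    return (V * w) @ V.conj().T

rng = np.random.default_rng(0)
m = 4                                      # microphones
steer = rng.normal(size=m) + 1j * rng.normal(size=m)
G_src = np.outer(steer, steer.conj())      # rank-1 source CSM
G_bkg = np.diag(rng.uniform(1.0, 2.0, m)).astype(complex)

G_est = psd_background_subtract(G_src + G_bkg, G_bkg)
print(np.allclose(G_est, G_src, atol=1e-8))    # recovers the source CSM
```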
Martínez-Domingo, Miguel Ángel; Valero, Eva M; Hernández-Andrés, Javier; Tominaga, Shoji; Horiuchi, Takahiko; Hirai, Keita
2017-11-27
We propose a method for the capture of high dynamic range (HDR), multispectral (MS), polarimetric (Pol) images of indoor scenes using a liquid crystal tunable filter (LCTF). We have included the adaptive exposure estimation (AEE) method to fully automate the capturing process. We also propose a pre-processing method which can be applied for the registration of HDR images after they are already built as the result of combining different low dynamic range (LDR) images. This method is applied to ensure a correct alignment of the different polarization HDR images for each spectral band. We have focused our efforts on two main applications: object segmentation and classification into metal and dielectric classes. We have simplified the segmentation using mean shift combined with cluster averaging and region merging techniques. We compare the performance of our segmentation with that of the Ncut and Watershed methods. For the classification task, we propose to use information not only in the highlight regions but also in their surrounding area, extracted from the degree of linear polarization (DoLP) maps. We present experimental results which show that the proposed image processing pipeline outperforms previous techniques developed specifically for MSHDRPol image cubes.
Novel jet observables from machine learning
NASA Astrophysics Data System (ADS)
Datta, Kaustuv; Larkoski, Andrew J.
2018-03-01
Previous studies have demonstrated the utility and applicability of machine learning techniques to jet physics. In this paper, we construct new observables for the discrimination of jets from different originating particles exclusively from information identified by the machine. The approach we propose is to first organize information in the jet by resolved phase space and determine the effective N-body phase space at which discrimination power saturates. This then allows for the construction of a discrimination observable from the N-body phase space coordinates. A general form of this observable can be expressed with numerous parameters that are chosen so that the observable maximizes the signal vs. background likelihood. Here, we illustrate this technique applied to discrimination of H → bb̄ decays from massive g → bb̄ splittings. We show that for a simple parametrization, we can construct an observable that has discrimination power comparable to, or better than, widely-used observables motivated from theory considerations. For the case of jets on which modified mass-drop tagger grooming is applied, the observable that the machine learns is essentially the angle of the dominant gluon emission off of the bb̄ pair.
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2015-01-01
A cross-power spectrum phase based adaptive technique is discussed which iteratively determines the time delay between two digitized signals that are coherent. The adaptive delay algorithm belongs to a class of algorithms that identifies a minimum of a pattern matching function. The algorithm uses a gradient technique to find the value of the adaptive delay that minimizes a cost function based in part on the slope of a linear function fit to the measured cross-power spectrum phase and in part on the standard error of the curve fit. This procedure is applied to data from a Honeywell TECH977 static-engine test. Data were obtained using a combustor probe, two turbine exit probes, and far-field microphones. Signals from this instrumentation are used to estimate the post-combustion residence time in the combustor. Comparison with previous studies of the post-combustion residence time validates this approach. In addition, the procedure removes the bias due to misalignment of signals in the calculation of coherence, which is a first step in applying array processing methods to the magnitude-squared coherence data. The procedure also provides an estimate of the cross-spectrum phase-offset.
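The relationship the method exploits is that, for two coherent signals offset by a delay tau, the cross-spectrum phase is linear in frequency with slope −2πτ. The sketch below (an illustration of that core relationship, not the paper's iterative gradient search) recovers a known delay from a line fit to the unwrapped phase:

```python
# Time delay from the slope of the cross-spectrum phase (synthetic data).
import numpy as np
from scipy.signal import csd

fs, n, tau = 1000.0, 2 ** 14, 0.012          # 12 ms true delay (assumed)
rng = np.random.default_rng(0)
x = rng.normal(size=n)
y = np.roll(x, int(tau * fs)) + 0.2 * rng.normal(size=n)

f, Pxy = csd(x, y, fs=fs, nperseg=1024)      # cross-power spectral density
phase = np.unwrap(np.angle(Pxy))
keep = (f > 0) & (f < 100.0)                 # fit the low-frequency band
slope = np.polyfit(f[keep], phase[keep], 1)[0]
print("estimated delay (s):", -slope / (2 * np.pi))   # ≈ 0.012
```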
Nanoscale Chemical Imaging of Zeolites Using Atom Probe Tomography.
Weckhuysen, Bert Marc; Schmidt, Joel; Peng, Linqing; Poplawsky, Jonathan
2018-05-02
Understanding structure-composition-property relationships in zeolite-based materials is critical to engineering improved solid catalysts. However, this can be difficult to realize, as even single zeolite crystals can exhibit heterogeneities spanning several orders of magnitude, with consequences for, e.g., reactivity, diffusion and stability. Great progress has been made in characterizing these porous solids using tomographic techniques, though each method has an ultimate spatial resolution limitation. Atom Probe Tomography (APT) is the only technique so far capable of producing 3-D compositional reconstructions with sub-nm-scale resolution, and has only recently been applied to zeolite-based catalysts. Herein, we discuss the use of APT to study zeolites, including the critical aspects of sample preparation, data collection, assignment of mass spectral peaks (such as the predominant CO peak), the limitations of spatial resolution for the recovery of crystallographic information, and proper data analysis. All sections are illustrated with examples from recent literature, as well as previously unpublished data and analyses, to demonstrate practical strategies for overcoming potential pitfalls in applying APT to zeolites, thereby highlighting new insights gained from the APT method. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Investigating the Feasibility of Utilizing Carbon Nanotube Fibers for Spacesuit Dust Mitigation
NASA Technical Reports Server (NTRS)
Manyapu, Kavya K.; de Leon, Pablo; Peltz, Leora; Tsentalovich, Dmitri; Gaier, James R.; Calle, Carlos; Mackey, Paul
2016-01-01
Historical data from the Apollo missions has compelled NASA to identify dust mitigation of spacesuits and other components as a critical path prior to sending humans on potential future lunar exploration missions. Several studies thus far have proposed passive and active countermeasures to address this challenge. However, these technologies have been primarily developed and proven for rigid surfaces such as solar cells and thermal radiators. Integration of these technologies for spacesuit dust mitigation has remained an open challenge due to the complexity of suit design. Current research investigates novel methods to enhance integration of the Electrodynamic Dust Shield (EDS) concept for spacesuits. We leverage previously proven EDS concept developed by NASA for rigid surfaces and apply new techniques to integrate the technology into spacesuits to mitigate dust contamination. The study specifically examines the feasibility of utilizing Carbon Nanotube (CNT) yarns manufactured by Rice University as electrodes in spacesuit material. Proof of concept testing was conducted at NASA Kennedy Space Center using lunar regolith simulant to understand the feasibility of the proposed techniques for spacesuit application. Results from the experiments are detailed in this paper. Potential challenges of applying this technology for spacesuits are also identified.
Janas, Christine; Mast, Marc-Phillip; Kirsamer, Li; Angioni, Carlo; Gao, Fiona; Mäntele, Werner; Dressman, Jennifer; Wacker, Matthias G
2017-06-01
The dispersion releaser (DR) is a dialysis-based setup for the analysis of drug release from nanosized drug carriers. It is mounted into dissolution apparatus 2 of the United States Pharmacopoeia. The present study evaluated the DR technique by investigating the release of the model compound flurbiprofen from drug solution and from nanoformulations composed of the drug and the polymer materials poly(lactic acid), poly(lactic-co-glycolic acid) or Eudragit® RS PO. The drug-loaded nanocarriers ranged in size between 185.9 and 273.6 nm and were characterized by a monomodal size distribution (PDI < 0.1). The membrane permeability constants of flurbiprofen were calculated and mathematical modeling was applied to obtain the normalized drug release profiles. To compare the sensitivities of the DR and the dialysis bag technique, the differences in the membrane permeation rates were calculated. Finally, different formulation designs of flurbiprofen were sensitively discriminated using the DR technology. The mechanism of drug release from the nanosized carriers was analyzed by applying two mathematical models described previously, the reciprocal powered time model and the three-parameter model. Copyright © 2017 Elsevier B.V. All rights reserved.
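As a hedged illustration of this kind of release-model fitting (using a generic power-law profile as a stand-in, since the exact forms of the two models are given in the paper, and made-up data points), parameters can be estimated with a standard least-squares fit:

```python
# Illustrative fit of a power-law release profile to (fabricated) data.
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, k, n):
    """Fraction released at time t: M_t / M_inf = k * t**n (illustrative)."""
    return k * t ** n

t = np.array([0.5, 1, 2, 4, 8, 12, 24.0])          # hours (made-up data)
released = np.array([0.08, 0.14, 0.24, 0.38, 0.58, 0.70, 0.92])
(k, n), _ = curve_fit(power_law, t, released, p0=(0.1, 0.5))
print(f"k = {k:.3f}, n = {n:.3f}")
```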
Low-Resolution Radial-Velocity Monitoring of Pulsating sdBs in the Kepler Field
NASA Astrophysics Data System (ADS)
Telting, J.; Östensen, R.; Reed, M.; Kiæerad, F.; Farris, L.; Baran, A.; Oreiro, R.; O'Toole, S.
2014-04-01
We present preliminary results from an ongoing spectroscopic campaign to uncover the binary status of the 18 known pulsating subdwarf B stars and the one pulsating BHB star observed with the Kepler spacecraft. During the 2010-2012 observing seasons, we have used the KP4m Mayall, NOT, and WHT telescopes to obtain low-resolution (R˜2000-2500) Balmer-line spectroscopy of our sample stars. We applied a standard cross-correlation technique to derive radial velocities, and find clear evidence for binarity in several of the pulsators, some of which were not previously known to be binaries.
Symmetry analysis of trimers rovibrational spectra: the case of Ne3
NASA Astrophysics Data System (ADS)
Márquez-Mijares, Maykel; Roncero, Octavio; Villarreal, Pablo; González-Lezana, Tomás
2018-05-01
An approximate method to assign the symmetry of the rovibrational spectrum of homonuclear trimers, based on the solution of the rotational Hamiltonian by means of a purely vibrational basis combined with standard rotational functions, is applied to Ne3. The neon trimer constitutes an ideal test case between heavier systems such as Ar3, for which the method proves to be an extremely useful technique, and other previously investigated cases such as H3+, where some limitations were observed. Comparisons of the calculated rovibrational energy levels are established with results from different calculations reported in the literature.
System-level perturbations of cell metabolism using CRISPR/Cas9
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakočiūnas, Tadas; Jensen, Michael K.; Keasling, Jay D.
CRISPR/Cas9 (clustered regularly interspaced palindromic repeats and the associated protein Cas9) techniques have made genome engineering and transcriptional reprogramming studies much more advanced and cost-effective. For metabolic engineering purposes, the CRISPR-based tools have been applied to single and multiplex pathway modifications and transcriptional regulations. The effectiveness of these tools allows researchers to implement genome-wide perturbations, test model-guided genome editing strategies, and perform transcriptional reprogramming perturbations in a more advanced manner than previously possible. In this mini-review we highlight recent studies adopting CRISPR/Cas9 for systems-level perturbations and model-guided metabolic engineering.
The ensemble switch method for computing interfacial tensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmitz, Fabian; Virnau, Peter
2015-04-14
We present a systematic thermodynamic integration approach to compute interfacial tensions for solid-liquid interfaces, which is based on the ensemble switch method. Applying Monte Carlo simulations and finite-size scaling techniques, we obtain results for hard spheres, which are in agreement with previous computations. The case of solid-liquid interfaces in a variant of the effective Asakura-Oosawa model and of liquid-vapor interfaces in the Lennard-Jones model are discussed as well. We demonstrate that a thorough finite-size analysis of the simulation data is required to obtain precise results for the interfacial tension.
Computer program for fast Karhunen Loeve transform algorithm
NASA Technical Reports Server (NTRS)
Jain, A. K.
1976-01-01
The fast KL transform algorithm was applied for data compression of a set of four ERTS multispectral images, and its performance was compared with other techniques previously studied on the same image data. The performance criteria used here are mean square error and signal-to-noise ratio. The results obtained show a superior performance of the fast KL transform coding algorithm on the data set used with respect to the above stated performance criteria. A summary of the results is given in Chapter I, and details of comparisons and discussion of conclusions are given in Chapter IV.
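The underlying KL (principal component) transform coding step can be sketched briefly. The example below is illustrative only; the report's fast algorithm avoids the full eigen-decomposition by exploiting additional structure. It codes 8x8 image blocks with the leading eigenvectors of their covariance:

```python
# KL-transform (PCA) block coding sketch: keep 8 of 64 coefficients (8:1),
# reconstruct, and report mean square error and SNR as figures of merit.
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(size=(256, 256)).cumsum(0).cumsum(1)   # smooth test image

blocks = image.reshape(32, 8, 32, 8).transpose(0, 2, 1, 3).reshape(-1, 64)
mean = blocks.mean(axis=0)
cov = np.cov(blocks - mean, rowvar=False)
w, V = np.linalg.eigh(cov)
basis = V[:, ::-1][:, :8]                 # 8 leading eigenvectors

coeffs = (blocks - mean) @ basis          # KL transform coefficients
recon = coeffs @ basis.T + mean
mse = np.mean((blocks - recon) ** 2)
snr = 10 * np.log10(np.mean((blocks - mean) ** 2) / mse)
print(f"MSE = {mse:.3f}, SNR = {snr:.1f} dB")
```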
Sitepu, Monika S; Kaewkungwal, Jaranit; Luplerdlop, Nathanej; Soonthornworasiri, Ngamphol; Silawan, Tassanee; Poungsombat, Supawadee; Lawpoolsri, Saranath
2013-03-01
This study aimed to describe the temporal patterns of dengue transmission in Jakarta from 2001 to 2010, using data from the national surveillance system. The Box-Jenkins forecasting technique was used to develop a seasonal autoregressive integrated moving average (SARIMA) model for the study period and subsequently applied to forecast DHF incidence in 2011 in Jakarta Utara, Jakarta Pusat, Jakarta Barat, and the municipalities of Jakarta Province. Dengue incidence in 2011, based on the forecasting model was predicted to increase from the previous year.
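A minimal Box-Jenkins sketch of this kind of SARIMA forecasting, on hypothetical monthly counts rather than the Jakarta surveillance data, looks like this with statsmodels (the model orders here are assumptions for illustration):

```python
# Fit a seasonal ARIMA to synthetic monthly case counts and forecast a year.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
months = np.arange(120)
cases = (200 + 80 * np.sin(2 * np.pi * months / 12)        # annual seasonality
         + 0.5 * months + rng.normal(scale=20, size=120))  # trend + noise

model = sm.tsa.SARIMAX(cases, order=(1, 1, 1), seasonal_order=(1, 0, 1, 12))
fit = model.fit(disp=False)
print(fit.forecast(steps=12))           # incidence forecast, next 12 months
```

In practice the (p, d, q) and seasonal orders would be chosen from autocorrelation diagnostics and information criteria, as in the Box-Jenkins procedure the study describes.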
NASA Technical Reports Server (NTRS)
1991-01-01
A collaboration between NASA Lewis Research Center (LRC) and Gladwin Engineering resulted in the adaptation of aerospace high temperature metal technology to the continuous casting of steel. The continuous process is more efficient because it takes less time and labor. A high temperature material, once used on the X-15 research plane, was applied to metal rollers by an LRC-developed spraying technique. Lewis Research Center also supplied a prototype mold of metal composites, reducing erosion and promoting thermal conductivity. Rollers that previously cracked due to thermal fatigue lasted longer. Gladwin's sales have increased, and additional NASA-developed innovations are anticipated.
How best to assess suppression in patients with high anisometropia.
Li, Jinrong; Hess, Robert F; Chan, Lily Y L; Deng, Daming; Chen, Xiang; Yu, Minbin; Thompson, Benjamin S
2013-02-01
We have recently described a rapid technique for measuring suppression using a dichoptic signal/noise task. Here, we report a modification of this technique that allows for accurate measurements to be made in amblyopic patients with high levels of anisometropia. This was necessary because aniseikonic image size differences between the two eyes can provide a cue for signal/noise segregation and, therefore, influence suppression measurement in these patients. Suppression was measured using our original technique and with a modified technique whereby the size of the signal and noise elements was randomized across the stimulus to eliminate size differences as a cue for task performance. Eleven patients with anisometropic amblyopia, five with more than 5 diopters (D) spherical equivalent difference (SED), six with less than 5 D SED between the eyes, and 10 control observers completed suppression measurements using both techniques. Suppression measurements in controls and patients with less than 5 D SED were constant across the two techniques; however, patients with more than 5 D SED showed significantly stronger suppression on the modified technique with randomized element size. Measurements made with the modified technique correlated with the loss of visual acuity in the amblyopic eye and were in good agreement with previous reports using detailed psychophysical measurements. The signal/noise technique for measuring suppression can be applied to patients with high levels of anisometropia and aniseikonia if element size is randomized. In addition, deeper suppression is associated with a greater loss of visual acuity in patients with anisometropic amblyopia.
Advanced DPSM approach for modeling ultrasonic wave scattering in an arbitrary geometry
NASA Astrophysics Data System (ADS)
Yadav, Susheel K.; Banerjee, Sourav; Kundu, Tribikram
2011-04-01
Several techniques are used to diagnose structural damage. In the ultrasonic technique, structures are tested by analyzing ultrasonic signals scattered by damage. The interpretation of these signals requires a good understanding of the interaction between ultrasonic waves and structures. Therefore, researchers need analytical or numerical techniques to gain a clear understanding of the interaction between ultrasonic waves and structural damage. However, modeling wave scattering phenomena by conventional numerical techniques such as the finite element method requires a very fine mesh at high frequencies, necessitating heavy computational power. The distributed point source method (DPSM) is a newly developed, robust, mesh-free technique for simulating ultrasonic, electrostatic and electromagnetic fields. In most previous studies the DPSM technique has been applied to model two-dimensional surface geometries and simple three-dimensional scatterer geometries; it was difficult to perform the analysis for complex three-dimensional geometries. Here the technique is extended to model wave scattering in an arbitrary geometry. In this paper a channel section, idealized as a thin solid plate with several rivet holes, is modeled. The simulation has been carried out with and without cracks near the rivet holes. Further, a comparison study has been carried out to characterize the crack. A computer code has been developed in C for modeling the ultrasonic field in a solid plate with and without cracks near the rivet holes.
Bujar, Magdalena; McAuslane, Neil; Walker, Stuart R.; Salek, Sam
2017-01-01
Introduction: Although pharmaceutical companies, regulatory authorities, and health technology assessment (HTA) agencies have been increasingly using decision-making frameworks, it is not certain whether these enable better quality decision making. This could be addressed by formally evaluating the quality of decision-making process within those organizations. The aim of this literature review was to identify current techniques (tools, questionnaires, surveys, and studies) for measuring the quality of the decision-making process across the three stakeholders. Methods: Using MEDLINE, Web of Knowledge, and other Internet-based search engines, a literature review was performed to systematically identify techniques for assessing quality of decision making in medicines development, regulatory review, and HTA. A structured search was applied using key words and a secondary review was carried out. In addition, the measurement properties of each technique were assessed and compared. Ten Quality Decision-Making Practices (QDMPs) developed previously were then used as a framework for the evaluation of techniques identified in the review. Due to the variation in studies identified, meta-analysis was inappropriate. Results: This review identified 13 techniques, where 7 were developed specifically to assess decision making in medicines' development, regulatory review, or HTA; 2 examined corporate decision making, and 4 general decision making. Regarding how closely each technique conformed to the 10 QDMPs, the 13 techniques assessed a median of 6 QDMPs, with a mode of 3 QDMPs. Only 2 techniques evaluated all 10 QDMPs, namely the Organizational IQ and the Quality of Decision Making Orientation Scheme (QoDoS), of which only one technique, QoDoS could be applied to assess decision making of both individuals and organizations, and it possessed generalizability to capture issues relevant to companies as well as regulatory authorities. Conclusion: This review confirmed a general paucity of research in this area, particularly regarding the development and systematic application of techniques for evaluating quality decision making, with no consensus around a gold standard. This review has identified QoDoS as the most promising available technique for assessing decision making in the lifecycle of medicines and the next steps would be to further test its validity, sensitivity, and reliability. PMID:28443022
Global dynamic optimization approach to predict activation in metabolic pathways.
de Hijas-Liste, Gundián M; Klipp, Edda; Balsa-Canto, Eva; Banga, Julio R
2014-01-06
During the last decade, a number of authors have shown that the genetic regulation of metabolic networks may follow optimality principles. Optimal control theory has been successfully used to compute optimal enzyme profiles for simple metabolic pathways. However, applying this optimal control framework to more general networks (e.g. branched networks, or networks incorporating enzyme production dynamics) yields problems that are analytically intractable and/or numerically very challenging. Further, these previous studies have only considered a single-objective framework. In this work we consider a more general multi-objective formulation and we present solutions based on recent developments in global dynamic optimization techniques. We illustrate the performance and capabilities of these techniques on two sets of problems. First, we consider a set of single-objective examples of increasing complexity taken from the recent literature. We analyze the multimodal character of the associated nonlinear optimization problems, and we also evaluate different global optimization approaches in terms of numerical robustness, efficiency and scalability. Second, we consider generalized multi-objective formulations for several examples, and we show how this framework results in more biologically meaningful results. The proposed strategy was used to solve a set of single-objective case studies related to unbranched and branched metabolic networks of different levels of complexity. All problems were successfully solved in reasonable computation times with our global dynamic optimization approach, reaching solutions which were comparable or better than those reported in the previous literature. Further, we considered, for the first time, multi-objective formulations, illustrating how activation in metabolic pathways can be explained in terms of the best trade-offs between conflicting objectives. This new methodology can be applied to metabolic networks with arbitrary topologies, non-linear dynamics and constraints.
Salk, Carl F; Frey, Ulrich; Rusch, Hannes
2014-01-01
Communities, policy actors and conservationists benefit from understanding what institutions and land management regimes promote ecosystem services like carbon sequestration and biodiversity conservation. However, the definition of success depends on local conditions. Forests' potential carbon stock, biodiversity and rate of recovery following disturbance are known to vary with a broad suite of factors including temperature, precipitation, seasonality, species' traits and land use history. Methods like tracking over-time changes within forests, or comparison with "pristine" reference forests have been proposed as means to compare the structure and biodiversity of forests in the face of underlying differences. However, data from previous visits or reference forests may be unavailable or costly to obtain. Here, we introduce a new metric of locally weighted forest intercomparison to mitigate the above shortcomings. This method is applied to an international database of nearly 300 community forests and compared with previously published techniques. It is particularly suited to large databases where forests may be compared among one another. Further, it avoids problematic comparisons with old-growth forests which may not resemble the goal of forest management. In most cases, the different methods produce broadly congruent results, suggesting that researchers have the flexibility to compare forest conditions using whatever type of data is available. Forest structure and biodiversity are shown to be independently measurable axes of forest condition, although users' and foresters' estimations of seemingly unrelated attributes are highly correlated, perhaps reflecting an underlying sentiment about forest condition. These findings contribute new tools for large-scale analysis of ecosystem condition and natural resource policy assessment. Although applied here to forestry, these techniques have broader applications to classification and evaluation problems using crowdsourced or repurposed data for which baselines or external validations are not available.
Development and verification of global/local analysis techniques for laminated composites
NASA Technical Reports Server (NTRS)
Thompson, Danniella Muheim; Griffin, O. Hayden, Jr.
1991-01-01
A two-dimensional to three-dimensional global/local finite element approach was developed, verified, and applied to a laminated composite plate of finite width and length containing a central circular hole. The resulting stress fields for axial compression loads were examined for several symmetric stacking sequences and hole sizes. Verification was based on comparison of the displacements and the stress fields with accepted trends from previous free edge investigations and with a complete three-dimensional finite element solution of the plate. The laminates in the compression study included symmetric cross-ply, angle-ply and quasi-isotropic stacking sequences. The entire plate was selected as the global model and analyzed with two-dimensional finite elements. Displacements along a region identified as the global/local interface were applied in a kinematically consistent fashion to independent three-dimensional local models. Local areas of interest in the plate included a portion of the straight free edge near the hole, and the immediate area around the hole. Interlaminar stress results obtained from the global/local analyses compare well with previously reported trends, and some new conclusions about interlaminar stress fields in plates with different laminate orientations and hole sizes are presented for compressive loading. The effectiveness of the global/local procedure in reducing the computational effort required to solve these problems is clearly demonstrated by comparing the computer time required to formulate and solve the linear, static system of equations for the global and local analyses with that required for a complete three-dimensional formulation of a cross-ply laminate. Specific processors used during the analyses are described in general terms. The application of this global/local technique is not limited to a particular software system, and it was developed and described in as general a manner as possible.
Results and Error Estimates from GRACE Forward Modeling over Antarctica
NASA Astrophysics Data System (ADS)
Bonin, Jennifer; Chambers, Don
2013-04-01
Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Antarctica. However when tested previously, the least squares technique has required constraints in the form of added process noise in order to be reliable. Poor choice of local basin layout has also adversely affected results, as has the choice of spatial smoothing used with GRACE. To develop design parameters which will result in correct high-resolution mass detection and to estimate the systematic errors of the method over Antarctica, we use a "truth" simulation of the Antarctic signal. We apply the optimal parameters found from the simulation to RL05 GRACE data across Antarctica and the surrounding ocean. We particularly focus on separating the Antarctic peninsula's mass signal from that of the rest of western Antarctica. Additionally, we characterize how well the technique works for removing land leakage signal from the nearby ocean, particularly that near the Drake Passage.
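The forward-modeling step reduces to a weighted least squares projection, which the following schematic illustrates (my sketch with made-up basin patterns, not the authors' code): gridded observations are expressed as a linear combination of pre-determined basin patterns and solved for basin masses:

```python
# Weighted least squares projection of a gridded field onto basin patterns.
import numpy as np

rng = np.random.default_rng(0)
n_grid, n_basins = 500, 4
G = (rng.uniform(size=(n_grid, n_basins)) < 0.25).astype(float)  # basin masks
true_mass = np.array([5.0, -3.0, 1.0, 0.0])     # cm water equivalent (assumed)

obs = G @ true_mass + 0.5 * rng.normal(size=n_grid)   # noisy "GRACE" field
W = np.eye(n_grid)                                    # data weights (uniform here)
m_hat = np.linalg.solve(G.T @ W @ G, G.T @ W @ obs)   # normal equations
print(m_hat.round(2))                                 # ≈ true_mass
```

In the real problem the weights and any added process-noise constraints encode the GRACE error covariance and stabilize the inversion, which is the tuning the simulation study above is designed to pin down.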
Damage Diagnosis in Semiconductive Materials Using Electrical Impedance Measurements
NASA Technical Reports Server (NTRS)
Ross, Richard W.; Hinton, Yolanda L.
2008-01-01
Recent aerospace industry trends have resulted in an increased demand for real-time, effective techniques for in-flight structural health monitoring. A promising technique for damage diagnosis uses electrical impedance measurements of semiconductive materials. By applying a small electrical current into a material specimen and measuring the corresponding voltages at various locations on the specimen, changes in the electrical characteristics due to the presence of damage can be assessed. An artificial neural network uses these changes in electrical properties to provide an inverse solution that estimates the location and magnitude of the damage. The advantage of the electrical impedance method over other damage diagnosis techniques is that it uses the material as the sensor. Simple voltage measurements can be used instead of discrete sensors, resulting in a reduction in weight and system complexity. This research effort extends previous work by employing finite element method models to improve accuracy of complex models with anisotropic conductivities and by enhancing the computational efficiency of the inverse techniques. The paper demonstrates a proof of concept of a damage diagnosis approach using electrical impedance methods and a neural network as an effective tool for in-flight diagnosis of structural damage to aircraft components.
A novel background field removal method for MRI using projection onto dipole fields (PDF).
Liu, Tian; Khalidov, Ildar; de Rochefort, Ludovic; Spincemaille, Pascal; Liu, Jing; Tsiouris, A John; Wang, Yi
2011-11-01
For optimal image quality in susceptibility-weighted imaging and accurate quantification of susceptibility, it is necessary to isolate the local field generated by local magnetic sources (such as iron) from the background field that arises from imperfect shimming and variations in magnetic susceptibility of surrounding tissues (including air). Previous background removal techniques have limited effectiveness depending on the accuracy of model assumptions or information input. In this article, we report an observation that the magnetic field for a dipole outside a given region of interest (ROI) is approximately orthogonal to the magnetic field of a dipole inside the ROI. Accordingly, we propose a nonparametric background field removal technique based on projection onto dipole fields (PDF). In this PDF technique, the background field inside an ROI is decomposed into a field originating from dipoles outside the ROI using the projection theorem in Hilbert space. This novel PDF background removal technique was validated on a numerical simulation and a phantom experiment and was applied in human brain imaging, demonstrating substantial improvement in background field removal compared with the commonly used high-pass filtering method. Copyright © 2011 John Wiley & Sons, Ltd.
Illias, Hazlee Azil; Zhao Liang, Wee
2018-01-01
Early detection of power transformer faults is important because it can reduce the maintenance cost of the transformer and help ensure continuous electricity supply in power systems. The Dissolved Gas Analysis (DGA) technique is commonly used to identify the fault type of oil-filled power transformers, but combining artificial intelligence methods with optimisation methods has shown convincing results. In this work, a hybrid support vector machine (SVM) with a modified evolutionary particle swarm optimisation (EPSO) algorithm was proposed to determine the transformer fault type. The superiority of the modified PSO technique with SVM was evaluated by comparing the results with the actual fault diagnoses, unoptimised SVM and previously reported works. Data reduction was also applied using stepwise regression prior to the training process of the SVM to reduce the training time. It was found that the proposed hybrid SVM-Modified EPSO (MEPSO)-Time Varying Acceleration Coefficient (TVAC) technique yields the highest percentage of correctly identified faults in a power transformer compared to other PSO algorithms. Thus, the proposed technique can be one of the potential solutions to identify the transformer fault type based on DGA data on site.
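The hybrid scheme can be sketched with a plain PSO in place of the paper's modified EPSO-TVAC variant: particles encode the SVM hyper-parameters (C, gamma) and the swarm maximizes cross-validated accuracy. The example below runs on synthetic stand-in features, not DGA gas-ratio data:

```python
# Plain PSO tuning of SVM hyper-parameters on synthetic multi-class data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=5, n_classes=3,
                           n_informative=4, n_redundant=1, random_state=0)

def fitness(p):                          # p = (log10 C, log10 gamma)
    clf = SVC(C=10 ** p[0], gamma=10 ** p[1])
    return cross_val_score(clf, X, y, cv=5).mean()

rng = np.random.default_rng(0)
n_particles, iters = 12, 20
pos = rng.uniform(-3, 3, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.uniform(size=(2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    vals = np.array([fitness(p) for p in pos])
    better = vals > pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmax()].copy()

print("best (C, gamma):", 10 ** gbest, "CV accuracy:", pbest_val.max().round(3))
```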
Application of Function-Failure Similarity Method to Rotorcraft Component Design
NASA Technical Reports Server (NTRS)
Roberts, Rory A.; Stone, Robert E.; Tumer, Irem Y.; Clancy, Daniel (Technical Monitor)
2002-01-01
Performance and safety are the top concerns of high-risk aerospace applications at NASA. Eliminating or reducing performance and safety problems can be achieved with a thorough understanding of the potential failure modes in the designs that lead to these problems. The majority of techniques use prior knowledge and experience as well as Failure Modes and Effects Analysis to determine the potential failure modes of aircraft. During the design of aircraft, a general technique is needed to ensure that every potential failure mode is considered, while avoiding spending time on improbable failure modes. In this work, this is accomplished by mapping failure modes to specific components, which are described by their functionality. The failure modes are then linked to the basic functions that are carried out within the components of the aircraft. Using this technique, designers can examine the basic functions and select appropriate analyses to eliminate or design out the potential failure modes. The fundamentals of this method were previously introduced for a simple rotating machine test rig with basic functions common to a rotorcraft. In this paper, the technique is applied to the engine and power train of a rotorcraft, using failures and functions obtained from accident reports and engineering drawings.
ProteinAC: a frequency domain technique for analyzing protein dynamics
NASA Astrophysics Data System (ADS)
Bozkurt Varolgunes, Yasemin; Demir, Alper
2018-03-01
It is widely believed that the interactions of proteins with ligands and other proteins are determined by their dynamic characteristics as opposed to only static, time-invariant processes. We propose a novel computational technique, called ProteinAC (PAC), that can be used to analyze small scale functional protein motions as well as interactions with ligands directly in the frequency domain. PAC was inspired by a frequency domain analysis technique that is widely used in electronic circuit design, and can be applied to both coarse-grained and all-atom models. It can be considered as a generalization of previously proposed static perturbation-response methods, where the frequency of the perturbation becomes the key. We discuss the precise relationship of PAC to static perturbation-response schemes. We show that the frequency of the perturbation may be an important factor in protein dynamics. Perturbations at different frequencies may result in completely different response behavior while magnitude and direction are kept constant. Furthermore, we introduce several novel frequency dependent metrics that can be computed via PAC in order to characterize response behavior. We present results for the ferric binding protein that demonstrate the potential utility of the proposed techniques.
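The frequency-dependent perturbation-response idea can be illustrated on a generic linearized network model; this is only a sketch of the underlying linear-response calculation, not the authors' PAC implementation, and the matrices K, G, M (stiffness, damping, mass of a coarse-grained model) are assumed inputs:

```python
# Minimal sketch of a frequency-domain perturbation-response calculation
# on a linearized network model. Illustrative of the idea behind PAC only.
import numpy as np

def frequency_response(K, G, M, f, omegas):
    """Steady-state displacement amplitudes |x(w)| for a force phasor f,
    solving (K + i*w*G - w^2*M) x = f at each angular frequency w."""
    out = []
    for w in omegas:
        A = K + 1j * w * G - (w ** 2) * M
        out.append(np.abs(np.linalg.solve(A, f)))
    return np.array(out)        # shape (n_freq, n_dof)

# As w -> 0 this recovers the static perturbation-response limit x = K^{-1} f,
# which is the sense in which PAC generalizes static perturbation-response.
```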
Laparoscopic revision of failed antireflux operations.
Serafini, F M; Bloomston, M; Zervos, E; Muench, J; Albrink, M H; Murr, M; Rosemurgy, A S
2001-01-01
A small number of patients fail fundoplication and require reoperation. Laparoscopic techniques have been applied to reoperative fundoplications. We reviewed our experience with reoperative laparoscopic fundoplication. Reoperative laparoscopic fundoplication was undertaken in 28 patients, 19 female and 9 male, of mean age 56 years +/- 12. Previous antireflux procedures included 19 open and 12 laparoscopic antireflux operations. Symptoms were heartburn (90%), dysphagia (35%), and atypical symptoms (30%). The mean interval from antireflux procedure to revision was 13 months +/- 4.2. The mean DeMeester score was 78 +/- 32 (normal < 14.7). Eighteen patients (64%) had hiatal breakdown, 17 (60%) had wrap failure, 2 (7%) had slipped Nissen, 3 (11%) had paraesophageal hernias, and 1 (3%) had an excessively tight wrap. Twenty-five revisions were completed laparoscopically, while 3 patients required conversion to the open technique. Complications occurred in 9 of 17 (53%) patients failing previous open fundoplications and in 4 of 12 patients (33%) failing previous laparoscopic fundoplications, and included 15 gastrotomies and 1 esophagotomy, all repaired laparoscopically, 3 postoperative gastric leaks, and 4 pneumothoraces requiring tube thoracostomy. No deaths occurred. Median length of stay was 5 days (range 2-90 days). At a mean follow-up of 20 months +/- 17, 2 patients (7%) have failed revision of their fundoplications, with the rest of the patients (93%) being essentially asymptomatic. The results achieved with reoperative laparoscopic fundoplication are similar to those of primary laparoscopic fundoplications. Laparoscopic reoperations, particularly after primary open fundoplication, can be technically challenging and fraught with complications. Copyright 2001 Academic Press.
Holland, Heidrun; Ahnert, Peter; Koschny, Ronald; Kirsten, Holger; Bauer, Manfred; Schober, Ralf; Meixensberger, Jürgen; Fritzsch, Dominik; Krupp, Wolfgang
2012-06-15
Astrocytomas represent the largest and most common subgroup of brain tumors. Anaplastic astrocytoma (WHO grade III) may arise from low-grade diffuse astrocytoma (WHO grade II) or as a primary tumor without any precursor lesion. Comprehensive analyses of anaplastic astrocytomas combining both cytogenetic and molecular cytogenetic techniques are rare. Therefore, we analyzed genomic alterations of five anaplastic astrocytomas using high-density single nucleotide polymorphism arrays combined with GTG-banding and FISH techniques. By cytogenetics, we found 169 structural chromosomal aberrations, most frequently involving chromosomes 1, 2, 3, 4, 10, and 12, including two previously undescribed alterations: a nonreciprocal translocation t(3;11)(p12;q13) and an interstitial chromosomal deletion del(2)(q21q31). Additionally, applying high-density single nucleotide polymorphism arrays, we detected previously undocumented loss of heterozygosity (LOH) without copy number changes in 4/5 anaplastic astrocytomas on chromosome regions 5q11.2, 5q22.1, 6q21, 7q21.11, 7q31.33, 8q11.22, 14q21.1, 17q21.31, and 17q22, suggesting segmental uniparental disomy (UPD). UPDs are currently considered to play an important role in the initiation and progression of different malignancies. The significance of the previously undescribed genetic alterations in anaplastic astrocytomas presented here needs to be confirmed in a larger series. Copyright © 2012 Elsevier GmbH. All rights reserved.
Morelato, Marie; Beavis, Alison; Tahtouh, Mark; Ribaux, Olivier; Kirkbride, Paul; Roux, Claude
2014-01-01
Traditional forensic drug profiling involves numerous analytical techniques, and the whole process is typically costly and may be time consuming. The aim of this study was to investigate the possibility of prioritising the techniques utilised at the Australian Federal Police (AFP) for the chemical profiling of 3,4-methylenedioxymethylamphetamine (MDMA). The outcome would provide the AFP with the ability to obtain more timely and valuable results that could be used from an intelligence perspective. Correlation coefficients were used to obtain a degree of similarity between a population of linked samples (within seizures) and a population of unlinked samples (between different seizures), and discrimination between the two populations was ultimately achieved. The results showed that gas chromatography-mass spectrometry (GC-MS) was well suited as a single technique to detect links between seizures and could be prioritised for operational intelligence purposes. Furthermore, the method was applied to seizures known or suspected (through their case information) to be linked to each other to assess the chemical similarity between samples. It was found that half of the seizures previously linked by the case number were also linked by the chemical profile. This procedure was also able to highlight links between cases that were previously unsuspected and retrospectively confirmed by circumstantial information. The findings are finally discussed in the broader forensic intelligence context, with a focus on how they could be successfully incorporated into investigations and into an intelligence-led policing perspective in order to understand trafficking markets. © 2014.
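A hedged sketch of the core similarity scoring, assuming pre-aligned GC-MS peak-area vectors; the threshold value is illustrative and would in practice be set where the within-seizure and between-seizure score distributions separate best:

```python
# Sketch: scoring chemical-profile similarity with a correlation coefficient
# and deciding "linked" vs. "unlinked" by a threshold.
import numpy as np

def similarity(profile_a, profile_b):
    """Pearson correlation between two aligned GC-MS peak-area vectors."""
    return np.corrcoef(profile_a, profile_b)[0, 1]

def linked(profile_a, profile_b, threshold=0.95):
    # The threshold is an assumption; it is chosen where the score
    # distributions of known linked and unlinked pairs discriminate best.
    return similarity(profile_a, profile_b) >= threshold
```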
NASA Astrophysics Data System (ADS)
Morton, Kenneth D., Jr.; Torrione, Peter A.; Collins, Leslie
2011-05-01
Laser-induced breakdown spectroscopy (LIBS) can provide rapid, minimally destructive chemical analysis of substances with the benefit of little to no sample preparation. Therefore, LIBS is a viable technology for the detection of substances of interest in near-real-time fielded remote sensing scenarios. Of particular interest to military and security operations is the detection of explosive residues on various surfaces. It has been demonstrated that LIBS is capable of detecting such residues; however, the surface or substrate on which the residue is present can alter the observed spectra. Standard chemometric techniques such as principal components analysis and partial least squares discriminant analysis have previously been applied to explosive residue detection; however, the classification techniques developed on such data perform best against residue/substrate pairs that were included in model training and do not perform well when the residue/substrate pairs are not in the training set. Specifically, residues in the training set may not be correctly detected if they are presented on a previously unseen substrate. In this work, we explicitly model LIBS spectra resulting from the residue and substrate to attempt to separate the response from each of the two components. This separation process is performed jointly with classifier design to ensure that the resulting classifier is able to detect residues of interest without being confused by variations in the substrates. We demonstrate that the proposed classification algorithm provides improved robustness to variations in substrate compared to standard chemometric techniques for residue detection.
Strong Langmuir Turbulence and Four-Wave Mixing
NASA Astrophysics Data System (ADS)
Glanz, James
1991-02-01
The staircase expansion is a new mathematical technique for deriving reduced, nonlinear-PDE descriptions from the plasma-moment equations. Such descriptions incorporate only the most significant linear and nonlinear terms of more complex systems. The technique is used to derive a set of Dawson-Zakharov or "master" equations, which unify and generalize previous work and show the limitations of models commonly used to describe nonlinear plasma waves. Fundamentally new wave-evolution equations are derived that admit of exact nonlinear solutions (solitary waves). Analytic calculations illustrate the competition between well-known effects of self-focusing, which require coupling to ion motion, and pure-electron nonlinearities, which are shown to be especially important in curved geometries. Also presented is an N -moment hydrodynamic model derived from the Vlasov equation. In this connection, the staircase expansion is shown to remain useful for all values of N >= 3. The relevance of the present work to nonlocally truncated hierarchies, which more accurately model dissipation, is briefly discussed. Finally, the general formalism is applied to the problem of electromagnetic emission from counterpropagating Langmuir pumps. It is found that previous treatments have neglected order-unity effects that increase the emission significantly. Detailed numerical results are presented to support these conclusions. The staircase expansion--so called because of its appearance when written out--should be effective whenever the largest contribution to the nonlinear wave remains "close" to some given frequency. Thus the technique should have application to studies of wake-field acceleration schemes and anomalous damping of plasma waves.
The effect of hydrostatic vs. shock pressure treatment on plant seeds
NASA Astrophysics Data System (ADS)
Mustey, Adrian; Leighs, James; Appleby-Thomas, Gareth; Wood, David; Hazael, Rachael; McMillan, Paul; Hazell, Paul
2013-06-01
The hydrostatic pressure and shock response of plant seeds have both been previously investigated (primarily driven by an interest in reducing bacterial contamination of crops and the theory of panspermia respectively). However, comparisons have not previously been made between these two methods of applying pressure to plant seeds. Here such a comparison has been undertaken based on the premise that any correlations in such data may provide a route to inform understanding of damage mechanisms in the seeds under test. In this work two varieties of plant seeds were subjected to hydrostatic pressure via a non-end-loaded piston cylinder set-up and shock compression via employment of a 50-mm bore, single stage gas gun using the flyer-plate technique. Results from germination tests of recovered seed samples have been compared and contrasted, and initial conclusions made regarding causes of trends in the resultant data-set.
Optimal cost design of water distribution networks using a decomposition approach
NASA Astrophysics Data System (ADS)
Lee, Ho Min; Yoo, Do Guen; Sadollah, Ali; Kim, Joong Hoon
2016-12-01
Water distribution network decomposition, which is an engineering approach, is adopted to increase the efficiency of obtaining the optimal cost design of a water distribution network using an optimization algorithm. This study applied the source tracing tool in EPANET, which is a hydraulic and water quality analysis model, to the decomposition of a network to improve the efficiency of the optimal design process. The proposed approach was tested by carrying out the optimal cost design of two water distribution networks, and the results were compared with other optimal cost designs derived from previously proposed optimization algorithms. The proposed decomposition approach using the source tracing technique enables the efficient decomposition of an actual large-scale network, and the results can be combined with the optimal cost design process using an optimization algorithm. This proves that the final design in this study is better than those obtained with other previously proposed optimization algorithms.
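As a rough illustration of the decomposition step, the sketch below partitions an abstract pipe graph into subnetworks using networkx connected components after removing boundary nodes; this is a generic stand-in, since the study itself derives the partition from EPANET's source tracing rather than plain graph connectivity:

```python
# Generic stand-in for decomposing a water distribution network into
# subnetworks that can be cost-optimized independently and recombined.
import networkx as nx

def decompose(pipes, boundary_nodes):
    """pipes: iterable of (node_u, node_v) pipe connections."""
    g = nx.Graph(pipes)
    g.remove_nodes_from(boundary_nodes)   # cut where the trace splits demand
    return [sorted(c) for c in nx.connected_components(g)]

# Hypothetical toy network: reservoir "r" feeding branches through node "a".
subnets = decompose([("r", "a"), ("a", "b"), ("a", "c"), ("c", "d")], ["a"])
# Each subnetwork is then sized by the optimizer and the designs recombined.
```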
NASA Astrophysics Data System (ADS)
Michalicek, M. Adrian; Comtois, John H.; Schriner, Heather K.
1998-04-01
This paper describes the design and characterization of several types of micromirror devices, including process capabilities, device modeling, and test data resulting in deflection versus applied potential curves and surface contour measurements. These devices are the first to be fabricated in the state-of-the-art four-level planarized polysilicon process available at Sandia National Laboratories, known as the Sandia Ultra-planar Multi-level MEMS Technology. This enabling process permits the development of micromirror devices with near-ideal characteristics which have previously been unrealizable in standard three-layer polysilicon processes. This paper describes such characteristics as elevated address electrodes, various address wiring techniques, planarized mirror surfaces using Chemical Mechanical Polishing, unique post-process metallization, and the best active surface area to date.
Implications of memory modulation for post-traumatic stress and fear disorders
Parsons, Ryan G; Ressler, Kerry J
2013-01-01
Post-traumatic stress disorder, panic disorder and phobia manifest in ways that are consistent with an uncontrollable state of fear. Their development involves heredity, previous sensitizing experiences, association of aversive events with previously neutral stimuli, and an inability to inhibit or extinguish fear after it has become chronic and disabling. We highlight recent progress in fear learning and memory, differential susceptibility to disorders of fear, and how these findings are being applied to the understanding, treatment and possible prevention of fear disorders. Promising advances are being translated from basic science to the clinic, including approaches to distinguish risk versus resilience before trauma exposure, methods to interfere with fear development during memory consolidation after a trauma, and techniques to inhibit fear reconsolidation and to enhance extinction of chronic fear. It is hoped that this new knowledge will translate to more successful, neuroscientifically informed and rationally designed approaches to disorders of fear regulation. PMID:23354388
Bilateral Distal Femoral Nailing in a Rare Symmetrical Periprosthetic Knee Fracture
Carvalho, Marcos; Fonseca, Ruben; Simões, Pedro; Bahute, André; Mendonça, António; Fonseca, Fernando
2014-01-01
The authors report a case of a 78-year-old polytrauma patient, with severe thoracic trauma and bilateral symmetrical periprosthetic femoral fractures after a violent car accident. After the primary survey, with the thoracic trauma stabilized, neurovascular lesions excluded, and provisional immobilization applied, both fractures were classified as OTA: 33-A3, Rorabeck Type II, and closed reduction and internal fixation with distal femoral nails were performed. At 5 months of follow-up, the patient was able to walk with crutches and clear radiologic signs of fracture consolidation could be seen. At 24 months, the patient walked without any walking aid and had recovered her previous functional status. This surgical option allowed the authors to achieve relative stability using an intramedullary technique, preserving fracture hematoma in an osteopenic patient, and was found to be successful in recovering the patient's previous functional status and satisfaction after major trauma. PMID:25580332
Magrin, Gabriel Leonardo; Sigua-Rodriguez, Eder Alberto; Goulart, Douglas Rangel; Asprino, Luciana
2015-01-01
Piezosurgery has been used with increasing frequency and applicability by health professionals, especially those who deal with dental implants. The concept of piezoelectricity emerged in the nineteenth century, but it was first applied in oral surgery in 1988 by Tomaso Vercellotti. It consists of an ultrasonic device able to cut mineralized bone tissue without injuring the adjacent soft tissue. It also has several advantages when compared to conventional techniques with drills and saws, such as the production of a precise, clean and low-bleed bone cut that shows positive biological results. In dental implant surgery, it has been used for maxillary sinus lifting, removal of bone blocks, distraction osteogenesis, lateralization of the inferior alveolar nerve, split crest of the alveolar ridge and even for dental implant placement. The purpose of this paper is to discuss the use of piezosurgery in bone augmentation procedures performed prior to dental implant placement. PMID:26966469
Scale-up of ecological experiments: Density variation in the mobile bivalve Macomona liliana
Schneider, David C.; Walters, R.; Thrush, S.; Dayton, P.
1997-01-01
At present the problem of scaling up from controlled experiments (necessarily at a small spatial scale) to questions of regional or global importance is perhaps the most pressing issue in ecology. Most of the proposed techniques recommend iterative cycling between theory and experiment. We present a graphical technique that facilitates this cycling by allowing the scope of experiments, surveys, and natural history observations to be compared to the scope of models and theory. We apply the scope analysis to the problem of understanding the population dynamics of a bivalve exposed to environmental stress at the scale of a harbour. Previous lab and field experiments were found not to be 1:1 scale models of harbour-wide processes. Scope analysis allowed small scale experiments to be linked to larger scale surveys and to a spatially explicit model of population dynamics.
Graphics Processing Unit Assisted Thermographic Compositing
NASA Technical Reports Server (NTRS)
Ragasa, Scott; Russell, Samuel S.
2012-01-01
Objective: Develop a software application utilizing high-performance computing techniques, including general-purpose graphics processing units (GPGPUs), for the analysis and visualization of large thermographic data sets. Over the past several years, an increasing effort among scientists and engineers to utilize graphics processing units (GPUs) in a more general-purpose fashion is allowing for previously unobtainable levels of computation by individual workstations. As data sets grow, the methods to work with them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU, which yield significant increases in performance. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Image processing is one area where GPUs are being used to greatly increase the performance of certain analysis and visualization techniques.
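The data-parallel pattern described can be sketched with CuPy, a drop-in NumPy replacement for NVIDIA GPUs; the frame stack and the compositing rule below are placeholder assumptions, not the application's actual algorithm:

```python
# Sketch: the same elementwise composite computed on CPU (NumPy) and
# GPU (CuPy, requires a CUDA-capable GPU) to illustrate data parallelism.
import numpy as np
import cupy as cp

frames_cpu = np.random.rand(100, 512, 512).astype(np.float32)

# Each output pixel depends only on its own pixel history: ideal for GPUs.
composite_cpu = frames_cpu.max(axis=0) - frames_cpu.mean(axis=0)

frames_gpu = cp.asarray(frames_cpu)            # host -> device copy
composite_gpu = frames_gpu.max(axis=0) - frames_gpu.mean(axis=0)
result = cp.asnumpy(composite_gpu)             # device -> host copy back
```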
Detonation Properties Measurements for Inorganic Explosives
NASA Astrophysics Data System (ADS)
Morgan, Brent A.; Lopez, Angel
2005-03-01
Many commonly available explosive materials have never been quantitatively or theoretically characterized in a manner suitable for use in analytical models. This includes inorganic explosive materials used in spacecraft ordnance, such as zirconium potassium perchlorate (ZPP). Lack of empirical information about these materials impedes the development of computational techniques. We have applied high fidelity measurement techniques to experimentally determine the pressure and velocity characteristics of ZPP, a previously uncharacterized explosive material. Advances in measurement technology now permit the use of very small quantities of material, thus yielding a significant reduction in the cost of conducting these experiments. An empirical determination of the explosive behavior of ZPP derived a Hugoniot for ZPP with an approximate particle velocity (uo) of 1.0 km/s. This result compares favorably with the numerical calculations from the CHEETAH thermochemical code, which predicts uo of approximately 1.2 km/s under ideal conditions.
Modal parameter identification using the log decrement method and band-pass filters
NASA Astrophysics Data System (ADS)
Liao, Yabin; Wells, Valana
2011-10-01
This paper presents a time-domain technique for identifying modal parameters of test specimens based on the log-decrement method. For lightly damped multi-degree-of-freedom or continuous systems, the conventional method is usually restricted to identification of fundamental-mode parameters only. Implementation of band-pass filters makes it possible for the proposed technique to extract modal information for higher modes. The method has been applied to a polymethyl methacrylate (PMMA) beam for complex modulus identification in the frequency range 10-1100 Hz. Results compare well with those obtained using the least squares method, and with those previously published in the literature. The accuracy of the proposed method has been further verified by experiments performed on a QuietSteel specimen with very low damping. The method is simple and fast. It can be used for a quick estimation of the modal parameters, or as a complementary approach for validation purposes.
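A minimal sketch of the proposed procedure, assuming SciPy for filtering and peak picking; the filter order and the use of successive positive peaks are illustrative choices:

```python
# Sketch: isolate one mode with a band-pass filter, then estimate the
# damping ratio from the logarithmic decrement of the filtered free decay.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def modal_damping(x, fs, f_lo, f_hi):
    b, a = butter(4, [f_lo, f_hi], btype="band", fs=fs)
    y = filtfilt(b, a, x)                   # approximately single-mode decay
    peaks, _ = find_peaks(y)
    amps = y[peaks]
    n = len(amps) - 1                       # number of cycles spanned
    delta = np.log(amps[0] / amps[-1]) / n  # log decrement per cycle
    zeta = delta / np.sqrt(4 * np.pi ** 2 + delta ** 2)
    fn = fs * n / (peaks[-1] - peaks[0])    # n cycles over the sample span
    return fn, zeta                          # natural frequency, damping ratio
```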
Postselection technique for quantum channels with applications to quantum cryptography.
Christandl, Matthias; König, Robert; Renner, Renato
2009-01-16
We propose a general method for studying properties of quantum channels acting on an n-partite system, whose action is invariant under permutations of the subsystems. Our main result is that, in order to prove that a certain property holds for an arbitrary input, it is sufficient to consider the case where the input is a particular de Finetti-type state, i.e., a state which consists of n identical and independent copies of an (unknown) state on a single subsystem. Our technique can be applied to the analysis of information-theoretic problems. For example, in quantum cryptography, we get a simple proof for the fact that security of a discrete-variable quantum key distribution protocol against collective attacks implies security of the protocol against the most general attacks. The resulting security bounds are tighter than previously known bounds obtained with help of the exponential de Finetti theorem.
Simulating and assessing boson sampling experiments with phase-space representations
NASA Astrophysics Data System (ADS)
Opanchuk, Bogdan; Rosales-Zárate, Laura; Reid, Margaret D.; Drummond, Peter D.
2018-04-01
The search for new, application-specific quantum computers designed to outperform any classical computer is driven by the ending of Moore's law and the quantum advantages potentially obtainable. Photonic networks are promising examples, with experimental demonstrations and potential for obtaining a quantum computer to solve problems believed classically impossible. This introduces a challenge: how does one design or understand such photonic networks? One must be able to calculate observables using general methods capable of treating arbitrary inputs, dissipation, and noise. We develop complex phase-space software for simulating these photonic networks, and apply this to boson sampling experiments. Our techniques give sampling errors orders of magnitude lower than experimental correlation measurements for the same number of samples. We show that these techniques remove systematic errors in previous algorithms for estimating correlations, with large improvements in errors in some cases. In addition, we obtain a scalable channel-combination strategy for assessment of boson sampling devices.
NASA Technical Reports Server (NTRS)
Bamman, M. M.; Clarke, M. S.; Talmadge, R. J.; Feeback, D. L.
1999-01-01
Talmadge and Roy (J. Appl. Physiol. 1993, 75, 2337-2340) previously established a sodium dodecyl sulfate - polyacrylamide gel electrophoresis (SDS-PAGE) protocol for separating all four rat skeletal muscle myosin heavy chain (MHC) isoforms (MHC I, IIa, IIx, IIb); however, when applied to human muscle, the type II MHC isoforms (IIa, IIx) are not clearly distinguished. In this brief paper we describe a modification of the SDS-PAGE protocol which yields distinct and consistent separation of all three adult human MHC isoforms (MHC I, IIa, IIx) in a minigel system. MHC specificity of each band was confirmed by Western blot using three monoclonal IgG antibodies (mAbs) immunoreactive against MHC I (mAb MHCs, Novocastra Laboratories), MHC I+IIa (mAb BF-35), and MHC IIa+IIx (mAb SC-71). The results provide a valuable SDS-PAGE minigel technique for separating MHC isoforms in human muscle without the difficult task of casting gradient gels.
The Three-D Flow Structures of Gas and Liquid Generated by a Spreading Flame Over Liquid Fuel
NASA Technical Reports Server (NTRS)
Tashtoush, G.; Ito, A.; Konishi, T.; Narumi, A.; Saito, K.; Cremers, C. J.
1999-01-01
We developed a new experimental technique, combined laser sheet particle tracking (LSPT) and laser holographic interferometry (HI), which is capable of measuring the transient behavior of three-dimensional structures of temperature and flow in both the liquid and gas phases. We applied this technique to a pulsating flame spreading over n-butanol. We found a twin vortex flow on the liquid surface, another deep in the liquid a few mm below the surface, and a twin vortex flow in the gas phase. The first twin vortex flow at the liquid surface was observed previously by NASA Lewis researchers, while the last two observations are new. These observations revealed that the convective flow structure ahead of the flame leading edge is three-dimensional in nature and that the pulsating spread is controlled by the convective flow of both liquid and gas.
Modal identification of structures from the responses and random decrement signatures
NASA Technical Reports Server (NTRS)
Ibrahim, S. R.; Goglia, G. L.
1977-01-01
The theory and application of a method which utilizes the free response of a structure to determine its vibration parameters is described. The time-domain free response is digitized and used in a digital computer program to determine the number of modes excited, the natural frequencies, the damping factors, and the modal vectors. The technique is applied to a complex generalized payload model previously tested using the sine sweep method and analyzed by NASTRAN. Ten modes of the payload model are identified. In case a free decay response is not readily available, an algorithm is developed to obtain the free responses of a structure from its random responses, due to some unknown or known random input or inputs, using the random decrement technique without changing the time correlation between signals. The algorithm is tested using random responses from a generalized payload model and from the space shuttle model.
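The random decrement step can be sketched as follows; the level-crossing trigger condition and segment length are illustrative choices, and real implementations average many more segments with care for overlap:

```python
# Sketch of the random decrement technique: average segments of a random
# response that start whenever the signal crosses a trigger level with
# positive slope; the random parts average out, leaving an estimate of
# the free-decay signature.
import numpy as np

def randomdec(x, trigger, seg_len):
    starts = np.where((x[:-1] < trigger) & (x[1:] >= trigger))[0]
    segs = [x[s:s + seg_len] for s in starts if s + seg_len <= len(x)]
    return np.mean(segs, axis=0)   # random-decrement signature

# The signature can then be fed to a time-domain modal identifier exactly
# as a measured free-decay response would be.
```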
Aboutaleb, Hamdy
2014-12-01
Penile amputation is a rare catastrophe and a serious complication of circumcision. Reconstruction of the glans penis may be indicated following amputation. Our report discusses a novel technique for reconfiguration of an amputated glans penis 1 year after a complicated circumcision. A 2-year-old male infant presented to us with glans penis amputation that had occurred during circumcision 1 year previously. The parents complained of severe meatal stenosis with disfigurement of the penis. Penis length was 3 cm. Complete penile degloving was performed. The distal part of the remaining penis was prepared by removing fibrous tissue. A buccal mucosal graft was applied to the distal part of the penis associated with meatotomy. The use of a buccal mucosal graft is a successful and simple procedure with acceptable cosmetic and functional results for late reconfiguration of the glans penis after amputation when penile size is suitable. PMID:25512820
Probabilistic retinal vessel segmentation
NASA Astrophysics Data System (ADS)
Wu, Chang-Hua; Agam, Gady
2007-03-01
Optic fundus assessment is widely used for diagnosing vascular and non-vascular pathology. Inspection of the retinal vasculature may reveal hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. Due to various imaging conditions retinal images may be degraded. Consequently, the enhancement of such images and vessels in them is an important task with direct clinical applications. We propose a novel technique for vessel enhancement in retinal images that is capable of enhancing vessel junctions in addition to linear vessel segments. This is an extension of vessel filters we have previously developed for vessel enhancement in thoracic CT scans. The proposed approach is based on probabilistic models which can discern vessels and junctions. Evaluation shows the proposed filter is better than several known techniques and is comparable to the state of the art when evaluated on a standard dataset. A ridge-based vessel tracking process is applied on the enhanced image to demonstrate the effectiveness of the enhancement filter.
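As a point of comparison, a standard Hessian-based vesselness filter is available in scikit-image; this baseline illustrates the kind of linear-segment enhancement described (the paper's probabilistic, junction-aware filter itself is not a library function), and the input file name is hypothetical:

```python
# Sketch: Frangi vesselness enhancement of a retinal image as a baseline.
from skimage import io, img_as_float
from skimage.filters import frangi

img = img_as_float(io.imread("fundus_green_channel.png"))  # hypothetical file
enhanced = frangi(img, sigmas=(1, 2, 3, 4), black_ridges=True)
# 'enhanced' scores tubular structures highly; junctions are a known weak
# point of such filters, which is precisely what the proposed method targets.
```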
Theta-burst microstimulation in the human entorhinal area improves memory specificity.
Titiz, Ali S; Hill, Michael R H; Mankin, Emily A; M Aghajan, Zahra; Eliashiv, Dawn; Tchemodanov, Natalia; Maoz, Uri; Stern, John; Tran, Michelle E; Schuette, Peter; Behnke, Eric; Suthana, Nanthia A; Fried, Itzhak
2017-10-24
The hippocampus is critical for episodic memory, and synaptic changes induced by long-term potentiation (LTP) are thought to underlie memory formation. In rodents, hippocampal LTP may be induced through electrical stimulation of the perforant path. To test whether similar techniques could improve episodic memory in humans, we implemented a microstimulation technique that allowed delivery of low-current electrical stimulation via 100-μm-diameter microelectrodes. As thirteen neurosurgical patients performed a person recognition task, microstimulation was applied in a theta-burst pattern, shown to optimally induce LTP. Microstimulation in the right entorhinal area during learning significantly improved subsequent memory specificity for novel portraits; participants were able both to recognize previously viewed photos and to reject similar lures. These results suggest that microstimulation with physiologic-level currents, a radical departure from commonly used deep brain stimulation protocols, is sufficient to modulate human behavior and provides an avenue for refined interrogation of the circuits involved in human memory.
Partial molar enthalpies and reaction enthalpies from equilibrium molecular dynamics simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schnell, Sondre K.; Department of Chemical and Biomolecular Engineering, University of California, Berkeley, California 94720; Department of Chemistry, Faculty of Natural Science and Technology, Norwegian University of Science and Technology, 4791 Trondheim
2014-10-14
We present a new molecular simulation technique for determining partial molar enthalpies in mixtures of gases and liquids from single simulations, without relying on particle insertions, deletions, or identity changes. The method can also be applied to systems with chemical reactions. We demonstrate our method for binary mixtures of Weeks-Chandler-Andersen particles by comparing with conventional simulation techniques, as well as for a simple model that mimics a chemical reaction. The method considers small subsystems inside a large reservoir (i.e., the simulation box), and uses the construction of Hill to compute properties in the thermodynamic limit from small-scale fluctuations. Results obtained with the new method are in excellent agreement with those from previous methods. Especially for modeling chemical reactions, our method can be a valuable tool for determining reaction enthalpies directly from a single MD simulation.
Towards metering tap water by Lorentz force velocimetry
NASA Astrophysics Data System (ADS)
Vasilyan, Suren; Ebert, Reschad; Weidner, Markus; Rivero, Michel; Halbedel, Bernd; Resagk, Christian; Fröhlich, Thomas
2015-11-01
In this paper, we present enhanced flow rate measurement applying the contactless Lorentz force velocimetry (LFV) technique. In particular, we show that LFV is a feasible technique for metering the flow rate of salt water in a rectangular channel. Measurements of the Lorentz force as a function of the flow rate are presented for different electrical conductivities of the salt water. The smallest value of conductivity investigated is 0.06 S·m⁻¹, which corresponds to the typical value for tap water. In comparison with previous results, the performance of LFV is improved by approximately 2 orders of magnitude by means of a high-precision differential force measurement setup. Furthermore, the sensitivity curve and the calibration factor of the flowmeter are provided, based on extensive measurements for flow velocities ranging from 0.2 to 2.5 m·s⁻¹ and conductivities ranging from 0.06 to 10 S·m⁻¹.
[Legal aspects of post-mortem radiology in the Netherlands].
Venderink, W; Dute, J C J
2016-01-01
In the Netherlands, the application of post-mortem radiology (virtual autopsy) is on the rise. Contrary to conventional autopsy, with post-mortem radiology the body remains intact. There is uncertainty concerning the legal admissibility of post-mortem radiology, since the Dutch Corpse Disposal Act does not contain any specific regulations for this technique. Autopsy and post-mortem radiology differ significantly from a technical aspect, but these differences do not have far-reaching legal consequences from a legal perspective. Even though the body remains intact during post-mortem radiology, the bodily integrity of a deceased person is breached if it would be applied without previously obtained consent. This permission can only be obtained after the relatives are fully informed about the proposed activity. In this respect, it is not relevant which technique is used, be it post-mortem radiology or autopsy. Therefore, the other legal conditions for post-mortem radiology are essentially identical to those for autopsy.
Stress-Constrained Structural Topology Optimization with Design-Dependent Loads
NASA Astrophysics Data System (ADS)
Lee, Edmund
Topology optimization is commonly used to distribute a given amount of material to obtain the stiffest structure, with predefined fixed loads. The present work investigates the result of applying stress constraints to topology optimization for problems with design-dependent loading, such as self-weight and pressure. In order to apply pressure loading, a material boundary identification scheme is proposed, iteratively connecting points of equal density. In previous research, design-dependent loading problems have been limited to compliance minimization. The present study employs a more practical approach by minimizing mass subject to failure constraints, and uses a stress relaxation technique to avoid stress constraint singularities. The results show that these design-dependent loading problems may converge to a local minimum when stress constraints are enforced. Comparisons between compliance minimization solutions and stress-constrained solutions are also given. The resulting topologies of these two solutions are usually vastly different, demonstrating the need for stress-constrained topology optimization.
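One common way to realize the stress-relaxation step is the qp-approach; the abstract does not spell out which scheme is used, so the following is illustrative of the idea only:

```latex
% Mass minimization with a qp-relaxed von Mises stress constraint:
% stiffness is penalized as E_e = \rho_e^p E_0 while the constraint uses
% an exponent q < p, so the relaxed stress vanishes as \rho_e -> 0 and
% the singular optima become reachable.
\min_{\boldsymbol{\rho}} \; \sum_e \rho_e\, v_e
\quad \text{s.t.} \quad
\frac{\rho_e^{\,q}\, \sigma_e^{vM}(\boldsymbol{\rho})}{\sigma_{\mathrm{allow}}} \le 1,
\qquad 0 < q < p .
```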
A low-frequency near-field interferometric-TOA 3-D Lightning Mapping Array
NASA Astrophysics Data System (ADS)
Lyu, Fanchao; Cummer, Steven A.; Solanki, Rahulkumar; Weinert, Joel; McTague, Lindsay; Katko, Alex; Barrett, John; Zigoneanu, Lucian; Xie, Yangbo; Wang, Wenqi
2014-11-01
We report on the development of an easily deployable LF near-field interferometric-time of arrival (TOA) 3-D Lightning Mapping Array applied to imaging of entire lightning flashes. An interferometric cross-correlation technique is applied in our system to compute windowed two-sensor time differences with submicrosecond time resolution before TOA is used for source location. Compared to previously reported LF lightning location systems, our system captures many more LF sources. This is due mainly to the improved mapping of continuous lightning processes by using this type of hybrid interferometry/TOA processing method. We show with five station measurements that the array detects and maps different lightning processes, such as stepped and dart leaders, during both in-cloud and cloud-to-ground flashes. Lightning images mapped by our LF system are remarkably similar to those created by VHF mapping systems, which may suggest some special links between LF and VHF emission during lightning processes.
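The windowed two-sensor time-difference step can be sketched as follows, assuming SciPy; parabolic interpolation around the correlation peak is one standard way to reach sub-sample (here, sub-microsecond) resolution:

```python
# Sketch: time-difference estimation between two sensors by windowed
# cross-correlation with parabolic peak interpolation, the step that
# precedes TOA source location. Window handling is simplified.
import numpy as np
from scipy.signal import correlate

def time_difference(a, b, fs):
    c = correlate(a, b, mode="full")
    k = int(np.argmax(c))
    # Parabolic interpolation around the peak for sub-sample precision.
    y0, y1, y2 = c[k - 1], c[k], c[k + 1]
    frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    lag = (k - (len(b) - 1)) + frac
    return lag / fs      # seconds; positive when a lags b
```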
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foroughi, Leila M.; Kang, You-Na; Matzger, Adam J.
Obtaining single crystals for X-ray diffraction remains a major bottleneck in structural biology; when existing crystal growth methods fail to yield suitable crystals, often the target rather than the crystallization approach is reconsidered. Here we demonstrate that polymer-induced heteronucleation, a powerful technique that has been used for small-molecule crystal form discovery, can be applied to protein crystallization by optimizing the heteronucleant composition and crystallization formats for crystallizing a wide range of protein targets. Applying these advances to two benchmark proteins resulted in dramatically increased crystal size, enabling structure determination for a half-century-old form of bovine liver catalase (BLC) that had previously only been characterized by electron microscopy, and the discovery of two new forms of concanavalin A (conA) from the jack bean, along with structural elucidation of one of these forms.
Hu, Xiao Liang; Ciaglia, Riccardo; Pietrucci, Fabio; Gallet, Grégoire A; Andreoni, Wanda
2014-06-19
We introduce a new ab initio derived reactive potential for the simulation of CdTe within density functional theory (DFT) and apply it to calculate both static and dynamical properties of a number of systems (bulk solid, defective structures, liquid, surfaces) at finite temperature. In particular, we also consider cases with low sulfur concentration (CdTe:S). The analysis of DFT and classical molecular dynamics (MD) simulations performed with the same protocol leads to stringent performance tests and to a detailed comparison of the two schemes. Metadynamics techniques are used to empower both Car-Parrinello and classical molecular dynamics for the simulation of activated processes. For the latter, we consider surface reconstruction and sulfur diffusion in the bulk. The same procedures are applied using previously proposed force fields for CdTe and CdTeS materials, thus allowing for a detailed comparison of the various schemes.
A Bibliometric Analysis on Cancer Population Science with Topic Modeling.
Li, Ding-Cheng; Rastegar-Mojarad, Majid; Okamoto, Janet; Liu, Hongfang; Leichow, Scott
2015-01-01
Bibliometric analysis is a research method used in library and information science to evaluate research performance. It applies quantitative and statistical analyses to describe patterns observed in a set of publications and can help identify previous, current, and future research trends or focus. To better guide our institutional strategic plan in cancer population science, we conducted a bibliometric analysis of publications by investigators currently funded by either the Division of Cancer Prevention (DCP) or the Division of Cancer Control and Population Sciences (DCCPS) at the National Cancer Institute. We applied two topic modeling techniques: author topic modeling (AT) and dynamic topic modeling (DTM). Our initial results show that AT can reasonably address questions about investigators' research interests and about the distribution and popularity of research topics. In complement, DTM can capture the evolving trend of each topic by displaying changes in the proportions of key words, which is consistent with changes in MeSH headings.
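A hedged sketch of the base topic-modeling step using scikit-learn's plain LDA; author topic modeling and dynamic topic modeling extend this base model and need dedicated implementations, and the corpus below is a placeholder:

```python
# Sketch: fitting a plain LDA topic model to publication abstracts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = ["placeholder corpus of funded investigators' abstracts"]  # stand-in
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=10, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-8:][::-1]]
    print(f"topic {k}: {', '.join(top)}")   # top words characterize each topic
```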
3D super-resolution imaging with blinking quantum dots
Wang, Yong; Fruhwirth, Gilbert; Cai, En; Ng, Tony; Selvin, Paul R.
2013-01-01
Quantum dots are promising candidates for single-molecule imaging due to their exceptional photophysical properties, including their intense brightness and resistance to photobleaching. They are also notorious for their blinking. Here we report a novel way to take advantage of quantum dot blinking to develop an imaging technique in three dimensions with nanometric resolution. We first applied this method to simulated images of quantum dots, and then to quantum dots immobilized on microspheres. We achieved imaging resolutions (FWHM) of 8–17 nm in the x-y plane and 58 nm (on coverslip) or 81 nm (deep in solution) in the z-direction, approximately 3–7 times better than what has been achieved previously with quantum dots. This approach was applied to resolve the 3D distribution of epidermal growth factor receptor (EGFR) molecules at, and inside of, the plasma membrane of resting basal breast cancer cells. PMID:24093439
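The localization core common to such super-resolution methods can be sketched as a 2-D Gaussian fit to each emitter's image spot; this is illustrative only, and the defocus- or astigmatism-based z-estimation used for the third dimension is omitted:

```python
# Sketch: sub-pixel localization of a single emitter by least-squares
# fitting a 2-D Gaussian to its diffraction-limited spot.
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, x0, y0, s, amp, bg):
    x, y = coords
    g = amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * s ** 2)) + bg
    return g.ravel()

def localize(spot):
    ny, nx = spot.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    p0 = (nx / 2, ny / 2, 1.5, spot.max() - spot.min(), spot.min())
    popt, _ = curve_fit(gauss2d, (x, y), spot.ravel(), p0=p0)
    return popt[0], popt[1]    # emitter position, far below the pixel size
```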
Application of the Probabilistic Dynamic Synthesis Method to Realistic Structures
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ferri, Aldo A.
1998-01-01
The Probabilistic Dynamic Synthesis method is a technique for obtaining the statistics of a desired response engineering quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation. This modal information is much more comprehensive and easily measured than the "primitive" information. The probabilistic analysis is carried out using either response surface reliability methods or Monte Carlo simulation. In previous work, the feasibility of the PDS method applied to a simple seven degree-of-freedom spring-mass system was verified. In this paper, extensive issues involved with applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results from using the method are promising, especially when the lack of alternatives for obtaining quantitative output for probabilistic structures is considered.
Mergers of Black-Hole Binaries with Aligned Spins: Waveform Characteristics
NASA Technical Reports Server (NTRS)
Kelly, Bernard J.; Baker, John G.; vanMeter, James R.; Boggs, William D.; McWilliams, Sean T.; Centrella, Joan
2011-01-01
"We apply our gravitational-waveform analysis techniques, first presented in the context of nonspinning black holes of varying mass ratio [1], to the complementary case of equal-mass spinning black-hole binary systems. We find that, as with the nonspinning mergers, the dominant waveform modes phases evolve together in lock-step through inspiral and merger, supporting the previous model of the binary system as an adiabatically rigid rotator driving gravitational-wave emission - an implicit rotating source (IRS). We further apply the late-merger model for the rotational frequency introduced in [1], along with a new mode amplitude model appropriate for the dominant (2, plus or minus 2) modes. We demonstrate that this seven-parameter model performs well in matches with the original numerical waveform for system masses above - 150 solar mass, both when the parameters are freely fit, and when they are almost completely constrained by physical considerations."
Application of CFD to the analysis and design of high-speed inlets
NASA Technical Reports Server (NTRS)
Rose, William C.
1995-01-01
Over the past seven years, efforts under the present Grant have been aimed at being able to apply modern Computational Fluid Dynamics to the design of high-speed engine inlets. In this report, a review of previous design capabilities (prior to the advent of functioning CFD) was presented and the example of the NASA 'Mach 5 inlet' design was given as the premier example of the historical approach to inlet design. The philosophy used in the Mach 5 inlet design was carried forward in the present study, in which CFD was used to design a new Mach 10 inlet. An example of an inlet redesign was also shown. These latter efforts were carried out using today's state-of-the-art, full computational fluid dynamics codes applied in an iterative man-in-the-loop technique. The potential usefulness of an automated machine design capability using an optimizer code was also discussed.
Hydraulic containment: analytical and semi-analytical models for capture zone curve delineation
NASA Astrophysics Data System (ADS)
Christ, John A.; Goltz, Mark N.
2002-05-01
We present an efficient semi-analytical algorithm that uses complex potential theory and superposition to delineate the capture zone curves of extraction wells. This algorithm is more flexible than previously published techniques and allows the user to determine the capture zone for a number of arbitrarily positioned extraction wells pumping at different rates. The algorithm is applied to determine the capture zones and optimal well spacing of two wells pumping at different flow rates and positioned at various orientations to the direction of regional groundwater flow. The algorithm is also applied to determine capture zones for non-colinear three-well configurations as well as to determine optimal well spacing for up to six wells pumping at the same rate. We show that the optimal well spacing is found by minimizing the difference in the stream function evaluated at the stagnation points.
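A minimal sketch of the superposition underlying such algorithms, assuming uniform regional flow in the +x direction and extraction wells represented as line sinks with aquifer thickness folded into the pumping rates; sign conventions and the well list are illustrative:

```python
# Sketch: stream function for extraction wells in uniform regional flow by
# superposition of complex potentials; contouring psi through the stagnation
# points (zeros of the complex velocity) traces the capture zone curves.
import numpy as np

U = 1.0                                          # regional specific discharge
wells = [(0.0 + 0.5j, 2.0), (0.0 - 0.5j, 3.0)]   # (location z_k, pumping Q_k)

def omega(z):
    """Complex potential: uniform flow plus extraction-well sinks."""
    w = -U * z
    for zk, Qk in wells:
        w += (Qk / (2 * np.pi)) * np.log(z - zk)
    return w

def stream(x, y):
    return omega(x + 1j * y).imag    # psi; capture boundary = psi contour

def d_omega(z):
    """Complex velocity; its zeros are the stagnation points."""
    return -U + sum(Qk / (2 * np.pi * (z - zk)) for zk, Qk in wells)
```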
Gore, Sally A; Nordberg, Judith M; Palmer, Lisa A; Piorun, Mary E
2009-07-01
This study analyzed trends in research activity as represented in the published research in the leading peer-reviewed professional journal for health sciences librarianship. Research articles were identified from the Bulletin of the Medical Library Association and Journal of the Medical Library Association (1991-2007). Using content analysis and bibliometric techniques, data were collected for each article on the (1) subject, (2) research method, (3) analytical technique used, (4) number of authors, (5) number of citations, (6) first author affiliation, and (7) funding source. The results were compared to a previous study, covering the period 1966 to 1990, to identify changes over time. Of the 930 articles examined, 474 (51%) were identified as research articles. Survey (n = 174, 37.1%) was the most common methodology employed, quantitative descriptive statistics (n = 298, 63.5%) the most used analytical technique, and applied topics (n = 332, 70%) the most common type of subject studied. The majority of first authors were associated with an academic health sciences library (n = 264, 55.7%). Only 27.4% (n = 130) of studies identified a funding source. This study's findings demonstrate that progress is being made in health sciences librarianship research. There is, however, room for improvement in terms of research methodologies used, proportion of applied versus theoretical research, and elimination of barriers to conducting research for practicing librarians.
Acter, Thamina; Kim, Donghwi; Ahmed, Arif; Jin, Jang Mi; Yim, Un Hyuk; Shim, Won Joon; Kim, Young Hwan; Kim, Sunghwan
2016-05-01
This paper presents a detailed investigation of the feasibility of optimized positive and negative atmospheric pressure chemical ionization (APCI) mass spectrometry (MS) and atmospheric pressure photoionization (APPI) MS coupled to hydrogen-deuterium exchange (HDX) for structural assignment of diverse oxygen-containing compounds. The important parameters for optimization of HDX MS were characterized. The optimized techniques employed in the positive and negative modes showed satisfactory HDX product ions for the model compounds when dichloromethane and toluene were employed as a co-solvent in APCI- and APPI-HDX, respectively. The evaluation of the mass spectra obtained from 38 oxygen-containing compounds demonstrated that the extent of the HDX of the ions was structure-dependent. The combination of information provided by different ionization techniques could be used for better speciation of oxygen-containing compounds. For example, (+) APPI-HDX is sensitive to compounds with alcohol, ketone, or aldehyde substituents, while (-) APPI-HDX is sensitive to compounds with carboxylic functional groups. In addition, the compounds with alcohol can be distinguished from other compounds by the presence of exchanged peaks. The combined information was applied to study chemical compositions of degraded oils. The HDX pattern, double bond equivalent (DBE) distribution, and previously reported oxidation products were combined to predict structures of the compounds produced from oxidation of oil. Overall, this study shows that APCI- and APPI-HDX MS are useful experimental techniques that can be applied for the structural analysis of oxygen-containing compounds.
NASA Astrophysics Data System (ADS)
Hildebrandt, Mario; Dittmann, Jana
2015-03-01
The possibility of forging latent fingerprints at crime scenes has been known for a long time. Ever since, it has been stated that an expert is capable of recognizing the presence of multiple identical latent prints as an indicator of forgery. With the possibility of printing fingerprint patterns onto arbitrary surfaces using affordable inkjet printers equipped with artificial sweat, it is rather simple to create a multitude of fingerprints with slight variations to avoid raising any suspicion. Such artificially printed fingerprints are often hard to detect during the analysis procedure. Moreover, the visibility of particular detection properties might be decreased depending on the enhancement and acquisition technique utilized. In previous work, such detection properties were primarily used in combination with non-destructive high-resolution sensors and pattern recognition techniques to detect fingerprint forgeries. In this paper we apply Benford's Law in the spatial domain to differentiate between real latent fingerprints and printed fingerprints. This technique has been successfully applied in media forensics to detect image manipulations. We use the differences between Benford's Law and the distribution of the most significant digit of the intensity and topography data from a confocal laser scanning microscope as features for a pattern-recognition-based detection of printed fingerprints. Our evaluation, based on 3000 printed and 3000 latent print samples, shows a very good detection performance of up to 98.85% using WEKA's Bagging classifier in a 10-fold stratified cross-validation.
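The first-digit feature can be sketched as follows; comparing the empirical leading-digit histogram of sensor intensities against Benford's expected distribution yields a per-digit deviation vector that would feed the (separately trained) classifier:

```python
# Sketch: most-significant-digit histogram of image intensities compared
# against Benford's Law, P(d) = log10(1 + 1/d), as a forgery feature.
import numpy as np

BENFORD = np.log10(1 + 1 / np.arange(1, 10))

def first_digits(values):
    v = np.abs(values[values != 0]).astype(float)
    return (v / 10 ** np.floor(np.log10(v))).astype(int)   # leading digit 1..9

def benford_feature(image):
    d = first_digits(image.ravel())
    hist = np.bincount(d, minlength=10)[1:10] / d.size
    return hist - BENFORD   # per-digit deviation; input to the classifier
```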
Williams, J.H.; Paillet, Frederick L.
2002-01-01
Cross-borehole flowmeter pulse tests define subsurface connections between discrete fractures using short stress periods to monitor the propagation of the pulse through the flow system. This technique is an improvement over other cross-borehole techniques because measurements can be made in open boreholes without packers or previous identification of water-producing intervals. The method is based on the concept of monitoring the propagation of pulses rather than steady flow through the fracture network. In this method, a hydraulic stress is applied to a borehole connected to a single, permeable fracture, and the distribution of flow induced by that stress monitored in adjacent boreholes. The transient flow responses are compared to type curves computed for several different types of fracture connections. The shape of the transient flow response indicates the type of fracture connection, and the fit of the data to the type curve yields an estimate of its transmissivity and storage coefficient. The flowmeter pulse test technique was applied in fractured shale at a volatile-organic contaminant plume in Watervliet, New York. Flowmeter and other geophysical logs were used to identify permeable fractures in eight boreholes in and near the contaminant plume using single-borehole flow measurements. Flowmeter cross-hole pulse tests were used to identify connections between fractures detected in the boreholes. The results indicated a permeable fracture network connecting many of the individual boreholes, and demonstrated the presence of an ambient upward hydraulic-head gradient throughout the site.
Selection of examples in case-based computer-aided decision systems
Mazurowski, Maciej A.; Zurada, Jacek M.; Tourassi, Georgia D.
2013-01-01
Case-based computer-aided decision (CB-CAD) systems rely on a database of previously stored, known examples when classifying new, incoming queries. Such systems can be particularly useful since they do not need retraining every time a new example is deposited in the case base. The adaptive nature of case-based systems is well suited to the current trend of continuously expanding digital databases in the medical domain. To maintain efficiency, however, such systems need sophisticated strategies to effectively manage the available evidence database. In this paper, we discuss the general problem of building an evidence database by selecting the most useful examples to store while satisfying existing storage requirements. We evaluate three intelligent techniques for this purpose: genetic algorithm-based selection, greedy selection and random mutation hill climbing. These techniques are compared to a random selection strategy used as the baseline. The study is performed with a previously presented CB-CAD system applied for false positive reduction in screening mammograms. The experimental evaluation shows that when the development goal is to maximize the system’s diagnostic performance, the intelligent techniques are able to reduce the size of the evidence database to 37% of the original database by eliminating superfluous and/or detrimental examples while at the same time significantly improving the CAD system’s performance. Furthermore, if the case-base size is a main concern, the total number of examples stored in the system can be reduced to only 2–4% of the original database without a decrease in the diagnostic performance. Comparison of the techniques shows that random mutation hill climbing provides the best balance between the diagnostic performance and computational efficiency when building the evidence database of the CB-CAD system. PMID:18854606
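A hedged sketch of random mutation hill climbing for this selection problem; the fitness function (e.g., the CAD system's cross-validated performance, possibly penalized by case-base size) is an assumed callable, not part of the sketch:

```python
# Sketch of random mutation hill climbing for case-base selection: flip one
# example in or out of the stored subset, keep the change only if fitness
# does not drop.
import numpy as np

rng = np.random.default_rng(0)

def rmhc(n_cases, fitness, iters=5000):
    mask = rng.random(n_cases) < 0.5          # initial random subset
    best = fitness(mask)
    for _ in range(iters):
        i = rng.integers(n_cases)
        mask[i] = ~mask[i]                    # single random mutation
        f = fitness(mask)
        if f >= best:
            best = f                          # keep improving/neutral moves
        else:
            mask[i] = ~mask[i]                # revert harmful mutation
    return mask, best

# e.g. rmhc(1000, lambda m: cad_auc(m) - 0.001 * m.sum())  # hypothetical fitness
```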
Pan, Zhujun; Su, Xiwen; Fang, Qun; Hou, Lijuan; Lee, Younghan; Chen, Chih C; Lamberth, John; Kim, Mi-Lyang
2018-01-01
Aging is a process associated with a decline in cognitive and motor functions, which can be attributed to neurological changes in the brain. Tai Chi, a multimodal mind-body exercise, can be practiced by people across all ages. Previous research identified effects of Tai Chi practice on delaying cognitive and motor degeneration. Benefits in behavioral performance included improved fine and gross motor skills, postural control, muscle strength, and so forth. The neural plasticity retained in the aging brain implies that Tai Chi-associated benefits may not be limited to the behavioral level. Instead, neurological changes in the human brain play a significant role in the corresponding behavioral improvement. However, previous studies mainly focused on effects on behavioral performance, leaving neurological changes largely unknown. This systematic review summarized extant studies that used brain imaging techniques and EEG to examine the effects of Tai Chi on older adults. Eleven articles were eligible for the final review. Three neuroimaging techniques, including fMRI (N = 6), EEG (N = 4), and MRI (N = 1), were employed for different study interests. Significant changes were reported in subjects' cortical thickness, functional connectivity and homogeneity of the brain, and executive network neural function after Tai Chi intervention. The findings suggest that Tai Chi intervention gives rise to beneficial neurological changes in the human brain. Future research should develop valid and convincing study designs by applying neuroimaging techniques to detect the effects of Tai Chi intervention on the central nervous system of older adults. By integrating neuroimaging techniques into randomized controlled trials involving Tai Chi intervention, researchers can extend the current research focus from the behavioral domain to the neurological level.
Rayleigh Wave Group Velocity Tomography from Microseisms in the Acambay Graben
NASA Astrophysics Data System (ADS)
Valderrama Membrillo, S.; Aguirre, J.; Zuñiga-Davila, R.; Iglesias, A.
2017-12-01
The Acambay graben is one of the most outstanding structures of the Trans-Mexican Volcanic Belt. It has a length of 80 km, a width of 15 to 18 km, and reaches a maximum height of 400 m in its central part. We obtained the group velocity seismic tomography of the Acambay graben for three different frequencies (f = 0.1, 0.2 and 0.3 Hz). The graben was divided into 6 x 6 km cells for the tomography, covering a total area of 1008 km2. Seismic noise data from 10 broadband seismic stations near the Acambay graben were used to extract the surface wave arrival times between all station pairs. The Green's function was recovered for each station pair by the cross-correlation technique. This technique was applied to seismic recordings collected on the vertical component of the 10 broadband stations for a continuous recording period of 5 months. Data processing consisted of removing the instrumental response, mean, and trend. After that, we applied time-domain normalization, spectral whitening, and band-pass filtering from 0.1 to 1 Hz. Shallow studies of the Acambay graben exist, but little is known about the distribution of its deep structures. This study estimated the deep surface wave velocity structure. The structures at 0.3 Hz correspond to shallower depths than the remaining frequencies. The results at this frequency are consistent with previous gravimetry and resistivity studies and also define the Temascalcingo fault system.
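A minimal sketch of the ambient-noise processing chain described above (one-bit time-domain normalization, spectral whitening, and cross-correlation to approximate the inter-station Green's function); array names and the lag window are assumptions for illustration.

```python
import numpy as np

def whiten(x):
    # Flatten the amplitude spectrum while keeping the phase.
    spec = np.fft.rfft(x)
    return np.fft.irfft(spec / (np.abs(spec) + 1e-12), n=len(x))

def noise_crosscorrelation(tr_a, tr_b, fs, max_lag_s=100.0):
    # tr_a, tr_b: equal-length, detrended noise records (numpy arrays) at fs Hz.
    a = whiten(np.sign(tr_a))            # one-bit normalization, then whitening
    b = whiten(np.sign(tr_b))
    n = len(a)
    # Circular cross-correlation via the frequency domain; for long records
    # this converges toward the inter-station Green's function.
    cc = np.fft.irfft(np.fft.rfft(a) * np.conj(np.fft.rfft(b)), n=n)
    cc = np.roll(cc, n // 2)             # center zero lag
    lags = (np.arange(n) - n // 2) / fs
    keep = np.abs(lags) <= max_lag_s
    return lags[keep], cc[keep]
```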
Evaluation of area strain response of dielectric elastomer actuator using image processing technique
NASA Astrophysics Data System (ADS)
Sahu, Raj K.; Sudarshan, Koyya; Patra, Karali; Bhaumik, Shovan
2014-03-01
Dielectric elastomer actuator (DEA) is a kind of soft actuator that can produce significantly large electric-field-induced actuation strain and may be a basic unit of artificial muscles and robotic elements. Understanding strain development in a pre-stretched sample at different regimes of electric field is essential for potential applications. In this paper, we report ongoing work on the determination of area strain using a digital camera and an image processing technique. The setup, developed in house, consists of a low-cost digital camera, data acquisition, and an image processing algorithm. Samples were prepared from biaxially stretched acrylic tape supported between two cardboard frames. Carbon grease was pasted on both sides of the sample as compliant electrodes that follow the large electric-field-induced deformation. Images were grabbed before and after the application of high voltage. From the incremental image area, strain was calculated as a function of applied voltage on a pre-stretched dielectric elastomer (DE) sample, and area strain was plotted against applied voltage for different pre-stretched samples. Our study shows that the area strain exhibits a nonlinear relationship with applied voltage. For the same voltage, higher area strain was generated in samples with higher pre-stretch. Our characterization also matches well with previously published results obtained with a costly video extensometer. The study may help designers fabricate biaxially pre-stretched planar actuators from similar kinds of materials.
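A minimal sketch of the image-processing step described above, assuming the carbon-grease electrode appears as a dark region that can be segmented by a simple intensity threshold; the threshold value is an assumption.

```python
import numpy as np

def electrode_area_px(img, threshold=80):
    # img: grayscale frame (2-D numpy array, values 0..255).
    return int((img < threshold).sum())   # pixel count of the dark region

def area_strain(img0, img1, threshold=80):
    # img0/img1: frames grabbed before/after the voltage is applied.
    a0 = electrode_area_px(img0, threshold)
    a1 = electrode_area_px(img1, threshold)
    return (a1 - a0) / a0                 # incremental area / original area
```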
Radcliffe, Jon N; Comfort, Paul; Fawcett, Tom
2015-09-01
This study provided the basis by which professional development needs can be addressed and adds to the applied sport psychology literature from an underresearched sport domain. It endeavored to use qualitative methods to explore the specific psychological techniques applied by the strength and conditioning professional. Eighteen participants were recruited for interview through convenience sampling, drawn from a previously obtained sample. Included in the study were 10 participants working within the United Kingdom, 3 within the United States, and 5 within Australia, offering a cross section of experience from a range of sport disciplines and educational backgrounds. Participants were interviewed using semistructured interviews. Thematic clustering via interpretative phenomenological analysis was used to identify common themes. The practitioners referred to a wealth of psychological skills and strategies that are used within strength and conditioning. Through thematic clustering, it was evident that a significant emphasis is on the development or maintenance of athlete self-confidence, specifically with a large focus on goal setting. Similarly, albeit to a lesser extent, there was notable attention to skill acquisition and arousal management strategies. The strategies used by the practitioners consisted of a combination of cognitive and behavioral strategies. It is important to highlight the main psychological strategies suggested by strength and conditioning coaches themselves to guide professional development toward specific areas. Such development should strive to develop coaches' awareness of strategies to develop confidence, regulate arousal, and facilitate skill and technique development.
The Next Era: Deep Learning in Pharmaceutical Research
Ekins, Sean
2016-01-01
Over the past decade we have witnessed the increasing sophistication of machine learning algorithms applied in daily use from internet searches, voice recognition, social network software to machine vision software in cameras, phones, robots and self-driving cars. Pharmaceutical research has also seen its fair share of machine learning developments. For example, applying such methods to mine the growing datasets that are created in drug discovery not only enables us to learn from the past but to predict a molecule’s properties and behavior in future. The latest machine learning algorithm garnering significant attention is deep learning, which is an artificial neural network with multiple hidden layers. Publications over the last 3 years suggest that this algorithm may have advantages over previous machine learning methods and offer a slight but discernable edge in predictive performance. The time has come for a balanced review of this technique but also to apply machine learning methods such as deep learning across a wider array of endpoints relevant to pharmaceutical research for which the datasets are growing such as physicochemical property prediction, formulation prediction, absorption, distribution, metabolism, excretion and toxicity (ADME/Tox), target prediction and skin permeation, etc. We also show that there are many potential applications of deep learning beyond cheminformatics. It will be important to perform prospective testing (which has been carried out rarely to date) in order to convince skeptics that there will be benefits from investing in this technique. PMID:27599991
Morales-Conde, Salvador; Cañete-Gómez, Jesús; Gómez, Virginia; Socas Macías, María; Moreno, Antonio Barranco; Del Agua, Isaias Alarcón; Ruíz, Francisco Javier Padillo
2016-10-01
After reports on laparoendoscopic single-site (LESS) cholecystectomy, concerns have been raised over the level of difficulty and a potential increase in complications when moving away from conventional gold standard multiport laparoscopy due to incomplete exposure and larger umbilical incisions. With continued development of technique and technology, it has now become possible to fully replicate this gold standard procedure through an LESS approach. First experiences with the newly developed technique and instrument are reported. Fifteen patients presenting with cholelithiasis without signs of inflammation were operated on using all surgical steps considered appropriate for the conventional four-port laparoscopic approach, but applied through a single access device. Operation-centered outcomes are presented. There were no peri- or postoperative complications. Mean operating time was 32.3 minutes. No conversion to regular laparoscopy was required. The critical view of safety was achieved in all cases. Mean skin incision length was 2.2 cm. The application of a standardized technique combined with a four-port LESS device allows us to perform LESS cholecystectomy with correct exposure of the structures and without increasing the mean operating time, combining the previously reported advantages of LESS. A universal trait of any new technique should be safety and reproducibility; this will enhance its applicability by a large number of surgeons and to the large number of patients requiring cholecystectomy.
Belmon, Laura S; te Velde, Saskia J; Brug, Johannes
2015-01-01
Background Interventions delivered through new device technology, including mobile phone apps, appear to be an effective method to reach young adults. Previous research indicates that self-efficacy and social support for physical activity and self-regulation behavior change techniques (BCT), such as goal setting, feedback, and self-monitoring, are important for promoting physical activity; however, little is known about evaluations by the target population of BCTs applied to physical activity apps and whether these preferences are associated with individual personality characteristics. Objective This study aimed to explore young adults’ opinions regarding BCTs (including self-regulation techniques) applied in mobile phone physical activity apps, and to examine associations between personality characteristics and ratings of BCTs applied in physical activity apps. Methods We conducted a cross-sectional online survey among healthy 18 to 30-year-old adults (N=179). Data on participants’ gender, age, height, weight, current education level, living situation, mobile phone use, personality traits, exercise self-efficacy, exercise self-identity, total physical activity level, and whether participants met Dutch physical activity guidelines were collected. Items for rating BCTs applied in physical activity apps were selected from a hierarchical taxonomy for BCTs, and were clustered into three BCT categories according to factor analysis: “goal setting and goal reviewing,” “feedback and self-monitoring,” and “social support and social comparison.” Results Most participants were female (n=146), highly educated (n=169), physically active, and had high levels of self-efficacy. In general, we observed high ratings of BCTs aimed to increase “goal setting and goal reviewing” and “feedback and self-monitoring,” but not for BCTs addressing “social support and social comparison.” Only 3 (out of 16 tested) significant associations between personality characteristics and BCTs were observed: “agreeableness” was related to more positive ratings of BCTs addressing “goal setting and goal reviewing” (OR 1.61, 95% CI 1.06-2.41), “neuroticism” was related to BCTs addressing “feedback and self-monitoring” (OR 0.76, 95% CI 0.58-1.00), and “exercise self-efficacy” was related to a high rating of BCTs addressing “feedback and self-monitoring” (OR 1.06, 95% CI 1.02-1.11). No associations were observed between personality characteristics (ie, personality, exercise self-efficacy, exercise self-identity) and participants’ ratings of BCTs addressing “social support and social comparison.” Conclusions Young Dutch physically active adults rate self-regulation techniques as most positive and techniques addressing social support as less positive among mobile phone apps that aim to promote physical activity. Such ratings of BCTs differ according to personality traits and exercise self-efficacy. Future research should focus on which behavior change techniques in app-based interventions are most effective to increase physical activity. PMID:26563744
A dynamical systems model for nuclear power plant risk
NASA Astrophysics Data System (ADS)
Hess, Stephen Michael
The recent transition to an open access generation marketplace has forced nuclear plant operators to become much more cost conscious and focused on plant performance. Coincidentally, the regulatory perspective also is in a state of transition from a command and control framework to one that is risk-informed and performance-based. Due to these structural changes in the economics and regulatory system associated with commercial nuclear power plant operation, there is an increased need for plant management to explicitly manage nuclear safety risk. Application of probabilistic risk assessment techniques to model plant hardware has provided a significant contribution to understanding the potential initiating events and equipment failures that can lead to core damage accidents. Application of the lessons learned from these analyses has supported improved plant operation and safety over the previous decade. However, this analytical approach has not been nearly as successful in addressing the impact of plant processes and management effectiveness on the risks of plant operation. Thus, the research described in this dissertation presents a different approach to address this issue. Here we propose a dynamical model that describes the interaction of important plant processes among themselves and their overall impact on nuclear safety risk. We first provide a review of the techniques that are applied in a conventional probabilistic risk assessment of commercially operating nuclear power plants and summarize the typical results obtained. The limitations of the conventional approach and the status of research previously performed to address these limitations also are presented. Next, we present the case for the application of an alternative approach using dynamical systems theory. This includes a discussion of previous applications of dynamical models to study other important socio-economic issues. Next, we review the analytical techniques that are applicable to analysis of these models. Details of the development of the mathematical risk model are presented. This includes discussion of the processes included in the model and the identification of significant interprocess interactions. This is followed by analysis of the model that demonstrates that its dynamical evolution displays characteristics that have been observed at commercially operating plants. The model is analyzed using the previously described techniques from dynamical systems theory. From this analysis, several significant insights are obtained with respect to the effective control of nuclear safety risk. Finally, we present conclusions and recommendations for further research.
Physicochemical characterization and failure analysis of military coating systems
NASA Astrophysics Data System (ADS)
Keene, Lionel Thomas
Modern military coating systems, as fielded by all branches of the U.S. military, generally consist of a diverse array of organic and inorganic components that can complicate their physicochemical analysis. These coating systems consist of VOC-solvent/waterborne automotive grade polyurethane matrix containing a variety of inorganic pigments and flattening agents. The research presented here was designed to overcome the practical difficulties regarding the study of such systems through the combined application of several cross-disciplinary techniques, including vibrational spectroscopy, electron microscopy, microtomy, ultra-fast laser ablation and optical interferometry. The goal of this research has been to determine the degree and spatial progression of weathering-induced alteration of military coating systems as a whole, as well as to determine the failure modes involved, and characterizing the impact of these failures on the physical barrier performance of the coatings. Transmission-mode Fourier Transform Infrared (FTIR) spectroscopy has been applied to cross-sections of both baseline and artificially weathered samples to elucidate weathering-induced spatial gradients to the baseline chemistry of the coatings. A large discrepancy in physical durability (as indicated by the spatial progression of these gradients) has been found between older and newer generation coatings. Data will be shown implicating silica fillers (previously considered inert) as the probable cause for this behavioral divergence. A case study is presented wherein the application of the aforementioned FTIR technique fails to predict the durability of the coating system as a whole. The exploitation of the ultra-fast optical phenomenon of femtosecond (10^-15 s) laser ablation is studied as a potential tool to facilitate spectroscopic depth profiling of composite materials. Finally, the interferometric technique of Phase Shifting was evaluated as a potential high-sensitivity technique applied to the problem of determining internal stress evolution in curing and aging coatings.
NASA Astrophysics Data System (ADS)
Rasam, A. R. A.; Ghazali, R.; Noor, A. M. M.; Mohd, W. M. N. W.; Hamid, J. R. A.; Bazlan, M. J.; Ahmad, N.
2014-02-01
Cholera spatial epidemiology is the study of the spread and control of the disease's spatial pattern and epidemics. Previous studies have shown that multi-factorial causation such as human behaviour, ecology, and other infectious risk factors influences disease outbreaks. Thus, understanding the spatial pattern of the outbreaks and the possible interrelationships among factors is crucial and warrants in-depth exploration. This study focuses on the integration of geographical information system (GIS) and epidemiological techniques in exploratory analysis of the cholera spatial pattern and distribution in the selected district of Sabah. Spatial Statistics and Pattern tools in ArcGIS and Microsoft Excel were utilized to map and analyze the reported cholera cases and other data used. Meanwhile, a cohort study, an epidemiological technique, was applied to investigate multiple outcomes of disease exposure. The general spatial pattern of cholera was highly clustered, showing that the disease spreads easily at a place or from person to person, especially within 1500 meters of infected persons and locations. Although the cholera outbreaks in the district are not critical, the disease could be endemic in crowded areas, unhygienic environments, and areas close to contaminated water. It is also strongly believed that the coastal water of the study areas has a possible relationship with cholera transmission and phytoplankton blooms, since these areas recorded more cases. GIS demonstrates a vital spatial epidemiological technique for determining the distribution pattern of the disease and generating hypotheses about it. Future research will apply advanced geo-analysis methods and other disease risk factors to produce a local-scale predictive risk model of the disease in Malaysia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jason M. Harp; Paul A. Demkowicz
2014-10-01
In the High Temperature Gas-Cooled Reactor (HTGR), the TRISO particle fuel serves as the primary fission product containment. However, the large number of TRISO particles present in proposed HTGRs dictates that there will be a small fraction (~10^-4 to 10^-5) of as-manufactured and in-pile particle failures that will lead to some fission product release. The matrix material surrounding the TRISO particles in fuel compacts and the structural graphite holding the TRISO particles in place can also serve as sinks for containing any released fission products. However, data on the migration of solid fission products through these materials is lacking. One of the primary goals of the AGR-3/4 experiment is to study fission product migration from failed TRISO particles in prototypic HTGR components such as structural graphite and compact matrix material. In this work, the potential for a Gamma Emission Computed Tomography (GECT) technique to non-destructively examine the fission product distribution in AGR-3/4 components and other irradiation experiments is explored. Specifically, the feasibility of using the Idaho National Laboratory (INL) Hot Fuels Examination Facility (HFEF) Precision Gamma Scanner (PGS) system for this GECT application is considered. To test the feasibility, the response of the PGS system to idealized fission product distributions has been simulated using Monte Carlo radiation transport simulations. Previous work that applied similar techniques during the AGR-1 experiment is also discussed, as well as planned uses for the GECT technique during the post-irradiation examination of the AGR-2 experiment. The GECT technique has also been applied to other irradiated nuclear fuel systems currently available in the HFEF hot cell, including oxide fuel pins, metallic fuel pins, and monolithic plate fuel.
Kernel-Phase Interferometry for Super-Resolution Detection of Faint Companions
NASA Astrophysics Data System (ADS)
Factor, Samuel M.; Kraus, Adam L.
2017-06-01
Direct detection of close-in companions (exoplanets or binary systems) is notoriously difficult. While coronagraphs and point spread function (PSF) subtraction can be used to reduce contrast and dig out signals of companions under the PSF, there are still significant limitations in separation and contrast near λ/D. Non-redundant aperture masking (NRM) interferometry can be used to detect companions well inside the PSF of a diffraction limited image, though the mask discards ~95% of the light gathered by the telescope and thus the technique is severely flux limited. Kernel-phase analysis applies interferometric techniques similar to NRM to a diffraction limited image utilizing the full aperture. Instead of non-redundant closure-phases, kernel-phases are constructed from a grid of points on the full aperture, simulating a redundant interferometer. I have developed a new, easy to use, faint companion detection pipeline which analyzes kernel-phases utilizing Bayesian model comparison. I demonstrate this pipeline on archival images from HST/NICMOS, searching for new companions in order to constrain binary formation models at separations inaccessible to previous techniques. Using this method, it is possible to detect a companion well within the classical λ/D Rayleigh diffraction limit using a fraction of the telescope time as NRM. Since the James Webb Space Telescope (JWST) will be able to perform NRM observations, further development and characterization of kernel-phase analysis will allow efficient use of highly competitive JWST telescope time. As no mask is needed, this technique can easily be applied to archival data and even target acquisition images (e.g. from JWST), making the detection of close-in companions cheap and simple, as no additional observations are needed.
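A conceptual sketch of the kernel-phase construction underlying this kind of pipeline, assuming a phase-transfer matrix A from a discretized pupil model (building A from a real pupil is omitted); this follows the general recipe of kernel-phase analysis rather than the author's exact code.

```python
import numpy as np

def kernel_matrix(A, tol=1e-10):
    # A: (n_uv x n_pupil) matrix mapping pupil-plane phases to Fourier (uv)
    # phases. Rows of K span the left null space of A, so K @ uv_phases
    # cancels pupil-plane phase errors to first order, leaving observables
    # sensitive only to source asymmetries such as companions.
    u, s, _ = np.linalg.svd(A, full_matrices=True)
    rank = int(np.sum(s > tol))
    return u[:, rank:].T                 # each row is one kernel-phase vector

def kernel_phases(K, uv_phases):
    # uv_phases: Fourier phases of the image sampled at the model's uv points.
    return K @ uv_phases
```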
Symbolic dynamics techniques for complex systems: Application to share price dynamics
NASA Astrophysics Data System (ADS)
Xu, Dan; Beck, Christian
2017-05-01
The symbolic dynamics technique is well known for low-dimensional dynamical systems and chaotic maps, and lies at the roots of the thermodynamic formalism of dynamical systems. Here we show that this technique can also be successfully applied to time series generated by complex systems of much higher dimensionality. Our main example is the investigation of share price returns in a coarse-grained way. A nontrivial spectrum of Rényi entropies is found. We study how the spectrum depends on the time scale of returns, the sector of stocks considered, as well as the number of symbols used for the symbolic description. Overall our analysis confirms that in the symbol space transition probabilities of observed share price returns depend on the entire history of previous symbols, thus emphasizing the need for a modelling based on non-Markovian stochastic processes. Our method allows for quantitative comparisons of entirely different complex systems, for example the statistics of symbol sequences generated by share price returns using 4 symbols can be compared with that of genomic sequences.
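A minimal sketch of the coarse-graining and Rényi entropy estimation described above; the quartile symbol boundaries and word length are illustrative choices, not the paper's.

```python
import numpy as np
from collections import Counter

def symbolize(returns, n_symbols=4):
    # Coarse-grain a return series into symbols 0..n_symbols-1 by quantiles.
    edges = np.quantile(returns, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(returns, edges)

def renyi_entropy(symbols, q, word_len=4):
    # Estimate the order-q Renyi entropy from frequencies of length-L words.
    words = [tuple(symbols[i:i + word_len])
             for i in range(len(symbols) - word_len + 1)]
    p = np.array(list(Counter(words).values()), dtype=float)
    p /= p.sum()
    if q == 1.0:                         # Shannon limit of the Renyi family
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** q)) / (1.0 - q)
```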
Quantifying short-lived events in multistate ionic current measurements.
Balijepalli, Arvind; Ettedgui, Jessica; Cornio, Andrew T; Robertson, Joseph W F; Cheung, Kin P; Kasianowicz, John J; Vaz, Canute
2014-02-25
We developed a generalized technique to characterize polymer-nanopore interactions via single channel ionic current measurements. Physical interactions between analytes, such as DNA, proteins, or synthetic polymers, and a nanopore cause multiple discrete states in the current. We modeled the transitions of the current to individual states with an equivalent electrical circuit, which allowed us to describe the system response. This enabled the estimation of short-lived states that are presently not characterized by existing analysis techniques. Our approach considerably improves the range and resolution of single-molecule characterization with nanopores. For example, we characterized the residence times of synthetic polymers that are three times shorter than those estimated with existing algorithms. Because the molecule's residence time follows an exponential distribution, we recover nearly 20-fold more events per unit time that can be used for analysis. Furthermore, the measurement range was extended from 11 monomers to as few as 8. Finally, we applied this technique to recover a known sequence of single-stranded DNA from previously published ion channel recordings, identifying discrete current states with subpicoampere resolution.
NASA Technical Reports Server (NTRS)
Cornelius, Michael; Smartt, Ziba; Henrie, Vaughn; Johnson, Mont
2003-01-01
The recent developments in Fabry-Perot fiber optic instruments have resulted in accurate transducers with some of the physical characteristics required for use in obtaining internal data from solid rocket motors. These characteristics include small size, non-electrical excitation, and immunity to electro-magnetic interference. These transducers have not been previously utilized in this environment due to the high temperatures typically encountered. A series of tests were conducted using an 11-Inch Hybrid test bed to develop installation techniques that will allow the fiber optic instruments to survive and obtain data for a short period of time following the motor ignition. The installation methods developed during this test series have the potential to allow data to be acquired in the motor chamber, propellant bore, and nozzle during the ignition transient. These measurements would prove to be very useful in the characterization of current motor designs and provide insight into the requirements for further refinements. The process of developing these protective methods and the installation techniques used to apply them is summarized.
Statistical photocalibration of photodetectors for radiometry without calibrated light sources
NASA Astrophysics Data System (ADS)
Yielding, Nicholas J.; Cain, Stephen C.; Seal, Michael D.
2018-01-01
Calibration of CCD arrays for identifying bad pixels and achieving nonuniformity correction is commonly accomplished using dark frames. This kind of calibration technique does not achieve radiometric calibration of the array since only the relative response of the detectors is computed. For this, a second calibration is sometimes utilized by looking at sources with known radiances. This process can be used to calibrate photodetectors as long as a calibration source is available and is well-characterized. A previous attempt at creating a procedure for calibrating a photodetector using the underlying Poisson nature of photodetection required calculations of the skewness of the photodetector measurements. Reliance on the third moment of the measurements meant that thousands of samples would be required in some cases to compute that moment. A photocalibration procedure is defined that requires only the first and second moments of the measurements. The technique is applied to image data containing a known light source so that the accuracy of the technique can be assessed. It is shown that the algorithm can achieve accuracy of nearly 2.7% of the predicted number of photons using only 100 frames of image data.
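A minimal sketch of a first/second-moment photocalibration under the Poisson model discussed above, using the photon-transfer relation between temporal mean and variance; this illustrates the idea rather than reproducing the authors' exact estimator.

```python
import numpy as np

def photocalibrate(frames):
    # frames: stack of repeated exposures of a static scene, shape (N, H, W).
    # Under the Poisson model, variance v and mean m (in digital counts)
    # satisfy v = g * (m - offset), where g is counts per photon, so the
    # gain and offset follow from a straight-line fit.
    m = frames.mean(axis=0).ravel()
    v = frames.var(axis=0, ddof=1).ravel()
    g, intercept = np.polyfit(m, v, 1)        # slope = gain (counts/photon)
    offset = -intercept / g
    photons = (m - offset) / g                # estimated photons per pixel
    return g, photons.reshape(frames.shape[1:])
```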
The Design of Large-Scale Complex Engineered Systems: Present Challenges and Future Promise
NASA Technical Reports Server (NTRS)
Bloebaum, Christina L.; McGowan, Anna-Maria Rivas
2012-01-01
Model-Based Systems Engineering techniques are used in the SE community to address the need for managing the development of complex systems. A key feature of the MBSE approach is the use of a model to capture the requirements, architecture, behavior, operating environment and other key aspects of the system. The focus on the model differentiates MBSE from traditional SE techniques that may have a document centric approach. In an effort to assess the benefit of utilizing MBSE on its flight projects, NASA Langley has implemented a pilot program to apply MBSE techniques during the early phase of the Materials International Space Station Experiment-X (MISSE-X). MISSE-X is a Technology Demonstration Mission being developed by the NASA Office of the Chief Technologist. Designed to be installed on the exterior of the International Space Station (ISS), MISSE-X will host experiments that advance the technology readiness of materials and devices needed for future space exploration. As a follow-on to the highly successful series of previous MISSE experiments on ISS, MISSE-X benefits from a significant interest by the…
Bevilacqua, M; Ciarapica, F E; Giacchetta, G
2008-07-01
This work is an attempt to apply classification tree methods to data regarding accidents in a medium-sized refinery, so as to identify the important relationships between the variables, which can be considered as decision-making rules when adopting any measures for improvement. The results obtained using the CART (Classification And Regression Trees) method proved to be the most precise and, in general, they are encouraging concerning the use of tree diagrams as preliminary explorative techniques for the assessment of the ergonomic, management and operational parameters which influence high accident risk situations. The Occupational Injury analysis carried out in this paper was planned as a dynamic process and can be repeated systematically. The CART technique, which considers a very wide set of objective and predictive variables, shows new cause-effect correlations in occupational safety which had never been previously described, highlighting possible injury risk groups and supporting decision-making in these areas. The use of classification trees must not, however, be seen as an attempt to supplant other techniques, but as a complementary method which can be integrated into traditional types of analysis.
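A minimal sketch of a CART-style analysis of injury records using scikit-learn; the file name, column names, and outcome coding are hypothetical.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("injury_records.csv")            # hypothetical data file
X = pd.get_dummies(df[["task", "shift", "experience_years", "area"]])
y = df["severe_injury"]                           # hypothetical 0/1 outcome

# Shallow tree with a minimum leaf size, so the extracted rules stay
# interpretable as decision-making rules for safety interventions.
tree = DecisionTreeClassifier(criterion="gini", max_depth=4,
                              min_samples_leaf=20, random_state=0)
tree.fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))
```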
Advances in carbonate exploration and reservoir analysis
Garland, J.; Neilson, J.; Laubach, S.E.; Whidden, Katherine J.
2012-01-01
The development of innovative techniques and concepts, and the emergence of new plays in carbonate rocks are creating a resurgence of oil and gas discoveries worldwide. The maturity of a basin and the application of exploration concepts have a fundamental influence on exploration strategies. Exploration success often occurs in underexplored basins by applying existing established geological concepts. This approach is commonly undertaken when new basins ‘open up’ owing to previous political upheavals. The strategy of using new techniques in a proven mature area is particularly appropriate when dealing with unconventional resources (heavy oil, bitumen, stranded gas), while the application of new play concepts (such as lacustrine carbonates) to new areas (i.e. ultra-deep South Atlantic basins) epitomizes frontier exploration. Many low-matrix-porosity hydrocarbon reservoirs are productive because permeability is controlled by fractures and faults. Understanding basic fracture properties is critical in reducing geological risk and therefore reducing well costs and increasing well recovery. The advent of resource plays in carbonate rocks, and the long-standing recognition of naturally fractured carbonate reservoirs means that new fracture and fault analysis and prediction techniques and concepts are essential.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merrill, D.W.; Selvin, S.; Close, E.R.
In studying geographic disease distributions, one normally compares rates of arbitrarily defined geographic subareas (e.g. census tracts), thereby sacrificing the geographic detail of the original data. The sparser the data, the larger the subareas must be in order to calculate stable rates. This dilemma is avoided with the technique of Density Equalizing Map Projections (DEMP). Boundaries of geographic subregions are adjusted to equalize population density over the entire study area. Case locations plotted on the transformed map should have a uniform distribution if the underlying disease-rates are constant. On the transformed map, the statistical analysis of the observed distribution is greatly simplified. Even for sparse distributions, the statistical significance of a supposed disease cluster can be reliably calculated. The present report describes the first successful application of the DEMP technique to a sizeable "real-world" data set of epidemiologic interest. An improved DEMP algorithm [GUSE93, CLOS94] was applied to a data set previously analyzed with conventional techniques [SATA90, REYN91]. The results from the DEMP analysis and a conventional analysis are compared.
New Ground Truth Capability from InSAR Time Series Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buckley, S; Vincent, P; Yang, D
2005-07-13
We demonstrate that next-generation interferometric synthetic aperture radar (InSAR) processing techniques applied to existing data provide rich InSAR ground truth content for exploitation in seismic source identification. InSAR time series analyses utilize tens of interferograms and can be implemented in different ways. In one such approach, conventional InSAR displacement maps are inverted in a final post-processing step. Alternatively, computationally intensive data reduction can be performed with specialized InSAR processing algorithms. The typical final result of these approaches is a synthesized set of cumulative displacement maps. Examples from our recent work demonstrate that these InSAR processing techniques can provide appealing new ground truth capabilities. We construct movies showing the areal and temporal evolution of deformation associated with previous nuclear tests. In other analyses, we extract time histories of centimeter-scale surface displacement associated with tunneling. The potential exists to identify millimeter per year surface movements when sufficient data exists for InSAR techniques to isolate and remove phase signatures associated with digital elevation model errors and the atmosphere.
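A minimal sketch of the post-processing inversion mentioned above, in which unwrapped interferogram values spanning date pairs are inverted for incremental displacements by least squares (an SBAS-like step, per pixel, ignoring atmosphere and DEM errors).

```python
import numpy as np

def invert_timeseries(pairs, dphase, n_dates):
    # pairs: list of (i, j) acquisition-date indices with i < j.
    # dphase: corresponding unwrapped interferogram values for one pixel.
    G = np.zeros((len(pairs), n_dates - 1))
    for row, (i, j) in enumerate(pairs):
        G[row, i:j] = 1.0        # interferogram (i, j) sums increments i..j-1
    increments, *_ = np.linalg.lstsq(G, np.asarray(dphase), rcond=None)
    cumulative = np.concatenate([[0.0], np.cumsum(increments)])
    return cumulative            # displacement history at each acquisition date
```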
Prospective motion correction of high-resolution magnetic resonance imaging data in children.
Brown, Timothy T; Kuperman, Joshua M; Erhart, Matthew; White, Nathan S; Roddey, J Cooper; Shankaranarayanan, Ajit; Han, Eric T; Rettmann, Dan; Dale, Anders M
2010-10-15
Motion artifacts pose significant problems for the acquisition and analysis of high-resolution magnetic resonance imaging data. These artifacts can be particularly severe when studying pediatric populations, where greater patient movement reduces the ability to clearly view and reliably measure anatomy. In this study, we tested the effectiveness of a new prospective motion correction technique, called PROMO, as applied to making neuroanatomical measures in typically developing school-age children. This method attempts to address the problem of motion at its source by keeping the measurement coordinate system fixed with respect to the subject throughout image acquisition. The technique also performs automatic rescanning of images that were acquired during intervals of particularly severe motion. Unlike many previous techniques, this approach adjusts for both in-plane and through-plane movement, greatly reducing image artifacts without the need for additional equipment. Results show that the use of PROMO notably enhances subjective image quality, reduces errors in Freesurfer cortical surface reconstructions, and significantly improves the subcortical volumetric segmentation of brain structures. Further applications of PROMO for clinical and cognitive neuroscience are discussed.
Yan, Wei; Yang, Yanlong; Tan, Yu; Chen, Xun; Li, Yang; Qu, Junle; Ye, Tong
2018-01-01
Stimulated emission depletion (STED) microscopy is one of the far-field optical microscopy techniques that can provide sub-diffraction spatial resolution. The spatial resolution of STED microscopy is determined by the specially engineered beam profile of the depletion beam and its power. However, the beam profile of the depletion beam may be distorted due to aberrations of optical systems and inhomogeneity of specimens' optical properties, resulting in a compromised spatial resolution. The situation deteriorates when thick samples are imaged. In the worst case, severe distortion of the depletion beam profile may cause complete loss of the super-resolution effect no matter how much depletion power is applied to specimens. Previously, several adaptive optics approaches have been explored to compensate for aberrations of systems and specimens. However, it is hard to correct the complicated high-order optical aberrations of specimens. In this report, we demonstrate that the complicated distorted wavefront from a thick phantom sample can be measured by using the coherent optical adaptive technique (COAT). The full correction can effectively maintain and improve the spatial resolution in imaging thick samples. PMID:29400356
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mestrovic, Ante; Clark, Brenda G.; Department of Medical Physics, British Columbia Cancer Agency, Vancouver, British Columbia
2005-11-01
Purpose: To develop a method of predicting the values of dose distribution parameters of different radiosurgery techniques for treatment of arteriovenous malformation (AVM) based on internal geometric parameters. Methods and Materials: For each of 18 previously treated AVM patients, four treatment plans were created: circular collimator arcs, dynamic conformal arcs, fixed conformal fields, and intensity-modulated radiosurgery. An algorithm was developed to characterize the target and critical structure shape complexity and the position of the critical structures with respect to the target. Multiple regression was employed to establish the correlation between the internal geometric parameters and the dose distribution for different treatment techniques. The results from the model were applied to predict the dosimetric outcomes of different radiosurgery techniques and select the optimal radiosurgery technique for a number of AVM patients. Results: Several internal geometric parameters showing statistically significant correlation (p < 0.05) with the treatment planning results for each technique were identified. The target volume and the average minimum distance between the target and the critical structures were the most effective predictors for normal tissue dose distribution. The structure overlap volume with the target and the mean distance between the target and the critical structure were the most effective predictors for critical structure dose distribution. The predicted values of dose distribution parameters of different radiosurgery techniques were in close agreement with the original data. Conclusions: A statistical model has been described that successfully predicts the values of dose distribution parameters of different radiosurgery techniques and may be used to predetermine the optimal technique on a patient-to-patient basis.
Berg, Eric; Roncali, Emilie; Hutchcroft, Will; Qi, Jinyi; Cherry, Simon R.
2016-01-01
In a scintillation detector, the light generated in the scintillator by a gamma interaction is converted to photoelectrons by a photodetector and produces a time-dependent waveform, the shape of which depends on the scintillator properties and the photodetector response. Several depth-of-interaction (DOI) encoding strategies have been developed that manipulate the scintillator’s temporal response along the crystal length and therefore require pulse shape discrimination techniques to differentiate waveform shapes. In this work, we demonstrate how maximum likelihood (ML) estimation methods can be applied to pulse shape discrimination to better estimate deposited energy, DOI and interaction time (for time-of-flight (TOF) PET) of a gamma ray in a scintillation detector. We developed likelihood models based on either the estimated detection times of individual photoelectrons or the number of photoelectrons in discrete time bins, and applied to two phosphor-coated crystals (LFS and LYSO) used in a previously developed TOF-DOI detector concept. Compared with conventional analytical methods, ML pulse shape discrimination improved DOI encoding by 27% for both crystals. Using the ML DOI estimate, we were able to counter depth-dependent changes in light collection inherent to long scintillator crystals and recover the energy resolution measured with fixed depth irradiation (~11.5% for both crystals). Lastly, we demonstrated how the Richardson-Lucy algorithm, an iterative, ML-based deconvolution technique, can be applied to the digitized waveforms to deconvolve the photodetector’s single photoelectron response and produce waveforms with a faster rising edge. After deconvolution and applying DOI and time-walk corrections, we demonstrated a 13% improvement in coincidence timing resolution (from 290 to 254 ps) with the LFS crystal and an 8% improvement (323 to 297 ps) with the LYSO crystal. PMID:27295658
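A minimal 1-D Richardson-Lucy sketch of the deconvolution step described above, assuming a baseline-subtracted, non-negative waveform and a measured single-photoelectron response; the iteration count is an assumption.

```python
import numpy as np

def richardson_lucy_1d(waveform, spr, n_iter=50, eps=1e-12):
    # waveform: digitized, baseline-subtracted, non-negative detector pulse.
    # spr: single-photoelectron response kernel to be deconvolved out.
    spr = spr / spr.sum()
    spr_rev = spr[::-1]                       # flipped kernel for the update
    estimate = np.full_like(waveform, waveform.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = np.convolve(estimate, spr, mode="same")
        ratio = waveform / (blurred + eps)
        estimate *= np.convolve(ratio, spr_rev, mode="same")
    return estimate                           # sharper-edged waveform estimate
```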
Electron beams scanning: A novel method
NASA Astrophysics Data System (ADS)
Askarbioki, M.; Zarandi, M. B.; Khakshournia, S.; Shirmardi, S. P.; Sharifian, M.
2018-06-01
In this research, a spatial electron beam scanning method is reported. There are various methods for ion and electron beam scanning. The best known of these is wire scanning, wherein the beam parameters are measured by one or more conductive wires. This article suggests a novel method for e-beam scanning that avoids the errors of traditional wire scanning. In this method, techniques of atomic physics are applied so that a knife edge acts as the scanner and the wires act as detectors. The 2D e-beam profile is readily determined once the positions of the scanner and detectors are specified.
Genetics and Cell Morphology Analyses of the Actinomyces oris srtA Mutant.
Wu, Chenggang; Reardon-Robinson, Melissa Elizabeth; Ton-That, Hung
2016-01-01
Sortase is a cysteine-transpeptidase that anchors LPXTG-containing proteins on the Gram-positive bacterial cell wall. Previously, sortase was considered to be an important factor for bacterial pathogenesis and fitness, but not cell growth. However, the Actinomyces oris sortase is essential for cell viability, due to its coupling to a glycosylation pathway. In this chapter, we describe the methods to generate conditional srtA deletion mutants and identify srtA suppressors by Tn5 transposon mutagenesis. We also provide procedures for analyzing cell morphology of this mutant by thin-section electron microscopy. These techniques can be applied for analyses of other essential genes in A. oris.
Decomposability and scalability in space-based observatory scheduling
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Smith, Stephen F.
1992-01-01
In this paper, we discuss issues of problem and model decomposition within the HSTS scheduling framework. HSTS was developed and originally applied in the context of the Hubble Space Telescope (HST) scheduling problem, motivated by the limitations of the current solution and, more generally, the insufficiency of classical planning and scheduling approaches in this problem context. We first summarize the salient architectural characteristics of HSTS and their relationship to previous scheduling and AI planning research. Then, we describe some key problem decomposition techniques supported by HSTS and underlying our integrated planning and scheduling approach, and we discuss the leverage they provide in solving space-based observatory scheduling problems.
Wide-field two-photon microscopy with temporal focusing and HiLo background rejection
NASA Astrophysics Data System (ADS)
Yew, Elijah Y. S.; Choi, Heejin; Kim, Daekeun; So, Peter T. C.
2011-03-01
Scanningless depth-resolved microscopy is achieved through spatial-temporal focusing and has been demonstrated previously. The advantage of this method is that a large area may be imaged without scanning resulting in higher throughput of the imaging system. Because it is a widefield technique, the optical sectioning effect is considerably poorer than with conventional spatial focusing two-photon microscopy. Here we propose wide-field two-photon microscopy based on spatio-temporal focusing and employing background rejection based on the HiLo microscope principle. We demonstrate the effects of applying HiLo microscopy to widefield temporally focused two-photon microscopy.
Researcher’s Perspective of Substitution Method on Text Steganography
NASA Astrophysics Data System (ADS)
Zamir Mansor, Fawwaz; Mustapha, Aida; Azah Samsudin, Noor
2017-08-01
The linguistic steganography studies are still in the stage of development and empowerment practices. This paper presents several text steganography substitution methods from the researcher's perspective; the relevant scholarly papers are analysed and compared. The objective of this paper is to give basic information on the substitution methods of text-domain steganography that have been applied by previous researchers. The typical ways these methods are used are also identified, to reveal the most effective method in text-domain steganography. Finally, the advantages of the characteristics and the drawbacks of these techniques in general are also presented.
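A toy sketch of a substitution-based text steganography encoder of the kind surveyed above: one bit is hidden per carrier word by choosing between two synonyms. The synonym table and bit source are illustrative, not a published scheme.

```python
# Each carrier word encodes one bit: original form = 0, substituted form = 1.
SYNONYMS = {"big": "large", "quick": "fast", "happy": "glad"}
REVERSE = {v: k for k, v in SYNONYMS.items()}

def embed(words, bits):
    out, i = [], 0
    for w in words:
        if w in SYNONYMS and i < len(bits):   # carrier slot found
            out.append(SYNONYMS[w] if bits[i] else w)
            i += 1
        else:
            out.append(w)
    return out

def extract(words):
    return [1 if w in REVERSE else 0
            for w in words if w in SYNONYMS or w in REVERSE]

stego = embed("the big dog is quick and happy".split(), [1, 0, 1])
print(" ".join(stego), extract(stego))
# -> "the large dog is quick and glad" [1, 0, 1]
```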
Detecting duplicate biological entities using Shortest Path Edit Distance.
Rudniy, Alex; Song, Min; Geller, James
2010-01-01
Duplicate entity detection in biological data is an important research task. In this paper, we propose a novel and context-sensitive Shortest Path Edit Distance (SPED) extending and supplementing our previous work on Markov Random Field-based Edit Distance (MRFED). SPED transforms the edit distance computational problem to the calculation of the shortest path among two selected vertices of a graph. We produce several modifications of SPED by applying Levenshtein, arithmetic mean, histogram difference and TFIDF techniques to solve subtasks. We compare SPED performance to other well-known distance algorithms for biological entity matching. The experimental results show that SPED produces competitive outcomes.
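An illustrative sketch of the core idea of casting edit distance as a shortest-path problem: nodes are prefix pairs, edges are insert/delete/substitute moves, and Dijkstra's algorithm finds the cheapest path. Unit costs reproduce plain Levenshtein distance; SPED's context-sensitive costs are not reproduced here.

```python
import heapq

def edit_distance_dijkstra(a, b):
    # Node (i, j) means "first i chars of a matched against first j of b".
    dist = {(0, 0): 0}
    heap = [(0, 0, 0)]
    while heap:
        d, i, j = heapq.heappop(heap)
        if (i, j) == (len(a), len(b)):
            return d
        if d > dist.get((i, j), float("inf")):
            continue
        moves = []
        if i < len(a):
            moves.append((i + 1, j, 1))                      # delete a[i]
        if j < len(b):
            moves.append((i, j + 1, 1))                      # insert b[j]
        if i < len(a) and j < len(b):
            moves.append((i + 1, j + 1, 0 if a[i] == b[j] else 1))
        for ni, nj, c in moves:
            if d + c < dist.get((ni, nj), float("inf")):
                dist[(ni, nj)] = d + c
                heapq.heappush(heap, (d + c, ni, nj))

print(edit_distance_dijkstra("kitten", "sitting"))   # -> 3
```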
Elastic cavitation and fracture via injection.
Hutchens, Shelby B; Fakhouri, Sami; Crosby, Alfred J
2016-03-07
The cavitation rheology technique extracts soft materials' mechanical properties through pressure-monitored fluid injection. Properties are calculated from the system's response at a critical pressure that is governed by either elasticity or fracture (or both); however, previous elementary analyses have not been capable of accurately determining which mechanism is dominant. We combine analyses of both mechanisms in order to determine how the full system thermodynamics, including far-field compliance, dictate whether a bubble in an elastomeric solid will grow through either reversible or irreversible deformations. Applying these analyses to experimental data, we demonstrate the sensitivity of cavitation rheology to microstructural variation via a co-dependence between modulus and fracture energy.
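A worked example of the two critical-pressure mechanisms in play, under common cavitation-rheology assumptions (incompressible neo-Hookean elasticity with the Gent-Lindley limit plus a surface-tension correction); the numbers are illustrative, not the paper's data.

```python
# Elastic cavitation estimate under common assumptions: P_c = 5E/6 + 2*gamma/r.
# If the measured critical pressure exceeds this elastic bound, growth is more
# plausibly fracture-dominated, which is the ambiguity the combined analysis
# described above is designed to resolve. All values below are illustrative.
E = 10e3        # Young's modulus, Pa
gamma = 0.03    # surface energy, J/m^2
r = 5e-6        # needle/defect radius, m

P_elastic = 5 * E / 6 + 2 * gamma / r
print(f"elastic critical pressure ~ {P_elastic / 1e3:.1f} kPa")
```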
Measurement of Charged Pions from Neutrino-produced Nuclear Resonance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simon, Clifford N.
2014-01-01
A method for identifying stopped pions in a high-resolution scintillator bar detector is presented. I apply my technique to measure the axial mass M_A^Δ for production of the Δ(1232) resonance by neutrinos, with the result M_A^Δ = 1.16 ± 0.20 GeV (68% CL) (limited by statistics). The result is produced from the measured spectrum of reconstructed momentum transfer Q². I proceed by varying the value of M_A^Δ in a Rein-Sehgal-based Monte Carlo to produce the best agreement, using shape only (not normalization). The consistency of this result with recent reanalyses of previous bubble-chamber experiments is discussed.
Resistive method for measuring the disintegration speed of Prince Rupert's drops
NASA Astrophysics Data System (ADS)
Bochkov, Mark; Gusenkova, Daria; Glushkov, Evgenii; Zotova, Julia; Zhabin, S. N.
2016-09-01
We have successfully applied the resistance grid technique to measure the disintegration speed in a special type of glass objects, widely known as Prince Rupert's drops. We use a fast digital oscilloscope and a simple electrical circuit, glued to the surface of the drops, to detect the voltage changes, corresponding to the breaks in the specific parts of the drops. The results obtained using this method are in good qualitative and quantitative agreement with theoretical predictions and previously published data. Moreover, the proposed experimental setup does not include any expensive equipment (such as a high-speed camera) and can therefore be widely used in high schools and universities.
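A minimal sketch of the analysis behind the resistive-grid method: each break of a conductive trace produces a voltage step, and the front speed follows from trace spacing divided by inter-step time. The step-detection threshold, trace spacing, and sample rate are assumptions.

```python
import numpy as np

def disintegration_speed(voltage, fs, spacing_m, step_thresh=0.1):
    # voltage: oscilloscope trace (numpy array) sampled at fs Hz;
    # spacing_m: distance between adjacent conductive traces on the drop.
    jumps = np.flatnonzero(np.abs(np.diff(voltage)) > step_thresh)
    # Collapse consecutive samples belonging to the same voltage step.
    step_idx = jumps[np.insert(np.diff(jumps) > 1, 0, True)]
    dt = np.diff(step_idx) / fs          # time between successive trace breaks
    return spacing_m / dt                # front speed between adjacent traces
```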
Validation of catchment models for predicting land-use and climate change impacts. 1. Method
NASA Astrophysics Data System (ADS)
Ewen, J.; Parkin, G.
1996-02-01
Computer simulation models are increasingly being proposed as tools capable of giving water resource managers accurate predictions of the impact of changes in land-use and climate. Previous validation testing of catchment models is reviewed, and it is concluded that the methods used do not clearly test a model's fitness for such a purpose. A new generally applicable method is proposed. This involves the direct testing of fitness for purpose, uses established scientific techniques, and may be implemented within a quality assured programme of work. The new method is applied in Part 2 of this study (Parkin et al., J. Hydrol., 175:595-613, 1996).
K, Yıldız; B R, Kayan; T, Dulgeroglu; E, Guneren
2018-04-01
This is a case report of a 63-year-old male patient who presented with rhinophyma of 17 years' duration. Several medical treatments had been applied previously, with no response or poor improvement. We present our experience combining the Versajet™ Hydrosurgery System and ReCELL® in a heavy-smoker patient, which led to a good aesthetic outcome. With the combined technique, we did not encounter any difficulties either during the operation or in the follow-up period. We obtained fewer complications and faster wound healing, which in turn led to higher patient satisfaction.
Testing a new application for TOPSIS: monitoring drought and wet periods in Iran
NASA Astrophysics Data System (ADS)
Roshan, Gholamreza; Ghanghermeh, AbdolAzim; Grab, Stefan W.
2018-01-01
Globally, droughts are a recurring major natural disaster owing to below normal precipitation, and are occasionally associated with high temperatures, which together negatively impact upon human health and social, economic, and cultural activities. Drought early warning and monitoring is thus essential for reducing such potential impacts on society. To this end, several experimental methods have previously been proposed for calculating drought, yet these are based almost entirely on precipitation alone. Here, for the first time, and in contrast to previous studies, we use seven climate parameters to establish drought/wet periods: Tmin, Tmax, sunshine hours, relative humidity, average rainfall, number of rain days greater than 1 mm, and the ratio of total precipitation to the number of days with precipitation, combined through the technique for order of preference by similarity to ideal solution (TOPSIS) algorithm. To test the TOPSIS method across different climate zones, six sample stations representing a variety of climate conditions were used; weight changes were assigned to the climate parameters and applied to the model, together with multivariate regression analysis. For the six stations tested, model results indicate the lowest errors for Zabol station and the largest for Kermanshah. The validation techniques strongly support our proposed new method for calculating and rating drought/wet events using TOPSIS.
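A minimal TOPSIS sketch for ranking time periods by wetness from a multi-parameter climate matrix, as in the approach described above; the weights and benefit/cost flags are assumptions.

```python
import numpy as np

def topsis(X, weights, benefit):
    # X: rows = periods, columns = climate parameters (e.g., the seven above).
    # weights: importance of each parameter; benefit: True where larger
    # values mean wetter conditions, False where they mean drier conditions.
    X = np.asarray(X, dtype=float)
    N = X / np.linalg.norm(X, axis=0)            # vector normalization
    V = N * weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)   # closeness: near 1 = wettest, near 0 = driest
```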
Yang, Jiaheng; He, Xiaodong; Guo, Ruijun; Xu, Peng; Wang, Kunpeng; Sheng, Cheng; Liu, Min; Wang, Jin; Derevianko, Andrei; Zhan, Mingsheng
2016-09-16
We demonstrate that the coherence of a single mobile atomic qubit can be well preserved during a transfer process among different optical dipole traps (ODTs). This is a prerequisite step in realizing a large-scale neutral atom quantum information processing platform. A qubit encoded in the hyperfine manifold of an 87Rb atom is dynamically extracted from the static quantum register by an auxiliary moving ODT and reinserted into the static ODT. Previous experiments were limited by decoherence induced by the differential light shifts of the qubit states. Here, we apply a magic-intensity trapping technique which mitigates the detrimental effects of light shifts and substantially enhances the coherence time to 225 ± 21 ms. The experimentally demonstrated magic trapping technique relies on the previously neglected hyperpolarizability contribution to the light shifts, which makes the light-shift dependence on the trapping laser intensity parabolic. Because of this parabolic dependence, at a certain "magic" intensity, the first-order sensitivity to trapping light-intensity variations over the ODT volume is eliminated. We experimentally demonstrate the utility of this approach and measure the hyperpolarizability for the first time. Our results pave the way for constructing scalable quantum-computing architectures with single atoms trapped in an array of magic ODTs.
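A worked sketch of the parabolic light-shift model described above: with the hyperpolarizability term, the differential shift is d(I) = a*I + b*I^2, so first-order insensitivity occurs at the stationary point I = -a/(2*b); the coefficients below are illustrative, not measured values.

```python
# Parabolic differential light shift: d(I) = a*I + b*I**2, where the
# quadratic term comes from hyperpolarizability. The "magic" intensity is
# where d'(I) = a + 2*b*I = 0, so small intensity variations across the
# trap volume shift the qubit frequency only at second order.
a, b = -1.0, 0.005            # arbitrary units, illustrative only

I_magic = -a / (2 * b)
print(f"magic intensity: {I_magic:.1f} (arb. units)")
```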
DANCING IN THE DARK: NEW BROWN DWARF BINARIES FROM KERNEL PHASE INTERFEROMETRY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, Benjamin; Tuthill, Peter; Martinache, Frantz, E-mail: bjsp@physics.usyd.edu.au, E-mail: p.tuthill@physics.usyd.edu.au, E-mail: frantz@naoj.org
2013-04-20
This paper revisits a sample of ultracool dwarfs in the solar neighborhood previously observed with the Hubble Space Telescope's NICMOS NIC1 instrument. We have applied a novel high angular resolution data analysis technique based on the extraction and fitting of kernel phases to archival data. This was found to deliver a dramatic improvement over earlier analysis methods, permitting a search for companions down to projected separations of ~1 AU on NIC1 snapshot images. We reveal five new close binary candidates and present revised astrometry on previously known binaries, all of which were recovered with the technique. The new candidate binaries have sufficiently close separation to determine dynamical masses in a short-term observing campaign. We also present four marginal detections of objects which may be very close binaries or high-contrast companions. Including only confident detections within 19 pc, we report a binary fraction of at least ε_b = 17.2 (+5.7/-3.7)%. The results reported here provide new insights into the population of nearby ultracool binaries, while also offering an incisive case study of the benefits conferred by the kernel phase approach in the recovery of companions within a few resolution elements of the point-spread function core.
Fractography of ceramic and metal failures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1984-01-01
STP 827 is organized into the two broad areas of ceramics and metals. The ceramics section covers fracture analysis techniques, surface analysis techniques, and applied fractography. The metals section covers failure analysis techniques, the latest approaches to fractography, and applied fractography.
Demons versus Level-Set motion registration for coronary 18F-sodium fluoride PET.
Rubeaux, Mathieu; Joshi, Nikhil; Dweck, Marc R; Fletcher, Alison; Motwani, Manish; Thomson, Louise E; Germano, Guido; Dey, Damini; Berman, Daniel S; Newby, David E; Slomka, Piotr J
2016-02-27
Ruptured coronary atherosclerotic plaques commonly cause acute myocardial infarction. It has recently been shown that active microcalcification in the coronary arteries, one of the features that characterizes vulnerable plaques at risk of rupture, can be imaged using cardiac-gated 18F-sodium fluoride (18F-NaF) PET. We have shown in previous work that a motion correction technique applied to cardiac-gated 18F-NaF PET images can enhance image quality and improve uptake estimates. In this study, we further investigated the applicability of different algorithms for registration of the coronary artery PET images. In particular, we aimed to compare demons vs. level-set nonlinear registration techniques applied for the correction of cardiac motion in coronary 18F-NaF PET. To this end, fifteen patients underwent 18F-NaF PET and prospective coronary CT angiography (CCTA). PET data were reconstructed in 10 ECG-gated bins; subsequently these gated bins were registered using demons and level-set methods guided by the coronary arteries extracted from CCTA, to eliminate the effect of cardiac motion on PET images. Noise levels, target-to-background ratios (TBR) and global motion were compared to assess image quality. Compared to the reference standard of using only the diastolic PET image (25% of the counts from the PET acquisition), cardiac motion registration using either level-set or demons techniques almost halved image noise due to the use of counts from the full PET acquisition and increased the TBR difference between 18F-NaF-positive and -negative lesions. The demons method produces smoother deformation fields, exhibiting no singularities (which reflects how physically plausible the registration deformation is), as compared to the level-set method, which presents between 4 and 8% of singularities, depending on the coronary artery considered. In conclusion, the demons method produces smoother motion fields as compared to the level-set method, with a motion that is physiologically plausible; the level-set technique will therefore likely require additional post-processing steps. On the other hand, the observed TBR increases were the highest for the level-set technique. Further investigations of the optimal registration technique for this novel coronary PET imaging technique are warranted.
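A generic demons registration of one gated PET bin onto a reference bin can be sketched with SimpleITK; this is not the authors' CCTA-guided pipeline, and the file names are hypothetical:

    import SimpleITK as sitk

    fixed = sitk.ReadImage("pet_gate_diastole.nii", sitk.sitkFloat32)  # reference bin
    moving = sitk.ReadImage("pet_gate_3.nii", sitk.sitkFloat32)        # bin to correct

    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(200)
    demons.SetStandardDeviations(2.0)   # Gaussian smoothing keeps the field smooth
    displacement = demons.Execute(fixed, moving)

    warped = sitk.Warp(moving, displacement)  # resample the bin onto the reference

Summing the warped bins recovers counts from the full acquisition, which is what drives the noise reduction reported above.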
Debbage, P L; Sölder, E; Seidl, S; Hutzler, P; Hugl, B; Ofner, D; Kreczy, A
2001-10-01
We previously applied intravital lectin perfusion in mouse models to elucidate mechanisms underlying vascular permeability. The present work transfers this technique to human models, analysing vascular permeability in macro- and microvessels. Human vascular endothelial surface carbohydrate biochemistry differs significantly from its murine counterpart, lacking alpha-galactosyl epitopes and expressing the L-fucose moiety in the glycocalyx; the poly-N-lactosamine glycan backbone is common to all mammals. We extensively examined lectin-binding specificities in sections and in vivo, and then applied the poly-N-lactosamine-specific lectin LEA and the L-fucose-specific lectin UEA-I in human intravital perfusions. Transendothelial transport differed in macrovessels and microvessels. In microvessels of adult human fat tissue, rectal wall and rectal carcinomas, slow transendothelial transport by vesicles was followed by significant retention at the subendothelial basement membrane; paracellular passage was not observed. Passage time exceeded 1 h. Thus we found barrier mechanisms resembling those we described previously in murine tissues. In both adult and fetal macrovessels, the vena saphena magna and the umbilical vein, respectively, rapid passage across the endothelial lining was observed, the tracer localising completely in the subendothelial tissues within 15 min; vesicular transport was more rapid than in microvessels, and retention at the subendothelial basement membrane briefer.
Determining attenuation properties of interfering fast and slow ultrasonic waves in cancellous bone.
Nelson, Amber M; Hoffman, Joseph J; Anderson, Christian C; Holland, Mark R; Nagatani, Yoshiki; Mizuno, Katsunori; Matsukawa, Mami; Miller, James G
2011-10-01
Previous studies have shown that interference between fast waves and slow waves can lead to observed negative dispersion in cancellous bone. In this study, the effects of overlapping fast and slow waves on measurements of the apparent attenuation as a function of propagation distance are investigated along with methods of analysis used to determine the attenuation properties. Two methods are applied to simulated data that were generated based on experimentally acquired signals taken from a bovine specimen. The first method uses a time-domain approach that was dictated by constraints imposed by the partial overlap of fast and slow waves. The second method uses a frequency-domain log-spectral subtraction technique on the separated fast and slow waves. Applying the time-domain analysis to the broadband data yields apparent attenuation behavior that is larger in the early stages of propagation and decreases as the wave travels deeper. In contrast, performing frequency-domain analysis on the separated fast waves and slow waves results in attenuation coefficients that are independent of propagation distance. Results suggest that features arising from the analysis of overlapping two-mode data may represent an alternate explanation for the previously reported apparent dependence on propagation distance of the attenuation coefficient of cancellous bone. © 2011 Acoustical Society of America
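A minimal sketch of the frequency-domain log-spectral subtraction step, assuming the fast and slow waves have already been separated and a through-water reference pulse is available (array names are hypothetical):

    import numpy as np

    def attenuation_db_per_cm(ref, sig, fs, thickness_cm):
        # log-spectral subtraction: attenuation coefficient vs. frequency
        n = max(len(ref), len(sig))
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        mag_ref = np.abs(np.fft.rfft(ref, n))
        mag_sig = np.abs(np.fft.rfft(sig, n))
        alpha = (20.0 / thickness_cm) * np.log10(mag_ref / mag_sig)
        return freqs, alpha

Because each separated mode is processed on its own, the resulting alpha(f) does not inherit the interference-driven apparent dependence on propagation distance described above.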
NASA Astrophysics Data System (ADS)
Shcherbakov, Alexandre S.; Campos Acosta, Joaquin; Moreno Zarate, Pedro; Mansurova, Svetlana; Il'in, Yurij V.; Tarasov, Il'ya S.
2010-06-01
We discuss a specifically elaborated approach for characterizing the train-average parameters of low-power picosecond optical pulses with frequency chirp, arranged in high-repetition-frequency trains, in both the time and frequency domains. This approach had previously been applied to the important case of pulse generation in which a single-mode semiconductor heterolaser operates in a multi-pulse regime of active mode-locking with an external single-mode fiber cavity. In effect, trains of optical dissipative solitary pulses, which appear under a double balance between the mutually compensating actions of dispersion and nonlinearity as well as gain and optical losses, are under characterization. In contrast with the previous studies, however, we now consider describing two chirped optical pulses together. The main reason for involving just a pair of pulses is that it is the simplest way to simulate the properties of a sequence of pulses rather than an isolated pulse. This step, however, introduces a specific difficulty inherent in applying joint time-frequency distributions to groups of signals: the appearance of various false signals, or artefacts. This is why the joint Choi-Williams time-frequency distribution and the smoothing technique are given preliminary consideration here.
Amundsen, Lotta K; Sirén, Heli
2007-10-01
Affinity capillary electrophoresis (ACE) is a popular technique for evaluating association constants between drugs and proteins. However, ACE has not previously been applied to study the association between electrically neutral biomolecules and plasma proteins. We studied the affinity between human and bovine serum albumins (HSA and BSA, respectively) and three neutral endogenous steroid hormones (testosterone, epitestosterone and androstenedione) and two synthetic analogues (methyltestosterone and fluoxymesterone) by applying the partial-filling technique in ACE (PF-ACE). From the endocrinological point of view, the distribution of endogenous steroids among plasma components is of great interest. Strong interactions with albumins suppress the biological activity of steroids. Notable differences in the association constants were observed. Among the endogenous steroids, the interactions between testosterone and the albumins were strongest, and those between androstenedione and the albumins were substantially weaker. The association constants, K_b, for testosterone, epitestosterone and androstenedione with HSA at 37 °C were 32 100 ± 3600, 21 600 ± 1500 and 13 300 ± 1300 M^-1, respectively, while the corresponding values for the steroids with BSA were 18 800 ± 1500, 14 000 ± 400 and 7800 ± 900 M^-1. Methyltestosterone was bound even more strongly than testosterone, while fluoxymesterone was only weakly bound by the albumins. Finally, the steroids were separated by PF-ACE with HSA and BSA used as resolving components.
Restrepo-Agudelo, Sebastian; Roldan-Vasco, Sebastian; Ramirez-Arbelaez, Lina; Cadavid-Arboleda, Santiago; Perez-Giraldo, Estefania; Orozco-Duque, Andres
2017-08-01
Visual inspection is a widely used method for evaluating the surface electromyographic (sEMG) signal during deglutition, but it depends heavily on the examiner's expertise. A less subjective, automated technique is desirable to improve onset detection in swallowing-related muscles, which have a low signal-to-noise ratio. In this work, we acquired sEMG with high baseline noise from the infrahyoid muscles of ten healthy adults during water-swallowing tasks. Two methods were applied to find the combination of cutoff frequencies that achieves the most accurate onset detection: a discrete-wavelet-decomposition-based method and fixed-step variations of the low and high cutoff frequencies of a digital bandpass filter. The Teager-Kaiser Energy operator, root mean square, and a simple threshold method were applied for both techniques. Results show a narrowing of the effective bandwidth relative to the literature-recommended parameters for sEMG acquisition. Both level-3 decomposition with mother wavelet db4 and a bandpass filter with cutoff frequencies between 130 and 180 Hz were optimal for onset detection in infrahyoid muscles. The proposed methodologies recognized the onset time with predictive power above 0.95, similar to previous findings obtained in larger and more superficial limb muscles. Copyright © 2017 Elsevier Ltd. All rights reserved.
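A minimal sketch combining the reported optimal 130-180 Hz band with the Teager-Kaiser Energy operator; the threshold rule (baseline mean + k·SD over an assumed quiet initial segment) is a common convention, not necessarily the authors' exact criterion:

    import numpy as np
    from scipy.signal import butter, filtfilt

    def detect_onset(x, fs, lo=130.0, hi=180.0, k=3.0, baseline_s=0.5):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        xf = filtfilt(b, a, x)
        tkeo = xf[1:-1] ** 2 - xf[:-2] * xf[2:]      # Teager-Kaiser Energy operator
        nb = int(baseline_s * fs)                    # assumed quiet baseline segment
        thr = tkeo[:nb].mean() + k * tkeo[:nb].std()
        onset = int(np.argmax(tkeo > thr)) + 1       # +1 compensates the TKEO trim
        return onset / fs                            # onset time in seconds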
NASA Astrophysics Data System (ADS)
Özdemir, Burcin; Huang, Wenting; Plettl, Alfred; Ziemann, Paul
2015-03-01
A consecutive fabrication approach for independently tailored gradients of the topographical parameters distance, diameter and height in arrays of well-ordered nanopillars on smooth SiO2-Si wafers is presented. For this purpose, previously reported preparation techniques are further developed and combined. First, self-assembly of Au-salt-loaded micelles by dip-coating with computer-controlled pulling-out velocities and subsequent hydrogen plasma treatment produce quasi-hexagonally ordered, 2-dimensional arrays of Au nanoparticles (NPs) with unidirectional variations of the interparticle distances along the pulling direction between 50-120 nm. Second, the distance (or areal density) gradient profile obtained in this way is superimposed with a diameter-controlled gradient profile of the NPs by applying a selective photochemical growth technique. For demonstration, a 1D shutter is used to define local UV exposure times, preparing Au NP size gradients varying between 12 and 30 nm. Third, these double-gradient NP arrangements serve as etching masks in a subsequent reactive ion etching step, delivering arrays of nanopillars. For height gradient generation, the etching time is locally controlled by applying a shutter made from a Si wafer piece. Owing to the high flexibility of the etching process, the preparation route works on various materials such as cover slips, silicon, silicon oxide, silicon nitride and silicon carbide.
Erokwu, Bernadette O; Anderson, Christian E; Flask, Chris A; Dell, Katherine M
2018-05-01
Background: Autosomal recessive polycystic kidney disease (ARPKD) is associated with significant mortality and morbidity, and currently there are no disease-specific treatments available for ARPKD patients. One major limitation in establishing new therapies for ARPKD is a lack of sensitive measures of kidney disease progression. Magnetic resonance imaging (MRI) can provide multiple quantitative assessments of the disease. Methods: We applied quantitative image analysis of high-resolution (noncontrast) T2-weighted MRI techniques to study cystic kidney disease progression and response to therapy in the PCK rat model of ARPKD. Results: Serial imaging over a 2-month period demonstrated that renal cystic burden (RCB, %) = [total cyst volume (TCV)/total kidney volume (TKV)] × 100, TCV, and, to a lesser extent, TKV detected cystic kidney disease progression, as well as the therapeutic effect of octreotide, a clinically available medication shown previously to slow both kidney and liver disease progression in this model. All three MRI measures correlated significantly with histologic measures of renal cystic area, although the correlation of RCB and TCV was stronger than that of TKV. Conclusion: These preclinical MRI results provide a basis for applying these quantitative MRI techniques in clinical studies, to stage and measure progression in human ARPKD kidney disease.
NASA Astrophysics Data System (ADS)
Lange-Asschenfeldt, Bernhard; Alborova, Alena; Krüger-Corcoran, Daniela; Patzelt, Alexa; Richter, Heike; Sterry, Wolfram; Kramer, Axel; Stockfleth, Eggert; Lademann, Jürgen
2009-09-01
Epidermal wound healing is a complex and dynamic regenerative process necessary to reestablish skin integrity. Fluorescence confocal laser scanning microscopy (FLSM) is a noninvasive imaging technique that has previously been used for evaluation of inflammatory and neoplastic skin disorders in vivo and at high resolution. We employed FLSM to investigate the evolution of epidermal wound healing noninvasively over time and in vivo. Two suction blisters were induced on the volar forearms of the study participants, followed by removal of the epidermis. To study the impact of wound ointment on the process of reepithelization, test sites were divided into two groups, of which one test site was left untreated as a negative control. FLSM was used for serial evaluations over up to 8 days. FLSM was able to visualize the development of thin keratinocyte layers near the wound edge and around hair follicles until the entire epidermis had been reestablished. Wounds treated with the wound ointment were found to heal significantly faster than untreated wounds. This technique allows monitoring of the kinetics of wound healing noninvasively and over time, while offering new insights into the potential effects of topically applied drugs on the process of tissue repair.
On the performance of SART and ART algorithms for microwave imaging
NASA Astrophysics Data System (ADS)
Aprilliyani, Ria; Prabowo, Rian Gilang; Basari
2018-02-01
The development of advanced technology has changed lifestyles in modern society. One adverse consequence is the rise of degenerative diseases such as cancers and tumors alongside common infectious diseases. Every year the number of cancer and tumor victims grows significantly, making these diseases among the leading causes of death worldwide. In its early stage, a cancer/tumor shows no definite symptoms, yet it grows abnormally and damages normal tissue, so early detection is required. Common diagnostic modalities such as MRI, CT and PET are difficult to operate at home or in mobile environments such as an ambulance; they are also costly, unpleasant, complex, less safe and hard to move. Hence, this paper proposes a microwave imaging system for its portability and low cost. In the current study, we address the performance of the simultaneous algebraic reconstruction technique (SART) applied to microwave imaging. In addition, SART is compared with the algebraic reconstruction technique (ART) from our previous work, particularly in terms of reconstructed image quality. The results show that applying SART to microwave imaging detects suspicious cancers/tumors with better image quality.
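A minimal sketch of both update rules on a linearized system A x = b (A: nonnegative ray/sensitivity matrix, b: measured data, x: image); dimensions, damping and iteration counts are illustrative:

    import numpy as np

    def art(A, b, iters=20, lam=0.5):
        # Kaczmarz-style ART: sequential projection onto each ray equation
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            for i in range(A.shape[0]):
                ai = A[i]
                x += lam * (b[i] - ai @ x) / (ai @ ai + 1e-12) * ai
        return x

    def sart(A, b, iters=20, lam=0.5):
        # SART: all rays applied simultaneously, with row/column normalization
        x = np.zeros(A.shape[1])
        row = A.sum(axis=1) + 1e-12
        col = A.sum(axis=0) + 1e-12
        for _ in range(iters):
            x += lam * (A.T @ ((b - A @ x) / row)) / col
        return x

The simultaneous, averaged update is what gives SART its smoother, less noise-sensitive reconstructions relative to row-by-row ART.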
NASA Astrophysics Data System (ADS)
Padeletti, G.; Fermo, P.
Lustre was one of the most sophisticated techniques for the decoration of majolicas during the Renaissance period. Lustre consists of a thin metallic film containing silver, copper and other substances, such as iron oxide and cinnabar, applied in a reducing atmosphere to a previously glazed ceramic. In this way, beautiful iridescent reflections of different colours (in particular gold and ruby-red) are obtained. The characterisation and study of lustre-decorated majolicas is of great interest for archaeologists, but also offers possibilities for producing pottery with outstanding decoration today, following ancient examples, since Italian artisans are now interested in reproducing the ancient recipes and procedures. Moreover, it can even suggest new procedures for obtaining uniform thin metallic films for technological applications. A study has been carried out on ancient lustre layers using numerous analytical techniques, including XRD, SEM-EDX, TEM-EDX-SAED, ETAAS, ICP-OES, UV-vis reflectance spectroscopy and SAXS. Lustre films were shown to be formed by copper and silver clusters of nanometric dimensions. The colour and properties of the lustre films depend on the elemental composition of the impasto applied to the ceramic surface as well as on other factors such as the metallic nanocluster dimensions, the firing conditions, the underlying glaze composition and the procedure used.
Differentially Private Empirical Risk Minimization
Chaudhuri, Kamalika; Monteleoni, Claire; Sarwate, Anand D.
2011-01-01
Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). These algorithms are private under the ε-differential privacy definition due to Dwork et al. (2006). First we apply the output perturbation ideas of Dwork et al. (2006), to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between privacy and learning performance. PMID:21892342
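A minimal sketch of the output perturbation variant for regularized logistic regression, assuming feature vectors scaled to unit norm; the Gamma-distributed noise magnitude follows from the 2/(n·lambda) L2 sensitivity of the ERM minimizer:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def private_logreg(X, y, eps, lam, seed=0):
        # train the non-private ERM solution: (1/n)*loss + (lam/2)*||w||^2
        n, d = X.shape
        w = LogisticRegression(C=1.0 / (n * lam)).fit(X, y).coef_.ravel()
        # add noise with density proportional to exp(-eps * n * lam * ||b|| / 2):
        # uniform direction, Gamma(d, 2/(n*lam*eps)) magnitude
        rng = np.random.default_rng(seed)
        direction = rng.normal(size=d)
        direction /= np.linalg.norm(direction)
        magnitude = rng.gamma(shape=d, scale=2.0 / (n * lam * eps))
        return w + magnitude * direction

Objective perturbation instead adds a random linear term to the training objective before optimizing, which is the variant the paper finds superior.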
Mills, Travis; Lalancette, Marc; Moses, Sandra N; Taylor, Margot J; Quraan, Maher A
2012-07-01
Magnetoencephalography provides precise information about the temporal dynamics of brain activation and is an ideal tool for investigating rapid cognitive processing. However, in many cognitive paradigms visual stimuli are used, which evoke strong brain responses (typically 40-100 nAm in V1) that may impede the detection of weaker activations of interest. This is particularly a concern when beamformer algorithms are used for source analysis, due to artefacts such as "leakage" of activation from the primary visual sources into other regions. We have previously shown (Quraan et al. 2011) that we can effectively reduce leakage patterns and detect weak hippocampal sources by subtracting the functional images derived from the experimental task and a control task with similar stimulus parameters. In this study we assess the performance of three different subtraction techniques. In the first technique we follow the same post-localization subtraction procedures as in our previous work. In the second and third techniques, we subtract the sensor data obtained from the experimental and control paradigms prior to source localization. Using simulated signals embedded in real data, we show that when beamformers are used, subtraction prior to source localization allows for the detection of weaker sources and higher localization accuracy. The improvement in localization accuracy exceeded 10 mm at low signal-to-noise ratios, and sources down to below 5 nAm were detected. We applied our techniques to empirical data acquired with two different paradigms designed to evoke hippocampal and frontal activations, and demonstrated our ability to detect robust activations in both regions with substantial improvements over image subtraction. We conclude that removal of the common-mode dominant sources through data subtraction prior to localization further improves the beamformer's ability to project the n-channel sensor-space data to reveal weak sources of interest and allows more accurate localization.
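A minimal sketch of the sensor-space subtraction, assuming matched experimental and control recordings with identical channel layout and epoching; the arrays below are random stand-ins for epoched MEG data:

    import numpy as np

    rng = np.random.default_rng(0)
    exp_trials = rng.normal(size=(80, 151, 600))    # (trials, channels, times)
    ctrl_trials = rng.normal(size=(80, 151, 600))   # matched control paradigm

    # subtracting evoked sensor data before localization removes the
    # common-mode visual response that otherwise dominates the beamformer
    diff_evoked = exp_trials.mean(axis=0) - ctrl_trials.mean(axis=0)

The difference data (or a covariance estimated from them) then feed the beamformer, so weak hippocampal or frontal sources are no longer masked by leakage from the strong shared visual sources.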
ERIC Educational Resources Information Center
Al-Saggaf, Yeslam; Burmeister, Oliver K.
2012-01-01
This exploratory study compares and contrasts two types of critical thinking techniques; one is a philosophical and the other an applied ethical analysis technique. The two techniques analyse an ethically challenging situation involving ICT that a recent media article raised to demonstrate their ability to develop the ethical analysis skills of…
Applying knowledge compilation techniques to model-based reasoning
NASA Technical Reports Server (NTRS)
Keller, Richard M.
1991-01-01
Researchers in the area of knowledge compilation are developing general purpose techniques for improving the efficiency of knowledge-based systems. In this article, an attempt is made to define knowledge compilation, to characterize several classes of knowledge compilation techniques, and to illustrate how some of these techniques can be applied to improve the performance of model-based reasoning systems.
D'Agnese, F. A.; Faunt, C.C.; Turner, A.K.; ,
1996-01-01
The recharge and discharge components of the Death Valley regional groundwater flow system were defined by remote sensing and GIS techniques that integrated disparate data types to develop a spatially complex representation of near-surface hydrological processes. Image classification methods were applied to multispectral satellite data to produce a vegetation map. This map provided a basis for subsequent evapotranspiration and infiltration estimations. The vegetation map was combined with ancillary data in a GIS to delineate different types of wetlands, phreatophytes and wet playa areas. Existing evapotranspiration-rate estimates were then used to calculate discharge volumes for these areas. A previously used empirical method of groundwater recharge estimation was modified by GIS methods to incorporate data describing soil-moisture conditions, and a recharge potential map was produced. These discharge and recharge maps were readily converted to data arrays for numerical modelling codes. Inverse parameter estimation techniques also used these data to evaluate the reliability and sensitivity of estimated values.
In vivo facet joint loading of the canine lumbar spine.
Buttermann, G R; Schendel, M J; Kahmann, R D; Lewis, J L; Bradford, D S
1992-01-01
This study describes a technique to measure in vivo loads and the resultant load-contact locations in the facet joint of the canine lumbar spine. The technique is a modification of a previously described in vitro method that used calibrated surface strains of the lateral aspect of the right L3 cranial articular process. In the present study, strains were measured during various in vivo static and dynamic activities 3 days after strain gage implantation. The in vivo recording technique and its errors, which depend on the location of the applied facet loads, are described. Applying the technique to five dogs gave the following results. Relative resultant contact load locations on the facet tended to be in the central and caudal portion of the facet in extension activities, central and cranial in standing, and cranial and ventral in flexion or right-turning activities. Right-turning contact locations were ventral and cranial to left-turning locations. Resultant load locations at peak loading during walking were in the central region of the facet, whereas resultant load locations at minimum loading during walking were relatively craniad. This resultant load-contact location during a walk gait cycle typically migrated in an arc with a displacement of 4 mm from minimum to maximum loading. Static tests resulted in facet loads ranging from 0 N in flexion and lying to 185 N for two-legged standing erect, and standing resulted in facet loads of 26 ± 15 N (mean ± standard deviation [SD]). Dynamic tests resulted in peak facet loads ranging from 55 N while walking erect to 170 N for climbing up stairs. Maximum walk facet loads were 107 ± 27 N. The technique is applicable to in vivo studies of a canine facet joint osteoarthritis model and may be useful for establishing an understanding of the biomechanics of low-back pain.
NASA Astrophysics Data System (ADS)
Ibrahim, Wael Refaat Anis
The present research involves the development of several fuzzy expert systems for power quality analysis and diagnosis. Intelligent systems for the prediction of abnormal system operation were also developed. The performance of all intelligent modules developed was either enhanced or entirely realized through adaptive fuzzy learning techniques, with neuro-fuzzy learning as the main adaptive technique. The work presents a novel approach to the interpretation of power quality from the perspective of the continuous operation of a single system. The research includes an extensive literature review pertaining to the applications of intelligent systems to power quality analysis. Basic definitions and signature events related to power quality are introduced, along with detailed discussions of various artificial intelligence paradigms as well as wavelet theory. A fuzzy-based intelligent system capable of distinguishing normal from abnormal operation for a given system was developed, and adaptive neuro-fuzzy learning was applied to enhance its performance. A group of fuzzy expert systems that perform full operational diagnosis were also developed successfully and applied to the operational diagnosis of 3-phase induction motors and rectifier bridges. A novel approach for learning power quality waveforms and trends was developed. The technique, which is adaptive neuro-fuzzy based, learned, compressed, and stored the waveform data. The new technique was successfully tested using a wide variety of power quality signature waveforms and real site data. The trend-learning technique was incorporated into a fuzzy expert system designed to predict abnormal operation of a monitored system. The intelligent system learns and stores, in compressed format, trends leading to abnormal operation, and continuously compares incoming data to the retained trends. If the incoming data match any of the learned trends, an alarm is triggered, predicting the advent of abnormal system operation. The incoming data can be compared to previous trends as well as matched to trends developed through computer simulations and stored using fuzzy learning.
Application of real rock pore-throat statistics to a regular pore network model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rakibul, M.; Sarker, H.; McIntyre, D.
2011-01-01
This work reports the application of real rock statistical data to a previously developed regular pore network model in an attempt to produce an accurate simulation tool with low computational overhead. A core plug from the St. Peter Sandstone formation in Indiana was scanned with a high-resolution micro-CT scanner. The pore-throat statistics of the three-dimensional reconstructed rock were extracted and the distribution of the pore-throat sizes was applied to the regular pore network model. In order to keep the equivalent model regular, only the throat area or the throat radius was varied. Ten realizations of randomly distributed throat sizes were generated to simulate the drainage process, and relative permeability was calculated and compared with the experimentally determined values of the original rock sample. The numerical and experimental procedures are explained in detail and the performance of the model in relation to the experimental data is discussed and analyzed. Petrophysical properties such as relative permeability are important in many applied fields such as production of petroleum fluids, enhanced oil recovery, carbon dioxide sequestration, and groundwater flow. Relative permeability data are used for a wide range of conventional reservoir engineering calculations and in numerical reservoir simulation. Two-phase oil-water relative permeability data were generated on the same core plug by both the pore network model and the experimental procedure. The shape and size of the relative permeability curves were compared and analyzed; a good match was observed for the wetting-phase relative permeability, but the non-wetting-phase simulation results deviated from the experimental ones. Efforts to determine petrophysical properties of rocks using numerical techniques aim to eliminate the need for regular core analysis, which can be time consuming and expensive, so a numerical technique is expected to be fast and to produce reliable results. In applied engineering, a quick result with reasonable accuracy is sometimes preferable to a more time-consuming one. The present work checks the accuracy and validity of a previously developed pore network model for obtaining important petrophysical properties of rocks based on cutting-sized sample data.
Can captive orangutans (Pongo pygmaeus abelii) be coaxed into cumulative build-up of techniques?
Lehner, Stephan R; Burkart, Judith M; Schaik, Carel P van
2011-11-01
While striking cultural variation in behavior from one site to another has been described in chimpanzees and orangutans, cumulative culture might be unique to humans. Captive chimpanzees were recently found to be rather conservative, sticking to the technique they had mastered, even after more effective alternatives were demonstrated. Behavioral flexibility in problem solving, in the sense of acquiring new solutions after having learned another one earlier, is a vital prerequisite for cumulative build-up of techniques. Here, we experimentally investigate whether captive orangutans show such flexibility, and if so, whether they show techniques that cumulatively build up (ratchet) on previous ones after conditions of the task are changed. We provided nine Sumatran orangutans (Pongo pygmaeus abelii) with two types of transparent tubes partly filled with syrup, along with potential tools such as sticks, twigs, wood wool and paper. In the first phase, the orangutans could reach inside the tubes with their hands (Regular Condition), but in the following phase, tubes had been made too narrow for their hands to fit in (Restricted Condition 1), or in addition the setup lacked their favorite materials (Restricted Condition 2). The orangutans showed high behavioral flexibility, applying nine different techniques under the regular condition in total. Individuals abandoned preferred techniques and switched to different techniques under restricted conditions when this was advantageous. We show for two of these techniques how they cumulatively built up on earlier ones. This suggests that the near-absence of cumulative culture in wild orangutans is not due to a lack of flexibility when existing solutions to tasks are made impossible.
NASA Astrophysics Data System (ADS)
Gao, Shuang; Yang, Wen; Zhang, Hui; Sun, Yanling; Mao, Jian; Ma, Zhenxing; Cong, Zhiyuan; Zhang, Xian; Tian, Shasha; Azzi, Merched; Chen, Li; Bai, Zhipeng
2018-02-01
The determination of the background concentration of PM2.5 is important for understanding the contribution of local emission sources to the total PM2.5 concentration. The purpose of this study was to examine the performance of baseline separation techniques for estimating the PM2.5 background concentration. Five separation methods, which included recursive digital filters (Lyne-Hollick, one-parameter algorithm, and Boughton two-parameter algorithm), sliding interval and smoothed minima, were applied to one-year PM2.5 time-series data in two heavily polluted cities, Tianjin and Jinan. To obtain the proper filter parameters and recession constants for the separation techniques, we conducted regression analysis at a background site during the emission reduction period enforced by the Government for the 2014 Asia-Pacific Economic Cooperation (APEC) meeting in Beijing. Background concentrations in Tianjin and Jinan were then estimated by applying the determined filter parameters and recession constants. The chemical mass balance (CMB) model was also applied to ascertain the effectiveness of the new approach. Our results showed that the contribution of the background PM concentration to ambient pollution was at a level comparable to the contribution obtained in a previous study. The best performance was achieved using the Boughton two-parameter algorithm. The background concentrations were estimated at (27 ± 2) μg/m3 for the whole year, (34 ± 4) μg/m3 for the heating period (winter), (21 ± 2) μg/m3 for the non-heating period (summer), and (25 ± 2) μg/m3 for the sandstorm period in Tianjin. The corresponding values in Jinan were (30 ± 3) μg/m3, (40 ± 4) μg/m3, (24 ± 5) μg/m3, and (26 ± 2) μg/m3, respectively. The study revealed that these baseline separation techniques are valid for estimating levels of PM2.5 air pollution, and that our proposed method has great potential for estimating the background level of other air pollutants.
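A minimal sketch of the Lyne-Hollick one-parameter recursive filter transplanted from hydrograph separation to a PM2.5 series; the filter parameter is illustrative, since the study fits it by regression at a background site:

    import numpy as np

    def lyne_hollick_baseline(y, alpha=0.925):
        # single forward pass: split y into a quick (local-emission) component
        # q and a slowly varying background, keeping q nonnegative
        y = np.asarray(y, dtype=float)
        q = np.zeros_like(y)
        for t in range(1, len(y)):
            q[t] = alpha * q[t - 1] + 0.5 * (1.0 + alpha) * (y[t] - y[t - 1])
            q[t] = max(q[t], 0.0)
        return y - q    # estimated background concentration

Larger alpha attributes more of the signal to local emissions, which is why calibrating the parameter during the APEC emission-reduction window matters.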
NASA Technical Reports Server (NTRS)
Austin, W. W.
1983-01-01
The effects on LANDSAT data of a Sun angle correction, an intersatellite LANDSAT-2 and LANDSAT-3 data range adjustment, and the atmospheric correction algorithm were evaluated. Fourteen 1978 crop year LACIE sites were used as the site data set. The preprocessing techniques were applied to multispectral scanner channel data, and the transformed data were plotted and used to analyze the effectiveness of the preprocessing techniques. Ratio transformations effectively reduce the need for preprocessing techniques to be applied directly to the data. Subtractive transformations are more sensitive to Sun angle and atmospheric corrections than ratios. Preprocessing techniques, other than those applied at the Goddard Space Flight Center, should be applied only as an option of the user. Although the study was performed on LANDSAT data, the results are also applicable to meteorological satellite data.
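A one-line illustration of why band ratios reduce the need for Sun-angle correction: a multiplicative illumination factor common to both bands cancels in the ratio (the arrays and factor below are synthetic):

    import numpy as np

    rng = np.random.default_rng(0)
    b1 = rng.uniform(10, 100, size=500)   # MSS band 1 radiances (synthetic)
    b2 = rng.uniform(10, 100, size=500)   # MSS band 2 radiances (synthetic)
    g = 0.7                               # shared cos(solar zenith) factor

    assert np.allclose(b1 / b2, (g * b1) / (g * b2))  # illumination cancels

A subtractive transform such as b1 - b2 still scales with g, which is why the study finds differences more sensitive to Sun-angle and atmospheric corrections than ratios.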
Total internal reflection and dynamic light scattering microscopy of gels
NASA Astrophysics Data System (ADS)
Gregor, Brian F.
Two different techniques which apply optical microscopy in novel ways to the study of biological systems and materials were built and applied to several samples. The first is a system for adapting the well-known technique of dynamic light scattering (DLS) to an optical microscope. This can detect and scatter light from very small volumes, in contrast to standard DLS, which studies light scattering from volumes 1000x larger. The small scattering volume also allows for the observation of nonergodic dynamics in appropriate samples. Porcine gastric mucin (PGM) forms a gel at low pH which lines the epithelial cell layer and acts as a protective barrier against the acidic stomach environment. The dynamics and microscopic viscosity of PGM at different pH levels are studied using polystyrene microspheres as tracer particles. The microscopic viscosity and microrheological properties of the commercial basement membrane Matrigel are also studied with this instrument. Matrigel is frequently used to culture cells, yet its properties remain poorly determined; well-characterized and purely synthetic Matrigel substitutes will need to have the correct rheological and morphological characteristics. The second instrument designed and built is a microscope which uses an interferometry technique to achieve resolution 2.5x better in one dimension than the Abbe diffraction limit. The technique is based upon the interference of the evanescent field generated on the surface of a prism by a laser in a total internal reflection geometry. The enhanced resolution is demonstrated with fluorescent samples. Additionally, Raman imaging microscopy is demonstrated using the evanescent field in resonant and non-resonant samples, although attempts at applying the enhanced resolution technique to the Raman images were ultimately unsuccessful. Applications of this instrument include high resolution imaging of cell membranes and macroscopic structures in gels and proteins. Finally, a third section incorporating previous research on simulations of complex fluids is included. Two-dimensional simulations of oil, water, and surfactant mixtures were computed with a lattice gas method. The simulated systems were randomly mixed and the temperature was then quenched to a predetermined point. Spontaneous micellization is observed for a narrow range of temperature quenches, and the overall growth rate of macroscopic structure is found to follow a Vogel-Fulcher growth law.
Evaluation Of Risk And Possible Mitigation Schemes For Previously Unidentified Hazards
NASA Technical Reports Server (NTRS)
Linzey, William; McCutchan, Micah; Traskos, Michael; Gilbrech, Richard; Cherney, Robert; Slenski, George; Thomas, Walter, III
2006-01-01
This report presents the results of arc track testing conducted to determine whether such a transfer of power to un-energized wires is possible and/or likely during an arcing event, and to evaluate an array of protection schemes that may significantly reduce the possibility of such a transfer. The results of these experiments may be useful for determining the level of protection necessary to guard against spurious voltage and current being applied to safety-critical circuits. It was not the purpose of these experiments to determine the probability of initiating an arc track event; only if an initiation did occur could it cause the undesired event: an inadvertent thruster firing. The primary wire insulation used in the Orbiter is aromatic polyimide, or Kapton, a construction known to arc track under certain conditions [3]. Previous Boeing testing has shown that arc tracks can initiate in aromatic polyimide-insulated 28 volts direct current (VDC) power circuits using more realistic techniques such as chafing with an aluminum blade (simulating the corner of an avionics box or the lip of a wire tray), or vibration of an aluminum plate against a wire bundle [4]. Therefore, an arc initiation technique was chosen that provided a reliable and consistent means of starting the arc, rather than a realistic simulation of a scenario on the vehicle. Once an arc is initiated, the current, power and propagation characteristics of the arc depend on the power source, wire gauge and insulation type, circuit protection and series resistance rather than the type of initiation. The initiation method employed for these tests was applying an oil and graphite mixture to the ends of a powered twisted-pair wire. The flight configuration of the heater circuits, the fuel/oxidizer (fuel/ox) wire, and the RCS jet solenoid were modeled in the test configuration so that the behavior of these components during an arcing event could be studied. To determine if coil activation would occur with various protection wire schemes, 145 tests were conducted using various fuel/ox wire alternatives (shielded and unshielded) and/or different combinations of polytetrafluoroethylene (PTFE), Mystik tape and convoluted wraps to prevent unwanted coil activation. Test results were evaluated along with other pertinent data and information to develop a mitigation strategy for an inadvertent RCS firing. The SSP evaluated civilian aircraft wiring failures to search for aging trends in assessing the wire-short hazard. Appendix 2 applies Weibull statistical methods to the same data with a similar purpose.
NASA Astrophysics Data System (ADS)
Baker, Paul T.; Caudill, Sarah; Hodge, Kari A.; Talukder, Dipongkar; Capano, Collin; Cornish, Neil J.
2015-03-01
Searches for gravitational waves produced by coalescing black hole binaries with total masses ≳25 M⊙ use matched filtering with templates of short duration. Non-Gaussian noise bursts in gravitational wave detector data can mimic short signals and limit the sensitivity of these searches. Previous searches have relied on empirically designed statistics incorporating signal-to-noise ratio and signal-based vetoes to separate gravitational wave candidates from noise candidates. We report on sensitivity improvements achieved using a multivariate candidate ranking statistic derived from a supervised machine learning algorithm. We apply the random forest of bagged decision trees technique to two separate searches in the high mass (≳25 M⊙ ) parameter space. For a search which is sensitive to gravitational waves from the inspiral, merger, and ringdown of binary black holes with total mass between 25 M⊙ and 100 M⊙ , we find sensitive volume improvements as high as 70±13%-109±11% when compared to the previously used ranking statistic. For a ringdown-only search which is sensitive to gravitational waves from the resultant perturbed intermediate mass black hole with mass roughly between 10 M⊙ and 600 M⊙ , we find sensitive volume improvements as high as 61±4%-241±12% when compared to the previously used ranking statistic. We also report how sensitivity improvements can differ depending on mass regime, mass ratio, and available data quality information. Finally, we describe the techniques used to tune and train the random forest classifier that can be generalized to its use in other searches for gravitational waves.
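A minimal sketch of building such a multivariate ranking statistic with scikit-learn's random forest (bagged decision trees); the feature set and labeling convention are assumptions, not the search's actual configuration:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 8))         # per-candidate features: SNR,
                                           # chi^2-based vetoes, data quality, ...
    y = rng.integers(0, 2, size=5000)      # 1 = injected signal, 0 = background

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    rank_stat = clf.predict_proba(X)[:, 1]  # probability-like ranking statistic

Candidates are then ranked by this statistic in place of a hand-tuned combination of signal-to-noise ratio and signal-based vetoes.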
RESISTANCE TO EXTINCTION AND RELAPSE IN COMBINED STIMULUS CONTEXTS
Podlesnik, Christopher A; Bai, John Y. H; Elliffe, Douglas
2012-01-01
Reinforcing an alternative response in the same context as a target response reduces the rate of occurrence but increases the persistence of that target response. Applied researchers who use such techniques to decrease the rate of a target problem behavior risk inadvertently increasing the persistence of the same problem behavior. Behavioral momentum theory asserts that the increased persistence is a function of the alternative reinforcement enhancing the Pavlovian relation between the target stimulus context and reinforcement. A method showing promise for reducing the persistence-enhancing effects of alternative reinforcement is to train the alternative response in a separate stimulus context before combining with the target stimulus in extinction. The present study replicated previous findings using pigeons by showing that combining an “alternative” richer VI schedule (96 reinforcers/hr) with a “target” leaner VI schedule (24 reinforcers/hr) reduced resistance to extinction of target responding compared with concurrent training of the alternative and target responses (totaling 120 reinforcers/hr). We also found less relapse with a reinstatement procedure following extinction with separate-context training, supporting previous findings that training conditions similarly influence both resistance to extinction and relapse. Finally, combining the alternative stimulus context was less disruptive to target responding previously trained in the concurrent schedule, relative to combining with the target response trained alone. Overall, the present findings suggest the technique of combining stimulus contexts associated with alternative responses with those associated with target responses disrupts target responding. Furthermore, the effectiveness of this disruption is a function of training context of reinforcement for target responding, consistent with assertions of behavioral momentum theory. PMID:23008521
A technique of snaring method for fitting a prosthetic valve into the annulus.
Nagasaka, Shigeo; Kawata, Tetsuji; Matsuta, Masahiro; Taniguchi, Shigeki
2005-01-01
A tourniquet technique to fit a prosthetic valve (PV) into the annulus in valve replacement surgery has been previously reported. We modified the previously reported method and designed a simpler tying technique. We performed 11 aortic valve replacements (AVR: including four cases of calcified aortic stenosis (AS) with a small annulus and one case of infective endocarditis with an intramuscular abscess cavity), eight mitral valve replacements (MVR), and one tricuspid valve replacement (TVR: for corrected transposition of the great arteries). A PV was implanted using 2-0 polyester mattress sutures with a pledget. Two tourniquets each held a suture, one at the bottom of the annulus and one at the opposite position, to fit the PV. The sutures between each snare were tied down from the bottom to the top. In MVR, after seating the PV with the two tourniquets, we could make sure that no native tissue of any preserved mitral apparatus disturbed PV leaflet motion. In calcific AS, the PV fitted well into the annulus because tourniquets were applied to any unseated part while the sutures were tied. In AVR for infective endocarditis, mattress sutures supported by a Teflon pledget were placed to close the abscess cavity. After snaring one of these sutures, we tied down the sutures, ensuring that they did not cut through the friable tissues. In TVR, we found that native leaflets interfered with PV motion after seating the prosthesis, and those leaflets were resected before the sutures were tied down. Postoperative transesophageal echocardiography showed no paravalvular leakage in any patient and excellent PV function.
Multiscale Analysis of Solar Image Data
NASA Astrophysics Data System (ADS)
Young, C. A.; Myers, D. C.
2001-12-01
It is often said that the blessing and curse of solar physics is that there is too much data. Solar missions such as Yohkoh, SOHO and TRACE have shown us the Sun with amazing clarity but have also cursed us with a greater amount of higher-complexity data than previous missions. We have improved our view of the Sun, yet we have not improved our analysis techniques. The standard techniques used for analysis of solar images generally consist of observing the evolution of features in a sequence of byte-scaled images or a sequence of byte-scaled difference images. The determination of features and structures in the images is done qualitatively by the observer; little quantitative and objective analysis is done with these images. Many advances in image processing techniques have occurred in the past decade, and many of these methods may be suited to solar image analysis. Multiscale/multiresolution methods are perhaps the most promising. These methods have been used to formulate the human ability to view and comprehend phenomena on different scales, so they could be used to quantify the image processing done by the observer's eyes and brain. In this work we present a preliminary analysis of multiscale techniques applied to solar image data. Specifically, we explore the use of the 2-D wavelet transform and related transforms with EIT, LASCO and TRACE images. This work was supported by NASA contract NAS5-00220.
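A minimal sketch of a 2-D multiscale decomposition with PyWavelets, using a synthetic stand-in for an EIT or TRACE frame; the wavelet choice and decomposition depth are illustrative:

    import numpy as np
    import pywt

    img = np.random.rand(512, 512)                       # stand-in solar image
    coeffs = pywt.wavedec2(img, wavelet="db2", level=4)  # multiresolution pyramid
    approx = coeffs[0]                                   # large-scale structure
    # per-scale detail energy: one objective, quantitative measure of structure
    energy = [sum(float(np.sum(c ** 2)) for c in level) for level in coeffs[1:]]

Tracking such per-scale statistics across an image sequence replaces the qualitative byte-scaled-movie inspection described above.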
Increased-resolution OCT thickness mapping of the human macula: a statistically based registration.
Bernardes, Rui; Santos, Torcato; Cunha-Vaz, José
2008-05-01
To describe the development of a technique that enhances the spatial resolution of retinal thickness maps from the Stratus OCT (Carl Zeiss Meditec, Inc., Dublin, CA). A retinal thickness atlas (RT-atlas) template was calculated, and a macular coordinate system was established, to pursue this objective. The RT-atlas was developed from principal component analysis of retinal thickness analyzer (RTA) maps acquired from healthy volunteers. The Stratus OCT radial thickness measurements were registered on the RT-atlas, from which an improved macular thickness map was calculated. Thereafter, Stratus OCT circular scans were registered on the previously calculated map to enhance spatial resolution. The developed technique was applied to Stratus OCT thickness data from healthy volunteers and from patients with diabetic retinopathy (DR) or age-related macular degeneration (AMD). Results showed that for normal, or close to normal, macular thickness maps from healthy volunteers and patients with DR, this technique can be an important aid in determining retinal thickness. Efforts are under way to improve the registration of retinal thickness data in patients with AMD. The developed technique enhances the evaluation of data acquired by the Stratus OCT, aiding the detection of early retinal thickness abnormalities. Moreover, a normative database of retinal thickness measurements gained from this technique, referenced to the macular coordinate system, can be created without errors induced by missed fixation and eye tilt.
Wathen, Brent; Kuiper, Michael; Walker, Virginia; Jia, Zongchao
2003-01-22
A novel computational technique for modeling crystal formation has been developed that combines three-dimensional (3-D) molecular representation and detailed energetics calculations of molecular mechanics techniques with the less-sophisticated probabilistic approach used by statistical techniques to study systems containing millions of molecules undergoing billions of interactions. Because our model incorporates both the structure of and the interaction energies between participating molecules, it enables the 3-D shape and surface properties of these molecules to directly affect crystal formation. This increase in model complexity has been achieved while simultaneously increasing the number of molecules in simulations by several orders of magnitude over previous statistical models. We have applied this technique to study the inhibitory effects of antifreeze proteins (AFPs) on ice-crystal formation. Modeling involving both fish and insect AFPs has produced results consistent with experimental observations, including the replication of ice-etching patterns, ice-growth inhibition, and specific AFP-induced ice morphologies. Our work suggests that the degree of AFP activity results more from AFP ice-binding orientation than from AFP ice-binding strength. This technique could readily be adapted to study other crystal and crystal inhibitor systems, or to study other noncrystal systems that exhibit regularity in the structuring of their component molecules, such as those associated with the new nanotechnologies.
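To make the hybrid idea concrete — probabilistic site-by-site growth whose acceptance rule is biased by local interaction energy — here is a deliberately simple 2-D lattice sketch. It omits the 3-D molecular representation and detailed energetics of the actual model; the bond energy, temperature, and Boltzmann-style acceptance rule are illustrative assumptions.

```python
# Minimal sketch of energy-biased probabilistic crystal growth on a
# lattice: attachment at a surface site is more likely when more
# neighboring sites are already occupied (lower local energy).
import numpy as np

rng = np.random.default_rng(0)
N, STEPS, KT, BOND_E = 64, 20000, 1.0, 1.5   # lattice size, steps, units
lattice = np.zeros((N, N), dtype=bool)
lattice[N // 2, N // 2] = True               # seed crystal

for _ in range(STEPS):
    i, j = rng.integers(1, N - 1, size=2)    # pick a random interior site
    if lattice[i, j]:
        continue                             # already part of the crystal
    neighbors = int(lattice[i - 1, j]) + int(lattice[i + 1, j]) \
              + int(lattice[i, j - 1]) + int(lattice[i, j + 1])
    if neighbors == 0:
        continue                             # only surface sites can grow
    # More occupied neighbors -> lower energy -> higher attach probability.
    if rng.random() < np.exp(-BOND_E * (4 - neighbors) / KT):
        lattice[i, j] = True

print("crystal size:", lattice.sum())
```

An inhibitor such as an AFP could be represented in this framework as a molecule that occupies surface sites with its own orientation-dependent binding probability, blocking further attachment there.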
Figure analysis: A teaching technique to promote visual literacy and active Learning.
Wiles, Amy M
2016-07-08
Learning often improves when active learning techniques are used in place of traditional lectures. For many of these techniques, however, students are expected to apply concepts that they have already grasped. A challenge, therefore, is how to incorporate active learning into the classroom of content-heavy courses, such as molecular-based biology courses. An additional challenge is that visual literacy is often overlooked in undergraduate science education. To address both of these challenges, a technique called figure analysis was developed and implemented in three different levels of undergraduate biology courses. Here, students learn content while gaining practice in interpreting visual information by discussing figures with their peers. Student groups also make connections between new and previously learned concepts on their own while in class. The instructor summarizes the material for the class only after students grapple with it in small groups. Students reported a preference for learning by figure analysis over traditional lecture, and female students in particular reported increased confidence in their analytical abilities. There is no technology requirement for this technique; therefore, it may be utilized both in classrooms and in nontraditional spaces. Additionally, the amount of preparation required is comparable to that of a traditional lecture. © 2016 The International Union of Biochemistry and Molecular Biology, 44(4):336-344, 2016.
Atcherson, Samuel R; Damji, Zohra; Upson, Steve
2011-11-01
We explored the feasibility of a subtraction technique described by Friesen and Picton for removing the cochlear implant (CI) artifact from responses to long-duration stimuli presented in the soundfield and via direct input, all through the participant's preferred MAP. Friesen and Picton previously explored this technique by recording cortical potentials in four CI users with 1000 pulse-per-second (pps) stimuli, bypassing the speech processor. Cortical auditory evoked potentials (N1-P2) to 1000 Hz tones were recorded from a post-lingually deafened adult with three different stimulus presentation setups: soundfield to processor T-mic (SF), soundfield to lapel mic (SF-LM), and direct input (DI). Stimuli were presented at 65 dB SPL(A). The SF setup required stabilizing the head to minimize changes in the magnitude of the CI artifact. The SF-LM and DI setups did not require head stabilization and were evaluated as alternatives to the SF setup. Clear N1-P2 responses were obtained with comparable waveform morphologies, amplitudes, and latencies despite some differences in the magnitude of the CI artifact across the stimulus presentation setups. The results of this study demonstrate that the subtraction technique is feasible for recording N1-P2 responses in CI users, though further studies of the three stimulus presentation setups are needed.
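One way to picture the subtraction step is as follows: with a long-duration stimulus, the electrical CI artifact persists across the epoch, whereas the neural N1-P2 response is confined to the onset window. A late window, where the neural onset response has decayed, can then serve as an artifact estimate to subtract from the onset window. The sketch below illustrates only this general logic; the window bounds, the steady-artifact assumption, and the function name are assumptions for illustration, not the published Friesen and Picton procedure.

```python
# Minimal sketch of artifact subtraction on an averaged evoked response,
# assuming the CI artifact is steady across the epoch while the neural
# N1-P2 response is confined to the onset window. Illustrative only.
import numpy as np

def subtract_artifact(avg, fs, onset_win=(0.0, 0.5), late_win=(1.0, 1.5)):
    """avg: averaged evoked response (1-D, volts); fs: sampling rate (Hz)."""
    o0, o1 = (int(t * fs) for t in onset_win)   # onset window samples
    l0, l1 = (int(t * fs) for t in late_win)    # artifact-only window
    onset, artifact = avg[o0:o1], avg[l0:l1]
    n = min(len(onset), len(artifact))
    return onset[:n] - artifact[:n]             # cleaned onset response
```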