Sample records for level set technique

  1. Demons versus Level-Set motion registration for coronary 18F-sodium fluoride PET.

    PubMed

    Rubeaux, Mathieu; Joshi, Nikhil; Dweck, Marc R; Fletcher, Alison; Motwani, Manish; Thomson, Louise E; Germano, Guido; Dey, Damini; Berman, Daniel S; Newby, David E; Slomka, Piotr J

    2016-02-27

    Ruptured coronary atherosclerotic plaques commonly cause acute myocardial infarction. It has been recently shown that active microcalcification in the coronary arteries, one of the features that characterizes vulnerable plaques at risk of rupture, can be imaged using cardiac gated 18F-sodium fluoride (18F-NaF) PET. We have shown in previous work that a motion correction technique applied to cardiac-gated 18F-NaF PET images can enhance image quality and improve uptake estimates. In this study, we further investigated the applicability of different algorithms for registration of the coronary artery PET images. In particular, we aimed to compare demons vs. level-set nonlinear registration techniques applied for the correction of cardiac motion in coronary 18F-NaF PET. To this end, fifteen patients underwent 18F-NaF PET and prospective coronary CT angiography (CCTA). PET data were reconstructed in 10 ECG gated bins; subsequently these gated bins were registered using demons and level-set methods guided by the extracted coronary arteries from CCTA, to eliminate the effect of cardiac motion on PET images. Noise levels, target-to-background ratios (TBR) and global motion were compared to assess image quality. Compared to the reference standard of using only the diastolic PET image (25% of the counts from the PET acquisition), cardiac motion registration using either level-set or demons techniques almost halved image noise due to the use of counts from the full PET acquisition and increased the TBR difference between 18F-NaF positive and negative lesions. The demons method produces smoother deformation fields, exhibiting no singularities (which reflects how physically plausible the registration deformation is), as compared to the level-set method, which presents between 4 and 8% of singularities, depending on the coronary artery considered. In conclusion, the demons method produces smoother motion fields as compared to the level-set method, with a motion that is physiologically plausible. Therefore, the level-set technique will likely require additional post-processing steps. On the other hand, the observed TBR increases were the highest for the level-set technique. Further investigations of the optimal registration technique for this novel coronary PET imaging technique are warranted.
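
    As a hedged illustration of the kind of nonlinear registration described above, the sketch below warps ECG-gated PET bins onto a reference bin with a demons algorithm and sums them. It assumes SimpleITK and uses illustrative iteration counts and reference-bin choice; it is not the authors' implementation, and the CCTA-guided coronary masking is omitted.

        # Minimal demons-based motion correction of gated PET bins (assumes SimpleITK).
        import SimpleITK as sitk

        def motion_correct(bins, ref=3):
            """Warp every gated bin onto the reference bin and sum the result."""
            fixed = sitk.Cast(bins[ref], sitk.sitkFloat32)
            demons = sitk.DemonsRegistrationFilter()
            demons.SetNumberOfIterations(50)          # illustrative setting
            demons.SetStandardDeviations(1.5)         # smoothing of the update field
            summed = sitk.Image(fixed)
            for i, b in enumerate(bins):
                if i == ref:
                    continue
                moving = sitk.Cast(b, sitk.sitkFloat32)
                field = demons.Execute(fixed, moving)                 # displacement field
                tx = sitk.DisplacementFieldTransform(sitk.Cast(field, sitk.sitkVectorFloat64))
                summed = sitk.Add(summed, sitk.Resample(moving, fixed, tx, sitk.sitkLinear, 0.0))
            return summed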

  2. Demons versus level-set motion registration for coronary 18F-sodium fluoride PET

    NASA Astrophysics Data System (ADS)

    Rubeaux, Mathieu; Joshi, Nikhil; Dweck, Marc R.; Fletcher, Alison; Motwani, Manish; Thomson, Louise E.; Germano, Guido; Dey, Damini; Berman, Daniel S.; Newby, David E.; Slomka, Piotr J.

    2016-03-01

    Ruptured coronary atherosclerotic plaques commonly cause acute myocardial infarction. It has been recently shown that active microcalcification in the coronary arteries, one of the features that characterizes vulnerable plaques at risk of rupture, can be imaged using cardiac gated 18F-sodium fluoride (18F-NaF) PET. We have shown in previous work that a motion correction technique applied to cardiac-gated 18F-NaF PET images can enhance image quality and improve uptake estimates. In this study, we further investigated the applicability of different algorithms for registration of the coronary artery PET images. In particular, we aimed to compare demons vs. level-set nonlinear registration techniques applied for the correction of cardiac motion in coronary 18F-NaF PET. To this end, fifteen patients underwent 18F-NaF PET and prospective coronary CT angiography (CCTA). PET data were reconstructed in 10 ECG gated bins; subsequently these gated bins were registered using demons and level-set methods guided by the extracted coronary arteries from CCTA, to eliminate the effect of cardiac motion on PET images. Noise levels, target-to-background ratios (TBR) and global motion were compared to assess image quality. Compared to the reference standard of using only the diastolic PET image (25% of the counts from the PET acquisition), cardiac motion registration using either level-set or demons techniques almost halved image noise due to the use of counts from the full PET acquisition and increased the TBR difference between 18F-NaF positive and negative lesions. The demons method produces smoother deformation fields, exhibiting no singularities (which reflects how physically plausible the registration deformation is), as compared to the level-set method, which presents between 4 and 8% of singularities, depending on the coronary artery considered. In conclusion, the demons method produces smoother motion fields as compared to the level-set method, with a motion that is physiologically plausible. Therefore, the level-set technique will likely require additional post-processing steps. On the other hand, the observed TBR increases were the highest for the level-set technique. Further investigations of the optimal registration technique for this novel coronary PET imaging technique are warranted.

  3. Level-set techniques for facies identification in reservoir modeling

    NASA Astrophysics Data System (ADS)

    Iglesias, Marco A.; McLaughlin, Dennis

    2011-03-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by the reservoir model, a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.
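
    The central mechanism above is evolving a level-set function with a velocity built from the shape derivative so that the misfit decreases. The sketch below, a generic NumPy implementation of one explicit update of phi_t + V |grad phi| = 0 with Godunov upwinding, stands in for that step; the velocity field V is a placeholder for whatever the gradient-based or Levenberg-Marquardt scheme supplies and is not derived from a reservoir model here.

        import numpy as np

        def level_set_step(phi, V, dt, h=1.0):
            """One explicit update of phi_t + V |grad phi| = 0 (Godunov upwinding)."""
            dxm = (phi - np.roll(phi, 1, axis=0)) / h      # backward differences
            dxp = (np.roll(phi, -1, axis=0) - phi) / h     # forward differences
            dym = (phi - np.roll(phi, 1, axis=1)) / h
            dyp = (np.roll(phi, -1, axis=1) - phi) / h
            grad_plus = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2 +
                                np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
            grad_minus = np.sqrt(np.minimum(dxm, 0)**2 + np.maximum(dxp, 0)**2 +
                                 np.minimum(dym, 0)**2 + np.maximum(dyp, 0)**2)
            return phi - dt * (np.maximum(V, 0) * grad_plus + np.minimum(V, 0) * grad_minus)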

  4. Optic disc segmentation: level set methods and blood vessels inpainting

    NASA Astrophysics Data System (ADS)

    Almazroa, A.; Sun, Weiwei; Alodhayb, Sami; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan

    2017-03-01

    Segmenting the optic disc (OD) is an important and essential step in creating a frame of reference for diagnosing optic nerve head (ONH) pathology such as glaucoma. Therefore, a reliable OD segmentation technique is necessary for automatic screening of ONH abnormalities. The main contribution of this paper is in presenting a novel OD segmentation algorithm based on applying a level set method on a localized OD image. To prevent the blood vessels from interfering with the level set process, an inpainting technique is applied. The algorithm is evaluated using a new retinal fundus image dataset called RIGA (Retinal Images for Glaucoma Analysis). In the case of low quality images, a double level set is applied in which the first level set is considered to be a localization for the OD. Five hundred and fifty images are used to test the algorithm accuracy as well as its agreement with manual markings by six ophthalmologists. The accuracy of the algorithm in marking the optic disc area and centroid is 83.9%, and the best agreement is observed between the results of the algorithm and manual markings in 379 images.
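
    The inpainting step mentioned above, removing blood vessels so they do not pull the level-set contour, can be sketched with OpenCV as below. This is an illustration, not the authors' code; the vessel mask is assumed to come from a separate vessel-detection step, and the inpainting radius is arbitrary.

        import cv2

        def remove_vessels(fundus_bgr, vessel_mask, radius=5):
            """Paint out vessels (mask: uint8, 255 on vessels) before running the level set."""
            return cv2.inpaint(fundus_bgr, vessel_mask, radius, cv2.INPAINT_TELEA)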

  5. EqualWrites: Reducing Intra-set Write Variations for Enhancing Lifetime of Non-volatile Caches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh; Vetter, Jeffrey S.

    Driven by the trends of increasing core count and the bandwidth-wall problem, the size of last-level caches (LLCs) has greatly increased, and hence researchers have explored non-volatile memories (NVMs), which provide high density and consume low leakage power. Since NVMs have low write endurance and the existing cache management policies are write-variation-unaware, effective wear-leveling techniques are required for achieving reasonable cache lifetimes using NVMs. We present EqualWrites, a technique for mitigating intra-set write variation. Our technique works by recording the number of writes on a block and changing the cache-block location of a hot data item to redirect future writes to a cold block to achieve wear-leveling. Simulation experiments have been performed using an x86-64 simulator and benchmarks from SPEC06 and the HPC (high-performance computing) field. The results show that for single, dual and quad-core system configurations, EqualWrites improves cache lifetime by 6.31X, 8.74X and 10.54X, respectively. In addition, its implementation overhead is very small and it provides a larger improvement in lifetime than three other intra-set wear-leveling techniques and a cache replacement policy.
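
    A behavioral sketch of the idea, not the paper's hardware design, is given below: each cache set keeps per-way write counters, and once a block crosses a (purely illustrative) threshold its data are swapped with the least-written block so that future writes land on a colder physical block.

        THRESHOLD = 16                        # illustrative swap trigger

        class CacheSet:
            def __init__(self, ways):
                self.data = [None] * ways
                self.writes = [0] * ways      # per-way write counters

            def write(self, way, value):
                self.data[way] = value
                self.writes[way] += 1
                if self.writes[way] >= THRESHOLD:
                    self._swap_with_coldest(way)

            def _swap_with_coldest(self, hot):
                cold = min(range(len(self.writes)), key=self.writes.__getitem__)
                if cold != hot:
                    # relocate the hot data item so its future writes hit a cold block
                    self.data[hot], self.data[cold] = self.data[cold], self.data[hot]
                self.writes[hot] = 0          # start a new counting interval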

  6. EqualWrites: Reducing Intra-set Write Variations for Enhancing Lifetime of Non-volatile Caches

    DOE PAGES

    Mittal, Sparsh; Vetter, Jeffrey S.

    2015-01-29

    Driven by the trends of increasing core count and the bandwidth-wall problem, the size of last-level caches (LLCs) has greatly increased, and hence researchers have explored non-volatile memories (NVMs), which provide high density and consume low leakage power. Since NVMs have low write endurance and the existing cache management policies are write-variation-unaware, effective wear-leveling techniques are required for achieving reasonable cache lifetimes using NVMs. We present EqualWrites, a technique for mitigating intra-set write variation. Our technique works by recording the number of writes on a block and changing the cache-block location of a hot data item to redirect future writes to a cold block to achieve wear-leveling. Simulation experiments have been performed using an x86-64 simulator and benchmarks from SPEC06 and the HPC (high-performance computing) field. The results show that for single, dual and quad-core system configurations, EqualWrites improves cache lifetime by 6.31X, 8.74X and 10.54X, respectively. In addition, its implementation overhead is very small and it provides a larger improvement in lifetime than three other intra-set wear-leveling techniques and a cache replacement policy.

  7. New technique for the direct measurement of core noise from aircraft engines

    NASA Technical Reports Server (NTRS)

    Krejsa, E. A.

    1981-01-01

    A new technique is presented for directly measuring the core noise levels from gas turbine aircraft engines. The technique requires that fluctuating pressures be measured in the far-field and at two locations within the engine core. The cross-spectra of these measurements are used to determine the levels of the far-field noise that propagated from the engine core. The technique makes it possible to measure core noise levels even when other noise sources dominate. The technique was applied to signals measured from an AVCO Lycoming YF102 turbofan engine. Core noise levels as a function of frequency and radiation angle were measured and are presented over a range of power settings.
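
    A simplified two-signal version of the idea, the coherent output power between one in-core pressure sensor and a far-field microphone, can be sketched with SciPy as below. The report's actual method uses two internal sensors and a specific cross-spectral formulation; this is only an assumed, reduced illustration.

        import numpy as np
        from scipy.signal import csd, welch

        def coherent_core_noise(p_core, p_far, fs, nperseg=4096):
            """Part of the far-field autospectrum that is coherent with the core signal."""
            f, Sxy = csd(p_core, p_far, fs=fs, nperseg=nperseg)
            _, Sxx = welch(p_core, fs=fs, nperseg=nperseg)
            _, Syy = welch(p_far, fs=fs, nperseg=nperseg)
            gamma2 = np.abs(Sxy)**2 / (Sxx * Syy)      # ordinary coherence
            return f, gamma2 * Syy                     # coherent output power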

  8. Setting analyst: A practical harvest planning technique

    Treesearch

    Olivier R.M. Halleux; W. Dale Greene

    2001-01-01

    Setting Analyst is an ArcView extension that facilitates practical harvest planning for ground-based systems. By modeling the travel patterns of ground-based machines, it compares different harvesting settings based on projected average skidding distance, logging costs, and site disturbance levels. Setting Analyst uses information commonly available to consulting...

  9. Hybrid approach for detection of dental caries based on the methods FCM and level sets

    NASA Astrophysics Data System (ADS)

    Chaabene, Marwa; Ben Ali, Ramzi; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    This paper presents a new technique for detection of dental caries, a bacterial disease that destroys the tooth structure. In our approach, we have developed a new segmentation method that combines the advantages of the fuzzy C-means (FCM) algorithm and the level set method. The results obtained by the FCM algorithm are used by the level set algorithm to reduce the influence of noise on each of these algorithms, to facilitate level set manipulation, and to lead to more robust segmentation. The sensitivity and specificity confirm the effectiveness of the proposed method for caries detection.
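
    The combination can be sketched as follows: a plain NumPy fuzzy C-means pass produces memberships whose hard labels seed the initial level-set function. This is an assumed illustration of the coupling, not the authors' implementation, and the cluster count and iteration budget are arbitrary.

        import numpy as np

        def fuzzy_cmeans(x, c=2, m=2.0, iters=50, seed=0):
            """Fuzzy C-means on a flattened intensity vector x; returns memberships (c x N)."""
            rng = np.random.default_rng(seed)
            u = rng.random((c, x.size))
            u /= u.sum(axis=0)
            for _ in range(iters):
                um = u ** m
                centers = (um @ x) / um.sum(axis=1)              # weighted cluster means
                d = np.abs(x[None, :] - centers[:, None]) + 1e-9
                u = 1.0 / d ** (2.0 / (m - 1.0))
                u /= u.sum(axis=0)                               # normalise memberships
            return u, centers

        # Hard labels from u (e.g. the cluster matching suspected caries) can then seed
        # the level set: phi0 = +1 inside the selected cluster, -1 elsewhere.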

  10. High-Bandwidth Tactical-Network Data Analysis in a High-Performance-Computing (HPC) Environment: Time Tagging the Data

    DTIC Science & Technology

    2015-09-01

    This report made use of posttest processing techniques to provide packet-level time tagging with an accuracy close to 3 µs relative to Coordinated...

  11. Modeling wildland fire propagation with level set methods

    Treesearch

    V. Mallet; D.E Keyes; F.E. Fendell

    2009-01-01

    Level set methods are versatile and extensible techniques for general front tracking problems, including the practically important problem of predicting the advance of a fire front across expanses of surface vegetation. Given a rule, empirical or otherwise, to specify the rate of advance of an infinitesimal segment of fire front arc normal to itself (i.e., given the...

  12. Addressing Inter-set Write-Variation for Improving Lifetime of Non-Volatile Caches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh; Vetter, Jeffrey S

    We propose a technique which minimizes inter-set write variation in NVM caches for improving their lifetime. Our technique uses a cache coloring scheme to add a software-controlled mapping layer between groups of physical pages (called memory regions) and cache sets. Periodically, the number of writes to different colors of the cache is computed and, based on this result, the mapping of a few colors is changed to channel the write traffic to the least-utilized cache colors. This change helps to achieve wear-leveling.
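
    The mechanism can be pictured with the small, assumed sketch below (not the paper's exact policy): memory regions reach cache colors through a remapping table, writes are counted per color, and at the end of each interval the hottest and coldest colors trade places.

        class ColorMapper:
            def __init__(self, ncolors):
                self.table = list(range(ncolors))    # region group -> cache color
                self.writes = [0] * ncolors          # writes per color this interval

            def color_of(self, region):
                return self.table[region % len(self.table)]

            def record_write(self, region):
                self.writes[self.color_of(region)] += 1

            def rebalance(self):
                # swap the mappings of the most- and least-written colors; a real
                # system must also migrate or flush the affected cache sets
                hot = max(range(len(self.writes)), key=self.writes.__getitem__)
                cold = min(range(len(self.writes)), key=self.writes.__getitem__)
                h, c = self.table.index(hot), self.table.index(cold)
                self.table[h], self.table[c] = self.table[c], self.table[h]
                self.writes = [0] * len(self.writes)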

  13. Ion plating for the future

    NASA Technical Reports Server (NTRS)

    Spalvins, T.

    1981-01-01

    The ion plating techniques are classified relative to the instrumental setup, evaporation media, and mode of transport. A distinction is drawn between the low vacuum (plasma) and high vacuum (ion beam) techniques. Ion plating technology is discussed at the fundamental and industrial level. At the fundamental level, the capabilities and limitations of the plasma (evaporant flux) and film characteristics are evaluated. On the industrial level, the performance and potential uses of ion plated films are discussed.

  14. Ion plating for the future

    NASA Technical Reports Server (NTRS)

    Spalvins, T.

    1981-01-01

    The ion plating techniques are classified relative to the instrumental setup, evaporation media and mode of transport. A distinction is drawn between the low vacuum (plasma) and high vacuum (ion beam) techniques. Ion plating technology is discussed at the fundamental and industrial level. At the fundamental level, the capabilities and limitations of the plasma (evaporant flux) and film characteristics are evaluated. On the industrial level, the performance and potential uses of ion plated films are discussed.

  15. Perceptions of Teachers towards Assessment Techniques at Secondary Level Private School of Karachi

    ERIC Educational Resources Information Center

    Fatemah, Henna

    2015-01-01

    This paper sets out to explore the perceptions of teachers towards assessment techniques at a secondary level private school of Karachi. This was conjectured on the basis of the circumstances of parallel boards in the education system of Pakistan and its effectiveness within the context with respect to the curriculum. This was gauged in line with…

  16. Universal test fixture for monolithic mm-wave integrated circuits calibrated with an augmented TRD algorithm

    NASA Technical Reports Server (NTRS)

    Romanofsky, Robert R.; Shalkhauser, Kurt A.

    1989-01-01

    The design and evaluation of a novel fixturing technique for characterizing millimeter wave solid state devices is presented. The technique utilizes a cosine-tapered ridge guide fixture and a one-tier de-embedding procedure to produce accurate and repeatable device level data. Advanced features of this technique include nondestructive testing, full waveguide bandwidth operation, universality of application, and rapid, yet repeatable, chip-level characterization. In addition, only one set of calibration standards is required regardless of the device geometry.

  17. Numerical Schemes for the Hamilton-Jacobi and Level Set Equations on Triangulated Domains

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Sethian, James A.

    2006-01-01

    Borrowing from techniques developed for conservation law equations, we have developed both monotone and higher order accurate numerical schemes which discretize the Hamilton-Jacobi and level set equations on triangulated domains. The use of unstructured meshes containing triangles (2D) and tetrahedra (3D) easily accommodates mesh adaptation to resolve disparate level set feature scales with a minimal number of solution unknowns. The minisymposium talk will discuss these algorithmic developments and present sample calculations using our adaptive triangulation algorithm applied to various moving interface problems such as etching, deposition, and curvature flow.

  18. Large Terrain Modeling and Visualization for Planets

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan; Cameron, Jonathan; Lim, Christopher

    2011-01-01

    Physics-based simulations are actively used in the design, testing, and operations phases of surface and near-surface planetary space missions. One of the challenges in real-time simulations is the ability to handle large multi-resolution terrain data sets within models as well as for visualization. In this paper, we describe special techniques that we have developed for visualization, paging, and data storage for dealing with these large data sets. The visualization technique uses a real-time GPU-based continuous level-of-detail technique that delivers performance of multiple frames per second even for planetary-scale terrain models.

  19. Assessment of Students with Emotional and Behavioral Disorders

    ERIC Educational Resources Information Center

    Plotts, Cynthia A.

    2012-01-01

    Assessment and identification of children with emotional and behavioral disorders (EBD) is complex and involves multiple techniques, levels, and participants. While federal law sets the general parameters for identification in school settings, these criteria are vague and may lead to inconsistencies in selection and interpretation of assessment…

  20. Identification and description of the momentum effect in studies of learning: An abstract science concept

    NASA Astrophysics Data System (ADS)

    Kwon, Jae-Sool; Mayer, Victor J.

    Several studies of the validity of the intensive time series design have revealed a post-intervention increase in the level of achievement data. This so called momentum effect has not been demonstrated through the application of an appropriate analysis technique. The purpose of this study was to identify and apply a technique that would adequately represent and describe such an effect if indeed it does occur, and to use that technique to study the momentum effect as it is observed in several data sets on the learning of the concept of plate tectonics. Subsequent to trials of several different analyses, a segmented straight line regression analysis was chosen and used on three different data sets. Each set revealed similar patterns of inflection points between lines with similar time intervals between inflections for those data from students with formal cognitive tendencies. These results seem to indicate that this method will indeed be useful in representing and identifying the presence and duration of the momentum effect in time series data on achievement. Since the momentum effect could be described in each of the data sets and since its presence seems a function of similar circumstances, support is given for its presence in the learning of abstract scientific concepts for formal cognitive tendency students. The results indicate that the duration of the momentum effect is related to the level of student understanding tested and the cognitive level of the learners.

  1. Cases on Critical and Qualitative Perspectives in Online Higher Education

    ERIC Educational Resources Information Center

    Orleans, Myron, Ed.

    2014-01-01

    Online education continues to permeate mainstream teaching techniques in higher education settings. Teaching upper-level classes in an online setting is having a major impact on education as a whole and is fundamentally altering global learning. "Cases on Critical and Qualitative Perspectives in Online Higher Education" offers a…

  2. A High-Level Language for Modeling Algorithms and Their Properties

    NASA Astrophysics Data System (ADS)

    Akhtar, Sabina; Merz, Stephan; Quinson, Martin

    Designers of concurrent and distributed algorithms usually express them using pseudo-code. In contrast, most verification techniques are based on more mathematically-oriented formalisms such as state transition systems. This conceptual gap contributes to hindering the use of formal verification techniques. Leslie Lamport introduced PlusCal, a high-level algorithmic language that has the "look and feel" of pseudo-code, but is equipped with a precise semantics and includes a high-level expression language based on set theory. PlusCal models can be compiled to TLA+ and verified using the TLC model checker.

  3. Statistical Techniques to Analyze Pesticide Data Program Food Residue Observations.

    PubMed

    Szarka, Arpad Z; Hayworth, Carol G; Ramanarayanan, Tharacad S; Joseph, Robert S I

    2018-06-26

    The U.S. EPA conducts dietary-risk assessments to ensure that levels of pesticides on food in the U.S. food supply are safe. Often these assessments utilize conservative residue estimates, maximum residue levels (MRLs), and a high-end estimate derived from registrant-generated field-trial data sets. A more realistic estimate of consumers' pesticide exposure from food may be obtained by utilizing residues from food-monitoring programs, such as the Pesticide Data Program (PDP) of the U.S. Department of Agriculture. A substantial portion of food-residue concentrations in PDP monitoring programs are below the limits of detection (left-censored), which makes the comparison of regulatory-field-trial and PDP residue levels difficult. In this paper, we present a novel adaptation of established statistical techniques, the Kaplan-Meier estimator (K-M), robust regression on order statistics (ROS), and the maximum-likelihood estimator (MLE), to quantify the pesticide-residue concentrations in the presence of heavily censored data sets. The examined statistical approaches include the most commonly used parametric and nonparametric methods for handling left-censored data that have been used in the fields of medical and environmental sciences. This work presents a case study in which data of thiamethoxam residue on bell pepper generated from registrant field trials were compared with PDP-monitoring residue values. The results from the statistical techniques were evaluated and compared with commonly used simple substitution methods for the determination of summary statistics. It was found that the maximum-likelihood estimator (MLE) is the most appropriate statistical method to analyze this residue data set. Using the MLE technique, the data analyses showed that the median and mean PDP bell pepper residue levels were approximately 19 and 7 times lower, respectively, than the corresponding statistics of the field-trial residues.
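
    The MLE idea for left-censored residues can be sketched as below: detected values contribute the lognormal log-density, while non-detects contribute the probability of falling below their limit of detection. This is a generic SciPy illustration under an assumed lognormal model, not the study's exact analysis.

        import numpy as np
        from scipy import optimize, stats

        def lognormal_censored_mle(detects, lods_of_nondetects):
            """Fit a lognormal to left-censored data; return the fitted mean and median."""
            ld, lc = np.log(detects), np.log(lods_of_nondetects)

            def nll(theta):
                mu, log_sigma = theta
                sigma = np.exp(log_sigma)
                return -(stats.norm.logpdf(ld, mu, sigma).sum() +   # detected values
                         stats.norm.logcdf(lc, mu, sigma).sum())    # P(X < LOD) for non-detects

            res = optimize.minimize(nll, x0=[ld.mean(), 0.0])
            mu, sigma = res.x[0], np.exp(res.x[1])
            return np.exp(mu + sigma**2 / 2), np.exp(mu)            # lognormal mean, median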

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Honorio, J.; Goldstein, R.; Honorio, J.

    We propose a simple, well-grounded classification technique which is suited for group classification on brain fMRI data sets that have high dimensionality, a small number of subjects, high noise levels, high subject variability, and imperfect registration, and that capture subtle cognitive effects. We propose threshold-split region as a new feature selection method and majority vote as the classification technique. Our method does not require a predefined set of regions of interest. We use averages across sessions, only one feature per experimental condition, a feature independence assumption, and simple classifiers. The seemingly counter-intuitive approach of using a simple design is supported by signal processing and statistical theory. Experimental results in two block-design data sets that capture brain function under distinct monetary rewards for cocaine-addicted and control subjects show that our method exhibits increased generalization accuracy compared to commonly used feature selection and classification techniques.

  5. Determining and broadening the definition of impact from implementing a rational priority setting approach in a healthcare organization.

    PubMed

    Cornelissen, Evelyn; Mitton, Craig; Davidson, Alan; Reid, Colin; Hole, Rachelle; Visockas, Anne-Marie; Smith, Neale

    2014-08-01

    Techniques to manage scarce healthcare resources continue to evolve in response to changing, growing and competing demands. Yet there is no standard definition in the priority setting literature of what might constitute the desired impact or success of resource management activities. In this 2006-09 study, using action research methodology, we determined the impact of implementing a formal priority setting model, Program Budgeting and Marginal Analysis (PBMA), in a Canadian health authority. Qualitative data were collected through post year-1 (n = 12) and year-2 (n = 9) participant interviews, meeting observation and document review. Interviews were analyzed using a constant comparison technique to identify major themes. Impact can be defined as effects at three levels: system, group, and individual. System-level impact can be seen in the actual selection of priorities and resource re-allocation. In this case, participants prioritized a list of $760,000 worth of investment proposals and $38,000 of disinvestment proposals; however, there was no clear evidence as to whether financial resources were reallocated as a result. Group and individual impacts, less frequently reported in the literature, included changes in priority setting knowledge, attitudes and practice. PBMA impacts at these three levels were found to be interrelated. This work argues in favor of attempts to expand the definition of priority setting success by including both desired system-level outcomes like resource re-allocation and individual or group level impacts like changes to priority setting knowledge, attitudes and practice. These latter impacts are worth pursuing as they appear to be intrinsic to successful system-wide priority setting. A broader definition of PBMA impact may also suggest conceptualizing PBMA as both a priority setting approach and as a tool to develop individual and group priority setting knowledge and practice. These results should be of interest to researchers and decision makers using or considering a formal priority setting approach to manage scarce healthcare resources. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Multi person detection and tracking based on hierarchical level-set method

    NASA Astrophysics Data System (ADS)

    Khraief, Chadia; Benzarti, Faouzi; Amiri, Hamid

    2018-04-01

    In this paper, we propose an efficient unsupervised method for multi-person tracking based on a hierarchical level-set approach. The proposed method uses both edge and region information in order to effectively detect objects. The persons are tracked on each frame of the sequence by minimizing an energy functional that combines color, texture and shape information. These features are combined in a covariance matrix that serves as a region descriptor. The present method is fully automated, without the need to manually specify the initial contour of the level set; it is based on combined person detection and background subtraction methods. The edge-based information is employed to maintain a stable evolution, guide the segmentation towards apparent boundaries, and inhibit region fusion. The computational cost of the level set is reduced by using a narrow-band technique. Experiments are performed on challenging video sequences and show the effectiveness of the proposed method.

  7. Architectural-level power estimation and experimentation

    NASA Astrophysics Data System (ADS)

    Ye, Wu

    With the emergence of a plethora of embedded and portable applications and ever-increasing integration levels, power dissipation of integrated circuits has moved to the forefront as a design constraint. Recent years have also seen a significant trend towards designs starting at the architectural (or RT) level. These demand accurate yet fast RT-level power estimation methodologies and tools. This thesis addresses issues and experiments associated with architectural-level power estimation. An execution-driven, cycle-accurate RT-level power simulator, SimplePower, was developed using transition-sensitive energy models. It is based on the architecture of a five-stage pipelined RISC datapath for both 0.35 µm and 0.8 µm technologies and can execute the integer subset of the SimpleScalar instruction set. SimplePower measures the energy consumed in the datapath, memory and on-chip buses. During the development of SimplePower, a partitioning power modeling technique was proposed to model the energy consumed in complex functional units. The accuracy of this technique was validated with HSPICE simulation results for a register file and a shifter. A novel, selectively gated pipeline register optimization technique was proposed to reduce the datapath energy consumption. It uses the decoded control signals to selectively gate the data fields of the pipeline registers. Simulation results show that this technique can reduce the datapath energy consumption by 18-36% for a set of benchmarks. A low-level back-end compiler optimization, register relabeling, was applied to reduce the switching activity on the on-chip instruction cache data buses. Its impact was evaluated by SimplePower. Results show that it can reduce the energy consumed in the instruction data buses by 3.55-16.90%. A quantitative evaluation was conducted for the impact of six state-of-the-art high-level compilation techniques on both datapath and memory energy consumption. The experimental results provide valuable insight for designers developing future power-aware compilation frameworks for embedded systems.

  8. Conspicuity of renal calculi at unenhanced CT: effects of calculus composition and size and CT technique.

    PubMed

    Tublin, Mitchell E; Murphy, Michael E; Delong, David M; Tessler, Franklin N; Kliewer, Mark A

    2002-10-01

    To determine the effects of calculus size, composition, and technique (kilovolt and milliampere settings) on the conspicuity of renal calculi at unenhanced helical computed tomography (CT). The authors performed unenhanced CT of a phantom containing 188 renal calculi of varying size and chemical composition (brushite, cystine, struvite, weddellite, whewellite, and uric acid) at 24 combinations of four kilovolt (80-140 kV) and six milliampere (200-300 mA) levels. Two radiologists, who were unaware of the location and number of calculi, reviewed the CT images and recorded where stones were detected. These observations were compared with the known positions of calculi to generate true-positive and false-positive rates. Logistic regression analysis was performed to investigate the effects of stone size, composition, and technique and to generate probability estimates of detection. Interobserver agreement was estimated with kappa statistics. Interobserver agreement was high: the mean kappa value for the two observers was 0.86. The conspicuity of stone fragments increased with increasing kilovolt and milliampere levels for all stone types. At the highest settings (140 kV and 300 mA), the detection threshold size (i.e., the size of calculus that had a 50% probability of being detected) ranged from 0.81 mm ± 0.03 (weddellite) to 1.3 mm ± 0.1 (uric acid). Detection threshold size for each type of calculus increased up to 1.17-fold at lower kilovolt settings and up to 1.08-fold at lower milliampere settings. The conspicuity of small renal calculi at CT increases with higher kilovolt and milliampere settings, with higher kilovolts being particularly important. Small uric acid calculi may be imperceptible, even with maximal CT technique.
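
    The reported detection threshold size (the size with a 50% probability of detection) can be reproduced in outline from a logistic fit of detection outcome against stone size, as in the hedged scikit-learn sketch below; the study's own regression additionally modelled composition and technique.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def detection_threshold(sizes_mm, detected):
            """Stone size at which the fitted detection probability crosses 0.5."""
            X = np.asarray(sizes_mm, float).reshape(-1, 1)
            y = np.asarray(detected, int)
            fit = LogisticRegression(C=1e6).fit(X, y)     # near-unpenalised fit
            return -fit.intercept_[0] / fit.coef_[0, 0]   # p = 0.5 where the logit is zero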

  9. Military Curricula for Vocational & Technical Education. Still Photojournalism Techniques, 16-3.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. National Center for Research in Vocational Education.

    These instructor materials and student workbook and programmed texts for a secondary-postsecondary-level course in still photojournalism techniques are one of a number of military-developed curriculum packages selected for adaptation to vocational instruction and curriculum development in a civilian setting. Purpose stated for the 234-hour course…

  10. Strategies for Success: Classroom Teaching Techniques for Students with Learning Differences. Second Edition

    ERIC Educational Resources Information Center

    Meltzer, Lynn J.; Roditi, Bethany N.; Steinberg, Joan L.; Rafter Biddle, Kathleen; Taber, Susan E.; Boyle Caron, Kathleen; Kniffin, Leta

    2006-01-01

    Strategies for Success provides realistic and accessible teaching techniques for teachers, special educators, and other professionals working with students at the late elementary, middle, and early high school levels. This book is particularly useful for teachers working in inclusive settings. These strategies can help teachers to understand the…

  11. Mapping Topographic Structure in White Matter Pathways with Level Set Trees

    PubMed Central

    Kent, Brian P.; Rinaldo, Alessandro; Yeh, Fang-Cheng; Verstynen, Timothy

    2014-01-01

    Fiber tractography on diffusion imaging data offers rich potential for describing white matter pathways in the human brain, but characterizing the spatial organization in these large and complex data sets remains a challenge. We show that level set trees–which provide a concise representation of the hierarchical mode structure of probability density functions–offer a statistically-principled framework for visualizing and analyzing topography in fiber streamlines. Using diffusion spectrum imaging data collected on neurologically healthy controls (N = 30), we mapped white matter pathways from the cortex into the striatum using a deterministic tractography algorithm that estimates fiber bundles as dimensionless streamlines. Level set trees were used for interactive exploration of patterns in the endpoint distributions of the mapped fiber pathways and an efficient segmentation of the pathways that had empirical accuracy comparable to standard nonparametric clustering techniques. We show that level set trees can also be generalized to model pseudo-density functions in order to analyze a broader array of data types, including entire fiber streamlines. Finally, resampling methods show the reliability of the level set tree as a descriptive measure of topographic structure, illustrating its potential as a statistical descriptor in brain imaging analysis. These results highlight the broad applicability of level set trees for visualizing and analyzing high-dimensional data like fiber tractography output. PMID:24714673

  12. Techniques for development of safety-related software for surgical robots.

    PubMed

    Varley, P

    1999-12-01

    Regulatory bodies require evidence that software controlling potentially hazardous devices is developed to good manufacturing practices. Effective techniques used in other industries assume long timescales and high staffing levels and can be unsuitable for use without adaptation in developing electronic healthcare devices. This paper discusses a set of techniques used in practice to develop software for a particular innovative medical product, an endoscopic camera manipulator. These techniques include identification of potential hazards and tracing their mitigating factors through the project lifecycle.

  13. Segmentation of cortical bone using fast level sets

    NASA Astrophysics Data System (ADS)

    Chowdhury, Manish; Jörgens, Daniel; Wang, Chunliang; Smedby, Örjan; Moreno, Rodrigo

    2017-02-01

    Cortical bone plays a major role in the mechanical competence of bone. The analysis of cortical bone requires accurate segmentation methods. Level set methods are among the state of the art for segmenting medical images. However, traditional implementations of this method are computationally expensive. This drawback was recently tackled through the so-called coherent propagation extension of the classical algorithm, which has decreased computation times dramatically. In this study, we assess the potential of this technique for segmenting cortical bone in interactive time in 3D images acquired through High Resolution peripheral Quantitative Computed Tomography (HR-pQCT). The obtained segmentations are used to estimate cortical thickness and cortical porosity of the investigated images. Cortical thickness and cortical porosity are computed using sphere fitting and mathematical morphological operations, respectively. Qualitative comparison between the segmentations of our proposed algorithm and a previously published approach on six image volumes reveals superior smoothness properties of the level set approach. While the proposed method yields similar results to previous approaches in regions where the boundary between trabecular and cortical bone is well defined, it yields more stable segmentations in challenging regions. This results in more stable estimation of parameters of cortical bone. The proposed technique takes a few seconds to compute, which makes it suitable for clinical settings.
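
    As a rough, assumed illustration of the morphology-based porosity estimate mentioned above (not the authors' code), intracortical pores can be taken as the voxels that a morphological closing adds back to the binary cortex mask:

        import numpy as np
        from scipy import ndimage

        def cortical_porosity(cortex_mask, closing_radius=5):
            """Pore voxels inside the closed cortex divided by all closed-cortex voxels."""
            cortex_mask = np.asarray(cortex_mask, bool)
            struct = ndimage.iterate_structure(
                ndimage.generate_binary_structure(3, 1), closing_radius)
            closed = ndimage.binary_closing(cortex_mask, structure=struct)
            pores = closed & ~cortex_mask
            return pores.sum() / closed.sum()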

  14. Polluted rainwater runoff from waste recovery and recycling companies: Determination of emission levels associated with the best available techniques.

    PubMed

    Huybrechts, D; Verachtert, E; Vander Aa, S; Polders, C; Van den Abeele, L

    2016-08-01

    Rainwater falling on outdoor storage areas of waste recovery and recycling companies becomes polluted via contact with the stored materials. It contains various pollutants, including heavy metals, polycyclic aromatic hydrocarbons and polychlorinated biphenyls, and is characterized by a highly fluctuating composition and flow rate. This polluted rainwater runoff is legally considered as industrial wastewater, and the polluting substances contained in the rainwater runoff at the point of discharge are considered as emissions into water. The permitting authorities can set emission limit values (discharge limits) at the point of discharge. Best available techniques are an important reference point for setting emission limit values. In this paper, the emission levels associated with the best available techniques for dealing with polluted rainwater runoff from waste recovery and recycling companies were determined. The determination is based on an analysis of emission data measured at different companies in Flanders. The data show that a significant fraction of the pollution in rainwater runoff is associated with particles. A comparison with literature data provides strong indications that not only leaching, but also atmospheric deposition play an important role in the contamination of rainwater at waste recovery and recycling companies. The prevention of pollution and removal of suspended solids from rainwater runoff to levels below 60 mg/l are considered as best available techniques. The associated emission levels were determined by considering only emission data from plants applying wastewater treatment, and excluding all samples with suspended solid levels >60 mg/l. The resulting BAT-AEL can be used as a reference point for setting emission limit values for polluted rainwater runoff from waste recovery and recycling companies. Since the BAT-AEL (e.g. 150 μg/l for Cu) are significantly lower than current emission levels (e.g. 300 μg/l as the 90th percentile and 4910 μg/l as the maximum level for Cu), this will result in a significant reduction in emissions into water. Copyright © 2016 Elsevier Ltd. All rights reserved.
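
    The derivation of an indicative BAT-associated emission level from monitoring data can be sketched as below; the filtering rule follows the abstract (treated effluent, suspended solids at or below 60 mg/l), while the choice of the 90th percentile as the summary statistic is an assumption for illustration.

        import numpy as np

        def bat_ael(concentrations, suspended_solids, tss_limit=60.0, pct=90):
            """High percentile of pollutant concentrations in samples meeting the TSS limit."""
            c = np.asarray(concentrations, float)
            keep = np.asarray(suspended_solids, float) <= tss_limit
            return np.percentile(c[keep], pct)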

  15. Joint level-set and spatio-temporal motion detection for cell segmentation.

    PubMed

    Boukari, Fatima; Makrogiannis, Sokratis

    2016-08-10

    Cell segmentation is a critical step for quantification and monitoring of cell cycle progression, cell migration, and growth control to investigate cellular immune response, embryonic development, tumorigenesis, and drug effects on live cells in time-lapse microscopy images. In this study, we propose a joint spatio-temporal diffusion and region-based level-set optimization approach for moving cell segmentation. Moving regions are initially detected in each set of three consecutive sequence images by numerically solving a system of coupled spatio-temporal partial differential equations. In order to standardize intensities of each frame, we apply a histogram transformation approach to match the pixel intensities of each processed frame with an intensity distribution model learned from all frames of the sequence during the training stage. After the spatio-temporal diffusion stage is completed, we compute the edge map by nonparametric density estimation using Parzen kernels. This process is followed by watershed-based segmentation and moving cell detection. We use this result as an initial level-set function to evolve the cell boundaries, refine the delineation, and optimize the final segmentation result. We applied this method to several datasets of fluorescence microscopy images with varying levels of difficulty with respect to cell density, resolution, contrast, and signal-to-noise ratio. We compared the results with those produced by Chan and Vese segmentation, a temporally linked level-set technique, and nonlinear diffusion-based segmentation. We validated all segmentation techniques against reference masks provided by the international Cell Tracking Challenge consortium. The proposed approach delineated cells with an average Dice similarity coefficient of 89 % over a variety of simulated and real fluorescent image sequences. It yielded average improvements of 11 % in segmentation accuracy compared to both strictly spatial and temporally linked Chan-Vese techniques, and 4 % compared to the nonlinear spatio-temporal diffusion method. Despite the wide variation in cell shape, density, mitotic events, and image quality among the datasets, our proposed method produced promising segmentation results. These results indicate the efficiency and robustness of this method especially for mitotic events and low SNR imaging, enabling the application of subsequent quantification tasks.
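
    Two building blocks of such a pipeline, intensity standardisation against a reference distribution and a region-based level set, can be sketched with scikit-image as below, assuming 2D grayscale frames. The Chan-Vese call here is a stand-in for the paper's own spatio-temporal formulation, and the parameter value is illustrative.

        from skimage import exposure, segmentation

        def normalise_and_segment(frame, reference):
            """Match the frame's histogram to a reference, then run a region-based level set."""
            matched = exposure.match_histograms(frame, reference)   # intensity standardisation
            return segmentation.chan_vese(matched, mu=0.1)          # binary cell mask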

  16. Standardized Mean Differences in Two-Level Cross-Classified Random Effects Models

    ERIC Educational Resources Information Center

    Lai, Mark H. C.; Kwok, Oi-Man

    2014-01-01

    Multilevel modeling techniques are becoming more popular in handling data with multilevel structure in educational and behavioral research. Recently, researchers have paid more attention to cross-classified data structure that naturally arises in educational settings. However, unlike traditional single-level research, methodological studies about…

  17. Modelling Subjectivity in Visual Perception of Orientation for Image Retrieval.

    ERIC Educational Resources Information Center

    Sanchez, D.; Chamorro-Martinez, J.; Vila, M. A.

    2003-01-01

    Discussion of multimedia libraries and the need for storage, indexing, and retrieval techniques focuses on the combination of computer vision and data mining techniques to model high-level concepts for image retrieval based on perceptual features of the human visual system. Uses fuzzy set theory to measure users' assessments and to capture users'…

  18. Skull defect reconstruction based on a new hybrid level set.

    PubMed

    Zhang, Ziqun; Zhang, Ran; Song, Zhijian

    2014-01-01

    Skull defect reconstruction is an important aspect of surgical repair. Historically, a skull defect prosthesis was created by the mirroring technique, surface fitting, or formed templates. These methods are not based on the anatomy of the individual patient's skull, and therefore, the prosthesis cannot precisely correct the defect. This study presented a new hybrid level set model, taking into account both the global optimization region information and the local accuracy edge information, while avoiding re-initialization during the evolution of the level set function. Based on the new method, a skull defect was reconstructed, and the skull prosthesis was produced by rapid prototyping technology. This resulted in a skull defect prosthesis that well matched the skull defect with excellent individual adaptation.

  19. Accurate Adaptive Level Set Method and Sharpening Technique for Three Dimensional Deforming Interfaces

    NASA Technical Reports Server (NTRS)

    Kim, Hyoungin; Liou, Meng-Sing

    2011-01-01

    In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth order WENO scheme or a second order central differencing scheme depending on availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth order WENO scheme. This selective usage between the fifth order WENO and second order central differencing schemes is confirmed to give more accurate results compared to those in the literature for standard test problems. In order to further improve accuracy especially near thin filaments, we suggest an artificial sharpening method, which has a form similar to the conventional re-initialization method but utilizes the sign of curvature instead of the sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems.

  20. Comparing supervised learning techniques on the task of physical activity recognition.

    PubMed

    Dalton, A; OLaighin, G

    2013-01-01

    The objective of this study was to compare the performance of base-level and meta-level classifiers on the task of physical activity recognition. Five wireless kinematic sensors were attached to each subject (n = 25) while they completed a range of basic physical activities in a controlled laboratory setting. Subjects were then asked to carry out similar self-annotated physical activities in a random order and in an unsupervised environment. A combination of time-domain and frequency-domain features were extracted from the sensor data including the first four central moments, zero-crossing rate, average magnitude, sensor cross-correlation, sensor auto-correlation, spectral entropy and dominant frequency components. A reduced feature set was generated using a wrapper subset evaluation technique with a linear forward search and this feature set was employed for classifier comparison. The meta-level classifier AdaBoostM1 with C4.5 Graft as its base-level classifier achieved an overall accuracy of 95%. Equal sized datasets of subject independent data and subject dependent data were used to train this classifier and high recognition rates could be achieved without the need for user specific training. Furthermore, it was found that an accuracy of 88% could be achieved using data from the ankle and wrist sensors only.
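
    A hedged sketch of the per-window feature extraction described above (one sensor axis, an illustrative subset of the listed features, not the study's exact code):

        import numpy as np
        from scipy import stats

        def window_features(sig, fs):
            """Mean, variance, skewness, kurtosis, zero-crossing rate, average magnitude,
            spectral entropy and dominant frequency for one windowed sensor-axis signal."""
            sig = np.asarray(sig, float)
            centred = sig - sig.mean()
            zcr = np.mean(np.abs(np.diff(np.signbit(centred).astype(int))))
            psd = np.abs(np.fft.rfft(centred))**2
            p = psd / psd.sum()
            spec_entropy = -(p * np.log2(p + 1e-12)).sum()
            dom_freq = np.fft.rfftfreq(sig.size, d=1.0 / fs)[np.argmax(psd)]
            return np.array([sig.mean(), sig.var(), stats.skew(sig), stats.kurtosis(sig),
                             zcr, np.mean(np.abs(sig)), spec_entropy, dom_freq])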

  1. Effective Padding of Multi-Dimensional Arrays to Avoid Cache Conflict Misses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Changwan; Bao, Wenlei; Cohen, Albert

    Caches are used to significantly improve performance. Even with high degrees of set-associativity, the number of accessed data elements mapping to the same set in a cache can easily exceed the degree of associativity, causing conflict misses and lowered performance, even if the working set is much smaller than cache capacity. Array padding (increasing the size of array dimensions) is a well-known optimization technique that can reduce conflict misses. In this paper, we develop the first algorithms for optimal padding of arrays for a set-associative cache for arbitrary tile sizes. In addition, we develop the first solution to padding for nested tiles and multi-level caches. The techniques are implemented in the PAdvisor tool. Experimental results with multiple benchmarks demonstrate significant performance improvement from the use of PAdvisor for padding.
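
    The flavour of the optimization can be conveyed with the crude heuristic below, which is emphatically not PAdvisor's optimal algorithm: grow the inner array dimension until the row-to-row stride, measured in cache lines, is coprime with the number of sets, so that consecutive rows spread over different sets. All cache parameters are illustrative.

        from math import gcd

        def pad_inner_dim(ncols, elem_bytes=8, line_bytes=64, cache_bytes=32768, assoc=8):
            """Heuristic padding of the innermost dimension of a 2-D array."""
            nsets = cache_bytes // (line_bytes * assoc)
            cols = ncols
            while gcd(max((cols * elem_bytes) // line_bytes, 1), nsets) != 1:
                cols += 1
            return cols

        # e.g. pad_inner_dim(1024) suggests padding a 1024-column array of doubles so that
        # successive row starts no longer fall into the same handful of cache sets.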

  2. Large Terrain Continuous Level of Detail 3D Visualization Tool

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan

    2012-01-01

    This software solved the problem of displaying terrains that are usually too large to be displayed on standard workstations in real time. The software can visualize terrain data sets composed of billions of vertices, and can display these data sets at greater than 30 frames per second. The Large Terrain Continuous Level of Detail 3D Visualization Tool allows large terrains, which can be composed of billions of vertices, to be visualized in real time. It utilizes a continuous level of detail technique called clipmapping to support this. It offloads much of the work involved in breaking up the terrain into levels of details onto the GPU (graphics processing unit) for faster processing.

  3. Learning in Stochastic Bit Stream Neural Networks.

    PubMed

    van Daalen, Max; Shawe-Taylor, John; Zhao, Jieyu

    1996-08-01

    This paper presents learning techniques for a novel feedforward stochastic neural network. The model uses stochastic weights and the "bit stream" data representation. It has a clean, analysable functionality and is very attractive because of its great potential to be implemented in hardware using standard digital VLSI technology. The design allows simulation at three different levels, and learning techniques are described for each level. The lowest level corresponds to on-chip learning. Simulation results on three benchmark MONK's problems and handwritten digit recognition with a clean set of 500 16 x 16 pixel digits demonstrate that the new model is powerful enough for real-world applications. Copyright 1996 Elsevier Science Ltd
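
    The bit-stream representation underlying the model (though not its learning rules) is easy to illustrate: a value in [0, 1] becomes the probability of a 1 in a Bernoulli stream, and an AND gate on two independent streams multiplies the encoded values.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 10_000                                   # stream length
        a, b = 0.6, 0.3
        stream_a = rng.random(n) < a                 # unipolar stochastic encodings
        stream_b = rng.random(n) < b
        product = stream_a & stream_b                # AND gate = multiplication
        print(product.mean())                        # close to 0.18 = 0.6 * 0.3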

  4. Research on the development of green chemistry technology assessment techniques: a material reutilization case.

    PubMed

    Hong, Seokpyo; Ahn, Kilsoo; Kim, Sungjune; Gong, Sungyong

    2015-01-01

    This study presents a methodology that enables a quantitative assessment of green chemistry technologies. The study carries out a quantitative evaluation of a particular case of material reutilization by calculating the level of "greenness" i.e., the level of compliance with the principles of green chemistry that was achieved by implementing a green chemistry technology. The results indicate that the greenness level was enhanced by 42% compared to the pre-improvement level, thus demonstrating the economic feasibility of green chemistry. The assessment technique established in this study will serve as a useful reference for setting the direction of industry-level and government-level technological R&D and for evaluating newly developed technologies, which can greatly contribute toward gaining a competitive advantage in the global market.

  5. Method: automatic segmentation of mitochondria utilizing patch classification, contour pair classification, and automatically seeded level sets

    PubMed Central

    2012-01-01

    Background: While progress has been made to develop automatic segmentation techniques for mitochondria, there remains a need for more accurate and robust techniques to delineate mitochondria in serial blockface scanning electron microscopic data. Previously developed texture based methods are limited for solving this problem because texture alone is often not sufficient to identify mitochondria. This paper presents a new three-step method, the Cytoseg process, for automated segmentation of mitochondria contained in 3D electron microscopic volumes generated through serial block face scanning electron microscopic imaging. The method consists of three steps. The first is a random forest patch classification step operating directly on 2D image patches. The second step consists of contour-pair classification. At the final step, we introduce a method to automatically seed a level set operation with output from previous steps. Results: We report accuracy of the Cytoseg process on three types of tissue and compare it to a previous method based on Radon-Like Features. At step 1, we show that the patch classifier identifies mitochondria texture but creates many false positive pixels. At step 2, our contour processing step produces contours and then filters them with a second classification step, helping to improve overall accuracy. We show that our final level set operation, which is automatically seeded with output from previous steps, helps to smooth the results. Overall, our results show that use of contour pair classification and level set operations improve segmentation accuracy beyond patch classification alone. We show that the Cytoseg process performs well compared to another modern technique based on Radon-Like Features. Conclusions: We demonstrated that texture based methods for mitochondria segmentation can be enhanced with multiple steps that form an image processing pipeline. While we used a random-forest based patch classifier to recognize texture, it would be possible to replace this with other texture identifiers, and we plan to explore this in future work. PMID:22321695
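
    Step 1 of the pipeline, a random-forest classifier operating directly on 2D image patches, can be sketched with scikit-learn as below. The patch size, sample count and use of raw intensities as features are assumptions for illustration, not the Cytoseg settings.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def train_patch_classifier(volume, labels, patch=15, n_samples=5000, seed=0):
            """Learn mitochondria vs. background from raw patch intensities (step-1 style)."""
            rng = np.random.default_rng(seed)
            half = patch // 2
            z, y, x = volume.shape
            X, t = [], []
            for _ in range(n_samples):
                k = rng.integers(z)
                i = rng.integers(half, y - half)
                j = rng.integers(half, x - half)
                X.append(volume[k, i - half:i + half + 1, j - half:j + half + 1].ravel())
                t.append(labels[k, i, j])
            return RandomForestClassifier(n_estimators=100, random_state=0).fit(np.array(X), np.array(t))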

  6. Application of level set method to optimal vibration control of plate structures

    NASA Astrophysics Data System (ADS)

    Ansari, M.; Khajepour, A.; Esmailzadeh, E.

    2013-02-01

    Vibration control plays a crucial role in many structures, especially in lightweight ones. One of the most commonly practiced methods to suppress the undesirable vibration of structures is to attach patches of constrained layer damping (CLD) onto the surface of the structure. In order to consider the weight efficiency of a structure, the best shapes and locations of the CLD patches should be determined to achieve the optimum vibration suppression with minimum usage of the CLD patches. This paper proposes a novel topology optimization technique that can determine the best shape and location of the applied CLD patches simultaneously. Passive vibration control is formulated in the context of the level set method, which is a numerical technique to track shapes and locations concurrently. The optimal damping set could be found in a structure, in its fundamental vibration mode, such that the maximum modal loss factor of the system is achieved. Two different plate structures will be considered and the damping patches will be optimally located on them. At the same time, the best shapes of the damping patches will be determined too. In one example, the numerical results will be compared with those obtained from the experimental tests to validate the accuracy of the proposed method. This comparison reveals the effectiveness of the level set approach in finding the optimum shape and location of the CLD patches.

  7. Approximation of Failure Probability Using Conditional Sampling

    NASA Technical Reports Server (NTRS)

    Giesy, Daniel P.; Crespo, Luis G.; Kenney, Sean P.

    2008-01-01

    In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
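
    A minimal sketch of the general idea, assuming a bounding box whose probability is analytic and a made-up limit-state function; it is not the authors' formulation, only an illustration of conditioning on a failure-bounding set.

```python
# Illustrative sketch of failure-probability estimation with conditional sampling.
# Assume (to good approximation) that the failure event lies inside a box B whose
# probability under independent standard-normal uncertainties is analytic; then
# P(fail) = P(fail | B) * P(B), and only P(fail | B) needs sampling.
# The limit-state function g and the box bounds are invented for the demo.
import numpy as np
from scipy import stats

def g(x):                     # failure when g(x) <= 0 (hypothetical limit state)
    return 4.0 - x[:, 0] - x[:, 1]

lo, hi = np.array([1.0, 1.0]), np.array([5.0, 5.0])        # bounding box B
p_box = np.prod(stats.norm.cdf(hi) - stats.norm.cdf(lo))   # analytic P(B)

rng = np.random.default_rng(1)
n = 20000
# Sample each coordinate from a standard normal truncated to [lo, hi] (conditional on B)
u = rng.uniform(stats.norm.cdf(lo), stats.norm.cdf(hi), size=(n, 2))
x = stats.norm.ppf(u)

p_fail_given_box = np.mean(g(x) <= 0.0)
p_fail = p_fail_given_box * p_box
print(f"estimated failure probability: {p_fail:.2e}")
```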

  8. Improved Technique for Finding Vibration Parameters

    NASA Technical Reports Server (NTRS)

    Andrew, L. V.; Park, C. C.

    1986-01-01

    Filtering and sample manipulation reduce noise effects. Analysis technique improves extraction of vibrational frequencies and damping rates from measurements of vibrations of complicated structure. Structural vibrations measured by accelerometers. Outputs digitized at frequency high enough to cover all modes of interest. Use of method on set of vibrational measurements from Space Shuttle raised level of coherence from previous values below 50 percent to values between 90 and 99 percent.
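
    A rough sketch of the kind of frequency and damping extraction described here, assuming a synthetic decaying-sinusoid accelerometer record and a simple half-power bandwidth estimate; the technique's actual filtering and sample-manipulation steps are not reproduced.

```python
# Sketch: FFT of a digitized accelerometer record to find the dominant modal
# frequency, then a half-power (-3 dB) bandwidth estimate of the damping ratio.
# The decaying-sinusoid test signal and sample rate are synthetic stand-ins.
import numpy as np

fs = 2048.0                                    # sampling rate (Hz), covers the modes of interest
t = np.arange(0, 4.0, 1.0 / fs)
f0, zeta = 35.0, 0.02                          # "true" modal frequency and damping ratio
signal = np.exp(-2 * np.pi * f0 * zeta * t) * np.sin(2 * np.pi * f0 * t)
signal += 0.05 * np.random.default_rng(2).standard_normal(t.size)   # measurement noise

spec = np.abs(np.fft.rfft(signal * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

peak = np.argmax(spec)
f_est = freqs[peak]

# Half-power bandwidth: zeta ~ (f2 - f1) / (2 * f_peak)
half_power = spec[peak] / np.sqrt(2.0)
band = np.where(spec >= half_power)[0]
zeta_est = (freqs[band[-1]] - freqs[band[0]]) / (2.0 * f_est)
print(f"frequency ~ {f_est:.1f} Hz, damping ratio ~ {zeta_est:.3f}")
```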

  9. A Dataset and a Technique for Generalized Nuclear Segmentation for Computational Pathology.

    PubMed

    Kumar, Neeraj; Verma, Ruchika; Sharma, Sanuj; Bhargava, Surabhi; Vahadane, Abhishek; Sethi, Amit

    2017-07-01

    Nuclear segmentation in digital microscopic tissue images can enable extraction of high-quality features for nuclear morphometrics and other analysis in computational pathology. Conventional image processing techniques, such as Otsu thresholding and watershed segmentation, do not work effectively on challenging cases, such as chromatin-sparse and crowded nuclei. In contrast, machine learning-based segmentation can generalize across various nuclear appearances. However, training machine learning algorithms requires data sets of images, in which a vast number of nuclei have been annotated. Publicly accessible and annotated data sets, along with widely agreed upon metrics to compare techniques, have catalyzed tremendous innovation and progress on other image classification problems, particularly in object recognition. Inspired by their success, we introduce a large publicly accessible data set of hematoxylin and eosin (H&E)-stained tissue images with more than 21000 painstakingly annotated nuclear boundaries, whose quality was validated by a medical doctor. Because our data set is taken from multiple hospitals and includes a diversity of nuclear appearances from several patients, disease states, and organs, techniques trained on it are likely to generalize well and work right out-of-the-box on other H&E-stained images. We also propose a new metric to evaluate nuclear segmentation results that penalizes object- and pixel-level errors in a unified manner, unlike previous metrics that penalize only one type of error. We also propose a segmentation technique based on deep learning that lays a special emphasis on identifying the nuclear boundaries, including those between the touching or overlapping nuclei, and works well on a diverse set of test images.

  10. Generalized image contrast enhancement technique based on the Heinemann contrast discrimination model

    NASA Astrophysics Data System (ADS)

    Liu, Hong; Nodine, Calvin F.

    1996-07-01

    This paper presents a generalized image contrast enhancement technique, which equalizes the perceived brightness distribution based on the Heinemann contrast discrimination model. It is based on the mathematically proven existence of a unique solution to a nonlinear equation, and is formulated with easily tunable parameters. The model uses a two-step log-log representation of luminance contrast between targets and surround in a luminous background setting. The algorithm consists of two nonlinear gray scale mapping functions that have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of the gray-level distribution of the given image, and can be uniquely determined once the previous three are set. Tests have been carried out to demonstrate the effectiveness of the algorithm for increasing the overall contrast of radiology images. The traditional histogram equalization can be reinterpreted as an image enhancement technique based on the knowledge of human contrast perception. In fact, it is a special case of the proposed algorithm.
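
    Since the abstract notes that traditional histogram equalization is a special case of the proposed mapping, a minimal histogram-equalization baseline is sketched below; the Heinemann-model mapping itself is not implemented here.

```python
# Baseline for comparison: plain histogram equalization of an 8-bit image via the
# cumulative distribution of gray levels. The Heinemann-model mapping in the paper
# replaces this uniform-CDF target with perceptually derived nonlinear functions.
import numpy as np

def hist_equalize(img_u8):
    hist = np.bincount(img_u8.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)          # gray-level mapping function
    return lut[img_u8]

# Example on a synthetic low-contrast image
rng = np.random.default_rng(3)
img = rng.normal(120, 10, size=(128, 128)).clip(0, 255).astype(np.uint8)
enhanced = hist_equalize(img)
```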

  11. Application of gray level mapping in computed tomographic colonography: a pilot study to compare with traditional surface rendering method for identification and differentiation of endoluminal lesions

    PubMed Central

    Chen, Lih-Shyang; Hsu, Ta-Wen; Chang, Shu-Han; Lin, Chih-Wen; Chen, Yu-Ruei; Hsieh, Chin-Chiang; Han, Shu-Chen; Chang, Ku-Yaw; Hou, Chun-Ju

    2017-01-01

    Objective: In traditional surface rendering (SR) computed tomographic endoscopy, only the shape of the endoluminal lesion is depicted without gray-level information unless the volume rendering technique is used. However, the volume rendering technique is relatively slow and complex in terms of computation time and parameter setting. We use computed tomographic colonography (CTC) images as examples and report a new visualization technique by three-dimensional gray level mapping (GM) to better identify and differentiate endoluminal lesions. Methods: Thirty-three endoluminal cases from 30 patients were evaluated in this clinical study. These cases were segmented using a gray-level threshold. The marching cubes algorithm was used to detect isosurfaces in volumetric data sets. GM is applied using the surface gray level of CTC. Radiologists conducted the clinical evaluation of the SR and GM images. The Wilcoxon signed-rank test was used for data analysis. Results: Clinical evaluation confirms that GM is significantly superior to SR in terms of gray-level pattern and spatial shape presentation of endoluminal cases (p < 0.01) and significantly improves the confidence of identification and clinical classification of endoluminal lesions (p < 0.01). The specificity and diagnostic accuracy of GM are significantly better than those of SR in diagnostic performance evaluation (p < 0.01). Conclusion: GM can reduce confusion in three-dimensional CTC and correlates CTC well with sectional images by location as well as gray-level value. Hence, GM improves identification and differentiation of endoluminal lesions and facilitates the diagnostic process. Advances in knowledge: GM significantly improves the traditional SR method by providing reliable gray-level information for the surface points and is helpful in identification and differentiation of endoluminal lesions according to their shape and density. PMID:27925483
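
    A small sketch of the GM idea under stated assumptions: extract an isosurface with marching cubes and sample the volume's gray level at each surface vertex. The volume, threshold, and nearest-voxel lookup are illustrative, not the authors' implementation.

```python
# Sketch of gray level mapping (GM): extract a surface with marching cubes at a
# gray-level threshold, then look up the CT value at (or near) each surface vertex
# so the rendered surface can be colored by density. Data are synthetic placeholders.
import numpy as np
from skimage import measure

rng = np.random.default_rng(4)
volume = rng.normal(0.0, 0.1, size=(64, 64, 64))
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
volume += ((xx - 32)**2 + (yy - 32)**2 + (zz - 32)**2 < 20**2) * 1.0  # a bright "lesion"

level = 0.5                                    # segmentation threshold (assumed)
verts, faces, normals, values = measure.marching_cubes(volume, level=level)

# Gray-level mapping: sample the volume at the nearest voxel to each vertex
idx = np.clip(np.round(verts).astype(int), 0, 63)
surface_gray = volume[idx[:, 0], idx[:, 1], idx[:, 2]]
# 'surface_gray' can now drive the color map of the rendered isosurface.
```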

  12. Uncertainties in Atomic Data and Their Propagation Through Spectral Models. I.

    NASA Technical Reports Server (NTRS)

    Bautista, M. A.; Fivet, V.; Quinet, P.; Dunn, J.; Gull, T. R.; Kallman, T. R.; Mendoza, C.

    2013-01-01

    We present a method for computing uncertainties in spectral models, i.e., level populations, line emissivities, and emission line ratios, based upon the propagation of uncertainties originating from atomic data. We provide analytic expressions, in the form of linear sets of algebraic equations, for the coupled uncertainties among all levels. These equations can be solved efficiently for any set of physical conditions and uncertainties in the atomic data. We illustrate our method applied to spectral models of O III and Fe II and discuss the impact of the uncertainties on atomic systems under different physical conditions. As to intrinsic uncertainties in theoretical atomic data, we propose that these uncertainties can be estimated from the dispersion in the results from various independent calculations. This technique provides excellent results for the uncertainties in A-values of forbidden transitions in [Fe II]. Key words: atomic data - atomic processes - line: formation - methods: data analysis - molecular data - molecular processes - techniques: spectroscopic
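
    The paper derives analytic linear equations for the coupled uncertainties; the sketch below instead illustrates the same propagation idea by Monte Carlo perturbation of a made-up three-level rate matrix, which is an assumption for demonstration only.

```python
# Schematic of propagating atomic-data uncertainties into level populations for a
# toy 3-level system: populations n solve R n = b, where R is built from (uncertain)
# collisional and radiative rates. Re-solving with rates perturbed within assumed
# fractional uncertainties gives the spread in populations and line ratios.
import numpy as np

rng = np.random.default_rng(5)
R0 = np.array([[-1.0,  0.3,  0.1],
               [ 0.8, -0.5,  0.2],
               [ 0.2,  0.2, -0.3]])     # toy rate matrix (arbitrary units)
sigma_frac = 0.1                        # assumed 10% uncertainty on every rate
b = np.array([0.0, 0.0, 0.0])

def solve_populations(R):
    # Enforce particle conservation by replacing the last equation with sum(n) = 1
    A = R.copy()
    A[-1, :] = 1.0
    rhs = b.copy()
    rhs[-1] = 1.0
    return np.linalg.solve(A, rhs)

samples = []
for _ in range(2000):
    R = R0 * (1.0 + sigma_frac * rng.standard_normal(R0.shape))
    samples.append(solve_populations(R))
samples = np.array(samples)

mean, std = samples.mean(axis=0), samples.std(axis=0)
print("level populations:", mean, "+/-", std)
print("line-ratio proxy n2/n1:", (samples[:, 2] / samples[:, 1]).mean())
```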

  13. Method and system for assigning a confidence metric for automated determination of optic disc location

    DOEpatents

    Karnowski, Thomas P [Knoxville, TN; Tobin, Jr., Kenneth W.; Muthusamy Govindasamy, Vijaya Priya [Knoxville, TN; Chaum, Edward [Memphis, TN

    2012-07-10

    A method for assigning a confidence metric for automated determination of optic disc location that includes analyzing a retinal image and determining at least two sets of coordinates locating an optic disc in the retinal image. The sets of coordinates can be determined using first and second image analysis techniques that are different from one another. An accuracy parameter can be calculated and compared to a primary risk cut-off value. A high confidence level can be assigned to the retinal image if the accuracy parameter is less than the primary risk cut-off value, and a low confidence level can be assigned to the retinal image if the accuracy parameter is greater than the primary risk cut-off value. The primary risk cut-off value is selected to represent an acceptable risk of misdiagnosis, by the automated technique, of a disease having retinal manifestations.
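
    A minimal sketch of the decision logic described in the abstract, with hypothetical detectors and an assumed pixel cut-off.

```python
# Sketch of the confidence-assignment logic: locate the optic disc with two
# independent techniques, derive an accuracy parameter from their disagreement,
# and compare it with a risk cut-off. The two "detectors" and the cut-off value
# below are placeholders.
import math

def confidence_for_image(locate_a, locate_b, image, cutoff_px=25.0):
    (r1, c1) = locate_a(image)              # e.g., intensity-based detector
    (r2, c2) = locate_b(image)              # e.g., vessel-convergence detector
    accuracy = math.hypot(r1 - r2, c1 - c2) # disagreement in pixels
    return "high" if accuracy < cutoff_px else "low"

# Hypothetical detectors returning (row, col) coordinates
detector_a = lambda img: (241.0, 310.0)
detector_b = lambda img: (248.0, 302.0)
print(confidence_for_image(detector_a, detector_b, image=None))   # -> "high"
```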

  14. Numerical Schemes for the Hamilton-Jacobi and Level Set Equations on Triangulated Domains

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Sethian, James A.

    1997-01-01

    Borrowing from techniques developed for conservation law equations, numerical schemes which discretize the Hamilton-Jacobi (H-J), level set, and Eikonal equations on triangulated domains are presented. The first scheme is a provably monotone discretization for certain forms of the H-J equations. Unfortunately, the basic scheme lacks proper Lipschitz continuity of the numerical Hamiltonian. By employing a virtual edge flipping technique, Lipschitz continuity of the numerical flux is restored on acute triangulations. Next, schemes are introduced and developed based on the weaker concept of positive coefficient approximations for homogeneous Hamiltonians. These schemes possess a discrete maximum principle on arbitrary triangulations and naturally exhibit proper Lipschitz continuity of the numerical Hamiltonian. Finally, a class of Petrov-Galerkin approximations is considered. These schemes are stabilized via a least-squares bilinear form. The Petrov-Galerkin schemes do not possess a discrete maximum principle but generalize to high-order accuracy.
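
    For orientation only, a first-order Godunov upwind solver for the Eikonal equation on a regular Cartesian grid, iterated with Gauss-Seidel sweeps; the paper's schemes target triangulated domains, so this is a structured-grid stand-in rather than any of the methods presented.

```python
# First-order Godunov upwind update for |grad u| * F = 1 on a Cartesian grid,
# solved with alternating Gauss-Seidel sweeps (fast sweeping). Grid, speed, and
# source points are placeholders.
import numpy as np

def eikonal_sweep(speed, sources, h=1.0, n_sweeps=8):
    ny, nx = speed.shape
    u = np.full((ny, nx), 1e10)
    for (i, j) in sources:
        u[i, j] = 0.0
    orders = [(range(ny), range(nx)), (range(ny - 1, -1, -1), range(nx)),
              (range(ny), range(nx - 1, -1, -1)), (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(n_sweeps):
        for rows, cols in orders:
            for i in rows:
                for j in cols:
                    a = min(u[i - 1, j] if i > 0 else 1e10, u[i + 1, j] if i < ny - 1 else 1e10)
                    b = min(u[i, j - 1] if j > 0 else 1e10, u[i, j + 1] if j < nx - 1 else 1e10)
                    f = h / speed[i, j]
                    if abs(a - b) >= f:
                        cand = min(a, b) + f
                    else:
                        cand = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
                    u[i, j] = min(u[i, j], cand)
    return u

u = eikonal_sweep(np.ones((50, 50)), sources=[(25, 25)])   # distance from the center point
```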

  15. What Is Wrong with ANOVA and Multiple Regression? Analyzing Sentence Reading Times with Hierarchical Linear Models

    ERIC Educational Resources Information Center

    Richter, Tobias

    2006-01-01

    Most reading time studies using naturalistic texts yield data sets characterized by a multilevel structure: Sentences (sentence level) are nested within persons (person level). In contrast to analysis of variance and multiple regression techniques, hierarchical linear models take the multilevel structure of reading time data into account. They…

  16. A Handbook Related to Office Counseling Techniques for the College Level Sex Educator.

    ERIC Educational Resources Information Center

    Johnson, Anne D.

    This handbook is designed for use by the college level sex educator who attempts preventative counseling related to sexual dysfunction in an office setting. It contains a brief review of current literature related to coitus among single youth which reveals the sociopsychological context of behavior. This discussion includes incidence of coitus…

  17. Using Performance Measures to Allocate Consumable Funding

    DTIC Science & Technology

    2007-06-01

    The Air Force is now using the Customer Oriented Leveling Technique (COLT) to determine levels for consumable items at its bases. COLT is a system to set AF retail stock levels for DLA-managed consumable parts to minimize expected customer wait time (ECWT).

  18. Fluctuations in alliance and use of techniques over time: A bidirectional relation between use of "common factors" techniques and the development of the working alliance.

    PubMed

    Solomonov, Nili; McCarthy, Kevin S; Keefe, John R; Gorman, Bernard S; Blanchard, Mark; Barber, Jacques P

    2018-01-01

    The aim of this study was twofold: (a) investigate whether therapists are consistent in their use of therapeutic techniques throughout supportive-expressive therapy (SET) and (b) examine the bidirectional relation between therapists' use of therapeutic techniques and the working alliance over the course of SET. Thirty-seven depressed patients were assigned to 16 weeks of SET as part of a larger randomized clinical trial (Barber, Barrett, Gallop, Rynn, & Rickels, ). The Working Alliance Inventory-Short Form (WAI-SF) was collected at Weeks 2, 4, and 8. Use of therapeutic interventions was rated by independent observers using the Multitheoretical List of Therapeutic Interventions (MULTI). Intraclass correlation coefficients assessed therapists' consistency in use of techniques. A cross-lagged path analysis estimated the bidirectional relation between the WAI-SF and the MULTI across time. Therapists were moderately consistent in their use of prescribed techniques (psychodynamic, process-experiential, and person-centred). However, they were inconsistent, or more flexible, in their use of "common factors" techniques (e.g., empathy, active listening, hope, and encouragements). A positive bidirectional relation was found between use of common factors techniques and the working alliance, such that initial high levels of common factors (but not prescribed) techniques predicted higher alliance later on and vice versa. Therapists tend to modulate their use of common factors techniques across treatment. Additionally, when a strong working alliance is developed early in treatment, therapists tend to use more common factors later on. Moreover, high use of common factors techniques is predictive of later improvement in the alliance. Copyright © 2017 John Wiley & Sons, Ltd.

  19. Automatic exposure control systems designed to maintain constant image noise: effects on computed tomography dose and noise relative to clinically accepted technique charts.

    PubMed

    Favazza, Christopher P; Yu, Lifeng; Leng, Shuai; Kofler, James M; McCollough, Cynthia H

    2015-01-01

    To compare computed tomography dose and noise arising from use of an automatic exposure control (AEC) system designed to maintain constant image noise as patient size varies with clinically accepted technique charts and AEC systems designed to vary image noise. A model was developed to describe tube current modulation as a function of patient thickness. Relative dose and noise values were calculated as patient width varied for AEC settings designed to yield constant or variable noise levels and were compared to empirically derived values used by our clinical practice. Phantom experiments were performed in which tube current was measured as a function of thickness using a constant-noise-based AEC system and the results were compared with clinical technique charts. For 12-, 20-, 28-, 44-, and 50-cm patient widths, the requirement of constant noise across patient size yielded relative doses of 5%, 14%, 38%, 260%, and 549% and relative noises of 435%, 267%, 163%, 61%, and 42%, respectively, as compared with our clinically used technique chart settings at each respective width. Experimental measurements showed that a constant noise-based AEC system yielded 175% relative noise for a 30-cm phantom and 206% relative dose for a 40-cm phantom compared with our clinical technique chart. Automatic exposure control systems that prescribe constant noise as patient size varies can yield excessive noise in small patients and excessive dose in obese patients compared with clinically accepted technique charts. Use of noise-level technique charts and tube current limits can mitigate these effects.
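
    A back-of-the-envelope sketch of the kind of model described, assuming image noise scales as sqrt(exp(mu·w)/mAs) for water-equivalent width w; the attenuation coefficient and the reference technique chart below are invented for illustration and are not the study's values.

```python
# If noise ~ sqrt(exp(mu * w) / mAs), a constant-noise AEC must set mAs
# proportional to exp(mu * w). Comparing that with a (hypothetical) technique
# chart gives relative dose and relative noise versus patient width.
import numpy as np

mu = 0.18                      # assumed effective attenuation coefficient (1/cm)
widths = np.array([12.0, 20.0, 28.0, 44.0, 50.0])      # patient widths (cm)
ref_width = 28.0                                        # size where both approaches agree

# Hypothetical technique chart: mAs grows more slowly than exp(mu * w)
chart_mAs = 100.0 * np.exp(0.5 * mu * (widths - ref_width))
# Constant-noise AEC: mAs must track exp(mu * w) exactly
aec_mAs = 100.0 * np.exp(mu * (widths - ref_width))

relative_dose = aec_mAs / chart_mAs
relative_noise = np.sqrt(chart_mAs / aec_mAs)           # ratio of the two noise levels
for w, d, n in zip(widths, relative_dose, relative_noise):
    print(f"width {w:4.0f} cm: relative dose {d:5.2f}, relative noise {n:5.2f}")
```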

  20. Does technique matter; a pilot study exploring weighting techniques for a multi-criteria decision support framework.

    PubMed

    van Til, Janine; Groothuis-Oudshoorn, Catharina; Lieferink, Marijke; Dolan, James; Goetghebeur, Mireille

    2014-01-01

    There is an increased interest in the use of multi-criteria decision analysis (MCDA) to support regulatory and reimbursement decision making. The EVIDEM framework was developed to provide pragmatic multi-criteria decision support in health care, to estimate the value of healthcare interventions, and to aid in priority-setting. The objectives of this study were to test 1) the influence of different weighting techniques on the overall outcome of an MCDA exercise, 2) the discriminative power in weighting different criteria of such techniques, and 3) whether different techniques result in similar weights in weighting the criteria set proposed by the EVIDEM framework. A sample of 60 Dutch and Canadian students participated in the study. Each student used an online survey to provide weights for 14 criteria with two different techniques: a five-point rating scale and one of the following techniques selected randomly: ranking, point allocation, pairwise comparison and best worst scaling. The results of this study indicate that there is no effect of differences in weights on value estimates at the group level. On an individual level, considerable differences in criteria weights and rank order occur as a result of the weight elicitation method used, and the ability of different techniques to discriminate in criteria importance. Of the five techniques tested, the pair-wise comparison of criteria has the highest ability to discriminate in weights when fourteen criteria are compared. When weights are intended to support group decisions, the choice of elicitation technique has negligible impact on criteria weights and the overall value of an innovation. However, when weights are used to support individual decisions, the choice of elicitation technique influences outcome and studies that use dissimilar techniques cannot be easily compared. Weight elicitation through pairwise comparison of criteria is preferred when taking into account its superior ability to discriminate between criteria and respondents' preferences.
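
    As an example of one of the elicitation techniques compared, a sketch of deriving weights from a pairwise-comparison matrix with the geometric-mean method; the matrix values and the four-criterion frame are invented, not the study's 14 EVIDEM criteria.

```python
# Turning pairwise comparisons into criteria weights with the geometric-mean
# method (one common way to process an AHP-style comparison matrix).
import numpy as np

# pairwise[i, j] = how many times more important criterion i is than criterion j
pairwise = np.array([[1.0,  3.0,  5.0, 1.0],
                     [1/3., 1.0,  2.0, 1/2.],
                     [1/5., 1/2., 1.0, 1/3.],
                     [1.0,  2.0,  3.0, 1.0]])

geo_means = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[1])
weights = geo_means / geo_means.sum()
print(np.round(weights, 3))    # normalized criteria weights summing to 1
```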

  1. The effect of various parameters of large scale radio propagation models on improving performance mobile communications

    NASA Astrophysics Data System (ADS)

    Pinem, M.; Fauzi, R.

    2018-02-01

    One technique for ensuring continuity of wireless communication services and keeping transitions smooth on mobile communication networks is the soft handover technique. In the Soft Handover (SHO) technique, the addition and removal of Base Stations from the active set are determined by initiation triggers. One of these triggers is based on received signal strength. In this paper we observed the influence of the parameters of large-scale radio propagation models on the performance of mobile communications. The observation parameters for characterizing the performance of the specified mobile system are Drop Call, Radio Link Degradation Rate and Average Size of Active Set (AS). The simulated results show that increasing the altitude of the Base Station (BS) antenna and the Mobile Station (MS) antenna improves the received signal power level, which improves Radio Link quality, increases the average size of the Active Set and reduces the average Drop Call rate. It was also found that Hata’s propagation model contributed significantly to improvements in system performance parameters compared to Okumura’s propagation model and Lee’s propagation model.

  2. Cross-cultural dataset for the evolution of religion and morality project.

    PubMed

    Purzycki, Benjamin Grant; Apicella, Coren; Atkinson, Quentin D; Cohen, Emma; McNamara, Rita Anne; Willard, Aiyana K; Xygalatas, Dimitris; Norenzayan, Ara; Henrich, Joseph

    2016-11-08

    A considerable body of research cross-culturally examines the evolution of religious traditions, beliefs and behaviors. The bulk of this research, however, draws from coded qualitative ethnographies rather than from standardized methods specifically designed to measure religious beliefs and behaviors. Psychological data sets that examine religious thought and behavior in controlled conditions tend to be disproportionately sampled from student populations. Some cross-national databases employ standardized methods at the individual level, but are primarily focused on fully market integrated, state-level societies. The Evolution of Religion and Morality Project sought to generate a data set that systematically probed individual level measures sampling across a wider range of human populations. The set includes data from behavioral economic experiments and detailed surveys of demographics, religious beliefs and practices, material security, and intergroup perceptions. This paper describes the methods and variables, briefly introduces the sites and sampling techniques, notes inconsistencies across sites, and provides some basic reporting for the data set.

  3. Cross-cultural dataset for the evolution of religion and morality project

    PubMed Central

    Purzycki, Benjamin Grant; Apicella, Coren; Atkinson, Quentin D.; Cohen, Emma; McNamara, Rita Anne; Willard, Aiyana K.; Xygalatas, Dimitris; Norenzayan, Ara; Henrich, Joseph

    2016-01-01

    A considerable body of research cross-culturally examines the evolution of religious traditions, beliefs and behaviors. The bulk of this research, however, draws from coded qualitative ethnographies rather than from standardized methods specifically designed to measure religious beliefs and behaviors. Psychological data sets that examine religious thought and behavior in controlled conditions tend to be disproportionately sampled from student populations. Some cross-national databases employ standardized methods at the individual level, but are primarily focused on fully market integrated, state-level societies. The Evolution of Religion and Morality Project sought to generate a data set that systematically probed individual level measures sampling across a wider range of human populations. The set includes data from behavioral economic experiments and detailed surveys of demographics, religious beliefs and practices, material security, and intergroup perceptions. This paper describes the methods and variables, briefly introduces the sites and sampling techniques, notes inconsistencies across sites, and provides some basic reporting for the data set. PMID:27824332

  4. Evaluating the effect of river restoration techniques on reducing the impacts of outfall on water quality

    NASA Astrophysics Data System (ADS)

    Mant, Jenny; Janes, Victoria; Terrell, Robert; Allen, Deonie; Arthur, Scott; Yeakley, Alan; Morse, Jennifer; Holman, Ian

    2015-04-01

    Outfalls represent points of discharge to a river and often contain pollutants from urban runoff, such as heavy metals. Additionally, erosion around the outfall site results in increased sediment generation and the release of associated pollutants. Water quality impacts from heavy metals pose risks to the river ecosystem (e.g. toxicity to aquatic habitats). Restoration techniques including establishment of swales, and the re-vegetation and reinforcement of channel banks aim to decrease outfall flow velocities resulting in deposition of pollutants and removal through plant uptake. Within this study the benefits of river restoration techniques for the removal of contaminants associated with outfalls have been quantified within Johnson Creek, Portland, USA as part of the EPSRC funded Blue-Green Cities project. The project aims to develop new strategies for protecting hydrological and ecological values of urban landscapes. A range of outfalls have been selected which span restored and un-restored channel reaches, a variety of upstream land-uses, and both direct and set-back outfalls. River Habitat Surveys were conducted at each of the sites to assess the level of channel modification within the reach. Sediment samples were taken at the outfall location, upstream, and downstream of outfalls for analysis of metals including Nickel, Lead, Zinc, Copper, Iron and Magnesium. These were used to assess the impact of the level of modification at individual sites, and to compare the influence of direct and set-back outfalls. Concentrations of all metals in the sediments found at outfalls generally increased with the level of modification at the site. Sediment in restored sites had lower metal concentrations both at the outfall and downstream compared to unrestored sites, indicating the benefit of these techniques to facilitate the effective removal of pollutants by trapping of sediment and uptake of contaminants by vegetation. However, the impact of restoration measures varied between metal types. Restored sites also showed lower variability in metal concentrations than un-restored sites, which is linked to greater bank stability and hence lower bank erosion rates within restored sites as eroding banks were noted to be a source of metal contaminants. The success of pollutant removal by set-back outfalls was varied due to additional factors including the distance between the set-back outfall and the main channel, vegetation type, density and age. The study highlights the ability of restoration techniques to reduce metal contaminant concentrations at outfalls, and provides an indication of the potential benefits from wider application of similar techniques.

  5. Use of Biodescriptors and Chemodescriptors in Predictive Toxicology: A Mathematical/Computational Approach

    DTIC Science & Technology

    2005-01-01

    proteomic gel analyses. The research group has explored the use of chemodescriptors calculated using high-level ab initio quantum chemical basis sets...descriptors that characterize the entire proteomics map, local descriptors that characterize a subset of the proteins present in the gel, and spectrum...techniques for analyzing the full set of proteins present in a proteomics map. Subject terms: topological indices.

  6. Fast maximum intensity projections of large medical data sets by exploiting hierarchical memory architectures.

    PubMed

    Kiefer, Gundolf; Lehmann, Helko; Weese, Jürgen

    2006-04-01

    Maximum intensity projections (MIPs) are an important visualization technique for angiographic data sets. Efficient data inspection requires frame rates of at least five frames per second at preserved image quality. Despite the advances in computer technology, this task remains a challenge. On the one hand, the sizes of computed tomography and magnetic resonance images are increasing rapidly. On the other hand, rendering algorithms do not automatically benefit from the advances in processor technology, especially for large data sets. This is due to the faster evolving processing power and the slower evolving memory access speed, which is bridged by hierarchical cache memory architectures. In this paper, we investigate memory access optimization methods and use them for generating MIPs on general-purpose central processing units (CPUs) and graphics processing units (GPUs), respectively. These methods can work on any level of the memory hierarchy, and we show that properly combined methods can optimize memory access on multiple levels of the hierarchy at the same time. We present performance measurements to compare different algorithm variants and illustrate the influence of the respective techniques. On current hardware, the efficient handling of the memory hierarchy for CPUs improves the rendering performance by a factor of 3 to 4. On GPUs, we observed that the effect is even larger, especially for large data sets. The methods can easily be adjusted to different hardware specifics, although their impact can vary considerably. They can also be used for other rendering techniques than MIPs, and their use for more general image processing task could be investigated in the future.
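
    A minimal sketch of the core operation, computing the MIP slab by slab so the working set stays small; slab size, volume shape, and axis choice are placeholders rather than the paper's tuned, hardware-specific parameters.

```python
# A maximum intensity projection is a max-reduction along the viewing axis; the
# paper's point is that how you walk the memory matters. This sketch processes
# the volume in contiguous slabs to bound the working set.
import numpy as np

def mip_slabbed(volume, axis=0, slab=16):
    """Max projection along `axis`, computed slab by slab."""
    out = None
    for start in range(0, volume.shape[axis], slab):
        sl = [slice(None)] * volume.ndim
        sl[axis] = slice(start, start + slab)
        partial = volume[tuple(sl)].max(axis=axis)
        out = partial if out is None else np.maximum(out, partial)
    return out

volume = np.random.default_rng(6).random((256, 256, 256), dtype=np.float32)
mip = mip_slabbed(volume, axis=0)
assert np.allclose(mip, volume.max(axis=0))
```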

  7. An example of requirements for Advanced Subsonic Civil Transport (ASCT) flight control system using structured techniques

    NASA Technical Reports Server (NTRS)

    Mclees, Robert E.; Cohen, Gerald C.

    1991-01-01

    The requirements are presented for an Advanced Subsonic Civil Transport (ASCT) flight control system generated using structured techniques. The requirements definition starts from initially performing a mission analysis to identify the high level control system requirements and functions necessary to satisfy the mission flight. The result of the study is an example set of control system requirements partially represented using a derivative of Yourdon's structured techniques. Also provided is a research focus for studying structured design methodologies and in particular design-for-validation philosophies.

  8. A survey of compiler optimization techniques

    NASA Technical Reports Server (NTRS)

    Schneck, P. B.

    1972-01-01

    Major optimization techniques of compilers are described and grouped into three categories: machine dependent, architecture dependent, and architecture independent. Machine-dependent optimizations tend to be local and are performed upon short spans of generated code by using particular properties of an instruction set to reduce the time or space required by a program. Architecture-dependent optimizations are global and are performed while generating code. These optimizations consider the structure of a computer, but not its detailed instruction set. Architecture independent optimizations are also global but are based on analysis of the program flow graph and the dependencies among statements of source program. A conceptual review of a universal optimizer that performs architecture-independent optimizations at source-code level is also presented.

  9. Level set formulation of two-dimensional Lagrangian vortex detection methods

    NASA Astrophysics Data System (ADS)

    Hadjighasem, Alireza; Haller, George

    2016-10-01

    We propose here the use of the variational level set methodology to capture Lagrangian vortex boundaries in 2D unsteady velocity fields. This method reformulates earlier approaches that seek material vortex boundaries as extremum solutions of variational problems. We demonstrate the performance of this technique for two different variational formulations built upon different notions of coherence. The first formulation uses an energy functional that penalizes the deviation of a closed material line from piecewise uniform stretching [Haller and Beron-Vera, J. Fluid Mech. 731, R4 (2013)]. The second energy function is derived for a graph-based approach to vortex boundary detection [Hadjighasem et al., Phys. Rev. E 93, 063107 (2016)]. Our level-set formulation captures an a priori unknown number of vortices simultaneously at relatively low computational cost. We illustrate the approach by identifying vortices from different coherence principles in several examples.

  10. Two-way coupled SPH and particle level set fluid simulation.

    PubMed

    Losasso, Frank; Talton, Jerry; Kwatra, Nipun; Fedkiw, Ronald

    2008-01-01

    Grid-based methods have difficulty resolving features on or below the scale of the underlying grid. Although adaptive methods (e.g. RLE, octrees) can alleviate this to some degree, separate techniques are still required for simulating small-scale phenomena such as spray and foam, especially since these more diffuse materials typically behave quite differently than their denser counterparts. In this paper, we propose a two-way coupled simulation framework that uses the particle level set method to efficiently model dense liquid volumes and a smoothed particle hydrodynamics (SPH) method to simulate diffuse regions such as sprays. Our novel SPH method allows us to simulate both dense and diffuse water volumes, fully incorporates the particles that are automatically generated by the particle level set method in under-resolved regions, and allows for two way mixing between dense SPH volumes and grid-based liquid representations.

  11. Perioperative Acupuncture and Related Techniques

    PubMed Central

    Chernyak, Grigory V.; Sessler, Daniel I.

    2005-01-01

    Acupuncture and related techniques are increasingly practiced in conventional medical settings, and the number of patients willing to use these techniques is increasing. Despite more than 30 years of research, the exact mechanism of action and efficacy of acupuncture have not been established. Furthermore, most aspects of acupuncture have yet to be adequately tested. There thus remains considerable controversy about the role of acupuncture in clinical medicine. Acupuncture apparently does not reduce volatile anesthetic requirement by a clinically important amount. However, preoperative sedation seems to be a promising application of acupuncture in perioperative settings. Acupuncture may be effective for postoperative pain relief but requires a high level of expertise by the acupuncture practitioner. Acupuncture and related techniques can be used for treatment and prophylaxis of postoperative nausea and vomiting in routine clinical practice in combination with, or as an alternative to, conventional antiemetics when administered before induction of general anesthesia. Summary Statement: The use of acupuncture for perioperative analgesia, nausea and vomiting, sedation, anesthesia, and complications is reviewed. PMID:15851892

  12. Modelling Errors in Automatic Speech Recognition for Dysarthric Speakers

    NASA Astrophysics Data System (ADS)

    Caballero Morales, Santiago Omar; Cox, Stephen J.

    2009-12-01

    Dysarthria is a motor speech disorder characterized by weakness, paralysis, or poor coordination of the muscles responsible for speech. Although automatic speech recognition (ASR) systems have been developed for disordered speech, factors such as low intelligibility and limited phonemic repertoire decrease speech recognition accuracy, making conventional speaker adaptation algorithms perform poorly on dysarthric speakers. In this work, rather than adapting the acoustic models, we model the errors made by the speaker and attempt to correct them. For this task, two techniques have been developed: (1) a set of "metamodels" that incorporate a model of the speaker's phonetic confusion matrix into the ASR process; (2) a cascade of weighted finite-state transducers at the confusion matrix, word, and language levels. Both techniques attempt to correct the errors made at the phonetic level and make use of a language model to find the best estimate of the correct word sequence. Our experiments show that both techniques outperform standard adaptation techniques.

  13. Optic disc segmentation for glaucoma screening system using fundus images.

    PubMed

    Almazroa, Ahmed; Sun, Weiwei; Alodhayb, Sami; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan

    2017-01-01

    Segmenting the optic disc (OD) is an important and essential step in creating a frame of reference for diagnosing optic nerve head pathologies such as glaucoma. Therefore, a reliable OD segmentation technique is necessary for automatic screening of optic nerve head abnormalities. The main contribution of this paper is in presenting a novel OD segmentation algorithm based on applying a level set method on a localized OD image. To prevent the blood vessels from interfering with the level set process, an inpainting technique was applied. A further important contribution was to incorporate the variation in opinion among ophthalmologists in detecting the disc boundaries and diagnosing glaucoma. Most previous studies were trained and tested on only one opinion, which may be biased toward that ophthalmologist. In addition, the accuracy was calculated based on the number of images that coincided with the ophthalmologists' agreed-upon images, and not only on the overlapping images as in previous studies. The ultimate goal of this project is to develop an automated image processing system for glaucoma screening. The disc algorithm is evaluated using a new retinal fundus image dataset called RIGA (retinal images for glaucoma analysis). In the case of low-quality images, a double level set was applied, in which the first level set served to localize the OD. Five hundred and fifty images were used to test the algorithm accuracy as well as the agreement among the manual markings of six ophthalmologists. The accuracy of the algorithm in marking the optic disc area and centroid was 83.9%, and the best agreement was observed between the results of the algorithm and manual markings in 379 images.
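
    A rough sketch of the two ingredients described, using scikit-image stand-ins (biharmonic inpainting and a morphological Chan-Vese level set); the ROI, vessel mask, and iteration budget are assumptions, and this is not the authors' algorithm.

```python
# Vessel removal by inpainting, then a level-set style active contour on the
# localized disc region. The localized ROI and the vessel mask are assumed to
# come from earlier steps; both are synthetic here.
import numpy as np
from skimage.restoration import inpaint_biharmonic
from skimage.segmentation import morphological_chan_vese

rng = np.random.default_rng(7)
disc_roi = rng.random((128, 128))                         # placeholder localized fundus ROI
yy, xx = np.mgrid[0:128, 0:128]
disc_roi += ((xx - 64)**2 + (yy - 64)**2 < 30**2) * 1.5   # bright disc-like blob
vessel_mask = np.zeros_like(disc_roi, dtype=bool)
vessel_mask[:, 60:68] = True                              # crude stand-in for a vessel crossing the disc

# Remove the vessels so they do not attract the level set
clean = inpaint_biharmonic(disc_roi, vessel_mask)

# Level-set segmentation of the disc; 100 iterations is an arbitrary budget
seg = morphological_chan_vese(clean, 100)
```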

  14. Reprocessing the Historical Satellite Passive Microwave Record at Enhanced Spatial Resolutions using Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Hardman, M.; Brodzik, M. J.; Long, D. G.; Paget, A. C.; Armstrong, R. L.

    2015-12-01

    Beginning in 1978, the satellite passive microwave data record has been a mainstay of remote sensing of the cryosphere, providing twice-daily, near-global spatial coverage for monitoring changes in hydrologic and cryospheric parameters that include precipitation, soil moisture, surface water, vegetation, snow water equivalent, sea ice concentration and sea ice motion. Currently available global gridded passive microwave data sets serve a diverse community of hundreds of data users, but do not meet many requirements of modern Earth System Data Records (ESDRs) or Climate Data Records (CDRs), most notably in the areas of intersensor calibration, quality-control, provenance and consistent processing methods. The original gridding techniques were relatively primitive and were produced on 25 km grids using the original EASE-Grid definition that is not easily accommodated in modern software packages. Further, since the first Level 3 data sets were produced, the Level 2 passive microwave data on which they were based have been reprocessed as Fundamental CDRs (FCDRs) with improved calibration and documentation. We are funded by NASA MEaSUREs to reprocess the historical gridded data sets as EASE-Grid 2.0 ESDRs, using the most mature available Level 2 satellite passive microwave (SMMR, SSM/I-SSMIS, AMSR-E) records from 1978 to the present. We have produced prototype data from SSM/I and AMSR-E for the year 2003, for review and feedback from our Early Adopter user community. The prototype data set includes conventional, low-resolution ("drop-in-the-bucket" 25 km) grids and enhanced-resolution grids derived from the two candidate image reconstruction techniques we are evaluating: 1) Backus-Gilbert (BG) interpolation and 2) a radiometer version of Scatterometer Image Reconstruction (SIR). We summarize our temporal subsetting technique, algorithm tuning parameters and computational costs, and include sample SSM/I images at enhanced resolutions of up to 3 km. We are actively working with our Early Adopters to finalize content and format of this new, consistently-processed high-quality satellite passive microwave ESDR.
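
    For the conventional product, a minimal "drop-in-the-bucket" gridding sketch: samples whose projected footprint centers fall in a cell are averaged. The coordinates, grid, and brightness temperatures below are synthetic, and the EASE-Grid 2.0 projection step is assumed to have been done already.

```python
# Drop-in-the-bucket gridding: accumulate every swath sample into the grid cell
# containing its (already projected) footprint center, then average per cell.
import numpy as np

rng = np.random.default_rng(8)
n_samples = 100000
x = rng.uniform(0, 1000.0, n_samples)          # projected coordinates (km)
y = rng.uniform(0, 1000.0, n_samples)
tb = 200.0 + 50.0 * rng.random(n_samples)      # brightness temperatures (K)

cell = 25.0                                    # grid resolution (km)
nx = ny = int(1000.0 / cell)
col = np.clip((x / cell).astype(int), 0, nx - 1)
row = np.clip((y / cell).astype(int), 0, ny - 1)

grid_sum = np.zeros((ny, nx))
grid_cnt = np.zeros((ny, nx))
np.add.at(grid_sum, (row, col), tb)            # accumulate samples per cell
np.add.at(grid_cnt, (row, col), 1.0)
gridded_tb = np.where(grid_cnt > 0, grid_sum / np.maximum(grid_cnt, 1), np.nan)
```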

  15. Hierarchical classification method and its application in shape representation

    NASA Astrophysics Data System (ADS)

    Ireton, M. A.; Oakley, John P.; Xydeas, Costas S.

    1992-04-01

    In this paper we describe a technique for performing shape-based content retrieval of images from a large database. In order to be able to formulate such user-generated queries about visual objects, we have developed an hierarchical classification technique. This hierarchical classification technique enables similarity matching between objects, with the position in the hierarchy signifying the level of generality to be used in the query. The classification technique is unsupervised, robust, and general; it can be applied to any suitable parameter set. To establish the potential of this classifier for aiding visual querying, we have applied it to the classification of the 2-D outlines of leaves.

  16. System-Level Radiation Hardening

    NASA Technical Reports Server (NTRS)

    Ladbury, Ray

    2014-01-01

    Although system-level radiation hardening can enable the use of high-performance components and enhance the capabilities of a spacecraft, hardening techniques can be costly and can compromise the very performance designers sought from the high-performance components. Moreover, such techniques often result in a complicated design, especially if several complex commercial microcircuits are used, each posing its own hardening challenges. The latter risk is particularly acute for Commercial-Off-The-Shelf components since high-performance parts (e.g. double-data-rate synchronous dynamic random access memories - DDR SDRAMs) may require other high-performance commercial parts (e.g. processors) to support their operation. For these reasons, it is essential that system-level radiation hardening be a coordinated effort, from setting requirements through testing up to and including validation.

  17. Kinematic Labs with Mobile Devices

    NASA Astrophysics Data System (ADS)

    Kinser, Jason M.

    2015-07-01

    This book provides 13 labs spanning the common topics in the first semester of university-level physics. Each lab is designed to use only the student's smartphone, laptop and items easily found in big-box stores or a hobby shop. Each lab contains theory, set-up instructions and basic analysis techniques. All of these labs can be performed outside of the traditional university lab setting, with initial costs averaging less than $8 per student, per lab.

  18. The Effect of Different Mixing Methods on Working Time, Setting Time, Dimensional Changes and Film Thickness of Mineral Trioxide Aggregate and Calcium-Enriched Mixture.

    PubMed

    Shahi, Shahriar; Ghasemi, Negin; Rahimi, Saeed; Yavari, Hamidreza; Janani, Maryam; Mokhtari, Hadi; Bahari, Mahmood; Rabbani, Parastu

    2015-01-01

    The aim of the present study was to evaluate the effect of different mixing techniques (conventional, amalgamator and ultrasonic mixing) on the physical properties [working time (WT), setting time (ST), dimensional changes (DC) and film thickness (FT)] of calcium-enriched mixture (CEM) cement and mineral trioxide aggregate (MTA). These physical properties were determined using the ISO 6876:2001 specification. Six samples of each material were prepared for each of the three mixing techniques (36 samples in total). Data were analyzed using descriptive statistics, two-way ANOVA and post hoc Tukey tests. The level of significance was set at 0.05. Irrespective of mixing technique, there was no significant difference between the WT and FT of the tested materials. Except for the DC of MTA and the FT of all materials, the other properties were significantly affected by the mixing technique (P<0.05). The ultrasonic technique decreased the ST of MTA and CEM cement and increased the WT of CEM cement (P<0.05). The mixing technique had no significant effect on the dimensional changes of MTA or the film thickness of either material.

  19. Landcover classification in MRF context using Dempster-Shafer fusion for multisensor imagery.

    PubMed

    Sarkar, Anjan; Banerjee, Anjan; Banerjee, Nilanjan; Brahma, Siddhartha; Kartikeyan, B; Chakraborty, Manab; Majumder, K L

    2005-05-01

    This work deals with multisensor data fusion to obtain landcover classification. The role of feature-level fusion using the Dempster-Shafer rule and that of data-level fusion in the MRF context is studied in this paper to obtain an optimally segmented image. Subsequently, segments are validated and classification accuracy for the test data is evaluated. Two examples of data fusion of optical images and a synthetic aperture radar image are presented, each set having been acquired on different dates. Classification accuracies of the technique proposed are compared with those of some recent techniques in literature for the same image data.
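
    A small sketch of Dempster's rule of combination for two sensors' mass functions over a toy frame of landcover classes; the masses are invented and the MRF spatial context used in the paper is omitted.

```python
# Dempster's rule of combination for two sensors' basic probability assignments
# over a tiny frame of landcover classes. Focal elements are frozensets of class
# labels; the mass values are invented.
from itertools import product

frame = frozenset({"water", "urban", "forest"})

m_optical = {frozenset({"forest"}): 0.6, frozenset({"urban", "forest"}): 0.3, frame: 0.1}
m_sar     = {frozenset({"forest"}): 0.5, frozenset({"water"}): 0.2, frame: 0.3}

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    return {s: v / (1.0 - conflict) for s, v in combined.items()}   # normalize out conflict

fused = combine(m_optical, m_sar)
for focal, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(set(focal), round(mass, 3))
```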

  20. Volume growth trends in a Douglas-fir levels-of-growing-stock study.

    Treesearch

    Robert O. Curtis

    2006-01-01

    Mean curves of increment and yield in gross total cubic volume and net merchantable cubic volume were derived from seven installations of the regional cooperative Levels-of-Growing-Stock Study (LOGS) in Douglas-fir. The technique used reduces the seven curves for each treatment for each variable of interest to a single set of readily interpretable mean curves. To a top...

  1. Automatic Exposure Control Systems Designed to Maintain Constant Image Noise: Effects on Computed Tomography Dose and Noise Relative to Clinically Accepted Technique Charts

    PubMed Central

    Favazza, Christopher P.; Yu, Lifeng; Leng, Shuai; Kofler, James M.; McCollough, Cynthia H.

    2015-01-01

    Objective To compare computed tomography dose and noise arising from use of an automatic exposure control (AEC) system designed to maintain constant image noise as patient size varies with clinically accepted technique charts and AEC systems designed to vary image noise. Materials and Methods A model was developed to describe tube current modulation as a function of patient thickness. Relative dose and noise values were calculated as patient width varied for AEC settings designed to yield constant or variable noise levels and were compared to empirically derived values used by our clinical practice. Phantom experiments were performed in which tube current was measured as a function of thickness using a constant-noise-based AEC system and the results were compared with clinical technique charts. Results For 12-, 20-, 28-, 44-, and 50-cm patient widths, the requirement of constant noise across patient size yielded relative doses of 5%, 14%, 38%, 260%, and 549% and relative noises of 435%, 267%, 163%, 61%, and 42%, respectively, as compared with our clinically used technique chart settings at each respective width. Experimental measurements showed that a constant noise–based AEC system yielded 175% relative noise for a 30-cm phantom and 206% relative dose for a 40-cm phantom compared with our clinical technique chart. Conclusions Automatic exposure control systems that prescribe constant noise as patient size varies can yield excessive noise in small patients and excessive dose in obese patients compared with clinically accepted technique charts. Use of noise-level technique charts and tube current limits can mitigate these effects. PMID:25938214

  2. Evaluation of the Andromas Matrix-Assisted Laser Desorption Ionization–Time of Flight Mass Spectrometry System for Identification of Aerobically Growing Gram-Positive Bacilli

    PubMed Central

    Farfour, E.; Leto, J.; Barritault, M.; Barberis, C.; Meyer, J.; Dauphin, B.; Le Guern, A.-S.; Leflèche, A.; Badell, E.; Guiso, N.; Leclercq, A.; Le Monnier, A.; Lecuit, M.; Rodriguez-Nava, V.; Bergeron, E.; Raymond, J.; Vimont, S.; Bille, E.; Carbonnelle, E.; Guet-Revillet, H.; Lécuyer, H.; Beretti, J.-L.; Vay, C.; Berche, P.; Ferroni, A.; Nassif, X.

    2012-01-01

    Matrix-assisted laser desorption ionization–time of flight mass spectrometry (MALDI-TOF MS) is a rapid and simple microbial identification method. Previous reports using the Biotyper system suggested that this technique requires a preliminary extraction step to identify Gram-positive rods (GPRs), a technical issue that may limit the routine use of this technique to identify pathogenic GPRs in the clinical setting. We tested the accuracy of the MALDI-TOF MS Andromas strategy to identify a set of 659 GPR isolates representing 16 bacterial genera and 72 species by the direct colony method. This bacterial collection included 40 C. diphtheriae, 13 C. pseudotuberculosis, 19 C. ulcerans, and 270 other Corynebacterium isolates, 32 L. monocytogenes and 24 other Listeria isolates, 46 Nocardia, 75 Actinomyces, 18 Actinobaculum, 11 Propionibacterium acnes, 18 Propionibacterium avidum, 30 Lactobacillus, 21 Bacillus, 2 Rhodococcus equi, 2 Erysipelothrix rhusiopathiae, and 38 other GPR isolates, all identified by reference techniques. Totals of 98.5% and 1.2% of non-Listeria GPR isolates were identified to the species or genus level, respectively. Except for L. grayi isolates that were identified to the species level, all other Listeria isolates were identified to the genus level because of highly similar spectra. These data demonstrate that rapid identification of pathogenic GPRs can be obtained without an extraction step by MALDI-TOF mass spectrometry. PMID:22692743

  3. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; Russell, Samuel S.

    2012-01-01

    Objective: Develop a software application utilizing high performance computing techniques, including general purpose graphics processing units (GPGPUs), for the analysis and visualization of large thermographic data sets. Over the past several years, an increasing effort among scientists and engineers to utilize graphics processing units (GPUs) in a more general purpose fashion is allowing for previously unobtainable levels of computation by individual workstations. As data sets grow, the methods needed to work with them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU, which yields significant increases in performance. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Image processing is one area where GPUs are being used to greatly increase the performance of certain analysis and visualization techniques.

  4. DERMAL DRUG LEVELS OF ANTIBIOTIC (CEPHALEXIN) DETERMINED BY ELECTROPORATION AND TRANSCUTANEOUS SAMPLING (ETS) TECHNIQUE

    PubMed Central

    Sammeta, SM; Vaka, SRK; Murthy, S. Narasimha

    2009-01-01

    The purpose of this project was to assess the validity of a novel “Electroporation and transcutaneous sampling (ETS)” technique for sampling cephalexin from the dermal extracellular fluid (ECF). This work also investigated the plausibility of using cephalexin levels in the dermal ECF as a surrogate for the drug level in the synovial fluid. In vitro and in vivo studies were carried out using hairless rats to assess the workability of ETS. Cephalexin (20 mg/kg) was administered i.v. through the tail vein and the time course of drug concentration in the plasma was determined. In the same rats, cephalexin concentration in the dermal ECF was determined by ETS and microdialysis techniques. In a separate set of rats, only intraarticular microdialysis was carried out to determine the time course of cephalexin concentration in synovial fluid. The drug concentrations in the dermal ECF determined by ETS and by microdialysis did not differ significantly from each other, and neither did the pharmacokinetic parameters. The results support the validity of the ETS technique. Further, there was a good correlation (~0.9) between synovial fluid and dermal ECF levels of cephalexin, indicating that dermal ECF levels could be used as a potential surrogate for cephalexin concentration in the synovial fluid. PMID:19067398

  5. Instructional multimedia: An investigation of student and instructor attitudes and student study behavior

    PubMed Central

    2011-01-01

    Background Educators in allied health and medical education programs utilize instructional multimedia to facilitate psychomotor skill acquisition in students. This study examines the effects of instructional multimedia on student and instructor attitudes and student study behavior. Methods Subjects consisted of 45 student physical therapists from two universities. Two skill sets were taught during the course of the study. Skill set one consisted of knee examination techniques and skill set two consisted of ankle/foot examination techniques. For each skill set, subjects were randomly assigned to either a control group or an experimental group. The control group was taught with live demonstration of the examination skills, while the experimental group was taught using multimedia. A cross-over design was utilized so that subjects in the control group for skill set one served as the experimental group for skill set two, and vice versa. During the last week of the study, students and instructors completed written questionnaires to assess attitude toward teaching methods, and students answered questions regarding study behavior. Results There were no differences between the two instructional groups in attitudes, but students in the experimental group for skill set two reported greater study time alone compared to other groups. Conclusions Multimedia provides an efficient method to teach psychomotor skills to students entering the health professions. Both students and instructors identified advantages and disadvantages for both instructional techniques. Responses relative to instructional multimedia emphasized efficiency, processing level, autonomy, and detail of instruction compared to live presentation. Students and instructors identified conflicting views of instructional detail and control of the content. PMID:21693058

  6. Instructional multimedia: an investigation of student and instructor attitudes and student study behavior.

    PubMed

    Smith, A Russell; Cavanaugh, Cathy; Moore, W Allen

    2011-06-21

    Educators in allied health and medical education programs utilize instructional multimedia to facilitate psychomotor skill acquisition in students. This study examines the effects of instructional multimedia on student and instructor attitudes and student study behavior. Subjects consisted of 45 student physical therapists from two universities. Two skill sets were taught during the course of the study. Skill set one consisted of knee examination techniques and skill set two consisted of ankle/foot examination techniques. For each skill set, subjects were randomly assigned to either a control group or an experimental group. The control group was taught with live demonstration of the examination skills, while the experimental group was taught using multimedia. A cross-over design was utilized so that subjects in the control group for skill set one served as the experimental group for skill set two, and vice versa. During the last week of the study, students and instructors completed written questionnaires to assess attitude toward teaching methods, and students answered questions regarding study behavior. There were no differences between the two instructional groups in attitudes, but students in the experimental group for skill set two reported greater study time alone compared to other groups. Multimedia provides an efficient method to teach psychomotor skills to students entering the health professions. Both students and instructors identified advantages and disadvantages for both instructional techniques. Responses relative to instructional multimedia emphasized efficiency, processing level, autonomy, and detail of instruction compared to live presentation. Students and instructors identified conflicting views of instructional detail and control of the content.

  7. Pinch technique and the Batalin-Vilkovisky formalism

    NASA Astrophysics Data System (ADS)

    Binosi, Daniele; Papavassiliou, Joannis

    2002-07-01

    In this paper we take the first step towards a nondiagrammatic formulation of the pinch technique. In particular we proceed into a systematic identification of the parts of the one-loop and two-loop Feynman diagrams that are exchanged during the pinching process in terms of unphysical ghost Green's functions; the latter appear in the standard Slavnov-Taylor identity satisfied by the tree-level and one-loop three-gluon vertex. This identification allows for the consistent generalization of the intrinsic pinch technique to two loops, through the collective treatment of entire sets of diagrams, instead of the laborious algebraic manipulation of individual graphs, and sets up the stage for the generalization of the method to all orders. We show that the task of comparing the effective Green's functions obtained by the pinch technique with those computed in the background field method Feynman gauge is significantly facilitated when employing the powerful quantization framework of Batalin and Vilkovisky. This formalism allows for the derivation of a set of useful nonlinear identities, which express the background field method Green's functions in terms of the conventional (quantum) ones and auxiliary Green's functions involving the background source and the gluonic antifield; these latter Green's functions are subsequently related by means of a Schwinger-Dyson type of equation to the ghost Green's functions appearing in the aforementioned Slavnov-Taylor identity.

  8. Online monitoring of oil film using electrical capacitance tomography and level set method.

    PubMed

    Xue, Q; Sun, B Y; Cui, Z Q; Ma, M; Wang, H X

    2015-08-01

    In oil-air lubrication systems, electrical capacitance tomography (ECT) provides a promising way to monitor the oil film in pipelines by reconstructing cross-sectional oil distributions in real time. However, for a small-diameter pipe and a thin oil film, the film thickness is hard to observe visually because the oil-air interface is not obvious in the reconstructed images, and artifacts in the reconstructions seriously degrade the effectiveness of image segmentation techniques such as the level set method. Moreover, the standard level set method is too slow for online monitoring. To address these problems, a modified level set method is developed: a distance regularized level set evolution formulation is extended to image two-phase flow online using an ECT system; a narrowband image filter is defined to eliminate the influence of artifacts; and, exploiting the continuity of the oil distribution over time, the oil-air interface detected in one frame is used as the initial contour for the subsequent frame, which greatly accelerates the propagation from the initial contour to the boundary and makes real-time tracking possible. To verify the feasibility of the proposed method, an oil-air lubrication facility with a 4 mm inner diameter pipe is measured in normal operation using an 8-electrode ECT system. Both simulation and experimental results indicate that the modified level set method is capable of visualizing the oil-air interface accurately online.
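
    The key speed-up described above is warm-starting each frame's level set evolution with the contour detected in the previous frame. The sketch below illustrates that idea with a deliberately simplified Chan-Vese-style evolution standing in for the distance regularized formulation; the function names, parameters, and toy data are assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np
    from scipy import ndimage

    def evolve_level_set(image, phi0, n_iter=60, dt=0.5, mu=0.2):
        """Very simplified Chan-Vese-style evolution (stand-in for DRLSE)."""
        phi = phi0.astype(float).copy()
        for _ in range(n_iter):
            inside = phi > 0
            c1 = image[inside].mean() if inside.any() else 0.0
            c2 = image[~inside].mean() if (~inside).any() else 0.0
            region_force = (image - c1) ** 2 - (image - c2) ** 2
            smoothing = ndimage.gaussian_filter(phi, 1.0) - phi  # curvature-like term
            phi += dt * (-region_force + mu * smoothing)
        return phi

    def track_oil_film(frames, phi_init):
        """Warm-start each frame with the contour found in the previous frame."""
        phi, masks = phi_init, []
        for frame in frames:
            phi = evolve_level_set(frame, phi)
            masks.append(phi > 0)
        return masks

    # toy demo: a bright "oil" disk drifting slightly between two reconstructed frames
    yy, xx = np.mgrid[0:64, 0:64]
    frames = [(np.hypot(xx - c, yy - 32) < 10).astype(float) for c in (30, 33)]
    phi0 = 12.0 - np.hypot(xx - 30, yy - 32)  # rough initial contour
    masks = track_oil_film(frames, phi0)
    ```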

  9. Online monitoring of oil film using electrical capacitance tomography and level set method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xue, Q., E-mail: xueqian@tju.edu.cn; Ma, M.; Sun, B. Y.

    2015-08-15

    In oil-air lubrication systems, electrical capacitance tomography (ECT) provides a promising way to monitor the oil film in pipelines by reconstructing cross-sectional oil distributions in real time. However, for a small-diameter pipe and a thin oil film, the film thickness is hard to observe visually because the oil-air interface is not obvious in the reconstructed images, and artifacts in the reconstructions seriously degrade the effectiveness of image segmentation techniques such as the level set method. Moreover, the standard level set method is too slow for online monitoring. To address these problems, a modified level set method is developed: a distance regularized level set evolution formulation is extended to image two-phase flow online using an ECT system; a narrowband image filter is defined to eliminate the influence of artifacts; and, exploiting the continuity of the oil distribution over time, the oil-air interface detected in one frame is used as the initial contour for the subsequent frame, which greatly accelerates the propagation from the initial contour to the boundary and makes real-time tracking possible. To verify the feasibility of the proposed method, an oil-air lubrication facility with a 4 mm inner diameter pipe is measured in normal operation using an 8-electrode ECT system. Both simulation and experimental results indicate that the modified level set method is capable of visualizing the oil-air interface accurately online.

  10. Method of optimization onboard communication network

    NASA Astrophysics Data System (ADS)

    Platoshin, G. A.; Selvesuk, N. I.; Semenov, M. E.; Novikov, V. M.

    2018-02-01

    In this article, optimization levels for the onboard communication network (OCN) are proposed. We define the basic parameters necessary for the evaluation and comparison of modern OCNs, and we identify a set of initial data for modeling the OCN. We also propose a mathematical technique for implementing the OCN optimization procedure, based on the principles and ideas of binary programming. It is shown that the binary programming technique allows an inherently optimal solution to be obtained for avionics tasks. An example applying the proposed approach to the problem of device assignment in an OCN is considered.
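
    As a hedged illustration of the binary-programming idea (not the authors' formulation), the sketch below enumerates 0/1 assignments of a handful of devices to network channels and keeps the assignment that minimizes load imbalance; all names and numbers are assumed for the example.

    ```python
    from itertools import product

    # Hypothetical instance: assign 4 devices to 2 channels (a 0/1 decision per
    # device-channel pair), minimizing the load imbalance between channels.
    loads = [3, 5, 2, 4]          # assumed bandwidth demand of each device
    n_channels = 2

    best_cost, best_assignment = None, None
    for assignment in product(range(n_channels), repeat=len(loads)):
        channel_load = [0] * n_channels
        for device, ch in enumerate(assignment):
            channel_load[ch] += loads[device]
        cost = max(channel_load) - min(channel_load)   # imbalance as the objective
        if best_cost is None or cost < best_cost:
            best_cost, best_assignment = cost, assignment

    print("device -> channel:", best_assignment, "imbalance:", best_cost)
    ```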

  11. Numerical Simulation of Dynamic Contact Angles and Contact Lines in Multiphase Flows using Level Set Method

    NASA Astrophysics Data System (ADS)

    Pendota, Premchand

    Many physical phenomena and industrial applications involve multiphase fluid flows, and hence it is of high importance to be able to simulate various aspects of these flows accurately. The Dynamic Contact Angles (DCA) and the contact lines at the wall boundaries are two such important aspects. In the past few decades, many mathematical models were developed for predicting the contact angles of the interface with the wall boundary under various flow conditions. These models are used to incorporate the physics of DCA and contact line motion in numerical simulations using various interface capturing/tracking techniques. In the current thesis, a simple approach to incorporate the static and dynamic contact angle boundary conditions using the level set method is developed and implemented in the multiphase CFD codes LIT (Level set Interface Tracking) (Herrmann (2008)) and NGA (flow solver) (Desjardins et al (2008)). Various DCA models and associated boundary conditions are reviewed. In addition, numerical aspects such as the occurrence of a stress singularity at the contact lines and grid convergence of the macroscopic interface shape are dealt with in the context of the level set approach.
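
    One common way to impose a static contact angle in a level set solver is through ghost cells at the wall, enforcing the condition grad(phi) . n_wall = cos(theta) for a signed distance function. The sketch below is only illustrative and is not taken from the thesis; the sign convention depends on the phase labeling and on how theta is measured.

    ```python
    import numpy as np

    def apply_static_contact_angle(phi, dy, theta):
        """Fill the ghost row below a wall at j = 0 so that the signed-distance
        level set meets the wall at angle theta (one-sided difference,
        |grad phi| assumed ~ 1). Sign conventions are illustrative assumptions."""
        phi = phi.copy()
        phi[0, :] = phi[1, :] - dy * np.cos(theta)   # ghost row under the wall
        return phi

    # example: a 2D signed-distance field with a ghost row at j = 0
    yy, xx = np.mgrid[0:32, 0:64].astype(float)
    phi = xx - 32.0                                   # vertical interface at x = 32
    phi = apply_static_contact_angle(phi, dy=1.0, theta=np.deg2rad(60))
    ```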

  12. Clinical utility of multiplex ligation-dependent probe amplification technique in identification of aetiology of unexplained mental retardation: a study in 203 Indian patients.

    PubMed

    Boggula, Vijay R; Shukla, Anju; Danda, Sumita; Hariharan, Sankar V; Nampoothiri, Sheela; Kumar, Rashmi; Phadke, Shubha R

    2014-01-01

    Developmental delay (DD)/mental retardation, also described as intellectual disability (ID), is seen in 1-3 per cent of the general population. Diagnosis continues to be a challenge at the clinical level. With the advancement of new molecular cytogenetic techniques such as cytogenetic microarray (CMA) and multiplex ligation-dependent probe amplification (MLPA), many microdeletion/microduplication syndromes with DD/ID are now delineated. The MLPA technique can probe 40-50 genomic regions in a single reaction and is being used for evaluation of cases with DD/ID. In this study we evaluated the clinical utility of MLPA techniques with different probe sets to identify the aetiology of unexplained mental retardation in patients with ID/DD. A total of 203 randomly selected DD/ID cases with/without malformations were studied. MLPA probe sets for subtelomeric regions (P070/P036), common microdeletions/microduplications (P245-A2) and the X-chromosome (P106) were used. Positive cases identified with the MLPA technique were confirmed using either fluorescence in situ hybridization (FISH) or follow-up confirmatory MLPA probe sets. The overall detection rate was found to be 9.3 per cent (19 out of 203). The detection rates were 6.9 and 7.4 per cent for the common microdeletion/microduplication and subtelomeric probe sets, respectively. No abnormality was detected with the probe set for X-linked ID. The subtelomeric abnormalities detected included deletions of the 1p36.33, 4p, 5p, 9p, 9q and 13q telomeric regions and duplication of 9pter. The deletions/duplications detected in non-telomeric regions include the Prader-Willi/Angelman, Williams syndrome, Smith-Magenis syndrome and velocardiofacial syndrome regions. Our results show that the use of the P245-A2 and P070/P036-E1 probes gives a good diagnostic yield. Though MLPA cannot probe the whole genome like cytogenetic microarray, its ease and relatively low cost make it an important technique for evaluation of cases with DD/ID.

  13. From papers to practices: district level priority setting processes and criteria for family planning, maternal, newborn and child health interventions in Tanzania.

    PubMed

    Chitama, Dereck; Baltussen, Rob; Ketting, Evert; Kamazima, Switbert; Nswilla, Anna; Mujinja, Phares G M

    2011-10-21

    Successful priority setting is increasingly known to be an important aspect of achieving better family planning, maternal, newborn and child health (FMNCH) outcomes in developing countries. However, far too little attention has been paid to capturing and analysing the priority setting processes and criteria for FMNCH at the district level. This paper seeks to capture and analyse the priority setting processes and criteria for FMNCH at the district level in Tanzania. Specifically, we assess the FMNCH actors' engagement and understanding, the criteria used in decision making and the way criteria are identified, and the information or evidence and tools used to prioritize FMNCH interventions at the district level in Tanzania. We conducted an exploratory study mixing both qualitative and quantitative methods to capture and analyse priority setting for FMNCH at the district level, and to identify the criteria for priority setting. We purposively sampled the participants to be included in the study. We collected the data using the nominal group technique (NGT), in-depth interviews (IDIs) with key informants and documentary review. We analysed the collected data using content analysis for the qualitative data and correlation analysis for the quantitative data. We found a number of shortfalls in the districts' priority setting processes and criteria which may lead to inefficient and unfair priority setting decisions in FMNCH. In addition, participants identified the priority setting criteria and established the perceived relative importance of the identified criteria. However, we noted that differences exist in the relative importance attached to the criteria by different stakeholders in the districts. In Tanzania, FMNCH contents in both general development policies and sector policies are well articulated. However, the current priority setting processes for FMNCH at the district level are wanting in several respects, rendering the priority setting process for FMNCH inefficient and unfair (or unsuccessful). To improve the district-level priority setting process for FMNCH interventions, we recommend a fundamental revision of the current process. The improvement strategy should utilize rigorous research methods combining both normative and empirical approaches to further analyse and correct past problems, while at the same time drawing on good practices to improve the current priority setting process for FMNCH interventions. The suggested improvements might give room for an efficient and fair (or successful) priority setting process for FMNCH interventions.

  14. Intelligent control of mixed-culture bioprocesses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoner, D.L.; Larsen, E.D.; Miller, K.S.

    A hierarchical control system is being developed and applied to a mixed-culture bioprocess in a continuous stirred tank reactor. A bioreactor, with its inherent complexity and nonlinear behavior, was an interesting yet difficult application for control theory. The bottom level of the hierarchy was implemented as a number of integrated set point controls and data acquisition modules. Within the second level was a diagnostic system that used expert knowledge to determine the operational status of the sensors, actuators, and control modules. A diagnostic program was successfully implemented for the detection of stirrer malfunctions, and to monitor liquid delivery rates and recalibrate the pumps when deviations from desired flow rates occurred. The highest control level was a supervisory shell that was developed using expert knowledge and the history of the reactor operation to determine the set points required to meet a set of production criteria. At this stage the supervisory shell analyzed the data to determine the state of the system. In future implementations, this shell will determine the set points required to optimize a cost function using expert knowledge and adaptive learning techniques.

  15. Using Multilevel Modeling in Language Assessment Research: A Conceptual Introduction

    ERIC Educational Resources Information Center

    Barkaoui, Khaled

    2013-01-01

    This article critiques traditional single-level statistical approaches (e.g., multiple regression analysis) to examining relationships between language test scores and variables in the assessment setting. It highlights the conceptual, methodological, and statistical problems associated with these techniques in dealing with multilevel or nested…
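
    For readers unfamiliar with multilevel models, the following sketch fits a two-level random-intercept model in which test scores are nested within raters, using statsmodels; the simulated data and variable names are assumptions for illustration and are not drawn from the article.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_raters, n_per = 20, 15
    rater = np.repeat(np.arange(n_raters), n_per)
    rater_effect = rng.normal(0, 2, n_raters)[rater]        # level-2 (between-rater) variation
    task_difficulty = rng.normal(0, 1, n_raters * n_per)    # level-1 predictor
    score = 50 + 3 * task_difficulty + rater_effect + rng.normal(0, 1, n_raters * n_per)

    df = pd.DataFrame({"score": score, "task_difficulty": task_difficulty, "rater": rater})

    # two-level (random-intercept) model: scores vary within and between raters
    model = smf.mixedlm("score ~ task_difficulty", data=df, groups=df["rater"])
    print(model.fit().summary())
    ```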

  16. Improving K-12 STEM Education Outcomes through Technological Integration

    ERIC Educational Resources Information Center

    Urban, Michael J., Ed.; Falvo, David A., Ed.

    2016-01-01

    The application of technology in classroom settings has equipped educators with innovative tools and techniques for effective teaching practice. Integrating digital technologies at the elementary and secondary levels helps to enrich the students' learning experience and maximize competency in the areas of science, technology, engineering, and…

  17. Dutch Young Adults Ratings of Behavior Change Techniques Applied in Mobile Phone Apps to Promote Physical Activity: A Cross-Sectional Survey.

    PubMed

    Belmon, Laura S; Middelweerd, Anouk; Te Velde, Saskia J; Brug, Johannes

    2015-11-12

    Interventions delivered through new device technology, including mobile phone apps, appear to be an effective method to reach young adults. Previous research indicates that self-efficacy and social support for physical activity and self-regulation behavior change techniques (BCT), such as goal setting, feedback, and self-monitoring, are important for promoting physical activity; however, little is known about evaluations by the target population of BCTs applied to physical activity apps and whether these preferences are associated with individual personality characteristics. This study aimed to explore young adults' opinions regarding BCTs (including self-regulation techniques) applied in mobile phone physical activity apps, and to examine associations between personality characteristics and ratings of BCTs applied in physical activity apps. We conducted a cross-sectional online survey among healthy 18 to 30-year-old adults (N=179). Data on participants' gender, age, height, weight, current education level, living situation, mobile phone use, personality traits, exercise self-efficacy, exercise self-identity, total physical activity level, and whether participants met Dutch physical activity guidelines were collected. Items for rating BCTs applied in physical activity apps were selected from a hierarchical taxonomy for BCTs, and were clustered into three BCT categories according to factor analysis: "goal setting and goal reviewing," "feedback and self-monitoring," and "social support and social comparison." Most participants were female (n=146), highly educated (n=169), physically active, and had high levels of self-efficacy. In general, we observed high ratings of BCTs aimed to increase "goal setting and goal reviewing" and "feedback and self-monitoring," but not for BCTs addressing "social support and social comparison." Only 3 (out of 16 tested) significant associations between personality characteristics and BCTs were observed: "agreeableness" was related to more positive ratings of BCTs addressing "goal setting and goal reviewing" (OR 1.61, 95% CI 1.06-2.41), "neuroticism" was related to BCTs addressing "feedback and self-monitoring" (OR 0.76, 95% CI 0.58-1.00), and "exercise self-efficacy" was related to a high rating of BCTs addressing "feedback and self-monitoring" (OR 1.06, 95% CI 1.02-1.11). No associations were observed between personality characteristics (ie, personality, exercise self-efficacy, exercise self-identity) and participants' ratings of BCTs addressing "social support and social comparison." Young Dutch physically active adults rate self-regulation techniques as most positive and techniques addressing social support as less positive among mobile phone apps that aim to promote physical activity. Such ratings of BCTs differ according to personality traits and exercise self-efficacy. Future research should focus on which behavior change techniques in app-based interventions are most effective to increase physical activity.

  18. Quantum computational studies, spectroscopic (FT-IR, FT-Raman and UV-Vis) profiling, natural hybrid orbital and molecular docking analysis on 2,4 Dibromoaniline

    NASA Astrophysics Data System (ADS)

    Abraham, Christina Susan; Prasana, Johanan Christian; Muthu, S.; Rizwana B, Fathima; Raja, M.

    2018-05-01

    This research comprises an investigation of the molecular structure, vibrational assignments, bonding and anti-bonding nature, and the nonlinear optical, electronic and thermodynamic properties of the molecule. The research is conducted at two levels: the first level employs the spectroscopic characterization techniques FT-IR, FT-Raman and UV-Vis; at the second level the experimentally obtained data are analyzed through theoretical methods using Density Functional Theory, which rests on solving the Schrodinger equation for many-body systems. A comparison is drawn between the two levels and discussed. The electrophilicity index theoretically indicates that the title molecule is bio-active, which motivates further property analyses of the molecule. Using molecular docking techniques, the target molecule is found to fit well as a centromere-associated protein inhibitor. The larger basis set 6-311++G(d,p) is used to obtain results in closer agreement with the experimental data. The results for the organic amine 2,4-dibromoaniline are analyzed and discussed.
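
    As a hedged illustration of the kind of calculation described, the snippet below runs a single-point DFT calculation with the 6-311++G(d,p) basis in PySCF; the water geometry and the B3LYP functional are placeholders of my own choosing, since the abstract does not specify the functional and the actual molecule is 2,4-dibromoaniline.

    ```python
    from pyscf import gto, dft

    # placeholder geometry (water); in practice one would supply the
    # optimized 2,4-dibromoaniline coordinates
    mol = gto.M(atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587",
                basis="6-311++g(d,p)")
    mf = dft.RKS(mol)          # restricted Kohn-Sham DFT
    mf.xc = "b3lyp"            # assumed functional; the paper's choice may differ
    energy = mf.kernel()       # SCF total energy in hartree
    ```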

  19. Evaluation of the procedure 1A component of the 1980 US/Canada wheat and barley exploratory experiment

    NASA Technical Reports Server (NTRS)

    Chapman, G. M. (Principal Investigator); Carnes, J. G.

    1981-01-01

    Several techniques which use clusters generated by a new clustering algorithm, CLASSY, are proposed as alternatives to random sampling to obtain greater precision in crop proportion estimation: (1) Proportional Allocation/Relative Count Estimator (PA/RCE) uses proportional allocation of dots to clusters on the basis of cluster size and a relative count cluster-level estimate; (2) Proportional Allocation/Bayes Estimator (PA/BE) uses proportional allocation of dots to clusters and a Bayesian cluster-level estimate; and (3) Bayes Sequential Allocation/Bayesian Estimator (BSA/BE) uses sequential allocation of dots to clusters and a Bayesian cluster-level estimate. Clustering is an effective method for making proportion estimates. It is estimated that, to obtain the same precision with random sampling as obtained by the proportional sampling of 50 dots with an unbiased estimator, samples of 85 or 166 would need to be taken if dot sets with AI labels (integrated procedure) or ground truth labels, respectively, were input. Dot reallocation provides dot sets that are unbiased. It is recommended that these proportion estimation techniques be maintained, particularly the PA/BE because it provides the greatest precision.

  20. From Data to Images:. a Shape Based Approach for Fluorescence Tomography

    NASA Astrophysics Data System (ADS)

    Dorn, O.; Prieto, K. E.

    2012-12-01

    Fluorescence tomography is treated as a shape reconstruction problem for a coupled system of two linear transport equations in 2D. The shape evolution is designed in order to minimize the least squares data misfit cost functional either in the excitation frequency or in the emission frequency. Furthermore, a level set technique is employed for numerically modelling the evolving shapes. Numerical results are presented which demonstrate the performance of this novel technique in the situation of noisy simulated data in 2D.

  1. Anther Culture in Pepper (Capsicum annuum L.).

    PubMed

    Parra-Vega, Verónica; Seguí-Simarro, Jose M

    2016-01-01

    Anther culture is the most popular of the techniques used to induce microspore embryogenesis. This technique is well established in a wide range of crops, including pepper. In this chapter, a protocol for anther culture in pepper is described. The protocol presented here includes the steps from the selection of buds from donor plants to the regeneration and acclimatization of doubled haploid plants derived from the embryos, as well as a description of how to analyze the ploidy level of the regenerated plants.

  2. Privacy Protection by Matrix Transformation

    NASA Astrophysics Data System (ADS)

    Yang, Weijia

    Privacy preservation is indispensable in data mining. In this paper, we present a novel clustering method for distributed multi-party data sets using orthogonal transformation and data randomization techniques. Our method can not only protect privacy in the face of collusion, but also achieve a higher level of accuracy compared to existing methods.
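
    The core property exploited by such matrix-transformation approaches is that an orthogonal transformation preserves Euclidean distances, so distance-based clustering on the transformed data gives the same result while the original attribute values are hidden. A minimal sketch of that property (not the paper's exact scheme, which additionally applies data randomization):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))             # a party's private data (assumed)

    # random orthogonal matrix via QR decomposition of a Gaussian matrix
    Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))
    X_protected = X @ Q                        # rotation hides the original attributes

    # pairwise distances (hence distance-based clustering) are preserved
    d_orig = np.linalg.norm(X[0] - X[1])
    d_prot = np.linalg.norm(X_protected[0] - X_protected[1])
    print(np.isclose(d_orig, d_prot))          # True
    ```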

  3. Rule-governed Approaches to Physics--Newton's Third Law.

    ERIC Educational Resources Information Center

    Maloney, David P.

    1984-01-01

    Describes an approach to assessing the use of rules in solving problems related to Newton's third law of motion. Discusses the problems used, the method of questioning, the scoring of problem sets, and a general overview of the use of the technique in helping the teacher address students' conceptual levels. (JM)

  4. Calculus Students' and Instructors' Conceptualizations of Slope: A Comparison across Academic Levels

    ERIC Educational Resources Information Center

    Nagle, Courtney; Moore-Russo, Deborah; Viglietti, Janine; Martin, Kristi

    2013-01-01

    This study considers tertiary calculus students' and instructors' conceptualizations of slope. Qualitative techniques were employed to classify responses to 5 items using conceptualizations of slope identified across various research settings. Students' responses suggest that they rely on procedurally based conceptualizations of…

  5. The National Health Educator Job Analysis 2010: Process and Outcomes

    ERIC Educational Resources Information Center

    Doyle, Eva I.; Caro, Carla M.; Lysoby, Linda; Auld, M. Elaine; Smith, Becky J.; Muenzen, Patricia M.

    2012-01-01

    The National Health Educator Job Analysis 2010 was conducted to update the competencies model for entry- and advanced-level health educators. Qualitative and quantitative methods were used. Structured interviews, focus groups, and a modified Delphi technique were implemented to engage 59 health educators from diverse work settings and experience…

  6. A study of interior noise levels, noise sources and transmission paths in light aircraft

    NASA Technical Reports Server (NTRS)

    Hayden, R. E.; Murray, B. S.; Theobald, M. A.

    1983-01-01

    The interior noise levels and spectral characteristics of 18 single- and twin-engine propeller-driven light aircraft, and source-path diagnosis of a single-engine aircraft which was considered representative of a large part of the fleet, were studied. The purpose of the flight surveys was to measure internal noise levels and identify principal noise sources and paths under a carefully controlled and standardized set of flight procedures. The diagnostic tests consisted of flights and ground tests in which various parts of the aircraft, such as engine mounts, the engine compartment, exhaust pipe, individual panels, and the wing strut, were instrumented to determine source levels and transmission path strengths using the transfer function technique. Predominant source and path combinations are identified. Experimental techniques are described. Data, transfer function calculations to derive source-path contributions to the cabin acoustic environment, and implications of the findings for noise control design are analyzed.
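
    The transfer function technique referred to above estimates, for each candidate path, a frequency response between a source measurement (for example, engine-mount acceleration) and the cabin response. A minimal sketch of the standard H1 estimate on simulated signals (sample rate, filter, and noise level are all assumptions):

    ```python
    import numpy as np
    from scipy import signal

    fs = 2048                                  # assumed sample rate, Hz
    rng = np.random.default_rng(1)
    x = rng.normal(size=fs * 10)               # source signal, e.g. engine-mount acceleration
    # pretend the cabin response is a filtered, noisy version of the source
    b, a = signal.butter(4, 0.2)
    y = signal.lfilter(b, a, x) + 0.1 * rng.normal(size=x.size)

    f, Pxy = signal.csd(x, y, fs=fs, nperseg=1024)   # cross-spectral density
    _, Pxx = signal.welch(x, fs=fs, nperseg=1024)    # source auto-spectrum
    H1 = Pxy / Pxx                             # H1 transfer-function estimate vs. frequency f
    ```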

  7. Redesign of a Variable-Gain Output Feedback Longitudinal Controller Flown on the High-Alpha Research Vehicle (HARV)

    NASA Technical Reports Server (NTRS)

    Ostroff, Aaron J.

    1998-01-01

    This paper describes a redesigned longitudinal controller that flew on the High-Alpha Research Vehicle (HARV) during calendar years (CY) 1995 and 1996. Linear models are developed for both the modified controller and a baseline controller that was flown in CY 1994. The modified controller was developed with three gain sets for flight evaluation, and several linear analysis results are shown comparing the gain sets. A Neal-Smith flying qualities analysis shows that performance for the low- and medium-gain sets is near the level 1 boundary, depending upon the bandwidth assumed, whereas the high-gain set indicates a sensitivity problem. A newly developed high-alpha Bode envelope criterion indicates that the control system gains may be slightly high, even for the low-gain set. A large motion-base simulator in the United Kingdom was used to evaluate the various controllers. Desired performance, which appeared to be satisfactory for flight, was generally met with both the low- and medium-gain sets. Both the high-gain set and the baseline controller were very sensitive, and it was easy to generate pilot-induced oscillation (PIO) in some of the target-tracking maneuvers. Flight target-tracking results varied from level 1 to level 3 and from no sensitivity to PIO. These results were related to pilot technique and whether actuator rate saturation was encountered.

  8. Quantitative characterization of metastatic disease in the spine. Part I. Semiautomated segmentation using atlas-based deformable registration and the level set method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardisty, M.; Gordon, L.; Agarwal, P.

    2007-08-15

    Quantitative assessment of metastatic disease in bone is often considered immeasurable and, as such, patients with skeletal metastases are often excluded from clinical trials. In order to effectively quantify the impact of metastatic tumor involvement in the spine, accurate segmentation of the vertebra is required. Manual segmentation can be accurate but involves extensive and time-consuming user interaction. Potential solutions to automating segmentation of metastatically involved vertebrae are demons deformable image registration and level set methods. The purpose of this study was to develop a semiautomated method to accurately segment tumor-bearing vertebrae using the aforementioned techniques. By maintaining the morphology of an atlas, the demons-level set composite algorithm was able to accurately differentiate between trans-cortical tumors and surrounding soft tissue of identical intensity. The algorithm successfully segmented both the vertebral body and trabecular centrum of tumor-involved and healthy vertebrae. This work validates our approach as equivalent in accuracy to an experienced user.

  9. Topology optimization in acoustics and elasto-acoustics via a level-set method

    NASA Astrophysics Data System (ADS)

    Desai, J.; Faure, A.; Michailidis, G.; Parry, G.; Estevez, R.

    2018-04-01

    Optimizing the shape and topology (S&T) of structures to improve their acoustic performance is quite challenging. The exact position of the structural boundary is usually of critical importance, which dictates the use of geometric methods for topology optimization instead of standard density approaches. The goal of the present work is to investigate different possibilities for handling topology optimization problems in acoustics and elasto-acoustics via a level-set method. From a theoretical point of view, we detail two equivalent ways to perform the derivation of surface-dependent terms and propose a smoothing technique for treating problems of boundary conditions optimization. In the numerical part, we examine the importance of the surface-dependent term in the shape derivative, neglected in previous studies found in the literature, on the optimal designs. Moreover, we test different mesh adaptation choices, as well as technical details related to the implicit surface definition in the level-set approach. We present results in two and three-space dimensions.

  10. A novel approach to segmentation and measurement of medical image using level set methods.

    PubMed

    Chen, Yao-Tien

    2017-06-01

    The study proposes a novel approach for segmentation and visualization, plus value-added surface area and volume measurements, for brain medical image analysis. The proposed method contains edge detection and Bayesian based level set segmentation, surface and volume rendering, and surface area and volume measurements for 3D objects of interest (i.e., brain tumor, brain tissue, or whole brain). Two extensions based on edge detection and Bayesian level set are first used to segment 3D objects. Ray casting and a modified marching cubes algorithm are then adopted to facilitate volume and surface visualization of the medical-image dataset. To provide physicians with more useful information for diagnosis, the surface area and volume of an examined 3D object are calculated by the techniques of linear algebra and surface integration. Experimental results are finally reported in terms of 3D object extraction, surface and volume rendering, and surface area and volume measurements for medical image analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
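
    A hedged sketch of the measurement step: extract a surface mesh from a segmented binary mask with marching cubes and report its surface area and voxel-count volume. scikit-image is an assumed dependency and the synthetic sphere is a placeholder; the paper's own implementation is described as using linear algebra and surface integration.

    ```python
    import numpy as np
    from skimage import measure

    # hypothetical binary mask of a segmented object (a sphere here) on a voxel grid
    z, y, x = np.ogrid[-32:32, -32:32, -32:32]
    mask = (x**2 + y**2 + z**2) <= 20**2

    spacing = (1.0, 1.0, 1.0)                  # assumed voxel size in mm
    verts, faces, _, _ = measure.marching_cubes(mask.astype(float), level=0.5,
                                                spacing=spacing)
    surface_area = measure.mesh_surface_area(verts, faces)   # mm^2
    volume = mask.sum() * np.prod(spacing)                   # mm^3
    print(surface_area, volume)
    ```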

  11. Bioinformatics/biostatistics: microarray analysis.

    PubMed

    Eichler, Gabriel S

    2012-01-01

    The quantity and complexity of the molecular-level data generated in both research and clinical settings require the use of sophisticated, powerful computational interpretation techniques. It is for this reason that bioinformatic analysis of complex molecular profiling data has become a fundamental technology in the development of personalized medicine. This chapter provides a high-level overview of the field of bioinformatics and outlines several classic bioinformatic approaches. The highlighted approaches can be aptly applied to nearly any sort of high-dimensional genomic, proteomic, or metabolomic experiment. Reviewed technologies in this chapter include traditional clustering analysis, the Gene Expression Dynamics Inspector (GEDI), GoMiner, Gene Set Enrichment Analysis (GSEA), and the Learner of Functional Enrichment (LeFE).

  12. A pilot study of solar water disinfection in the wilderness setting.

    PubMed

    Tedeschi, Christopher M; Barsi, Christopher; Peterson, Shane E; Carey, Kevin M

    2014-09-01

    Solar disinfection of water has been shown to be an effective treatment method in the developing world, but not specifically in a wilderness or survival setting. The current study sought to evaluate the technique using materials typically available in a wilderness or backcountry environment. Untreated surface water from a stream in rural Costa Rica was disinfected using the solar disinfection (SODIS) method, using both standard containers as well as containers and materials more readily available to a wilderness traveler. Posttreatment samples using polyethylene terephthalate (PET) bottles, as well as Nalgene and Platypus water containers, showed similarly decreased levels of Escherichia coli and total coliforms. The SODIS technique may be applicable in the wilderness setting using tools commonly available in the backcountry. In this limited trial, specific types of containers common in wilderness settings demonstrated similar performance to the standard containers. With further study, solar disinfection in appropriate conditions may be included as a viable treatment option for wilderness water disinfection. Copyright © 2014 Wilderness Medical Society. Published by Elsevier Inc. All rights reserved.

  13. Ray Casting of Large Multi-Resolution Volume Datasets

    NASA Astrophysics Data System (ADS)

    Lux, C.; Fröhlich, B.

    2009-04-01

    High quality volume visualization through ray casting on graphics processing units (GPU) has become an important approach for many application domains. We present a GPU-based, multi-resolution ray casting technique for the interactive visualization of massive volume data sets commonly found in the oil and gas industry. Large volume data sets are represented as a multi-resolution hierarchy based on an octree data structure. The original volume data is decomposed into small bricks of a fixed size acting as the leaf nodes of the octree. These nodes are the highest resolution of the volume. Coarser resolutions are represented through inner nodes of the hierarchy which are generated by down sampling eight neighboring nodes on a finer level. Due to limited memory resources of current desktop workstations and graphics hardware only a limited working set of bricks can be locally maintained for a frame to be displayed. This working set is chosen to represent the whole volume at different local resolution levels depending on the current viewer position, transfer function and distinct areas of interest. During runtime the working set of bricks is maintained in CPU- and GPU memory and is adaptively updated by asynchronously fetching data from external sources like hard drives or a network. The CPU memory hereby acts as a secondary level cache for these sources from which the GPU representation is updated. Our volume ray casting algorithm is based on a 3D texture-atlas in GPU memory. This texture-atlas contains the complete working set of bricks of the current multi-resolution representation of the volume. This enables the volume ray casting algorithm to access the whole working set of bricks through only a single 3D texture. For traversing rays through the volume, information about the locations and resolution levels of visited bricks are required for correct compositing computations. We encode this information into a small 3D index texture which represents the current octree subdivision on its finest level and spatially organizes the bricked data. This approach allows us to render a bricked multi-resolution volume data set utilizing only a single rendering pass with no loss of compositing precision. In contrast most state-of-the art volume rendering systems handle the bricked data as individual 3D textures, which are rendered one at a time while the results are composited into a lower precision frame buffer. Furthermore, our method enables us to integrate advanced volume rendering techniques like empty-space skipping, adaptive sampling and preintegrated transfer functions in a very straightforward manner with virtually no extra costs. Our interactive volume ray tracing implementation allows high quality visualizations of massive volume data sets of tens of Gigabytes in size on standard desktop workstations.

  14. Comparison of the resulting error in data fusion techniques when used with remote sensing, earth observation, and in-situ data sets for water quality applications

    NASA Astrophysics Data System (ADS)

    Ziemba, Alexander; El Serafy, Ghada

    2016-04-01

    Ecological modeling and water quality investigations are complex processes which can require a high level of parameterization and a multitude of varying data sets in order to properly execute the model in question. Since models are generally complex, their calibration and validation can benefit from the application of data and information fusion techniques. The data applied to ecological models come from a wide range of sources, such as remote sensing, earth observation, and in-situ measurements, resulting in a high variability in the temporal and spatial resolution of the various data sets available to water quality investigators. It is proposed that effective fusion into a comprehensive singular set will provide a more complete and robust data resource with which models can be calibrated, validated, and driven. Each individual product contains a unique valuation of error resulting from the method of measurement and the application of pre-processing techniques. The uncertainty and error are further compounded when the data being fused are of varying temporal and spatial resolution. In order to have a reliable fusion-based model and data set, the uncertainty of the results and the confidence interval of the data being reported must be effectively communicated to those who would utilize the data product or model outputs in a decision making process [2]. Here we review an array of data fusion techniques applied to various remote sensing, earth observation, and in-situ data sets whose domains vary in spatial and temporal resolution. The data sets examined are combined in a manner so that the various classifications of data, complementary, redundant, and cooperative, are all assessed to determine each classification's impact on the propagation and compounding of error. In order to assess the error of the fused data products, a comparison is conducted with data sets containing a known confidence interval and quality rating. We conclude with a quantification of the performance of the data fusion techniques and a recommendation on the feasibility of applying the fused products in operating forecast systems and modeling scenarios. The error bands and confidence intervals derived can be used to clarify the error and confidence of water quality variables produced by prediction and forecasting models. References [1] F. Castanedo, "A Review of Data Fusion Techniques", The Scientific World Journal, vol. 2013, pp. 1-19, 2013. [2] T. Keenan, M. Carbone, M. Reichstein and A. Richardson, "The model-data fusion pitfall: assuming certainty in an uncertain world", Oecologia, vol. 167, no. 3, pp. 587-597, 2011.

  15. Advantages of Synthetic Noise and Machine Learning for Analyzing Radioecological Data Sets.

    PubMed

    Shuryak, Igor

    2017-01-01

    The ecological effects of accidental or malicious radioactive contamination are insufficiently understood because of the hazards and difficulties associated with conducting studies in radioactively-polluted areas. Data sets from severely contaminated locations can therefore be small. Moreover, many potentially important factors, such as soil concentrations of toxic chemicals, pH, and temperature, can be correlated with radiation levels and with each other. In such situations, commonly-used statistical techniques like generalized linear models (GLMs) may not be able to provide useful information about how radiation and/or these other variables affect the outcome (e.g. abundance of the studied organisms). Ensemble machine learning methods such as random forests offer powerful alternatives. We propose that analysis of small radioecological data sets by GLMs and/or machine learning can be made more informative by using the following techniques: (1) adding synthetic noise variables to provide benchmarks for distinguishing the performances of valuable predictors from irrelevant ones; (2) adding noise directly to the predictors and/or to the outcome to test the robustness of analysis results against random data fluctuations; (3) adding artificial effects to selected predictors to test the sensitivity of the analysis methods in detecting predictor effects; (4) running a selected machine learning method multiple times (with different random-number seeds) to test the robustness of the detected "signal"; (5) using several machine learning methods to test the "signal's" sensitivity to differences in analysis techniques. Here, we applied these approaches to simulated data, and to two published examples of small radioecological data sets: (I) counts of fungal taxa in samples of soil contaminated by the Chernobyl nuclear power plant accident (Ukraine), and (II) bacterial abundance in soil samples under a ruptured nuclear waste storage tank (USA). We show that the proposed techniques were advantageous compared with the methodology used in the original publications where the data sets were presented. Specifically, our approach identified a negative effect of radioactive contamination in data set I, and suggested that in data set II stable chromium could have been a stronger limiting factor for bacterial abundance than the radionuclides 137Cs and 99Tc. This new information, which was extracted from these data sets using the proposed techniques, can potentially enhance the design of radioactive waste bioremediation.
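
    A minimal sketch of technique (1) above, adding a synthetic noise variable as an importance benchmark for a random forest; the simulated data, variable names, and effect sizes are assumptions for illustration only.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(42)
    n = 80                                       # small data set, as in radioecological settings
    radiation = rng.lognormal(size=n)
    ph = rng.normal(7, 0.5, size=n)
    abundance = 10 - 1.5 * np.log(radiation) + rng.normal(scale=1.0, size=n)

    # add a synthetic noise variable as a benchmark for "irrelevant" importance
    noise = rng.normal(size=n)
    X = np.column_stack([radiation, ph, noise])

    rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, abundance)
    for name, imp in zip(["radiation", "pH", "synthetic noise"], rf.feature_importances_):
        print(f"{name:15s} importance = {imp:.3f}")
    # predictors whose importance does not clearly exceed the noise benchmark
    # should be treated as uninformative
    ```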

  16. Advantages of Synthetic Noise and Machine Learning for Analyzing Radioecological Data Sets

    PubMed Central

    Shuryak, Igor

    2017-01-01

    The ecological effects of accidental or malicious radioactive contamination are insufficiently understood because of the hazards and difficulties associated with conducting studies in radioactively-polluted areas. Data sets from severely contaminated locations can therefore be small. Moreover, many potentially important factors, such as soil concentrations of toxic chemicals, pH, and temperature, can be correlated with radiation levels and with each other. In such situations, commonly-used statistical techniques like generalized linear models (GLMs) may not be able to provide useful information about how radiation and/or these other variables affect the outcome (e.g. abundance of the studied organisms). Ensemble machine learning methods such as random forests offer powerful alternatives. We propose that analysis of small radioecological data sets by GLMs and/or machine learning can be made more informative by using the following techniques: (1) adding synthetic noise variables to provide benchmarks for distinguishing the performances of valuable predictors from irrelevant ones; (2) adding noise directly to the predictors and/or to the outcome to test the robustness of analysis results against random data fluctuations; (3) adding artificial effects to selected predictors to test the sensitivity of the analysis methods in detecting predictor effects; (4) running a selected machine learning method multiple times (with different random-number seeds) to test the robustness of the detected “signal”; (5) using several machine learning methods to test the “signal’s” sensitivity to differences in analysis techniques. Here, we applied these approaches to simulated data, and to two published examples of small radioecological data sets: (I) counts of fungal taxa in samples of soil contaminated by the Chernobyl nuclear power plant accident (Ukraine), and (II) bacterial abundance in soil samples under a ruptured nuclear waste storage tank (USA). We show that the proposed techniques were advantageous compared with the methodology used in the original publications where the data sets were presented. Specifically, our approach identified a negative effect of radioactive contamination in data set I, and suggested that in data set II stable chromium could have been a stronger limiting factor for bacterial abundance than the radionuclides 137Cs and 99Tc. This new information, which was extracted from these data sets using the proposed techniques, can potentially enhance the design of radioactive waste bioremediation. PMID:28068401

  17. Poster - Thur Eve - 57: Craniospinal irradiation with jagged-junction IMRT approach without beam edge matching for field junctions.

    PubMed

    Cao, F; Ramaseshan, R; Corns, R; Harrop, S; Nuraney, N; Steiner, P; Aldridge, S; Liu, M; Carolan, H; Agranovich, A; Karva, A

    2012-07-01

    Craniospinal irradiation has traditionally treated the central nervous system using two or three adjacent field sets. An intensity-modulated radiotherapy (IMRT) technique (Jagged-Junction IMRT), which overcomes problems associated with field junctions and beam edge matching and improves planning and treatment setup efficiency with a homogeneous target dose distribution, was developed. Jagged-Junction IMRT was retrospectively planned on three patients with a prescription of 36 Gy in 20 fractions and compared to conventional treatment plans. The planning target volume (PTV) included the whole brain and spinal canal to the S3 vertebral level. The plan employed three field sets, each with a unique isocentre. One field set with seven fields treated the cranium. Two field sets treated the spine, each set using three fields. Fields from adjacent sets were overlapped and the optimization process smoothly integrated the dose inside the overlapped junction. For the Jagged-Junction IMRT plans versus the conventional technique, the average homogeneity index equaled 0.08±0.01 vs 0.12±0.02, and the conformity number equaled 0.79±0.01 vs 0.47±0.12. The 95% isodose surface covered (99.5±0.3)% of the PTV vs (98.1±2.0)%. Both the Jagged-Junction IMRT plans and the conventional plans had good sparing of the organs at risk. Jagged-Junction IMRT planning provided good dose homogeneity and conformity to the target while maintaining a low dose to the organs at risk. Jagged-Junction IMRT optimization smoothly distributed dose in the junction between field sets. Since there is no beam matching, this treatment technique is less likely to produce hot or cold spots at the junction, in contrast to conventional techniques. © 2012 American Association of Physicists in Medicine.

  18. Modelling wildland fire propagation by tracking random fronts

    NASA Astrophysics Data System (ADS)

    Pagnini, G.; Mentrelli, A.

    2013-11-01

    Wildland fire propagation is studied in the literature by two alternative approaches, namely the reaction-diffusion equation and the level-set method. These two approaches are usually considered alternatives to each other because the solution of the reaction-diffusion equation is generally a continuous smooth function with an exponential decay and an infinite support, while the level-set method, which is a front tracking technique, generates a sharp function with a finite support. However, these two approaches can indeed be considered complementary and reconciled. Turbulent hot-air transport and fire spotting are phenomena with a random character that are extremely important in wildland fire propagation, and as a consequence the fire front acquires a random character, too. Hence a tracking method for random fronts is needed. In particular, the level-set contour is here randomized according to the probability density function of the interface particle displacement. When the level-set method is developed for tracking a front interface with a random motion, the resulting averaged process turns out to be governed by an evolution equation of the reaction-diffusion type. In this reconciled approach, the rate of spread of the fire keeps the same key and characterizing role that it has in the level-set approach. The resulting model is suitable for simulating effects due to turbulent convection, such as flank and backing fire, the faster fire spread caused by hot-air pre-heating and by ember landing, and also the fire overcoming a firebreak zone, a case not resolved by models based on the level-set method alone. Moreover, the proposed formulation yields a correction to the rate-of-spread formula due to the mean jump length of firebrands in the downwind direction for the leeward sector of the fireline contour.
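
    The randomization step can be pictured as averaging the sharp burned/unburned indicator produced by the level-set tracker over the probability density of the random front displacement. A minimal sketch with a Gaussian displacement PDF follows; the grid, fire-line radius, and spread are assumed values, and this is only an illustration of the averaging idea, not the paper's full model.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    # phi <= 0 inside the burned region produced by an ordinary level-set front tracker
    ny, nx = 200, 200
    yy, xx = np.mgrid[0:ny, 0:nx]
    phi = np.hypot(xx - 100, yy - 100) - 30          # circular fire line, radius 30 cells

    indicator = (phi <= 0).astype(float)             # sharp burned/unburned map
    sigma = 5.0                                      # assumed std. dev. of turbulent displacement
    # effective (averaged) front: convolve the indicator with a Gaussian displacement PDF,
    # yielding a smooth probability-of-burning field instead of a sharp interface
    p_burn = gaussian_filter(indicator, sigma)
    ```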

  19. Dried blood spot measurement of pregnancy-associated plasma protein A (PAPP-A) and free β-subunit of human chorionic gonadotropin (β-hCG) from a low-resource setting.

    PubMed

    Browne, J L; Schielen, P C J I; Belmouden, I; Pennings, J L A; Klipstein-Grobusch, K

    2015-06-01

    The objective of this article is to compare pregnancy-associated plasma protein A (PAPP-A) and free β-subunit of human chorionic gonadotropin (β-hCG) concentrations in dried blood spots (DBSs) with those in serum, using samples obtained from a public hospital in a low-resource setting, and to evaluate their stability. Serum and DBS samples were obtained by venipuncture and finger prick from 50 pregnant participants in a cohort study in a public hospital in Accra, Ghana. PAPP-A and β-hCG concentrations from serum and DBS were measured with an AutoDELFIA® (PerkinElmer, Turku, Finland) automatic immunoassay. Correlation and Passing-Bablok regression analyses were performed to compare marker levels. High correlation (>0.9) was observed for PAPP-A and β-hCG levels between the sampling techniques. The β-hCG concentration was stable between DBS and serum, whereas the PAPP-A concentration was consistently lower in DBS. Our findings suggest that β-hCG can be reliably collected from DBS in low-resource tropical settings. The exact conditions of the clinical workflow necessary for reliable PAPP-A measurement in these settings need to be developed further. These findings could have implications for the feasibility of prenatal screening programs in low-income and middle-income countries, as DBS provides an alternative minimally invasive sampling method, with advantages in sampling technique, stability, logistics, and potential application in low-resource settings. © 2015 John Wiley & Sons, Ltd.

  20. Toward a Principled Sampling Theory for Quasi-Orders

    PubMed Central

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

    Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, on even up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets. PMID:27965601
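
    For contrast with the inductive procedure described above, the naive baseline it improves on can be sketched as follows: draw a random reflexive relation and force transitivity with Warshall's closure. This produces valid quasi-orders but not a uniform sample, which is precisely the bias the paper's algorithms address. The code is illustrative, not the authors' procedure.

    ```python
    import numpy as np

    def random_quasi_order(n_items, p=0.3, rng=None):
        """Naive baseline: random reflexive relation followed by transitive closure
        (Warshall). Yields *a* quasi-order, but not a uniform sample over all
        quasi-orders - the bias the paper's inductive algorithms are designed to avoid."""
        rng = np.random.default_rng(rng)
        R = rng.random((n_items, n_items)) < p
        np.fill_diagonal(R, True)                  # reflexivity
        for k in range(n_items):                   # Warshall transitive closure
            R |= np.outer(R[:, k], R[k, :])
        return R

    Q = random_quasi_order(6, rng=0)               # 6x6 boolean incidence matrix
    ```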

  1. Toward a Principled Sampling Theory for Quasi-Orders.

    PubMed

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

    Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, on even up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets.

  2. Extraction of topography from side-looking satellite systems - A case study with SPOT simulation data

    NASA Technical Reports Server (NTRS)

    Ungar, Stephen G.; Merry, Carolyn J.; Mckim, Harlan L.; Irish, Richard; Miller, Michael S.

    1988-01-01

    A simulated data set was used to evaluate techniques for extracting topography from side-looking satellite systems for an area of northwest Washington state. A negative transparency orthophotoquad was digitized at a spacing of 85 microns, resulting in an equivalent ground distance of 9.86 m between pixels and a radiometric resolution of 256 levels. A bilinear interpolation was performed on digital elevation model data to generate elevation data at a 9.86-m resolution. The nominal orbital characteristics and geometry of the SPOT satellite were convoluted with the data to produce simulated panchromatic HRV digital stereo imagery for three different orbital paths and techniques for reconstructing topographic data were developed. Analyses with the simulated HRV data and other data sets show that the method is effective.

  3. Land use/land cover mapping (1:25000) of Taiwan, Republic of China by automated multispectral interpretation of LANDSAT imagery

    NASA Technical Reports Server (NTRS)

    Sung, Q. C.; Miller, L. D.

    1977-01-01

    Three methods were tested for collecting the training sets needed to establish the spectral signatures of the land uses/land covers sought, owing to the difficulties of retrospective collection of representative ground control data. The computer preprocessing techniques applied to the digital images to improve the final classification results were geometric corrections, spectral band or image ratioing, and statistical cleaning of the representative training sets. A minimal level of statistical verification was made based upon comparisons between the airphoto estimates and the classification results. The verification provided further support for the selection of MSS bands 5 and 7. It also indicated that the maximum likelihood ratioing technique can achieve classification results in better agreement with the airphoto estimates than the stepwise discriminant analysis.
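
    A hedged sketch of the maximum likelihood classification step: estimate a Gaussian signature (mean vector and covariance matrix) per class from its training set, then assign each pixel to the class with the highest likelihood. The two-band values and class names below are assumptions for illustration, not the study's data.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(0)
    # assumed two-band (e.g. MSS 5 and 7) training pixels for two cover classes
    training_sets = {
        "water":  rng.normal([20, 10], 3, size=(200, 2)),
        "forest": rng.normal([35, 60], 5, size=(200, 2)),
    }

    # class signatures: mean vector and covariance matrix per class
    signatures = {c: (X.mean(axis=0), np.cov(X, rowvar=False))
                  for c, X in training_sets.items()}

    def classify(pixels):
        classes = list(signatures)
        loglik = np.column_stack([multivariate_normal(mean=m, cov=S).logpdf(pixels)
                                  for m, S in signatures.values()])
        return [classes[i] for i in loglik.argmax(axis=1)]

    print(classify(np.array([[21.0, 11.0], [33.0, 58.0]])))   # -> ['water', 'forest']
    ```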

  4. Automated synthesis and composition of taskblocks for control of manufacturing systems.

    PubMed

    Holloway, L E; Guan, X; Sundaravadivelu, R; Ashley, J R

    2000-01-01

    Automated control synthesis methods for discrete-event systems promise to reduce the time required to develop, debug, and modify control software. Such methods must be able to translate high-level control goals into detailed sequences of actuation and sensing signals. In this paper, we present such a technique. It relies on analysis of a system model, defined as a set of interacting components, each represented as a form of condition system Petri net. Control logic modules, called taskblocks, are synthesized from these individual models. These then interact hierarchically and sequentially to drive the system through specified control goals. The resulting controller is automatically converted to executable control code. The paper concludes with a discussion of a set of software tools developed to demonstrate the techniques on a small manufacturing system.

  5. Quantum Sets and Clifford Algebras

    NASA Astrophysics Data System (ADS)

    Finkelstein, David

    1982-06-01

    The mathematical language presently used for quantum physics is a high-level language. As a lowest-level or basic language I construct a quantum set theory in three stages: (1) Classical set theory, formulated as a Clifford algebra of “S numbers” generated by a single monadic operation, “bracing,” Br = {…}. (2) Indefinite set theory, a modification of set theory dealing with the modal logical concept of possibility. (3) Quantum set theory. The quantum set is constructed from the null set by the familiar quantum techniques of tensor product and antisymmetrization. There are both a Clifford and a Grassmann algebra with sets as basis elements. Rank and cardinality operators are analogous to Schroedinger coordinates of the theory, in that they are multiplication or “Q-type” operators. “P-type” operators analogous to Schroedinger momenta, in that they transform the Q-type quantities, are bracing (Br), Clifford multiplication by a set X, and the creator of X, represented by Grassmann multiplication c(X) by the set X. Br and its adjoint Br* form a Bose-Einstein canonical pair, and c(X) and its adjoint c(X)* form a Fermi-Dirac or anticanonical pair. Many coefficient number systems can be employed in this quantization. I use the integers for a discrete quantum theory, with the usual complex quantum theory as limit. Quantum set theory may be applied to a quantum time space and a quantum automaton.

  6. Iterative Reconstruction Techniques in Abdominopelvic CT: Technical Concepts and Clinical Implementation.

    PubMed

    Patino, Manuel; Fuentes, Jorge M; Singh, Sarabjeet; Hahn, Peter F; Sahani, Dushyant V

    2015-07-01

    This article discusses the clinical challenge of low-radiation-dose examinations, the commonly used approaches for dose optimization, and their effect on image quality. We emphasize practical aspects of the different iterative reconstruction techniques, along with their benefits, pitfalls, and clinical implementation. The widespread use of CT has raised concerns about potential radiation risks, motivating diverse strategies to reduce the radiation dose associated with CT. CT manufacturers have developed alternative reconstruction algorithms intended to improve image quality on dose-optimized CT studies, mainly through noise and artifact reduction. Iterative reconstruction techniques take unique approaches to noise reduction and provide distinct strength levels or settings.

  7. Automatic Rooftop Extraction in Stereo Imagery Using Distance and Building Shape Regularized Level Set Evolution

    NASA Astrophysics Data System (ADS)

    Tian, J.; Krauß, T.; d'Angelo, P.

    2017-05-01

    Automatic rooftop extraction is one of the most challenging problems in remote sensing image analysis. Classical 2D image processing techniques are expensive due to the large number of features required to locate buildings. This problem can be avoided when 3D information is available. In this paper, we show how to fuse the spectral and height information of stereo imagery to achieve efficient and robust rooftop extraction. In the first step, the digital terrain model (DTM) and in turn the normalized digital surface model (nDSM) are generated using a newly developed step-edge approach. In the second step, the initial building locations and rooftop boundaries are derived by removing low-level pixels and those high-level pixels with a higher probability of being trees or shadows. This boundary then serves as the initial level set function, which is further refined to fit the best possible boundaries through distance-regularized level-set curve evolution. During the fitting procedure, an edge-based active contour model is adopted and implemented using edge indicators extracted from the panchromatic image. The performance of the proposed approach is tested using WorldView-2 satellite data captured over Munich.
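
    A minimal sketch of the boundary-refinement step described above, assuming an nDSM-derived building mask and a panchromatic image as inputs (the names `pan_image` and `initial_mask` are hypothetical). It uses scikit-image's morphological geodesic active contour as a stand-in for the paper's distance-regularized level-set evolution:

    # Hedged sketch: an edge-based active contour refinement analogous in spirit to
    # the distance-regularized level-set evolution described above, not the authors'
    # exact implementation.
    import numpy as np
    from skimage.segmentation import (inverse_gaussian_gradient,
                                      morphological_geodesic_active_contour)

    def refine_rooftop_boundary(pan_image, initial_mask, iterations=200):
        """Refine an nDSM-derived building mask against panchromatic edges."""
        # Edge indicator: small values near strong panchromatic edges.
        g = inverse_gaussian_gradient(pan_image.astype(float), alpha=100.0, sigma=2.0)
        # Evolve the initial level set toward the edges; a negative balloon force
        # shrinks the contour onto the rooftop outline.
        refined = morphological_geodesic_active_contour(
            g, iterations, init_level_set=initial_mask, smoothing=2, balloon=-1)
        return refined.astype(bool)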

  8. Combination Base64 Algorithm and EOF Technique for Steganography

    NASA Astrophysics Data System (ADS)

    Rahim, Robbi; Nurdiyanto, Heri; Hidayat, Rahmat; Saleh Ahmar, Ansari; Siregar, Dodi; Putera Utama Siahaan, Andysah; Faisal, Ilham; Rahman, Sayuti; Suita, Diana; Zamsuri, Ahmad; Abdullah, Dahlan; Napitupulu, Darmawan; Ikhsan Setiawan, Muhammad; Sriadhi, S.

    2018-04-01

    The steganography process combines mathematics and computer science. Steganography consists of a set of methods and techniques for embedding data into another medium so that the contents are unreadable to anyone who does not have the authority to read them. The main objective of using the Base64 method is to convert any file in order to achieve privacy. This paper discusses a steganography and encoding method using Base64, a set of encoding schemes that convert binary data into a series of ASCII characters. In addition, the EOF (end-of-file) technique is used to embed the text encoded by Base64. As an example of the mechanism, a file is used to represent the text; using the two methods together increases the security level for protecting the data. This research aims to secure many types of files in a particular cover medium with good security, without damaging the stored files or the cover media used.
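
    A minimal sketch of the Base64 + EOF embedding idea, under the assumption that trailing bytes after the cover file's image data are ignored by viewers; the file names and marker string are illustrative, not from the paper:

    # Hedged sketch: Base64-encode the secret text and append it after the cover
    # file's normal end of data, preceded by a marker so it can be located again.
    import base64

    MARKER = b"--STEGO-EOF--"   # assumed not to occur in the cover bytes

    def embed(cover_path, stego_path, secret_text):
        payload = base64.b64encode(secret_text.encode("utf-8"))
        with open(cover_path, "rb") as f:
            cover = f.read()
        with open(stego_path, "wb") as f:
            f.write(cover + MARKER + payload)   # viewers ignore trailing bytes

    def extract(stego_path):
        with open(stego_path, "rb") as f:
            data = f.read()
        payload = data.split(MARKER, 1)[1]
        return base64.b64decode(payload).decode("utf-8")

    # Usage: embed("cover.jpg", "stego.jpg", "secret message"); extract("stego.jpg")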

  9. Perchlorate as an emerging contaminant in soil, water and food.

    PubMed

    Kumarathilaka, Prasanna; Oze, Christopher; Indraratne, S P; Vithanage, Meththika

    2016-05-01

    Perchlorate (ClO4-) is a strong oxidizer and has gained significant attention due to its reactivity, occurrence, and persistence in surface water, groundwater, soil and food. Stable isotope techniques (i.e., 18O/16O, 17O/16O, and 37Cl/35Cl ratios) facilitate the differentiation of naturally occurring perchlorate from anthropogenic perchlorate. At high enough concentrations, perchlorate can inhibit proper function of the thyroid gland. The dietary reference dose (RfD) for perchlorate exposure from both food and water is set at 0.7 μg/kg body weight per day, which translates to a drinking water level of 24.5 μg/L. Chromatographic techniques (i.e., ion chromatography and liquid chromatography-mass spectrometry) can be used to detect trace levels of perchlorate in environmental samples. Perchlorate can be effectively removed by a wide variety of remediation techniques such as bio-reduction, chemical reduction, adsorption, membrane filtration, ion exchange and electro-reduction. Bio-reduction is appropriate for large-scale treatment plants, whereas ion exchange is suitable for removing trace levels of perchlorate in aqueous media. The environmental occurrence of perchlorate, its toxicity, analytical techniques, and removal technologies are presented. Copyright © 2016 Elsevier Ltd. All rights reserved.
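
    The quoted drinking water level follows from the RfD under the conventional default assumptions of a 70 kg adult drinking 2 L of water per day (assumptions not stated in the abstract):

    \frac{0.7\ \mu\mathrm{g\,kg^{-1}\,day^{-1}} \times 70\ \mathrm{kg}}{2\ \mathrm{L\,day^{-1}}} \;=\; 24.5\ \mu\mathrm{g\,L^{-1}}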

  10. INDUCTIVE SYSTEM HEALTH MONITORING WITH STATISTICAL METRICS

    NASA Technical Reports Server (NTRS)

    Iverson, David L.

    2005-01-01

    Model-based reasoning is a powerful method for performing system monitoring and diagnosis. Building models for model-based reasoning is often a difficult and time consuming process. The Inductive Monitoring System (IMS) software was developed to provide a technique to automatically produce health monitoring knowledge bases for systems that are either difficult to model (simulate) with a computer or which require computer models that are too complex to use for real time monitoring. IMS processes nominal data sets collected either directly from the system or from simulations to build a knowledge base that can be used to detect anomalous behavior in the system. Machine learning and data mining techniques are used to characterize typical system behavior by extracting general classes of nominal data from archived data sets. In particular, a clustering algorithm forms groups of nominal values for sets of related parameters. This establishes constraints on those parameter values that should hold during nominal operation. During monitoring, IMS provides a statistically weighted measure of the deviation of current system behavior from the established normal baseline. If the deviation increases beyond the expected level, an anomaly is suspected, prompting further investigation by an operator or automated system. IMS has shown potential to be an effective, low cost technique to produce system monitoring capability for a variety of applications. We describe the training and system health monitoring techniques of IMS. We also present the application of IMS to a data set from the Space Shuttle Columbia STS-107 flight. IMS was able to detect an anomaly in the launch telemetry shortly after a foam impact damaged Columbia's thermal protection system.
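
    A minimal sketch of the clustering-and-deviation idea described above, not the NASA IMS implementation; k-means stands in for the unspecified clustering algorithm, and the statistical weighting scheme is an assumption:

    # Hedged sketch: cluster archived nominal parameter vectors, then score new
    # observations by their weighted distance to the nearest nominal cluster;
    # large scores suggest anomalous behavior.
    import numpy as np
    from sklearn.cluster import KMeans

    def train_monitor(nominal_data, n_clusters=20):
        """nominal_data: (n_samples, n_parameters) array of archived nominal values."""
        model = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
        model.fit(nominal_data)
        labels = model.labels_
        # Per-cluster spread, used to weight deviations statistically (assumed scheme).
        spread = np.array([nominal_data[labels == k].std(axis=0).mean() + 1e-9
                           for k in range(n_clusters)])
        return model, spread

    def deviation_score(model, spread, observation):
        d = np.linalg.norm(model.cluster_centers_ - observation, axis=1)
        k = int(np.argmin(d))
        return d[k] / spread[k]   # weighted deviation from the closest nominal class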

  11. The artificial and natural isotopes distribution in sedge (Carex L.) biomass from the Yenisei River flood-plain: Adaptation of the sequential elution technique.

    PubMed

    Kropacheva, Marya; Melgunov, Mikhail; Makarova, Irina

    2017-02-01

    The study of migration pathways of artificial isotopes in the flood-plain biogeocoenoses, impacted by nuclear fuel cycle plants, requires determination of isotope speciations in the biomass of higher terrestrial plants. The optimal method for their determination is the sequential elution technique (SET). The technique was originally developed to study atmospheric pollution by metals and has been applied to lichens, terrestrial and aquatic bryophytes. Due to morphological and physiological differences, it was necessary to adapt SET for new objects: coastal macrophytes growing on the banks of the Yenisei flood-plain islands in the near impact zone of Krasnoyarsk Mining and Chemical Combine (KMCC). In the first version of SET, 20 mM Na2EDTA was used as a reagent at the first stage; in the second version of SET, it was 1 M CH3COONH4. Four fractions were extracted. Fraction I included elements from the intercellular space and those connected with the outer side of the cell wall. Fraction II contained intracellular elements; fraction III contained elements firmly bound in the cell wall and associated structures; fraction IV contained insoluble residue. Adaptation of SET has shown that the first stage should be performed immediately after sampling. Separation of fractions III and IV can be neglected, since the output of isotopes into fraction IV is at the level of the detection error. The most adequate version of SET for terrestrial vascular plants is the version using 20 mM Na2EDTA at the first stage. The isotope 90Sr is most sensitive to changes in the technique. Its distribution depends strongly on both the extractant used at stage 1 and the duration of the first stage. Distribution of artificial radionuclides in the biomass of terrestrial vascular plants can vary from year to year and depends significantly on the age of the plant. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Towards a Framework for Change Detection in Data Sets

    NASA Astrophysics Data System (ADS)

    Böttcher, Mirko; Nauck, Detlef; Ruta, Dymitr; Spott, Martin

    Since the world with its markets, innovations and customers is changing faster than ever before, the key to survival for businesses is the ability to detect, assess and respond to changing conditions rapidly and intelligently. Discovering changes and reacting to or acting upon them before others do has therefore become a strategic issue for many companies. However, existing data analysis techniques are insufficient for this task since they typically assume that the domain under consideration is stable over time. This paper presents a framework that detects changes within a data set at virtually any level of granularity. The underlying idea is to derive a rule-based description of the data set at different points in time and to subsequently analyse how these rules change. Nevertheless, further techniques are required to assist the data analyst in interpreting and assessing these changes. Therefore the framework also contains methods to discard rules that are non-drivers for change and to assess the interestingness of detected changes.

  13. Information technology and hospice palliative care: social, cultural, ethical and technical implications in a rural setting.

    PubMed

    Kuziemsky, Craig; Jewers, Heather; Appleby, Brenda; Foshay, Neil; Maccaull, Wendy; Miller, Keith; Macdonald, Madonna

    2012-01-01

    There is a need to better understand the specific settings in which health information technology (HIT) is used and implemented. Factors that will determine the successful implementation of HIT are context-specific and often reside not at the technical level but rather at the process and people level. This paper provides the results of a needs assessment for HIT to support hospice palliative care (HPC) delivery in rural settings. Roundtable discussions using the nominal group technique were done to identify priority issues regarding HIT usage to support rural HPC delivery. Qualitative content analysis was then used to identify sociotechnical themes from the roundtable data. Twenty priority issues were identified at the roundtable session. Content analysis grouped the priority issues into one central theme and five supporting themes to form a sociotechnical framework for patient-centered care in rural settings. There are several sociotechnical themes and associated issues that need to be considered prior to implementing HIT in rural HPC settings. Proactive evaluation of these issues can enhance HIT implementation and also help to make ethical aspects of HIT design more explicit.

  14. Ant colony optimisation-direct cover: a hybrid ant colony direct cover technique for multi-level synthesis of multiple-valued logic functions

    NASA Astrophysics Data System (ADS)

    Abd-El-Barr, Mostafa

    2010-12-01

    The use of non-binary (multiple-valued) logic in the synthesis of digital systems can lead to savings in chip area. Advances in very large scale integration (VLSI) technology have enabled the successful implementation of multiple-valued logic (MVL) circuits. A number of heuristic algorithms for the synthesis of (near-)minimal sum-of-products (two-level) realisations of MVL functions have been reported in the literature. The direct cover (DC) technique is one such algorithm. The ant colony optimisation (ACO) algorithm is a meta-heuristic that uses constructive greediness to explore a large solution space in finding (near) optimal solutions. The ACO algorithm mimics the ant's behaviour in the real world in using the shortest path to reach food sources. We have previously introduced an ACO-based heuristic for the synthesis of two-level MVL functions. In this article, we introduce the ACO-DC hybrid technique for the synthesis of multi-level MVL functions. The basic idea is to use an ant to decompose a given MVL function into a number of levels and then synthesise each sub-function using a DC-based technique. The results obtained using the proposed approach are compared to those obtained using existing techniques reported in the literature. A benchmark set consisting of 50,000 randomly generated 2-variable 4-valued functions is used in the comparison. The results obtained using the proposed ACO-DC technique are shown to produce efficient realisations in terms of the average number of gates (as a measure of chip area) needed for the synthesis of a given MVL function.

  15. Essential uncontrollability of discrete linear, time-invariant, dynamical systems

    NASA Technical Reports Server (NTRS)

    Cliff, E. M.

    1975-01-01

    The concept of a 'best approximating m-dimensional subspace' for a given set of vectors in n-dimensional whole space is introduced. Such a subspace is easily described in terms of the eigenvectors of an associated Gram matrix. This technique is used to approximate an achievable set for a discrete linear time-invariant dynamical system. This approximation characterizes the part of the state space that may be reached using modest levels of control. If the achievable set can be closely approximated by a proper subspace of the whole space then the system is 'essentially uncontrollable'. The notion finds application in studies of failure-tolerant systems, and in decoupling.
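
    A minimal sketch, under the assumption that the "associated Gram matrix" is the scatter matrix of the given vectors, of how the best approximating m-dimensional subspace can be obtained from its leading eigenvectors:

    # Hedged sketch: the subspace is spanned by the leading eigenvectors of a
    # Gram-type (scatter) matrix built from the given vectors. Names are illustrative.
    import numpy as np

    def best_approximating_subspace(vectors, m):
        """vectors: (k, n) array of k vectors in n-dimensional space.
        Returns an (n, m) orthonormal basis of the best m-dimensional subspace."""
        X = np.asarray(vectors, dtype=float)     # rows are the given vectors
        G = X.T @ X                              # n x n Gram-type (scatter) matrix
        eigvals, eigvecs = np.linalg.eigh(G)     # eigenvalues in ascending order
        top = np.argsort(eigvals)[::-1][:m]      # indices of the m largest eigenvalues
        return eigvecs[:, top]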

  16. Promoting Golf as a Lifetime Physical Activity for Persons with Disabilities

    ERIC Educational Resources Information Center

    Sandt, Dawn D.; Flynn, Erin; Turner, Tiffany A.

    2014-01-01

    Golf is one of the most accessible and versatile physical activities and is a viable choice for young adults with disabilities to engage in the recommended levels of physical activity. Teaching golf to youth with disabilities requires more than making accommodations regarding equipment, technique, and rules in the physical education setting. For…

  17. Competency Based Curriculum. Revised Delivery Systems for Culinary Arts Program. Project Report.

    ERIC Educational Resources Information Center

    Spokane Community Coll., WA.

    Developed through a grant that enabled faculty members to work together to define goals and set objectives, this curriculum guide contains course objectives for the culinary arts program at Spokane Community College in Washington. Objectives are provided for the following courses: culinary techniques and skill development (two levels),…

  18. Comparing Cognitive Interviewing and Online Probing: Do They Find Similar Results?

    ERIC Educational Resources Information Center

    Meitinger, Katharina; Behr, Dorothée

    2016-01-01

    This study compares the application of probing techniques in cognitive interviewing (CI) and online probing (OP). Even though the probing is similar, the methods differ regarding typical mode setting, sample size, level of interactivity, and goals. We analyzed probing answers to the International Social Survey Programme item battery on specific…

  19. Clinical Report: Helping Habitual Smokers Using Flooding and Hypnotic Desensitization Technique.

    ERIC Educational Resources Information Center

    Powell, Douglas H.

    Most research in smoking cessation has shown no intervention clearly superior or successful. Of those who return to smoking after abstaining, a subgroup includes those who do so incrementally, eventually reaching their former level. An approach aimed at this subgroup, originally used in a group setting, involves intensifying the desire to smoke…

  20. Educating the Educator: Use of Pulse Oximetry in Athletic Training

    ERIC Educational Resources Information Center

    Berry, David C.; Seitz, S. Robert

    2012-01-01

    The 5th edition of the "Athletic Training Education Competencies" expanded the scope of knowledge and skill set of entry-level athletic trainers related to the domain of "Acute Care of Injuries and Illnesses." One of these major changes includes the introduction of adjunct airway techniques, such as oropharyngeal and nasopharyngeal airways and…

  1. Costing Principles in Higher Education and Their Application (First Revision).

    ERIC Educational Resources Information Center

    Sterns, A. A.

    This document provides a reason for applying known cost-accounting methodology within the realm of higher education and attempts to make the known techniques viable for sets of objectives within the university environment. The plan developed here is applied to a department, the lowest level in the university hierarchy, and demonstrates costs in…

  2. Goals, Success Factors, and Barriers for Simulation-Based Learning: A Qualitative Interview Study in Health Care

    ERIC Educational Resources Information Center

    Dieckmann, Peter; Friis, Susanne Molin; Lippert, Anne; Ostergaard, Doris

    2012-01-01

    Introduction: This study describes (a) process goals, (b) success factors, and (c) barriers for optimizing simulation-based learning environments within the simulation setting model developed by Dieckmann. Methods: Seven simulation educators of different experience levels were interviewed using the Critical Incident Technique. Results: (a) The…

  3. Set-size procedures for controlling variations in speech-reception performance with a fluctuating masker

    PubMed Central

    Bernstein, Joshua G. W.; Summers, Van; Iyer, Nandini; Brungart, Douglas S.

    2012-01-01

    Adaptive signal-to-noise ratio (SNR) tracking is often used to measure speech reception in noise. Because SNR varies with performance using this method, data interpretation can be confounded when measuring an SNR-dependent effect such as the fluctuating-masker benefit (FMB) (the intelligibility improvement afforded by brief dips in the masker level). One way to overcome this confound, and allow FMB comparisons across listener groups with different stationary-noise performance, is to adjust the response set size to equalize performance across groups at a fixed SNR. However, this technique is only valid under the assumption that changes in set size have the same effect on percentage-correct performance for different masker types. This assumption was tested by measuring nonsense-syllable identification for normal-hearing listeners as a function of SNR, set size and masker (stationary noise, 4- and 32-Hz modulated noise and an interfering talker). Set-size adjustment had the same impact on performance scores for all maskers, confirming the independence of FMB (at matched SNRs) and set size. These results, along with those of a second experiment evaluating an adaptive set-size algorithm to adjust performance levels, establish set size as an efficient and effective tool to adjust baseline performance when comparing effects of masker fluctuations between listener groups. PMID:23039460

  4. Social Context of First Birth Timing in a Rapidly Changing Rural Setting

    PubMed Central

    Ghimire, Dirgha J.

    2016-01-01

    This article examines the influence of social context on the rate of first birth. Drawing on socialization models, I develop a theoretical framework to explain how different aspects of social context (i.e., neighbors), may affect the rate of first birth. Neighbors, who in the study setting comprise individuals’ immediate social context, have an important influence on the rate of first birth. To test my hypotheses, I leverage a setting, measures and analytical techniques designed to study the impact of macro-level social contexts on micro-level individual behavior. The results show that neighbors’ age at first birth, travel to the capital city and media exposure tend to reduce the first birth rate, while neighbors’ non-family work experience increases first birth rate. These effects are independent of neighborhood characteristics and are robust against several key variations in model specifications. PMID:27886737

  5. Understanding the Role of Context in the Interpretation of Complex Battlespace Intelligence

    DTIC Science & Technology

    2006-01-01

    [Fragmentary abstract excerpt] ... (Level 2 Fusion) ... there remains a significant need for higher levels of information fusion, such as those required for generic situation awareness ... information in a set of reports, 2) general background knowledge (e.g., doctrine, techniques, practices), plus 4) known situation-specific information (e.g. ...)

  6. C-MOS array design techniques

    NASA Technical Reports Server (NTRS)

    Feller, A.

    1978-01-01

    The entire complement of standard cells and components, except for the set-reset flip-flop, was completed. Two levels of checking were performed on each device. Logic cells and topological layout are described. All the related computer programs were coded and one level of debugging was completed. The logic for the test chip was modified and updated. This test chip served as the first test vehicle to exercise the standard cell complementary MOS(C-MOS) automatic artwork generation capability.

  7. Enhancing instruction scheduling with a block-structured ISA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melvin, S.; Patt, Y.

    It is now generally recognized that not enough parallelism exists within the small basic blocks of most general purpose programs to satisfy high performance processors. Thus, a wide variety of techniques have been developed to exploit instruction level parallelism across basic block boundaries. In this paper we discuss some previous techniques along with their hardware and software requirements. Then we propose a new paradigm for an instruction set architecture (ISA): block-structuring. This new paradigm is presented, its hardware and software requirements are discussed and the results from a simulation study are presented. We show that a block-structured ISA utilizes both dynamic and compile-time mechanisms for exploiting instruction level parallelism and has significant performance advantages over a conventional ISA.

  8. Quiet engine program: Turbine noise suppression. -Volume 1: General treatment evaluation and measurement techniques

    NASA Technical Reports Server (NTRS)

    Clemons, A.; Hehmann, H.; Radecki, K.

    1973-01-01

    Acoustic treatment was developed for jet engine turbine noise suppression. Acoustic impedance and duct transmission loss measurements were made for various suppression systems. An environmental compatibility study on several material types having suppression characteristics is presented. Two sets of engine hardware were designed and are described along with engine test results which include probe, farfield, near field, and acoustic directional array data. Comparisons of the expected and the measured suppression levels are given as well as a discussion of test results and design techniques.

  9. Aqueous Humor Dynamics of the Brown-Norway Rat

    PubMed Central

    Ficarrotta, Kayla R.; Bello, Simon A.; Mohamed, Youssef H.; Passaglia, Christopher L.

    2018-01-01

    Purpose The study aimed to provide a quantitative description of aqueous humor dynamics in healthy rat eyes. Methods One eye of 26 anesthetized adult Brown-Norway rats was cannulated with a needle connected to a perfusion pump and pressure transducer. Pressure-flow data were measured in live and dead eyes by varying pump rate (constant-flow technique) or by modulating pump duty cycle to hold intraocular pressure (IOP) at set levels (modified constant-pressure technique). Data were fit by the Goldmann equation to estimate conventional outflow facility.
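
    The abstract is truncated at the outflow-facility expression; a commonly used form of the Goldmann equation is sketched below as an assumption (symbols are not taken from the paper): F_in is the aqueous inflow (perfusion) rate, C the conventional outflow facility being estimated, P_e the episcleral venous pressure, and F_u the pressure-independent (unconventional) outflow.

    F_{\mathrm{in}} \;=\; C\,(\mathrm{IOP} - P_{e}) + F_{u}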

  10. Dutch Young Adults Ratings of Behavior Change Techniques Applied in Mobile Phone Apps to Promote Physical Activity: A Cross-Sectional Survey

    PubMed Central

    Belmon, Laura S; te Velde, Saskia J; Brug, Johannes

    2015-01-01

    Background Interventions delivered through new device technology, including mobile phone apps, appear to be an effective method to reach young adults. Previous research indicates that self-efficacy and social support for physical activity and self-regulation behavior change techniques (BCT), such as goal setting, feedback, and self-monitoring, are important for promoting physical activity; however, little is known about evaluations by the target population of BCTs applied to physical activity apps and whether these preferences are associated with individual personality characteristics. Objective This study aimed to explore young adults’ opinions regarding BCTs (including self-regulation techniques) applied in mobile phone physical activity apps, and to examine associations between personality characteristics and ratings of BCTs applied in physical activity apps. Methods We conducted a cross-sectional online survey among healthy 18 to 30-year-old adults (N=179). Data on participants’ gender, age, height, weight, current education level, living situation, mobile phone use, personality traits, exercise self-efficacy, exercise self-identity, total physical activity level, and whether participants met Dutch physical activity guidelines were collected. Items for rating BCTs applied in physical activity apps were selected from a hierarchical taxonomy for BCTs, and were clustered into three BCT categories according to factor analysis: “goal setting and goal reviewing,” “feedback and self-monitoring,” and “social support and social comparison.” Results Most participants were female (n=146), highly educated (n=169), physically active, and had high levels of self-efficacy. In general, we observed high ratings of BCTs aimed to increase “goal setting and goal reviewing” and “feedback and self-monitoring,” but not for BCTs addressing “social support and social comparison.” Only 3 (out of 16 tested) significant associations between personality characteristics and BCTs were observed: “agreeableness” was related to more positive ratings of BCTs addressing “goal setting and goal reviewing” (OR 1.61, 95% CI 1.06-2.41), “neuroticism” was related to BCTs addressing “feedback and self-monitoring” (OR 0.76, 95% CI 0.58-1.00), and “exercise self-efficacy” was related to a high rating of BCTs addressing “feedback and self-monitoring” (OR 1.06, 95% CI 1.02-1.11). No associations were observed between personality characteristics (ie, personality, exercise self-efficacy, exercise self-identity) and participants’ ratings of BCTs addressing “social support and social comparison.” Conclusions Young Dutch physically active adults rate self-regulation techniques as most positive and techniques addressing social support as less positive among mobile phone apps that aim to promote physical activity. Such ratings of BCTs differ according to personality traits and exercise self-efficacy. Future research should focus on which behavior change techniques in app-based interventions are most effective to increase physical activity. PMID:26563744

  11. Neural modeling and functional neuroimaging.

    PubMed

    Horwitz, B; Sporns, O

    1994-01-01

    Two research areas that so far have had little interaction with one another are functional neuroimaging and computational neuroscience. The application of computational models and techniques to the inherently rich data sets generated by "standard" neurophysiological methods has proven useful for interpreting these data sets and for providing predictions and hypotheses for further experiments. We suggest that both theory- and data-driven computational modeling of neuronal systems can help to interpret data generated by functional neuroimaging methods, especially those used with human subjects. In this article, we point out four sets of questions, addressable by computational neuroscientists, whose answers would be of value and interest to those who perform functional neuroimaging. The first set consists of determining the neurobiological substrate of the signals measured by functional neuroimaging. The second set concerns developing systems-level models of functional neuroimaging data. The third set of questions involves integrating functional neuroimaging data across modalities, with a particular emphasis on relating electromagnetic with hemodynamic data. The last set asks how one can relate systems-level models to those at the neuronal and neural ensemble levels. We feel that there are ample reasons to link functional neuroimaging and neural modeling, and that combining the results from the two disciplines will result in furthering our understanding of the central nervous system. © 1994 Wiley-Liss, Inc. This article is a US Government work and, as such, is in the public domain in the United States of America.

  12. Factor weighting in DRASTIC modeling.

    PubMed

    Pacheco, F A L; Pires, L M G R; Santos, R M B; Sanches Fernandes, L F

    2015-02-01

    Evaluation of aquifer vulnerability comprehends the integration of very diverse data, including soil characteristics (texture), hydrologic settings (recharge), aquifer properties (hydraulic conductivity), environmental parameters (relief), and ground water quality (nitrate contamination). It is therefore a multi-geosphere problem to be handled by a multidisciplinary team. The DRASTIC model remains the most popular technique in use for aquifer vulnerability assessments. The algorithm calculates an intrinsic vulnerability index based on a weighted addition of seven factors. In many studies, the method is subject to adjustments, especially in the factor weights, to meet the particularities of the studied regions. However, adjustments made by different techniques may lead to markedly different vulnerabilities and hence to insecurity in the selection of an appropriate technique. This paper reports the comparison of 5 weighting techniques, an enterprise not attempted before. The studied area comprises 26 aquifer systems located in Portugal. The tested approaches include: the Delphi consensus (original DRASTIC, used as reference), Sensitivity Analysis, Spearman correlations, Logistic Regression and Correspondence Analysis (used as adjustment techniques). In all cases but Sensitivity Analysis, adjustment techniques have privileged the factors representing soil characteristics, hydrologic settings, aquifer properties and environmental parameters, by leveling their weights to ≈4.4, and have subordinated the factors describing the aquifer media by downgrading their weights to ≈1.5. Logistic Regression predicts the highest and Sensitivity Analysis the lowest vulnerabilities. Overall, the vulnerability indices may be separated by a maximum value of 51 points. This represents an uncertainty of 2.5 vulnerability classes, because they are 20 points wide. Given this ambiguity, the selection of a weighting technique to integrate a vulnerability index may require additional expertise to be set up satisfactorily. Following a general criterion that weights must be proportional to the range of the ratings, Correspondence Analysis may be recommended as the best adjustment technique. Copyright © 2014 Elsevier B.V. All rights reserved.
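
    A minimal sketch of the DRASTIC index as a weighted sum of seven factor ratings; the weights shown are the original Delphi-consensus values, which the adjustment techniques compared above would replace with fitted values:

    # Hedged sketch of the DRASTIC vulnerability index computation.
    FACTORS = ["Depth to water", "net Recharge", "Aquifer media", "Soil media",
               "Topography", "Impact of vadose zone", "hydraulic Conductivity"]
    DELPHI_WEIGHTS = [5, 4, 3, 2, 1, 5, 3]   # original DRASTIC (Delphi) weights

    def drastic_index(ratings, weights=DELPHI_WEIGHTS):
        """ratings: seven factor ratings (typically 1-10) for one location."""
        if len(ratings) != 7:
            raise ValueError("DRASTIC requires exactly seven factor ratings")
        return sum(w * r for w, r in zip(weights, ratings))

    # Example: drastic_index([7, 8, 6, 5, 9, 4, 2]) returns a vulnerability index value.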

  13. Efficient hyperspectral image segmentation using geometric active contour formulation

    NASA Astrophysics Data System (ADS)

    Albalooshi, Fatema A.; Sidike, Paheding; Asari, Vijayan K.

    2014-10-01

    In this paper, we present a new formulation of geometric active contours that embeds local hyperspectral image information for accurate object region and boundary extraction. We exploit a self-organizing map (SOM) unsupervised neural network to train our model. The segmentation process is achieved by the construction of a level set cost functional in which the dynamic variable is the best matching unit (BMU) coming from the SOM map. In addition, we use Gaussian filtering to discipline the deviation of the level set functional from a signed distance function, which helps to eliminate the computationally expensive re-initialization step. By using the collective computational ability and energy convergence capability of the active contour model (ACM) energy functional, our method optimizes the geometric ACM energy functional with lower computational time and a smoother level set function. The proposed algorithm starts with feature extraction from the raw hyperspectral images. In this step, the principal component analysis (PCA) transformation is employed, which helps in reducing dimensionality and selecting the best sets of significant spectral bands. The modified geometric level-set-functional-based ACM is then applied to the optimal number of spectral bands determined by the PCA. By introducing local significant spectral band information, our proposed method is capable of forcing the level set functional to remain close to a signed distance function, and therefore considerably reduces the need for the expensive re-initialization procedure. To verify the effectiveness of the proposed technique, we use real-life hyperspectral images and test our algorithm in varying textural regions. This framework can be easily adapted to different applications for object segmentation in aerial hyperspectral imagery.
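
    A minimal sketch of the PCA band-reduction step described above, with assumed array shapes and component count; the subsequent SOM training and level set evolution are not reproduced:

    # Hedged sketch: reduce the spectral dimension of a hyperspectral cube with PCA
    # before segmentation.
    import numpy as np
    from sklearn.decomposition import PCA

    def reduce_bands(cube, n_components=3):
        """cube: (rows, cols, bands) hyperspectral image.
        Returns a (rows, cols, n_components) image of principal-component scores."""
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands).astype(float)     # one spectrum per pixel
        scores = PCA(n_components=n_components).fit_transform(X)
        return scores.reshape(rows, cols, n_components)
    # The leading component image(s) can then drive the level set evolution.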

  14. Microwave assisted preparation of magnesium phosphate cement (MPC) for orthopedic applications: a novel solution to the exothermicity problem.

    PubMed

    Zhou, Huan; Agarwal, Anand K; Goel, Vijay K; Bhaduri, Sarit B

    2013-10-01

    There are two interesting features of this paper. First, we report herein a novel microwave assisted technique to prepare phosphate based orthopedic cements, which do not generate any exothermicity during setting. The exothermic reactions during the setting of phosphate cements can cause tissue damage during the administration of injectable compositions and hence a solution to the problem is sought via microwave processing. This solution through microwave exposure is based on a phenomenon that microwave irradiation can remove all water molecules from the alkaline earth phosphate cement paste to temporarily stop the setting reaction while preserving the active precursor phase in the formulation. The setting reaction can be initiated a second time by adding aqueous medium, but without any exothermicity. Second, a special emphasis is placed on using this technique to synthesize magnesium phosphate cements for orthopedic applications with their enhanced mechanical properties and possible uses as drug and protein delivery vehicles. The as-synthesized cements were evaluated for the occurrences of exothermic reactions, setting times, presence of Mg-phosphate phases, compressive strength levels, microstructural features before and after soaking in simulated body fluid (SBF), and in vitro cytocompatibility responses. The major results show that exposure to microwaves solves the exothermicity problem, while simultaneously improving the mechanical performance of hardened cements and reducing the setting times. As expected, the cements are also found to be cytocompatible. Finally, it is observed that this process can be applied to calcium phosphate cement systems (CPCs) as well. Based on the results, this microwave exposure provides a novel technique for the processing of injectable phosphate bone cement compositions. © 2013.

  15. Estimation of cocaine consumption in the community: a critical comparison of the results from three complimentary techniques

    PubMed Central

    Reid, Malcolm J; Langford, Katherine H; Grung, Merete; Gjerde, Hallvard; Amundsen, Ellen J; Morland, Jorg; Thomas, Kevin V

    2012-01-01

    Objectives A range of approaches are now available to estimate the level of drug use in the community, so it is desirable to critically compare results from the differing techniques. This paper presents a comparison of the results from three methods for estimating the level of cocaine use in the general population. Design The comparison applies to: a set of regional-scale sample survey questionnaires, a representative sample survey on drug use among drivers and an analysis of the quantity of cocaine-related metabolites in sewage. Setting 14 438 participants provided data for the set of regional-scale sample survey questionnaires; 2341 drivers provided oral-fluid samples and untreated sewage from 570 000 people was analysed for biomarkers of cocaine use. All data were collected in Oslo, Norway. Results 0.70 (0.36–1.03) % of drivers tested positive for cocaine use, which suggests a prevalence that is higher than the 0.22 (0.13–0.30) % (per day) figure derived from regional-scale survey questionnaires, but the degree to which cocaine consumption in the driver population follows the general population is an unanswered question. Despite the comparatively low-prevalence figure the survey questionnaires did provide estimates of the volume of consumption that are comparable with the amount of cocaine-related metabolites in sewage. Per-user consumption estimates are however highlighted as a significant source of uncertainty as little or no data on the quantities consumed by individuals are available, and much of the existing data are contradictory. Conclusions The comparison carried out in the present study can provide an excellent means of checking the quality and accuracy of the three measurement techniques because they each approach the problem from a different viewpoint. Together the three complementary techniques provide a well-balanced assessment of the drug-use situation in a given community and identify areas where more research is needed. PMID:23144259

  16. Levels at gaging stations

    USGS Publications Warehouse

    Kenney, Terry A.

    2010-01-01

    Operational procedures at U.S. Geological Survey gaging stations include periodic leveling checks to ensure that gages are accurately set to the established gage datum. Differential leveling techniques are used to determine elevations for reference marks, reference points, all gages, and the water surface. The techniques presented in this manual provide guidance on instruments and methods that ensure gaging-station levels are run to both a high precision and accuracy. Levels are run at gaging stations whenever differences in gage readings are unresolved, stations may have been damaged, or according to a pre-determined frequency. Engineer's levels, both optical levels and electronic digital levels, are commonly used for gaging-station levels. Collimation tests should be run at least once a week for any week that levels are run, and the absolute value of the collimation error cannot exceed 0.003 foot/100 feet (ft). An acceptable set of gaging-station levels consists of a minimum of two foresights, each from a different instrument height, taken on at least two independent reference marks, all reference points, all gages, and the water surface. The initial instrument height is determined from another independent reference mark, known as the origin, or base reference mark. The absolute value of the closure error of a leveling circuit must be less than or equal to ft, where n is the total number of instrument setups, and may not exceed |0.015| ft regardless of the number of instrument setups. Closure error for a leveling circuit is distributed by instrument setup and adjusted elevations are determined. Side shots in a level circuit are assessed by examining the differences between the adjusted first and second elevations for each objective point in the circuit. The absolute value of these differences must be less than or equal to 0.005 ft. Final elevations for objective points are determined by averaging the valid adjusted first and second elevations. If final elevations indicate that the reference gage is off by |0.015| ft or more, it must be reset.

  17. Reconstruction of incomplete cell paths through a 3D-2D level set segmentation

    NASA Astrophysics Data System (ADS)

    Hariri, Maia; Wan, Justin W. L.

    2012-02-01

    Segmentation of fluorescent cell images has been a popular technique for tracking live cells. One challenge of segmenting cells from fluorescence microscopy is that cells in fluorescent images frequently disappear. When the images are stacked together to form a 3D image volume, the disappearance of the cells leads to broken cell paths. In this paper, we present a segmentation method that can reconstruct incomplete cell paths. The key idea of this model is to perform 2D segmentation in a 3D framework. The 2D segmentation captures the cells that appear in the image slices while the 3D segmentation connects the broken cell paths. The formulation is similar to the Chan-Vese level set segmentation which detects edges by comparing the intensity value at each voxel with the mean intensity values inside and outside of the level set surface. Our model, however, performs the comparison on each 2D slice with the means calculated by the 2D projected contour. The resulting effect is to segment the cells on each image slice. Unlike segmentation on each image frame individually, these 2D contours together form the 3D level set function. By enforcing minimum mean curvature on the level set surface, our segmentation model is able to extend the cell contours right before (and after) the cell disappears (and reappears) into the gaps, eventually connecting the broken paths. We will present segmentation results of C2C12 cells in fluorescent images to illustrate the effectiveness of our model qualitatively and quantitatively by different numerical examples.
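
    A minimal sketch of the region-based comparison the model builds on, applied to a single 2D fluorescence slice via scikit-image's Chan-Vese implementation; the paper's 3D-2D coupling and the mean-curvature term across slices are not reproduced, and the file name is hypothetical:

    # Hedged sketch: Chan-Vese segmentation of one 2D slice, comparing each pixel
    # with the mean intensities inside and outside the evolving contour.
    from skimage import io, img_as_float
    from skimage.segmentation import chan_vese

    slice_2d = img_as_float(io.imread("cell_slice.tif", as_gray=True))  # hypothetical file
    mask = chan_vese(slice_2d, mu=0.25, lambda1=1.0, lambda2=1.0)       # boolean cell mask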

  18. A Comprehensive Program for Measurement of Military Aircraft Emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Mengdawn

    2009-11-01

    Emissions of gases and particulate matter by military aircraft were characterized in-plume by 'extractive' and 'optical remote-sensing (ORS)' technologies. Non-volatile particle size distribution, number and mass concentrations were measured with good precision and reproducibility. Time-integrated particulate filter samples were collected and analyzed for smoke number, elemental composition, carbon contents, and sulfate. Observed at the EEP, the geometric mean diameter (as measured by the mobility diameter) generally increased as the engine power setting increased, which is consistent with downstream observations. The modal diameters at the downstream locations are larger than those at the EEP at the same engine power level. The results indicate that engine particles were processed by condensation, for example, leading to particle growth in-plume. Elemental analysis indicated that little metal was present in the exhaust, while most of the exhaust materials in the particulate phase were carbon and sulfate (in the JP-8 fuel). CO, CO2, NO, NO2, SO2, HCHO, ethylene, acetylene, propylene, and alkanes were measured. The last five species were most noticeable under engine idle conditions. The levels of hydrocarbons emitted at high engine power levels were generally below the detection limits. ORS techniques yielded real-time gaseous measurement, but the same techniques could not be extended directly to ultrafine particles found in all engine exhausts. The results validated the sampling methodology and measurement techniques used for non-volatile particulate aircraft emissions, and also highlighted the need for further research on sampling and measurement for volatile particulate matter and semi-volatile species in the engine exhaust, especially at the low engine power setting.

  19. A Validation Study of Merging and Spacing Techniques in a NAS-Wide Simulation

    NASA Technical Reports Server (NTRS)

    Glaab, Patricia C.

    2011-01-01

    In November 2010, Intelligent Automation, Inc. (IAI) delivered an M&S software tool that allows system-level studies of the complex terminal airspace with the ACES simulation. The software was evaluated against current-day arrivals in the Atlanta TRACON using Atlanta's Hartsfield-Jackson International Airport (KATL) arrival schedules. Results of this validation effort are presented, describing data sets, traffic flow assumptions and techniques, and arrival rate comparisons between reported landings at Atlanta and simulated arrivals using the same traffic sets in ACES equipped with M&S. Initial results showed the simulated system capacity to be significantly below the arrival capacity seen at KATL. Data were gathered for Atlanta using commercial airport and flight tracking websites (like FlightAware.com) and analyzed to ensure compatible techniques were used for result reporting and comparison. TFM operators for Atlanta were consulted for tuning final simulation parameters and for guidance in flow management techniques during high-volume operations. Using these modified parameters and incorporating TFM guidance for efficiencies in flowing aircraft, the arrival capacity for KATL was matched in the simulation. Following this validation effort, a sensitivity study was conducted to measure the impact of variations in system parameters on the Atlanta airport arrival capacity.

  20. Multi-Frequency Harmonics Technique for HIFU Tissue Treatment

    NASA Astrophysics Data System (ADS)

    Rybyanets, Andrey N.; Lugovaya, Maria A.; Rybyanets, Anastasia A.

    2010-03-01

    A new technique for enhancing tissue lysis and enlarging the treatment volume during one HIFU sonication is proposed. The technique consists of simultaneous or alternating (at an optimal repetition frequency) excitation of a single-element HIFU transducer at frequencies corresponding to odd natural harmonics of the piezoceramic element, at ultrasound energy levels sufficient to produce cavitational, thermal or mechanical damage of fat cells at each of the aforementioned frequencies. Calculation and FEM modeling of transducer vibrations and acoustic field patterns for different frequency sets were performed. Acoustic pressure in the focal plane was measured in water using a calibrated hydrophone and a 3D acoustic scanning system. In vitro experiments on different tissues and phantoms confirming the advantages of the multi-frequency harmonic method were performed.

  1. Damage level prediction of non-reshaped berm breakwater using ANN, SVM and ANFIS models

    NASA Astrophysics Data System (ADS)

    Mandal, Sukomal; Rao, Subba; N., Harish; Lokesha

    2012-06-01

    The damage analysis of coastal structures is very important, as many design parameters must be considered for a better and safer structural design. In the present study, experimental data for a non-reshaped berm breakwater were collected from the Marine Structures Laboratory, Department of Applied Mechanics and Hydraulics, NITK, Surathkal, India. Soft computing models such as Artificial Neural Network (ANN), Support Vector Machine (SVM) and Adaptive Neuro-Fuzzy Inference System (ANFIS) are constructed using the experimental data sets to predict the damage level of the non-reshaped berm breakwater. The experimental data are used to train the ANN, SVM and ANFIS models, and results are evaluated in terms of statistical measures like mean square error, root mean square error, correlation coefficient and scatter index. The results show that soft computing techniques, i.e., ANN, SVM and ANFIS, can be efficient tools for predicting damage levels of non-reshaped berm breakwaters.
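
    A minimal sketch of this kind of soft-computing comparison using scikit-learn stand-ins (an MLP for the ANN and an SVR for the SVM; ANFIS has no direct scikit-learn counterpart); the feature and target arrays are assumed to hold the experimental wave/structure parameters and measured damage levels:

    # Hedged sketch: train a regressor on experimental data and report RMSE and
    # correlation coefficient, as in the statistical measures listed above.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.svm import SVR

    def evaluate(model, X, y):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        pred = model.fit(X_tr, y_tr).predict(X_te)
        rmse = float(np.sqrt(np.mean((pred - y_te) ** 2)))
        cc = float(np.corrcoef(pred, y_te)[0, 1])
        return rmse, cc   # root mean square error and correlation coefficient

    # Usage (X, y are the experimental feature and damage-level arrays):
    # evaluate(MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000), X, y)
    # evaluate(SVR(kernel="rbf", C=10.0), X, y)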

  2. Gradient augmented level set method for phase change simulations

    NASA Astrophysics Data System (ADS)

    Anumolu, Lakshman; Trujillo, Mario F.

    2018-01-01

    A numerical method for the simulation of two-phase flow with phase change based on the Gradient-Augmented-Level-set (GALS) strategy is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ (t), at the subgrid level, ii) discontinuous treatment of thermal physical properties (except for μ), and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ (t) and are consistent with their analytical expression, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ (t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the sub-grid knowledge of Γ (t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits or disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS over the standard level set. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall the additional computational costs associated with GALS are almost the same as those using the standard level set technique.

  3. Acceleration levels on board the Space Station and a tethered elevator for micro and variable-gravity applications

    NASA Technical Reports Server (NTRS)

    Lorenzini, E. C.; Cosmo, M.; Vetrella, S.; Moccia, A.

    1988-01-01

    This paper investigates the dynamics and acceleration levels of a new tethered system for micro and variable-gravity applications. The system consists of two platforms tethered on opposite sides to the Space Station. A fourth platform, the elevator, is placed in between the Space Station and the upper platform. Variable-g levels on board the elevator are obtained by moving this facility along the upper tether, while micro-g experiments are carried out on board the Space Station. By controlling the length of the lower tether the position of the system CM can be maintained on board the Space Station despite variations of the station's distribution of mass. The paper illustrates the mathematical model, the environmental perturbations and the control techniques which have been adopted for the simulation and control of the system dynamics. Two sets of results from two different simulation runs are shown. The first set shows the system dynamics and the acceleration spectra on board the Space Station and the elevator during station-keeping. The second set of results demonstrates the capability of the elevator to attain a preselected g-level.

  4. Tools for quantifying isotopic niche space and dietary variation at the individual and population level.

    USGS Publications Warehouse

    Newsome, Seth D.; Yeakel, Justin D.; Wheatley, Patrick V.; Tinker, M. Tim

    2012-01-01

    Ecologists are increasingly using stable isotope analysis to inform questions about variation in resource and habitat use from the individual to community level. In this study we investigate data sets from 2 California sea otter (Enhydra lutris nereis) populations to illustrate the advantages and potential pitfalls of applying various statistical and quantitative approaches to isotopic data. We have subdivided these tools, or metrics, into 3 categories: IsoSpace metrics, stable isotope mixing models, and DietSpace metrics. IsoSpace metrics are used to quantify the spatial attributes of isotopic data that are typically presented in bivariate (e.g., δ13C versus δ15N) 2-dimensional space. We review IsoSpace metrics currently in use and present a technique by which uncertainty can be included to calculate the convex hull area of consumers or prey, or both. We then apply a Bayesian-based mixing model to quantify the proportion of potential dietary sources to the diet of each sea otter population and compare this to observational foraging data. Finally, we assess individual dietary specialization by comparing a previously published technique, variance components analysis, to 2 novel DietSpace metrics that are based on mixing model output. As the use of stable isotope analysis in ecology continues to grow, the field will need a set of quantitative tools for assessing isotopic variance at the individual to community level. Along with recent advances in Bayesian-based mixing models, we hope that the IsoSpace and DietSpace metrics described here will provide another set of interpretive tools for ecologists.
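
    A minimal sketch of one IsoSpace metric mentioned above, the convex hull area of bivariate isotope data; the uncertainty propagation and the Bayesian mixing model are not reproduced:

    # Hedged sketch: convex hull area in d13C-d15N space (one point per individual).
    import numpy as np
    from scipy.spatial import ConvexHull

    def hull_area(d13c, d15n):
        """Requires at least three non-collinear points."""
        points = np.column_stack([d13c, d15n])
        hull = ConvexHull(points)
        return hull.volume   # for 2-D input, .volume is the enclosed area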

  5. Real-time human versus animal classification using pyro-electric sensor array and Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Hossen, Jakir; Jacobs, Eddie L.; Chari, Srikant

    2014-03-01

    In this paper, we propose a real-time human versus animal classification technique using a pyro-electric sensor array and Hidden Markov Models (HMMs). The technique starts with the variational energy functional level set segmentation technique to separate the object from the background. After segmentation, we convert the segmented object to a signal by considering column-wise pixel values and then finding the wavelet coefficients of the signal. HMMs are trained to statistically model the wavelet features of individuals through an expectation-maximization learning process. Human versus animal classifications are made by evaluating a set of new wavelet feature data against the trained HMMs using the maximum-likelihood criterion. Human and animal data acquired using a pyro-electric sensor in different terrains are used for performance evaluation of the algorithms. Failures of the computationally effective SURF-feature-based approach that we developed in our previous research occur because of distorted images produced when the object moves very fast or when the temperature difference between target and background is not sufficient to accurately profile the object. We show that wavelet-based HMMs work well for handling some of the distorted profiles in the data set. Further, the HMM approach achieves an improved classification rate over the SURF algorithm with almost the same computational time.
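
    A minimal sketch of the wavelet-feature HMM classification scheme described above; the library choices (PyWavelets, hmmlearn) and parameters are assumptions, not taken from the paper:

    # Hedged sketch: one Gaussian HMM per class is trained on wavelet features of the
    # profile signals, and a new sample is assigned to the class with the highest
    # log-likelihood (maximum-likelihood criterion).
    import numpy as np
    import pywt
    from hmmlearn.hmm import GaussianHMM

    def wavelet_features(signal, wavelet="db4", level=3):
        coeffs = pywt.wavedec(np.asarray(signal, dtype=float), wavelet, level=level)
        return np.concatenate(coeffs).reshape(-1, 1)   # (T, 1) observation sequence

    def train_class_hmm(signals, n_states=4):
        seqs = [wavelet_features(s) for s in signals]
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        return model

    def classify(signal, models):
        """models: dict such as {"human": hmm_h, "animal": hmm_a}."""
        feats = wavelet_features(signal)
        return max(models, key=lambda name: models[name].score(feats))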

  6. Fuzzy associative memories

    NASA Technical Reports Server (NTRS)

    Kosko, Bart

    1991-01-01

    Mappings between fuzzy cubes are discussed. This level of abstraction provides a surprising and fruitful alternative to the propositional and predicate-calculus reasoning techniques used in expert systems. It allows one to reason with sets instead of propositions. Discussed here are fuzzy and neural function estimators, neural vs. fuzzy representation of structured knowledge, fuzzy vector-matrix multiplication, and fuzzy associative memory (FAM) system architecture.
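
    A minimal sketch of fuzzy vector-matrix multiplication using the common max-min composition for FAM recall (the specific composition rule used in the paper is an assumption):

    # Hedged sketch: recall an output fuzzy set from an input fuzzy set and a fuzzy
    # relation matrix via max-min composition, b_j = max_i min(a_i, M_ij).
    import numpy as np

    def max_min_compose(a, M):
        """a: fuzzy set as a vector in [0, 1]^n; M: n x p fuzzy relation matrix."""
        a_col = np.asarray(a, dtype=float)[:, None]        # shape (n, 1)
        return np.max(np.minimum(a_col, np.asarray(M, dtype=float)), axis=0)

    # Example: max_min_compose([0.2, 0.9], [[0.5, 0.1], [0.3, 0.8]]) -> array([0.3, 0.8])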

  7. A Comparison of Professional-Level Faculty and Student Perceptions of Active Learning: Its Current Use, Effectiveness, and Barriers

    ERIC Educational Resources Information Center

    Miller, Cynthia J.; Metz, Michael J.

    2014-01-01

    Active learning is an instructional method in which students become engaged participants in the classroom through the use of in-class written exercises, games, problem sets, audience-response systems, debates, class discussions, etc. Despite evidence supporting the effectiveness of active learning strategies, minimal adoption of the technique has…

  8. An Investigation of Data Privacy and Utility Using Machine Learning as a Gauge

    ERIC Educational Resources Information Center

    Mivule, Kato

    2014-01-01

    The purpose of this investigation is to study and pursue a user-defined approach in preserving data privacy while maintaining an acceptable level of data utility using machine learning classification techniques as a gauge in the generation of synthetic data sets. This dissertation will deal with data privacy, data utility, machine learning…

  9. Automated Scoring of L2 Spoken English with Random Forests

    ERIC Educational Resources Information Center

    Kobayashi, Yuichiro; Abe, Mariko

    2016-01-01

    The purpose of the present study is to assess second language (L2) spoken English using automated scoring techniques. Automated scoring aims to classify a large set of learners' oral performance data into a small number of discrete oral proficiency levels. In automated scoring, objectively measurable features such as the frequencies of lexical and…

  10. Job Analysis Techniques for Restructuring Health Manpower Education and Training in the Navy Medical Department. Attachment 1. Radiation QPCB Task Sort for Radiation.

    ERIC Educational Resources Information Center

    Technomics, Inc., McLean, VA.

    This publication is Attachment 1 of a set of 16 computer listed QPCB task sorts, by career level, for the entire Hospital Corps and Dental Technician fields. Statistical data are presented in tabular form for a detailed listing of job duties in radiation. (BT)

  11. Primary Level Assessment for IASA Title I: A Call for Discussion. Series on Standards and Assessments.

    ERIC Educational Resources Information Center

    Horm-Wingerd, Diane M.; Winter, Phoebe C.; Plofchan, Paula

    The purpose of this paper is twofold: to review appropriate assessment techniques in prekindergarten through grade 3 settings and to serve as a catalyst for further discussion and work on the topic of developmentally appropriate accountability assessment. The discussion is based on the thesis that developmentally appropriate assessment and…

  12. Effectiveness of a Brief Intervention Using Process-Based Mental Simulations in Promoting Muscular Strength in Physical Education

    ERIC Educational Resources Information Center

    Koka, Andre

    2017-01-01

    This study examined the effectiveness of a brief theory-based intervention on muscular strength among adolescents in a physical education setting. The intervention adopted a process-based mental simulation technique. The self-reported frequency of practising for and actual levels of abdominal muscular strength/endurance as one component of…

  13. "MSTGen": Simulated Data Generator for Multistage Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.

    2013-01-01

    Multistage testing, or MST, was developed as an alternative to computerized adaptive testing (CAT) for applications in which it is preferable to administer a test at the level of item sets (i.e., modules). As with CAT, the simulation technique in MST plays a critical role in the development and maintenance of tests. "MSTGen," a new MST…

  14. Ex Ovo Model for Directly Visualizing Chick Embryo Development

    ERIC Educational Resources Information Center

    Dorrell, Michael I.; Marcacci, Michael; Bravo, Stephen; Kurz, Troy; Tremblay, Jacob; Rusing, Jack C.

    2012-01-01

    We describe a technique for removing and growing chick embryos in culture that utilizes relatively inexpensive materials and requires little space. It can be readily performed in class by university, high school, or junior high students, and teachers of any grade level should be able to set it up for their students. Students will be able to…

  15. Climbing up the Leaderboard: An Empirical Study of Applying Gamification Techniques to a Computer Programming Class

    ERIC Educational Resources Information Center

    Fotaris, Panagiotis; Mastoras, Theodoros; Leinfellner, Richard; Rosunally, Yasmine

    2016-01-01

    Conventional taught learning practices often experience difficulties in keeping students motivated and engaged. Video games, however, are very successful at sustaining high levels of motivation and engagement through a set of tasks for hours without apparent loss of focus. In addition, gamers solve complex problems within a gaming environment…

  16. Comparative Benchmark Dose Modeling as a Tool to Make the First Estimate of Safe Human Exposure Levels to Lunar Dust

    NASA Technical Reports Server (NTRS)

    James, John T.; Lam, Chiu-wing; Scully, Robert R.

    2013-01-01

    Brief exposures of Apollo astronauts to lunar dust occasionally elicited upper respiratory irritation; however, no limits were ever set for prolonged exposure to lunar dust. Habitats for exploration, whether mobile or fixed, must be designed to limit human exposure to lunar dust to safe levels. We have used a new technique we call Comparative Benchmark Dose Modeling to estimate safe exposure limits for lunar dust collected during the Apollo 14 mission.

  17. Feasibility Assessment of a Structurally Closable, Automatable Technique, for the Deglycerolization of Frozen RBC (Human). Phase 2

    DTIC Science & Technology

    1996-04-01

    ... to biocompatible levels ... associated with collection, testing, inventory and trans- ... Thawed cells are processed in order to reduce both the glycerol ... tested. The instrument is capable of prediluting and washing up to two units of blood per tube set. Our results show that when the performance of our ... concentration of glycerol has to be reduced to biocompatible levels (from 1.57 M to less than 0.1 M). Freezing and thawing red blood cells leads to ...

  18. Phenotyping Drought Tolerance and Yield Potential of Warm-Season Legumes Through Field- and Airborne-Based Hyperspectral VSWIR Sensing

    NASA Astrophysics Data System (ADS)

    Drewry, D.; Berny-Mier y Teran, J. C.; Dutta, D.; Gepts, P.

    2017-12-01

    Hyperspectral sensing in the visible through shortwave infrared (VSWIR) portion of the spectrum has been demonstrated to provide significant information on the structural and functional properties of vegetation, resulting in powerful techniques to discern species differences, characterize crop nutrient or water stress, and quantify the density of foliage in agricultural fields. Modern machine-learning techniques allow for the entire set of spectral bands, on the order of hundreds with modern field and airborne spectrometers, to be used to develop models that can simultaneously retrieve a variety of foliar chemical compounds and hydrological and structural states. The application of these techniques, in the context of leaf-level measurements of VSWIR reflectance, or more complicated remote airborne surveys, has the potential to revolutionize high-throughput methods to phenotype germplasm that optimizes yield, resource-use efficiencies, or alternate objectives related to disease resistance or biomass accumulation, for example. Here we focus on breeding trials for a set of warm-season legumes, conducted in both greenhouse and field settings, and spanning a set of diverse genotypes providing a range of adaptation to drought and yield potential in the context of cultivation in a semi-arid climate. At the leaf level, a large set of spectral reflectance measurements spanning 400-2500 nanometers was made for plants across various growth stages in field experiments that induced severe drought, along with sampling for relevant trait values. Here we will discuss the development and performance of algorithms for a range of leaf traits related to gas exchange, leaf structure, hydrological status, nutrient contents and stable isotope discrimination, along with their relationships to drought resistance and yield. We likewise discuss the effectiveness of quantifying relevant foliar and canopy traits through airborne imaging spectroscopy from small unmanned aerial vehicles (sUAVs), and future directions that augment VSWIR spectral coverage to include the thermal infrared portion of the spectrum, including our recent efforts to accurately retrieve vegetation surface temperature and estimate consumptive water use in agricultural systems throughout the diurnal cycle.
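
    As one hedged illustration of the kind of full-spectrum trait model alluded to above, the sketch below fits a partial least squares regression from reflectance spectra to a leaf trait; PLSR is a common choice for VSWIR trait retrieval but is an assumption here, and the spectra and trait values are synthetic.

    ```python
    # Sketch: full-spectrum trait retrieval with partial least squares regression.
    # Synthetic reflectance spectra (400-2500 nm) and a synthetic leaf trait.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    wavelengths = np.arange(400, 2501, 10)            # 211 bands
    n_leaves = 120
    spectra = rng.uniform(0.05, 0.6, (n_leaves, wavelengths.size))
    trait = spectra[:, 50] * 3.0 - spectra[:, 150] * 1.5 + rng.normal(0, 0.02, n_leaves)

    X_tr, X_te, y_tr, y_te = train_test_split(spectra, trait, test_size=0.3, random_state=0)
    pls = PLSRegression(n_components=8).fit(X_tr, y_tr)
    pred = pls.predict(X_te).ravel()
    print("held-out R^2:", r2_score(y_te, pred))
    ```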

  19. Vessel Segmentation and Blood Flow Simulation Using Level-Sets and Embedded Boundary Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deschamps, T; Schwartz, P; Trebotich, D

    In this article we address the problem of blood flow simulation in realistic vascular objects. The anatomical surfaces are extracted by means of level-set methods that accurately model the complex and varying surfaces of pathological objects such as aneurysms and stenoses. The surfaces obtained are defined at the sub-pixel level where they intersect the Cartesian grid of the image domain. It is therefore straightforward to construct embedded boundary representations of these objects on the same grid, for which recent work has enabled discretization of the Navier-Stokes equations for incompressible fluids. While most classical techniques require construction of a structured mesh that approximates the surface in order to extrapolate a 3D finite-element gridding of the whole volume, our method directly simulates the blood flow inside the extracted surface without losing any complicated details and without building additional grids.
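
    For readers who want to see how a zero level set defined on a Cartesian grid can be turned into an explicit surface at sub-voxel accuracy, the sketch below runs scikit-image's marching cubes on a synthetic signed function; this is a generic stand-in, not the authors' embedded-boundary pipeline.

    ```python
    # Sketch: extract the zero isosurface of a level-set function phi sampled on a
    # Cartesian grid (here a synthetic sphere), with sub-voxel vertex placement.
    import numpy as np
    from skimage.measure import marching_cubes

    x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
    phi = np.sqrt(x**2 + y**2 + z**2) - 0.6     # signed distance to a sphere of radius 0.6

    verts, faces, normals, values = marching_cubes(phi, level=0.0,
                                                   spacing=(2 / 63, 2 / 63, 2 / 63))
    print(verts.shape, faces.shape)             # triangle mesh of the zero level set
    ```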

  20. Application of class-modelling techniques to infrared spectra for analysis of pork adulteration in beef jerkys.

    PubMed

    Kuswandi, Bambang; Putri, Fitra Karima; Gani, Agus Abdul; Ahmad, Musa

    2015-12-01

    The use of chemometrics to analyse infrared spectra to predict pork adulteration in beef jerky (dendeng) was explored. In the first step, the analysis of pork in the beef jerky formulation was conducted by blending the beef jerky with pork at 5-80 % levels. The samples were then powdered and divided into a training set and a test set. In the second step, the spectra of the two sets were recorded by Fourier Transform Infrared (FTIR) spectroscopy using an attenuated total reflection (ATR) cell, on the basis of spectral data in the frequency region 4000-700 cm(-1). The spectra were categorised into four data sets, i.e. (a) spectra in the whole region as data set 1; (b) spectra in the fingerprint region (1500-600 cm(-1)) as data set 2; (c) spectra in the whole region with treatment as data set 3; and (d) spectra in the fingerprint region with treatment as data set 4. In the third step, chemometric analyses were performed on the data sets using three class-modelling techniques (i.e. LDA, SIMCA, and SVM). Finally, the best-performing model for adulteration analysis of the samples was selected and compared with the ELISA method. From the chemometric results, the LDA model on data set 1 was found to be the best model, since it classified and predicted the samples tested with 100 % accuracy. The LDA model was applied to real samples of beef jerky marketed in Jember, and the results showed that the LDA model developed was in good agreement with the ELISA method.
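
    A minimal sketch of the LDA class-modelling step, using scikit-learn on synthetic stand-ins for the ATR-FTIR spectra (the real study used measured spectra organised into four data sets), is shown below.

    ```python
    # Sketch: LDA classification of spectra into adulteration classes.
    # Spectra are synthetic; in the study they were ATR-FTIR measurements (4000-700 cm^-1).
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_per_class, n_points = 50, 40
    pure = rng.normal(0.40, 0.05, (n_per_class, n_points))
    adulterated = rng.normal(0.45, 0.05, (n_per_class, n_points))
    X = np.vstack([pure, adulterated])
    y = np.array([0] * n_per_class + [1] * n_per_class)   # 0 = pure beef, 1 = contains pork

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
    lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
    print("test accuracy:", lda.score(X_te, y_te))
    ```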

  1. What is the perceived impact of Alexander technique lessons on health status, costs and pain management in the real life setting of an English hospital? The results of a mixed methods evaluation of an Alexander technique service for those with chronic back pain.

    PubMed

    McClean, Stuart; Brilleman, Sam; Wye, Lesley

    2015-07-28

    Randomised controlled trial evidence indicates that the Alexander Technique is clinically effective and cost-effective for chronic back pain. The aim of this mixed methods evaluation was to explore the role and perceived impact of Alexander Technique lessons in the naturalistic setting of an acute hospital Pain Management Clinic in England. To capture changes in health status and resource use amongst service users, 43 service users were administered three widely used questionnaires (Brief Pain Inventory, MYMOP and Client Service Resource Inventory) at three time points: baseline, six weeks and three months after baseline. We also carried out 27 telephone interviews with service users and seven face-to-face interviews with pain clinic staff and Alexander Technique teachers. Quantitative data were analysed using descriptive statistics and qualitative data were analysed thematically. Those taking Alexander Technique lessons reported small improvements in health outcomes, and condition-related costs fell. However, due to the non-randomised, uncontrolled nature of the study design, changes cannot be attributed to the Alexander Technique lessons. Service users stated that their relationship to pain and pain management had changed, especially those who were more committed to practising the techniques regularly. These changes may explain the reported reduction in pain-related service use and the corresponding lower associated costs. Alexander Technique lessons may be used as another approach to pain management. The findings suggest that Alexander Technique lessons can help improve self-efficacy for those who are sufficiently motivated, which in turn may have an impact on service utilisation levels.

  2. Supervised variational model with statistical inference and its application in medical image segmentation.

    PubMed

    Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David

    2015-01-01

    Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.
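
    For comparison, the Chan-Vese region-based level set used as a baseline in the study is available in scikit-image; the snippet below runs it on a stock grayscale image as a stand-in for the medical data, and does not reproduce the authors' supervised model.

    ```python
    # Sketch: Chan-Vese region-based level-set segmentation (one of the baselines above).
    from skimage import data, img_as_float
    from skimage.segmentation import chan_vese

    image = img_as_float(data.camera())[::2, ::2]   # stand-in grayscale image, downsampled
    segmentation = chan_vese(image, mu=0.25)        # boolean mask of the segmented region
    print(segmentation.shape, segmentation.dtype)
    ```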

  3. Use of structured personality survey techniques to indicate operator response to stressful situations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waller, M.A.

    Under given circumstances, a person will tend to operate in one of four dominant orientations: (1) to perform tasks; (2) to achieve consensus; (3) to achieve understanding; or (4) to maintain structure. Historically, personality survey techniques, such as the Myers-Briggs type indicator, have been used to determine these tendencies. While these techniques can accurately reflect a person's orientation under normal social situations, under different sets of conditions the same person may exhibit other tendencies, displaying a similar or entirely different orientation. While most do not exhibit extreme tendencies or changes of orientation, the shift in personality from normal to stressful conditions can be rather dramatic, depending on the individual. Structured personality survey techniques have been used to indicate operator response to stressful situations. These techniques have been extended to indicate the balance between orientations that the control room team has through the various levels of cognizance.

  4. Quantification of crew workload imposed by communications-related tasks in commercial transport aircraft

    NASA Technical Reports Server (NTRS)

    Acton, W. H.; Crabtree, M. S.; Simons, J. C.; Gomer, F. E.; Eckel, J. S.

    1983-01-01

    Information theoretic analysis and subjective paired-comparison and task ranking techniques were employed in order to scale the workload of 20 communications-related tasks frequently performed by the captain and first officer of transport category aircraft. Tasks were drawn from taped conversations between aircraft and air traffic controllers (ATC). Twenty crewmembers performed subjective message comparisons and task rankings on the basis of workload. Information theoretic results indicated a broad range of task difficulty levels, and substantial differences between captain and first officer workload levels. Preliminary subjective data tended to corroborate these results. A hybrid scale reflecting the results of both the analytical and the subjective techniques is currently being developed. The findings will be used to select representative sets of communications for use in high fidelity simulation.

  5. Goal-setting in diabetes self-management: A systematic review and meta-analysis examining content and effectiveness of goal-setting interventions.

    PubMed

    Fredrix, Milou; McSharry, Jenny; Flannery, Caragh; Dinneen, Sean; Byrne, Molly

    2018-03-02

    Goal-setting is recommended and widely used within diabetes self-management programmes. However, empirical evidence around its effectiveness lacks clarity. This review aims to evaluate the effectiveness of goal-setting interventions on diabetes outcomes and to determine which behaviour change techniques (BCTs) are frequently used within these interventions. A systematic search identified 14 studies, describing 12 interventions targeting diabetic control which incorporated goal-setting as the main intervention strategy. Study characteristics, outcome measures and effect sizes of the included studies were extracted and checked by two authors. The BCT taxonomy v1 was used to identify intervention content. Meta-analyses were conducted to assess intervention effects on the primary outcome of average blood glucose levels (HbA1c) and on body weight. Psycho-social and behavioural outcomes were summarised in narrative syntheses. Significant post-intervention improvements in HbA1c were found (-0.22; 95% CI -0.40 to -0.04) across studies. No other main effects were identified. The BCT 'goal-setting (behaviour)' was most frequently implemented and was identified in 84% of the interventions. Goal-setting interventions appear to be associated with reduced HbA1c levels. However, the low number of studies identified and the risk of bias across studies suggest more research is needed to further explore goal-setting BCTs in diabetes self-management.

  6. Effective in-service training design and delivery: evidence from an integrative literature review.

    PubMed

    Bluestone, Julia; Johnson, Peter; Fullerton, Judith; Carr, Catherine; Alderman, Jessica; BonTempo, James

    2013-10-01

    In-service training represents a significant financial investment for supporting continued competence of the health care workforce. An integrative review of the education and training literature was conducted to identify effective training approaches for health worker continuing professional education (CPE) and what evidence exists of outcomes derived from CPE. A literature review was conducted from multiple databases including PubMed, the Cochrane Library and Cumulative Index to Nursing and Allied Health Literature (CINAHL) between May and June 2011. The initial review of titles and abstracts produced 244 results. Articles selected for analysis after two quality reviews consisted of systematic reviews, randomized controlled trials (RCTs) and programme evaluations published in peer-reviewed journals from 2000 to 2011 in the English language. The articles analysed included 37 systematic reviews and 32 RCTs. The research questions focused on the evidence supporting educational techniques, frequency, setting and media used to deliver instruction for continuing health professional education. The evidence suggests the use of multiple techniques that allow for interaction and enable learners to process and apply information. Case-based learning, clinical simulations, practice and feedback are identified as effective educational techniques. Didactic techniques that involve passive instruction, such as reading or lecture, have been found to have little or no impact on learning outcomes. Repetitive interventions, rather than single interventions, were shown to be superior for learning outcomes. Settings similar to the workplace improved skill acquisition and performance. Computer-based learning can be equally or more effective than live instruction and more cost efficient if effective techniques are used. Effective techniques can lead to improvements in knowledge and skill outcomes and clinical practice behaviours, but there is less evidence directly linking CPE to improved clinical outcomes. Very limited quality data are available from low- to middle-income countries. Educational techniques are critical to learning outcomes. Targeted, repetitive interventions can result in better learning outcomes. Setting should be selected to support relevant and realistic practice and increase efficiency. Media should be selected based on the potential to support effective educational techniques and efficiency of instruction. CPE can lead to improved learning outcomes if effective techniques are used. Limited data indicate that there may also be an effect on improving clinical practice behaviours. The research agenda calls for well-constructed evaluations of culturally appropriate combinations of technique, setting, frequency and media, developed for and tested among all levels of health workers in low- and middle-income countries.

  7. Effective in-service training design and delivery: evidence from an integrative literature review

    PubMed Central

    2013-01-01

    Background In-service training represents a significant financial investment for supporting continued competence of the health care workforce. An integrative review of the education and training literature was conducted to identify effective training approaches for health worker continuing professional education (CPE) and what evidence exists of outcomes derived from CPE. Methods A literature review was conducted from multiple databases including PubMed, the Cochrane Library and Cumulative Index to Nursing and Allied Health Literature (CINAHL) between May and June 2011. The initial review of titles and abstracts produced 244 results. Articles selected for analysis after two quality reviews consisted of systematic reviews, randomized controlled trials (RCTs) and programme evaluations published in peer-reviewed journals from 2000 to 2011 in the English language. The articles analysed included 37 systematic reviews and 32 RCTs. The research questions focused on the evidence supporting educational techniques, frequency, setting and media used to deliver instruction for continuing health professional education. Results The evidence suggests the use of multiple techniques that allow for interaction and enable learners to process and apply information. Case-based learning, clinical simulations, practice and feedback are identified as effective educational techniques. Didactic techniques that involve passive instruction, such as reading or lecture, have been found to have little or no impact on learning outcomes. Repetitive interventions, rather than single interventions, were shown to be superior for learning outcomes. Settings similar to the workplace improved skill acquisition and performance. Computer-based learning can be equally or more effective than live instruction and more cost efficient if effective techniques are used. Effective techniques can lead to improvements in knowledge and skill outcomes and clinical practice behaviours, but there is less evidence directly linking CPE to improved clinical outcomes. Very limited quality data are available from low- to middle-income countries. Conclusions Educational techniques are critical to learning outcomes. Targeted, repetitive interventions can result in better learning outcomes. Setting should be selected to support relevant and realistic practice and increase efficiency. Media should be selected based on the potential to support effective educational techniques and efficiency of instruction. CPE can lead to improved learning outcomes if effective techniques are used. Limited data indicate that there may also be an effect on improving clinical practice behaviours. The research agenda calls for well-constructed evaluations of culturally appropriate combinations of technique, setting, frequency and media, developed for and tested among all levels of health workers in low- and middle-income countries. PMID:24083659

  8. Modelling wildland fire propagation by tracking random fronts

    NASA Astrophysics Data System (ADS)

    Pagnini, G.; Mentrelli, A.

    2014-08-01

    Wildland fire propagation is studied in the literature by two alternative approaches, namely the reaction-diffusion equation and the level-set method. These two approaches are considered alternatives to each other because the solution of the reaction-diffusion equation is generally a continuous smooth function that has an exponential decay and is not zero in an infinite domain, while the level-set method, which is a front-tracking technique, generates a sharp function that is not zero inside a compact domain. However, these two approaches can indeed be considered complementary and reconciled. Turbulent hot-air transport and fire spotting are phenomena with a random nature and they are extremely important in wildland fire propagation. Consequently, the fire front acquires a random character, too; hence, a tracking method for random fronts is needed. In particular, the level-set contour is randomised here according to the probability density function of the interface particle displacement. When the level-set method is developed for tracking a front interface with a random motion, the resulting averaged process turns out to be governed by an evolution equation of the reaction-diffusion type. In this reconciled approach, the rate of spread of the fire keeps the same key and characterising role that is typical of the level-set approach. The resulting model is suitable for simulating effects due to turbulent convection, such as fire flank and backing fire, the faster fire spread caused by hot-air pre-heating and ember landing, and the fire overcoming a fire-break zone, which is a case not resolved by models based on the level-set method. Moreover, the proposed formulation yields a correction to the rate-of-spread formula that accounts for the mean jump length of firebrands in the downwind direction for the leeward sector of the fireline contour. The presented study constitutes a proof of concept and needs to be subjected to future validation.
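
    The averaging of the sharp level-set indicator over the probability density of the front-particle displacement can be sketched as a convolution; the Gaussian displacement model and grid below are illustrative assumptions, not the paper's full turbulence and fire-spotting model.

    ```python
    # Sketch: an effective (smooth) fire indicator obtained by averaging the sharp
    # level-set indicator over a Gaussian displacement PDF of the front particles.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    x, y = np.meshgrid(np.linspace(-5, 5, 200), np.linspace(-5, 5, 200))
    phi = np.sqrt(x**2 + y**2) - 2.0            # level-set function: burned region phi < 0
    sharp_indicator = (phi <= 0).astype(float)  # 1 inside the fireline, 0 outside

    sigma_m = 0.5                               # assumed std of particle displacement (grid units)
    dx = x[0, 1] - x[0, 0]
    effective = gaussian_filter(sharp_indicator, sigma=sigma_m / dx)

    # The effective indicator decays smoothly across the front, reconciling the
    # sharp level-set picture with the smooth reaction-diffusion picture.
    print(effective.min(), effective.max())
    ```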

  9. Fabrication of cermet bearings for the control system of a high temperature lithium cooled nuclear reactor

    NASA Technical Reports Server (NTRS)

    Yacobucci, H. G.; Heestand, R. L.; Kizer, D. E.

    1973-01-01

    The techniques used to fabricate cermet bearings for the fueled control drums of a liquid-metal-cooled reference-design reactor concept are presented. The bearings were designed for operation in lithium for as long as 5 years at temperatures to 1205 C. Two sets of bearings were fabricated from a hafnium carbide - 8-wt.% molybdenum - 2-wt.% niobium carbide cermet, and two sets were fabricated from a hafnium nitride - 10-wt.% tungsten cermet. Procedures were developed for synthesizing the material in high-purity inert-atmosphere glove boxes to minimize oxygen content in order to enhance corrosion resistance. Techniques were developed for pressing cylindrical billets to conserve materials and to reduce machining requirements. Finishing was accomplished by a combination of diamond grinding, electrodischarge machining, and diamond lapping. Samples were characterized with respect to composition, impurity level, lattice parameter, microstructure, and density.

  10. Aerodynamic configuration design using response surface methodology analysis

    NASA Technical Reports Server (NTRS)

    Engelund, Walter C.; Stanley, Douglas O.; Lepsch, Roger A.; Mcmillin, Mark M.; Unal, Resit

    1993-01-01

    An investigation has been conducted to determine a set of optimal design parameters for a single-stage-to-orbit reentry vehicle. Several configuration geometry parameters which had a large impact on the entry vehicle flying characteristics were selected as design variables: the fuselage fineness ratio, the nose to body length ratio, the nose camber value, the wing planform area scale factor, and the wing location. The optimal geometry parameter values were chosen using a response surface methodology (RSM) technique which allowed for a minimum dry weight configuration design that met a set of aerodynamic performance constraints on the landing speed, and on the subsonic, supersonic, and hypersonic trim and stability levels. The RSM technique utilized, specifically the central composite design method, is presented, along with the general vehicle conceptual design process. Results are presented for an optimized configuration along with several design trade cases.
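
    A compact sketch of the central composite design and quadratic response-surface fit named above follows; it uses two coded design variables and a hypothetical response function standing in for the vehicle analysis.

    ```python
    # Sketch: central composite design in two coded variables and a least-squares
    # fit of a full quadratic response surface. The response function is hypothetical.
    import numpy as np
    from itertools import product

    def response(x1, x2):
        """Hypothetical 'dry weight' surrogate standing in for the vehicle analysis."""
        return 5.0 + 1.2 * x1 - 0.8 * x2 + 0.5 * x1 * x2 + 0.9 * x1**2 + 0.3 * x2**2

    alpha = np.sqrt(2.0)
    factorial = list(product([-1.0, 1.0], repeat=2))
    axial = [(alpha, 0.0), (-alpha, 0.0), (0.0, alpha), (0.0, -alpha)]
    center = [(0.0, 0.0)] * 3
    design = np.array(factorial + axial + center)

    y = np.array([response(x1, x2) for x1, x2 in design])
    x1, x2 = design[:, 0], design[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    print("fitted coefficients:", np.round(coef, 3))   # recovers the quadratic model
    ```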

  11. RF Single Electron Transistor Readout Amplifiers for Superconducting Astronomical Detectors for X-Ray to Sub-mm Wavelengths

    NASA Technical Reports Server (NTRS)

    Stevenson, Thomas; Aassime, Abdelhanin; Delsing, Per; Frunzio, Luigi; Li, Li-Qun; Prober, Daniel; Schoelkopf, Robert; Segall, Ken; Wilson, Chris; Stahle, Carl

    2000-01-01

    We report progress on using a new type of amplifier, the Radio-Frequency Single-Electron Transistor (RF-SET), to develop multi-channel sensor readout systems for fast and sensitive readout of high impedance cryogenic photodetectors such as Superconducting Tunnel Junctions and Single Quasiparticle Photon Counters. Although cryogenic, these detectors are desirable because of capabilities not otherwise attainable. However, high impedances and low output levels make low-noise, high-speed readouts challenging, and large format arrays would be facilitated by compact, low-power, on-chip integrated amplifiers. Well-suited for this application are RF-SETs, very high performance electrometers which use an rf readout technique to provide 100 MHz bandwidth. Small size, low power, and cryogenic operation allow direct integration with detectors, and using multiple rf carrier frequencies permits simultaneous readout of 20-50 amplifiers with a common electrical connection. We describe both the first 2-channel demonstration of this wavelength division multiplexing technique for RF-SETs, and Charge-Locked-Loop operation with 100 kHz of closed-loop bandwidth.

  12. Workload: Measurement and Management

    NASA Technical Reports Server (NTRS)

    Gore, Brian Francis; Casner, Stephen

    2010-01-01

    Poster: The workload research project has as its task to survey the available literature on: (1) workload measurement techniques; and (2) the effects of workload on operator performance. The first set of findings provides practitioners with a collection of simple-to-use workload measurement techniques along with characterizations of the kinds of tasks each technique has been shown to reliably address. This allows design practitioners to select and use the most appropriate techniques for the task(s) at hand. The second set of findings provides practitioners with the guidance they need to design for appropriate kinds and amounts of workload across all tasks for which the operator is responsible. This guidance helps practitioners design systems and procedures that ensure appropriate levels of engagement across all tasks, and avoid designs and procedures that result in operator boredom, complacency, loss of awareness, undue levels of stress, or skill atrophy, which can result from workload that distracts operators from the tasks they perform and monitor, or from workload levels that are too low, too high, or too consistent or predictable. Only articles that were peer reviewed, long standing and generally accepted in the field, and applicable to a relevant range of conditions in a select domain of interest, in environments analogous to the "extreme" environments of space, were included. In addition, all articles were reviewed and evaluated on uni-dimensional and multi-dimensional considerations. Casner & Gore also examined the notion of thresholds and the conditions that may benefit most from the various methodological approaches. Other considerations included whether the tools would be suitable for guiding requirement-related and design-related questions. An initial review of over 225 articles was conducted and entered into an EndNote database. The reference list included a range of conditions in the domain of interest (subjective/objective measures), the seminal works in workload, as well as summary works

  13. Behavior driven testing in ALMA telescope calibration software

    NASA Astrophysics Data System (ADS)

    Gil, Juan P.; Garces, Mario; Broguiere, Dominique; Shen, Tzu-Chiang

    2016-07-01

    The ALMA software development cycle includes well-defined testing stages that involve developers, testers and scientists. We adapted Behavior Driven Development (BDD) to the testing activities applied to the Telescope Calibration (TELCAL) software. BDD is an agile technique that encourages communication between roles by defining test cases in natural language to specify features and scenarios, which allows participants to share a common language and provides a high-level set of automated tests. This work describes how we implemented and maintain BDD testing for TELCAL, the infrastructure needed to support it and proposals to expand this technique to other subsystems.
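
    To illustrate the shape of such BDD tests, here is a hedged Python sketch in the style of the behave framework; the scenario wording, step names and the phase-RMS check are invented for illustration and are not the actual TELCAL feature files.

    ```python
    # features/steps/phase_cal_steps.py -- illustrative behave step definitions.
    # A matching Gherkin scenario (e.g. features/phase_cal.feature) might read:
    #   Scenario: Phase RMS stays within tolerance
    #     Given phase measurements 0.10,0.22,0.15
    #     When the phase calibration result is computed
    #     Then the phase RMS is below 0.5
    from behave import given, when, then

    def phase_rms(phases):
        """Toy stand-in for a TELCAL calibration result."""
        mean = sum(phases) / len(phases)
        return (sum((p - mean) ** 2 for p in phases) / len(phases)) ** 0.5

    @given("phase measurements {values}")
    def step_given_phases(context, values):
        context.phases = [float(v) for v in values.split(",")]

    @when("the phase calibration result is computed")
    def step_when_compute(context):
        context.rms = phase_rms(context.phases)

    @then("the phase RMS is below {limit:f}")
    def step_then_check(context, limit):
        assert context.rms < limit
    ```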

  14. Word game bingo: a behavioral treatment package for improving textual responding to sight words.

    PubMed Central

    Kirby, K C; Holborn, S W; Bushby, H T

    1981-01-01

    The efficacy of word game bingo for the acquisition and retention of sight word reading was tested with six third-grade students identified as deficient in reading skills. The design was a modified multiple baseline in which treatment was implemented over 3 of 4 word sets and terminated on earlier sets when commencing treatment on later sets. Four sets of bingo cards were constructed on 7 X 9 cm paper divided into 25 equal-sized boxes. Sight words of each set were randomly placed into 24 of these boxes (the center box was marked "free"). Bingo winners were given tokens which were traded weekly for reinforcing activities. Noticeable improvements occurred for the word sets receiving the game treatment (sets A to C). Mean improvement from baseline to treatment was approximately 30 percentage points. Terminal levels of correct responding exceeded 90%. Several variations of the game were suggested for future research, and word game bingo was advocated as an effective behavioral technique for teachers to train sight word reading. PMID:7298541

  15. Variability of Plyometric and Ballistic Exercise Technique Maintains Jump Performance.

    PubMed

    Chandler, Phillip T; Greig, Matthew; Comfort, Paul; McMahon, John J

    2018-06-01

    Chandler, PT, Greig, M, Comfort, P, and McMahon, JJ. Variability of plyometric and ballistic exercise technique maintains jump performance. J Strength Cond Res 32(6): 1571-1582, 2018. The aim of this study was to investigate changes in vertical jump technique over the course of a training session. Twelve plyometric and ballistic exercise-trained male athletes (age = 23.4 ± 4.6 years, body mass = 78.7 ± 18.8 kg, height = 177.1 ± 9.0 cm) performed 3 sets of 10 repetitions of drop jump (DJ), rebound jump (RJ) and squat jump (SJ). Each exercise was analyzed from touchdown to peak joint flexion and peak joint flexion to take-off. Squat jump was analyzed from peak joint flexion to take-off only. Jump height, flexion and extension time and range of motion, and instantaneous angles of the ankle, knee, and hip joints were measured. Separate 1-way repeated analyses of variance compared vertical jump technique across exercise sets and repetitions. Exercise set analysis found that SJ had lower results than DJ and RJ for the angle at peak joint flexion for the hip, knee, and ankle joints and take-off angle of the hip joint. Exercise repetition analysis found that the ankle joint had variable differences for the angle at take-off, flexion, and extension time for RJ. The knee joint had variable differences for flexion time for DJ and angle at take-off and touchdown for RJ. There was no difference in jump height. Variation in measured parameters across repetitions highlights variable technique across plyometric and ballistic exercises. This did not affect jump performance, but likely maintained jump performance by overcoming constraints (e.g., level of rate coding).

  16. Selection of muscle and nerve-cuff electrodes for neuroprostheses using customizable musculoskeletal model.

    PubMed

    Blana, Dimitra; Hincapie, Juan G; Chadwick, Edward K; Kirsch, Robert F

    2013-01-01

    Neuroprosthetic systems based on functional electrical stimulation aim to restore motor function to individuals with paralysis following spinal cord injury. Identifying the optimal electrode set for the neuroprosthesis is complicated because it depends on the characteristics of the individual (such as injury level), the force capacities of the muscles, the movements the system aims to restore, and the hardware limitations (number and type of electrodes available). An electrode-selection method has been developed that uses a customized musculoskeletal model. Candidate electrode sets are created based on desired functional outcomes and the hardware limitations of the proposed system. Inverse-dynamic simulations are performed to determine the proportion of target movements that can be accomplished with each set; the set that allows the most movements to be performed is chosen as the optimal set. The technique is demonstrated here for a system recently developed by our research group to restore whole-arm movement to individuals with high-level tetraplegia. The optimal set included selective nerve-cuff electrodes for the radial and musculocutaneous nerves; single-channel cuffs for the axillary, suprascapular, upper subscapular, and long-thoracic nerves; and muscle-based electrodes for the remaining channels. The importance of functional goals, hardware limitations, muscle and nerve anatomy, and surgical feasibility is highlighted.
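
    A toy version of the selection criterion (choose the candidate electrode set that allows the largest number of target movements) can be sketched as below; the channel names, movement requirements and channel budget are hypothetical stand-ins for the inverse-dynamic simulations.

    ```python
    # Sketch: pick the electrode set that enables the most target movements,
    # subject to a channel budget. Feasibility here is a toy lookup, not a
    # musculoskeletal simulation.
    from itertools import combinations

    CANDIDATES = ["radial_cuff", "musculocut_cuff", "axillary_cuff",
                  "suprascap_cuff", "long_thoracic_cuff", "deltoid_muscle",
                  "triceps_muscle", "biceps_muscle"]

    # Each movement is assumed feasible only if all listed channels are present.
    REQUIREMENTS = {
        "hand_to_mouth": {"musculocut_cuff", "deltoid_muscle"},
        "reach_forward": {"axillary_cuff", "long_thoracic_cuff", "triceps_muscle"},
        "overhead_reach": {"suprascap_cuff", "deltoid_muscle", "triceps_muscle"},
        "hand_open_close": {"radial_cuff"},
    }

    def n_feasible(electrode_set):
        chosen = set(electrode_set)
        return sum(required <= chosen for required in REQUIREMENTS.values())

    budget = 5  # hypothetical hardware limit on stimulation channels
    best = max(combinations(CANDIDATES, budget), key=n_feasible)
    print(best, "->", n_feasible(best), "movements enabled")
    ```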

  17. Job Analysis Techniques for Restructuring Health Manpower Education and Training in the Navy Medical Department. Attachment 9. Laboratory QPCB Task Sort for Medical Laboratory Technology.

    ERIC Educational Resources Information Center

    Technomics, Inc., McLean, VA.

    This publication is Attachment 9 of a set of 16 computer listed QPCB task sorts, by career level, for the entire Hospital Corps and Dental Technician fields. Statistical data are presented in tabular form for a detailed listing of job duties in medical laboratory technology. (BT)

  18. Left ventricular endocardial surface detection based on real-time 3D echocardiographic data

    NASA Technical Reports Server (NTRS)

    Corsi, C.; Borsari, M.; Consegnati, F.; Sarti, A.; Lamberti, C.; Travaglini, A.; Shiota, T.; Thomas, J. D.

    2001-01-01

    OBJECTIVE: A new computerized semi-automatic method for left ventricular (LV) chamber segmentation is presented. METHODS: The LV is imaged by real-time three-dimensional echocardiography (RT3DE). The surface detection model, based on level set techniques, is applied to RT3DE data for image analysis. The modified level set partial differential equation we use is solved by applying numerical methods for conservation laws. The initial conditions are manually established on some slices of the entire volume. The solution obtained for each slice is a contour line corresponding to the boundary between the LV cavity and the LV endocardium. RESULTS: The mathematical model has been applied to sequences of frames of human hearts (volume range: 34-109 ml) imaged both by 2D echocardiography with off-line reconstruction and by RT3DE. Volume estimates obtained by this new semi-automatic method show an excellent correlation with those obtained by manual tracing (r = 0.992). Dynamic change of LV volume during the cardiac cycle is also obtained. CONCLUSION: The volume estimation method is accurate; edge-based segmentation, image completion and volume reconstruction can be accomplished. The visualization technique also allows navigation into the reconstructed volume and display of any section of the volume.

  19. An alert system for triggering different levels of coastal management urgency: Tunisia case study using rapid environmental assessment data.

    PubMed

    Price, A R G; Jaoui, K; Pearson, M P; Jeudy de Grissac, A

    2014-03-15

    Rapid environmental assessment (REA) involves scoring abundances of ecosystems/species groups and magnitude of pressures, concurrently, using the same logarithmic (0-6) assessment scale. We demonstrate the utility of REA data for an alert system identifying different levels of coastal management concern. Thresholds set for abundances/magnitudes, when crossed, trigger proposed responses. Kerkennah, Tunisia, our case study, has significant natural assets (e.g. exceptional seagrass and invertebrate abundances), subjected to varying levels of disturbance and management concern. Using REA thresholds set, fishing, green algae/eutrophication and oil occurred at 'low' levels (scores 0-1): management not (currently) necessary. Construction and wood litter prevailed at 'moderate' levels (scores 2-4): management alerted for (further) monitoring. Solid waste densities were 'high' (scores 5-6): management alerted for action; quantities of rubbish were substantial (20-200 items m⁻¹ beach) but not unprecedented. REA is considered a robust methodology and complementary to other rapid assessment techniques, environmental frameworks and indicators of ecosystem condition. Copyright © 2014 Elsevier Ltd. All rights reserved.
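
    The alert logic implied by the thresholds reported above (scores 0-1 low, 2-4 moderate, 5-6 high on the 0-6 REA scale) can be written as a simple lookup; the function name and wording are ours, not the authors'.

    ```python
    # Sketch: map a 0-6 REA pressure score to the management alert levels used above.
    def rea_alert(score: int) -> str:
        if 0 <= score <= 1:
            return "low: management not currently necessary"
        if 2 <= score <= 4:
            return "moderate: alert management for (further) monitoring"
        if 5 <= score <= 6:
            return "high: alert management for action"
        raise ValueError("REA scores are defined on the 0-6 scale")

    for pressure, score in [("fishing", 1), ("construction", 3), ("solid waste", 6)]:
        print(pressure, "->", rea_alert(score))
    ```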

  20. New approaches to wipe sampling methods for antineoplastic and other hazardous drugs in healthcare settings.

    PubMed

    Connor, Thomas H; Smith, Jerome P

    2016-09-01

    At the present time, the method of choice to determine surface contamination of the workplace with antineoplastic and other hazardous drugs is surface wipe sampling and subsequent sample analysis with a variety of analytical techniques. The purpose of this article is to review current methodology for determining the level of surface contamination with hazardous drugs in healthcare settings and to discuss recent advances in this area. In addition, it provides some guidance for conducting surface wipe sampling and sample analysis for these drugs in healthcare settings. Published studies on the use of wipe sampling to measure hazardous drugs on surfaces in healthcare settings were reviewed. These studies include the use of well-documented chromatographic techniques for sample analysis in addition to newly evolving technology that provides rapid analysis of specific antineoplastic drugs. Methodology for the analysis of surface wipe samples for hazardous drugs is reviewed, including the purposes, technical factors, sampling strategy, materials required, and limitations. The use of lateral flow immunoassay (LFIA) and fluorescence covalent microbead immunosorbent assay (FCMIA) for surface wipe sample evaluation is also discussed. Current recommendations are that all healthcare settings where antineoplastic and other hazardous drugs are handled include surface wipe sampling as part of a comprehensive hazardous drug safe-handling program. Surface wipe sampling may be used as a method to characterize potential occupational dermal exposure risk and to evaluate the effectiveness of implemented controls and the overall safety program. New technology, although currently limited in scope, may make wipe sampling for hazardous drugs more routine, less costly, and provide a shorter response time than the classical analytical techniques now in use.

  1. GRID and Multiphonon States

    PubMed Central

    Robinson, S. J.

    2000-01-01

    The development of the GRID technique for determining nuclear level lifetimes of excited low-spin states populated in thermal neutron capture reactions has resulted in the ability to perform detailed studies of proposed multiphonon excitations for the first time. This paper discusses the experimental evidence for multiphonon excitations determined using the GRID technique. In deformed nuclei several good examples of γγ Kπ = 4+ excitations have been established, whereas the experimental evidence gathered on Kπ = 0+ bands is contradictory, and any interpretations will likely involve the mixing of several different configurations. In vibrational nuclei the GRID technique has helped to establish the existence of multiple quadrupole phonon excitations in 114Cd, and an almost complete set of quadrupole-octupole coupled states in 144Nd. PMID:27551594

  2. True Color Image Analysis For Determination Of Bone Growth In Fluorochromic Biopsies

    NASA Astrophysics Data System (ADS)

    Madachy, Raymond J.; Chotivichit, Lee; Huang, H. K.; Johnson, Eric E.

    1989-05-01

    A true color imaging technique has been developed for analysis of microscopic fluorochromic bone biopsy images to quantify new bone growth. The technique searches for specified colors in a medical image for quantification of areas of interest. Based on a user supplied training set, a multispectral classification of pixel values is performed and used for segmenting the image. Good results were obtained when compared to manual tracings of new bone growth performed by an orthopedic surgeon. At a 95% confidence level, the hypothesis that there is no difference between the two methods can be accepted. Work is in progress to test bone biopsies with different colored stains and further optimize the analysis process using three-dimensional spectral ordering techniques.
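
    A bare-bones version of the training-set-driven multispectral pixel classification might look like the sketch below (nearest class mean in RGB space); the colors, training samples and "new bone" label are illustrative only, not the authors' stains or data.

    ```python
    # Sketch: classify each RGB pixel to the nearest class mean learned from a
    # user-supplied training set, then report the area fraction of one class.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical training pixels (RGB in 0..1) for two stain classes.
    train = {
        "new_bone": rng.normal([0.8, 0.7, 0.2], 0.05, (200, 3)),
        "old_bone": rng.normal([0.6, 0.4, 0.5], 0.05, (200, 3)),
    }
    class_names = list(train)
    means = np.stack([train[c].mean(axis=0) for c in class_names])   # (n_classes, 3)

    image = rng.uniform(0, 1, (128, 128, 3))                         # stand-in biopsy image
    dists = np.linalg.norm(image[..., None, :] - means, axis=-1)     # (H, W, n_classes)
    labels = dists.argmin(axis=-1)

    new_bone_fraction = (labels == class_names.index("new_bone")).mean()
    print(f"new bone area fraction: {new_bone_fraction:.3f}")
    ```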

  3. A Strassen-Newton algorithm for high-speed parallelizable matrix inversion

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Ferguson, Helaman R. P.

    1988-01-01

    Techniques are described for computing matrix inverses by algorithms that are highly suited to massively parallel computation. The techniques are based on an algorithm suggested by Strassen (1969). Variations of this scheme use matrix Newton iterations and other methods to improve the numerical stability while at the same time preserving a very high level of parallelism. One-processor Cray-2 implementations of these schemes range from one that is up to 55 percent faster than a conventional library routine to one that is slower than a library routine but achieves excellent numerical stability. The problem of computing the solution to a single set of linear equations is discussed, and it is shown that this problem can also be solved efficiently using these techniques.
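
    The matrix Newton iteration underlying the Strassen-Newton scheme can be sketched in a few lines; the Strassen multiplication itself is replaced here by ordinary matrix products, so this shows only the Newton part, under the standard starting-guess assumption.

    ```python
    # Sketch: Newton (Schulz) iteration X_{k+1} = X_k (2I - A X_k) for A^{-1}.
    # Each step costs two matrix multiplies, which is where a Strassen-type
    # multiplication (not shown) would be plugged in for parallel speed.
    import numpy as np

    def newton_inverse(A, iters=40):
        n = A.shape[0]
        # Classical starting guess ensuring convergence: X0 = A^T / (||A||_1 ||A||_inf)
        X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
        I = np.eye(n)
        for _ in range(iters):
            X = X @ (2 * I - A @ X)
        return X

    A = np.random.default_rng(0).normal(size=(50, 50)) + 10 * np.eye(50)
    X = newton_inverse(A)
    print("max |AX - I| =", np.abs(A @ X - np.eye(50)).max())
    ```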

  4. Rough Set Approach to Incomplete Multiscale Information System

    PubMed Central

    Yang, Xibei; Qi, Yong; Yu, Dongjun; Yu, Hualong; Song, Xiaoning; Yang, Jingyu

    2014-01-01

    Multiscale information system is a new knowledge representation system for expressing the knowledge with different levels of granulations. In this paper, by considering the unknown values, which can be seen everywhere in real world applications, the incomplete multiscale information system is firstly investigated. The descriptor technique is employed to construct rough sets at different scales for analyzing the hierarchically structured data. The problem of unravelling decision rules at different scales is also addressed. Finally, the reduct descriptors are formulated to simplify decision rules, which can be derived from different scales. Some numerical examples are employed to substantiate the conceptual arguments. PMID:25276852
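
    One way to make the descriptor-style approximations concrete for incomplete data is the tolerance-relation construction sketched below; the toy objects and the choice of tolerance relation are illustrative assumptions rather than the paper's exact formalism.

    ```python
    # Sketch: lower/upper approximations of a decision class in an incomplete
    # information system, using a tolerance relation (None marks an unknown value).
    descriptions = {
        "o1": ("high", "yes", None),
        "o2": ("high", None, "low"),
        "o3": ("low",  "no",  "low"),
        "o4": (None,   "no",  "high"),
    }
    target = {"o1", "o2"}                 # decision class of interest

    def tolerant(a, b):
        """Objects are indiscernible if every pair of known attribute values agrees."""
        return all(x is None or y is None or x == y for x, y in zip(a, b))

    def tolerance_class(o):
        return {p for p in descriptions if tolerant(descriptions[o], descriptions[p])}

    lower = {o for o in descriptions if tolerance_class(o) <= target}
    upper = {o for o in descriptions if tolerance_class(o) & target}
    print("lower:", lower, "upper:", upper)
    ```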

  5. Take the Reins on Model Quality with ModelCHECK and Gatekeeper

    NASA Technical Reports Server (NTRS)

    Jones, Corey

    2012-01-01

    Model quality and consistency has been an issue for us due to the diverse experience level and imaginative modeling techniques of our users. Fortunately, setting up ModelCHECK and Gatekeeper to enforce our best practices has helped greatly, but it wasn't easy. There were many challenges associated with setting up ModelCHECK and Gatekeeper including: limited documentation, restrictions within ModelCHECK, and resistance from end users. However, we consider ours a success story. In this presentation we will describe how we overcame these obstacles and present some of the details of how we configured them to work for us.

  6. Effect of Liquid Penetrant Sensitivity on Probability of Detection

    NASA Technical Reports Server (NTRS)

    Parker, Bradford H.

    2011-01-01

    The objective of the task is to investigate the effect of liquid penetrant sensitivity level on the probability of detection (POD) of cracks in various metals. NASA-STD-5009 currently requires the use of only sensitivity level 4 liquid penetrants for NASA Standard Level inspections. This requirement is based on the fact that the data used to establish the reliably detectable flaw sizes for penetrant inspection came from studies performed in the 1970s using penetrants deemed to be equivalent only to modern-day sensitivity level 4 penetrants. However, many NDE contractors supporting NASA Centers routinely use sensitivity level 3 penetrants. Because of the new NASA-STD-5009 requirement, these contractors will have to either shift to sensitivity level 4 penetrants or perform formal POD demonstration tests to qualify their existing process. We propose a study to compare the POD generated for two penetrant manufacturers, Sherwin and Magnaflux, and for the two most common penetrant inspection methods, water washable and post-emulsifiable hydrophilic. NDE vendors local to GSFC will be employed. A total of six inspectors will inspect a set of crack panels with a broad range of fatigue crack sizes. Each inspector will perform eight inspections of the panel set using the combinations of methods and sensitivity levels described above. At least one inspector will also perform multiple inspections using a fixed technique to investigate repeatability. The hit/miss data sets will be evaluated using both the NASA-generated DOEPOD software and the MIL-STD-1823 software.

  7. Does gastric bypass surgery change body weight set point?

    PubMed Central

    Hao, Z; Mumphrey, M B; Morrison, C D; Münzberg, H; Ye, J; Berthoud, H R

    2016-01-01

    The relatively stable body weight during adulthood is attributed to a homeostatic regulatory mechanism residing in the brain which uses feedback from the body to control energy intake and expenditure. This mechanism guarantees that if perturbed up or down by design, body weight will return to pre-perturbation levels, defined as the defended level or set point. The fact that weight re-gain is common after dieting suggests that obese subjects defend a higher level of body weight. Thus, the set point for body weight is flexible and likely determined by the complex interaction of genetic, epigenetic and environmental factors. Unlike dieting, bariatric surgery does a much better job in producing sustained suppression of food intake and body weight, and an intensive search for the underlying mechanisms has started. Although one explanation for this lasting effect of particularly Roux-en-Y gastric bypass surgery (RYGB) is simple physical restriction due to the invasive surgery, a more exciting explanation is that the surgery physiologically reprograms the body weight defense mechanism. In this non-systematic review, we present behavioral evidence from our own and other studies that defended body weight is lowered after RYGB and sleeve gastrectomy. After these surgeries, rodents return to their preferred lower body weight if over- or underfed for a period of time, and the ability to drastically increase food intake during the anabolic phase strongly argues against the physical restriction hypothesis. However, the underlying mechanisms remain obscure. Although the mechanism involves central leptin and melanocortin signaling pathways, other peripheral signals such as gut hormones and their neural effector pathways likely contribute. Future research using both targeted and non-targeted ‘omics’ techniques in both humans and rodents as well as modern, genetically targeted, neuronal manipulation techniques in rodents will be necessary. PMID:28685029

  8. Relationships of pediatric anthropometrics for CT protocol selection.

    PubMed

    Phillips, Grace S; Stanescu, Arta-Luana; Alessio, Adam M

    2014-07-01

    Determining the optimal CT technique to minimize patient radiation exposure while maintaining diagnostic utility requires patient-specific protocols that are based on patient characteristics. This work develops relationships between different anthropometrics and CT image noise to determine appropriate protocol classification schemes. We measured the image noise in 387 CT examinations of pediatric patients (222 boys, 165 girls) of the chest, abdomen, and pelvis and generated mathematic relationships between image noise and patient lateral and anteroposterior dimensions, age, and weight. At the chest level, lateral distance (ld) across the body is strongly correlated with weight (ld = 0.23 × weight + 16.77; R² = 0.93) and is less well correlated with age (ld = 1.10 × age + 17.13; R² = 0.84). Similar trends were found for anteroposterior dimensions and at the abdomen level. Across all studies, when acquisition-specific parameters are factored out of the noise, the log of image noise was highly correlated with lateral distance (R² = 0.72) and weight (R² = 0.72) and was less correlated with age (R² = 0.62). Following first-order relationships of image noise and scanner technique, plots were formed to show techniques that could achieve matched noise across the pediatric population. Patient lateral distance and weight are essentially equally effective metrics on which to base maximum technique settings for pediatric patient-specific protocols. These metrics can also be used to help categorize appropriate reference levels for CT technique and size-specific dose estimates across the pediatric population.
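
    For a quick sense of the reported chest-level regression, the snippet below evaluates the weight-based relationship; units are assumed to be kilograms for weight and centimetres for lateral distance, which the abstract does not state explicitly.

    ```python
    # Sketch: evaluate the reported relationship ld = 0.23 * weight + 16.77
    # (weight assumed in kg, lateral distance assumed in cm; R^2 = 0.93 in the study).
    def lateral_distance_cm(weight_kg: float) -> float:
        return 0.23 * weight_kg + 16.77

    for w in (10, 30, 60):
        print(f"{w} kg -> {lateral_distance_cm(w):.1f} cm")
    ```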

  9. Practical aspects of inhaler use in the management of chronic obstructive pulmonary disease in the primary care setting.

    PubMed

    Yawn, Barbara P; Colice, Gene L; Hodder, Rick

    2012-01-01

    Sustained bronchodilation using inhaled medications in moderate to severe chronic obstructive pulmonary disease (COPD) grades 2 and 3 (Global Initiative for Chronic Obstructive Lung Disease guidelines) has been shown to have clinical benefits on long-term symptom control and quality of life, with possible additional benefits on disease progression and longevity. Aggressive diagnosis and treatment of symptomatic COPD is an integral and pivotal part of COPD management, which usually begins with primary care physicians. The current standard of care involves the use of one or more inhaled bronchodilators, and depending on COPD severity and phenotype, inhaled corticosteroids. There is a wide range of inhaler devices available for delivery of inhaled medications, but suboptimal inhaler use is a common problem that can limit the clinical effectiveness of inhaled therapies in the real-world setting. Patients' comorbidities, other physical or mental limitations, and the level of inhaler technique instruction may limit proper inhaler use. This paper presents information that can overcome barriers to proper inhaler use, including issues in device selection, steps in correct technique for various inhaler devices, and suggestions for assessing and monitoring inhaler techniques. Ensuring proper inhaler technique can maximize drug effectiveness and aid clinical management at all grades of COPD.

  10. Practical aspects of inhaler use in the management of chronic obstructive pulmonary disease in the primary care setting

    PubMed Central

    Yawn, Barbara P; Colice, Gene L; Hodder, Rick

    2012-01-01

    Sustained bronchodilation using inhaled medications in moderate to severe chronic obstructive pulmonary disease (COPD) grades 2 and 3 (Global Initiative for Chronic Obstructive Lung Disease guidelines) has been shown to have clinical benefits on long-term symptom control and quality of life, with possible additional benefits on disease progression and longevity. Aggressive diagnosis and treatment of symptomatic COPD is an integral and pivotal part of COPD management, which usually begins with primary care physicians. The current standard of care involves the use of one or more inhaled bronchodilators, and depending on COPD severity and phenotype, inhaled corticosteroids. There is a wide range of inhaler devices available for delivery of inhaled medications, but suboptimal inhaler use is a common problem that can limit the clinical effectiveness of inhaled therapies in the real-world setting. Patients’ comorbidities, other physical or mental limitations, and the level of inhaler technique instruction may limit proper inhaler use. This paper presents information that can overcome barriers to proper inhaler use, including issues in device selection, steps in correct technique for various inhaler devices, and suggestions for assessing and monitoring inhaler techniques. Ensuring proper inhaler technique can maximize drug effectiveness and aid clinical management at all grades of COPD. PMID:22888221

  11. Application of Spectral Analysis Techniques in the Intercomparison of Aerosol Data: 1. an EOF Approach to the Spatial-Temporal Variability of Aerosol Optical Depth Using Multiple Remote Sensing Data Sets

    NASA Technical Reports Server (NTRS)

    Li, Jing; Carlson, Barbara E.; Lacis, Andrew A.

    2013-01-01

    Many remote sensing techniques and passive sensors have been developed to measure global aerosol properties. While instantaneous comparisons between pixel-level data often reveal quantitative differences, here we use Empirical Orthogonal Function (EOF) analysis, also known as Principal Component Analysis, to demonstrate that satellite-derived aerosol optical depth (AOD) data sets exhibit essentially the same spatial and temporal variability and are thus suitable for large-scale studies. Analysis results show that the first four EOF modes of AOD account for the bulk of the variance and agree well across the four data sets used in this study (i.e., Aqua MODIS, Terra MODIS, MISR, and SeaWiFS). Only SeaWiFS data over land have slightly different EOF patterns. Globally, the first two EOF modes show annual cycles and are mainly related to Sahara dust in the northern hemisphere and biomass burning in the southern hemisphere, respectively. After removing the mean seasonal cycle from the data, major aerosol sources, including biomass burning in South America and dust in West Africa, are revealed in the dominant modes due to the different interannual variability of aerosol emissions. The enhancement of biomass burning associated with El Niño over Indonesia and central South America is also captured with the EOF technique.
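
    Computationally, the EOF decomposition described above is a principal component analysis of the space-time data matrix. The sketch below is a generic illustration on synthetic data; the array shapes and variable names are assumptions and do not correspond to the study's actual grids or satellite products.

    ```python
    # Illustrative EOF (PCA) decomposition of a gridded AOD time series.
    # Synthetic data stand in for the satellite products named in the abstract.
    import numpy as np

    n_time, n_lat, n_lon = 120, 30, 60            # e.g. 10 years of monthly fields (assumed)
    aod = np.random.rand(n_time, n_lat, n_lon)    # placeholder AOD fields

    X = aod.reshape(n_time, -1)                   # time x space data matrix
    X = X - X.mean(axis=0)                        # remove the temporal mean at each grid box

    # SVD gives principal-component time series (U*S) and EOF spatial patterns (rows of Vt)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    variance_fraction = S**2 / np.sum(S**2)

    eof1 = Vt[0].reshape(n_lat, n_lon)            # leading spatial mode
    pc1 = U[:, 0] * S[0]                          # its time series
    print("variance explained by first four modes:", variance_fraction[:4].round(3))
    print("leading EOF shape:", eof1.shape, "PC1 length:", pc1.size)
    ```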

  12. Analysis of Information Content in High-Spectral Resolution Sounders using Subset Selection Analysis

    NASA Technical Reports Server (NTRS)

    Velez-Reyes, Miguel; Joiner, Joanna

    1998-01-01

    In this paper, we summarize the results of the sensitivity analysis and data reduction carried out to determine the information content of AIRS and IASI channels. The analysis and data reduction were based on the use of subset selection techniques developed in the linear algebra and statistics communities to study linear dependencies in high-dimensional data sets. We applied the subset selection method to study dependency among channels by studying the dependency among their weighting functions. We also applied the technique to study the information provided by the different levels into which the atmosphere is discretized for retrievals and analysis. Results from the method correlate well with intuition in many respects and point to possible modifications for band selection in sensor design and for the number and location of levels used in the analysis process.
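
    One common linear-algebra subset-selection tool of the kind referred to above is QR factorization with column pivoting, which orders columns (here, channel weighting functions) by decreasing linear independence. The sketch below uses a synthetic weighting-function matrix; the dimensions and subset size are assumptions, not the AIRS/IASI configuration.

    ```python
    # Sketch of a subset-selection step: rank sounder channels by the linear
    # independence of their weighting functions using QR with column pivoting.
    import numpy as np
    from scipy.linalg import qr

    n_levels, n_channels = 100, 300               # assumed dimensions
    W = np.random.rand(n_levels, n_channels)      # columns = channel weighting functions (placeholder)

    # Column pivoting orders channels so the leading columns are the most
    # linearly independent; strongly dependent channels appear late in the pivot.
    Q, R, piv = qr(W, pivoting=True)
    k = 40                                        # desired subset size (assumed)
    selected_channels = piv[:k]
    print("first few selected channel indices:", selected_channels[:10])
    ```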

  13. Off-Body Boundary-Layer Measurement Techniques Development for Supersonic Low-Disturbance Flows

    NASA Technical Reports Server (NTRS)

    Owens, Lewis R.; Kegerise, Michael A.; Wilkinson, Stephen P.

    2011-01-01

    Investigations were performed to develop accurate boundary-layer measurement techniques in a Mach 3.5 laminar boundary layer on a 7° half-angle cone at 0° angle of attack. A discussion of the measurement challenges is presented as well as how each was addressed. A computational study was performed to minimize the probe aerodynamic interference effects, resulting in improved pitot and hot-wire probe designs. Probe calibration and positioning processes were also developed with the goal of reducing the measurement uncertainties from 10% levels to less than 5% levels. Efforts were made to define the experimental boundary conditions for the cone flow so comparisons could be made with a set of companion computational simulations. The development status of the mean and dynamic boundary-layer flow measurements for a nominally sharp cone in a low-disturbance supersonic flow is presented.

  14. What are the most effective intervention techniques for changing physical activity self-efficacy and physical activity behaviour--and are they the same?

    PubMed

    Williams, S L; French, D P

    2011-04-01

    There is convincing evidence that targeting self-efficacy is an effective means of increasing physical activity. However, evidence concerning which are the most effective techniques for changing self-efficacy and thereby physical activity is lacking. The present review aims to estimate the association between specific intervention techniques used in physical activity interventions and change obtained in both self-efficacy and physical activity behaviour. A systematic search yielded 27 physical activity intervention studies for 'healthy' adults that reported self-efficacy and physical activity data. A small, yet significant (P < 0.01) effect of the interventions was found on change in self-efficacy and physical activity (d = 0.16 and 0.21, respectively). When a technique was associated with a change in effect sizes for self-efficacy, it also tended to be associated with a change (r(s) = 0.690, P < 0.001) in effect size for physical activity. Moderator analyses found that 'action planning', 'provide instruction' and 'reinforcing effort towards behaviour' were associated with significantly higher levels of both self-efficacy and physical activity. 'Relapse prevention' and 'setting graded tasks' were associated with significantly lower self-efficacy and physical activity levels. This meta-analysis provides evidence for which psychological techniques are most effective for changing self-efficacy and physical activity.
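
    The moderator-style analysis reported above, rank-correlating per-technique effect sizes for self-efficacy with those for physical activity, can be illustrated in a few lines; the effect-size values below are invented for the sketch and are not the review's data.

    ```python
    # Toy illustration: rank-correlate per-technique effect sizes for self-efficacy
    # with those for physical activity (values are made up for this sketch).
    from scipy.stats import spearmanr

    d_self_efficacy = [0.10, 0.25, 0.05, 0.30, 0.18, 0.22]
    d_physical_activity = [0.12, 0.28, 0.02, 0.35, 0.20, 0.19]

    rho, p_value = spearmanr(d_self_efficacy, d_physical_activity)
    print(f"Spearman r_s = {rho:.3f}, p = {p_value:.3f}")
    ```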

  15. Construction of Weak and Strong Similarity Measures for Ordered Sets of Documents Using Fuzzy Set Techniques.

    ERIC Educational Resources Information Center

    Egghe, L.; Michel, C.

    2003-01-01

    Ordered sets (OS) of documents are encountered more and more in information distribution systems, such as information retrieval systems. Classical similarity measures for ordinary sets of documents need to be extended to these ordered sets. This is done in this article using fuzzy set techniques. The practical usability of the OS-measures is…

  16. Latent profiles of elite Malaysian athletes' use of psychological skills and techniques and relations with mental toughness.

    PubMed

    Ponnusamy, Vellapandian; Lines, Robin L J; Zhang, Chun-Qing; Gucciardi, Daniel F

    2018-01-01

    The majority of past work on athletes' use of psychological skills and techniques (PSTs) has adopted a variable-centered approach in which the statistical relations among study variables are averaged across a sample. However, variable-centered-analyses exclude the possibility that PSTs may be used in tandem or combined in different ways across practice and competition settings. With this empirical gap in mind, the purposes of this study were to identify the number and type of profiles of elite athletes' use of PSTs, and examine differences between these clusters in terms of their self-reported mental toughness. In this cross-sectional survey study, 285 Malaysian elite athletes (170 males, 115 females) aged 15-44 years ( M = 18.89, SD = 4.49) completed measures of various PSTs and mental toughness. Latent profile analysis was employed to determine the type and number of profiles that best represent athletes' reports of their use of PSTs in practice and competition settings, and examine differences between these classes in terms of self-reported mental toughness. Our results revealed three profiles (low, moderate, high use) in both practice and competition settings that were distinguished primarily according to quantitative differences in the absolute levels of reported use across most of the PSTs assessed in practice and competition settings, which in turn, were differentially related with mental toughness. Specifically, higher use of PSTs was associated with higher levels of mental toughness. This study provides one of the first analyses of the different configurations of athletes' use of PSTs that typify unique subgroups of performers. An important next step is to examine the longitudinal (in) stability of such classes and therefore provide insight into the temporal dynamics of different configurations of athletes' use of PSTs.
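
    Latent profile analysis is commonly fitted as a finite mixture model. The sketch below uses a Gaussian mixture with BIC-based selection of the number of classes as a generic stand-in for that step; the data are synthetic and this is not the authors' actual modelling pipeline.

    ```python
    # Hedged sketch of a latent-profile-style analysis: fit Gaussian mixtures with
    # 1-5 classes to synthetic PST usage scores and pick the class count by BIC.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    scores = rng.normal(size=(285, 8))            # 285 athletes x 8 PST subscales (placeholder)

    bics = {}
    for k in range(1, 6):
        gmm = GaussianMixture(n_components=k, covariance_type="diag", random_state=0)
        gmm.fit(scores)
        bics[k] = gmm.bic(scores)

    best_k = min(bics, key=bics.get)
    profiles = GaussianMixture(n_components=best_k, covariance_type="diag",
                               random_state=0).fit_predict(scores)
    print("BIC by class count:", bics, "-> selected", best_k, "profiles")
    ```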

  17. Enhancement of face recognition learning in patients with brain injury using three cognitive training procedures.

    PubMed

    Powell, Jane; Letson, Susan; Davidoff, Jules; Valentine, Tim; Greenwood, Richard

    2008-04-01

    Twenty patients with impairments of face recognition, in the context of a broader pattern of cognitive deficits, were administered three new training procedures derived from contemporary theories of face processing to enhance their learning of new faces: semantic association (being given additional verbal information about the to-be-learned faces); caricaturing (presentation of caricatured versions of the faces during training and veridical versions at recognition testing); and part recognition (focusing patients on distinctive features during the training phase). Using a within-subjects design, each training procedure was applied to a different set of 10 previously unfamiliar faces and entailed six presentations of each face. In a "simple exposure" control procedure (SE), participants were given six presentations of another set of faces using the same basic protocol but with no further elaboration. Order of the four procedures was counterbalanced, and each condition was administered on a different day. A control group of 12 patients with similar levels of face recognition impairment were trained on all four sets of faces under SE conditions. Compared to the SE condition, all three training procedures resulted in more accurate discrimination between the 10 studied faces and 10 distractor faces in a post-training recognition test. This did not reflect any intrinsic lesser memorability of the faces used in the SE condition, as evidenced by the comparable performance across face sets by the control group. At the group level, the three experimental procedures were of similar efficacy, and associated cognitive deficits did not predict which technique would be most beneficial to individual patients; however, there was limited power to detect such associations. Interestingly, a pure prosopagnosic patient who was tested separately showed benefit only from the part recognition technique. Possible mechanisms for the observed effects, and implications for rehabilitation, are discussed.

  18. Latent profiles of elite Malaysian athletes’ use of psychological skills and techniques and relations with mental toughness

    PubMed Central

    Lines, Robin L.J.

    2018-01-01

    Background The majority of past work on athletes’ use of psychological skills and techniques (PSTs) has adopted a variable-centered approach in which the statistical relations among study variables are averaged across a sample. However, variable-centered-analyses exclude the possibility that PSTs may be used in tandem or combined in different ways across practice and competition settings. With this empirical gap in mind, the purposes of this study were to identify the number and type of profiles of elite athletes’ use of PSTs, and examine differences between these clusters in terms of their self-reported mental toughness. Methods In this cross-sectional survey study, 285 Malaysian elite athletes (170 males, 115 females) aged 15–44 years (M = 18.89, SD = 4.49) completed measures of various PSTs and mental toughness. Latent profile analysis was employed to determine the type and number of profiles that best represent athletes’ reports of their use of PSTs in practice and competition settings, and examine differences between these classes in terms of self-reported mental toughness. Results Our results revealed three profiles (low, moderate, high use) in both practice and competition settings that were distinguished primarily according to quantitative differences in the absolute levels of reported use across most of the PSTs assessed in practice and competition settings, which in turn, were differentially related with mental toughness. Specifically, higher use of PSTs was associated with higher levels of mental toughness. Conclusion This study provides one of the first analyses of the different configurations of athletes’ use of PSTs that typify unique subgroups of performers. An important next step is to examine the longitudinal (in) stability of such classes and therefore provide insight into the temporal dynamics of different configurations of athletes’ use of PSTs. PMID:29780672

  19. Teachers' and Learners' Perceptions of Applying Translation as a Method, Strategy, or Technique in an Iranian EFL Setting

    ERIC Educational Resources Information Center

    Mollaei, Fatemeh; Taghinezhad, Ali; Sadighi, Firooz

    2017-01-01

    It has been found that translation is an efficient means of teaching/learning the grammar, syntax, and lexis of a foreign language. Moreover, translation is useful for beginners who do not yet have the critical level of proficiency in their target language for expression. This study was conducted to examine the teachers' and learners' perceptions of…

  20. Interpreting the Results of Weighted Least-Squares Regression: Caveats for the Statistical Consumer.

    ERIC Educational Resources Information Center

    Willett, John B.; Singer, Judith D.

    In research, data sets often occur in which the variance of the distribution of the dependent variable at given levels of the predictors is a function of the values of the predictors. In this situation, the use of weighted least-squares (WLS) regression techniques is required. Weights suitable for use in a WLS regression analysis must be estimated. A…
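
    A minimal weighted least-squares sketch is given below, assuming the residual variance is a known function of the predictor; the data and the variance model are synthetic and purely illustrative.

    ```python
    # Minimal WLS sketch: when residual variance grows with x, weight each
    # observation by the inverse of its (assumed) variance.
    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(1, 10, 50)
    y = 2.0 + 0.5 * x + rng.normal(scale=0.3 * x)   # heteroscedastic noise

    X = np.column_stack([np.ones_like(x), x])       # design matrix with intercept
    w = 1.0 / (0.3 * x) ** 2                        # weights = 1 / variance (assumed known form)

    # Solve (X^T W X) beta = X^T W y
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    print("WLS intercept and slope:", beta.round(3))
    ```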

  1. Scheduling Operational-Level Courses of Action

    DTIC Science & Technology

    2003-10-01

    Process modelling and analysis – process synchronisation techniques; Information and knowledge management – Collaborative planning systems – Workflow... logistics – Some tasks may consume resources. The military user may wish to impose synchronisation constraints among tasks. A military end state can be... effects, – constrained with resource and synchronisation considerations, and – lead to the achievement of conditions set in the end state. The COA is

  2. Atmospheric observations for STS-1 landing

    NASA Technical Reports Server (NTRS)

    Turner, R. E.; Arnold, J. E.; Wilson, G. S.

    1981-01-01

    A summary of synoptic weather conditions existing over the western United States is given for the time of shuttle descent into Edwards Air Force Base, California. The techniques and methods used to furnish synoptic atmospheric data at the surface and aloft for flight verification of the STS-1 orbiter during its descent into Edwards Air Force Base are specified. Examples of the upper level data set are given.

  3. Automatic rule generation for high-level vision

    NASA Technical Reports Server (NTRS)

    Rhee, Frank Chung-Hoon; Krishnapuram, Raghu

    1992-01-01

    A new fuzzy set based technique that was developed for decision making is discussed. It is a method to generate fuzzy decision rules automatically for image analysis. This paper proposes a method to generate rule-based approaches to solve problems such as autonomous navigation and image understanding automatically from training data. The proposed method is also capable of filtering out irrelevant features and criteria from the rules.

  4. Imaging evidence and recommendations for traumatic brain injury: advanced neuro- and neurovascular imaging techniques.

    PubMed

    Wintermark, M; Sanelli, P C; Anzai, Y; Tsiouris, A J; Whitlow, C T

    2015-02-01

    Neuroimaging plays a critical role in the evaluation of patients with traumatic brain injury, with NCCT as the first-line of imaging for patients with traumatic brain injury and MR imaging being recommended in specific settings. Advanced neuroimaging techniques, including MR imaging DTI, blood oxygen level-dependent fMRI, MR spectroscopy, perfusion imaging, PET/SPECT, and magnetoencephalography, are of particular interest in identifying further injury in patients with traumatic brain injury when conventional NCCT and MR imaging findings are normal, as well as for prognostication in patients with persistent symptoms. These advanced neuroimaging techniques are currently under investigation in an attempt to optimize them and substantiate their clinical relevance in individual patients. However, the data currently available confine their use to the research arena for group comparisons, and there remains insufficient evidence at the time of this writing to conclude that these advanced techniques can be used for routine clinical use at the individual patient level. TBI imaging is a rapidly evolving field, and a number of the recommendations presented will be updated in the future to reflect the advances in medical knowledge. © 2015 by American Journal of Neuroradiology.

  5. Adaptive mesh refinement techniques for the immersed interface method applied to flow problems

    PubMed Central

    Li, Zhilin; Song, Peng

    2013-01-01

    In this paper, we develop an adaptive mesh refinement strategy of the Immersed Interface Method for flow problems with a moving interface. The work is built on the AMR method developed for two-dimensional elliptic interface problems in the paper [12] (CiCP, 12(2012), 515–527). The interface is captured by the zero level set of a Lipschitz continuous function φ(x, y, t). Our adaptive mesh refinement is built within a small band of |φ(x, y, t)| ≤ δ with finer Cartesian meshes. The AMR-IIM is validated for Stokes and Navier-Stokes equations with exact solutions, moving interfaces driven by the surface tension, and classical bubble deformation problems. A new simple area preserving strategy is also proposed in this paper for the level set method. PMID:23794763
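
    The narrow-band construction described above can be illustrated in a few lines: cells within |φ| ≤ δ of the zero level set are flagged for the finer Cartesian mesh. The level set function (a circle) and the bandwidth below are assumptions for the sketch, not the paper's test cases.

    ```python
    # Sketch of the narrow-band refinement flag: refine where |phi| <= delta.
    import numpy as np

    nx = ny = 128
    x, y = np.meshgrid(np.linspace(-1.0, 1.0, nx), np.linspace(-1.0, 1.0, ny))
    phi = np.sqrt(x**2 + y**2) - 0.5          # signed distance to a circle of radius 0.5

    h = 2.0 / (nx - 1)                        # coarse grid spacing
    delta = 3.0 * h                           # band of about three coarse cells (assumed)
    refine_mask = np.abs(phi) <= delta        # True where the finer Cartesian mesh is used
    print("fraction of cells flagged for refinement:", round(float(refine_mask.mean()), 3))
    ```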

  6. Differences in pedalling technique between road cyclists of different competitive levels.

    PubMed

    García-López, Juan; Díez-Leal, Sergio; Ogueta-Alday, Ana; Larrazabal, Josu; Rodríguez-Marroyo, José A

    2016-09-01

    The purpose of this study was to compare the pedalling technique in road cyclists of different competitive levels. Eleven professional, thirteen elite and fourteen club cyclists were assessed at the beginning of their competition season. Cyclists' anthropometric characteristics and bike measurements were recorded. Three sets of pedalling (200, 250 and 300 W) on a cycle ergometer that simulated their habitual cycling posture were performed at a constant cadence (~90 rpm), while kinetic and kinematic variables were registered. The results showed no differences on the main anthropometric variables and bike measurements. Professional cyclists obtained higher positive impulse proportion (1.5-3.3% and P < 0.05), mainly due to a lower resistive torque during the upstroke (15.4-28.7% and P < 0.05). They also showed a higher ankle range of movement (ROM, 1.1-4.0° and P < 0.05). Significant correlations (P < 0.05) were found between the cyclists' body mass and the kinetic variables of pedalling: positive impulse proportion (r = -0.59 to -0.61), minimum (r = -0.59 to -0.63) and maximum torques (r = 0.35-0.47). In conclusion, professional cyclists had better pedalling technique than elite and club cyclists, because they opted for enhancing pulling force at the recovery phase to sustain the same power output. This technique depended on cycling experience and level of expertise.

  7. Data and techniques for studying the urban heat island effect in Johannesburg

    NASA Astrophysics Data System (ADS)

    Hardy, C. H.; Nel, A. L.

    2015-04-01

    The city of Johannesburg contains over 10 million trees and is often referred to as an urban forest. The intra-urban spatial variability of the levels of vegetation across Johannesburg's residential regions has an influence on the urban heat island effect within the city. Residential areas with high levels of vegetation benefit from cooling due to evapo-transpirative processes and thus exhibit weaker heat island effects; while their impoverished counterparts are not so fortunate. The urban heat island effect describes a phenomenon where some urban areas exhibit temperatures that are warmer than that of surrounding areas. The factors influencing the urban heat island effect include the high density of people and buildings and low levels of vegetative cover within populated urban areas. This paper describes the remote sensing data sets and the processing techniques employed to study the heat island effect within Johannesburg. In particular we consider the use of multi-sensorial multi-temporal remote sensing data towards a predictive model, based on the analysis of influencing factors.

  8. Laser Techniques in Conservation of Artworks:. Problems and Breakthroughs

    NASA Astrophysics Data System (ADS)

    Salimbeni, Renzo; Siano, Salvatore

    2010-04-01

    More than thirty years after the first experiment in Venice, it is only in the last decade that laser techniques have been widely recognised as one of the most important innovations introduced in the conservation of artworks for diagnostic, restoration and monitoring purposes. The use of laser ablation for the delicate cleaning phase, in particular, has been debated for many years because of the difficulty of finding appropriate settings of the laser parameters. Many experiments carried out on stone, metals and pigments revealed unacceptable side effects, such as discoloration and yellowing after treatment, or poor cleaning productivity compared with other techniques. Several research projects organised at the European level have contributed to finding breakthroughs in laser techniques that avoid such problems. The choices of specific laser parameters better suited to the cleaning of stone, metals and pigments are described, and a series of validation case studies is reported.

  9. Estimation of single plane unbalance parameters of a rotor-bearing system using Kalman filtering based force estimation technique

    NASA Astrophysics Data System (ADS)

    Shrivastava, Akash; Mohanty, A. R.

    2018-03-01

    This paper proposes a model-based method to estimate single-plane unbalance parameters (amplitude and phase angle) in a rotor using a Kalman filter and a recursive least-squares based input force estimation technique. Kalman filter based input force estimation requires a state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets, including displacement, velocity, and rotational response. The effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented, and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.
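
    A generic linear Kalman filter predict/update cycle is sketched below as a hedged stand-in for the state-estimation component of the scheme above; the SEREP model reduction and the recursive-least-squares force estimator are not reproduced, and all numerical values are arbitrary.

    ```python
    # Generic Kalman filter step for x_{k+1} = A x_k + w (cov Q), z_k = H x_k + v (cov R).
    import numpy as np

    def kalman_step(x, P, z, A, H, Q, R):
        """One predict/update cycle of a linear Kalman filter."""
        x_pred = A @ x                         # predict state
        P_pred = A @ P @ A.T + Q               # predict covariance
        S = H @ P_pred @ H.T + R               # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)  # update with measurement z
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # Tiny demo on a scalar random-walk state observed directly (values arbitrary).
    A = H = np.eye(1)
    Q, R = 0.01 * np.eye(1), 0.1 * np.eye(1)
    x, P = np.zeros(1), np.eye(1)
    for z in (0.9, 1.1, 1.0):
        x, P = kalman_step(x, P, np.array([z]), A, H, Q, R)
    print("filtered estimate:", x.round(3))
    ```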

  10. A compressed sensing based approach on Discrete Algebraic Reconstruction Technique.

    PubMed

    Demircan-Tureyen, Ezgi; Kamasak, Mustafa E

    2015-01-01

    Discrete tomography (DT) techniques are capable of computing better results than continuous tomography techniques, even when using fewer projections. The Discrete Algebraic Reconstruction Technique (DART) is an iterative reconstruction method proposed to achieve this goal by exploiting prior knowledge of the gray levels and assuming that the scanned object is composed of a few different densities. In this paper, the DART method is combined with an initial total variation minimization (TvMin) phase to ensure a better initial guess, and extended with a segmentation procedure in which the threshold values are estimated from a finite set of candidates to minimize both the projection error and the total variation (TV) simultaneously. The accuracy and robustness of the algorithm are compared with the original DART in simulation experiments performed under (1) a limited number of projections, (2) limited-view, and (3) noisy-projection conditions.
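
    The segmentation step of a DART-like method can be sketched as snapping the continuous reconstruction to a small set of assumed gray levels and scoring the result by its total variation; the gray levels, thresholds and image below are placeholders, and the projection-error term of the paper's objective is not reproduced.

    ```python
    # Sketch of the threshold/segmentation step used in DART-like methods.
    import numpy as np

    gray_levels = np.array([0.0, 0.5, 1.0])       # assumed discrete densities
    thresholds = np.array([0.25, 0.75])           # candidate thresholds between levels

    def segment(image, thresholds, gray_levels):
        """Assign each pixel the gray level of the bin its value falls into."""
        bins = np.digitize(image, thresholds)
        return gray_levels[bins]

    def total_variation(image):
        """Anisotropic TV: sum of absolute horizontal and vertical differences."""
        return (np.abs(np.diff(image, axis=0)).sum()
                + np.abs(np.diff(image, axis=1)).sum())

    recon = np.random.rand(64, 64)                # placeholder continuous reconstruction
    seg = segment(recon, thresholds, gray_levels)
    print("TV of segmented image:", round(float(total_variation(seg)), 2))
    ```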

  11. Using mediation techniques to manage conflict and create healthy work environments.

    PubMed

    Gerardi, Debra

    2004-01-01

    Healthcare organizations must find ways for managing conflict and developing effective working relationships to create healthy work environments. The effects of unresolved conflict on clinical outcomes, staff retention, and the financial health of the organization lead to many unnecessary costs that divert resources from clinical care. The complexity of delivering critical care services makes conflict resolution difficult. Developing collaborative working relationships helps to manage conflict in complex environments. Working relationships are based on the ability to deal with differences. Dealing with differences requires skill development and techniques for balancing interests and communicating effectively. Techniques used by mediators are effective for resolving disputes and developing working relationships. With practice, these techniques are easily transferable to the clinical setting. Listening for understanding, reframing, elevating the definition of the problem, and forming clear agreements can foster working relationships, decrease the level of conflict, and create healthy work environments that benefit patients and professionals.

  12. Using level set based inversion of arrival times to recover shear wave speed in transient elastography and supersonic imaging

    NASA Astrophysics Data System (ADS)

    McLaughlin, Joyce; Renzi, Daniel

    2006-04-01

    Transient elastography and supersonic imaging are promising new techniques for characterizing the elasticity of soft tissues. Using this method, an 'ultrafast imaging' system (up to 10 000 frames s-1) follows in real time the propagation of a low-frequency shear wave. The displacement of the propagating shear wave is measured as a function of time and space. Here we develop a fast level set based algorithm for finding the shear wave speed from the interior positions of the propagating front. We compare the performance of level curve methods developed here and our previously developed (McLaughlin J and Renzi D 2006 Shear wave speed recovery in transient elastography and supersonic imaging using propagating fronts Inverse Problems 22 681-706) distance methods. We give reconstruction examples from synthetic data and from data obtained from a phantom experiment accomplished by Mathias Fink's group (the Laboratoire Ondes et Acoustique, ESPCI, Université Paris VII).
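
    A crude illustration of recovering speed from interior arrival times follows from the Eikonal relation |∇T| = 1/c, so c ≈ 1/|∇T|; the level-curve machinery of the paper is not reproduced, and the arrival-time field below is a synthetic plane front crossing a faster region, with assumed grid spacing and speeds.

    ```python
    # Hedged sketch: estimate wave speed from an arrival-time field T(x, y) via c ~ 1/|grad T|.
    import numpy as np

    nx = ny = 200
    dx = dy = 1e-3                                   # 1 mm grid spacing (assumed)
    x, y = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dy)

    # Plane front travelling in +x: speed 2 m/s for x <= 0.1 m, 3 m/s beyond.
    T = np.where(x <= 0.1, x / 2.0, 0.1 / 2.0 + (x - 0.1) / 3.0)

    Ty, Tx = np.gradient(T, dy, dx)                  # gradients along y (axis 0) and x (axis 1)
    c_est = 1.0 / np.maximum(np.hypot(Tx, Ty), 1e-9) # avoid division by zero
    print("median estimated speed left/right of the interface:",
          round(float(np.median(c_est[x < 0.09])), 2),
          round(float(np.median(c_est[x > 0.11])), 2))
    ```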

  13. International standards for programmes of training in intensive care medicine in Europe.

    PubMed

    2011-03-01

    To develop internationally harmonised standards for programmes of training in intensive care medicine (ICM). Standards were developed by using consensus techniques. A nine-member nominal group of European intensive care experts developed a preliminary set of standards. These were revised and refined through a modified Delphi process involving 28 European national coordinators representing national training organisations using a combination of moderated discussion meetings, email, and a Web-based tool for determining the level of agreement with each proposed standard, and whether the standard could be achieved in the respondent's country. The nominal group developed an initial set of 52 possible standards which underwent four iterations to achieve maximal consensus. All national coordinators approved a final set of 29 standards in four domains: training centres, training programmes, selection of trainees, and trainers' profiles. Only three standards were considered immediately achievable by all countries, demonstrating a willingness to aspire to quality rather than merely setting a minimum level. Nine proposed standards which did not achieve full consensus were identified as potential candidates for future review. This preliminary set of clearly defined and agreed standards provides a transparent framework for assuring the quality of training programmes, and a foundation for international harmonisation and quality improvement of training in ICM.

  14. PM10 and gaseous pollutants trends from air quality monitoring networks in Bari province: principal component analysis and absolute principal component scores on a two years and half data set

    PubMed Central

    2014-01-01

    Background The chemical composition of aerosols and particle size distributions are the most significant factors affecting air quality. In particular, exposure to finer particles can cause short- and long-term effects on human health. In the present paper, trends of PM10 (particulate matter with aerodynamic diameter lower than 10 μm), CO, NOx (NO and NO2), benzene and toluene monitored at six monitoring stations of Bari province are shown. The data set used was composed of bi-hourly means for all parameters (12 bi-hourly means per day for each parameter) and refers to the period from January 2005 to May 2007. The main aim of the paper is to provide a clear illustration of how large data sets from monitoring stations can give information about the number and nature of the pollutant sources, and mainly to assess the contribution of the traffic source to the PM10 concentration level by using multivariate statistical techniques such as Principal Component Analysis (PCA) and Absolute Principal Component Scores (APCS). Results Comparing the night and day mean concentrations (per day) for each parameter showed that some parameters, such as CO, benzene and toluene, behave differently between night and day than PM10 does. This suggests that CO, benzene and toluene concentrations are mainly connected with transport systems, whereas PM10 is mostly influenced by different factors. The statistical techniques identified three recurrent sources, associated with vehicular traffic and particulate transport, covering over 90% of the variance. The simultaneous analysis of gases and PM10 highlighted the differences between the sources of these pollutants. Conclusions The analysis of pollutant trends from large data sets and the application of multivariate statistical techniques such as PCA and APCS can give useful information about air quality and pollutant sources. This knowledge can inform environmental policies aimed at reaching the WHO recommended levels. PMID:24555534
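
    A compact sketch of the PCA/APCS workflow is given below: component scores are computed on standardized data, shifted by the score of an artificial zero-concentration sample to obtain absolute scores, and the measured concentrations are then regressed on these to apportion sources. The data, the number of retained components and all values are synthetic placeholders, not the Bari monitoring data.

    ```python
    # Hedged PCA/APCS sketch on synthetic pollutant data.
    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.lognormal(size=(500, 6))                 # 500 bi-hourly means x 6 pollutants (placeholder)

    mean, std = X.mean(axis=0), X.std(axis=0)
    Z = (X - mean) / std                             # standardized data

    # PCA via eigendecomposition of the correlation matrix; keep 3 components (assumed).
    eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
    order = np.argsort(eigval)[::-1][:3]
    loadings = eigvec[:, order]

    scores = Z @ loadings                            # component scores
    z0 = (0.0 - mean) / std                          # artificial zero-concentration sample
    apcs = scores - z0 @ loadings                    # absolute principal component scores

    # Source contributions: regress each pollutant on the APCS (plus an intercept).
    A = np.column_stack([np.ones(len(apcs)), apcs])
    coef, *_ = np.linalg.lstsq(A, X, rcond=None)
    print("regression coefficients, (intercept + 3 sources) x 6 pollutants:\n", coef.round(2))
    ```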

  15. Subelectron readout noise focal plane arrays for space imaging

    NASA Astrophysics Data System (ADS)

    Atlas, Gene; Wadsworth, Mark

    2004-01-01

    Readout noise levels of under 1 electron have long been a goal for the FPA community. In the quest to enhance the FPA sensitivity, various approaches have been attempted ranging from the exotic Photo-multiplier tubes, Image Intensifier tubes, Avalanche photo diodes, and now the on-chip avalanche charge amplification technologies from the CCD manufacturers. While these techniques reduce the readout noise, each offers a set of compromises that negatively affect the overall performance of the sensor in parameters such as power dissipation, dynamic range, uniformity or system complexity. In this work, we overview the benefits and tradeoffs of each approach, and introduce a new technique based on ImagerLabs' exclusive HIT technology which promises sub-electron read noise and other benefits without the tradeoffs of the other noise reduction techniques.

  16. STRCMACS: An extensive set of Macros for structured programming in OS/360 assembly language

    NASA Technical Reports Server (NTRS)

    Barth, C. W.

    1974-01-01

    Two techniques are discussed that have been most often referred to as structured programming. One is that of programming with high level control structures (such as the if and while) replacing the branch instruction (goto-less programming); the other is the process of developing a program by progressively refining descriptions of components in terms of more primitive components (called stepwise refinement or top-down programming). In addition to discussing what these techniques are, it is shown why their use is advised and how both can be implemented in OS assembly language by the use of a special macro instruction package.

  17. Simultaneous multi-laser, multi-species trace-level sensing of gas mixtures by rapidly swept continuous-wave cavity-ringdown spectroscopy.

    PubMed

    He, Yabai; Kan, Ruifeng; Englich, Florian V; Liu, Wenqing; Orr, Brian J

    2010-09-13

    The greenhouse-gas molecules CO2, CH4, and H2O are detected in air within a few ms by a novel cavity-ringdown laser-absorption spectroscopy technique using a rapidly swept optical cavity and multi-wavelength coherent radiation from a set of pre-tuned near-infrared diode lasers. The performance of various types of tunable diode laser, on which this technique depends, is evaluated. Our instrument is both sensitive and compact, as needed for reliable environmental monitoring with high absolute accuracy to detect trace concentrations of greenhouse gases in outdoor air.

  18. A multidomain approach to understanding risk for underage drinking: converging evidence from 5 data sets.

    PubMed

    Jones, Damon E; Feinberg, Mark E; Cleveland, Michael J; Cooper, Brittany Rhoades

    2012-11-01

    We examined the independent and combined influence of major risk and protective factors on youths' alcohol use. Five large data sets provided similar measures of alcohol use and risk or protective factors. We carried out analyses within each data set, separately for boys and girls in 8th and 10th grades. We included interaction and curvilinear predictive terms in final models if results were robust across data sets. We combined results using meta-analytic techniques. Individual, family, and peer risk factors and a community protective factor moderately predicted youths' alcohol use. Family and school protective factors did not predict alcohol use when combined with other factors. Youths' antisocial attitudes were more strongly associated with alcohol use for those also reporting higher levels of peer or community risk. For certain risk factors, the association with alcohol use varied across different risk levels. Efforts toward reducing youths' alcohol use should be based on robust estimates of the relative influence of risk and protective factors across adolescent environment domains. Public health advocates should focus on context (e.g., community factors) as a strategy for curbing underage alcohol use.

  19. Signature extension studies

    NASA Technical Reports Server (NTRS)

    Vincent, R. K.; Thomas, G. S.; Nalepka, R. F.

    1974-01-01

    The importance of specific spectral regions to signature extension is explored. In the recent past, the signature extension task was focused on the development of new techniques. Tested techniques are now used to investigate this spectral aspect of the large area survey. Sets of channels were sought which, for a given technique, were the least affected by several sources of variation over four data sets and yet provided good object class separation on each individual data set. Using sets of channels determined as part of this study, signature extension was accomplished between data sets collected over a six-day period and over a range of about 400 kilometers.

  20. Development of a digital video-microscopy technique to study lactose crystallisation kinetics in situ.

    PubMed

    Arellano, María Paz; Aguilera, José Miguel; Bouchon, Pedro

    2004-11-15

    Polarised light microscopy was employed non-invasively to monitor lactose crystallisation from non-seeded supersaturated solutions in real time. Images were continuously recorded, processed and characterised by image analysis, and the results were compared with those obtained by refractometry. Three crystallisation temperatures (10, 20 and 30 degrees C) and three different levels of initial relative supersaturation (C/Cs = 1.95; 2.34; 3.15) were investigated. Induction times using the imaging technique proved to be substantially lower than those determined using refractive index. Lactose crystals were isolated digitally to determine geometrical parameters of interest, such as perimeter, diameter, area, roundness and Feret mean, and to derive crystal growth rates. Mean growth rates obtained for single crystals were fitted to a combined mass transfer model (R² = 0.9766). The model allowed the effects of temperature and supersaturation on crystallisation rate to be clearly identified. It also suggested that, in this set of experiments, surface integration seemed to be the rate controlling step. It is believed that a similar experimental set-up could be implemented in a real food system to characterise a particular process where crystallisation control is of interest and where traditional techniques are difficult to implement.

  1. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2007-01-01

    A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
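
    The SVD-based reduction at the heart of this technique can be sketched as truncating the singular value decomposition of the sensitivity matrix that maps health parameters to outputs; the matrix below is random and the dimensions are assumptions, so this is only an illustration of the rank-reduction step, not the full tuner design procedure.

    ```python
    # Hedged sketch: low-dimensional tuning vector via truncated SVD of an
    # (assumed) sensitivity matrix from health parameters to engine outputs.
    import numpy as np

    n_outputs, n_health = 8, 13                      # example dimensions (assumed)
    G = np.random.rand(n_outputs, n_health)          # placeholder sensitivity matrix

    U, S, Vt = np.linalg.svd(G, full_matrices=False)
    k = 6                                            # tuner dimension <= number of sensors (assumed)
    V_star = Vt[:k]                                  # maps health parameters to tuning parameters

    # Best rank-k least-squares representation of the health-parameter effect.
    G_approx = G @ V_star.T @ V_star
    print("relative approximation error:",
          round(float(np.linalg.norm(G - G_approx) / np.linalg.norm(G)), 3))
    ```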

  2. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2007-01-01

    A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least-squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.

  3. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2005-01-01

    A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends upon knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined which accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.

  4. [The requirements of standard and conditions of interchangeability of medical articles].

    PubMed

    Men'shikov, V V; Lukicheva, T I

    2013-11-01

    The article deals with the possibility of applying specific approaches to evaluating the interchangeability of medical articles for laboratory analysis. In developing standardized analytical technologies for laboratory medicine and formulating standard requirements addressed to manufacturers of medical articles, clinically validated requirements are to be followed. These requirements include the sensitivity and specificity of techniques, the accuracy and precision of research results, and the stability of reagent quality under particular conditions of transportation and storage. The validity of the requirements formulated in standards and addressed to manufacturers of medical articles can be proved using a reference system, which includes master forms and standard samples, reference techniques and reference laboratories. This approach is supported by data from the evaluation of testing systems for measuring levels of thyrotropic hormone, thyroid hormones and glycated hemoglobin HbA1c. Versions of testing systems can be considered interchangeable only if their results correspond to, and are comparable with, the results of the reference technique. In the absence of a functioning reference system, the resources of the Joint Committee for Traceability in Laboratory Medicine make it possible for manufacturers of reagent sets to apply certified reference materials when developing the manufacture of sets for a large list of analytes.

  5. Optimizing Sensor and Actuator Arrays for ASAC Noise Control

    NASA Technical Reports Server (NTRS)

    Palumbo, Dan; Cabell, Ran

    2000-01-01

    This paper summarizes the development of an approach to optimizing the locations for arrays of sensors and actuators in active noise control systems. A type of directed combinatorial search, called Tabu Search, is used to select an optimal configuration from a much larger set of candidate locations. The benefit of using an optimized set is demonstrated. The importance of limiting actuator forces to realistic levels when evaluating the cost function is discussed. Results of flight testing an optimized system are presented. Although the technique has been applied primarily to Active Structural Acoustic Control systems, it can be adapted for use in other active noise control implementations.
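
    A minimal tabu-search loop over candidate transducer locations is sketched below; the cost function is a random placeholder standing in for the actual noise-control performance metric, and the parameters (candidate count, array size, tabu tenure) are arbitrary assumptions.

    ```python
    # Minimal tabu-search sketch: choose k locations out of N candidates.
    import random

    random.seed(0)
    N, k, iterations, tabu_len = 40, 6, 200, 10
    _cost_cache = {}

    def cost(config):
        """Placeholder cost: random but repeatable for a given configuration."""
        key = frozenset(config)
        if key not in _cost_cache:
            _cost_cache[key] = random.random()
        return _cost_cache[key]

    current = set(random.sample(range(N), k))
    best, best_cost = set(current), cost(current)
    tabu = []

    for _ in range(iterations):
        # Swap moves: replace one selected location with an unselected one.
        neighbours = []
        for out in current:
            for inn in set(range(N)) - current:
                cand = (current - {out}) | {inn}
                c = cost(cand)
                if (out, inn) not in tabu or c < best_cost:   # aspiration criterion
                    neighbours.append((c, (out, inn), cand))
        c, move, current = min(neighbours, key=lambda t: t[0])
        tabu = (tabu + [move])[-tabu_len:]                    # bounded tabu list
        if c < best_cost:
            best, best_cost = set(current), c

    print("best configuration:", sorted(best), "cost:", round(best_cost, 3))
    ```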

  6. Differences in sampling techniques on total post-mortem tryptase.

    PubMed

    Tse, R; Garland, J; Kesha, K; Elstub, H; Cala, A D; Ahn, Y; Stables, S; Palmiere, C

    2018-05-01

    The measurement of mast cell tryptase is commonly used to support the diagnosis of anaphylaxis. In the post-mortem setting, the literature recommends sampling from peripheral blood sources (femoral blood) but does not specify the exact sampling technique. Sampling techniques vary between pathologists, and it is unclear whether different sampling techniques have any impact on post-mortem tryptase levels. The aim of this study is to compare the difference in femoral total post-mortem tryptase levels between two sampling techniques. A 6-month retrospective study comparing femoral total post-mortem tryptase levels between (1) aspirating femoral vessels with a needle and syringe prior to evisceration and (2) femoral vein cut down during evisceration. Twenty cases were identified, with three cases excluded from analysis. There was a statistically significant difference (paired t test, p < 0.05) between mean post-mortem tryptase by aspiration (10.87 ug/L) and by cut down (14.15 ug/L). The mean difference between the two methods was 3.28 ug/L (median, 1.4 ug/L; min, - 6.1 ug/L; max, 16.5 ug/L; 95% CI, 0.001-6.564 ug/L). Femoral total post-mortem tryptase is significantly different, albeit by a small amount, between the two sampling methods. The clinical significance of this finding and what factors may contribute to it are unclear. When requesting post-mortem tryptase, the pathologist should consider documenting the exact blood collection site and method used for collection. In addition, blood samples acquired by different techniques should not be mixed together and should be analyzed separately if possible.

  7. Using Optimisation Techniques to Granulise Rough Set Partitions

    NASA Astrophysics Data System (ADS)

    Crossingham, Bodie; Marwala, Tshilidzi

    2007-11-01

    This paper presents an approach to optimise rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process, namely, genetic algorithm (GA), hill climbing (HC) and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of their computational time, accuracy and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The optimised methods' results are compared to a well-known non-optimised discretisation method, equal-width-bin partitioning (EWB). The accuracies achieved after optimising the partitions using GA, HC and SA are 66.89%, 65.84% and 65.48% respectively, compared with the EWB accuracy of 59.86%. In addition to rough sets providing the plausibilities of the estimated HIV status, they also provide the linguistic rules describing how the demographic parameters drive the risk of HIV.
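
    As a hedged illustration of granulising a partition with one of these optimisers, the sketch below hill-climbs the cut points that discretise a single synthetic feature so that a majority-class rule over the resulting bins is as accurate as possible, starting from equal-width cuts; it is not the paper's rough-set formulation or its data.

    ```python
    # Hedged sketch: stochastic hill climbing over the cut points of a partition,
    # scored by the accuracy of a majority-class rule per bin (synthetic data).
    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.uniform(0.0, 1.0, 1000)
    y = ((x > 0.37).astype(int) + (rng.random(1000) < 0.1)) % 2   # noisy threshold label

    def accuracy(cuts):
        bins = np.digitize(x, np.sort(cuts))
        correct = 0
        for b in np.unique(bins):
            labels = y[bins == b]
            correct += max(np.sum(labels == 0), np.sum(labels == 1))
        return correct / len(y)

    cuts = np.linspace(0.0, 1.0, 6)[1:-1]          # start from 4 equal-width cut points
    best = accuracy(cuts)
    for _ in range(500):
        cand = np.clip(cuts + rng.normal(scale=0.02, size=cuts.shape), 0.0, 1.0)
        if accuracy(cand) > best:
            cuts, best = cand, accuracy(cand)
    print("optimised cut points:", np.sort(cuts).round(3), " accuracy:", round(best, 3))
    ```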

  8. A novel fully automatic multilevel thresholding technique based on optimized intuitionistic fuzzy sets and tsallis entropy for MR brain tumor image segmentation.

    PubMed

    Kaur, Taranjit; Saini, Barjinder Singh; Gupta, Savita

    2018-03-01

    In the present paper, a hybrid multilevel thresholding technique that combines intuitionistic fuzzy sets and Tsallis entropy has been proposed for the automatic delineation of the tumor from magnetic resonance images having vague boundaries and poor contrast. This novel technique takes into account both the image histogram and the uncertainty information for the computation of multiple thresholds. The benefit of the methodology is that it provides fast and improved segmentation for complex tumorous images with imprecise gray levels. To further boost the computational speed, mutation-based particle swarm optimization is used to select the optimal threshold combination. The accuracy of the proposed segmentation approach has been validated on simulated and real low-grade glioma tumor volumes taken from the MICCAI brain tumor segmentation (BRATS) challenge 2012 dataset, as well as on clinical tumor images, so as to corroborate its generality and novelty. The designed technique achieves an average Dice overlap equal to 0.82010, 0.78610 and 0.94170 for the three datasets. Further, a comparative analysis has also been made with eight existing multilevel thresholding implementations so as to show the superiority of the designed technique. In comparison, the results indicate a mean improvement in Dice of 4.00% (p < 0.005), 9.60% (p < 0.005) and 3.58% (p < 0.005), respectively, compared with the fuzzy Tsallis approach.
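
    A simplified two-threshold version of Tsallis-entropy thresholding is sketched below using an exhaustive search over a histogram; the intuitionistic-fuzzy terms and the mutation-based particle swarm search of the paper are not reproduced, and the image, entropic index q and threshold grid are placeholders.

    ```python
    # Simplified multilevel Tsallis-entropy thresholding: brute-force two thresholds
    # that maximise the sum of region-wise Tsallis entropies of the histogram.
    import numpy as np

    def tsallis_entropy(p, q=0.8):
        """Tsallis entropy of a (sub)histogram normalised to unit mass."""
        p = p[p > 0]
        if p.size == 0:
            return 0.0
        p = p / p.sum()
        return (1.0 - np.sum(p ** q)) / (q - 1.0)

    image = np.random.randint(0, 256, size=(128, 128))          # placeholder MR slice
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    hist /= hist.sum()

    best = (-np.inf, None)
    for t1 in range(10, 200):
        for t2 in range(t1 + 10, 246):
            score = (tsallis_entropy(hist[:t1])
                     + tsallis_entropy(hist[t1:t2])
                     + tsallis_entropy(hist[t2:]))
            if score > best[0]:
                best = (score, (t1, t2))
    print("selected thresholds:", best[1])
    ```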

  9. Small Animal Massage Therapy: A Brief Review and Relevant Observations.

    PubMed

    Formenton, Maira Rezende; Pereira, Marco Aurélio Amador; Fantoni, Denise Tabacchi

    2017-12-01

    Massage therapy is becoming increasingly popular in human and animal physiotherapy and rehabilitation. Wider application of the technique led to research efforts aimed at providing scientific support to anecdotal beneficial effects, particularly pain relief. Recent studies have shown that massage therapy alters dopamine and serotonin levels, decreases noradrenaline levels, and modulates the immune system. Psychological effects such as reduction of stress and anxiety, with improvement of depressive patients, have been reported in humans. This article set out to review the major aspects of massage therapy based on recent publications on the topic, and to extrapolate concepts and practical aspects described in human physiotherapy to the veterinary patient, particularly the applicability of different techniques in Small Animal Medicine. Indications of massage therapy in small animals include pain relief, orthopedic rehabilitation, Canine Sports Medicine, intensive care, and management of nonspecific edema. Techniques described in this article were originally intended for use in humans and scientific data supporting anecdotal, beneficial effects in domestic animals are still lacking; this fruitful area for research is therefore open to veterinary professionals. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. [Consensus document on ultrasound training in Intensive Care Medicine. Care process, use of the technique and acquisition of professional skills].

    PubMed

    Ayuela Azcárate, J M; Clau-Terré, F; Vicho Pereira, R; Guerrero de Mier, M; Carrillo López, A; Ochagavia, A; López Pérez, J M; Trenado Alvarez, J; Pérez, L; Llompart-Pou, J A; González de Molina, F J; Fojón, S; Rodríguez Salgado, A; Martínez Díaz, M C; Royo Villa, C; Romero Bermejo, F J; Ruíz Bailén, M; Arroyo Díez, M; Argueso García, M; Fernández Fernández, J L

    2014-01-01

    Ultrasound has become an essential tool in the care of critically ill patients. Its knowledge, use and instruction require a statement by the scientific societies involved in its development and implementation. Our aims are to determine the use of the technique in intensive care medicine, the clinical situations in which its application is recommended, the levels of knowledge, the associated responsibility and the learning process, and also to implement the ultrasound technique as a common tool in all intensive care units, as in the rest of the European countries. After a literature review and assessment of the scientific evidence, the SEMICYUC Working Group on Cardiac Intensive Care and CPR establishes a consensus document which sets out the requirements for accreditation in ultrasound applied to the critically ill patient and how to acquire the necessary skills. Training and learning require a structured process within the specialty. The SEMICYUC must agree to disseminate this document, build relationships with other scientific societies and provide legal cover through accreditation of the training units, training courses and different levels of training. Copyright © 2013 Elsevier España, S.L. y SEMICYUC. All rights reserved.

  11. Multi-stage ranking of emergency technology alternatives for water source pollution accidents using a fuzzy group decision making tool.

    PubMed

    Qu, Jianhua; Meng, Xianlin; You, Hong

    2016-06-05

    Due to the increasing number of unexpected water source pollution events, selection of the most appropriate disposal technology for a specific pollution scenario is of crucial importance to the security of urban water supplies. However, the formulation of the optimum option is considerably difficult owing to the substantial uncertainty of such accidents. In this research, a multi-stage technical screening and evaluation tool is proposed to determine the optimal technique scheme, considering the areas of pollutant elimination both in drinking water sources and water treatment plants. In stage 1, a CBR-based group decision tool was developed to screen available technologies for different scenarios. Then, the threat degree caused by the pollution was estimated in stage 2 using a threat evaluation system and was partitioned into four levels. For each threat level, a corresponding set of technique evaluation criteria weights was obtained using Group-G1. To identify the optimization alternatives corresponding to the different threat levels, an extension of TOPSIS, a multi-criteria interval-valued trapezoidal fuzzy decision making technique containing the four arrays of criteria weights, to a group decision environment was investigated in stage 3. The effectiveness of the developed tool was elaborated by two actual thallium-contaminated scenarios associated with different threat levels. Copyright © 2016 Elsevier B.V. All rights reserved.
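
    A crisp TOPSIS ranking is sketched below as a baseline for the extension described in stage 3; the interval-valued trapezoidal fuzzy arithmetic and the group aggregation of the paper are not reproduced, and the decision matrix and weights are invented for illustration (all criteria treated as benefit criteria).

    ```python
    # Crisp TOPSIS sketch: rank alternatives by closeness to the ideal solution.
    import numpy as np

    D = np.array([[7.0, 0.6, 8.0],                  # rows = alternatives, columns = criteria
                  [9.0, 0.4, 6.0],
                  [6.0, 0.9, 7.0]])
    w = np.array([0.5, 0.3, 0.2])                   # criteria weights (e.g. from a weighting method)

    R = D / np.linalg.norm(D, axis=0)               # vector-normalised decision matrix
    V = R * w                                       # weighted normalised matrix

    ideal, anti_ideal = V.max(axis=0), V.min(axis=0)
    d_plus = np.linalg.norm(V - ideal, axis=1)      # distance to the ideal solution
    d_minus = np.linalg.norm(V - anti_ideal, axis=1)

    closeness = d_minus / (d_plus + d_minus)        # higher = closer to the ideal
    ranking = np.argsort(-closeness)
    print("closeness coefficients:", closeness.round(3), "ranking:", ranking)
    ```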

  12. Job Analysis Techniques for Restructuring Health Manpower Education and Training in the Navy Medical Department. Attachment 4. Clinic QPCB Task Sort for Clinical Physician Assistants--Dermatology, ENT, Ophthalmology, Orthopedics, and Urology.

    ERIC Educational Resources Information Center

    Technomics, Inc., McLean, VA.

    This publication is Attachment 4 of a set of 16 computer listed QPCB task sorts, by career level, for the entire Hospital Corps and Dental Technician fields. Statistical data are presented in tabular form for a detailed listing of job duties for clinical physician assistants. (BT)

  13. The Shock and Vibration Digest. Volume 17. Number 5

    DTIC Science & Technology

    1985-05-01

    Prediction of these frequencies from acoustic source models remains open. Measurements have shown that the sound power radiated by a saw is proportional to...which appears to be metallurgically oriented, the chapter discusses experimental techniques, models of acoustic emission, and effects of mate...coefficient facilitates calculation of A-weighted sound pressure levels in rooms. Thus, for modeling the acoustic field only one set of calcula

  14. CDC to CRAY FORTRAN conversion manual

    NASA Technical Reports Server (NTRS)

    Mcgary, C.; Diebert, D.

    1983-01-01

    Documentation describing software differences between two general purpose computers for scientific applications is presented. Descriptions of the use of the FORTRAN and FORTRAN 77 high level programming language on a CDC 7600 under SCOPE and a CRAY XMP under COS are offered. Itemized differences of the FORTRAN language sets of the two machines are also included. The material is accompanied by numerous examples of preferred programming techniques for the two machines.

  15. Job Analysis Techniques for Restructuring Health Manpower Education and Training in the Navy Medical Department. Attachment 5. Biotronics QPCB Task Sort for Cardio-Pulmonary, EEG, EKG, Inhalation Therapy.

    ERIC Educational Resources Information Center

    Technomics, Inc., McLean, VA.

    This publication is Attachment 5 of a set of 16 computer listed QPCB task sorts, by career level, for the entire Hospital Corps and Dental Technician fields. Statistical data are presented in tabular form for a detailed listing of job duties in cardio-pulmonary, EEG, EKG, and inhalation therapy. (BT)

  16. Advanced Protected Services: A Concept Paper on Survivable Service-Oriented Systems

    DTIC Science & Technology

    2010-05-07

    resiliency and protection of such systems to a level where they can withstand sustained attacks from well-motivated adversaries. In this paper we... that are designed for the protection of systems that are based on service-oriented architectures. ...resilient against malicious attacks, and to demonstrate the utility of the developed advanced protection techniques in settings that exhibit various...

  17. Target-in-the-loop high-power adaptive phase-locked fiber laser array using single-frequency dithering technique

    NASA Astrophysics Data System (ADS)

    Tao, R.; Ma, Y.; Si, L.; Dong, X.; Zhou, P.; Liu, Z.

    2011-11-01

    We present a theoretical and experimental study of a target-in-the-loop (TIL) high-power adaptive phase-locked fiber laser array. The system configuration of the TIL adaptive phase-locked fiber laser array is introduced, and the fundamental theory for TIL based on the single-dithering technique is deduced for the first time. Two 10-W-level high-power fiber amplifiers are set up and adaptive phase locking of the two fiber amplifiers is accomplished successfully by implementing a single-dithering algorithm on a signal processor. The experimental results demonstrate that the optical phase noise for each beam channel can be effectively compensated by the TIL adaptive optics system under high-power applications and the fringe contrast on a remotely located extended target is improved from 12% to 74% for the two 10-W-level fiber amplifiers.

  18. Ultralow-power electronics for biomedical applications.

    PubMed

    Chandrakasan, Anantha P; Verma, Naveen; Daly, Denis C

    2008-01-01

    The electronics of a general biomedical device consist of energy delivery, analog-to-digital conversion, signal processing, and communication subsystems. Each of these blocks must be designed for minimum energy consumption. Specific design techniques, such as aggressive voltage scaling, dynamic power-performance management, and energy-efficient signaling, must be employed to adhere to the stringent energy constraint. The constraint itself is set by the energy source, so energy harvesting holds tremendous promise toward enabling sophisticated systems without straining user lifestyle. Further, once harvested, efficient delivery of the low-energy levels, as well as robust operation in the aggressive low-power modes, requires careful understanding and treatment of the specific design limitations that dominate this realm. We outline the performance and power constraints of biomedical devices, and present circuit techniques to achieve complete systems operating down to power levels of microwatts. In all cases, approaches that leverage advanced technology trends are emphasized.

  19. Traverse Planning Experiments for Future Planetary Surface Exploration

    NASA Technical Reports Server (NTRS)

    Hoffman, Stephen J.; Voels, Stephen A.; Mueller, Robert P.; Lee, Pascal C.

    2012-01-01

    The purpose of the investigation is to evaluate methodology and data requirements for a remotely assisted robotic traverse of an extraterrestrial planetary surface to support a human exploration program, to assess opportunities for in-transit science operations, and to validate landing site survey and selection techniques during a planetary surface exploration mission analog demonstration at Haughton Crater on Devon Island, Nunavut, Canada. Additional objectives are to: (1) identify the quality of remote observation data sets (i.e., surface imagery from orbit) required for effective pre-traverse route planning, and determine whether surface-level data (i.e., onboard robotic imagery or other sensor data) are required for a successful traverse and whether additional surface-level data can improve traverse efficiency or probability of success (TRPF Experiment); (2) evaluate the feasibility of and techniques for conducting opportunistic science investigations during this type of traverse (OSP Experiment); and (3) assess the utility of a remotely assisted robotic vehicle for a landing site validation survey (LSV Experiment).

  20. Multi-objective design optimization of antenna structures using sequential domain patching with automated patch size determination

    NASA Astrophysics Data System (ADS)

    Koziel, Slawomir; Bekasiewicz, Adrian

    2018-02-01

    In this article, a simple yet efficient and reliable technique for fully automated multi-objective design optimization of antenna structures using sequential domain patching (SDP) is discussed. The optimization procedure according to SDP is a two-step process: (i) obtaining the initial set of Pareto-optimal designs representing the best possible trade-offs between considered conflicting objectives, and (ii) Pareto set refinement for yielding the optimal designs at the high-fidelity electromagnetic (EM) simulation model level. For the sake of computational efficiency, the first step is realized at the level of a low-fidelity (coarse-discretization) EM model by sequential construction and relocation of small design space segments (patches) in order to create a path connecting the extreme Pareto front designs obtained beforehand. The second stage involves response correction techniques and local response surface approximation models constructed by reusing EM simulation data acquired in the first step. A major contribution of this work is an automated procedure for determining the patch dimensions. It allows for appropriate selection of the number of patches for each geometry variable so as to ensure reliability of the optimization process while maintaining its low cost. The importance of this procedure is demonstrated by comparing it with uniform patch dimensions.
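
    The SDP machinery itself is not reproduced here, but the Pareto-optimal set that both stages of the procedure work with can be illustrated with a minimal non-dominated filter; the design objectives and numbers below are hypothetical.

        import numpy as np

        def pareto_front(objectives):
            """Return indices of non-dominated points (all objectives minimized)."""
            F = np.asarray(objectives, dtype=float)
            n = len(F)
            keep = np.ones(n, dtype=bool)
            for i in range(n):
                # Point j dominates i if it is no worse in all objectives
                # and strictly better in at least one.
                dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
                if dominated.any():
                    keep[i] = False
            return np.flatnonzero(keep)

        # Hypothetical antenna designs: objective 1 = reflection level (dB),
        # objective 2 = footprint area (mm^2); both are to be minimized.
        designs = np.array([[-18.0, 310.0],
                            [-15.0, 250.0],
                            [-22.0, 400.0],
                            [-12.0, 260.0],
                            [-20.0, 340.0]])
        print("Pareto-optimal design indices:", pareto_front(designs))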

  1. Development of a Trip Energy Estimation Model Using Real-World Global Positioning System Driving Data: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holden, Jacob; Wood, Eric W; Zhu, Lei

    A data-driven technique for estimation of energy requirements for a proposed vehicle trip has been developed. Based on over 700,000 miles of driving data, the technique has been applied to generate a model that estimates trip energy requirements. The model uses a novel binning approach to categorize driving by road type, traffic conditions, and driving profile. The trip-level energy estimations can easily be aggregated to any higher-level transportation system network desired. The model has been tested and validated on the Austin, Texas, data set used to build this model. Ground-truth energy consumption for the data set was obtained from Future Automotive Systems Technology Simulator (FASTSim) vehicle simulation results. The energy estimation model has demonstrated 12.1 percent normalized total absolute error. The energy estimation from the model can be used to inform control strategies in routing tools, such as change in departure time, alternate routing, and alternate destinations, to reduce energy consumption. The model can also be used to determine more accurate energy consumption of regional or national transportation networks if trip origin and destinations are known. Additionally, this method allows the estimation tool to be tuned to a specific driver or vehicle type.
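
    A minimal sketch of the general idea of a binned, data-driven trip-energy estimator is given below; the bin categories and per-mile energy rates are placeholders and are not the binning scheme or values of the NREL model.

        # Minimal sketch of a binned trip-energy estimator. The bin categories
        # and the per-mile energy rates below are illustrative placeholders.
        ENERGY_RATE_KWH_PER_MILE = {
            # (road_type, traffic) -> average energy intensity
            ("highway", "free_flow"): 0.27,
            ("highway", "congested"): 0.33,
            ("arterial", "free_flow"): 0.30,
            ("arterial", "congested"): 0.38,
        }

        def estimate_trip_energy(segments):
            """segments: iterable of (road_type, traffic, miles) tuples."""
            total = 0.0
            for road_type, traffic, miles in segments:
                rate = ENERGY_RATE_KWH_PER_MILE[(road_type, traffic)]
                total += rate * miles
            return total

        trip = [("arterial", "congested", 2.5),
                ("highway", "free_flow", 14.0),
                ("arterial", "free_flow", 1.8)]
        print(f"estimated trip energy: {estimate_trip_energy(trip):.2f} kWh")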

  2. A New Ghost Cell/Level Set Method for Moving Boundary Problems: Application to Tumor Growth

    PubMed Central

    Macklin, Paul

    2011-01-01

    In this paper, we present a ghost cell/level set method for the evolution of interfaces whose normal velocity depends upon the solutions of linear and nonlinear quasi-steady reaction-diffusion equations with curvature-dependent boundary conditions. Our technique includes a ghost cell method that accurately discretizes normal derivative jump boundary conditions without smearing jumps in the tangential derivative; a new iterative method for solving linear and nonlinear quasi-steady reaction-diffusion equations; an adaptive discretization to compute the curvature and normal vectors; and a new discrete approximation to the Heaviside function. We present numerical examples that demonstrate better than 1.5-order convergence for problems where traditional ghost cell methods either fail to converge or attain at best sub-linear accuracy. We apply our techniques to a model of tumor growth in complex, heterogeneous tissues that consists of a nonlinear nutrient equation and a pressure equation with geometry-dependent jump boundary conditions. We simulate the growth of glioblastoma (an aggressive brain tumor) into a large, 1 cm square of brain tissue that includes heterogeneous nutrient delivery and varied biomechanical characteristics (white matter, gray matter, cerebrospinal fluid, and bone), and we observe growth morphologies that are highly dependent upon the variations of the tissue characteristics—an effect observed in real tumor growth. PMID:21331304
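
    The paper's ghost-cell discretization and its new discrete Heaviside approximation are not reproduced here; the sketch below only shows the conventional smoothed Heaviside and a grid-based curvature evaluation commonly paired with level-set representations, on a signed-distance function for a circle.

        import numpy as np

        def smoothed_heaviside(phi, eps):
            """Standard smoothed Heaviside used with level sets (not the
            paper's new discrete approximation)."""
            H = np.where(phi > eps, 1.0, 0.0)
            band = np.abs(phi) <= eps
            H[band] = 0.5 * (1.0 + phi[band] / eps
                             + np.sin(np.pi * phi[band] / eps) / np.pi)
            return H

        def curvature(phi, h):
            """Mean curvature kappa = div(grad phi / |grad phi|) on a uniform grid."""
            px, py = np.gradient(phi, h)
            norm = np.sqrt(px**2 + py**2) + 1e-12
            nx, ny = px / norm, py / norm
            return np.gradient(nx, h)[0] + np.gradient(ny, h)[1]

        # Signed distance to a circle of radius 0.3 centred in the unit square.
        n, h = 101, 1.0 / 100
        x = np.linspace(0.0, 1.0, n)
        X, Y = np.meshgrid(x, x, indexing="ij")
        phi = np.sqrt((X - 0.5)**2 + (Y - 0.5)**2) - 0.3

        H = smoothed_heaviside(phi, eps=1.5 * h)
        print("area outside circle ~", (H.sum() * h * h).round(3))
        print("curvature at the interface ~", curvature(phi, h)[50, 80].round(2),
              "(expect ~1/0.3)")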

  3. Advanced optical smoke meters for jet engine exhaust measurement

    NASA Technical Reports Server (NTRS)

    Pitz, R. W.

    1986-01-01

    Smoke meters with increased sensitivity, improved accuracy, and rapid response are needed to measure the smoke levels emitted by modern jet engines. The standard soiled tape meter in current use is based on filtering, which yields long term averages and is insensitive to low smoke levels. Two new optical smoke meter techniques that promise to overcome these difficulties have been experimentally evaluated: modulated transmission (MODTRAN) and photothermal deflection spectroscopy (PDS). Both techniques are based on light absorption by smoke, which is closely related to smoke density. They are variations on direct transmission measurements which produce a modulated signal that can be easily measured with phase sensitive detection. The MODTRAN and PDS techniques were tested on low levels of smoke and diluted samples of NO2 in nitrogen, simulating light absorption due to smoke. The results are evaluated against a set of ideal smoke meter criteria that include a desired smoke measurement range of 0.1 to 12 mg/cu.m. (smoke numbers of 1 to 50) and a frequency response of 1 per second. The MODTRAN instrument is found to be inaccurate for smoke levels below 3 mg/cu.m. and is able to make a measurement only about once every 20 seconds because of its large sample cell. The PDS instrument meets nearly all the characteristics of an ideal smoke meter: it has excellent sensitivity over a range of smoke levels from 0.1 to 20 mg/cu.m. (smoke numbers of 1 to 60) and good frequency response (1 per second).

  4. Setup for in situ deep level transient spectroscopy of semiconductors during swift heavy ion irradiation.

    PubMed

    Kumar, Sandeep; Kumar, Sugam; Katharria, Y S; Safvan, C P; Kanjilal, D

    2008-05-01

    A computerized system for in situ deep level characterization during irradiation in semiconductors has been set up and tested in the beam line for materials science studies of the 15 MV Pelletron accelerator at the Inter-University Accelerator Centre, New Delhi. This is a new facility for in situ irradiation-induced deep level studies, available in the beam line of an accelerator laboratory. It is based on the well-known deep level transient spectroscopy (DLTS) technique. High versatility for data manipulation is achieved through a multifunction data acquisition card and LABVIEW. In situ DLTS studies of deep levels produced by the impact of 100 MeV Si ions on an Au/n-Si(100) Schottky barrier diode are presented to illustrate the performance of the automated DLTS facility in the beam line.

  5. Co-culture systems and technologies: taking synthetic biology to the next level

    PubMed Central

    Goers, Lisa; Freemont, Paul; Polizzi, Karen M.

    2014-01-01

    Co-culture techniques find myriad applications in biology for studying natural or synthetic interactions between cell populations. Such techniques are of great importance in synthetic biology, as multi-species cell consortia and other natural or synthetic ecology systems are widely seen to hold enormous potential for foundational research as well as novel industrial, medical and environmental applications with many proof-of-principle studies in recent years. What is needed for co-cultures to fulfil their potential? Cell–cell interactions in co-cultures are strongly influenced by the extracellular environment, which is determined by the experimental set-up, which therefore needs to be given careful consideration. An overview of existing experimental and theoretical co-culture set-ups in synthetic biology and adjacent fields is given here, and challenges and opportunities involved in such experiments are discussed. Greater focus on foundational technology developments for co-cultures is needed for many synthetic biology systems to realize their potential in both applications and answering biological questions. PMID:24829281

  6. Technical support for creating an artificial intelligence system for feature extraction and experimental design

    NASA Technical Reports Server (NTRS)

    Glick, B. J.

    1985-01-01

    Techniques for classifying objects into groups or classes go under many different names including, most commonly, cluster analysis. Mathematically, the general problem is to find a best mapping of objects into an index set consisting of class identifiers. When an a priori grouping of objects exists, the process of deriving the classification rules from samples of classified objects is known as discrimination. When such rules are applied to objects of unknown class, the process is denoted classification. The specific problem addressed involves the group classification of a set of objects that are each associated with a series of measurements (ratio, interval, ordinal, or nominal levels of measurement). Each measurement produces one variable in a multidimensional variable space. Cluster analysis techniques are reviewed and methods for including geographic location, distance measures, and spatial pattern (distribution) as parameters in clustering are examined. For the case of patterning, measures of spatial autocorrelation are discussed in terms of the kind of data (nominal, ordinal, or interval scaled) to which they may be applied.
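
    As an illustration of one spatial-autocorrelation measure applicable to interval-scaled data, the sketch below computes Moran's I with inverse-distance weights on synthetic point data; the weighting scheme and the data are illustrative assumptions, not the report's.

        import numpy as np

        def morans_i(values, coords):
            """Moran's I spatial autocorrelation with inverse-distance weights.

            values : (n,) attribute measured at each location
            coords : (n, 2) x/y coordinates of the locations
            """
            z = np.asarray(values, dtype=float)
            z = z - z.mean()
            xy = np.asarray(coords, dtype=float)

            # Inverse-distance spatial weights, zero on the diagonal.
            d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
            W = np.zeros_like(d)
            off = d > 0
            W[off] = 1.0 / d[off]

            n = len(z)
            return (n / W.sum()) * (z @ W @ z) / (z @ z)

        rng = np.random.default_rng(0)
        coords = rng.uniform(0, 10, size=(200, 2))
        clustered = coords[:, 0] + rng.normal(scale=0.5, size=200)  # spatial trend
        random_vals = rng.normal(size=200)                          # no structure
        print("Moran's I, spatially structured:", round(morans_i(clustered, coords), 3))
        print("Moran's I, random:", round(morans_i(random_vals, coords), 3))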

  7. Reference Data for the Density, Viscosity, and Surface Tension of Liquid Al-Zn, Ag-Sn, Bi-Sn, Cu-Sn, and Sn-Zn Eutectic Alloys

    NASA Astrophysics Data System (ADS)

    Dobosz, Alexandra; Gancarz, Tomasz

    2018-03-01

    The data for the physicochemical properties viscosity, density, and surface tension obtained by different experimental techniques have been analyzed for liquid Al-Zn, Ag-Sn, Bi-Sn, Cu-Sn, and Sn-Zn eutectic alloys. All experimental data sets have been categorized and described by the year of publication, the technique used to obtain the data, the purity of the samples and their compositions, the quoted uncertainty, the number of data in the data set, the form of data, and the temperature range. The proposed standard deviations of liquid eutectic Al-Zn, Ag-Sn, Bi-Sn, Cu-Sn, and Sn-Zn alloys are 0.8%, 0.1%, 0.5%, 0.2%, and 0.1% for the density, 8.7%, 4.1%, 3.6%, 5.1%, and 4.0% for viscosity, and 1.0%, 0.5%, 0.3%, N/A, and 0.4% for surface tension, respectively, at a confidence level of 95%.

  8. Proteomic data analysis of glioma cancer stem-cell lines based on novel nonlinear dimensional data reduction techniques

    NASA Astrophysics Data System (ADS)

    Lespinats, Sylvain; Pinker-Domenig, Katja; Wengert, Georg; Houben, Ivo; Lobbes, Marc; Stadlbauer, Andreas; Meyer-Bäse, Anke

    2016-05-01

    Glioma-derived cancer stem cells (GSCs) are tumor-initiating cells and may be refractory to radiation and chemotherapy and thus have important implications for tumor biology and therapeutics. The analysis and interpretation of large proteomic data sets requires the development of new data mining and visualization approaches. Traditional techniques are insufficient to interpret and visualize these resulting experimental data. The emphasis of this paper lies in the application of novel approaches for visualization, clustering and projection representation to unveil hidden data structures relevant for the accurate interpretation of biological experiments. These qualitative and quantitative methods are applied to the proteomic analysis of data sets derived from the GSCs. The achieved clustering and visualization results provide a more detailed insight into the protein-level fold changes and putative upstream regulators for the GSCs. However, the extracted molecular information is insufficient for classifying GSCs and paving the way to improved therapeutics for the heterogeneous glioma.
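
    The paper's specific nonlinear projection methods are not reproduced here; as a generic stand-in for nonlinear dimensional reduction of a protein-expression matrix, the sketch below embeds synthetic data into two dimensions with scikit-learn's t-SNE.

        import numpy as np
        from sklearn.manifold import TSNE
        from sklearn.preprocessing import StandardScaler

        # Synthetic stand-in for a proteomic matrix: 60 samples x 500 proteins,
        # with two latent groups differing in a subset of proteins.
        rng = np.random.default_rng(42)
        X = rng.normal(size=(60, 500))
        labels = np.repeat([0, 1], 30)
        X[labels == 1, :40] += 1.5          # group-specific fold changes

        # Standardize each protein, then embed into 2-D for visualization/clustering.
        Xs = StandardScaler().fit_transform(X)
        emb = TSNE(n_components=2, perplexity=15, init="pca",
                   random_state=0).fit_transform(Xs)
        print("embedding shape:", emb.shape)   # (60, 2); plot or cluster downstream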

  9. The stress response system of proteins: Implications for bioreactor scaleup

    NASA Technical Reports Server (NTRS)

    Goochee, Charles F.

    1988-01-01

    Animal cells face a variety of environmental stresses in large scale bioreactors, including periodic variations in shear stress and dissolved oxygen concentration. Diagnostic techniques were developed for identifying the particular sources of environmental stress for animal cells in a given bioreactor configuration. The mechanisms by which cells cope with such stresses were examined. The individual concentrations and synthesis rates of hundreds of intracellular proteins are affected by the extracellular environment (medium composition, dissolved oxygen concentration, pH, and level of surface shear stress). Techniques are currently being developed for quantifying the synthesis rates and concentrations of the intracellular proteins which are most sensitive to environmental stress. Previous research has demonstrated that a particular set of stress response proteins is synthesized by mammalian cells in response to temperature fluctuations, dissolved oxygen deprivation, and glucose deprivation. Recently, it was demonstrated that exposure of human kidney cells to high shear stress results in expression of a completely distinct set of intracellular proteins.

  10. Day case laparoscopic nephrectomy with vaginal extraction: initial experience.

    PubMed

    Baldini, Arnaud; Golfier, François; Mouloud, Khaled; Bruge Ansel, Marie-Hélène; Navarro, Rémi; Ruffion, Alain; Paparel, Philippe

    2014-12-01

    To assess the feasibility of laparoscopic nephrectomy with vaginal extraction in an ambulatory setting. Two patients underwent a laparoscopic (one robot assisted) nephrectomy with vaginal extraction for a nonfunctioning kidney in an ambulatory setting. Both interventions were performed by the same surgical team comprising a urologic surgeon and a gynecologic surgeon. The operative specimen was vaginally extracted via an incision in the posterior fornix at the end of the intervention. Patients had to meet very strict socioenvironmental and clinical criteria. Anesthesia was achieved using short-acting agents. Only first- and second-step analgesics were used (morphine-free protocol). The main judgment criteria were visual analog scale assessment for postoperative pain, the Clavien-Dindo classification for surgical complications, and the hospital readmission rate. Two female patients (37 and 41 years old) were successfully operated on using this technique. No major perioperative or postoperative complications (Clavien-Dindo grade >2) were reported, and no patient readmission was required. Postoperative pain was well managed, with visual analog scale scores ≤ 5. Both patients operated in the ambulatory setting had Chung scores of 10 before their discharge. Laparoscopic or robotic nephrectomy with vaginal extraction can be performed in an ambulatory setting in carefully selected patients. The association of fast-track surgical techniques and vaginal extraction, by eliminating the abdominal wound extraction as a source of postoperative pain, allows this operation to be performed in this setting with a high level of satisfaction. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Computer animation challenges for computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Vines, Mauricio; Lee, Won-Sook; Mavriplis, Catherine

    2012-07-01

    Computer animation requirements differ from those of traditional computational fluid dynamics (CFD) investigations in that visual plausibility and rapid frame update rates trump physical accuracy. We present an overview of the main techniques for fluid simulation in computer animation, starting with Eulerian grid approaches, the Lattice Boltzmann method, Fourier transform techniques and Lagrangian particle introduction. Adaptive grid methods, precomputation of results for model reduction, parallelisation and computation on graphical processing units (GPUs) are reviewed in the context of accelerating simulation computations for animation. A survey of current specific approaches for the application of these techniques to the simulation of smoke, fire, water, bubbles, mixing, phase change and solid-fluid coupling is also included. Adding plausibility to results through particle introduction, turbulence detail and concentration on regions of interest by level set techniques has elevated the degree of accuracy and realism of recent animations. Basic approaches are described here. Techniques to control the simulation to produce a desired visual effect are also discussed. Finally, some references to rendering techniques and haptic applications are mentioned to provide the reader with a complete picture of the challenges of simulating fluids in computer animation.

  12. Universal computer test stand (recommended computer test requirements). [for space shuttle computer evaluation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Techniques are considered which would be used to characterize aerospace computers, with the space shuttle application as the end usage. The system-level digital problems which have been encountered and documented are surveyed. From the large cross section of tests, an optimum set is recommended that has a high probability of discovering documented system-level digital problems within laboratory environments. A baseline hardware/software system, required as a laboratory tool to test aerospace computers, is defined. Hardware and software baselines and additions necessary to interface the UTE to aerospace computers for test purposes are outlined.

  13. A faster technique for rendering meshes in multiple display systems

    NASA Astrophysics Data System (ADS)

    Hand, Randall E.; Moorhead, Robert J., II

    2003-05-01

    Level-of-detail algorithms have widely been implemented in architectural VR walkthroughs and video games, but have not had widespread use in VR terrain visualization systems. This thesis explains a set of optimizations that allow most current level-of-detail algorithms to run in the types of multiple display systems used in VR. It improves both the visual quality of the system, through use of graphics hardware acceleration, and the framerate and running time, through modifications to the computations that drive the algorithms. Using ROAM as a testbed, results show improvements between 10% and 100% on varying machines.

  14. AIR Model Preflight Analysis

    NASA Technical Reports Server (NTRS)

    Tai, H.; Wilson, J. W.; Maiden, D. L.

    2003-01-01

    The atmospheric ionizing radiation (AIR) ER-2 preflight analysis, one of the first attempts to obtain a relatively complete measurement set of the high-altitude radiation level environment, is described in this paper. The primary thrust is to characterize the atmospheric radiation and to define dose levels at high-altitude flight. A secondary thrust is to develop and validate dosimetric techniques and monitoring devices for protecting aircrews. With a few chosen routes, we can measure the experimental results and validate the AIR model predictions. Eventually, as more measurements are made, we gain more understanding about the hazardous radiation environment and acquire more confidence in the prediction models.

  15. A theoretical and experimental study of wood planer noise and its control

    NASA Technical Reports Server (NTRS)

    Stewart, J. S.

    1972-01-01

    A combined analytical and experimental study of wood planer noise is made and the results applied to the development of practical noise control techniques. The dominant mechanisms of sound generation are identified and an analysis is presented which accurately predicts the governing levels of noise emission. Planing operations in which the length of the board is much greater than the width are considered. The dominant source of planer noise is identified as the board being surfaced, which is set into vibration by the impact of cutterhead knives. This is determined from studies made both in the laboratory and in the field concerning the effect of board width on the resulting noise, which indicate a six decibel increase in noise level for each doubling of board width. The theoretical development of a model for board vibration defines the vibrational field set up in the board and serves as a guide for cutterhead redesign.

  16. Experimental Characterization of Gas Turbine Emissions at Simulated Flight Altitude Conditions

    NASA Technical Reports Server (NTRS)

    Howard, R. P.; Wormhoudt, J. C.; Whitefield, P. D.

    1996-01-01

    NASA's Atmospheric Effects of Aviation Project (AEAP) is developing a scientific basis for assessment of the atmospheric impact of subsonic and supersonic aviation. A primary goal is to assist assessments of United Nations scientific organizations and hence, consideration of emissions standards by the International Civil Aviation Organization (ICAO). Engine tests have been conducted at AEDC to fulfill the need of AEAP. The purpose of these tests is to obtain a comprehensive database to be used for supplying critical information to the atmospheric research community. It includes: (1) simulated sea-level-static test data as well as simulated altitude data; and (2) intrusive (extractive probe) data as well as non-intrusive (optical techniques) data. A commercial-type bypass engine with aviation fuel was used in this test series. The test matrix was set by parametrically selecting the temperature, pressure, and flow rate at sea-level-static and different altitudes to obtain a parametric set of data.

  17. A Unified Approach to Functional Principal Component Analysis and Functional Multiple-Set Canonical Correlation.

    PubMed

    Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S

    2017-06-01

    Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.
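
    The unified criterion and the alternating regularized least squares algorithm proposed in the paper are not reproduced here; the sketch below only shows the standard discretized-FPCA baseline, extracting principal component functions from curves sampled on a common grid via an SVD, with synthetic data.

        import numpy as np

        # Minimal discretized FPCA: sample smooth curves on a common time grid,
        # center them, and take the leading singular vectors as functional PCs.
        # (This ignores basis-function smoothing and the paper's unified
        # FPCA/FMCCA criterion; it is only the standard PCA-on-curves baseline.)
        rng = np.random.default_rng(1)
        t = np.linspace(0, 1, 200)                       # common evaluation grid
        n_curves = 80
        scores_true = rng.normal(size=(n_curves, 2))
        curves = (scores_true[:, [0]] * np.sin(2 * np.pi * t)
                  + scores_true[:, [1]] * np.cos(4 * np.pi * t)
                  + 0.05 * rng.normal(size=(n_curves, t.size)))

        mean_curve = curves.mean(axis=0)
        U, S, Vt = np.linalg.svd(curves - mean_curve, full_matrices=False)

        explained = S**2 / np.sum(S**2)
        pc_functions = Vt[:2]                 # leading functional principal components
        pc_scores = U[:, :2] * S[:2]          # per-curve component scores
        print("variance explained by first two components:", explained[:2].round(3))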

  18. GROMACS: High performance molecular simulations through multi-level parallelism from laptops to supercomputers

    DOE PAGES

    Abraham, Mark James; Murtola, Teemu; Schulz, Roland; ...

    2015-07-15

    GROMACS is one of the most widely used open-source and free software codes in chemistry, used primarily for dynamical simulations of biomolecules. It provides a rich set of calculation types, preparation and analysis tools. Several advanced techniques for free-energy calculations are supported. In version 5, it reaches new performance heights, through several new and enhanced parallelization algorithms. These algorithms work on every level: SIMD registers inside cores, multithreading, heterogeneous CPU–GPU acceleration, state-of-the-art 3D domain decomposition, and ensemble-level parallelization through built-in replica exchange and the separate Copernicus framework. Finally, the latest best-in-class compressed trajectory storage format is supported.

  19. GROMACS: High performance molecular simulations through multi-level parallelism from laptops to supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abraham, Mark James; Murtola, Teemu; Schulz, Roland

    GROMACS is one of the most widely used open-source and free software codes in chemistry, used primarily for dynamical simulations of biomolecules. It provides a rich set of calculation types, preparation and analysis tools. Several advanced techniques for free-energy calculations are supported. In version 5, it reaches new performance heights, through several new and enhanced parallelization algorithms. These algorithms work on every level: SIMD registers inside cores, multithreading, heterogeneous CPU–GPU acceleration, state-of-the-art 3D domain decomposition, and ensemble-level parallelization through built-in replica exchange and the separate Copernicus framework. Finally, the latest best-in-class compressed trajectory storage format is supported.

  20. Tobacco use induces anti-apoptotic, proliferative patterns of gene expression in circulating leukocytes of Caucasian males

    PubMed Central

    Charles, Peter C; Alder, Brian D; Hilliard, Eleanor G; Schisler, Jonathan C; Lineberger, Robert E; Parker, Joel S; Mapara, Sabeen; Wu, Samuel S; Portbury, Andrea; Patterson, Cam; Stouffer, George A

    2008-01-01

    Background Strong epidemiologic evidence correlates tobacco use with a variety of serious adverse health effects, but the biological mechanisms that produce these effects remain elusive. Results We analyzed gene transcription data to identify expression spectra related to tobacco use in circulating leukocytes of 67 Caucasian male subjects. Levels of cotinine, a nicotine metabolite, were used as a surrogate marker for tobacco exposure. Significance Analysis of Microarray and Gene Set Analysis identified 109 genes in 16 gene sets whose transcription levels were differentially regulated by nicotine exposure. We subsequently analyzed this gene set by hyperclustering, a technique that allows the data to be clustered by both expression ratio and gene annotation (e.g. Gene Ontologies). Conclusion Our results demonstrate that tobacco use affects transcription of groups of genes that are involved in proliferation and apoptosis in circulating leukocytes. These transcriptional effects include a repertoire of transcriptional changes likely to increase the incidence of neoplasia through an altered expression of genes associated with transcription and signaling, interferon responses and repression of apoptotic pathways. PMID:18710571

  1. Ab initio structural and spectroscopic study of HPS^x and HSP^x (x = 0, +1, −1) in the gas phase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yaghlane, Saida Ben; Cotton, C. Eric; Francisco, Joseph S., E-mail: francisc@purdue.edu, E-mail: hochlaf@univ-mlv.fr

    2013-11-07

    Accurate ab initio computations of structural and spectroscopic parameters for the HPS/HSP molecules and corresponding cations and anions have been performed. For the electronic structure computations, standard and explicitly correlated coupled cluster techniques in conjunction with large basis sets have been adopted. In particular, we present equilibrium geometries, rotational constants, harmonic vibrational frequencies, adiabatic ionization energies, electron affinities, and, for the neutral species, singlet-triplet relative energies. Besides, the full-dimensional potential energy surfaces (PESs) for HPS^x and HSP^x (x = −1, 0, +1) systems have been generated at the standard coupled cluster level with a basis set of augmented quintuple-zeta quality. By applying perturbation theory to the calculated PESs, an extended set of spectroscopic constants, including τ, first-order centrifugal distortion and anharmonic vibrational constants, has been obtained. In addition, the potentials have been used in a variational approach to deduce the whole pattern of vibrational levels up to 4000 cm^−1 above the minima of the corresponding PESs.

  2. Wavelet-based de-noising algorithm for images acquired with parallel magnetic resonance imaging (MRI).

    PubMed

    Delakis, Ioannis; Hammad, Omer; Kitney, Richard I

    2007-07-07

    Wavelet-based de-noising has been shown to improve image signal-to-noise ratio in magnetic resonance imaging (MRI) while maintaining spatial resolution. Wavelet-based de-noising techniques typically implemented in MRI require that noise displays uniform spatial distribution. However, images acquired with parallel MRI have spatially varying noise levels. In this work, a new algorithm for filtering images with parallel MRI is presented. The proposed algorithm extracts the edges from the original image and then generates a noise map from the wavelet coefficients at finer scales. The noise map is zeroed at locations where edges have been detected and directional analysis is also used to calculate noise in regions of low-contrast edges that may not have been detected. The new methodology was applied on phantom and brain images and compared with other applicable de-noising techniques. The performance of the proposed algorithm was shown to be comparable with other techniques in central areas of the images, where noise levels are high. In addition, finer details and edges were maintained in peripheral areas, where noise levels are low. The proposed methodology is fully automated and can be applied on final reconstructed images without requiring sensitivity profiles or noise matrices of the receiver coils, therefore making it suitable for implementation in a clinical MRI setting.
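
    For orientation, the sketch below applies a conventional global soft-threshold wavelet de-noising with PyWavelets on a synthetic image; note that it assumes spatially uniform noise, which is exactly the assumption the proposed noise-map algorithm relaxes for parallel MRI.

        import numpy as np
        import pywt

        def wavelet_denoise(image, wavelet="db4", level=3):
            """Global soft-threshold wavelet denoising (assumes uniform noise;
            the paper's method instead builds a spatially varying noise map)."""
            coeffs = pywt.wavedec2(image, wavelet, level=level)

            # Robust noise estimate from the finest-scale diagonal coefficients.
            sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
            thresh = sigma * np.sqrt(2 * np.log(image.size))   # universal threshold

            denoised = [coeffs[0]]                             # keep approximation
            for detail_level in coeffs[1:]:
                denoised.append(tuple(pywt.threshold(d, thresh, mode="soft")
                                      for d in detail_level))
            return pywt.waverec2(denoised, wavelet)

        # Synthetic example: smooth phantom plus Gaussian noise.
        x = np.linspace(-1, 1, 256)
        phantom = np.exp(-((x[:, None]) ** 2 + (x[None, :]) ** 2) / 0.2)
        noisy = phantom + 0.1 * np.random.default_rng(0).normal(size=phantom.shape)
        clean = wavelet_denoise(noisy)
        print("RMSE noisy   :", np.sqrt(np.mean((noisy - phantom) ** 2)).round(4))
        print("RMSE denoised:", np.sqrt(np.mean((clean - phantom) ** 2)).round(4))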

  3. Handling Imbalanced Data Sets in Multistage Classification

    NASA Astrophysics Data System (ADS)

    López, M.

    Multistage classification is a logical approach, based on a divide-and-conquer solution, for dealing with problems with a high number of classes. The classification problem is divided into several sequential steps, each one associated to a single classifier that works with subgroups of the original classes. In each level, the current set of classes is split into smaller subgroups of classes until they (the subgroups) are composed of only one class. The resulting chain of classifiers can be represented as a tree, which (1) simplifies the classification process by using fewer categories in each classifier and (2) makes it possible to combine several algorithms or use different attributes in each stage. Most of the classification algorithms can be biased in the sense of selecting the most populated class in overlapping areas of the input space. This can degrade a multistage classifier performance if the training set sample frequencies do not reflect the real prevalence in the population. Several techniques such as applying prior probabilities, assigning weights to the classes, or replicating instances have been developed to overcome this handicap. Most of them are designed for two-class (accept-reject) problems. In this article, we evaluate several of these techniques as applied to multistage classification and analyze how they can be useful for astronomy. We compare the results obtained by classifying a data set based on Hipparcos with and without these methods.
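
    One of the simplest remedies mentioned, assigning weights to the classes, can be sketched with scikit-learn as below; the synthetic data set, imbalance ratio, and classifier choice are illustrative rather than taken from the article.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import balanced_accuracy_score
        from sklearn.model_selection import train_test_split

        # Imbalanced two-class problem (5% minority class).
        X, y = make_classification(n_samples=5000, n_features=10,
                                   weights=[0.95, 0.05], random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        for cw in (None, "balanced"):
            clf = RandomForestClassifier(n_estimators=200, class_weight=cw,
                                         random_state=0).fit(X_tr, y_tr)
            score = balanced_accuracy_score(y_te, clf.predict(X_te))
            print(f"class_weight={cw}: balanced accuracy = {score:.3f}")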

  4. Transfer printing techniques for materials assembly and micro/nanodevice fabrication.

    PubMed

    Carlson, Andrew; Bowen, Audrey M; Huang, Yonggang; Nuzzo, Ralph G; Rogers, John A

    2012-10-09

    Transfer printing represents a set of techniques for deterministic assembly of micro- and nanomaterials into spatially organized, functional arrangements with two- and three-dimensional layouts. Such processes provide versatile routes not only to test structures and vehicles for scientific studies but also to high-performance, heterogeneously integrated functional systems, including those in flexible electronics, three-dimensional and/or curvilinear optoelectronics, and bio-integrated sensing and therapeutic devices. This article summarizes recent advances in a variety of transfer printing techniques, ranging from the mechanics and materials aspects that govern their operation to engineering features of their use in systems with varying levels of complexity. A concluding section presents perspectives on opportunities for basic and applied research, and on emerging use of these methods in high throughput, industrial-scale manufacturing. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Image Alignment for Multiple Camera High Dynamic Range Microscopy.

    PubMed

    Eastwood, Brian S; Childs, Elisabeth C

    2012-01-09

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.

  6. Image Alignment for Multiple Camera High Dynamic Range Microscopy

    PubMed Central

    Eastwood, Brian S.; Childs, Elisabeth C.

    2012-01-01

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera. PMID:22545028
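
    A rough sketch of exposure-robust, feature-descriptor-based alignment is given below using OpenCV's ORB features and a RANSAC homography. The paper matches descriptors on radiant-power images from calibrated cameras, whereas this example works on plain intensity images; the file names are hypothetical placeholders.

        import cv2
        import numpy as np

        def align_exposures(img_ref, img_moving):
            """Align img_moving to img_ref via ORB descriptors and a RANSAC
            homography (intensity images here, radiant-power images in the paper)."""
            orb = cv2.ORB_create(nfeatures=2000)
            k1, d1 = orb.detectAndCompute(img_ref, None)
            k2, d2 = orb.detectAndCompute(img_moving, None)

            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:300]

            src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

            h, w = img_ref.shape[:2]
            return cv2.warpPerspective(img_moving, H, (w, h))

        # Hypothetical file names: load two exposures as grayscale and align them.
        short_exp = cv2.imread("exposure_short.png", cv2.IMREAD_GRAYSCALE)
        long_exp = cv2.imread("exposure_long.png", cv2.IMREAD_GRAYSCALE)
        aligned = align_exposures(short_exp, long_exp)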

  7. Gender identification of Grasshopper Sparrows comparing behavioral, morphological, and molecular techniques

    USGS Publications Warehouse

    Ammer, F.K.; Wood, P.B.; McPherson, R.J.

    2008-01-01

    Correct gender identification in monomorphic species is often difficult especially if males and females do not display obvious behavioral and breeding differences. We compared gender specific morphology and behavior with recently developed DNA techniques for gender identification in the monomorphic Grasshopper Sparrow (Ammodramus savannarum). Gender was ascertained with DNA in 213 individuals using the 2550F/2718R primer set and 3% agarose gel electrophoresis. Field observations using behavior and breeding characteristics to identify gender matched DNA analyses with 100% accuracy for adult males and females. Gender was identified with DNA for all captured juveniles that did not display gender specific traits or behaviors in the field. The molecular techniques used offered a high level of accuracy and may be useful in studies of dispersal mechanisms and winter assemblage composition in monomorphic species.

  8. The effects of competition on efficiency of electricity generation: A post-PURPA analysis

    NASA Astrophysics Data System (ADS)

    Jordan, Paula Faye

    1998-10-01

    The central issue of this research is the effects increased market competition has on production efficiency. Specifically, the research focuses upon measuring the relative level of efficiency in the generation of electricity in 1978 and 1993. It is hypothesized that the Public Utilities Regulatory Policy Act (PURPA), passed by Congress in 1978, made progress toward achieving its legislative intent of increasing competition, and therefore increased efficiency, in the generation of electricity. The methodology used to measure levels of efficiency in this research is the stochastic statistical estimator with the functional form of the translog production function. The models are then estimated using the maximum likelihood estimating technique with plant-level data of coal generating units in the U.S. for 1978 and 1993. Results from the estimation of these models indicate that: (a) for the technical efficiency measures, the 1978 data set outperformed the 1993 data set for the OTE and OTE of Fuel measures; (b) the 1993 data set was relatively more efficient in the OTE of Capital and the OTE of Labor when compared to the 1978 data set; (c) the 1993 observations indicated a relatively greater level of efficiency over 1978 in the OAE, OAE of Fuel, and OAE of Capital measures; (d) the OAE of Labor measure findings supported the 1978 observations as more efficient when compared to the 1993 set of observations; (e) when looking at the top and bottom ranked sites within each data set, the results indicated that sites which were top or poor performers for the technical and allocative efficiency measures tended to be top or poor performers for the overall, fuel, and capital measures. The sites that appeared as top or poor performers on the labor measures within the technical and allocative groups were often unique and did not necessarily appear as top or poor performers in the other efficiency measures.

  9. Validating a large geophysical data set: Experiences with satellite-derived cloud parameters

    NASA Technical Reports Server (NTRS)

    Kahn, Ralph; Haskins, Robert D.; Knighton, James E.; Pursch, Andrew; Granger-Gallegos, Stephanie

    1992-01-01

    We are validating the global cloud parameters derived from the satellite-borne HIRS2 and MSU atmospheric sounding instrument measurements, and are using the analysis of these data as one prototype for studying large geophysical data sets in general. The HIRS2/MSU data set contains a total of 40 physical parameters, filling 25 MB/day; raw HIRS2/MSU data are available for a period exceeding 10 years. Validation involves developing a quantitative sense for the physical meaning of the derived parameters over the range of environmental conditions sampled. This is accomplished by comparing the spatial and temporal distributions of the derived quantities with similar measurements made using other techniques, and with model results. The data handling needed for this work is possible only with the help of a suite of interactive graphical and numerical analysis tools. Level 3 (gridded) data is the common form in which large data sets of this type are distributed for scientific analysis. We find that Level 3 data is inadequate for the data comparisons required for validation. Level 2 data (individual measurements in geophysical units) is needed. A sampling problem arises when individual measurements, which are not uniformly distributed in space or time, are used for the comparisons. Standard 'interpolation' methods involve fitting the measurements for each data set to surfaces, which are then compared. We are experimenting with formal criteria for selecting geographical regions, based upon the spatial frequency and variability of measurements, that allow us to quantify the uncertainty due to sampling. As part of this project, we are also dealing with ways to keep track of constraints placed on the output by assumptions made in the computer code. The need to work with Level 2 data introduces a number of other data handling issues, such as accessing data files across machine types, meeting large data storage requirements, accessing other validated data sets, processing speed and throughput for interactive graphical work, and problems relating to graphical interfaces.

  10. Stochastic level-set variational implicit-solvent approach to solute-solvent interfacial fluctuations

    PubMed Central

    Zhou, Shenggao; Sun, Hui; Cheng, Li-Tien; Dzubiella, Joachim; McCammon, J. Andrew

    2016-01-01

    Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimation of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding processes. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the “normal velocity” that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence on the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations. Our work is a first step toward the inclusion of fluctuations into the VISM and understanding the impact of interfacial fluctuations on biomolecular solvation with an implicit-solvent approach. PMID:27497546

  11. The Attitudinal and Cognitive Effects of Planetarium Integration in Teaching Selected Astronomical Concepts to Fourth, Fifth, and Sixth-Grade Students

    NASA Astrophysics Data System (ADS)

    Twiest, Mark Gilbert

    The purpose of this study was to investigate the attitudinal and cognitive differences among students in an astronomy curriculum which utilizes a planetarium in comparison to an astronomy curriculum which is presented solely in the classroom. The specific attitudes of interest in this study are student attitudes toward science as a whole and toward astronomy in particular. Researcher-developed attitude and astronomy achievement measures were administered to 423 fourth-, fifth-, and sixth-grade students in three schools. Pre- and post-test data were collected from students in this quasi-experimental design. Two-way analysis of covariance techniques were used to analyze the independent variables of gender and treatment. Results revealed significant differences (p < .05) in student attitudes toward astronomy in the fourth grade favoring the control school students. No other significant differences in student attitudes were found with respect to treatment or gender. Achievement was analyzed at both the knowledge and comprehension levels. Significant differences (p < .05) in student achievement were found in the fourth and fifth grades with respect to knowledge-level questions. In both instances these differences favored the control school setting. Significant differences (p < .05) in student achievement were found for comprehension-level questions at every grade level. In this case the control school outperformed the experimental setting in the fourth and sixth grades. Fifth-grade experimental setting students had significantly higher achievement on comprehension-level questions. A significant interaction occurred between treatment and gender with respect to student attitudes toward astronomy in the fifth grade. A second significant interaction occurred between treatment and gender for knowledge-level questions for sixth-grade students. No other significant interactions were found between treatment and gender. Correlations between post-test attitudes and achievement were also calculated for each grade level in both the control and experimental settings. Correlations remained below .2 in every instance except one. The post-test correlation between attitudes overall and achievement in the control school's sixth grade was .54.

  12. Virtual Monoenergetic Images From a Novel Dual-Layer Spectral Detector Computed Tomography Scanner in Portal Venous Phase: Adjusted Window Settings Depending on Assessment Focus Are Essential for Image Interpretation.

    PubMed

    Hickethier, Tilman; Iuga, Andra-Iza; Lennartz, Simon; Hauger, Myriam; Byrtus, Jonathan; Luetkens, Julian A; Haneder, Stefan; Maintz, David; Doerner, Jonas

    We aimed to determine optimal window settings for conventional polyenergetic (PolyE) and virtual monoenergetic images (MonoE) derived from abdominal portal venous phase computed tomography (CT) examinations on a novel dual-layer spectral-detector CT (SDCT). From 50 patients, SDCT data sets of MonoE at 40 kiloelectron volts as well as PolyE were reconstructed, and the best individual window width and level values were manually assessed separately for the evaluation of abdominal arteries as well as for liver lesions. Via regression analysis, optimized individual values were mathematically calculated. Subjective image quality parameters and vessel and liver lesion diameters were measured to determine the influence of different W/L settings. Attenuation and contrast-to-noise values were significantly higher in MonoE compared with PolyE. Compared with standard settings, almost all adjusted W/L settings varied significantly and yielded higher subjective scoring. No differences were found between manually adjusted and mathematically calculated W/L settings. PolyE and MonoE from abdominal portal venous phase SDCT examinations require appropriate W/L settings depending on reconstruction technique and assessment focus.
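
    For readers unfamiliar with window settings, the sketch below shows how a window level/width pair maps CT attenuation values to display gray levels; the example settings are illustrative, not the optimized values derived in the study.

        import numpy as np

        def apply_window(hu_image, level, width):
            """Map CT attenuation values (HU) to 8-bit display values for a given
            window level/width, as used when reviewing PolyE or MonoE images."""
            lo, hi = level - width / 2.0, level + width / 2.0
            clipped = np.clip(hu_image, lo, hi)
            return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

        # Illustrative (not study-derived) settings: a soft-tissue window versus a
        # wider, higher window that might suit high-attenuation 40 keV MonoE vessels.
        hu = np.array([[-100.0, 40.0, 120.0, 400.0, 900.0]])
        print(apply_window(hu, level=50, width=350))
        print(apply_window(hu, level=150, width=800))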

  13. Application of short-data methods on extreme surge levels

    NASA Astrophysics Data System (ADS)

    Feng, X.

    2014-12-01

    Tropical cyclone-induced storm surges are among the most destructive natural hazards that impact the United States. Unfortunately for academic research, the available time series for extreme surge analysis are very short. The limited data introduce uncertainty and affect the accuracy of statistical analyses of extreme surge levels. This study deals with techniques applicable to data sets of less than 20 years, including simulation modelling and methods based on the parameters of the parent distribution. The verified water levels from water gauges spread along the Southwest and Southeast Florida Coast, as well as the Florida Keys, are used in this study. Methods to calculate extreme storm surges are described and reviewed, including 'classical' methods based on the generalized extreme value (GEV) distribution and the generalized Pareto distribution (GPD), and approaches designed specifically to deal with short data sets. Incorporating the influence of global warming, the statistical analysis reveals enhanced extreme surge magnitudes and frequencies during warm years, while reduced levels of extreme surge activity are observed in the same study domain during cold years. Furthermore, a non-stationary GEV distribution is applied to predict the extreme surge levels with warming sea surface temperatures. The non-stationary GEV distribution indicates that with 1 °C of warming in sea surface temperature from the baseline climate, the 100-year return surge level in Southwest and Southeast Florida will increase by up to 40 centimeters. The considered statistical approaches for extreme surge estimation based on short data sets will be valuable to coastal stakeholders, including urban planners, emergency managers, and the hurricane and storm surge forecasting and warning system.
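
    A minimal stationary GEV fit to a short record of synthetic annual surge maxima is sketched below with SciPy; the study's non-stationary, SST-dependent formulation and its short-data-specific methods are not reproduced, and all numbers are invented.

        from scipy.stats import genextreme

        # Minimal stationary GEV fit to annual surge maxima (synthetic values in
        # meters; a non-stationary fit would let the location/scale parameters
        # depend on covariates such as sea surface temperature, as in the study).
        # Note: SciPy's shape parameter c has the opposite sign of the usual GEV xi.
        annual_maxima = genextreme.rvs(c=-0.1, loc=1.2, scale=0.3, size=18,
                                       random_state=7)   # a short, ~18-year record

        c, loc, scale = genextreme.fit(annual_maxima)

        for T in (10, 50, 100):
            # T-year return level = quantile exceeded on average once every T years.
            level = genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
            print(f"{T:3d}-year return level: {level:.2f} m")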

  14. Bayesian inversion of refraction seismic traveltime data

    NASA Astrophysics Data System (ADS)

    Ryberg, T.; Haberland, Ch

    2018-03-01

    We apply a Bayesian Markov chain Monte Carlo (McMC) formalism to the inversion of refraction seismic, traveltime data sets to derive 2-D velocity models below linear arrays (i.e. profiles) of sources and seismic receivers. Typical refraction data sets, especially when using the far-offset observations, are known to have experimental geometries which are very poor, highly ill-posed and far from ideal. As a consequence, the structural resolution quickly degrades with depth. Conventional inversion techniques, based on regularization, potentially suffer from the choice of appropriate inversion parameters (i.e. number and distribution of cells, starting velocity models, damping and smoothing constraints, data noise level, etc.) and only local model space exploration. McMC techniques are used for exhaustive sampling of the model space without the need of prior knowledge (or assumptions) of inversion parameters, resulting in a large number of models fitting the observations. Statistical analysis of these models allows to derive an average (reference) solution and its standard deviation, thus providing uncertainty estimates of the inversion result. The highly non-linear character of the inversion problem, mainly caused by the experiment geometry, does not allow deriving a reference solution and error map by a simple averaging procedure. We present a modified averaging technique, which excludes parts of the prior distribution in the posterior values due to poor ray coverage, thus providing reliable estimates of inversion model properties even in those parts of the models. The model is discretized by a set of Voronoi polygons (with constant slowness cells) or a triangulated mesh (with interpolation within the triangles). Forward traveltime calculations are performed by a fast, finite-difference-based eikonal solver. The method is applied to a data set from a refraction seismic survey from Northern Namibia and compared to conventional tomography. An inversion test for a synthetic data set from a known model is also presented.
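
    The transdimensional Voronoi/eikonal machinery of the paper is well beyond a snippet, but the core Metropolis-Hastings sampling idea can be sketched on a toy two-parameter traveltime problem; the forward model, priors and noise level below are invented for illustration.

        import numpy as np

        # Toy Bayesian traveltime inversion with Metropolis-Hastings: estimate a
        # constant delay t0 and slowness s from noisy first-arrival picks
        # t = t0 + s * offset. (The paper's transdimensional Voronoi/eikonal
        # inversion is far richer; this only illustrates the McMC sampling idea.)
        rng = np.random.default_rng(0)
        offsets = np.linspace(0.5, 20.0, 40)                  # km
        true_t0, true_s, noise = 0.8, 0.25, 0.03              # s, s/km, s
        data = true_t0 + true_s * offsets + noise * rng.normal(size=offsets.size)

        def log_likelihood(m):
            t0, s = m
            residual = data - (t0 + s * offsets)
            return -0.5 * np.sum((residual / noise) ** 2)

        def in_prior(m):                                      # uniform box prior
            t0, s = m
            return 0.0 <= t0 <= 5.0 and 0.0 <= s <= 1.0

        current = np.array([2.0, 0.5])
        samples = []
        for _ in range(20000):
            proposal = current + rng.normal(scale=[0.05, 0.01])
            if in_prior(proposal):
                log_alpha = log_likelihood(proposal) - log_likelihood(current)
                if np.log(rng.uniform()) < log_alpha:
                    current = proposal
            samples.append(current.copy())

        post = np.array(samples[5000:])                       # discard burn-in
        print("posterior mean t0, s :", post.mean(axis=0).round(3))
        print("posterior std  t0, s :", post.std(axis=0).round(3))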

  15. System Learning via Exploratory Data Analysis: Seeing Both the Forest and the Trees

    NASA Astrophysics Data System (ADS)

    Habash Krause, L.

    2014-12-01

    As the amount of observational Earth and Space Science data grows, so does the need for learning and employing data analysis techniques that can extract meaningful information from those data. Space-based and ground-based data sources from all over the world are used to inform Earth and Space environment models. However, with such a large amount of data comes a need to organize those data in a way such that trends within the data are easily discernible. This can be tricky due to the interaction between physical processes that lead to partial correlation of variables or multiple interacting sources of causality. With the suite of Exploratory Data Analysis (EDA) data mining codes available at MSFC, we have the capability to analyze large, complex data sets and quantitatively distinguish fundamentally independent effects from consequential or derived effects. We have used these techniques to examine the accuracy of ionospheric climate models with respect to trends in ionospheric parameters and space weather effects. In particular, these codes have been used to 1) Provide summary "at-a-glance" surveys of large data sets through categorization and/or evolution over time to identify trends, distribution shapes, and outliers, 2) Discern the underlying "latent" variables which share common sources of causality, and 3) Establish a new set of basis vectors by computing Empirical Orthogonal Functions (EOFs) which represent the maximum amount of variance for each principal component. Some of these techniques are easily implemented in the classroom using standard MATLAB functions, some of the more advanced applications require the statistical toolbox, and applications to unique situations require more sophisticated levels of programming. This paper will present an overview of the range of tools available and how they might be used for a variety of time series Earth and Space Science data sets. Examples of feature recognition from both 1D and 2D (e.g. imagery) time series data sets will be presented.
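
    The EOF step mentioned in point 3 can be sketched directly with an SVD. Although the abstract refers to MATLAB tooling, the same computation in Python (an assumption of this example, with synthetic data standing in for ionospheric observations) looks as follows.

    ```python
    # Sketch of computing Empirical Orthogonal Functions (EOFs) from a
    # space-time data matrix with an SVD, as one of the EDA steps described above.
    import numpy as np

    rng = np.random.default_rng(2)
    data = rng.normal(size=(365, 50))           # rows: time samples, cols: stations/grid points
    anom = data - data.mean(axis=0)             # remove the time mean at each location

    # SVD of the anomaly matrix: rows of Vt are the EOF spatial patterns,
    # U * s gives the principal-component time series.
    U, s, Vt = np.linalg.svd(anom, full_matrices=False)
    explained_variance = s**2 / np.sum(s**2)

    eofs = Vt                                   # EOF k is eofs[k]
    pcs = U * s                                 # PC time series for each EOF
    print("variance explained by first 3 EOFs:", explained_variance[:3].round(3))
    ```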

  16. An Empirical Study of the Transmission Power Setting for Bluetooth-Based Indoor Localization Mechanisms

    PubMed Central

    Castillo-Cara, Manuel; Lovón-Melgarejo, Jesús; Bravo-Rocca, Gusseppe; Orozco-Barbosa, Luis; García-Varea, Ismael

    2017-01-01

    Nowadays, there is a great interest in developing accurate wireless indoor localization mechanisms enabling the implementation of many consumer-oriented services. Among the many proposals, wireless indoor localization mechanisms based on the Received Signal Strength Indication (RSSI) are being widely explored. Most studies have focused on the evaluation of the capabilities of different mobile device brands and wireless network technologies. Furthermore, different parameters and algorithms have been proposed as a means of improving the accuracy of wireless-based localization mechanisms. In this paper, we focus on the tuning of the RSSI fingerprint to be used in the implementation of a Bluetooth Low Energy 4.0 (BLE4.0) localization mechanism. Following a holistic approach, we start by assessing the capabilities of two Bluetooth sensor/receiver devices. We then evaluate the relevance of the RSSI fingerprint reported by each BLE4.0 beacon operating at various transmission power levels using feature selection techniques. Based on our findings, we use two classification algorithms in order to improve the setting of the transmission power levels of each of the BLE4.0 beacons. Our main findings show that our proposal can greatly improve the localization accuracy by setting a custom transmission power level for each BLE4.0 beacon. PMID:28590413

  17. An Empirical Study of the Transmission Power Setting for Bluetooth-Based Indoor Localization Mechanisms.

    PubMed

    Castillo-Cara, Manuel; Lovón-Melgarejo, Jesús; Bravo-Rocca, Gusseppe; Orozco-Barbosa, Luis; García-Varea, Ismael

    2017-06-07

    Nowadays, there is a great interest in developing accurate wireless indoor localization mechanisms enabling the implementation of many consumer-oriented services. Among the many proposals, wireless indoor localization mechanisms based on the Received Signal Strength Indication (RSSI) are being widely explored. Most studies have focused on the evaluation of the capabilities of different mobile device brands and wireless network technologies. Furthermore, different parameters and algorithms have been proposed as a means of improving the accuracy of wireless-based localization mechanisms. In this paper, we focus on the tuning of the RSSI fingerprint to be used in the implementation of a Bluetooth Low Energy 4.0 (BLE4.0) localization mechanism. Following a holistic approach, we start by assessing the capabilities of two Bluetooth sensor/receiver devices. We then evaluate the relevance of the RSSI fingerprint reported by each BLE4.0 beacon operating at various transmission power levels using feature selection techniques. Based on our findings, we use two classification algorithms in order to improve the setting of the transmission power levels of each of the BLE4.0 beacons. Our main findings show that our proposal can greatly improve the localization accuracy by setting a custom transmission power level for each BLE4.0 beacon.
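
    A hedged sketch of the fingerprint-classification step is given below: RSSI vectors from several beacons are classified into position cells, and cross-validated accuracy provides the score one would compare across transmission-power settings. The specific classifiers, array shapes, and synthetic values are assumptions for illustration and are not taken from the paper.

    ```python
    # Classify receiver position from BLE beacon RSSI vectors and compare
    # candidate setups by cross-validated accuracy (synthetic placeholders).
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(3)
    # rssi[i, j]: RSSI (dBm) of beacon j at sample i; labels: position cell index
    rssi = rng.normal(-70, 6, size=(500, 5))
    labels = rng.integers(0, 10, size=500)

    for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=5)),
                      ("NaiveBayes", GaussianNB())]:
        acc = cross_val_score(clf, rssi, labels, cv=5).mean()
        print(f"{name}: mean CV accuracy = {acc:.2f}")
    # In the study, a comparison like this would be repeated per transmission-power
    # level to select a custom power setting for each BLE4.0 beacon.
    ```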

  18. 3D Tracking Based Augmented Reality for Cultural Heritage Data Management

    NASA Astrophysics Data System (ADS)

    Battini, C.; Landi, G.

    2015-02-01

    The development of contactless documentation techniques is allowing researchers to collect high volumes of three-dimensional data in a short time but with high levels of accuracy. The digitalisation of cultural heritage opens up the possibility of using image processing and analysis, and computer graphics techniques, to preserve this heritage for future generations; augmenting it with additional information or with new possibilities for its enjoyment and use. The collection of precise datasets about cultural heritage status is crucial for its interpretation, its conservation and during the restoration processes. The application of digital-imaging solutions for various feature extraction, image data-analysis techniques, and three-dimensional reconstruction of ancient artworks, allows the creation of multidimensional models that can incorporate information coming from heterogeneous data sets, research results and historical sources. Real objects can be scanned and reconstructed virtually, with high levels of data accuracy and resolution. Real-time visualisation software and hardware is rapidly evolving and complex three-dimensional models can be interactively visualised and explored on applications developed for mobile devices. This paper will show how a 3D reconstruction of an object, with multiple layers of information, can be stored and visualised through a mobile application that will allow interaction with a physical object for its study and analysis, using 3D Tracking based Augmented Reality techniques.

  19. Using Digital Radiography To Image Liquid Nitrogen in Voids

    NASA Technical Reports Server (NTRS)

    Cox, Dwight; Blevins, Elana

    2007-01-01

    Digital radiography by use of (1) a field-portable x-ray tube that emits low-energy x rays and (2) an electronic imaging x-ray detector has been found to be an effective technique for detecting liquid nitrogen inside voids in thermal-insulation panels. The technique was conceived as a means of investigating cryopumping (including cryoingestion) as a potential cause of loss of thermal insulation foam from space-shuttle external fuel tanks. The technique could just as well be used to investigate cryopumping and cryoingestion in other settings. In images formed by use of low-energy x-rays, one can clearly distinguish between voids filled with liquid nitrogen and those filled with gaseous nitrogen or other gases. Conventional film radiography is of some value, but yields only non-real-time still images that do not show time dependences of levels of liquids in voids. In contrast, the present digital radiographic technique yields a succession of images in real time at a rate of about 10 frames per second. The digitized images can be saved for subsequent analysis to extract data on time dependencies of levels of liquids and, hence, of flow paths and rates of filling and draining. The succession of images also amounts to a real-time motion picture that can be used as a guide to adjustment of test conditions.

  20. Atomically Precise Surface Engineering for Producing Imagers

    NASA Technical Reports Server (NTRS)

    Nikzad, Shouleh (Inventor); Hoenk, Michael E. (Inventor); Greer, Frank (Inventor); Jones, Todd J. (Inventor)

    2015-01-01

    High-quality surface coatings, and techniques combining the atomic precision of molecular beam epitaxy and atomic layer deposition, to fabricate such high-quality surface coatings are provided. The coatings made in accordance with the techniques set forth by the invention are shown to be capable of forming silicon CCD detectors that demonstrate world record detector quantum efficiency (>50%) in the near and far ultraviolet (155 nm-300 nm). The surface engineering approaches used demonstrate the robustness of detector performance that is obtained by achieving atomic level precision at all steps in the coating fabrication process. As proof of concept, the characterization, materials, and exemplary devices produced are presented along with a comparison to other approaches.

  1. Employing wavelet-based texture features in ammunition classification

    NASA Astrophysics Data System (ADS)

    Borzino, Ángelo M. C. R.; Maher, Robert C.; Apolinário, José A.; de Campos, Marcello L. R.

    2017-05-01

    Pattern recognition, a branch of machine learning, involves classification of information in images, sounds, and other digital representations. This paper uses pattern recognition to identify which kind of ammunition was used when a bullet was fired, based on a carefully constructed set of gunshot sound recordings. To this end, we show that texture features obtained from the wavelet transform of a component of the gunshot signal, treated as an image and quantized in gray levels, are good ammunition discriminators. We test the technique with eight different calibers and achieve a classification rate better than 95%. We also compare the performance of the proposed method with results obtained by standard temporal and spectrographic techniques.
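
    A rough sketch of the pipeline described above, under stated assumptions, is shown below: a gunshot recording is turned into a gray-level time-frequency image, a 2-D wavelet transform provides sub-band texture (energy) features, and a standard classifier is trained on them. The spectrogram front end, the db2 wavelet, the energy features, and the SVM are illustrative choices, not necessarily those of the paper.

    ```python
    # Wavelet-based texture features from a gray-level image of the signal,
    # followed by a standard classifier (all inputs synthetic placeholders).
    import numpy as np
    import pywt
    from scipy.signal import spectrogram
    from sklearn.svm import SVC

    def wavelet_texture_features(signal, fs=48000, levels=3):
        _, _, S = spectrogram(signal, fs=fs, nperseg=256)
        img = np.floor(15 * (S - S.min()) / (np.ptp(S) + 1e-12))   # quantize to ~16 gray levels
        coeffs = pywt.wavedec2(img, "db2", level=levels)
        feats = [np.mean(np.abs(coeffs[0]))]                       # approximation energy
        for cH, cV, cD in coeffs[1:]:                              # detail sub-band energies
            feats += [np.mean(np.abs(cH)), np.mean(np.abs(cV)), np.mean(np.abs(cD))]
        return np.array(feats)

    # Example with synthetic recordings (replace with real gunshot signals):
    rng = np.random.default_rng(4)
    X = np.array([wavelet_texture_features(rng.normal(size=8192)) for _ in range(40)])
    y = rng.integers(0, 8, size=40)             # eight calibers (toy labels)
    clf = SVC(kernel="rbf").fit(X, y)
    ```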

  2. Parallel gene analysis with allele-specific padlock probes and tag microarrays

    PubMed Central

    Banér, Johan; Isaksson, Anders; Waldenström, Erik; Jarvius, Jonas; Landegren, Ulf; Nilsson, Mats

    2003-01-01

    Parallel, highly specific analysis methods are required to take advantage of the extensive information about DNA sequence variation and of expressed sequences. We present a scalable laboratory technique suitable to analyze numerous target sequences in multiplexed assays. Sets of padlock probes were applied to analyze single nucleotide variation directly in total genomic DNA or cDNA for parallel genotyping or gene expression analysis. All reacted probes were then co-amplified and identified by hybridization to a standard tag oligonucleotide array. The technique was illustrated by analyzing normal and pathogenic variation within the Wilson disease-related ATP7B gene, both at the level of DNA and RNA, using allele-specific padlock probes. PMID:12930977

  3. Spectral mapping of soil organic matter

    NASA Technical Reports Server (NTRS)

    Kristof, S. J.; Baumgardner, M. F.; Johannsen, C. J.

    1974-01-01

    Multispectral remote sensing data were examined for use in the mapping of soil organic matter content. Computer-implemented pattern recognition techniques were used to analyze data collected in May 1969 and May 1970 by an airborne multispectral scanner over a 40-km flightline. Two fields within the flightline were selected for intensive study. Approximately 400 surface soil samples from these fields were obtained for organic matter analysis. The analytical data were used as training sets for computer-implemented analysis of the spectral data. It was found that within the geographical limitations included in this study, multispectral data and automatic data processing techniques could be used very effectively to delineate and map surface soils areas containing different levels of soil organic matter.

  4. [Present-day metal-cutting tools and working conditions].

    PubMed

    Kondratiuk, V P

    1990-01-01

    Polyfunctional machine tools of the machining-centre type offer a number of hygienic advantages compared with universal machine tools. However, the low degree of mechanization and automation of some auxiliary processes, together with design defects that degrade the ergonomic characteristics of the tools, make multi-machine operation labour-intensive. The article specifies techniques for assessing allowable noise levels and proposes hygienic recommendations, some of which have been introduced into practice.

  5. Quantifying palpation techniques in relation to performance in a clinical prostate exam.

    PubMed

    Wang, Ninghuan; Gerling, Gregory J; Childress, Reba Moyer; Martin, Marcus L

    2010-07-01

    This paper seeks to quantify finger palpation techniques in the prostate clinical exam, determine their relationship with performance in detecting abnormalities, and differentiate the tendencies of nurse practitioner students and resident physicians. One issue with the digital rectal examination (DRE) is that performance in detecting abnormalities varies greatly and agreement between examiners is low. The utilization of particular palpation techniques may be one way to improve clinician ability. Based on past qualitative instruction, this paper algorithmically defines a set of palpation techniques for the DRE, i.e., global finger movement (GFM), local finger movement (LFM), and average intentional finger pressure, and utilizes a custom-built simulator to analyze finger movements in an experiment with two groups: 18 nurse practitioner students and 16 resident physicians. Although technique utilization varied, some elements clearly impacted performance. For example, those utilizing the LFM of vibration were significantly better at detecting abnormalities. Also, the V GFM led to greater success, but finger pressure played a lesser role. Interestingly, while the residents were clearly the superior performers, their techniques differed only subtly from the students. In summary, the quantified palpation techniques appear to account for examination ability at some level, but not entirely for differences between groups.

  6. A Reference Model for Software and System Inspections. White Paper

    NASA Technical Reports Server (NTRS)

    He, Lulu; Shull, Forrest

    2009-01-01

    Software Quality Assurance (SQA) is an important component of the software development process. SQA processes provide assurance that the software products and processes in the project life cycle conform to their specified requirements by planning, enacting, and performing a set of activities to provide adequate confidence that quality is being built into the software. Typical techniques include: (1) Testing (2) Simulation (3) Model checking (4) Symbolic execution (5) Management reviews (6) Technical reviews (7) Inspections (8) Walk-throughs (9) Audits (10) Analysis (complexity analysis, control flow analysis, algorithmic analysis) (11) Formal methods. Our work over the last few years has resulted in substantial knowledge about SQA techniques, especially in the areas of technical reviews and inspections. But can we apply the same QA techniques to the system development process? If yes, what kind of tailoring do we need before applying them in the system engineering context? If not, what types of QA techniques are actually used at the system level? And is there any room for improvement? After a brief examination of the system engineering literature (especially focused on NASA and DoD guidance) we found that: (1) system and software development processes interact with each other at different phases throughout the development life cycle; (2) reviews are emphasized in both system and software development, and for some reviews (e.g. SRR, PDR, CDR) there are both system and software versions; (3) analysis techniques are emphasized (e.g. Fault Tree Analysis, Preliminary Hazard Analysis) and some details are given about how to apply them; (4) reviews are expected to use the outputs of the analysis techniques; in other words, these particular analyses are usually conducted in preparation for (before) reviews. The goal of our work is to explore the interaction between the Quality Assurance (QA) techniques at the system level and the software level.

  7. Tools for the functional interpretation of metabolomic experiments.

    PubMed

    Chagoyen, Monica; Pazos, Florencio

    2013-11-01

    The so-called 'omics' approaches used in modern biology aim at massively characterizing the molecular repertoires of living systems at different levels. Metabolomics is one of the latest additions to the 'omics' family and it deals with the characterization of the set of metabolites in a given biological system. As metabolomic techniques become more massive and allow larger sets of metabolites to be characterized, automatic methods for analyzing these sets in order to obtain meaningful biological information are required. Only recently have the first tools specifically designed for this task in metabolomics appeared. They are based on approaches previously used in transcriptomics and other 'omics', such as annotation enrichment analysis. These, together with generic tools for metabolic analysis and visualization not specifically designed for metabolomics, will surely be in the toolbox of the researchers doing metabolomic experiments in the near future.
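
    Annotation enrichment analysis, mentioned above as the approach borrowed from transcriptomics, reduces to a hypergeometric (over-representation) test per metabolite set. The sketch below uses toy pathway memberships; real analyses would draw annotations from a pathway database.

    ```python
    # Over-representation test for a metabolite set using the hypergeometric
    # distribution (toy pathway memberships, not a real database).
    from scipy.stats import hypergeom

    def pathway_enrichment(hits, pathway, background):
        """P-value that `pathway` is over-represented among the `hits` metabolites."""
        M = len(background)                         # all measurable metabolites
        n = len(pathway & background)               # metabolites annotated to the pathway
        N = len(hits)                               # metabolites of interest (e.g. significant)
        k = len(hits & pathway)                     # overlap
        return hypergeom.sf(k - 1, M, n, N)         # P(X >= k)

    background = {f"m{i}" for i in range(200)}
    pathway = {f"m{i}" for i in range(0, 30)}       # toy pathway of 30 metabolites
    hits = {f"m{i}" for i in range(0, 15)} | {"m150", "m151"}
    print("enrichment p-value:", pathway_enrichment(hits, pathway, background))
    ```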

  8. Performance measures for lower gastrointestinal endoscopy: a European Society of Gastrointestinal Endoscopy (ESGE) quality improvement initiative

    PubMed Central

    Thomas-Gibson, Siwan; Bugajski, Marek; Bretthauer, Michael; Rees, Colin J; Dekker, Evelien; Hoff, Geir; Jover, Rodrigo; Suchanek, Stepan; Ferlitsch, Monika; Anderson, John; Roesch, Thomas; Hultcranz, Rolf; Racz, Istvan; Kuipers, Ernst J; Garborg, Kjetil; East, James E; Rupinski, Maciej; Seip, Birgitte; Bennett, Cathy; Senore, Carlo; Minozzi, Silvia; Bisschops, Raf; Domagk, Dirk; Valori, Roland; Spada, Cristiano; Hassan, Cesare; Dinis-Ribeiro, Mario; Rutter, Matthew D

    2017-01-01

    The European Society of Gastrointestinal Endoscopy and United European Gastroenterology present a short list of key performance measures for lower gastrointestinal endoscopy. We recommend that endoscopy services across Europe adopt the following seven key performance measures for lower gastrointestinal endoscopy for measurement and evaluation in daily practice at a center and endoscopist level: 1 rate of adequate bowel preparation (minimum standard 90%); 2 cecal intubation rate (minimum standard 90%); 3 adenoma detection rate (minimum standard 25%); 4 appropriate polypectomy technique (minimum standard 80%); 5 complication rate (minimum standard not set); 6 patient experience (minimum standard not set); 7 appropriate post-polypectomy surveillance recommendations (minimum standard not set). Other identified performance measures have been listed as less relevant based on an assessment of their importance, scientific acceptability, feasibility, usability, and comparison to competing measures. PMID:28507745

  9. The Relationship between Structure-Related Food Parenting Practices and Children's Heightened Levels of Self-Regulation in Eating.

    PubMed

    Frankel, Leslie A; Powell, Elisabeth; Jansen, Elena

    Food parenting practices influence children's eating behaviors and weight status. Food parenting practices also influence children's self-regulatory abilities around eating, which has important implications for children's eating behaviors. The purpose of the following study is to examine use of structure-related food parenting practices and the potential impact on children's ability to self-regulate energy intake. Parents (n = 379) of preschool age children (M = 4.10 years, SD = 0.92) were mostly mothers (68.6%), Non-White (54.5%), and overweight/obese (50.1%). Hierarchical Multiple Regression was conducted to predict child self-regulation in eating from structure-related food parenting practices (structured meal setting, structured meal timing, family meal setting), while accounting for child weight status, parent age, gender, BMI, race, and yearly income. Hierarchical Multiple Regression results indicated that structure-related feeding practices (structured meal setting and family meal setting, but not structured meal timing) are associated with children's heightened levels of self-regulation in eating. Models examining the relationship within children who were normal weight and overweight/obese indicated the following: a relationship between structured meal setting and heightened self-regulation in eating for normal-weight children and a relationship between family meal setting and heightened self-regulation in eating for overweight/obese children. Researchers should further investigate these potentially modifiable parent feeding behaviors as a protective parenting technique, which possibly contributes to a healthy weight development by enhancing self-regulation in eating.

  10. Two imaging techniques for 3D quantification of pre-cementation space for CAD/CAM crowns.

    PubMed

    Rungruanganunt, Patchanee; Kelly, J Robert; Adams, Douglas J

    2010-12-01

    Internal three-dimensional (3D) "fit" of prostheses to prepared teeth is likely more important clinically than "fit" judged only at the level of the margin (i.e. marginal "opening"). This work evaluates two techniques for quantitatively defining 3D "fit", both using pre-cementation space impressions: X-ray microcomputed tomography (micro-CT) and quantitative optical analysis. Both techniques are of interest for comparison of CAD/CAM system capabilities and for documenting "fit" as part of clinical studies. Pre-cementation space impressions were taken of a single zirconia coping on its die using a low viscosity poly(vinyl siloxane) impression material. Calibration specimens of this material were fabricated between the measuring platens of a micrometre. Both calibration curves and pre-cementation space impression data sets were obtained by examination using micro-CT and quantitative optical analysis. Regression analysis was used to compare calibration curves with calibration sets. Micro-CT calibration data showed tighter 95% confidence intervals and was able to measure over a wider thickness range than for the optical technique. Regions of interest (e.g., lingual, cervical) were more easily analysed with optical image analysis and this technique was more suitable for extremely thin impression walls (<10-15μm). Specimen preparation is easier for micro-CT and segmentation parameters appeared to capture dimensions accurately. Both micro-CT and the optical method can be used to quantify the thickness of pre-cementation space impressions. Each has advantages and limitations but either technique has the potential for use as part of clinical studies or CAD/CAM protocol optimization. Copyright © 2010 Elsevier Ltd. All rights reserved.

  11. Advanced Background Subtraction Applied to Aeroacoustic Wind Tunnel Testing

    NASA Technical Reports Server (NTRS)

    Bahr, Christopher J.; Horne, William C.

    2015-01-01

    An advanced form of background subtraction is presented and applied to aeroacoustic wind tunnel data. A variant of this method has seen use in other fields such as climatology and medical imaging. The technique, based on an eigenvalue decomposition of the background noise cross-spectral matrix, is robust against situations where isolated background auto-spectral levels are measured to be higher than levels of combined source and background signals. It also provides an alternate estimate of the cross-spectrum, which previously might have poor definition for low signal-to-noise ratio measurements. Simulated results indicate similar performance to conventional background subtraction when the subtracted spectra are weaker than the true contaminating background levels. Superior performance is observed when the subtracted spectra are stronger than the true contaminating background levels. Experimental results show limited success in recovering signal behavior for data where conventional background subtraction fails. They also demonstrate the new subtraction technique's ability to maintain a proper coherence relationship in the modified cross-spectral matrix. Beam-forming and de-convolution results indicate the method can successfully separate sources. Results also show a reduced need for the use of diagonal removal in phased array processing, at least for the limited data sets considered.
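
    A simplified sketch of the general idea, eigenvalue-constrained subtraction of a background cross-spectral matrix (CSM), is given below. It is not the exact published algorithm: negative eigenvalues of the subtracted CSM (which arise when isolated background levels exceed the combined measurement) are simply clipped to zero, and the array geometry and noise levels are synthetic assumptions.

    ```python
    # Subtract a background CSM and enforce a positive semidefinite result
    # via a Hermitian eigendecomposition (illustrative simplification).
    import numpy as np

    def subtract_background(csm_total, csm_background):
        diff = csm_total - csm_background
        w, V = np.linalg.eigh(diff)                 # Hermitian eigendecomposition
        w_clipped = np.clip(w, 0.0, None)           # clip unphysical negative eigenvalues
        return (V * w_clipped) @ V.conj().T

    # Toy example: 8-microphone array, single frequency bin
    rng = np.random.default_rng(5)
    steering = rng.normal(size=8) + 1j * rng.normal(size=8)
    csm_signal = np.outer(steering, steering.conj())
    csm_background = np.diag(rng.uniform(0.5, 1.5, 8)).astype(complex)
    csm_total = csm_signal + csm_background
    csm_clean = subtract_background(csm_total, csm_background)
    ```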

  12. Toward particle-level filtering of individual collision events at the Large Hadron Collider and beyond

    NASA Astrophysics Data System (ADS)

    Colecchia, Federico

    2014-03-01

    Low-energy strong interactions are a major source of background at hadron colliders, and methods of subtracting the associated energy flow are well established in the field. Traditional approaches treat the contamination as diffuse, and estimate background energy levels either by averaging over large data sets or by restricting to given kinematic regions inside individual collision events. On the other hand, more recent techniques take into account the discrete nature of background, most notably by exploiting the presence of substructure inside hard jets, i.e. inside collections of particles originating from scattered hard quarks and gluons. However, none of the existing methods subtract background at the level of individual particles inside events. We illustrate the use of an algorithm that will allow particle-by-particle background discrimination at the Large Hadron Collider, and we envisage this as the basis for a novel event filtering procedure upstream of the official reconstruction chains. Our hope is that this new technique will improve physics analysis when used in combination with state-of-the-art algorithms in high-luminosity hadron collider environments.

  13. Development of at-wavelength metrology for x-ray optics at the ALS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yashchuk, Valeriy V.; Goldberg, Kenneth A.; Yuan, Sheng

    2010-07-09

    The comprehensive realization of the exciting advantages of new third- and fourth-generation synchrotron radiation light sources requires concomitant development of reflecting and diffractive x-ray optics capable of micro- and nano-focusing, brightness preservation, and super high resolution. The fabrication, tuning, and alignment of the optics are impossible without adequate metrology instrumentation, methods, and techniques. While the accuracy of ex situ optical metrology at the Advanced Light Source (ALS) has reached a state-of-the-art level, wavefront control on beamlines is often limited by environmental and systematic alignment factors, and inadequate in situ feedback. At ALS beamline 5.3.1, we are developing broadly applicable, high-accuracy, in situ, at-wavelength wavefront measurement techniques to surpass 100-nrad slope measurement accuracy for Kirkpatrick-Baez (KB) mirrors. The at-wavelength methodology we are developing relies on a series of tests with increasing accuracy and sensitivity. Geometric Hartmann tests, performed with a scanning illuminated sub-aperture, determine the wavefront slope across the full mirror aperture. Shearing interferometry techniques use coherent illumination and provide higher sensitivity wavefront measurements. Combining these techniques with high precision optical metrology and experimental methods will enable us to provide in situ setting and alignment of bendable x-ray optics to realize diffraction-limited, sub 50 nm focusing at beamlines. We describe here details of the metrology beamline endstation, the x-ray beam diagnostic system, and original experimental techniques that have already allowed us to precisely set a bendable KB mirror to achieve a focused spot size of 150 nm.

  14. Repeatability and Accuracy of Exoplanet Eclipse Depths Measured with Post-cryogenic Spitzer

    NASA Astrophysics Data System (ADS)

    Ingalls, James G.; Krick, J. E.; Carey, S. J.; Stauffer, John R.; Lowrance, Patrick J.; Grillmair, Carl J.; Buzasi, Derek; Deming, Drake; Diamond-Lowe, Hannah; Evans, Thomas M.; Morello, G.; Stevenson, Kevin B.; Wong, Ian; Capak, Peter; Glaccum, William; Laine, Seppo; Surace, Jason; Storrie-Lombardi, Lisa

    2016-08-01

    We examine the repeatability, reliability, and accuracy of differential exoplanet eclipse depth measurements made using the InfraRed Array Camera (IRAC) on the Spitzer Space Telescope during the post-cryogenic mission. We have re-analyzed an existing 4.5 μm data set, consisting of 10 observations of the XO-3b system during secondary eclipse, using seven different techniques for removing correlated noise. We find that, on average, for a given technique, the eclipse depth estimate is repeatable from epoch to epoch to within 156 parts per million (ppm). Most techniques derive eclipse depths that do not vary by more than a factor 3 of the photon noise limit. All methods but one accurately assess their own errors: for these methods, the individual measurement uncertainties are comparable to the scatter in eclipse depths over the 10 epoch sample. To assess the accuracy of the techniques as well as to clarify the difference between instrumental and other sources of measurement error, we have also analyzed a simulated data set of 10 visits to XO-3b, for which the eclipse depth is known. We find that three of the methods (BLISS mapping, Pixel Level Decorrelation, and Independent Component Analysis) obtain results that are within three times the photon limit of the true eclipse depth. When averaged over the 10 epoch ensemble, 5 out of 7 techniques come within 60 ppm of the true value. Spitzer exoplanet data, if obtained following current best practices and reduced using methods such as those described here, can measure repeatable and accurate single eclipse depths, with close to photon-limited results.

  15. Modeling the hyperpolarizability dispersion with the Thomas-Kuhn sum rules

    NASA Astrophysics Data System (ADS)

    De Mey, Kurt; Perez-Moreno, Javier; Clays, Koen

    2011-10-01

    The continued interest in molecules that possess large quadratic nonlinear optical (NLO) properties has motivated considerable interplay between molecular synthesis and theory. The screening of viable candidates for NLO applications has been tedious work, much helped by the advent of the hyper-Rayleigh scattering (HRS) technique. The downside of this technique is the low efficiency, which usually means that measurements have to be performed at wavelengths that are close to the molecular resonances, in the visible area. This means generally that one has to extrapolate the results from HRS characterization to the longer wavelengths that are useful for applications. Such extrapolation is far from trivial and the classic 2-level model can only be used for the most straightforward single charge-transfer chromophores. An alternative is the TKS-SOS technique, which uses a few input-hyperpolarizabilities and UV-Vis absorption data to calculate the entire hyperpolarizability spectrum. We have applied this TKS-SOS technique on a set of porphyrins to calculate the hyperpolarizability dispersion. We have also built a tunable HRS setup, capable of determining hyperpolarizabilities in the near infrared (up to 1600 nm). This has allowed us to directly confirm the results predicted in the application region. Due to the very sharp transitions in the hyperpolarizability dispersion, the calculation requires a very precise calibration with respect to the input-hyperpolarizabilities, resulting in very accurate predictions for long wavelength hyperpolarizabilities. Our results not only underscore the aforementioned technique, but also confirm the use of porphyrins as powerful moieties in NLO applications.

  16. Data Mining Techniques Applied to Hydrogen Lactose Breath Test.

    PubMed

    Rubio-Escudero, Cristina; Valverde-Fernández, Justo; Nepomuceno-Chamorro, Isabel; Pontes-Balanza, Beatriz; Hernández-Mendoza, Yoedusvany; Rodríguez-Herrera, Alfonso

    2017-01-01

    The aims were to analyze a set of hydrogen breath test data using data mining tools and to identify new patterns of H2 production. K-means clustering was applied as the data mining technique to hydrogen breath test data from 2571 patients. Six different patterns were extracted upon analysis of the hydrogen breath test data, and the relevance of each of the samples taken throughout the test was also shown. Analysis of the hydrogen breath test data sets using data mining techniques identified new patterns of hydrogen generation upon lactose absorption, illustrating the potential of applying data mining techniques to clinical data sets. These results offer promising data for future research on the relation between gut microbiota-produced hydrogen and clinical symptoms.
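
    The clustering step can be reproduced in outline as follows; the six-cluster setting follows the abstract, while the synthetic breath-test curves and their dimensions are placeholders for the real 2571-patient data set.

    ```python
    # Group hydrogen breath-test curves with k-means (synthetic sample curves).
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(6)
    # rows: patients, columns: H2 (ppm) at successive sampling times during the test
    curves = rng.gamma(shape=2.0, scale=10.0, size=(300, 8))

    km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(curves)
    labels = km.labels_                        # pattern assigned to each patient
    centroids = km.cluster_centers_            # the six mean H2-production patterns
    print(np.bincount(labels))                 # patients per pattern
    ```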

  17. Study of the optimum level of electrode placement for the evaluation of absolute lung resistivity with the Mk3.5 EIT system.

    PubMed

    Nebuya, S; Noshiro, M; Yonemoto, A; Tateno, S; Brown, B H; Smallwood, R H; Milnes, P

    2006-05-01

    Inter-subject variability has caused the majority of previous electrical impedance tomography (EIT) techniques to focus on the derivation of relative or difference measures of in vivo tissue resistivity. Implicit in these techniques is the requirement for a reference or previously defined data set. This study assesses the accuracy and optimum electrode placement strategy for a recently developed method which estimates an absolute value of organ resistivity without recourse to a reference data set. Since this measurement of tissue resistivity is absolute, in Ohm metres, it should be possible to use EIT measurements for the objective diagnosis of lung diseases such as pulmonary oedema and emphysema. However, the stability and reproducibility of the method have not yet been investigated fully. To investigate these problems, this study used a Sheffield Mk3.5 system which was configured to operate with eight measurement electrodes. As a result of this study, the absolute resistivity measurement was found to be insensitive to the electrode level between 4 and 5 cm above the xiphoid process. The level of the electrode plane was varied between 2 cm and 7 cm above the xiphoid process. Absolute lung resistivity in 18 normal subjects (age 22.6 +/- 4.9, height 169.1 +/- 5.7 cm, weight 60.6 +/- 4.5 kg, body mass index 21.2 +/- 1.6: mean +/- standard deviation) was measured during both normal and deep breathing for 1 min. Three sets of measurements were made over a period of several days on each of nine of the normal male subjects. No significant differences in absolute lung resistivity were found, either during normal tidal breathing between the electrode levels of 4 and 5 cm (9.3 +/- 2.4 Omega m, 9.6 +/- 1.9 Omega m at 4 and 5 cm, respectively: mean +/- standard deviation) or during deep breathing between the electrode levels of 4 and 5 cm (10.9 +/- 2.9 Omega m and 11.1 +/- 2.3 Omega m, respectively: mean +/- standard deviation). However, the differences in absolute lung resistivity between normal and deep tidal breathing at the same electrode level are significant. No significant difference was found in the coefficient of variation between the electrode levels of 4 and 5 cm (9.5 +/- 3.6%, 8.5 +/- 3.2% at 4 and 5 cm, respectively: mean +/- standard deviation in individual subjects). Therefore, the electrode levels of 4 and 5 cm above the xiphoid process showed reasonable reliability in the measurement of absolute lung resistivity both among individuals and over time.

  18. Which behaviour change techniques are most effective at increasing older adults' self-efficacy and physical activity behaviour? A systematic review.

    PubMed

    French, David P; Olander, Ellinor K; Chisholm, Anna; Mc Sharry, Jennifer

    2014-10-01

    Increasing self-efficacy is an effective mechanism for increasing physical activity, especially for older people. The aim of this review was to identify behaviour change techniques (BCTs) that increase self-efficacy and physical activity behaviour in non-clinical community-dwelling adults 60 years or over. A systematic search identified 24 eligible studies reporting change in self-efficacy for physical activity following an intervention. Moderator analyses examined whether the inclusion of specific BCTs (as defined by CALO-RE taxonomy) was associated with changes in self-efficacy and physical activity behaviour. Overall, interventions increased self-efficacy (d = 0.37) and physical activity (d = 0.14). Self-regulatory techniques such as setting behavioural goals, prompting self-monitoring of behaviour, planning for relapses, providing normative information and providing feedback on performance were associated with lower levels of both self-efficacy and physical activity. Many commonly used self-regulation intervention techniques that are effective for younger adults may not be effective for older adults.

  19. Shape Optimization by Bayesian-Validated Computer-Simulation Surrogates

    NASA Technical Reports Server (NTRS)

    Patera, Anthony T.

    1997-01-01

    A nonparametric-validated, surrogate approach to optimization has been applied to the computational optimization of eddy-promoter heat exchangers and to the experimental optimization of a multielement airfoil. In addition to the baseline surrogate framework, a surrogate-Pareto framework has been applied to the two-criteria, eddy-promoter design problem. The Pareto analysis improves the predictability of the surrogate results, preserves generality, and provides a means to rapidly determine design trade-offs. Significant contributions have been made in the geometric description used for the eddy-promoter inclusions as well as to the surrogate framework itself. A level-set-based geometric description has been developed to define the shape of the eddy-promoter inclusions. The level-set technique allows for topology changes (from single-body eddy-promoter configurations to two-body configurations) without requiring any additional logic. The continuity of the output responses for input variations that cross the boundary between topologies has been demonstrated. Input-output continuity is required for the straightforward application of surrogate techniques in which simplified, interpolative models are fitted through a construction set of data. The surrogate framework developed previously has been extended in a number of ways. First, the formulation for a general, two-output, two-performance metric problem is presented. Surrogates are constructed and validated for the outputs. The performance metrics can be functions of both outputs, as well as explicitly of the inputs, and serve to characterize the design preferences. By segregating the outputs and the performance metrics, an additional level of flexibility is provided to the designer. The validated outputs can be used in future design studies and the error estimates provided by the output validation step still apply, and require no additional appeals to the expensive analysis. Second, a candidate-based a posteriori error analysis capability has been developed which provides probabilistic error estimates on the true performance for a design randomly selected near the surrogate-predicted optimal design.
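
    The topology-change property of the level-set description can be illustrated with a minimal sketch: inclusions are defined as the region where a level-set function is non-positive, and the union of bodies is the pointwise minimum of signed distances, so merging two bodies into one requires no special-case logic. The circular shapes and parameters below are illustrative assumptions, not the eddy-promoter geometry of the study.

    ```python
    # Level-set shape description allowing topology changes without extra logic.
    import numpy as np

    x, y = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))

    def phi_two_circles(cx1, cx2, r=0.12, cy=0.5):
        d1 = np.hypot(x - cx1, y - cy) - r          # signed distance to circle 1
        d2 = np.hypot(x - cx2, y - cy) - r          # signed distance to circle 2
        return np.minimum(d1, d2)                   # union of the two bodies

    # As the centers approach, the zero level set changes topology from two bodies
    # to one, while outputs such as inclusion area vary continuously.
    for gap in (0.30, 0.20, 0.10):
        phi = phi_two_circles(0.5 - gap / 2, 0.5 + gap / 2)
        area = np.mean(phi <= 0)                    # fraction of the domain inside inclusion(s)
        print(f"gap = {gap:.2f}: inclusion area fraction = {area:.3f}")
    ```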

  20. An unsupervised classification technique for multispectral remote sensing data.

    NASA Technical Reports Server (NTRS)

    Su, M. Y.; Cummings, R. E.

    1973-01-01

    Description of a two-part clustering technique consisting of (a) a sequential statistical clustering, which is essentially a sequential variance analysis, and (b) a generalized K-means clustering. In this composite clustering technique, the output of (a) is a set of initial clusters which are input to (b) for further improvement by an iterative scheme. This unsupervised composite technique was employed for automatic classification of two sets of remote multispectral earth resource observations. The classification accuracy by the unsupervised technique is found to be comparable to that by traditional supervised maximum-likelihood classification techniques.

  1. Comparing Pattern Recognition Feature Sets for Sorting Triples in the FIRST Database

    NASA Astrophysics Data System (ADS)

    Proctor, D. D.

    2006-07-01

    Pattern recognition techniques have been used with increasing success for coping with the tremendous amounts of data being generated by automated surveys. Usually this process involves construction of training sets, the typical examples of data with known classifications. Given a feature set, along with the training set, statistical methods can be employed to generate a classifier. The classifier is then applied to process the remaining data. Feature set selection, however, is still an issue. This paper presents techniques developed for accommodating data for which a substantive portion of the training set cannot be classified unambiguously, a typical case for low-resolution data. Significance tests on the sort-ordered, sample-size-normalized vote distribution of an ensemble of decision trees are introduced as a method of evaluating the relative quality of feature sets. The technique is applied to comparing feature sets for sorting a particular radio galaxy morphology, bent-doubles, from the Faint Images of the Radio Sky at Twenty Centimeters (FIRST) database. Also examined are alternative functional forms for feature sets. Associated standard deviations provide the means to evaluate the effect of the number of folds, the number of classifiers per fold, and the sample size on the resulting classifications. The technique may also be applied to situations in which, although accurate classifications are available, the feature set is clearly inadequate, but it is nonetheless desirable to make the best of the available information.
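
    A hedged sketch of the comparison idea follows: for each candidate feature set, an ensemble of decision trees is trained, the sorted out-of-bag vote fractions form the vote distribution, and a two-sample significance test compares distributions between feature sets. The random forest, the KS test, and the synthetic labels are stand-ins chosen for illustration, not the exact ensemble or test used in the paper.

    ```python
    # Compare two candidate feature sets via the sorted vote distributions of a
    # decision-tree ensemble and a two-sample KS test (synthetic data).
    import numpy as np
    from scipy.stats import ks_2samp
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(7)
    y = rng.integers(0, 2, size=600)                        # bent-double vs. not (toy labels)
    features_a = rng.normal(size=(600, 6)) + y[:, None]     # informative feature set
    features_b = rng.normal(size=(600, 6))                  # uninformative feature set

    def sorted_votes(X, y):
        clf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
        clf.fit(X, y)
        # out-of-bag vote fraction for the positive class, sorted
        return np.sort(clf.oob_decision_function_[:, 1])

    votes_a, votes_b = sorted_votes(features_a, y), sorted_votes(features_b, y)
    stat, p = ks_2samp(votes_a, votes_b)
    print(f"KS statistic = {stat:.3f}, p = {p:.3g}")
    ```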

  2. Capsule Performance Optimization for the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Landen, Otto

    2009-11-01

    The overall goal of the capsule performance optimization campaign is to maximize the probability of ignition by experimentally correcting for likely residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models before proceeding to cryogenic-layered implosions and ignition attempts. This will be accomplished using a variety of targets that will set key laser, hohlraum and capsule parameters to maximize ignition capsule implosion velocity, while minimizing fuel adiabat, core shape asymmetry and ablator-fuel mix. The targets include high Z re-emission spheres setting foot symmetry through foot cone power balance [1], liquid Deuterium-filled "keyhole" targets setting shock speed and timing through the laser power profile [2], symmetry capsules setting peak cone power balance and hohlraum length [3], and streaked x-ray backlit imploding capsules setting ablator thickness [4]. We will show how results from successful tuning technique demonstration shots performed at the Omega facility under scaled hohlraum and capsule conditions relevant to the ignition design meet the required sensitivity and accuracy. We will also present estimates of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors, and show that these get reduced after a number of shots and iterations to meet an acceptable level of residual uncertainty. Finally, we will present results from upcoming tuning technique validation shots performed at NIF at near full-scale. Prepared by LLNL under Contract DE-AC52-07NA27344. [1] E. Dewald, et al., Rev. Sci. Instrum. 79 (2008) 10E903. [2] T.R. Boehly, et al., Phys. Plasmas 16 (2009) 056302. [3] G. Kyrala, et al., BAPS 53 (2008) 247. [4] D. Hicks, et al., BAPS 53 (2008) 2.

  3. Image splitting and remapping method for radiological image compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.
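
    As a crude, hedged stand-in for the transform-domain coding described above, the sketch below applies a full-frame 2-D DCT, keeps only the largest coefficients (in place of the paper's bit-allocation scheme), reconstructs the image, and reports the mean-square error used as a study parameter. The synthetic image and the 5% retention threshold are illustrative assumptions.

    ```python
    # Full-frame DCT with coefficient thresholding as a rough proxy for
    # transform-domain compression, plus the mean-square-error metric.
    import numpy as np
    from scipy.fft import dctn, idctn

    rng = np.random.default_rng(9)
    image = rng.integers(0, 4096, size=(256, 256)).astype(float)   # 12-bit-like test image

    coeffs = dctn(image, norm="ortho")
    keep = 0.05                                                     # keep the top 5% of coefficients
    threshold = np.quantile(np.abs(coeffs), 1.0 - keep)
    compressed = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)

    reconstructed = idctn(compressed, norm="ortho")
    mse = np.mean((image - reconstructed) ** 2)
    print(f"nonzero coefficients kept: {np.count_nonzero(compressed)}, MSE = {mse:.1f}")
    ```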

  4. ERT, GPR, InSAR, and tracer tests to characterize karst aquifer systems under urban areas: The case of Quebec City

    NASA Astrophysics Data System (ADS)

    Martel, Richard; Castellazzi, Pascal; Gloaguen, Erwan; Trépanier, Luc; Garfias, Jaime

    2018-06-01

    Urban infrastructures built over karst settings may be at risk of collapse due to hydro-chemical erosion of underlying rock structures. In such settings, mapping cave networks and monitoring ground stability is important to assure civil safety and guide future infrastructure development decisions. However, no technique can directly and comprehensively map these hydrogeological features and monitor their stability. The most reliable method to map a cave network is through speleological exploration, which is not always possible due to restrictions, narrow corridors/passages, or high water levels. Borehole drilling is expensive and is often only performed where the presence of karsts is suggested by other techniques. Numerous indirect and cost-effective methods exist to map a karst flow system, such as geophysics, geodesy, and tracer tests. This paper presents the outcomes from a challenging application in Quebec City, Canada, where a multidisciplinary approach was designed to better understand the groundwater dynamics and cave paths. Two tracer tests in groundwater flowing through the cave system indicated that water flows along an approximately straight path from the sinking stream to the spring. It also suggests the presence of a parallel flow path close to the one already partially mapped. This observation was confirmed by combining Ground Penetrating Radar (GPR) and Electrical Resistivity Tomography (ERT) techniques, and ultimately by observing voids in several boreholes drilled close to the main cave path. Lowering the water levels at the suspected infiltration zone and inside the karst, the infiltration cracks were identified and the hydraulic link between them was confirmed. In fact, almost no infiltration occurs into the karst system when the water level at the sinking stream drops below a threshold level. Finally, SAR interferometry (InSAR) using RADARSAT-2 images detected movements on few buildings located over a backfilled sinkhole intercepted by the karst system and confirmed the stability of the rest of the karst area. The knowledge of the flow system described in this paper is used by policy makers to assure civil security of this densely populated area.

  5. Distributed Generation Planning using Peer Enhanced Multi-objective Teaching-Learning based Optimization in Distribution Networks

    NASA Astrophysics Data System (ADS)

    Selvam, Kayalvizhi; Vinod Kumar, D. M.; Siripuram, Ramakanth

    2017-04-01

    In this paper, an optimization technique called the peer enhanced teaching-learning based optimization (PeTLBO) algorithm is used in the multi-objective problem domain. The PeTLBO algorithm is parameter-less, which reduces the computational burden. The proposed peer enhanced multi-objective TLBO (PeMOTLBO) algorithm has been utilized to find a set of non-dominated optimal solutions [distributed generation (DG) location and sizing in a distribution network]. The objectives considered are real power loss and voltage deviation, subject to voltage limits and the maximum penetration level of DG in the distribution network. Since the DG considered is capable of injecting real and reactive power into the distribution network, the power factor is taken as 0.85 leading. The proposed peer enhanced multi-objective optimization technique provides different trade-off solutions; to find the best compromise solution, a fuzzy set theory approach has been used. The effectiveness of the proposed PeMOTLBO is tested on the IEEE 33-bus and Indian 85-bus distribution systems. The performance is validated with Pareto fronts and two performance metrics (C-metric and S-metric) by comparison with the robust multi-objective technique called non-dominated sorting genetic algorithm-II and also with the basic TLBO.

  6. Computer vision based method and system for online measurement of geometric parameters of train wheel sets.

    PubMed

    Zhang, Zhi-Feng; Gao, Zhan; Liu, Yuan-Yuan; Jiang, Feng-Chun; Yang, Yan-Li; Ren, Yu-Fen; Yang, Hong-Jun; Yang, Kun; Zhang, Xiao-Dong

    2012-01-01

    Train wheel sets must be periodically inspected for possible or actual premature failures and it is very significant to record the wear history for the full life of utilization of wheel sets. This means that an online measuring system could be of great benefit to overall process control. An online non-contact method for measuring a wheel set's geometric parameters based on the opto-electronic measuring technique is presented in this paper. A charge coupled device (CCD) camera with a selected optical lens and a frame grabber was used to capture the image of the light profile of the wheel set illuminated by a linear laser. The analogue signals of the image were transformed into corresponding digital grey level values. The 'mapping function method' is used to transform an image pixel coordinate to a space coordinate. The images of wheel sets were captured when the train passed through the measuring system. The rim inside thickness and flange thickness were measured and analyzed. The spatial resolution of the whole image capturing system is about 0.33 mm. Theoretic and experimental results show that the online measurement system based on computer vision can meet wheel set measurement requirements.

  7. Automated processing of label-free Raman microscope images of macrophage cells with standardized regression for high-throughput analysis.

    PubMed

    Milewski, Robert J; Kumagai, Yutaro; Fujita, Katsumasa; Standley, Daron M; Smith, Nicholas I

    2010-11-19

    Macrophages represent the front lines of our immune system; they recognize and engulf pathogens or foreign particles, thus initiating the immune response. Imaging macrophages presents unique challenges, as most optical techniques require labeling or staining of the cellular compartments in order to resolve organelles, and such stains or labels have the potential to perturb the cell, particularly in cases where incomplete information exists regarding the precise cellular reaction under observation. Label-free imaging techniques such as Raman microscopy are thus valuable tools for studying the transformations that occur in immune cells upon activation, both on the molecular and organelle levels. Due to extremely low signal levels, however, Raman microscopy requires sophisticated image processing techniques for noise reduction and signal extraction. To date, efficient, automated algorithms for resolving sub-cellular features in noisy, multi-dimensional image sets have not been explored extensively. We show that hybrid z-score normalization and standard regression (Z-LSR) can highlight the spectral differences within the cell and provide image contrast dependent on spectral content. In contrast to typical Raman imaging processing methods using multivariate analysis, such as singular value decomposition (SVD), our implementation of the Z-LSR method can operate nearly in real-time. In spite of its computational simplicity, Z-LSR can automatically remove background and bias in the signal, improve the resolution of spatially distributed spectral differences and enable sub-cellular features to be resolved in Raman microscopy images of mouse macrophage cells. Significantly, the Z-LSR processed images automatically exhibited subcellular architectures whereas SVD, in general, requires human assistance in selecting the components of interest. The computational efficiency of Z-LSR enables automated resolution of sub-cellular features in large Raman microscopy data sets without compromise in image quality or information loss in associated spectra. These results motivate further use of label free microscopy techniques in real-time imaging of live immune cells.
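
    The abstract describes Z-LSR as hybrid z-score normalization combined with standard (least-squares) regression; a minimal sketch of that combination, under stated assumptions, is given below. The reference spectra, image dimensions, and synthetic data are placeholders, and the published Z-LSR formulation may differ in detail.

    ```python
    # Z-score each pixel spectrum, then least-squares fit against reference
    # spectra to produce per-component contrast images (synthetic data).
    import numpy as np

    rng = np.random.default_rng(8)
    n_pixels, n_wavenumbers, n_components = 64 * 64, 500, 3
    spectra = rng.normal(size=(n_pixels, n_wavenumbers))        # hyperspectral image, flattened
    references = rng.normal(size=(n_components, n_wavenumbers)) # e.g. lipid, protein, nucleic acid

    # Z-score each spectrum to suppress per-pixel background and intensity bias
    z = (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

    # Least-squares fit of each normalized spectrum as a combination of references;
    # the fitted coefficients become per-component contrast images.
    coeffs, *_ = np.linalg.lstsq(references.T, z.T, rcond=None)
    contrast_images = coeffs.T.reshape(64, 64, n_components)
    ```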

  8. Progressive Sampling Technique for Efficient and Robust Uncertainty and Sensitivity Analysis of Environmental Systems Models: Stability and Convergence

    NASA Astrophysics Data System (ADS)

    Sheikholeslami, R.; Hosseini, N.; Razavi, S.

    2016-12-01

    Modern earth and environmental models are usually characterized by a large parameter space and high computational cost. These two features prevent effective implementation of sampling-based analysis such as sensitivity and uncertainty analysis, which require running these computationally expensive models several times to adequately explore the parameter/problem space. Therefore, developing efficient sampling techniques that scale with the size of the problem, computational budget, and users' needs is essential. In this presentation, we propose an efficient sequential sampling strategy, called Progressive Latin Hypercube Sampling (PLHS), which provides an increasingly improved coverage of the parameter space, while satisfying pre-defined requirements. The original Latin hypercube sampling (LHS) approach generates the entire sample set in one stage; on the contrary, PLHS generates a series of smaller sub-sets (also called 'slices') while: (1) each sub-set is Latin hypercube and achieves maximum stratification in any one dimensional projection; (2) the progressive addition of sub-sets remains Latin hypercube; and thus (3) the entire sample set is Latin hypercube. Therefore, it has the capability to preserve the intended sampling properties throughout the sampling procedure. PLHS is deemed advantageous over the existing methods, particularly because it nearly avoids over- or under-sampling. Through different case studies, we show that PLHS has multiple advantages over the one-stage sampling approaches, including improved convergence and stability of the analysis results with fewer model runs. In addition, PLHS can help to minimize the total simulation time by only running the simulations necessary to achieve the desired level of quality (e.g., accuracy and convergence rate).
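
    The building block behind PLHS, Latin hypercube sampling evaluated slice by slice, can be sketched with scipy's quasi-Monte Carlo utilities. Note that the progressive construction that keeps the union of slices Latin hypercube is the authors' contribution and is not reproduced here; the code below only shows standard LHS slices, a placeholder model, and slice-wise convergence monitoring.

    ```python
    # Standard LHS slices with scipy, evaluated incrementally to monitor convergence.
    import numpy as np
    from scipy.stats import qmc

    dim, slice_size, n_slices = 5, 20, 4
    sampler = qmc.LatinHypercube(d=dim, seed=0)

    def expensive_model(x):
        return float(np.sum(np.sin(x)))             # placeholder for the simulation model

    results = []
    for k in range(n_slices):
        unit_slice = sampler.random(n=slice_size)                   # points in [0, 1)^dim
        params = qmc.scale(unit_slice, l_bounds=[0] * dim, u_bounds=[10] * dim)
        results.extend(expensive_model(p) for p in params)
        running_mean = np.mean(results)                             # convergence monitoring
        print(f"after slice {k+1}: {len(results)} runs, mean output = {running_mean:.3f}")
    ```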

  9. Variable mixer propulsion cycle

    NASA Technical Reports Server (NTRS)

    Rundell, D. J.; Mchugh, D. P.; Foster, T.; Brown, R. H. (Inventor)

    1978-01-01

    A design technique, method and apparatus are delineated for controlling the bypass gas stream pressure and varying the bypass ratio of a mixed flow gas turbine engine in order to achieve improved performance. The disclosed embodiments each include a mixing device for combining the core and bypass gas streams. The variable area mixing device permits the static pressures of the core and bypass streams to be balanced prior to mixing at widely varying bypass stream pressure levels. The mixed flow gas turbine engine therefore operates efficiently over a wide range of bypass ratios and the dynamic pressure of the bypass stream is maintained at a level which will keep the engine inlet airflow matched to an optimum design level throughout a wide range of engine thrust settings.

  10. Selection of Representative Models for Decision Analysis Under Uncertainty

    NASA Astrophysics Data System (ADS)

    Meira, Luis A. A.; Coelho, Guilherme P.; Santos, Antonio Alberto S.; Schiozer, Denis J.

    2016-03-01

    The decision-making process in oil fields includes a step of risk analysis associated with the uncertainties present in the variables of the problem. Such uncertainties lead to hundreds, even thousands, of possible scenarios that are supposed to be analyzed so an effective production strategy can be selected. Given this high number of scenarios, a technique to reduce this set to a smaller, feasible subset of representative scenarios is imperative. The selected scenarios must be representative of the original set and also free of optimistic and pessimistic bias. This paper proposes an assisted methodology to identify representative models in oil fields. To do so, first a mathematical function was developed to model the representativeness of a subset of models with respect to the full set that characterizes the problem. Then, an optimization tool was implemented to identify the representative models of any problem, considering not only the cross-plots of the main output variables, but also the risk curves and the probability distribution of the attribute-levels of the problem. The proposed technique was applied to two benchmark cases and the results, evaluated by experts in the field, indicate that the obtained solutions are richer than those identified by previously adopted manual approaches. The program bytecode is available upon request.

  11. A Framework to Implement IoT Network Performance Modelling Techniques for Network Solution Selection †

    PubMed Central

    Delaney, Declan T.; O’Hare, Gregory M. P.

    2016-01-01

    No single network solution for Internet of Things (IoT) networks can provide the required level of Quality of Service (QoS) for all applications in all environments. This leads to an increasing number of solutions created to fit particular scenarios. Given the increasing number and complexity of solutions available, it becomes difficult for an application developer to choose the solution which is best suited for an application. This article introduces a framework which autonomously chooses the best solution for the application given the current deployed environment. The framework utilises a performance model to predict the expected performance of a particular solution in a given environment. The framework can then choose an apt solution for the application from a set of available solutions. This article presents the framework with a set of models built using data collected from simulation. The modelling technique can determine with up to 85% accuracy the solution which performs the best for a particular performance metric given a set of solutions. The article highlights the fractured and disjointed practice currently in place for examining and comparing communication solutions and aims to open a discussion on harmonising testing procedures so that different solutions can be directly compared and offers a framework to achieve this within IoT networks. PMID:27916929

  12. Behavioral management in children with intellectual disabilities in a resource-poor setting in Barwani, India

    PubMed Central

    Lakhan, Ram

    2014-01-01

    Background: Management of behavioral problems in children with intellectual disabilities (ID) is a great concern in resource-poor areas in India. This study attempted to analyze the efficacy of behavioral intervention provided in resource-poor settings. Objective: This study aimed to examine the outcome of behavioral management provided to children with ID in a poor rural region in India. Materials and Methods: We analyzed data from 104 children between 3 and 18 years old who received interventions for behavioral problems in a clinical or a community setting. The behavioral assessment scale for Indian children with mental retardation (BASIC-MR) was used to quantify the study subjects’ behavioral problems before and after we applied behavioral management techniques (baseline and post-intervention, respectively). The baseline and post-intervention scores were analyzed using the following statistical techniques: Wilcoxon matched-pairs signed-rank test for the efficacy of intervention; χ2 for group differences. Results: The study demonstrated behavioral improvements across all behavior domains (P < 0.05). Levels of improvement varied for children with different severities of ID (P = 0.001), and between children who did and did not have multiple disabilities (P = 0.011). Conclusion: The outcome of this behavioral management study suggests that behavioral intervention can be effectively provided to children with ID in poor areas. PMID:24574557
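
    As an illustration of the pre/post comparison described above, the sketch below runs a Wilcoxon matched-pairs signed-rank test in SciPy on made-up BASIC-MR-style scores; the numbers are invented for the example, not the study's data.

    ```python
    import numpy as np
    from scipy.stats import wilcoxon

    # Illustrative problem-behaviour scores (higher = more problem behaviour),
    # one baseline/post-intervention pair per child.
    baseline = np.array([34, 28, 41, 22, 37, 30, 45, 26])
    post     = np.array([25, 24, 33, 20, 30, 27, 38, 23])

    stat, p = wilcoxon(baseline, post)   # paired, non-parametric test
    print(f"Wilcoxon W = {stat:.1f}, p = {p:.3f}")
    ```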

  13. A Framework to Implement IoT Network Performance Modelling Techniques for Network Solution Selection.

    PubMed

    Delaney, Declan T; O'Hare, Gregory M P

    2016-12-01

    No single network solution for Internet of Things (IoT) networks can provide the required level of Quality of Service (QoS) for all applications in all environments. This leads to an increasing number of solutions created to fit particular scenarios. Given the increasing number and complexity of solutions available, it becomes difficult for an application developer to choose the solution which is best suited for an application. This article introduces a framework which autonomously chooses the best solution for the application given the current deployed environment. The framework utilises a performance model to predict the expected performance of a particular solution in a given environment. The framework can then choose an apt solution for the application from a set of available solutions. This article presents the framework with a set of models built using data collected from simulation. The modelling technique can determine with up to 85% accuracy the solution which performs the best for a particular performance metric given a set of solutions. The article highlights the fractured and disjointed practice currently in place for examining and comparing communication solutions and aims to open a discussion on harmonising testing procedures so that different solutions can be directly compared and offers a framework to achieve this within IoT networks.

  14. Principal Component Analysis for Enhancement of Infrared Spectra Monitoring

    NASA Astrophysics Data System (ADS)

    Haney, Ricky Lance

    The issue of air quality within the aircraft cabin is receiving increasing attention from both pilot and flight attendant unions. This is due to exposure events caused by poor air quality that in some cases may have contained toxic oil components due to bleed air that flows from outside the aircraft and then through the engines into the aircraft cabin. Significant short and long-term medical issues for aircraft crew have been attributed to exposure. The need for air quality monitoring is especially evident in the fact that currently within an aircraft there are no sensors to monitor the air quality and potentially harmful gas levels (detect-to-warn sensors), much less systems to monitor and purify the air (detect-to-treat sensors) within the aircraft cabin. The specific purpose of this research is to utilize a mathematical technique called principal component analysis (PCA) in conjunction with principal component regression (PCR) and proportionality constant calculations (PCC) to simplify complex, multi-component infrared (IR) spectra data sets into a reduced data set used for determination of the concentrations of the individual components. Use of PCA can significantly simplify data analysis as well as improve the ability to determine concentrations of individual target species in gas mixtures where significant band overlap occurs in the IR spectrum region. Application of this analytical numerical technique to IR spectrum analysis is important in improving performance of commercial sensors that airlines and aircraft manufacturers could potentially use in an aircraft cabin environment for multi-gas component monitoring. The approach of this research is two-fold, consisting of a PCA application to compare simulation and experimental results with the corresponding PCR and PCC to determine quantitatively the component concentrations within a mixture. The experimental data sets consist of both two and three component systems that could potentially be present as air contaminants in an aircraft cabin. In addition, experimental data sets are analyzed for a hydrogen peroxide (H2O2) aqueous solution mixture to determine H2O2 concentrations at various levels that could be produced during use of a vapor phase hydrogen peroxide (VPHP) decontamination system. After the PCA application to two and three component systems, the analysis technique is further expanded to include the monitoring of potential bleed air contaminants from engine oil combustion. Simulation data sets created from database spectra were utilized to predict gas components and concentrations in unknown engine oil samples at high temperatures as well as time-evolved gases from the heating of engine oils.
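
    A minimal scikit-learn sketch of principal component regression in the spirit of the PCA/PCR step described above: spectra are compressed to a few principal components and concentrations are regressed on the scores. The synthetic overlapping bands, component count and noise level are assumptions, not the dissertation's data.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(1)

    # Synthetic stand-in for IR spectra of a two-component mixture with heavy
    # band overlap; the concentrations are the regression targets.
    x = np.linspace(0.0, 1.0, 400)
    band = lambda centre: np.exp(-((x - centre) / 0.08) ** 2)
    conc = rng.random((200, 2))
    spectra = conc @ np.vstack([band(0.45), band(0.55)])
    spectra += rng.normal(scale=0.01, size=spectra.shape)

    pcr = make_pipeline(PCA(n_components=5), LinearRegression())
    pcr.fit(spectra, conc)
    print("predicted concentrations:", pcr.predict(spectra[:2]).round(3))
    print("true concentrations:     ", conc[:2].round(3))
    ```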

  15. Chain sampling plan (ChSP-1) for desired acceptable quality level (AQL) and limiting quality level (LQL)

    NASA Astrophysics Data System (ADS)

    Raju, C.; Vidya, R.

    2017-11-01

    Chain sampling plans are widely used whenever a small-sample attributes plan is required for situations involving destructive testing of products coming out of a continuous production process [1, 2]. This paper presents a procedure for the construction and selection of a ChSP-1 plan by attributes inspection based on membership functions [3]. A search-based procedure is developed for obtaining the parameters of a single sampling plan for a given set of AQL and LQL values. A sample of tables providing ChSP-1 plans for various combinations of AQL and LQL values is presented [4].
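
    The membership-function construction from the paper is not reproduced here; as background, the sketch below evaluates the classical ChSP-1 operating characteristic and does a brute-force search for a sample size meeting assumed AQL/LQL requirements. The AQL, LQL, i and risk levels are illustrative choices, not values from the paper's tables.

    ```python
    from math import comb

    def chsp1_pa(p: float, n: int, i: int) -> float:
        """ChSP-1 acceptance probability under a binomial model: accept on zero
        defectives, or on one defective provided the preceding i samples of
        size n each contained zero defectives."""
        p0 = (1 - p) ** n                          # P(0 defectives in a sample)
        p1 = comb(n, 1) * p * (1 - p) ** (n - 1)   # P(exactly 1 defective)
        return p0 + p1 * p0 ** i

    # Illustrative requirements: accept prob >= 0.95 at AQL, <= 0.10 at LQL, i = 2.
    aql, lql, i = 0.01, 0.15, 2
    n = next((n for n in range(2, 200)
              if chsp1_pa(aql, n, i) >= 0.95 and chsp1_pa(lql, n, i) <= 0.10), None)
    print("smallest suitable n:", n)   # 15 for these illustrative values
    ```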

  16. Low-level luminescence of the human skin.

    PubMed

    Cohen, S; Popp, F A

    1997-08-01

    For the first time systematic measurements of the low-level luminescence of the human skin ("biophotons") have been performed by means of a photon detector device set up in darkness. Several months of daily investigations have shown that body light emission follows definite and well-known biological rhythms, such as 14 days, 1 month, 3 months, 9 months, and reflects the left-right symmetry of the body. The results confirm that biophotons are related to physiological functions. This technique provides a new and powerful noninvasive diagnostic method. In particular, skin research and development may use it for testing the influence of different skin treatments.

  17. Setting Priorities: A Handbook of Alternative Techniques.

    ERIC Educational Resources Information Center

    Price, Nelson C.

    Six models for setting priorities are presented in a workbook format with exercises for evaluating or practicing five techniques. In the San Mateo model one sets priorities, clarifies priority purpose, lists items, determines criteria, lists items and criteria on a rating sheet, studies all information on items, rates each item, tallies results,…

  18. Situational Management, Standard Setting, and Self-Reward in a Behavior Modification Weight Loss Program.

    ERIC Educational Resources Information Center

    Chapman, Stanley L.; Jeffrey, D. Balfour

    1978-01-01

    In a comprehensive weight loss program, overweight women exposed to instruction in self-standard setting and to situational management techniques lost more weight than those instructed only in situational management techniques. Findings illustrate the facilitative effect of teaching individuals to set specific, objective, and realistic goals for eating…

  19. Thematic mapper design parameter investigation

    NASA Technical Reports Server (NTRS)

    Colby, C. P., Jr.; Wheeler, S. G.

    1978-01-01

    This study simulated the multispectral data sets to be expected from three different Thematic Mapper configurations, and the ground processing of these data sets by three different resampling techniques. The simulated data sets were then evaluated by processing them for multispectral classification, and the Thematic Mapper configuration and resampling technique which provided the best classification accuracy were identified.

  20. Image processing via level set curvature flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malladi, R.; Sethian, J.A.

    We present a controlled image smoothing and enhancement method based on a curvature flow interpretation of the geometric heat equation. Compared to existing techniques, the model has several distinct advantages. (i) It contains just one enhancement parameter. (ii) The scheme naturally inherits a stopping criterion from the image; continued application of the scheme produces no further change. (iii) The method is one of the fastest possible schemes based on a curvature-controlled approach. 15 ref., 6 figs.
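
    As a rough illustration of the idea (not the authors' scheme, which adds the single enhancement parameter and the image-derived stopping criterion mentioned above), the sketch below evolves a noisy image by an explicit finite-difference curvature flow; the step size and iteration count are arbitrary choices.

    ```python
    import numpy as np

    def curvature_flow(img: np.ndarray, n_iter: int = 50, dt: float = 0.1) -> np.ndarray:
        """Smooth a grayscale image by moving its iso-intensity contours with
        speed proportional to curvature: I_t = kappa * |grad I|."""
        u = img.astype(float).copy()
        eps = 1e-8
        for _ in range(n_iter):
            uy, ux = np.gradient(u)          # derivatives along rows, columns
            uyy, _ = np.gradient(uy)
            uxy, uxx = np.gradient(ux)
            # kappa * |grad I| written out with second derivatives.
            num = uxx * uy**2 - 2.0 * ux * uy * uxy + uyy * ux**2
            u += dt * num / (ux**2 + uy**2 + eps)
        return u

    noisy = np.random.default_rng(0).normal(0.5, 0.2, size=(128, 128))
    smoothed = curvature_flow(noisy)
    ```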

  1. Ligand and structure-based methodologies for the prediction of the activity of G protein-coupled receptor ligands

    NASA Astrophysics Data System (ADS)

    Costanzi, Stefano; Tikhonova, Irina G.; Harden, T. Kendall; Jacobson, Kenneth A.

    2009-11-01

    Accurate in silico models for the quantitative prediction of the activity of G protein-coupled receptor (GPCR) ligands would greatly facilitate the process of drug discovery and development. Several methodologies have been developed based on the properties of the ligands, the direct study of the receptor-ligand interactions, or a combination of both approaches. Ligand-based three-dimensional quantitative structure-activity relationships (3D-QSAR) techniques, not requiring knowledge of the receptor structure, have been historically the first to be applied to the prediction of the activity of GPCR ligands. They are generally endowed with robustness and good ranking ability; however they are highly dependent on training sets. Structure-based techniques generally do not provide the level of accuracy necessary to yield meaningful rankings when applied to GPCR homology models. However, they are essentially independent from training sets and have a sufficient level of accuracy to allow an effective discrimination between binders and nonbinders, thus qualifying as viable lead discovery tools. The combination of ligand and structure-based methodologies in the form of receptor-based 3D-QSAR and ligand and structure-based consensus models results in robust and accurate quantitative predictions. The contribution of the structure-based component to these combined approaches is expected to become more substantial and effective in the future, as more sophisticated scoring functions are developed and more detailed structural information on GPCRs is gathered.

  2. Mathematical model of a DIC position sensing system within an optical trap

    NASA Astrophysics Data System (ADS)

    Wulff, Kurt D.; Cole, Daniel G.; Clark, Robert L.

    2005-08-01

    The quantitative study of displacements and forces of motor proteins and processes that occur at the microscopic level and below requires a high level of sensitivity. For optical traps, two techniques for position sensing have been accepted and used quite extensively: quadrant photodiodes and an interferometric position sensing technique based on DIC imaging. While quadrant photodiodes have been studied in depth and mathematically characterized, a mathematical characterization of the interferometric position sensor has not been presented to the authors' knowledge. The interferometric position sensing method relies on the DIC imaging capabilities of a microscope. Circularly polarized light is sent into the microscope and the Wollaston prism used for DIC imaging splits the beam into its orthogonal components, displacing them by a set distance determined by the user. The distance between the axes of the beams is set so the beams overlap at the specimen plane and effectively share the trapped microsphere. A second prism then recombines the light beams and the exiting laser light's polarization is measured and related to position. In this paper we outline the mathematical characterization of a microsphere suspended in an optical trap using a DIC position sensing method. The sensitivity of this mathematical model is then compared to the QPD model. The mathematical model of a microsphere in an optical trap can serve as a calibration curve for an experimental setup.

  3. Image smoothing and enhancement via min/max curvature flow

    NASA Astrophysics Data System (ADS)

    Malladi, Ravikanth; Sethian, James A.

    1996-03-01

    We present a class of PDE-based algorithms suitable for a wide range of image processing applications. The techniques are applicable to both salt-and-pepper gray-scale noise and full-image continuous noise present in black and white images, gray-scale images, texture images and color images. At the core, the techniques rely on a level set formulation of evolving curves and surfaces and the viscosity in profile evolution. Essentially, the method consists of moving the isointensity contours in an image under curvature dependent speed laws to achieve enhancement. Compared to existing techniques, our approach has several distinct advantages. First, it contains only one enhancement parameter, which in most cases is automatically chosen. Second, the scheme automatically stops smoothing at some optimal point; continued application of the scheme produces no further change. Third, the method is one of the fastest possible schemes based on a curvature-controlled approach.

  4. Reducing and Analyzing the PHAT Survey with the Cloud

    NASA Astrophysics Data System (ADS)

    Williams, Benjamin F.; Olsen, Knut; Khan, Rubab; Pirone, Daniel; Rosema, Keith

    2018-05-01

    We discuss the technical challenges we faced and the techniques we used to overcome them when reducing the Panchromatic Hubble Andromeda Treasury (PHAT) photometric data set on the Amazon Elastic Compute Cloud (EC2). We first describe the architecture of our photometry pipeline, which we found particularly efficient for reducing the data in multiple ways for different purposes. We then describe the features of EC2 that make this architecture both efficient to use and challenging to implement. We describe the techniques we adopted to process our data, and suggest ways these techniques may be improved for those interested in trying such reductions in the future. Finally, we summarize the output photometry data products, which are now hosted publicly in two places in two formats. They are in simple FITS tables in the high-level science products on MAST, and in a queryable database available through the NOAO Data Lab.

  5. Systematic cloning of an ORFeome using the Gateway system.

    PubMed

    Matsuyama, Akihisa; Yoshida, Minoru

    2009-01-01

    With the completion of the genome projects, there are increasing demands on experimental systems that enable the entire set of protein-coding open reading frames (ORFs), viz. the ORFeome, to be exploited en masse. Systematic proteomic studies based on cloned ORFeomes are called "reverse proteomics," and have been launched in many organisms in recent years. Cloning of an ORFeome is an attractive route to a comprehensive understanding of biological phenomena, but is a challenging and daunting task. However, recent advances in techniques for DNA cloning using site-specific recombination and for high-throughput experimental techniques have made it feasible to clone an ORFeome with the minimum of exertion. The Gateway system is one such approach, employing the recombination reaction of the bacteriophage lambda. Combining traditional DNA manipulation methods with the modern recombination-based cloning technique, it is possible to clone an ORFeome of an organism on an individual level.

  6. Novel Hybrid Scheduling Technique for Sensor Nodes with Mixed Criticality Tasks.

    PubMed

    Micea, Mihai-Victor; Stangaciu, Cristina-Sorina; Stangaciu, Valentin; Curiac, Daniel-Ioan

    2017-06-26

    Sensor networks become increasingly a key technology for complex control applications. Their potential use in safety- and time-critical domains has raised the need for task scheduling mechanisms specially adapted to sensor node specific requirements, often materialized in predictable jitter-less execution of tasks characterized by different criticality levels. This paper offers an efficient scheduling solution, named Hybrid Hard Real-Time Scheduling (H²RTS), which combines a static, clock driven method with a dynamic, event driven scheduling technique, in order to provide high execution predictability, while keeping a high node Central Processing Unit (CPU) utilization factor. From the detailed, integrated schedulability analysis of the H²RTS, a set of sufficiency tests are introduced and demonstrated based on the processor demand and linear upper bound metrics. The performance and correct behavior of the proposed hybrid scheduling technique have been extensively evaluated and validated both on a simulator and on a sensor mote equipped with ARM7 microcontroller.
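
    The H²RTS analysis itself is not reproduced here; the sketch below shows a generic processor-demand test of the kind such sufficiency tests build on, applied to a purely sporadic task set under EDF. The task model and parameters are illustrative assumptions.

    ```python
    from dataclasses import dataclass
    from math import floor, lcm

    @dataclass
    class Task:
        wcet: float      # C_i, worst-case execution time
        deadline: float  # D_i, relative deadline (here D_i <= T_i)
        period: float    # T_i, minimum inter-arrival time

    def demand(tasks, t):
        """Processor demand bound function dbf(t) for sporadic tasks."""
        return sum(max(0, floor((t - tk.deadline) / tk.period) + 1) * tk.wcet
                   for tk in tasks)

    def edf_demand_test(tasks) -> bool:
        """Processor-demand sufficiency test: utilisation <= 1 and dbf(t) <= t
        at every absolute deadline up to the hyperperiod."""
        if sum(tk.wcet / tk.period for tk in tasks) > 1.0:
            return False
        horizon = lcm(*(int(tk.period) for tk in tasks))
        deadlines = sorted({k * tk.period + tk.deadline
                            for tk in tasks
                            for k in range(int(horizon // tk.period))})
        return all(demand(tasks, t) <= t for t in deadlines)

    print(edf_demand_test([Task(1, 4, 4), Task(2, 6, 6), Task(3, 12, 12)]))  # True
    ```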

  7. Setting up a Rayleigh Scattering Based Flow Measuring System in a Large Nozzle Testing Facility

    NASA Technical Reports Server (NTRS)

    Panda, Jayanta; Gomez, Carlos R.

    2002-01-01

    A molecular Rayleigh scattering based air density measurement system has been built in a large nozzle testing facility at NASA Glenn Research Center. The technique depends on the light scattering by gas molecules present in air; no artificial seeding is required. Light from a single mode, continuous wave laser was transmitted to the nozzle facility by optical fiber, and light scattered by gas molecules, at various points along the laser beam, is collected and measured by photon-counting electronics. By placing the laser beam and collection optics on synchronized traversing units, the point measurement technique is made effective for surveying density variation over a cross-section of the nozzle plume. Various difficulties associated with dust particles, stray light, high noise level and vibration are discussed. Finally, a limited amount of data from an underexpanded jet are presented and compared with expected variations to validate the technique.

  8. Automatic classification of tissue malignancy for breast carcinoma diagnosis.

    PubMed

    Fondón, Irene; Sarmiento, Auxiliadora; García, Ana Isabel; Silvestre, María; Eloy, Catarina; Polónia, António; Aguiar, Paulo

    2018-05-01

    Breast cancer is the second leading cause of cancer death among women. Its early diagnosis is extremely important to prevent avoidable deaths. However, malignancy assessment of tissue biopsies is complex and dependent on observer subjectivity. Moreover, hematoxylin and eosin (H&E)-stained histological images exhibit a highly variable appearance, even within the same malignancy level. In this paper, we propose a computer-aided diagnosis (CAD) tool for automated malignancy assessment of breast tissue samples based on the processing of histological images. We provide four malignancy levels as the output of the system: normal, benign, in situ and invasive. The method is based on the calculation of three sets of features related to nuclei, colour regions and textures considering local characteristics and global image properties. By taking advantage of well-established image processing techniques, we build a feature vector for each image that serves as an input to an SVM (Support Vector Machine) classifier with a quadratic kernel. The method has been rigorously evaluated, first with a 5-fold cross-validation within an initial set of 120 images, second with an external set of 30 different images and third with images with artefacts included. Accuracy levels range from 75.8% when the 5-fold cross-validation was performed to 75% with the external set of new images and 61.11% when the extremely difficult images were added to the classification experiment. The experimental results indicate that the proposed method is capable of distinguishing between four malignancy levels with high accuracy. Our results are close to those obtained with recent deep learning-based methods. Moreover, it performs better than other state-of-the-art methods based on feature extraction, and it can help improve the CAD of breast cancer. Copyright © 2018 Elsevier Ltd. All rights reserved.
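
    A minimal scikit-learn sketch of the classification stage described above: an SVM with a quadratic (degree-2 polynomial) kernel evaluated by 5-fold cross-validation. The synthetic feature matrix stands in for the nuclei/colour/texture feature vectors, which are not reproduced here.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Stand-in for 120 feature vectors labelled with 4 malignancy classes.
    X, y = make_classification(n_samples=120, n_features=40, n_informative=12,
                               n_classes=4, n_clusters_per_class=1, random_state=0)

    clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=2, C=1.0))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
    ```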

  9. Comparison of intramedullary nailing and external fixation knee arthrodesis for the infected knee replacement.

    PubMed

    Mabry, Tad M; Jacofsky, David J; Haidukewych, George J; Hanssen, Arlen D

    2007-11-01

    We analyzed knee arthrodesis for the infected total knee replacement (TKR) using two different fixation techniques. Patients undergoing knee arthrodesis for infected TKR were identified and rates of successful fusion and recurrence of infection were compared using Cox proportional hazard models. Eighty-five consecutive patients who underwent knee arthrodesis were followed until union, nonunion, amputation, or death. External fixation achieved successful fusion in 41 of 61 patients and was associated with a 4.9% rate of deep infection. Fusion was successful in 23 of 24 patients with intramedullary (IM) nailing and was associated with an 8.3% rate of deep infection. We observed similar fusion and infection rates with the two techniques. Thirty-four patients (40%) had complications. Knee arthrodesis remains a reasonable salvage alternative for the difficult infected TKR. Complication rates are high irrespective of the technique, and one must consider the risks of both nonunion and infection when choosing the fixation method in this setting. IM nailing appears to have a higher rate of successful union but a higher risk of recurrent infection when compared with external fixation knee arthrodesis. Level III, therapeutic study. See Guidelines for Authors for a complete description of levels of evidence.

  10. SC-GRAPPA: Self-constraint noniterative GRAPPA reconstruction with closed-form solution.

    PubMed

    Ding, Yu; Xue, Hui; Ahmad, Rizwan; Ting, Samuel T; Simonetti, Orlando P

    2012-12-01

    Parallel MRI (pMRI) reconstruction techniques are commonly used to reduce scan time by undersampling the k-space data. GRAPPA, a k-space based pMRI technique, is widely used clinically because of its robustness. In GRAPPA, the missing k-space data are estimated by solving a set of linear equations; however, this set of equations does not take advantage of the correlations within the missing k-space data. All k-space data in a neighborhood acquired from a phased-array coil are correlated. The correlation can be estimated easily as a self-constraint condition, and formulated as an extra set of linear equations to improve the performance of GRAPPA. The authors propose a modified k-space based pMRI technique called self-constraint GRAPPA (SC-GRAPPA) which combines the linear equations of GRAPPA with these extra equations to solve for the missing k-space data. Since SC-GRAPPA utilizes a least-squares solution of the linear equations, it has a closed-form solution that does not require an iterative solver. The SC-GRAPPA equation was derived by incorporating GRAPPA as a prior estimate. SC-GRAPPA was tested in a uniform phantom and two normal volunteers. MR real-time cardiac cine images with acceleration rate 5 and 6 were reconstructed using GRAPPA and SC-GRAPPA. SC-GRAPPA showed a significantly lower artifact level, and a greater than 10% overall signal-to-noise ratio (SNR) gain over GRAPPA, with more significant SNR gain observed in low-SNR regions of the images. SC-GRAPPA offers improved pMRI reconstruction, and is expected to benefit clinical imaging applications in the future.
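
    The SC-GRAPPA equations themselves are not reproduced here; the sketch below only illustrates the closed-form pattern the abstract describes, i.e. stacking a primary set of linear equations with a weighted set of extra constraint equations and solving the combined system by least squares in one step. All matrices, sizes and the weight are synthetic assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Two sets of linear constraints on the same unknown vector x:
    #   A1 @ x ~= b1   (primary fit, standing in for the GRAPPA equations)
    #   A2 @ x ~= b2   (extra self-consistency constraints)
    x_true = rng.normal(size=20)
    A1, A2 = rng.normal(size=(80, 20)), rng.normal(size=(40, 20))
    b1 = A1 @ x_true + 0.05 * rng.normal(size=80)
    b2 = A2 @ x_true + 0.05 * rng.normal(size=40)

    lam = 0.5                                 # relative weight of the constraint set
    A = np.vstack([A1, lam * A2])             # stack both systems ...
    b = np.concatenate([b1, lam * b2])
    x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)   # ... one closed-form solve

    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    ```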

  11. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2012-01-01

    Objective: To develop a software application utilizing general purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general purpose fashion is allowing for supercomputer level results at individual workstations. As data sets grow, the methods to work them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques. Technical Methodology/Approach: Apply massively parallel algorithms and data structures to the specific analysis requirements presented when working with thermographic data sets.

  12. EqualChance: Addressing Intra-set Write Variation to Increase Lifetime of Non-volatile Caches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh; Vetter, Jeffrey S

    To address the limitations of SRAM such as high-leakage and low-density, researchers have explored use of non-volatile memory (NVM) devices, such as ReRAM (resistive RAM) and STT-RAM (spin transfer torque RAM) for designing on-chip caches. A crucial limitation of NVMs, however, is that their write endurance is low and the large intra-set write variation introduced by existing cache management policies may further exacerbate this problem, thereby reducing the cache lifetime significantly. We present EqualChance, a technique to increase cache lifetime by reducing intra-set write variation. EqualChance works by periodically changing the physical cache-block location of a write-intensive data item within a set to achieve wear-leveling. Simulations using workloads from SPEC CPU2006 suite and HPC (high-performance computing) field show that EqualChance improves the cache lifetime by 4.29X. Also, its implementation overhead is small, and it incurs very small performance and energy loss.

  13. Topology Optimization using the Level Set and eXtended Finite Element Methods: Theory and Applications

    NASA Astrophysics Data System (ADS)

    Villanueva Perez, Carlos Hernan

    Computational design optimization provides designers with automated techniques to develop novel and non-intuitive optimal designs. Topology optimization is a design optimization technique that allows for the evolution of a broad variety of geometries in the optimization process. Traditional density-based topology optimization methods often lack a sufficient resolution of the geometry and physical response, which prevents direct use of the optimized design in manufacturing and the accurate modeling of the physical response of boundary conditions. The goal of this thesis is to introduce a unified topology optimization framework that uses the Level Set Method (LSM) to describe the design geometry and the eXtended Finite Element Method (XFEM) to solve the governing equations and measure the performance of the design. The methodology is presented as an alternative to density-based optimization approaches, and is able to accommodate a broad range of engineering design problems. The framework presents state-of-the-art methods for immersed boundary techniques to stabilize the systems of equations and enforce the boundary conditions, and is studied with applications in 2D and 3D linear elastic structures, incompressible flow, and energy and species transport problems to test the robustness and the characteristics of the method. A comparison of the framework against density-based topology optimization approaches is studied with regards to convergence, performance, and the capability to manufacture the designs. Furthermore, the ability to control the shape of the design to operate within manufacturing constraints is developed and studied. The analysis capability of the framework is validated quantitatively through comparison against previous benchmark studies, and qualitatively through its application to topology optimization problems. The design optimization problems converge to intuitive designs and resembled well the results from previous 2D or density-based studies.

  14. A comparison of 3 wound measurement techniques: effects of pressure ulcer size and shape.

    PubMed

    Bilgin, Mehtap; Güneş, Ulkü Yapucu

    2013-01-01

    The aim of this study was to examine the levels of agreement among 3 techniques used in wound measurement comparing more spherical versus irregularly shaped wounds. The design of this study is evaluative research. Sixty-five consecutive patients with 80 pressure ulcers of various sizes referred from a university hospital in Izmir, Turkey, were evaluated. The 80 pressure ulcers identified on the 65 participants were divided into 2 groups based on pressure ulcer shape and wound surface area. Twenty-four of the 80 ulcers (30%) were characterized as irregularly shaped and greater than 10 cm. Fifty-six were regularly shaped (approximating a circle) and less than 10 cm. Pressure ulcer areas were measured using 3 techniques: measurement with a ruler (wound area was calculated by measuring and multiplying the greatest length by the greatest width perpendicular to the greatest length), wound tracing using graduated acetate paper, and digital planimetry. The level of agreement among the techniques was explored using the intraclass correlation coefficient (ICC). Strong agreement was observed among the techniques when assessing small, more regularly shaped wounds (ICC = 0.95). Modest agreement was achieved when measuring larger, irregularly shaped wounds (ICC = 0.70). Each of these techniques is adequate for measuring surface areas of smaller wounds with an approximately circular shape. Measurement of pressure ulcer area via the ruler method tended to overestimate surface area in larger and more irregularly shaped wounds when compared to acetate and digital planimetry. We recommend digital planimetry or acetate tracing for measurement of larger and more irregularly shaped pressure ulcers in the clinical setting.

  15. A Titration Technique for Demonstrating a Magma Replenishment Model.

    ERIC Educational Resources Information Center

    Hodder, A. P. W.

    1983-01-01

    Conductiometric titrations can be used to simulate subduction-setting volcanism. Suggestions are made as to the use of this technique in teaching volcanic mechanisms and geochemical indications of tectonic settings. (JN)

  16. Evaluation of Primary Immunization Coverage of Infants Under Universal Immunization Programme in an Urban Area of Bangalore City Using Cluster Sampling and Lot Quality Assurance Sampling Techniques

    PubMed Central

    K, Punith; K, Lalitha; G, Suman; BS, Pradeep; Kumar K, Jayanth

    2008-01-01

    Research Question: Is LQAS technique better than cluster sampling technique in terms of resources to evaluate the immunization coverage in an urban area? Objective: To assess and compare the lot quality assurance sampling against cluster sampling in the evaluation of primary immunization coverage. Study Design: Population-based cross-sectional study. Study Setting: Areas under Mathikere Urban Health Center. Study Subjects: Children aged 12 months to 23 months. Sample Size: 220 in cluster sampling, 76 in lot quality assurance sampling. Statistical Analysis: Percentages and Proportions, Chi square Test. Results: (1) Using cluster sampling, the percentage of completely immunized, partially immunized and unimmunized children were 84.09%, 14.09% and 1.82%, respectively. With lot quality assurance sampling, it was 92.11%, 6.58% and 1.31%, respectively. (2) Immunization coverage levels as evaluated by cluster sampling technique were not statistically different from the coverage value as obtained by lot quality assurance sampling techniques. Considering the time and resources required, it was found that lot quality assurance sampling is a better technique in evaluating the primary immunization coverage in urban area. PMID:19876474

  17. Testing a mediation model of psychotherapy process and outcome in psychodynamic psychotherapy: Previous client distress, psychodynamic techniques, dyadic working alliance, and current client distress.

    PubMed

    Kivlighan, Dennis M; Hill, Clara E; Ross, Katherine; Kline, Kathryn; Furhmann, Amy; Sauber, Elizabeth

    2018-01-05

    To test a sequential model of psychotherapy process and outcome, we included previous client distress, therapist psychodynamic techniques, dyadic working alliance, and current client distress. For 114 sets of eight-session segments in 40 cases of psychodynamic psychotherapy, clients completed the Outcome Questionnaire-45 and Inventory of Interpersonal Problems-32 after the first and final session, judges reliably coded one middle session on the Psychodynamic subscale of the Multitheoretical List of Therapeutic Interventions, and clients and therapists completed the Working Alliance Inventory after every session. Results indicated that higher use of psychodynamic techniques was associated with higher levels of the working alliance, which in turn was associated with decreased client distress; and working alliance was higher later in psychotherapy. There was a significant indirect effect of psychodynamic techniques on decreases in distress mediated by the working alliance. Implications for theory, practice, and research are provided. Clinical or methodological significance of this article: Conducted a longitudinal, latent variable examination of the relationships of psychodynamic techniques and working alliance on client distress. Psychodynamic techniques have an indirect effect on decreases in client distress through the dyadic working alliance.

  18. Predicting survival of Escherichia coli O157:H7 in dry fermented sausage using artificial neural networks.

    PubMed

    Palanichamy, A; Jayas, D S; Holley, R A

    2008-01-01

    The Canadian Food Inspection Agency required the meat industry to ensure Escherichia coli O157:H7 does not survive (experiences a ≥5 log CFU/g reduction) in dry fermented sausage (salami) during processing after a series of foodborne illness outbreaks resulting from this pathogenic bacterium occurred. The industry is in need of an effective technique like predictive modeling for estimating bacterial viability, because traditional microbiological enumeration is a time-consuming and laborious method. The accuracy and speed of artificial neural networks (ANNs), an approach developed within predictive microbiology, make them an attractive alternative, especially for on-line processing in industry. Data from a study of interactive effects of different levels of pH, water activity, and the concentrations of allyl isothiocyanate at various times during sausage manufacture in reducing numbers of E. coli O157:H7 were collected. Data were used to develop predictive models using a general regression neural network (GRNN), a form of ANN, and a statistical linear polynomial regression technique. Both models were compared for their predictive error, using various statistical indices. GRNN predictions for training and test data sets had less serious errors when compared with the statistical model predictions. GRNN models were better than the statistical model for the training set and slightly better for the test set. Also, GRNN accurately predicted the level of allyl isothiocyanate required, ensuring a 5-log reduction, when an appropriate production set was created by interpolation. Because they are simple to generate, fast, and accurate, ANN models may be of value for industrial use in dry fermented sausage manufacture to reduce the hazard associated with E. coli O157:H7 in fresh beef and permit production of consistently safe products from this raw material.

  19. Instruction-level performance modeling and characterization of multimedia applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Y.; Cameron, K.W.

    1999-06-01

    One of the challenges for characterizing and modeling realistic multimedia applications is the lack of access to source codes. On-chip performance counters effectively resolve this problem by monitoring run-time behaviors at the instruction-level. This paper presents a novel technique of characterizing and modeling workloads at the instruction level for realistic multimedia applications using hardware performance counters. A variety of instruction counts are collected from some multimedia applications, such as RealPlayer, GSM Vocoder, MPEG encoder/decoder, and speech synthesizer. These instruction counts can be used to form a set of abstract characteristic parameters directly related to a processor's architectural features. Based on microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvement for certain workloads. The biggest advantage of this new characterization technique is a better understanding of processor utilization efficiency and architectural bottleneck for each application. This technique also provides predictive insight into future architectural enhancements and their effect on current codes. In this paper the authors also attempt to model architectural effect on processor utilization without memory influence. They derive formulas for calculating CPI_0, the CPI without memory effect, and they quantify utilization of architectural parameters. These equations are architecturally diagnostic and predictive in nature. Results provide promise in code characterization and empirical/analytical modeling.
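
    As a toy illustration of the counter-based reasoning described above, the sketch below derives CPI from raw counter totals and strips out an assumed memory-stall component to estimate CPI_0. The counter names, totals and miss penalty are made up, not taken from the paper.

    ```python
    # Hypothetical hardware-counter totals for one application run.
    cycles        = 3.2e9
    instructions  = 2.0e9
    cache_misses  = 4.0e6
    miss_penalty  = 150        # assumed stall cycles per off-chip miss

    cpi = cycles / instructions

    # Rough memory component: misses * penalty, normalised per instruction.
    cpi_mem = cache_misses * miss_penalty / instructions

    # CPI_0: CPI with the memory-hierarchy contribution removed, a crude proxy
    # for how well the core's architectural resources are being used.
    cpi_0 = cpi - cpi_mem
    print(f"CPI = {cpi:.2f}, memory component = {cpi_mem:.2f}, CPI_0 = {cpi_0:.2f}")
    ```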

  20. A Survey of Injuries Affecting Pre-Professional Ballet Dancers.

    PubMed

    Caine, Dennis; Bergeron, Glen; Goodwin, Brett J; Thomas, Jessica; Caine, Caroline G; Steinfeld, Sam; Dyck, Kevin; André, Suzanne

    2016-01-01

    A cross-sectional design was employed retrospectively to evaluate injuries self-reported by 71 pre-professional ballet dancers over one season. Some of the descriptive findings of this survey were consistent with those of previous research and suggest particular demographic and injury trends in pre-professional ballet. These results include gender distribution, mean age and age range of participants, training hours, injury location, acute versus overuse injuries, as well as average number of physiotherapy treatments per dancer. Other results provide information that was heretofore unreported or inconsistent with previous investigations. These findings involved proportion of dancers injured, average number of injuries per dancer, overall injury incidence during an 8.5 month period, incidence rate by technique level, mean time loss per injury, proportion of recurrent injury, and activity practiced at time of injury. The results of univariate analyses revealed several significant findings, including a decrease in incidence rate of injury with increased months of experience in the pre-professional program, dancers having lower injury risk in rehearsal and performance than in class, and a reduced risk of injury for dancers at certain technique levels. However, only this latter finding remained significant in multivariate analysis. The results of this study underscore the importance of determining injury rates by gender, technique level, and activity setting in addition to overall injury rates. They also point to the necessity of looking at both overall and individual dancer-based injury risks.

  1. Overcoming barriers to the use of osteopathic manipulation techniques in the emergency department.

    PubMed

    Roberge, Raymond J; Roberge, Marc R

    2009-08-01

    Osteopathic Manipulation Techniques (OMT) have been shown to be effective therapeutic modalities in various clinical settings, but appear to be underutilized in the emergency department (ED) setting. To examine barriers to the use of OMT in the ED and provide suggestions to ameliorate these barriers. Literature review. While the medical literature cites numerous obstacles to the use of OMT in the ED setting, most can be positively addressed through education, careful planning, and ongoing research into use of these techniques. Recent prospective clinical trials of OMT have demonstrated the utility of these modalities. Osteopathic Manipulation Techniques are useful therapeutic modalities that could be utilized to a greater degree in the ED. As the number of osteopathic emergency physicians increases, the opportunity to employ these techniques should increase.

  2. Successful adaptation of three-dimensional inversion methodologies for archaeological-scale, total-field magnetic data sets

    NASA Astrophysics Data System (ADS)

    Cheyney, S.; Fishwick, S.; Hill, I. A.; Linford, N. T.

    2015-08-01

    Despite the development of advanced processing and interpretation tools for magnetic data sets in the fields of mineral and hydrocarbon industries, these methods have not achieved similar levels of adoption for archaeological or very near surface surveys. Using a synthetic data set we demonstrate that certain methodologies and assumptions used to successfully invert more regional-scale data can lead to large discrepancies between the true and recovered depths when applied to archaeological-type anomalies. We propose variations to the current approach, analysing the choice of the depth-weighting function, mesh design and parameter constraints, to develop an appropriate technique for the 3-D inversion of archaeological-scale data sets. The results show a successful recovery of a synthetic scenario, as well as a case study of a Romano-Celtic temple in the UK. For the case study, the final susceptibility model is compared with two coincident ground penetrating radar surveys, showing a high correlation with the comparative depth slices. The new approach takes interpretation of archaeological data sets beyond a simple 2-D visual interpretation based on pattern recognition.
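
    The paper's specific parameter choices are not reproduced here; the sketch below shows one commonly used depth-weighting form and the kind of shallow, finely discretised mesh an archaeological-scale inversion implies. The z0 and beta values are illustrative assumptions.

    ```python
    import numpy as np

    def depth_weighting(z: np.ndarray, z0: float = 0.05, beta: float = 3.0) -> np.ndarray:
        """Depth weighting w(z) = (z + z0)**(-beta/2), used to counteract the
        rapid decay of magnetic sensitivity with depth in 3-D inversion."""
        return (z + z0) ** (-beta / 2.0)

    # Cell-centre depths (metres) for a shallow archaeological-scale mesh.
    z = np.linspace(0.05, 2.0, 40)
    w = depth_weighting(z)
    w /= w.max()        # normalised weights entering the model objective function
    ```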

  3. Segmentation of pulmonary nodules in three-dimensional CT images by use of a spiral-scanning technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Jiahui; Engelmann, Roger; Li Qiang

    2007-12-15

    Accurate segmentation of pulmonary nodules in computed tomography (CT) is an important and difficult task for computer-aided diagnosis of lung cancer. Therefore, the authors developed a novel automated method for accurate segmentation of nodules in three-dimensional (3D) CT. First, a volume of interest (VOI) was determined at the location of a nodule. To simplify nodule segmentation, the 3D VOI was transformed into a two-dimensional (2D) image by use of a key 'spiral-scanning' technique, in which a number of radial lines originating from the center of the VOI spirally scanned the VOI from the 'north pole' to the 'south pole'. The voxels scanned by the radial lines provided a transformed 2D image. Because the surface of a nodule in the 3D image became a curve in the transformed 2D image, the spiral-scanning technique considerably simplified the segmentation method and enabled reliable segmentation results to be obtained. A dynamic programming technique was employed to delineate the 'optimal' outline of a nodule in the 2D image, which corresponded to the surface of the nodule in the 3D image. The optimal outline was then transformed back into 3D image space to provide the surface of the nodule. An overlap between nodule regions provided by computer and by the radiologists was employed as a performance metric for evaluating the segmentation method. The database included two Lung Imaging Database Consortium (LIDC) data sets that contained 23 and 86 CT scans, respectively, with 23 and 73 nodules that were 3 mm or larger in diameter. For the two data sets, six and four radiologists manually delineated the outlines of the nodules as reference standards in a performance evaluation for nodule segmentation. The segmentation method was trained on the first and was tested on the second LIDC data sets. The mean overlap values were 66% and 64% for the nodules in the first and second LIDC data sets, respectively, which represented a higher performance level than those of two existing segmentation methods that were also evaluated by use of the LIDC data sets. The segmentation method provided relatively reliable results for pulmonary nodule segmentation and would be useful for lung cancer quantification, detection, and diagnosis.

  4. Research on Self-Management Techniques Used by Students with Disabilities in General Education Settings: A Promise Fulfilled?

    ERIC Educational Resources Information Center

    McDougall, Dennis; Skouge, Jim; Farrell, Anthony; Hoff, Kathy

    2006-01-01

    This comprehensive review synthesizes findings from 43 studies in which students with disabilities utilized behavioral self-management (BSM) techniques in general education settings. Findings suggest that the long-standing promise of BSM as an inclusive technique has been partially fulfilled. The review identifies strengths and limitations of BSM…

  5. Graphical and PC-software analysis of volcano eruption precursors according to the Materials Failure Forecast Method (FFM)

    NASA Astrophysics Data System (ADS)

    Cornelius, Reinold R.; Voight, Barry

    1995-03-01

    The Materials Failure Forecasting Method for volcanic eruptions (FFM) analyses the rate of precursory phenomena. Time of eruption onset is derived from the time of "failure" implied by accelerating rate of deformation. The approach attempts to fit data, Ω, to the differential relationship Ω̈ = AΩ̇^α, where the dot superscript represents the time derivative, and the data Ω may be any of several parameters describing the accelerating deformation or energy release of the volcanic system. Rate coefficients, A and α, may be derived from appropriate data sets to provide an estimate of time to "failure". As the method is still an experimental technique, it should be used with appropriate judgment during times of volcanic crisis. Limitations of the approach are identified and discussed. Several kinds of eruption precursory phenomena, all simulating accelerating creep during the mechanical deformation of the system, can be used with FFM. Among these are tilt data, slope-distance measurements, crater fault movements and seismicity. The use of seismic coda, seismic amplitude-derived energy release and time-integrated amplitudes or coda lengths are examined. Usage of cumulative coda length directly has some practical advantages over more rigorously derived parameters, and RSAM and SSAM technologies appear to be well suited to real-time applications. One graphical and four numerical techniques of applying FFM are discussed. The graphical technique is based on an inverse representation of rate versus time. For α = 2, the inverse rate plot is linear; it is concave upward for α < 2 and concave downward for α > 2. The eruption time is found by simple extrapolation of the data set toward the time axis. Three numerical techniques are based on linear least-squares fits to linearized data sets. The "linearized least-squares technique" is most robust and is expected to be the most practical numerical technique. This technique is based on an iterative linearization of the given rate-time series. The hindsight technique is disadvantaged by a bias favouring a too early eruption time in foresight applications. The "log rate versus log acceleration technique", utilizing a logarithmic representation of the fundamental differential equation, is disadvantaged by large data scatter after interpolation of accelerations. One further numerical technique, a nonlinear least-squares fit to rate data, requires special and more complex software. PC-oriented computer codes were developed for data manipulation, application of the three linearizing numerical methods, and curve fitting. Separate software is required for graphing purposes. All three linearizing techniques facilitate an eruption window based on a data envelope according to the linear least-squares fit, at a specific level of confidence, and an estimated rate at time of failure.
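
    For the α = 2 case the graphical technique reduces to a straight-line extrapolation of inverse rate to zero, which is easy to sketch; the synthetic rate series below is constructed to "fail" at t = 10, and the data, units and fit choices are illustrative only.

    ```python
    import numpy as np

    # Synthetic accelerating precursor rate (e.g. daily coda counts), not real data;
    # it satisfies the FFM relation with alpha = 2 and an onset at t = 10.
    t = np.arange(0.0, 9.0)
    rate = 20.0 / (10.0 - t)

    # Inverse-rate plot: for alpha = 2, 1/rate is linear in time, so extrapolating
    # the least-squares line to 1/rate = 0 predicts the eruption onset.
    inv_rate = 1.0 / rate
    slope, intercept = np.polyfit(t, inv_rate, 1)
    t_onset = -intercept / slope
    print(f"predicted onset at t = {t_onset:.2f}")   # ~10.00
    ```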

  6. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2013-01-01

    Objective: To develop a software application utilizing general purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general purpose fashion is allowing for supercomputer level results at individual workstations. As data sets grow, the methods to work them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques.

  7. Infusion volume control and calculation using metronome and drop counter based intravenous infusion therapy helper.

    PubMed

    Park, Kyungnam; Lee, Jangyoung; Kim, Soo-Young; Kim, Jinwoo; Kim, Insoo; Choi, Seung Pill; Jeong, Sikyung; Hong, Sungyoup

    2013-06-01

    This study assessed the method of fluid infusion control using an IntraVenous Infusion Controller (IVIC). Four methods of infusion control (dial flow controller, IV set without correction, IV set with correction and IVIC correction) were used to measure the volume of each technique at two infusion rates. The infused fluid volume with a dial flow controller was significantly larger than other methods. The infused fluid volume was significantly smaller with an IV set without correction over time. Regarding the concordance correlation coefficient (CCC) of infused fluid volume in relation to a target volume, IVIC correction was shown to have the highest level of agreement. The flow rate measured in check mode showed a good agreement with the volume of collected fluid after passing through the IV system. Thus, an IVIC could assist in providing an accurate infusion control. © 2013 Wiley Publishing Asia Pty Ltd.

  8. Detection of giardine gene in local isolates of Giardia duodenalis by polymerase chain reaction (PCR).

    PubMed

    Latifah, I; Teoh, K Y; Wan, K L; Rahmah, M; Normaznah, Y; Rohani, A

    2005-12-01

    Giardia duodenalis is an intestinal parasite that causes diarrhoea and malabsorption in children. The parasite also infects AIDS patients with a weak immune system. A study was carried out on six local isolates of Giardia duodenalis (110, 7304, 6304, M007, 2002 and 6307) from faeces of Orang Asli patients admitted to the Gombak Hospital. WB, a reference pathogenic strain from humans, and G. muris from a wild mouse were commercially obtained from the American Type Culture Collection (ATCC). All the isolates were cultured axenically in TYI-S-33 medium. Two sets of primers were used for the technique: primers LP1 and RP1, and primers LP2 and RP2. The two primer sets amplified giardine gene fragments of 171 bp and 218 bp in size, respectively. The study showed that the two sets of primers could detect G. duodenalis to the genus and species level specifically.

  9. Max-margin multiattribute learning with low-rank constraint.

    PubMed

    Zhang, Qiang; Chen, Lin; Li, Baoxin

    2014-07-01

    Attribute learning has attracted a lot of interests in recent years for its advantage of being able to model high-level concepts with a compact set of midlevel attributes. Real-world objects often demand multiple attributes for effective modeling. Most existing methods learn attributes independently without explicitly considering their intrinsic relatedness. In this paper, we propose max margin multiattribute learning with low-rank constraint, which learns a set of attributes simultaneously, using only relative ranking of the attributes for the data. By learning all the attributes simultaneously through low-rank constraint, the proposed method is able to capture their intrinsic correlation for improved learning; by requiring only relative ranking, the method avoids restrictive binary labels of attributes that are often assumed by many existing techniques. The proposed method is evaluated on both synthetic data and real visual data including a challenging video data set. Experimental results demonstrate the effectiveness of the proposed method.

  10. Detection of buried objects by fusing dual-band infrared images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, G.A.; Sengupta, S.K.; Sherwood, R.J.

    1993-11-01

    We have conducted experiments to demonstrate the enhanced detectability of buried land mines using sensor fusion techniques. Multiple sensors, including visible imagery, infrared imagery, and ground penetrating radar (GPR), have been used to acquire data on a number of buried mines and mine surrogates. Because the visible wavelength and GPR data are currently incomplete, this paper focuses on the fusion of two-band infrared images. We use feature-level fusion and supervised learning with the probabilistic neural network (PNN) to evaluate detection performance. The novelty of the work lies in the application of advanced target recognition algorithms, the fusion of dual-band infrared images, and the evaluation of the techniques using two real data sets.
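
    As a rough illustration of feature-level fusion followed by a probabilistic neural network (PNN), the sketch below concatenates features from two infrared bands and classifies them with a simple Parzen-window PNN. It is not the authors' implementation; feature dimensions, labels and the kernel width are hypothetical.

    # Sketch of feature-level fusion with a probabilistic neural network (PNN),
    # i.e., a Parzen-window classifier; illustrative only.
    import numpy as np

    def pnn_predict(train_x, train_y, test_x, sigma=1.0):
        """Classify test_x by class-conditional Parzen density estimates."""
        classes = np.unique(train_y)
        scores = []
        for c in classes:
            xc = train_x[train_y == c]                          # patterns of class c
            # squared distances between each test pattern and class-c patterns
            d2 = ((test_x[:, None, :] - xc[None, :, :]) ** 2).sum(-1)
            scores.append(np.exp(-d2 / (2 * sigma ** 2)).mean(axis=1))
        return classes[np.argmax(np.stack(scores, axis=1), axis=1)]

    # Feature-level fusion: concatenate features extracted from the two IR bands
    band1_feats = np.random.rand(200, 5)      # hypothetical band-1 features
    band2_feats = np.random.rand(200, 5)      # hypothetical band-2 features
    labels = np.random.randint(0, 2, 200)     # 1 = mine, 0 = background
    fused = np.concatenate([band1_feats, band2_feats], axis=1)
    pred = pnn_predict(fused[:150], labels[:150], fused[150:])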

  11. A tool for filtering information in complex systems

    PubMed Central

    Tumminello, M.; Aste, T.; Di Matteo, T.; Mantegna, R. N.

    2005-01-01

    We introduce a technique to filter out complex data sets by extracting a subgraph of representative links. Such a filtering can be tuned up to any desired level by controlling the genus of the resulting graph. We show that this technique is especially suitable for correlation-based graphs, giving filtered graphs that preserve the hierarchical organization of the minimum spanning tree but contain a larger amount of information in their internal structure. In particular, in the case of planar filtered graphs (genus equal to 0), triangular loops and four-element cliques are formed. The application of this filtering procedure to 100 stocks in the U.S. equity markets shows that such loops and cliques have important and significant relationships with the market structure and properties. PMID:16027373

  12. A tool for filtering information in complex systems.

    PubMed

    Tumminello, M; Aste, T; Di Matteo, T; Mantegna, R N

    2005-07-26

    We introduce a technique to filter out complex data sets by extracting a subgraph of representative links. Such a filtering can be tuned up to any desired level by controlling the genus of the resulting graph. We show that this technique is especially suitable for correlation-based graphs, giving filtered graphs that preserve the hierarchical organization of the minimum spanning tree but contain a larger amount of information in their internal structure. In particular, in the case of planar filtered graphs (genus equal to 0), triangular loops and four-element cliques are formed. The application of this filtering procedure to 100 stocks in the U.S. equity markets shows that such loops and cliques have important and significant relationships with the market structure and properties.
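
    A minimal sketch of the correlation-based backbone underlying this filtering (the minimum spanning tree preserved by the filtered graphs) is given below, using the standard correlation-to-distance transformation. The genus-controlled planar filtering itself is not reproduced; the data are synthetic and SciPy is assumed.

    # Sketch of correlation-based filtering via a minimum spanning tree (MST),
    # the backbone preserved by the filtered graphs described above.
    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree

    def mst_filter(returns):
        """returns: array of shape (n_days, n_stocks) of price returns."""
        corr = np.corrcoef(returns, rowvar=False)    # stock-by-stock correlations
        dist = np.sqrt(2.0 * (1.0 - corr))           # standard correlation distance
        mst = minimum_spanning_tree(dist)            # sparse n_stocks x n_stocks
        return mst.toarray()                         # kept links, 0 elsewhere

    # Hypothetical example with 100 stocks and 250 trading days
    rets = np.random.randn(250, 100) * 0.01
    backbone = mst_filter(rets)
    print((backbone > 0).sum(), "links retained")    # n_stocks - 1 edges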

  13. Validation of motion correction techniques for liver CT perfusion studies

    PubMed Central

    Chandler, A; Wei, W; Anderson, E F; Herron, D H; Ye, Z; Ng, C S

    2012-01-01

    Objectives Motion in images potentially compromises the evaluation of temporally acquired CT perfusion (CTp) data; image registration should mitigate this, but first requires validation. Our objective was to compare the relative performance of manual, rigid and non-rigid registration techniques to correct anatomical misalignment in acquired liver CTp data sets. Methods 17 data sets in patients with liver tumours who had undergone a CTp protocol were evaluated. Each data set consisted of a cine acquisition during a breath-hold (Phase 1), followed by six further sets of cine scans (each containing 11 images) acquired during free breathing (Phase 2). Phase 2 images were registered to a reference image from Phase 1 cine using two semi-automated intensity-based registration techniques (rigid and non-rigid) and a manual technique (the only option available in the relevant vendor CTp software). The performance of each technique to align liver anatomy was assessed by four observers, independently and blindly, on two separate occasions, using a semi-quantitative visual validation study (employing a six-point score). The registration techniques were statistically compared using an ordinal probit regression model. Results 306 registrations (2448 observer scores) were evaluated. The three registration techniques were significantly different from each other (p=0.03). On pairwise comparison, the semi-automated techniques were significantly superior to the manual technique, with non-rigid significantly superior to rigid (p<0.0001), which in turn was significantly superior to manual registration (p=0.04). Conclusion Semi-automated registration techniques achieved superior alignment of liver anatomy compared with the manual technique. We hope this will translate into more reliable CTp analyses. PMID:22374283

  14. A technique for searching for the 2 K capture in 124Xe with a copper proportional counter

    NASA Astrophysics Data System (ADS)

    Gavrilyuk, Yu. M.; Gangapshev, A. M.; Kazalov, V. V.; Kuzminov, V. V.; Panasenko, S. I.; Ratkevich, S. S.; Tekueva, D. A.; Yakimenko, S. P.

    2015-12-01

    An experimental technique for searching for the 2 K capture in 124Xe with a large low-background copper proportional counter is described. Such an experiment is conducted at the Baksan Neutrino Observatory of the Institute for Nuclear Research of the Russian Academy of Sciences. The experimental setup is located in the Low-Background Deep-Level Laboratory at a depth of 4900 m.w.e., where the flux of cosmic-ray muons is suppressed by a factor of 10^7 relative to that at the Earth's surface. The setup incorporates a proportional counter and low-background shielding (18 cm of copper, 15 cm of lead, and 8 cm of borated polyethylene). The results of processing the data obtained in 5 months of live measurement time are presented. A new limit on the half-life of 124Xe with respect to the 2 K capture is set at the level of 2.5 × 10^21 years.

  15. Multi-focus and multi-level techniques for visualization and analysis of networks with thematic data

    NASA Astrophysics Data System (ADS)

    Cossalter, Michele; Mengshoel, Ole J.; Selker, Ted

    2013-01-01

    Information-rich data sets bring several challenges in the areas of visualization and analysis, even when associated with node-link network visualizations. This paper presents an integration of multi-focus and multi-level techniques that enables interactive, multi-step comparisons in node-link networks. We describe NetEx, a visualization tool that enables users to simultaneously explore different parts of a network and its thematic data, such as time series or conditional probability tables. NetEx, implemented as a Cytoscape plug-in, has been applied to the analysis of electrical power networks, Bayesian networks, and the Enron e-mail repository. In this paper we briefly discuss visualization and analysis of the Enron social network, but focus on data from an electrical power network. Specifically, we demonstrate how NetEx supports the analytical task of electrical power system fault diagnosis. Results from a user study with 25 subjects suggest that NetEx enables more accurate isolation of complex faults compared to a specially designed software tool.

  16. Risk management in the North sea offshore industry: History, status and challenges

    NASA Astrophysics Data System (ADS)

    Smith, E. J.

    1995-10-01

    There have been major changes in the UK and Norwegian offshore safety regimes in the last decade. On the basis of accumulated experience (including some major accidents), there has been a move away from a rigid, prescriptive approach to setting safety standards; it is now recognised that a more flexible, "goal-setting" approach is better suited to achieving cost-effective solutions to offshore safety. In order to adapt to this approach, offshore operators are increasingly using Quantitative Risk Assessment (QRA) techniques as part of their risk management programmes. Structured risk assessment can be used at all stages of a project life-cycle. In the design stages (concept and detailed design), these techniques are valuable tools in ensuring that money is wisely spent on safety-related systems. In the operational stage, QRA can aid the development of procedures. High quality Safety Management Systems (SMSs), covering issues such as training, inspection, and emergency planning, are crucial to maintaining "as-designed" levels of safety and reliability. Audits of SMSs should be carried out throughout the operational phase to ensure that risky conditions do not accumulate.

  17. A New Adaptive Framework for Collaborative Filtering Prediction

    PubMed Central

    Almosallam, Ibrahim A.; Shang, Yi

    2010-01-01

    Collaborative filtering is one of the most successful techniques for recommendation systems and has been used in many commercial services provided by major companies including Amazon, TiVo and Netflix. In this paper we focus on memory-based collaborative filtering (CF). Existing CF techniques work well on dense data but poorly on sparse data. To address this weakness, we propose to use z-scores instead of explicit ratings and introduce a mechanism that adaptively combines global statistics with item-based values based on data density level. We present a new adaptive framework that encapsulates various CF algorithms and the relationships among them. An adaptive CF predictor is developed that can self adapt from user-based to item-based to hybrid methods based on the amount of available ratings. Our experimental results show that the new predictor consistently obtained more accurate predictions than existing CF methods, with the most significant improvement on sparse data sets. When applied to the Netflix Challenge data set, our method performed better than existing CF and singular value decomposition (SVD) methods and achieved 4.67% improvement over Netflix’s system. PMID:21572924

  18. A New Adaptive Framework for Collaborative Filtering Prediction.

    PubMed

    Almosallam, Ibrahim A; Shang, Yi

    2008-06-01

    Collaborative filtering is one of the most successful techniques for recommendation systems and has been used in many commercial services provided by major companies including Amazon, TiVo and Netflix. In this paper we focus on memory-based collaborative filtering (CF). Existing CF techniques work well on dense data but poorly on sparse data. To address this weakness, we propose to use z-scores instead of explicit ratings and introduce a mechanism that adaptively combines global statistics with item-based values based on data density level. We present a new adaptive framework that encapsulates various CF algorithms and the relationships among them. An adaptive CF predictor is developed that can self adapt from user-based to item-based to hybrid methods based on the amount of available ratings. Our experimental results show that the new predictor consistently obtained more accurate predictions than existing CF methods, with the most significant improvement on sparse data sets. When applied to the Netflix Challenge data set, our method performed better than existing CF and singular value decomposition (SVD) methods and achieved 4.67% improvement over Netflix's system.
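
    The z-score idea at the core of this framework can be sketched as below: ratings are converted to per-user z-scores, combined with a global per-item statistic, and mapped back to the rating scale. This is only an illustration under assumed variable names; the adaptive blending of user-based, item-based and hybrid predictors described in the paper is not reproduced.

    # Minimal sketch of z-score-based rating prediction; illustrative only.
    import numpy as np

    def user_zscores(R):
        """R: ratings matrix (users x items) with np.nan for missing entries."""
        mu = np.nanmean(R, axis=1, keepdims=True)         # per-user mean
        sd = np.nanstd(R, axis=1, keepdims=True) + 1e-9   # per-user std
        return (R - mu) / sd, mu, sd

    def predict(R, user, item):
        """Predict R[user, item] from the item's average z-score."""
        Z, mu, sd = user_zscores(R)
        item_z = np.nanmean(Z[:, item])                   # global per-item statistic
        return mu[user, 0] + sd[user, 0] * item_z         # back to the rating scale

    # Hypothetical 4-user x 3-item ratings matrix
    R = np.array([[5, 4, np.nan],
                  [3, np.nan, 2],
                  [4, 4, 3],
                  [np.nan, 5, 4]], dtype=float)
    print(predict(R, user=0, item=2))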

  19. Co-culture systems and technologies: taking synthetic biology to the next level.

    PubMed

    Goers, Lisa; Freemont, Paul; Polizzi, Karen M

    2014-07-06

    Co-culture techniques find myriad applications in biology for studying natural or synthetic interactions between cell populations. Such techniques are of great importance in synthetic biology, as multi-species cell consortia and other natural or synthetic ecology systems are widely seen to hold enormous potential for foundational research as well as novel industrial, medical and environmental applications with many proof-of-principle studies in recent years. What is needed for co-cultures to fulfil their potential? Cell-cell interactions in co-cultures are strongly influenced by the extracellular environment, which is determined by the experimental set-up, which therefore needs to be given careful consideration. An overview of existing experimental and theoretical co-culture set-ups in synthetic biology and adjacent fields is given here, and challenges and opportunities involved in such experiments are discussed. Greater focus on foundational technology developments for co-cultures is needed for many synthetic biology systems to realize their potential in both applications and answering biological questions. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  20. Results of the Greenland ice sheet model initialisation experiments: ISMIP6 - initMIP-Greenland

    NASA Astrophysics Data System (ADS)

    Goelzer, Heiko; Nowicki, Sophie; Edwards, Tamsin; Beckley, Matthew

    2017-04-01

    Ice sheet model initialisation has a large effect on projected future sea-level contributions and gives rise to important uncertainties. The goal of this intercomparison exercise for the continental-scale Greenland ice sheet is therefore to compare, evaluate and improve the initialisation techniques used in the ice sheet modelling community. The initMIP-Greenland project is the first in a series of ice sheet model intercomparison activities within ISMIP6 (Ice Sheet Model Intercomparison Project for CMIP6). The experimental set-up has been designed to allow comparison of the initial present-day state of the Greenland ice sheet between participating models and against observations. Furthermore, the initial states are tested with two schematic forward experiments to evaluate the initialisation in terms of model drift (forward run without any forcing) and response to a large perturbation (prescribed surface mass balance anomaly). We present and discuss results that highlight the wide diversity of data sets, boundary conditions and initialisation techniques used in the community to generate initial states of the Greenland ice sheet.

  1. Designs for the combination of group- and individual-level data

    PubMed Central

    Haneuse, Sebastien; Bartell, Scott

    2012-01-01

    Background Studies of ecologic or aggregate data suffer from a broad range of biases when scientific interest lies with individual-level associations. To overcome these biases, epidemiologists can choose from a range of designs that combine these group-level data with individual-level data. The individual-level data provide information to identify, evaluate, and control bias, while the group-level data are often readily accessible and provide gains in efficiency and power. Within this context, the literature on developing models, particularly multi-level models, is well-established, but little work has been published to help researchers choose among competing designs and plan additional data collection. Methods We review recently proposed “combined” group- and individual-level designs and methods that collect and analyze data at two levels of aggregation. These include aggregate data designs, hierarchical related regression, two-phase designs, and hybrid designs for ecologic inference. Results The various methods differ in (i) the data elements available at the group and individual levels and (ii) the statistical techniques used to combine the two data sources. Implementing these techniques requires care, and it may often be simpler to ignore the group-level data once the individual-level data are collected. A simulation study, based on birth-weight data from North Carolina, is used to illustrate the benefit of incorporating group-level information. Conclusions Our focus is on settings where there are individual-level data to supplement readily accessible group-level data. In this context, no single design is ideal. Choosing which design to adopt depends primarily on the model of interest and the nature of the available group-level data. PMID:21490533

  2. Assessment of pharmacists' job satisfaction and job related stress in Amman.

    PubMed

    Al Khalidi, Doaa; Wazaify, Mayyada

    2013-10-01

    The myriad changes in pharmacy practice in Jordan have transformed the pharmacist's role to be more focused on the patient and his/her therapeutic needs than on just traditional dispensing. This, in addition to other possible factors, is believed to have influenced pharmacists' job satisfaction and stress levels in different practice settings in Jordan. This study aimed to determine the level of job satisfaction and job-related stress among pharmacists in Amman. Moreover, the main causes of dissatisfaction and the stress-related factors affecting pharmacists in their working positions were also explored. The study was conducted in four pharmacy practice settings: independent and chain community pharmacies as well as private and public hospital pharmacies. The study adopted a self-administered survey methodology using a pre-validated, pre-piloted questionnaire. The questionnaire was adapted from one previously used in Northern Ireland. Data were entered into an SAS database and analysed using descriptive statistics, chi-square and regression analysis. The significance level was set at P < 0.05. The main outcome measures were the level of, and factors affecting, job satisfaction and job-related stress as reported by participating pharmacists. A total of 235 registered pharmacists in Amman were involved. The pharmacists' job satisfaction was significantly affected by the type of pharmacy practice setting (P = 0.038), pharmacists' registration year (P = 0.048) and marital status (P = 0.023). Moreover, job-related stress situations, such as patient care responsibility, were associated significantly with the type of pharmacy practice setting (P = 0.043) and pharmacists' registration year (P = 0.013). Other job stressors, such as long working hours, lack of advancement and promotion opportunities, and poor physician-pharmacist relationships, were also reported by participants. The study concluded that community pharmacists in Amman are less satisfied with their jobs than their hospital counterparts. Pharmacists' job satisfaction should be enhanced to improve pharmacists' motivation and competence. Consequently, this will improve their productivity and the provision of pharmaceutical care.

  3. Search for dark matter decay of the free neutron from the UCNA experiment: n →χ +e+e-

    NASA Astrophysics Data System (ADS)

    Sun, X.; Adamek, E.; Allgeier, B.; Blatnik, M.; Bowles, T. J.; Broussard, L. J.; Brown, M. A.-P.; Carr, R.; Clayton, S.; Cude-Woods, C.; Currie, S.; Dees, E. B.; Ding, X.; Filippone, B. W.; García, A.; Geltenbort, P.; Hasan, S.; Hickerson, K. P.; Hoagland, J.; Hong, R.; Hogan, G. E.; Holley, A. T.; Ito, T. M.; Knecht, A.; Liu, C.-Y.; Liu, J.; Makela, M.; Mammei, R.; Martin, J. W.; Melconian, D.; Mendenhall, M. P.; Moore, S. D.; Morris, C. L.; Nepal, S.; Nouri, N.; Pattie, R. W.; Pérez Galván, A.; Phillips, D. G.; Picker, R.; Pitt, M. L.; Plaster, B.; Ramsey, J. C.; Rios, R.; Salvat, D. J.; Saunders, A.; Sondheim, W.; Sjue, S.; Slutsky, S.; Swank, C.; Swift, G.; Tatar, E.; Vogelaar, R. B.; VornDick, B.; Wang, Z.; Wei, W.; Wexler, J.; Womack, T.; Wrede, C.; Young, A. R.; Zeck, B. A.; UCNA Collaboration

    2018-05-01

    It has been proposed recently that a previously unobserved neutron decay branch to a dark matter particle (χ) could account for the discrepancy in the neutron lifetime observed in experiments that use two different measurement techniques. One of the possible final states discussed includes a single χ along with an e+e- pair. We use data from the UCNA (Ultracold Neutron Asymmetry) experiment to set limits on this decay channel. Coincident electron-like events are detected with ~4π acceptance using a pair of detectors that observe a volume of stored ultracold neutrons. The summed kinetic energy (Ee+e-) from such events is used to set limits, as a function of the χ mass, on the branching fraction for this decay channel. For χ masses consistent with resolving the neutron lifetime discrepancy, we exclude this as the dominant dark matter decay channel at the ≫5σ level; limits on the branching fraction for this channel are also set at the 90% confidence level.

  4. Gene regulatory network inference using fused LASSO on multiple data sets

    PubMed Central

    Omranian, Nooshin; Eloundou-Mbebi, Jeanne M. O.; Mueller-Roeber, Bernd; Nikoloski, Zoran

    2016-01-01

    Devising computational methods to accurately reconstruct gene regulatory networks given gene expression data is key to systems biology applications. Here we propose a method for reconstructing gene regulatory networks by simultaneous consideration of data sets from different perturbation experiments and corresponding controls. The method imposes three biologically meaningful constraints: (1) expression levels of each gene should be explained by the expression levels of a small number of transcription factor coding genes, (2) networks inferred from different data sets should be similar with respect to the type and number of regulatory interactions, and (3) relationships between genes which exhibit similar differential behavior over the considered perturbations should be favored. We demonstrate that these constraints can be transformed in a fused LASSO formulation for the proposed method. The comparative analysis on transcriptomics time-series data from prokaryotic species, Escherichia coli and Mycobacterium tuberculosis, as well as a eukaryotic species, mouse, demonstrated that the proposed method has the advantages of the most recent approaches for regulatory network inference, while obtaining better performance and assigning higher scores to the true regulatory links. The study indicates that the combination of sparse regression techniques with other biologically meaningful constraints is a promising framework for gene regulatory network reconstructions. PMID:26864687
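
    A generic fused-LASSO objective of the kind described above, with notation introduced here purely for illustration (it is not necessarily the paper's exact formulation), is

        \min_{\beta^{(1)},\dots,\beta^{(K)}}\;
            \sum_{k=1}^{K} \bigl\| y_g^{(k)} - X^{(k)}\beta^{(k)} \bigr\|_2^2
            \;+\; \lambda_1 \sum_{k=1}^{K} \bigl\| \beta^{(k)} \bigr\|_1
            \;+\; \lambda_2 \sum_{k=2}^{K} \bigl\| \beta^{(k)} - \beta^{(k-1)} \bigr\|_1 ,

    where y_g^{(k)} is the expression of a target gene g in data set k, X^{(k)} holds the expression levels of candidate transcription factors, the l1 penalty enforces the sparsity constraint (1), and the fusion penalty encourages similar regulatory coefficients across data sets, reflecting constraint (2); the additional grouping of genes with similar differential behavior, constraint (3), is not shown in this simplified form.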

  5. Radial sets: interactive visual analysis of large overlapping sets.

    PubMed

    Alsallakh, Bilal; Aigner, Wolfgang; Miksch, Silvia; Hauser, Helwig

    2013-12-01

    In many applications, data tables contain multi-valued attributes that often store the memberships of the table entities to multiple sets, such as which languages a person masters, which skills an applicant documents, or which features a product comes with. With a growing number of entities, the resulting element-set membership matrix becomes very rich in information about how these sets overlap. Many analysis tasks targeted at set-typed data are concerned with these overlaps as salient features of such data. This paper presents Radial Sets, a novel visual technique to analyze set memberships for a large number of elements. Our technique uses frequency-based representations to enable quickly finding and analyzing different kinds of overlaps between the sets, and relating these overlaps to other attributes of the table entities. Furthermore, it enables various interactions to select elements of interest, find out if they are over-represented in specific sets or overlaps, and whether they exhibit a different distribution for a specific attribute compared to the rest of the elements. These interactions allow formulating highly expressive visual queries on the elements in terms of their set memberships and attribute values. As we demonstrate via two usage scenarios, Radial Sets enable revealing and analyzing a multitude of overlapping patterns between large sets, beyond the limits of state-of-the-art techniques.
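
    The element-set membership matrix that such a technique is built on, and the pairwise overlap counts it exposes, can be illustrated with the following small sketch (synthetic data; not the tool's own code).

    # Sketch of an element-set membership matrix and its pairwise overlaps.
    import numpy as np

    # Hypothetical membership matrix: rows = elements, columns = sets,
    # M[i, j] = 1 if element i belongs to set j.
    M = np.array([[1, 0, 1],
                  [1, 1, 0],
                  [0, 1, 1],
                  [1, 1, 1]])

    set_sizes = M.sum(axis=0)          # how many elements each set contains
    overlaps  = M.T @ M                # overlaps[a, b] = |set_a ∩ set_b|
    degrees   = M.sum(axis=1)          # in how many sets each element appears

    print(set_sizes)                   # [3 3 3]
    print(overlaps)                    # diagonal repeats the set sizes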

  6. A retrospective randomized study to compare the energy delivered using CDE with different techniques and OZil settings by different surgeons in phacoemulsification.

    PubMed

    Chen, Ming; Sweeney, Henry W; Luke, Becky; Chen, Mindy; Brown, Mathew

    2009-01-01

    Cumulative dissipated energy (CDE) was used with Infiniti((R)) Vision System (Alcon Labs) as an energy delivery guide to compare four different phaco techniques and phaco settings. The supracapsular phaco technique and burst mode is known for efficiency and surgery is faster compared with the old phaco unit. In this study, we found that supracapsular phaco with burst mode had the least CDE in both cataract and nuclear sclerosis cataract with the new Infiniti((R)) unit. We suggest that CDE can be used as one of the references to modify technique and setting to improve outcome for surgeons, especially for new surgeons.

  7. A retrospective randomized study to compare the energy delivered using CDE with different techniques and OZil® settings by different surgeons in phacoemulsification

    PubMed Central

    Chen, Ming; Sweeney, Henry W; Luke, Becky; Chen, Mindy; Brown, Mathew

    2009-01-01

    Cumulative dissipated energy (CDE) was used with Infiniti® Vision System (Alcon Labs) as an energy delivery guide to compare four different phaco techniques and phaco settings. The supracapsular phaco technique and burst mode is known for efficiency and surgery is faster compared with the old phaco unit. In this study, we found that supracapsular phaco with burst mode had the least CDE in both cataract and nuclear sclerosis cataract with the new Infiniti® unit. We suggest that CDE can be used as one of the references to modify technique and setting to improve outcome for surgeons, especially for new surgeons. PMID:19688027

  8. Unsupervised classification of earth resources data.

    NASA Technical Reports Server (NTRS)

    Su, M. Y.; Jayroe, R. R., Jr.; Cummings, R. E.

    1972-01-01

    A new clustering technique is presented. It consists of two parts: (a) a sequential statistical clustering, which is essentially a sequential variance analysis, and (b) a generalized K-means clustering. In this composite clustering technique, the output of (a) is a set of initial clusters which are input to (b) for further improvement by an iterative scheme. This unsupervised composite technique was employed for automatic classification of two sets of remote multispectral earth resource observations. The classification accuracy of the unsupervised technique is found to be comparable to that of the existing supervised maximum likelihood classification technique.
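
    A rough sketch of the composite idea, with the output of the sequential stage (a) used to initialise the K-means stage (b), is shown below; scikit-learn's K-means stands in for the generalized K-means, and the data and initial centers are synthetic.

    # Sketch: first-pass cluster centers initialise a K-means refinement.
    import numpy as np
    from sklearn.cluster import KMeans

    def refine_clusters(pixels, initial_centers):
        """pixels: (n_samples, n_bands) multispectral observations.
        initial_centers: (n_clusters, n_bands) centers from stage (a)."""
        km = KMeans(n_clusters=len(initial_centers),
                    init=np.asarray(initial_centers),
                    n_init=1)                          # keep the supplied initialisation
        labels = km.fit_predict(pixels)
        return labels, km.cluster_centers_

    # Hypothetical 4-band multispectral pixels and 3 first-pass centers
    pixels = np.random.rand(1000, 4)
    init = np.random.rand(3, 4)
    labels, centers = refine_clusters(pixels, init)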

  9. Probabilistic Photometric Redshifts in the Era of Petascale Astronomy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrasco Kind, Matias

    2014-01-01

    With the growth of large photometric surveys, accurately estimating photometric redshifts, preferably as a probability density function (PDF), and fully understanding the implicit systematic uncertainties in this process has become increasingly important. These surveys are expected to obtain images of billions of distinct galaxies. As a result, storing and analyzing all of these photometric redshift PDFs will be non-trivial, and this challenge becomes even more severe if a survey plans to compute and store multiple different PDFs. In this thesis, we have developed an end-to-end framework that will compute accurate and robust photometric redshift PDFs for massive data sets by using two new, state-of-the-art machine learning techniques that are based on a random forest and a random atlas, respectively. By using data from several photometric surveys, we demonstrate the applicability of these new techniques, and we demonstrate that our new approach is among the best techniques currently available. We also show how different techniques can be combined by using novel Bayesian techniques to improve the photometric redshift precision to unprecedented levels while also presenting new approaches to better identify outliers. In addition, our framework provides supplementary information regarding the data being analyzed, including unbiased estimates of the accuracy of the technique without resorting to a validation data set, identification of poor photometric redshift areas within the parameter space occupied by the spectroscopic training data, and a quantification of the relative importance of the variables used during the estimation process. Furthermore, we present a new approach to represent and store photometric redshift PDFs by using a sparse representation with outstanding compression and reconstruction capabilities. We also demonstrate how this framework can be directly incorporated into cosmological analyses. The new techniques presented in this thesis are crucial to enable the development of precision cosmology in the era of petascale astronomical surveys.
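
    The following sketch shows one common way to turn a random-forest regressor into per-object photometric-redshift PDFs by histogramming the per-tree predictions. It illustrates the general approach only, not the framework developed in the thesis, and the photometry is synthetic.

    # Sketch: random-forest photo-z point estimates plus crude per-galaxy PDFs.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def photoz_pdf(train_mags, train_z, test_mags, bins):
        rf = RandomForestRegressor(n_estimators=200)
        rf.fit(train_mags, train_z)
        # one redshift prediction per tree per galaxy
        per_tree = np.stack([t.predict(test_mags) for t in rf.estimators_], axis=1)
        pdfs = np.stack([np.histogram(row, bins=bins, density=True)[0]
                         for row in per_tree])
        return per_tree.mean(axis=1), pdfs        # point estimate + PDF per galaxy

    # Hypothetical photometry: 5 magnitudes per galaxy, spectroscopic training z
    train_mags = np.random.rand(2000, 5)
    train_z = np.random.rand(2000) * 2.0
    test_mags = np.random.rand(10, 5)
    zhat, pdfs = photoz_pdf(train_mags, train_z, test_mags,
                            bins=np.linspace(0.0, 2.0, 41))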

  10. DNA-based cryptographic methods for data hiding in DNA media.

    PubMed

    Marwan, Samiha; Shawish, Ahmed; Nagaty, Khaled

    2016-12-01

    Information security can be achieved using cryptography, steganography or a combination of them, where data is first encrypted using any of the available cryptography techniques and then hidden in a hiding medium. Recently, genomic DNA has been introduced as a hiding medium, known as DNA steganography, due to its notable ability to hide huge data sets with a high level of randomness and hence security. Despite the numerous cryptography techniques, to our knowledge only the Vigenere cipher and the DNA-based Playfair cipher have been combined with DNA steganography, which leaves room for investigating other techniques and coming up with new improvements. This paper presents a comprehensive analysis of the DNA-based Playfair, Vigenere, RSA and AES ciphers, each combined with a DNA hiding technique. The conducted analysis reports the performance diversity of each combined technique in terms of security, speed and hiding capacity, in addition to both key size and data size. Moreover, this paper proposes a modification of the current combined DNA-based Playfair cipher technique, which makes it not only simple and fast but also provides a significantly higher hiding capacity and security. The conducted extensive experimental studies confirm such outstanding performance in comparison with all the discussed combined techniques. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
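
    For illustration, a Vigenere-style cipher operating directly on the DNA alphabet can be sketched as below (rotation modulo 4 with a repeating DNA key). This is a generic example of the class of ciphers compared in the paper, not its exact algorithm; the payload and key are hypothetical.

    # Minimal sketch of a Vigenere-style cipher over the DNA alphabet {A,C,G,T}.
    BASES = "ACGT"
    IDX = {b: i for i, b in enumerate(BASES)}

    def dna_vigenere(seq, key, decrypt=False):
        """Shift each base by the key base's index, modulo 4."""
        sign = -1 if decrypt else 1
        out = []
        for i, base in enumerate(seq):
            shift = IDX[key[i % len(key)]]
            out.append(BASES[(IDX[base] + sign * shift) % 4])
        return "".join(out)

    plain = "ACGTACGGTTCA"       # DNA-encoded payload (hypothetical)
    key = "GATC"                 # secret DNA key (hypothetical)
    cipher = dna_vigenere(plain, key)
    assert dna_vigenere(cipher, key, decrypt=True) == plain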

  11. Learning Semantic Tags from Big Data for Clinical Text Representation.

    PubMed

    Li, Yanpeng; Liu, Hongfang

    2015-01-01

    In clinical text mining, one of the biggest challenges is to represent medical terminologies and n-gram terms in sparse medical reports using either supervised or unsupervised methods. Addressing this issue, we propose a novel method for word and n-gram representation at the semantic level. We first represent each word by its distances to a set of reference features calculated by a reference distance estimator (RDE) learned from labeled and unlabeled data, and then generate new features using simple techniques of discretization, random sampling and merging. The new features are a set of binary rules that can be interpreted as semantic tags derived from words and n-grams. We show that the new features significantly outperform classical bag-of-words and n-grams in the task of heart disease risk factor extraction in the i2b2 2014 challenge. It is promising to see that semantic tags can be used to replace the original text entirely, with even better prediction performance, as well as to derive new rules beyond the lexical level.

  12. High-level ab initio enthalpies of formation of 2,5-dimethylfuran, 2-methylfuran, and furan.

    PubMed

    Feller, David; Simmie, John M

    2012-11-29

    A high-level ab initio thermochemical technique, known as the Feller-Petersen-Dixon method, is used to calculate the total atomization energies and hence the enthalpies of formation of 2,5-dimethylfuran, 2-methylfuran, and furan itself as a means of rationalizing significant discrepancies in the literature. In order to avoid extremely large standard coupled cluster theory calculations, the explicitly correlated CCSD(T)-F12b variation was used with basis sets up to cc-pVQZ-F12. After extrapolating to the complete basis set limit and applying corrections for core/valence, scalar relativistic, and higher order effects, the final Δ(f)H° (298.15 K) values, with the available experimental values in parentheses are furan -34.8 ± 3 (-34.7 ± 0.8), 2-methylfuran -80.3 ± 5 (-76.4 ± 1.2), and 2,5-dimethylfuran -124.6 ± 6 (-128.1 ± 1.1) kJ mol(-1). The theoretical results exhibit a compelling internal consistency.

  13. Micro-algae come of age as a platform for recombinant protein production

    PubMed Central

    Specht, Elizabeth; Miyake-Stoner, Shigeki

    2010-01-01

    A complete set of genetic tools is still being developed for the micro-alga Chlamydomonas reinhardtii. Yet even with this incomplete set, this photosynthetic single-celled plant has demonstrated significant promise as a platform for recombinant protein expression. In recent years, techniques have been developed that allow for robust expression of genes from both the nuclear and plastid genome. With these advances, many research groups have examined the pliability of this and other micro-algae as biological machines capable of producing recombinant peptides and proteins. This review describes recent successes in recombinant protein production in Chlamydomonas, including production of complex mammalian therapeutic proteins and monoclonal antibodies at levels sufficient for production at economic parity with existing production platforms. These advances have also shed light on the details of algal protein production at the molecular level, and provide insight into the next steps for optimizing micro-algae as a useful platform for the production of therapeutic and industrially relevant recombinant proteins. PMID:20556634

  14. Integrated resource inventory for southcentral Alaska (INTRISCA)

    NASA Technical Reports Server (NTRS)

    Burns, T.; Carson-Henry, C.; Morrissey, L. A.

    1981-01-01

    The Integrated Resource Inventory for Southcentral Alaska (INTRISCA) Project comprised an integrated set of activities related to the land use planning and resource management requirements of the participating agencies within the southcentral region of Alaska. One subproject involved generating a region-wide land cover inventory of use to all participating agencies. Toward this end, participants first obtained a broad overview of the entire region and identified reasonable expectations of a LANDSAT-based land cover inventory through evaluation of an earlier classification generated during the Alaska Water Level B Study. Classification of more recent LANDSAT data was then undertaken by INTRISCA participants. The latter classification produced a land cover data set that was more specifically related to individual agency needs, concurrently providing a comprehensive training experience for Alaska agency personnel. Other subprojects employed multi-level analysis techniques ranging from refinement of the region-wide classification and photointerpretation, to digital edge enhancement and integration of land cover data into a geographic information system (GIS).

  15. Initialisation of 3D level set for hippocampus segmentation from volumetric brain MR images

    NASA Astrophysics Data System (ADS)

    Hajiesmaeili, Maryam; Dehmeshki, Jamshid; Bagheri Nakhjavanlo, Bashir; Ellis, Tim

    2014-04-01

    Shrinkage of the hippocampus is a primary biomarker for Alzheimer's disease and can be measured through accurate segmentation of brain MR images. The paper describes the problem of initialising a 3D level set algorithm for hippocampus segmentation, which must cope with some challenging characteristics of the structure, such as its small size, wide range of intensities, narrow width, and shape variation. In addition, MR images require bias correction to account for the additional inhomogeneity associated with the scanner technology. Due to these inhomogeneities, using a single initialisation seed region inside the hippocampus is prone to failure. Alternative initialisation strategies are explored, such as using multiple initialisations in different sections (the head, body and tail) of the hippocampus. The Dice metric is used to validate our segmentation results with respect to ground truth for a dataset of 25 MR images. Experimental results indicate a significant improvement in segmentation performance using the multiple-initialisation techniques, yielding more accurate segmentation results for the hippocampus.
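
    The Dice metric used for validation above is twice the overlap between the two masks divided by the sum of their sizes; a minimal sketch (not the authors' implementation, with synthetic masks) follows.

    # Sketch of the Dice overlap metric between a segmentation and ground truth.
    import numpy as np

    def dice(seg, gt):
        """seg, gt: boolean 3D arrays (segmentation and ground-truth masks)."""
        seg, gt = seg.astype(bool), gt.astype(bool)
        inter = np.logical_and(seg, gt).sum()
        denom = seg.sum() + gt.sum()
        return 2.0 * inter / denom if denom else 1.0

    # Hypothetical 3D masks
    seg = np.zeros((64, 64, 32), bool); seg[20:40, 20:40, 10:20] = True
    gt  = np.zeros((64, 64, 32), bool); gt[22:42, 20:40, 10:20] = True
    print(dice(seg, gt))     # 1.0 would mean perfect overlap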

  16. Biophotonics: the big picture

    NASA Astrophysics Data System (ADS)

    Marcu, Laura; Boppart, Stephen A.; Hutchinson, Mark R.; Popp, Jürgen; Wilson, Brian C.

    2018-02-01

    The 5th International Conference on Biophotonics (ICOB) held April 30 to May 1, 2017, in Fremantle, Western Australia, brought together opinion leaders to discuss future directions for the field and opportunities to consider. The first session of the conference, "How to Set a Big Picture Biophotonics Agenda," was focused on setting the stage for developing a vision and strategies for translation and impact on society of biophotonic technologies. The invited speakers, panelists, and attendees engaged in discussions that focused on opportunities and promising applications for biophotonic techniques, challenges when working at the confluence of the physical and biological sciences, driving factors for advances of biophotonic technologies, and educational opportunities. We share a summary of the presentations and discussions. Three main themes from the conference are presented in this position paper that capture the current status, opportunities, challenges, and future directions of biophotonics research and key areas of applications: (1) biophotonics at the nano- to microscale level; (2) biophotonics at meso- to macroscale level; and (3) biophotonics and the clinical translation conundrum.

  17. Machine-printed Arabic OCR

    NASA Astrophysics Data System (ADS)

    Hassibi, Khosrow M.

    1994-02-01

    This paper presents a brief overview of our research in the development of an OCR system for recognition of machine-printed texts in languages that use the Arabic alphabet. The cursive nature of machine-printed Arabic makes the segmentation of words into letters a challenging problem. In our approach, through a novel preliminary segmentation technique, a word is broken into pieces, where each piece may not represent a valid letter in general. Neural networks trained on a training sample set of about 500 Arabic text images are used for recognition of these pieces. The rules governing the alphabet and character-level contextual information are used for recombining these pieces into valid letters. Higher-level contextual analysis schemes, including the use of an Arabic lexicon and n-grams, are also under development and are expected to improve the word recognition accuracy. The segmentation, recognition, and contextual analysis processes are closely integrated using a feedback scheme. The details of the preparation of the training set and some recent results on training of the networks will be presented.

  18. Script-independent text line segmentation in freestyle handwritten documents.

    PubMed

    Li, Yi; Zheng, Yefeng; Doermann, David; Jaeger, Stefan; Li, Yi

    2008-08-01

    Text line segmentation in freestyle handwritten documents remains an open document analysis problem. Curvilinear text lines and small gaps between neighboring text lines present a challenge to algorithms developed for machine printed or hand-printed documents. In this paper, we propose a novel approach based on density estimation and a state-of-the-art image segmentation technique, the level set method. From an input document image, we estimate a probability map, where each element represents the probability that the underlying pixel belongs to a text line. The level set method is then exploited to determine the boundary of neighboring text lines by evolving an initial estimate. Unlike connected component based methods ( [1], [2] for example), the proposed algorithm does not use any script-specific knowledge. Extensive quantitative experiments on freestyle handwritten documents with diverse scripts, such as Arabic, Chinese, Korean, and Hindi, demonstrate that our algorithm consistently outperforms previous methods [1]-[3]. Further experiments show the proposed algorithm is robust to scale change, rotation, and noise.

  19. Coherent Two-Mode Dynamics of a Nanowire Force Sensor

    NASA Astrophysics Data System (ADS)

    Braakman, Floris R.; Rossi, Nicola; Tütüncüoglu, Gözde; Morral, Anna Fontcuberta i.; Poggio, Martino

    2018-05-01

    Classically coherent dynamics analogous to those of quantum two-level systems are studied in the setting of force sensing. We demonstrate quantitative control over the coupling between two orthogonal mechanical modes of a nanowire cantilever through measurement of avoided crossings as we deterministically position the nanowire inside an electric field. Furthermore, we demonstrate Rabi oscillations between the two mechanical modes in the strong-coupling regime. These results give prospects of implementing coherent two-mode control techniques for force-sensing signal enhancement.

  20. discovery toolset for Emulytics v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fritz, David; Crussell, Jonathan

    The discovery toolset for Emulytics enables the construction of high-fidelity emulation models of systems. The toolset consists of a set of tools and techniques to automatically go from network discovery of operational systems to emulating those complex systems. Our toolset combines data from host discovery and network mapping tools into an intermediate representation that can then be further refined. Once the intermediate representation reaches the desired state, our toolset supports emitting the Emulytics models with varying levels of specificity based on experiment needs.

  1. Modeling the interaction of biological cells with a solidifying interface

    NASA Astrophysics Data System (ADS)

    Chang, Anthony; Dantzig, Jonathan A.; Darr, Brian T.; Hubel, Allison

    2007-10-01

    In this article, we develop a modified level set method for modeling the interaction of particles with a solidifying interface. The dynamic computation of the van der Waals and drag forces between the particles and the solidification front leads to a problem of multiple length scales, which we resolve using adaptive grid techniques. We present a variety of example problems to demonstrate the accuracy and utility of the method. We also use the model to interpret experimental results obtained using directional solidification in a cryomicroscope.

  2. Reduced modeling of flexible structures for decentralized control

    NASA Technical Reports Server (NTRS)

    Yousuff, A.; Tan, T. M.; Bahar, L. Y.; Konstantinidis, M. F.

    1986-01-01

    Based upon the modified finite element-transfer matrix method, this paper presents a technique for reduced modeling of flexible structures for decentralized control. The modeling decisions are carried out at (finite-) element level, and are dictated by control objectives. A simply supported beam with two sets of actuators and sensors (linear force actuator and linear position and velocity sensors) is considered for illustration. In this case, it is conjectured that the decentrally controlled closed loop system is guaranteed to be at least marginally stable.

  3. Mapping soil types from multispectral scanner data.

    NASA Technical Reports Server (NTRS)

    Kristof, S. J.; Zachary, A. L.

    1971-01-01

    Multispectral remote sensing and computer-implemented pattern recognition techniques were used for automatic 'mapping' of soil types. This approach involves subjective selection of a set of reference samples from a computer-generated gray-level display of spectral variations. Each resolution element is then classified using a maximum likelihood ratio. The output is a computer printout on which the researcher assigns a different symbol to each class. Four soil test areas in Indiana were experimentally examined using this approach, and partially successful results were obtained.

  4. Linear time relational prototype based learning.

    PubMed

    Gisbrecht, Andrej; Mokbel, Bassam; Schleif, Frank-Michael; Zhu, Xibin; Hammer, Barbara

    2012-10-01

    Prototype based learning offers an intuitive interface to inspect large quantities of electronic data in supervised or unsupervised settings. Recently, many techniques have been extended to data described by general dissimilarities rather than Euclidean vectors, so-called relational data settings. Unlike their Euclidean counterparts, these techniques have quadratic time complexity due to the underlying quadratic dissimilarity matrix; thus, they are already infeasible for medium-sized data sets. The contribution of this article is twofold: on the one hand we propose a novel supervised prototype-based classification technique for dissimilarity data based on popular learning vector quantization (LVQ); on the other hand we transfer a linear-time approximation technique, the Nyström approximation, to this algorithm and to an unsupervised counterpart, the relational generative topographic mapping (GTM). This way, linear time and space methods result. We evaluate the techniques on three examples from the biomedical domain.
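
    The Nyström approximation mentioned above replaces an n × n (dis)similarity matrix by a low-rank reconstruction from a small set of landmark columns; a minimal numerical sketch (synthetic data, not the authors' code) is given below.

    # Sketch of the Nystrom approximation of a large similarity matrix.
    import numpy as np

    def nystrom(K_nm, K_mm):
        """Approximate the full n x n matrix from n x m landmark columns K_nm
        and the m x m landmark block K_mm (m << n)."""
        K_mm_pinv = np.linalg.pinv(K_mm)
        return K_nm @ K_mm_pinv @ K_nm.T        # rank-m approximation of K

    # Hypothetical similarity matrix with a fast-decaying spectrum
    n, m = 500, 50
    X = np.random.rand(n, 3)
    K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))   # RBF similarities
    idx = np.random.choice(n, m, replace=False)                   # landmark indices
    K_hat = nystrom(K[:, idx], K[np.ix_(idx, idx)])
    print(np.abs(K - K_hat).mean())             # mean absolute approximation error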

  5. Overcoming Barriers to the Use of Osteopathic Manipulation Techniques in the Emergency Department

    PubMed Central

    Roberge, Raymond J.; Roberge, Marc R.

    2009-01-01

    Background: Osteopathic Manipulation Techniques (OMT) have been shown to be effective therapeutic modalities in various clinical settings, but appear to be underutilized in the emergency department (ED) setting. Objective: To examine barriers to the use of OMT in the ED and provide suggestions to ameliorate these barriers. Methods: Literature review. Results: While the medical literature cites numerous obstacles to the use of OMT in the ED setting, most can be positively addressed through education, careful planning, and ongoing research into use of these techniques. Recent prospective clinical trials of OMT have demonstrated the utility of these modalities. Conclusion: Osteopathic Manipulation Techniques are useful therapeutic modalities that could be utilized to a greater degree in the ED. As the number of osteopathic emergency physicians increases, the opportunity to employ these techniques should increase. PMID:19718381

  6. Streak Imaging Flow Cytometer for Rare Cell Analysis.

    PubMed

    Balsam, Joshua; Bruck, Hugh Alan; Ossandon, Miguel; Prickril, Ben; Rasooly, Avraham

    2017-01-01

    There is a need for simple and affordable techniques for cytology in clinical applications, especially for point-of-care (POC) medical diagnostics in resource-poor settings. However, this often requires adapting expensive and complex laboratory-based techniques that require significant power and are too massive to transport easily. One such technique is flow cytometry, which has great potential for modification due to the simplicity of the principle of optical tracking of cells. However, it is limited in that regard by the flow focusing technique used to isolate cells for optical detection. This technique inherently reduces the flow rate and is therefore unsuitable for rapid detection of rare cells, which requires large volumes for analysis. To address these limitations, we developed a low-cost, mobile flow cytometer based on streak imaging. In our new configuration we utilize a simple webcam for optical detection over a large area associated with a wide-field flow cell. The new flow cell is capable of larger-volume, higher-throughput fluorescence detection of rare cells than the flow cells with hydrodynamic focusing used in conventional flow cytometry. The webcam is an inexpensive, commercially available system, and for fluorescence analysis we use a 1 W 450 nm blue laser to excite Syto-9 stained cells with emission at 535 nm. We were able to detect low concentrations of stained cells at high flow rates of 10 mL/min, which is suitable for rapidly analyzing larger specimen volumes to detect rare cells at appropriate concentration levels. The new rapid detection capabilities, combined with the simplicity and low cost of this device, suggest a potential for clinical POC flow cytometry in resource-poor settings associated with global health.

  7. Cascade Error Projection: A Learning Algorithm for Hardware Implementation

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Daud, Taher

    1996-01-01

    In this paper, we work out a detailed mathematical analysis for a new learning algorithm termed Cascade Error Projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters. Furthermore, the CEP learning algorithm operates on only one layer, whereas the other set of weights can be calculated deterministically. In association with the dynamical stepsize change concept, which converts the weight update from an infinite space into a finite space, the relation between the current stepsize and the previous energy level is also given, and the estimation procedure for the optimal stepsize is used to validate our proposed technique. Weight values of zero are used to start the learning for every layer, and a single hidden unit is applied instead of a pool of candidate hidden units as in the cascade correlation scheme. Therefore, simplicity in hardware implementation is also obtained. Furthermore, this analysis allows us to select from other methods (such as conjugate gradient descent or Newton's second-order method) the one that is a good candidate for the learning technique. The choice of learning technique depends on the constraints of the problem (e.g., speed, performance, and hardware implementation); one technique may be more suitable than others. Moreover, for a discrete weight space, the theoretical analysis presents the capability of learning with limited weight quantization. Finally, 5- to 8-bit parity and chaotic time series prediction problems are investigated; the simulation results demonstrate that 4-bit or more weight quantization is sufficient for learning a neural network using CEP. In addition, it is demonstrated that this technique is able to compensate for lower bit weight resolution by incorporating additional hidden units. However, generalization results may suffer somewhat with lower bit weight quantization.

  8. Probabilistic topic modeling for the analysis and classification of genomic sequences

    PubMed Central

    2015-01-01

    Background Studies on genomic sequences for classification and taxonomic identification have a leading role in the biomedical field and in the analysis of biodiversity. These studies are focusing on the so-called barcode genes, representing a well defined region of the whole genome. Recently, alignment-free techniques are gaining more importance because they are able to overcome the drawbacks of sequence alignment techniques. In this paper a new alignment-free method for DNA sequence clustering and classification is proposed. The method is based on k-mer representation and text mining techniques. Methods The presented method is based on Probabilistic Topic Modeling, a statistical technique originally proposed for text documents. Probabilistic topic models are able to find in a document corpus the topics (recurrent themes) characterizing classes of documents. This technique, applied to DNA sequences representing the documents, exploits the frequency of fixed-length k-mers and builds a generative model for a training group of sequences. This generative model, obtained through the Latent Dirichlet Allocation (LDA) algorithm, is then used to classify a large set of genomic sequences. Results and conclusions We performed classification of over 7000 16S DNA barcode sequences taken from the Ribosomal Database Project (RDP) repository, training probabilistic topic models. The proposed method is compared to the RDP tool and the Support Vector Machine (SVM) classification algorithm in an extensive set of trials using both complete sequences and short sequence snippets (from 400 bp to 25 bp). Our method reaches very similar results to the RDP classifier and SVM for complete sequences. The most interesting results are obtained when short sequence snippets are considered. In these conditions the proposed method outperforms RDP and SVM with ultra-short sequences and exhibits a smooth decrease of performance, at every taxonomic level, when the sequence length is decreased. PMID:25916734
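
    A compact sketch of the k-mer "bag of words" plus LDA pipeline, using scikit-learn components in place of the authors' setup (toy sequences, k = 4), is shown below.

    # Sketch: k-mer counts as a "bag of words", then a topic model over sequences.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    sequences = ["ACGTACGTGGCA", "TTGACGTACGAA", "GGCATTGACGTT"]   # toy snippets

    # Represent each sequence by its fixed-length k-mer counts (k = 4 here)
    vectorizer = CountVectorizer(analyzer="char", ngram_range=(4, 4), lowercase=False)
    X = vectorizer.fit_transform(sequences)

    # Fit a topic model: topics play the role of recurrent k-mer "themes"
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    doc_topics = lda.fit_transform(X)        # per-sequence topic mixture
    print(doc_topics.shape)                  # (n_sequences, n_topics)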

  9. Analysis of calibration materials to improve dual-energy CT scanning for petrophysical applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ayyalasomavaiula, K.; McIntyre, D.; Jain, J.

    2011-01-01

    Dual-energy CT scanning is a rapidly emerging imaging technique employed in the non-destructive evaluation of various materials. Although CT (Computerized Tomography) has been used for characterizing rocks and visualizing and quantifying multiphase flow through rocks for over 25 years, most of the scanning is done at a voltage setting above 100 kV to take advantage of the Compton scattering (CS) effect, which responds to density changes. Below 100 kV the photoelectric effect (PE) is dominant, which responds to the effective atomic number (Zeff), which is directly related to the photoelectric factor. Using the combination of the two effects helps in better characterization of reservoir rocks. The most common technique for dual-energy CT scanning relies on homogeneous calibration standards to produce the most accurate decoupled data. However, the use of calibration standards with impurities increases the probability of error in the reconstructed data and results in poor rock characterization. This work combines ICP-OES (inductively coupled plasma optical emission spectroscopy) and LIBS (laser induced breakdown spectroscopy) analytical techniques to quantify the type and level of impurities in a set of commercially purchased calibration standards used in dual-energy scanning. The Zeff values for the calibration standards, with and without the impurity data, were calculated as a weighted linear combination of the various elements present and then used in the dual-energy Zeff calculation. Results show a 2 to 5% difference in predicted Zeff values, which may affect the corresponding log calibrations. The effect that these techniques have on improving material identification data is discussed and analyzed. The workflow developed in this paper will translate to more accurate material identification estimates for unknown samples and improved calibration of well logging tools.
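
    For reference, effective atomic numbers for a known composition are often computed with a power-law mixture rule of the following form (exponent of roughly 2.94; the exact rule used in the study may differ, and the example composition is hypothetical).

    # Sketch of a commonly used effective-atomic-number mixture rule.
    def z_eff(fractions, atomic_numbers, m=2.94):
        """fractions: electron fractions of each element (summing to 1)."""
        return sum(f * z ** m for f, z in zip(fractions, atomic_numbers)) ** (1.0 / m)

    # Hypothetical two-element calibration material (electron fractions assumed)
    print(z_eff([0.6, 0.4], [8, 14]))   # oxygen/silicon mixture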

  10. High Resolution Digital Surface Model For Production Of Airport Obstruction Charts Using Spaceborne SAR Sensors

    NASA Astrophysics Data System (ADS)

    Oliveira, Henrique; Rodrigues, Marco; Radius, Andrea

    2012-01-01

    Airport Obstruction Charts (AOCs) are graphical representations of natural or man-made obstructions (their locations and heights) around airfields, according to International Civil Aviation Organization (ICAO) Annexes 4, 14 and 15. One of the most important types of data used in AOC production/update tasks is a Digital Surface Model (first reflective surface) of the surveyed area. The development of advanced remote sensing technologies provides the tools for obstruction data acquisition, while Geographic Information Systems (GIS) present the ideal platform for storing and analyzing this type of data, enabling the production of digital AOCs, greatly increasing the situational awareness of pilots and enhancing the level of air navigation safety [1]. Data corresponding to the first reflective surface can be acquired through Airborne Laser Scanning / Light Detection and Ranging (ALS/LIDAR) or spaceborne SAR systems. The need to survey broad areas, such as the entire territory of a state, makes spaceborne SAR systems the most suitable, in economic and feasibility terms, for performing the monitoring and producing a high-resolution Digital Surface Model (DSM). High-resolution DSM generation depends on many factors: the available data set, the technique used and the processing parameters. To increase precision and obtain high-resolution products, two techniques that exploit a stack of data are available: the Permanent Scatterers (PS) technique [2], which uses a large stack of data to identify stable and coherent targets through multi-temporal analysis, removing the atmospheric contribution and minimizing estimation errors; and the Small Baseline Subset (SBAS) technique ([3], [4]), which relies on small-baseline SAR interferograms and on the so-called singular value decomposition (SVD) method to link independent SAR acquisition data sets separated by large baselines, thus increasing the amount of data used in the analysis.

  11. Assessment of the transportation route of oversize and excessive loads in relation to the load-bearing capacity of existing bridges

    NASA Astrophysics Data System (ADS)

    Doležel, Jiří; Novák, Drahomír; Petrů, Jan

    2017-09-01

    Transportation routes of oversize and excessive loads are currently planned so as to ensure the transit of a vehicle through critical points on the road. Critical points are at-grade road intersections, bridges, etc. This article presents a comprehensive procedure to determine the reliability and load-bearing capacity level of existing bridges on highways and roads using advanced methods of reliability analysis based on Monte Carlo-type simulation techniques in combination with nonlinear finite element method analysis. The safety index, as described in current structural design standards such as ISO and the Eurocodes, is considered the main criterion of the reliability level of existing structures. As an example, the load-bearing capacity of a 60-year-old single-span slab bridge made of precast prestressed concrete girders is determined for the ultimate limit state and the serviceability limit state. The structure's design load capacity was estimated by fully probabilistic nonlinear FEM analysis using the Latin Hypercube Sampling (LHS) simulation technique. Load-bearing capacity values based on the fully probabilistic analysis are compared with the load-bearing capacity levels estimated by deterministic methods for a critical section of the most heavily loaded girders.
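
    A minimal sketch of the Latin Hypercube Sampling step is given below (illustrative distributions and a toy resistance model, not the bridge model analysed in the paper); the failure probability estimated from the LHS design maps directly to a safety index.

    ```python
    # Minimal sketch of Latin Hypercube Sampling for a probabilistic capacity check
    # (illustrative only; distributions and the resistance model are placeholders,
    # not the bridge model used in the paper).
    import numpy as np
    from scipy.stats import qmc, norm, lognorm

    n = 1000
    sampler = qmc.LatinHypercube(d=2, seed=0)
    u = sampler.random(n)                                  # uniform [0,1)^2 LHS design

    fc   = norm.ppf(u[:, 0], loc=45.0, scale=4.5)          # concrete strength [MPa], assumed
    load = lognorm.ppf(u[:, 1], s=0.15, scale=900.0)       # load effect [kNm], assumed

    resistance = 25.0 * fc                                 # toy resistance model [kNm]
    pf = np.mean(resistance < load)                        # failure probability estimate
    beta = -norm.ppf(max(pf, 1.0 / n))                     # corresponding safety index
    print(pf, beta)
    ```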

  12. Using Trained Pixel Classifiers to Select Images of Interest

    NASA Technical Reports Server (NTRS)

    Mazzoni, D.; Wagstaff, K.; Castano, R.

    2004-01-01

    We present a machine-learning-based approach to ranking images based on learned priorities. Unlike previous methods for image evaluation, which typically assess the value of each image based on the presence of predetermined specific features, this method involves using two levels of machine-learning classifiers: one level is used to classify each pixel as belonging to one of a group of rather generic classes, and another level is used to rank the images based on these pixel classifications, given some example rankings from a scientist as a guide. Initial results indicate that the technique works well, producing new rankings that match the scientist's rankings significantly better than would be expected by chance. The method is demonstrated for a set of images collected by a Mars field-test rover.

  13. Structural methodologies for auditing SNOMED.

    PubMed

    Wang, Yue; Halper, Michael; Min, Hua; Perl, Yehoshua; Chen, Yan; Spackman, Kent A

    2007-10-01

    SNOMED is one of the leading health care terminologies in use worldwide. As such, quality assurance is an important part of its maintenance cycle. Methodologies for auditing SNOMED based on structural aspects of its organization are presented. In particular, automated techniques for partitioning SNOMED into smaller groups of concepts based primarily on relationship patterns are defined. Two abstraction networks, the area taxonomy and p-area taxonomy, are derived from the partitions. The high-level views afforded by these abstraction networks form the basis for systematic auditing. The networks tend to highlight errors that manifest themselves as irregularities at the abstract level. They also support group-based auditing, where sets of purportedly similar concepts are focused on for review. The auditing methodologies are demonstrated on one of SNOMED's top-level hierarchies. Errors discovered during the auditing process are reported.

  14. Effective Multi-Query Expansions: Collaborative Deep Networks for Robust Landmark Retrieval.

    PubMed

    Wang, Yang; Lin, Xuemin; Wu, Lin; Zhang, Wenjie

    2017-03-01

    Given a query photo issued by a user (q-user), landmark retrieval returns a set of photos whose landmarks are similar to those of the query; existing studies on landmark retrieval focus on exploiting the geometries of landmarks for similarity matches between candidate photos and a query photo. We observe that the same landmarks provided by different users over a social media community may convey different geometric information depending on the viewpoints and/or angles, and may, subsequently, yield very different retrieval results. In fact, dealing with landmarks whose shapes are of low quality due to the photography of q-users is often nontrivial and has seldom been studied. In this paper, we propose a novel framework, namely, multi-query expansions, to retrieve semantically robust landmarks in two steps. First, we identify the top-k photos regarding the latent topics of a query landmark to construct a multi-query set, so as to remedy its possibly low quality shape. For this purpose, we significantly extend the techniques of Latent Dirichlet Allocation. Then, motivated by typical collaborative filtering methods, we propose to learn collaborative deep-network-based semantic, nonlinear, high-level features over the latent factors of landmark photos, using a training set formed by matrix factorization over the collaborative user-photo matrix with respect to the multi-query set. The learned deep network is further applied to generate the features for all the other photos, meanwhile yielding a compact multi-query set within this feature space. The final ranking scores are then calculated in the high-level feature space between the multi-query set and all other photos, which are ranked to serve as the final landmark retrieval list. Extensive experiments are conducted on real-world social media data with both landmark photos and their user information to show the superior performance over existing methods, especially our recently proposed multi-query based mid-level pattern representation method [1].

  15. A knowledge based system for scientific data visualization

    NASA Technical Reports Server (NTRS)

    Senay, Hikmet; Ignatius, Eve

    1992-01-01

    A knowledge-based system, called visualization tool assistant (VISTA), which was developed to assist scientists in the design of scientific data visualization techniques, is described. The system derives its knowledge from several sources which provide information about data characteristics, visualization primitives, and effective visual perception. The design methodology employed by the system is based on a sequence of transformations which decomposes a data set into a set of data partitions, maps this set of partitions to visualization primitives, and combines these primitives into a composite visualization technique design. Although the primary function of the system is to generate an effective visualization technique design for a given data set by using principles of visual perception, the system also allows users to interactively modify the design and renders the resulting image using a variety of rendering algorithms. The current version of the system primarily supports visualization techniques applicable in the earth and space sciences, although it may easily be extended to include other techniques useful in other disciplines such as computational fluid dynamics, finite-element analysis and medical imaging.

  16. Analysis of control system responses for aircraft stability and efficient numerical techniques for real-time simulations

    NASA Astrophysics Data System (ADS)

    Stroe, Gabriela; Andrei, Irina-Carmen; Frunzulica, Florin

    2017-01-01

    The objectives of this paper are the study and implementation of both aerodynamic and propulsion models, as linear interpolations using look-up tables in a database. The aerodynamic and propulsion dependencies on the state and control variables have been described by analytic polynomial models. Some simplifying hypotheses were made in the development of the nonlinear aircraft simulations. The choice of a particular technique depends on the desired accuracy of the solution and the computational effort to be expended. Each nonlinear simulation includes the full nonlinear dynamics of the bare airframe, with a scaled direct connection from pilot inputs to control surface deflections to provide adequate pilot control. The engine power dynamic response was modeled with an additional state equation, as a first-order lag of the actual power level with respect to the commanded power level, which was computed as a function of throttle position. The number of control inputs and engine power states varied depending on the number of control surfaces and aircraft engines. The set of coupled, nonlinear, first-order ordinary differential equations that comprise the simulation model can be represented by a vector differential equation. A linear time-invariant (LTI) system representing aircraft dynamics for small perturbations about a reference trim condition is given by the state and output equations presented. The gradients are obtained numerically by perturbing each state and control input independently and recording the changes in the trimmed state and output equations. This is done using the numerical technique of central finite differences, applied to the perturbations of the state and control variables. For a reference trim condition of straight and level flight, linearization results in two decoupled sets of linear, constant-coefficient differential equations for longitudinal and lateral/directional motion. The linearization is valid for small perturbations about the reference trim condition. Experimental aerodynamic and thrust data are used to model the applied aerodynamic and propulsion forces and moments for arbitrary states and controls. There is no closed-form solution to such problems, so the equations must be solved using numerical integration. Techniques for solving this initial value problem for ordinary differential equations are employed to obtain approximate solutions at discrete points along the aircraft state trajectory.
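
    The central-finite-difference linearization described above can be sketched as follows (a hedged example: f is a placeholder nonlinear state-derivative function, not the aircraft model of the paper):

    ```python
    # Hedged sketch of the linearization step described above: the A and B matrices
    # of an LTI model are obtained by perturbing each state and control input with
    # central finite differences about a trim point. f(x, u) is a placeholder for
    # the nonlinear state derivative function, not the paper's aircraft model.
    import numpy as np

    def linearize(f, x_trim, u_trim, eps=1e-6):
        n, m = len(x_trim), len(u_trim)
        A = np.zeros((n, n))
        B = np.zeros((n, m))
        for i in range(n):
            dx = np.zeros(n); dx[i] = eps
            A[:, i] = (f(x_trim + dx, u_trim) - f(x_trim - dx, u_trim)) / (2 * eps)
        for j in range(m):
            du = np.zeros(m); du[j] = eps
            B[:, j] = (f(x_trim, u_trim + du) - f(x_trim, u_trim - du)) / (2 * eps)
        return A, B

    # Example with a toy short-period-like model (illustrative dynamics only)
    f = lambda x, u: np.array([-1.2 * x[0] + 0.9 * x[1], -4.0 * x[0] - 1.5 * x[1] + 6.0 * u[0]])
    A, B = linearize(f, np.zeros(2), np.zeros(1))
    print(A); print(B)
    ```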

  17. Intra-operative localisation of thoracic spine level: a simple "'K'-wire in pedicle" technique.

    PubMed

    Thambiraj, Sathya; Quraishi, Nasir A

    2012-05-01

    To describe a simple and reliable method of intra-operative localisation of a thoracic spine level in a single surgical setting. Intra-operative localisation of thoracic spine levels can be difficult due to anatomical constraints, such as the scapular shadow, the patient's size and poor bone quality. This is particularly true in cases of thoracic discectomies in which the vertebral bodies appear normal. There are several methods described in the recent literature to address this. Many of them require a separate procedure, often performed the previous day. We report a technique which addresses the issue of localising the thoracic level intra-operatively. After induction of general anaesthesia, the patient was placed prone and the pedicle of interest was identified using fluoroscopy. A K-wire was then inserted percutaneously into this pedicle under image guidance [confirmed in the antero-posterior (AP) and lateral views]. The wire was then cut close to the skin after bending it. The patient was then positioned laterally and the intended procedure performed through an anterior trans-thoracic approach. The K-wire was removed at the end of the procedure. We routinely used this technique in all our thoracic discectomies (four cases in 2 years). There were no intra-operative complications. This method is simple, avoids the patient undergoing two procedures and requires no more ability than placing an implant in the pedicle under fluoroscopy. Placing the K-wire into a fixed point such as the pedicle facilitates rapid intra-operative viewing of the level of interest, and the wire is removed easily at the conclusion of surgery.

  18. Modeling the Effects of Light and Sucrose on In Vitro Propagated Plants: A Multiscale System Analysis Using Artificial Intelligence Technology

    PubMed Central

    Gago, Jorge; Martínez-Núñez, Lourdes; Landín, Mariana; Flexas, Jaume; Gallego, Pedro P.

    2014-01-01

    Background: Plant acclimation is a highly complex process, which cannot be fully understood by analysis at any one specific level (i.e. subcellular, cellular or whole plant scale). Various soft-computing techniques, such as neural networks or fuzzy logic, were designed to analyze complex multivariate data sets and might be used to model such large multiscale data sets in plant biology. Methodology and Principal Findings: In this study we assessed the effectiveness of applying neuro-fuzzy logic to modeling the effects of light intensity and sucrose concentration in the in vitro culture of kiwifruit on plant acclimation, by modeling multivariate data from 14 parameters at different biological scales of organization. The model provides insights through application of 14 sets of straightforward rules and indicates that plants with lower stomatal aperture areas and higher photoinhibition and photoprotective status score best for acclimation. The model suggests the best condition for obtaining higher quality acclimatized plantlets is the combination of 2.3% sucrose and a photon flux of 122–130 µmol m−2 s−1. Conclusions: Our results demonstrate that artificial intelligence models are not only successful in identifying complex non-linear interactions among variables, by integrating large-scale data sets from different levels of biological organization in a holistic plant systems-biology approach, but can also be used successfully for inferring new results without further experimental work. PMID:24465829

  19. Modeling the effects of light and sucrose on in vitro propagated plants: a multiscale system analysis using artificial intelligence technology.

    PubMed

    Gago, Jorge; Martínez-Núñez, Lourdes; Landín, Mariana; Flexas, Jaume; Gallego, Pedro P

    2014-01-01

    Plant acclimation is a highly complex process, which cannot be fully understood by analysis at any one specific level (i.e. subcellular, cellular or whole plant scale). Various soft-computing techniques, such as neural networks or fuzzy logic, were designed to analyze complex multivariate data sets and might be used to model such large multiscale data sets in plant biology. In this study we assessed the effectiveness of applying neuro-fuzzy logic to modeling the effects of light intensity and sucrose concentration in the in vitro culture of kiwifruit on plant acclimation, by modeling multivariate data from 14 parameters at different biological scales of organization. The model provides insights through application of 14 sets of straightforward rules and indicates that plants with lower stomatal aperture areas and higher photoinhibition and photoprotective status score best for acclimation. The model suggests the best condition for obtaining higher quality acclimatized plantlets is the combination of 2.3% sucrose and a photon flux of 122-130 µmol m(-2) s(-1). Our results demonstrate that artificial intelligence models are not only successful in identifying complex non-linear interactions among variables, by integrating large-scale data sets from different levels of biological organization in a holistic plant systems-biology approach, but can also be used successfully for inferring new results without further experimental work.

  20. Endoscopic resection of subepithelial tumors.

    PubMed

    Schmidt, Arthur; Bauder, Markus; Riecken, Bettina; Caca, Karel

    2014-12-16

    Management of subepithelial tumors (SETs) remains challenging. Endoscopic ultrasound (EUS) has improved differential diagnosis of these tumors but a definitive diagnosis on EUS findings alone can be achieved in the minority of cases. Complete endoscopic resection may provide a reasonable approach for tissue acquisition and may also be therapeutic in case of malignant lesions. Small SET restricted to the submucosa can be removed with established basic resection techniques. However, resection of SET arising from deeper layers of the gastrointestinal wall requires advanced endoscopic methods and harbours the risk of perforation. Innovative techniques such as submucosal tunneling and full thickness resection have expanded the frontiers of endoscopic therapy in the past years. This review will give an overview about endoscopic resection techniques of SET with a focus on novel methods.

  1. Analysis of Setting Efficacy in Young Male and Female Volleyball Players.

    PubMed

    González-Silva, Jara; Domínguez, Alberto Moreno; Fernández-Echeverría, Carmen; Rabaz, Fernando Claver; Arroyo, M Perla Moreno

    2016-12-01

    The main objective of this study was to analyse the variables that predicted setting efficacy in complex I (KI) in volleyball, in formative categories and depending on gender. The study sample was comprised of 5842 game actions carried out by the 16 male category and the 18 female category teams that participated in the Under-16 Spanish Championship. The dependent variable was setting efficacy. The independent variables were grouped into: serve variables (a serve zone, the type of serve, striking technique, an in-game role of the server and serve direction), reception variables (a reception zone, a receiver player and reception efficacy) and setting variables (a setter's position, a setting zone, the type of a set, setting technique, a set's area and tempo of a set). Multinomial logistic regression showed that the best predictive variables of setting efficacy, both in female and male categories, were reception efficacy, setting technique and tempo of a set. In the male category, the jump serve was the greatest predictor of setting efficacy, while in the female category, it was the set's area. Therefore, in the male category, it was not only the preceding action that affected setting efficacy, but also the serve. On the contrary, in the female category, only variables of the action itself and of the previous action, reception, affected setting efficacy. The results obtained in the present study should be taken into account in the training process of both male and female volleyball players in formative stages.

  2. Upper limb joint kinetics of three sitting pivot wheelchair transfer techniques in individuals with spinal cord injury

    PubMed Central

    Kankipati, Padmaja; Boninger, Michael L.; Gagnon, Dany; Cooper, Rory A.; Koontz, Alicia M.

    2015-01-01

    Study design: Repeated measures design. Objective: This study compared the upper extremity (UE) joint kinetics between three transfer techniques. Setting: Research laboratory. Methods: Twenty individuals with spinal cord injury performed three transfer techniques from their wheelchair to a level tub bench. Two of the techniques involved a head–hips method with the leading hand positioned close to (HH-I) and far from (HH-A) the body, and the third technique was performed with the trunk upright (TU) and the hand far from the body. Motion analysis equipment recorded upper body movements and force sensors recorded hand and feet reaction forces during the transfers. Results: Several significant differences were found between HH-A and HH-I and between TU and HH-I transfers, indicating that hand placement was a key factor influencing the UE joint kinetics. Peak resultant hand, elbow, and shoulder joint forces were significantly higher for the HH-A and TU techniques at the trailing arm (P < 0.036) and lower at the leading arm (P < 0.021), compared to the HH-I technique. Conclusion: Always trailing with the same arm when using HH-A or TU could predispose that arm to overuse-related pain and injuries. Technique training should focus on initial hand placement close to the body followed by the amount of trunk flexion needed to facilitate movement. PMID:25130053

  3. Reducing substance abuse during pregnancy. Discriminating among levels of response in a prenatal setting.

    PubMed

    Corse, S J; Smith, M

    1998-01-01

    Providers in prenatal care settings are well-positioned to help pregnant women with substance abuse problems take the first steps toward recovery. This study reports the results of the ANGELS Program, a program of enhanced prenatal care designed to reduce substance use among pregnant women. In a suburban office serving a broad range of pregnant women, certified nurse-midwives (CNMs) and on-site addictions counselors addressed substance abuse during prenatal care. This paper describes a cohort of 77 pregnant women who were identified as abusers of alcohol and/or other drugs at the start of pregnancy. According to a level of change rating assigned by the CNM at delivery, 51% of women were able to be largely abstinent during their pregnancy, 35% had reduced their use somewhat, and 14% had shown no change in use. Discriminant analysis techniques were used to learn what characteristics differentiated women in these three level of change groups. Baseline variables that differentiated the groups included severity of cocaine and cannabis use, psychosocial stressors, and initiation of prenatal care. Significant process variables included number of prenatal visits and contact with the addictions counselors. Clinical vignettes illustrate the differences among women in the three level of change groups. Implications of the results are discussed.

  4. Functional thermo-dynamics: a generalization of dynamic density functional theory to non-isothermal situations.

    PubMed

    Anero, Jesús G; Español, Pep; Tarazona, Pedro

    2013-07-21

    We present a generalization of Density Functional Theory (DFT) to non-equilibrium, non-isothermal situations. By using the original approach set forth by Gibbs in his consideration of Macroscopic Thermodynamics (MT), we consider a Functional Thermo-Dynamics (FTD) description based on the density field and the energy density field. A crucial ingredient of the theory is an entropy functional, which is a concave functional. Therefore, there is a one-to-one connection between the density and energy fields and the conjugate thermodynamic fields. The connection between the three levels of description (MT, DFT, FTD) is clarified through a bridge theorem that relates the entropy of different levels of description and that constitutes a generalization of Mermin's theorem to arbitrary levels of description whose relevant variables are connected linearly. Although the FTD level of description does not provide any new information about averages and correlations at equilibrium, it is a crucial ingredient for the dynamics in non-equilibrium states. Using the projection operator technique, we obtain the set of dynamic equations that describe the evolution of the density and energy density fields from an initial non-equilibrium state towards equilibrium. These equations generalize time-dependent density functional theory to non-isothermal situations. We also present an explicit model for the entropy functional for hard spheres.
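
    As a generic illustration of the one-to-one conjugacy mentioned above (the notation here is assumed, not taken from the paper), the conjugate thermodynamic fields can be written as functional derivatives of the concave entropy functional:

    ```latex
    % Illustrative only: conjugate fields as functional derivatives of a concave
    % entropy functional S[\rho, e]; notation is generic, not taken from the paper.
    \[
      \frac{1}{T(\mathbf{r})} = \frac{\delta S[\rho, e]}{\delta e(\mathbf{r})},
      \qquad
      -\frac{\mu(\mathbf{r})}{T(\mathbf{r})} = \frac{\delta S[\rho, e]}{\delta \rho(\mathbf{r})},
    \]
    where the concavity of $S$ guarantees that the map from $(\rho, e)$ to the
    conjugate fields $(1/T, -\mu/T)$ is one-to-one.
    ```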

  5. Economic Analyses in Anterior Cruciate Ligament Reconstruction: A Qualitative and Systematic Review.

    PubMed

    Saltzman, Bryan M; Cvetanovich, Gregory L; Nwachukwu, Benedict U; Mall, Nathan A; Bush-Joseph, Charles A; Bach, Bernard R

    2016-05-01

    As the health care system in the United States (US) transitions toward value-based care, there is an increased emphasis on understanding the cost drivers and high-value procedures within orthopaedics. To date, there has been no systematic review of the economic literature on anterior cruciate ligament reconstruction (ACLR). To evaluate the overall evidence base for economic studies published on ACLR in the orthopaedic literature. Data available on the economics of ACLR are summarized and cost drivers associated with the procedure are identified. Systematic review. All economic studies (including US-based and non-US-based) published between inception of the MEDLINE database and October 3, 2014, were identified. Given the heterogeneity of the existing evidence base, a qualitative, descriptive approach was used to assess the collective results from the economic studies on ACLR. When applicable, comparisons were made for the following cost-related variables associated with the procedure for economic implications: outpatient versus inpatient surgery (or outpatient vs overnight hospital stay vs >1-night stay); bone-patellar tendon-bone (BPTB) graft versus hamstring (HS) graft source; autograft versus allograft source; staged unilateral ACLR versus bilateral ACLR in a single setting; single- versus double-bundle technique; ACLR versus nonoperative treatment; and other unique comparisons reported in single studies, including computer-assisted navigation surgery (CANS) versus traditional surgery, early versus delayed ACLR, single- versus double-incision technique, and finally the costs of ACLR without comparison of variables. A total of 24 studies were identified and included; of these, 17 included studies were cost identification studies. The remaining 7 studies were cost utility analyses that used economic models to investigate the effect of variables such as the cost of allograft tissue, fixation devices, and physical therapy, the percentage and timing of revision surgery, and the cost of revision surgery. Of the 24 studies, there were 3 studies with level 1 evidence, 8 with level 2 evidence, 6 with level 3 evidence, and 7 with level 4 evidence. The following economic comparisons were demonstrated: (1) ACLR is more cost-effective than nonoperative treatment with rehabilitation only (per 3 cost utility analyses); (2) autograft use had lower total costs than allograft use, with operating room supply costs and allograft costs most significant (per 5 cost identification studies and 1 cost utility analysis); (3) results on hamstring versus BPTB graft source are conflicting (per 2 cost identification studies); (4) there is significant cost reduction with an outpatient versus inpatient setting (per 5 studies using cost identification analyses); (5) bilateral ACLR is more cost efficient than 2 unilateral ACLRs in separate settings (per 2 cost identification studies); (6) there are lower costs with similarly successful outcomes between single- and double-bundle technique (per 3 cost identification studies and 2 cost utility analyses). Results from this review suggest that early single-bundle, single (endoscopic)-incision outpatient ACLR using either BPTB or HS autograft provides the most value. In the setting of bilateral ACL rupture, single-setting bilateral ACLR is more cost-effective than staged unilateral ACLR. Procedures using CANS technology do not yet yield results that are superior to the results of a standard surgical procedure, and CANS has substantially greater costs. © 2015 The Author(s).

  6. Effects of pushing techniques in birth on mother and fetus: a randomized study.

    PubMed

    Yildirim, Gulay; Beji, Nezihe Kizilkaya

    2008-03-01

    The Valsalva pushing technique is used routinely in the second stage of labor in many countries, and it is accepted as standard obstetric management in Turkey. The purpose of this study was to determine the effects of pushing techniques on mother and fetus in birth in this setting. This randomized study was conducted between July 2003 and June 2004 in Bakirkoy Maternity and Children's Teaching Hospital in Istanbul, Turkey. One hundred low-risk primiparas between 38 and 42 weeks' gestation, who expected a spontaneous vaginal delivery, were randomized to either a spontaneous pushing group or a Valsalva-type pushing group. Spontaneous pushing women were informed during the first stage of labor about spontaneous pushing technique (open glottis pushing while breathing out) and were supported in pushing spontaneously in the second stage of labor. Similarly, Valsalva pushing women were informed during the first stage of labor about the Valsalva pushing technique (closed glottis pushing while holding their breath) and were supported in using Valsalva pushing in the second stage of labor. Perineal tears, postpartum hemorrhage, and hemoglobin levels were evaluated in mothers; and umbilical artery pH, Po(2) (mmHg), and Pco(2) (mmHg) levels and Apgar scores at 1 and 5 minutes were evaluated in newborns in both groups. No significant differences were found between the two groups in their demographics, incidence of nonreassuring fetal surveillance patterns, or use of oxytocin. The second stage of labor and duration of the expulsion phase were significantly longer with Valsalva-type pushing. Differences in the incidence of episiotomy, perineal tears, or postpartum hemorrhage were not significant between the groups. The baby fared better with spontaneous pushing, with higher 1- and 5-minute Apgar scores, and higher umbilical cord pH and Po(2) levels. After the birth, women expressed greater satisfaction with spontaneous pushing. Educating women about the spontaneous pushing technique in the first stage of labor and providing support for spontaneous pushing in the second stage result in a shorter second stage without interventions and in improved newborn outcomes. Women also stated that they pushed more effectively with the spontaneous pushing technique.

  7. Detector power linearity requirements and verification techniques for TMI direct detection receivers

    NASA Technical Reports Server (NTRS)

    Reinhardt, Victor S. (Inventor); Shih, Yi-Chi (Inventor); Toth, Paul A. (Inventor); Reynolds, Samuel C. (Inventor)

    1997-01-01

    A system (36, 98) for determining the linearity of an RF detector (46, 106). A first technique involves combining two RF signals from two stable local oscillators (38, 40) to form a modulated RF signal having a beat frequency, and applying the modulated RF signal to a detector (46) being tested. The output of the detector (46) is applied to a low frequency spectrum analyzer (48) such that a relationship between the power levels of the first and second harmonics generated by the detector (46) of the beat frequency of the modulated RF signal are measured by the spectrum analyzer (48) to determine the linearity of the detector (46). In a second technique, an RF signal from a local oscillator (100) is applied to a detector (106) being tested through a first attenuator (102) and a second attenuator (104). The output voltage of the detector (106) is measured when the first attenuator (102) is set to a particular attenuation value and the second attenuator (104) is switched between first and second attenuation values. Further, the output voltage of the detector (106) is measured when the first attenuator (102) is set to another attenuation value, and the second attenuator (104) is again switched between the first and second attenuation values. A relationship between the voltage outputs determines the linearity of the detector (106).

  8. Multiscale visual quality assessment for cluster analysis with self-organizing maps

    NASA Astrophysics Data System (ADS)

    Bernard, Jürgen; von Landesberger, Tatiana; Bremm, Sebastian; Schreck, Tobias

    2011-01-01

    Cluster analysis is an important data mining technique for analyzing large amounts of data, reducing many objects to a limited number of clusters. Cluster visualization techniques aim at supporting the user in better understanding the characteristics of and relationships among the found clusters. While promising approaches to visual cluster analysis already exist, these usually fall short of incorporating the quality of the obtained clustering results. However, due to the nature of the clustering process, quality plays an important role, as for most practical data sets many different clusterings are typically possible. Being aware of clustering quality is important to judge the expressiveness of a given cluster visualization, or to adjust the clustering process with refined parameters, among other things. In this work, we present an encompassing suite of visual tools for quality assessment of an important visual clustering algorithm, namely, the Self-Organizing Map (SOM) technique. We define, measure, and visualize the notion of SOM cluster quality along a hierarchy of cluster abstractions. The quality abstractions range from simple scalar-valued quality scores up to the structural comparison of a given SOM clustering with the output of additional supportive clustering methods. The suite of methods allows the user to assess the SOM quality at the appropriate abstraction level and arrive at improved clustering results. We implement our tools in an integrated system, apply it to experimental data sets, and show its applicability.
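
    A minimal example of a scalar-valued SOM quality score of the kind mentioned above is the quantization error of a trained map; the sketch below uses the third-party minisom package on random data (an illustration of the metric, not the visual-analysis suite presented in the paper):

    ```python
    # Hedged sketch of a scalar SOM quality score of the kind discussed above:
    # the quantization error of a trained Self-Organizing Map (mean distance of
    # each sample to its best-matching unit). Uses the third-party `minisom`
    # package; the data here are random and purely illustrative.
    import numpy as np
    from minisom import MiniSom

    data = np.random.RandomState(0).rand(500, 4)           # 500 samples, 4 features

    som = MiniSom(10, 10, input_len=4, sigma=1.0, learning_rate=0.5, random_seed=0)
    som.random_weights_init(data)
    som.train_random(data, num_iteration=5000)

    print("quantization error:", som.quantization_error(data))
    print("BMU of first sample:", som.winner(data[0]))
    ```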

  9. Matching motivation enhancement treatment to client motivation: re-examining the Project MATCH motivation matching hypothesis.

    PubMed

    Witkiewitz, Katie; Hartzler, Bryan; Donovan, Dennis

    2010-08-01

    The current study was designed to re-examine the motivation matching hypothesis from Project MATCH using growth mixture modeling, an analytical technique that models variation in individual drinking patterns. Secondary analyses were conducted on data from Project MATCH (n = 1726), a large multi-site alcoholism treatment-matching study. Percentage of drinking days was the primary outcome measure, assessed from 1 month to 12 months following treatment. Treatment assignment, alcohol dependence symptoms and baseline percentage of drinking days were included as covariates. The results provided support for the motivation matching hypothesis in the out-patient sample and among females in the aftercare sample: the majority of individuals with lower baseline motivation had better outcomes if assigned to motivation enhancement treatment (MET) compared to those assigned to cognitive behavioral treatment (CBT). In the aftercare sample there was a moderating effect of gender and alcohol dependence severity, whereby males with lower baseline motivation and greater alcohol dependence drank more frequently if assigned to MET compared to those assigned to CBT. Results from the current study lend partial support to the motivation-matching hypothesis and also demonstrate the importance of moderating influences on treatment matching effectiveness. Based upon these findings, individuals with low baseline motivation in out-patient settings, and males with low levels of alcohol dependence or females in aftercare settings, may benefit more from motivational enhancement techniques than from cognitive-behavioral techniques.

  10. Measuring situation awareness in emergency settings: a systematic review of tools and outcomes

    PubMed Central

    Cooper, Simon; Porter, Joanne; Peach, Linda

    2014-01-01

    Background: Nontechnical skills have an impact on health care outcomes and improve patient safety. Situation awareness is core among them, with the view that an understanding of the environment will influence decision-making and performance. This paper reviews and describes indirect and direct measures of situation awareness applicable to emergency settings. Methods: Electronic databases and search engines were searched from 1980 to 2010, including CINAHL, Ovid Medline, Pro-Quest, Cochrane, and the search engine Google Scholar. Access strategies included keyword, author, and journal searches. Publications identified were assessed for relevance, and analyzed and synthesized using Oxford evidence levels and the Critical Appraisal Skills Programme guidelines in order to assess their quality and rigor. Results: One hundred and thirteen papers were initially identified, reduced to 55 following title and abstract review. The final selection included 14 papers drawn from the fields of emergency medicine, intensive care, anesthetics, and surgery. Ten of these discussed four general nontechnical skill measures (including situation awareness) and four incorporated the Situation Awareness Global Assessment Technique. Conclusion: A range of direct and indirect techniques for measuring situation awareness is available. In the medical literature, indirect approaches are the most common, with situation awareness measured as part of a nontechnical skills assessment. In simulation-based studies, situation awareness in emergencies tends to be suboptimal, indicating the need for improved training techniques to enhance awareness and improve decision-making. PMID:27147872

  11. Challenging tumour immunological techniques that help to track cancer stem cells in malignant melanomas and other solid tumours.

    PubMed

    Kotlan, Beatrix; Plotar, Vanda; Eles, Klara; Horvath, Szabolcs; Balatoni, Timea; Csuka, Orsolya; Újhelyi, Mihaly; Sávolt, Ákos; Szollar, Andras; Vamosi-Nagy, Istvan; Toth, Laszlo; Farkas, Emil; Toth, Jozsef; Kasler, Miklos; Liszkay, Gabriella

    2018-03-01

    The arsenal of questions and answers about the minor cancer-initiating cancer stem cell (CSC) population, held responsible for cancer invasiveness and metastases, has left us with an unsolved puzzle. The specific aims of a complex project were partly focused on revealing new biomarkers of cancer. We designed and set up novel techniques to facilitate the detection of cancerous cells. As a novel approach, we investigated B cells infiltrating breast carcinomas and melanomas (TIL-B) in terms of their tumour antigen binding potential. By developing the TIL-B phage display technology we provide here a new technology for the specific detection of highly tumour-associated antigens. Single chain Fv (scFv) antibody fragment phage ELISA, immunofluorescence (IF) FACS analysis, a chamber slide technique with IF confocal laser microscopy and immunohistochemistry (IHC) in paraffin-embedded tissue sections were set up and standardized. We showed strong tumour-associated disialylated glycosphingolipid expression levels on various cancer cells using scFv antibody fragments generated previously by our uniquely invasive breast carcinoma TIL-B phage display library technology. We report herein a novel strategy to obtain antibody fragments of human origin that recognise tumour-associated ganglioside antigens. Our investigations have the power to detect privileged molecules in cancer progression, invasiveness, and metastases. The technical achievements of this study are being harnessed for early diagnostics and effective cancer therapeutics.

  12. Estimation of Dynamical Parameters in Atmospheric Data Sets

    NASA Technical Reports Server (NTRS)

    Wenig, Mark O.

    2004-01-01

    In this study a new technique is used to derive dynamical parameters from atmospheric data sets. This technique, called the structure tensor technique, can be used to estimate dynamical parameters such as motion, source strengths, diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation, from image sequences, of the physical parameters that govern the underlying processes. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. The fundamental algorithm will be extended to the analysis of multi-channel (e.g. multiple trace gas) image sequences and to provide solutions to the extended aperture problem. In this study, sensitivity studies have been performed to determine the usability of this technique for data sets with different resolutions in time and space and different dimensions.
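
    The core of the structure tensor technique for motion estimation can be sketched as follows (a generic NumPy illustration with a synthetic translating pattern, not the study's implementation): spatio-temporal gradients are averaged into a local 3-D tensor whose smallest-eigenvalue eigenvector encodes the apparent motion.

    ```python
    # Generic sketch of the structure tensor technique for motion estimation from
    # an image sequence (illustrative, not the study's implementation). The local
    # 3-D structure tensor is built from spatio-temporal gradients; the eigenvector
    # with the smallest eigenvalue gives the local space-time orientation, from
    # which the apparent motion (vx, vy) is read off.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def motion_from_structure_tensor(seq, sigma=2.0):
        gx, gy, gt = np.gradient(seq, axis=(2, 1, 0))      # spatial and temporal gradients
        comps = {}
        for (a, na), (b, nb) in [((gx, 'x'), (gx, 'x')), ((gx, 'x'), (gy, 'y')),
                                 ((gx, 'x'), (gt, 't')), ((gy, 'y'), (gy, 'y')),
                                 ((gy, 'y'), (gt, 't')), ((gt, 't'), (gt, 't'))]:
            comps[na + nb] = gaussian_filter(a * b, sigma)  # locally averaged products

        # Assemble the tensor at the central frame/pixel and take its eigenvectors.
        k, i, j = (s // 2 for s in seq.shape)
        J = np.array([[comps['xx'][k, i, j], comps['xy'][k, i, j], comps['xt'][k, i, j]],
                      [comps['xy'][k, i, j], comps['yy'][k, i, j], comps['yt'][k, i, j]],
                      [comps['xt'][k, i, j], comps['yt'][k, i, j], comps['tt'][k, i, j]]])
        w, v = np.linalg.eigh(J)
        e = v[:, 0]                                         # eigenvector of the smallest eigenvalue
        return e[0] / e[2], e[1] / e[2]                     # (vx, vy), assuming e[2] != 0

    # Synthetic sequence translating by one pixel per frame in x
    t, y, x = np.meshgrid(np.arange(16), np.arange(64), np.arange(64), indexing='ij')
    seq = np.sin(0.3 * (x - t)) + 0.3 * np.sin(0.2 * y)
    print(motion_from_structure_tensor(seq))
    ```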

  13. Facial recognition using multisensor images based on localized kernel eigen spaces.

    PubMed

    Gundimada, Satyanadh; Asari, Vijayan K

    2009-06-01

    A feature selection technique along with an information fusion procedure for improving the recognition accuracy of a visual and thermal image-based facial recognition system is presented in this paper. A novel modular kernel eigenspaces approach is developed and implemented on the phase congruency feature maps extracted from the visual and thermal images individually. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome the bottlenecks of illumination variations, partial occlusions, expression variations and variations due to temperature changes that affect the visual and thermal face recognition techniques. AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure has greatly improved the recognition accuracy for both the visual and thermal images when compared to conventional techniques. Also, a decision level fusion methodology is presented which along with the feature selection procedure has outperformed various other face recognition techniques in terms of recognition accuracy.
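
    The modular kernel eigenspaces idea can be illustrated roughly as kernel PCA applied per sub-region, with the per-region projections concatenated into one descriptor (a hedged sketch using scikit-learn; the region grid, kernel settings and random data are assumptions, not the authors' configuration):

    ```python
    # Hedged illustration of the "modular kernel eigenspaces" idea: kernel PCA is
    # applied separately to each local sub-region of a feature map, and the
    # per-region projections are concatenated into one descriptor. Uses scikit-learn;
    # the data, region grid and kernel settings are assumptions, not the paper's.
    import numpy as np
    from sklearn.decomposition import KernelPCA

    rng = np.random.RandomState(0)
    faces = rng.rand(40, 32, 32)                     # 40 face-sized feature maps (illustrative)

    def modular_kpca_features(images, grid=4, n_components=5):
        n, h, w = images.shape
        bh, bw = h // grid, w // grid
        blocks = []
        for r in range(grid):
            for c in range(grid):
                region = images[:, r*bh:(r+1)*bh, c*bw:(c+1)*bw].reshape(n, -1)
                kpca = KernelPCA(n_components=n_components, kernel='rbf', gamma=0.1)
                blocks.append(kpca.fit_transform(region))   # per-region nonlinear eigenspace
        return np.hstack(blocks)                            # concatenated modular descriptor

    print(modular_kpca_features(faces).shape)               # (40, grid*grid*n_components)
    ```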

  14. Practical semen analysis: from A to Z

    PubMed Central

    Brazil, Charlene

    2010-01-01

    Accurate semen analysis is critical for decisions about patient care, as well as for studies addressing overall changes in semen quality, contraceptive efficacy and effects of toxicant exposure. The standardization of semen analysis is very difficult for many reasons, including the use of subjective techniques with no standards for comparison, poor technician training, problems with proficiency testing and a reluctance to change techniques. The World Health Organization (WHO) Semen handbook (2010) offers a vastly improved set of standardized procedures, all at a level of detail that will preclude most misinterpretations. However, there is a limit to what can be learned from words and pictures alone. A WHO-produced DVD that offers complete demonstrations of each technique along with quality assurance standards for motility, morphology and concentration assessments would enhance the effectiveness of the manual. However, neither the manual nor a DVD will help unless there is general acknowledgement of the critical need to standardize techniques and rigorously pursue quality control to ensure that laboratories actually perform techniques 'according to WHO' instead of merely reporting that they have done so. Unless improvements are made, patient results will continue to be compromised and comparison between studies and laboratories will have limited merit. PMID:20111076

  15. A model and nomogram to predict tumor site origin for squamous cell cancer confined to cervical lymph nodes.

    PubMed

    Ali, Arif N; Switchenko, Jeffrey M; Kim, Sungjin; Kowalski, Jeanne; El-Deiry, Mark W; Beitler, Jonathan J

    2014-11-15

    The current study was conducted to develop a multifactorial statistical model to predict the specific head and neck (H&N) tumor site origin in cases of squamous cell carcinoma confined to the cervical lymph nodes ("unknown primaries"). The Surveillance, Epidemiology, and End Results (SEER) database was analyzed for patients with an H&N tumor site who were diagnosed between 2004 and 2011. The SEER patients were identified according to their H&N primary tumor site and clinically positive cervical lymph node levels at the time of presentation. The SEER patient data set was randomly divided into 2 data sets for the purposes of internal split-sample validation. The effects of cervical lymph node levels, age, race, and sex on H&N primary tumor site were examined using univariate and multivariate analyses. Multivariate logistic regression models and an associated set of nomograms were developed based on relevant factors to provide probabilities of tumor site origin. Analysis of the SEER database identified 20,011 patients with H&N disease with both site-level and lymph node-level data. Sex, race, age, and lymph node levels were associated with primary H&N tumor site (nasopharynx, hypopharynx, oropharynx, and larynx) in the multivariate models. Internal validation techniques affirmed the accuracy of these models on separate data. The incorporation of epidemiologic and lymph node data into a predictive model has the potential to provide valuable guidance to clinicians in the treatment of patients with squamous cell carcinoma confined to the cervical lymph nodes. © 2014 The Authors. Cancer published by Wiley Periodicals, Inc. on behalf of American Cancer Society.
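
    The kind of multinomial model described above can be sketched as follows (synthetic placeholder data, not SEER; the feature coding is illustrative): nodal level involvement, age, sex and race are fed to a multinomial logistic regression whose predicted class probabilities play the role of the nomogram read-out.

    ```python
    # Hedged sketch of the type of multinomial model described above: predicting a
    # primary H&N tumor site from nodal level involvement, age, sex and race with
    # multinomial logistic regression. The data are synthetic placeholders, not SEER.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.RandomState(0)
    n = 2000
    X = np.column_stack([
        rng.randint(0, 2, size=(n, 5)),          # involvement of nodal levels I-V (binary)
        rng.normal(60, 10, size=n),              # age
        rng.randint(0, 2, size=n),               # sex
        rng.randint(0, 3, size=n),               # race category (coded)
    ])
    y = rng.choice(['nasopharynx', 'hypopharynx', 'oropharynx', 'larynx'], size=n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(multi_class='multinomial', max_iter=1000)
    model.fit(X_tr, y_tr)

    # Predicted site probabilities play the role of the nomogram read-out.
    print(model.predict_proba(X_te[:1]).round(3), model.classes_)
    ```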

  16. H2RM: A Hybrid Rough Set Reasoning Model for Prediction and Management of Diabetes Mellitus.

    PubMed

    Ali, Rahman; Hussain, Jamil; Siddiqi, Muhammad Hameed; Hussain, Maqbool; Lee, Sungyoung

    2015-07-03

    Diabetes is a chronic disease characterized by a high blood glucose level that results either from a deficiency of insulin produced by the body, or the body's resistance to the effects of insulin. Accurate and precise reasoning and prediction models greatly help physicians to improve diagnosis, prognosis and treatment procedures of different diseases. Though numerous models have been proposed to solve issues of diagnosis and management of diabetes, they have the following drawbacks: (1) they are restricted to one type of diabetes; (2) they lack understandability and explanatory power in their techniques and decisions; (3) they are limited either to prediction or to management over structured contents; and (4) they cannot cope with the dimensionality and vagueness of patient data. To overcome these issues, this paper proposes a novel hybrid rough set reasoning model (H2RM) that resolves problems of inaccurate prediction and management of type-1 diabetes mellitus (T1DM) and type-2 diabetes mellitus (T2DM). For verification of the proposed model, experimental data from fifty patients, acquired from a local hospital in semi-structured format, is used. First, the data is transformed into structured format and then used for mining prediction rules. Rough set theory (RST) based techniques and algorithms are used to mine the prediction rules. During the online execution phase of the model, these rules are used to predict T1DM and T2DM for new patients. Furthermore, the proposed model assists physicians to manage diabetes using knowledge extracted from online diabetes guidelines. Correlation-based trend analysis techniques are used to manage diabetic observations. Experimental results demonstrate that the proposed model outperforms the existing methods with 95.9% average and balanced accuracies.

  17. Curvature methods of damage detection using digital image correlation

    NASA Astrophysics Data System (ADS)

    Helfrick, Mark N.; Niezrecki, Christopher; Avitabile, Peter

    2009-03-01

    Analytical models have shown that local damage in a structure can be detected by studying changes in the curvature of the structure's displaced shape while under an applied load. In order for damage to be detected, located, and quantified using curvature methods, a spatially dense set of measurement points is required on the structure of interest and the change in curvature must be measurable. Experimental testing done to validate the theory is often plagued by sparse data sets and experimental noise. Furthermore, the type of load, the location and severity of the damage, and the mechanical properties (material and geometry) of the structure have a significant effect on how much the curvature will change. Within this paper, three-dimensional (3D) Digital Image Correlation (DIC) as one possible method for detecting damage through curvature methods is investigated. 3D DIC is a non-contacting full-field measurement technique which uses a stereo pair of digital cameras to capture surface shape. This approach allows for an extremely dense data set across the entire visible surface of an object. A test is performed to validate the approach on an aluminum cantilever beam. A dynamic load is applied to the beam which allows for measurements to be made of the beam's response at each of its first three resonant frequencies, corresponding to the first three bending modes of the structure. DIC measurements are used with damage detection algorithms to predict damage location with varying levels of damage inflicted in the form of a crack with a prescribed depth. The testing demonstrated that this technique will likely only work with structures where a large displaced shape is easily achieved and in cases where the damage is relatively severe. Practical applications and limitations of the technique are discussed.
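
    A minimal sketch of the curvature-based damage index used in this class of methods (not the authors' algorithm; the displaced shapes below are synthetic) estimates curvature with central differences and localizes damage where the curvature change peaks:

    ```python
    # Illustrative sketch of the curvature-based damage index used in this class of
    # methods (not the authors' algorithm): curvature of a measured displaced shape
    # is estimated with central differences, and the absolute change in curvature
    # between a healthy and a damaged shape highlights the damage location.
    import numpy as np

    x = np.linspace(0.0, 1.0, 201)                       # measurement points along the beam
    healthy = np.sin(np.pi * x / 2)                      # first-bending-like displaced shape
    damaged = healthy + 0.002 * np.exp(-((x - 0.6) / 0.02) ** 2)   # local softening near x = 0.6

    def curvature(w, dx):
        # Central finite differences: w'' ~ (w[i-1] - 2 w[i] + w[i+1]) / dx^2
        return (w[:-2] - 2.0 * w[1:-1] + w[2:]) / dx**2

    dx = x[1] - x[0]
    damage_index = np.abs(curvature(damaged, dx) - curvature(healthy, dx))
    print("peak damage index near x =", x[1:-1][np.argmax(damage_index)])
    ```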

  18. Visible Human 2.0--the next generation.

    PubMed

    Ratiu, Peter; Hillen, Berend; Glaser, Jack; Jenkins, Donald P

    2003-01-01

    The National Library of Medicine has initiated the development of new anatomical methods and techniques for the acquisition of higher resolution data sets, aiming to address the anatomical artifacts encountered in the development of the Visible Human Male and Female and to ensure enhanced detection of structures, providing data in greater depth and breadth. Within this framework, we acquired a complete data set of the head and neck. CT and MR scans were also obtained, with registration hardware inserted prior to imaging. The arterial and venous systems were injected with colorized araldite-F. After freezing, axial cryosectioning and digital photography at 147 microns/voxel resolution were performed. Two slabs of the specimen were acquired with a special tissue harvesting technique. The resulting tissue slices of the whole specimen were stained for different tissue types. The resulting histological material was then scanned at 60x magnification using the Virtual Slice technology at 2 microns/pixel resolution (each slide approximately 75,000 x 100,000 pixels). In this data set, for the first time, anatomy is presented as a continuum from a radiologic granularity of 1 mm/voxel, to a macroscopic resolution of 0.147 mm/voxel, to a microscopic resolution of 2 microns/pixel. The hiatus between gross anatomy and histology has been assumed to be insurmountable, and until now this gap has been bridged by extrapolating findings from minute samples. The availability of anatomical data with the fidelity presented will make it possible to perform a seamless study of whole organs at a cellular level and will provide a testbed for the validation of histological estimation techniques. A future complete Visible Human created from data acquired at cellular resolution, aside from its daunting size, will open new possibilities in multiple directions in medical research and simulation.

  19. H2RM: A Hybrid Rough Set Reasoning Model for Prediction and Management of Diabetes Mellitus

    PubMed Central

    Ali, Rahman; Hussain, Jamil; Siddiqi, Muhammad Hameed; Hussain, Maqbool; Lee, Sungyoung

    2015-01-01

    Diabetes is a chronic disease characterized by a high blood glucose level that results either from a deficiency of insulin produced by the body, or the body's resistance to the effects of insulin. Accurate and precise reasoning and prediction models greatly help physicians to improve diagnosis, prognosis and treatment procedures of different diseases. Though numerous models have been proposed to solve issues of diagnosis and management of diabetes, they have the following drawbacks: (1) they are restricted to one type of diabetes; (2) they lack understandability and explanatory power in their techniques and decisions; (3) they are limited either to prediction or to management over structured contents; and (4) they cannot cope with the dimensionality and vagueness of patient data. To overcome these issues, this paper proposes a novel hybrid rough set reasoning model (H2RM) that resolves problems of inaccurate prediction and management of type-1 diabetes mellitus (T1DM) and type-2 diabetes mellitus (T2DM). For verification of the proposed model, experimental data from fifty patients, acquired from a local hospital in semi-structured format, is used. First, the data is transformed into structured format and then used for mining prediction rules. Rough set theory (RST) based techniques and algorithms are used to mine the prediction rules. During the online execution phase of the model, these rules are used to predict T1DM and T2DM for new patients. Furthermore, the proposed model assists physicians to manage diabetes using knowledge extracted from online diabetes guidelines. Correlation-based trend analysis techniques are used to manage diabetic observations. Experimental results demonstrate that the proposed model outperforms the existing methods with 95.9% average and balanced accuracies. PMID:26151207

  20. Rapid detection of in vitro antituberculous drug resistance among smear-positive respiratory samples using microcolony detection-based direct drug susceptibility testing method.

    PubMed

    Iftikhar, Irim; Irfan, Seema; Farooqi, Joveria; Azizullah, Zahida; Hasan, Rumina

    2017-01-01

    With the rise in multidrug-resistant tuberculosis, there is a search for newer techniques that will rapidly detect drug-resistant Mycobacterium tuberculosis. Although molecular techniques can detect resistance, culture is still considered gold standard, especially in resource-limited settings where quick, cheap, and easy techniques are needed. The aim of the study was to evaluate microcolony method thin layer agar (TLA) for quick detection of resistance against the first- and second-line antituberculous drugs in clinical isolates. This was a cross-sectional study performed at Aga Khan University Hospital. A total of 87 Z-N stain smear-positive pulmonary samples were received and indirect drug susceptibility test (ID-DST) was performed using Lowenstein-Jensen and mycobacteria growth indicator tube. Direct DST was performed using TLA on 7H10 agar. TLA was observed twice weekly under microscope for 4 weeks. Sensitivity, specificity, and accuracy were calculated for TLA using indirect susceptibility method as the gold standard. Level of agreement was calculated using Kappa score. TLA showed sensitivity of 89% and 95.2% for isoniazid and rifampicin, while for ethionamide, ofloxacin, and injectable aminoglycosides, it was 96.6%, 92.1%, and 100%, respectively. Specificity for the first-line drugs was >95% while second-line drugs ranged from 70% to 100%. Mean time to positivity was 10.2 days by TLA as compared to 43.1 days by ID-DST. TLA is a quick and reliable method in identifying resistance, especially in resource-limited settings. However, additional liquid culture can be set up as backup, especially in patients on therapy to avoid false negative results.
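
    The evaluation metrics reported above (sensitivity, specificity, accuracy and the kappa score) can be computed from a 2x2 comparison against the gold standard as in the short sketch below (the counts are made up, not the study's data):

    ```python
    # Small hedged sketch of the evaluation metrics reported above (sensitivity,
    # specificity, accuracy and Cohen's kappa) computed from a 2x2 comparison of a
    # test method against a gold standard; the counts below are made up.
    def diagnostic_metrics(tp, fp, fn, tn):
        n = tp + fp + fn + tn
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        accuracy = (tp + tn) / n
        # Cohen's kappa: observed vs. chance agreement
        po = accuracy
        pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
        kappa = (po - pe) / (1 - pe)
        return sensitivity, specificity, accuracy, kappa

    print(diagnostic_metrics(tp=25, fp=2, fn=3, tn=57))
    ```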

  1. Binding SNOMED CT terms to archetype elements. Establishing a baseline of results.

    PubMed

    Berges, I; Bermudez, J; Illarramendi, A

    2015-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Managing Interoperability and Complexity in Health Systems". The proliferation of archetypes as a means to represent information in Electronic Health Records has raised the need to bind terminological codes - such as SNOMED CT codes - to their elements, in order to identify them univocally. However, the large size of the terminologies makes it difficult to perform this task manually. The objective is to establish a baseline of results for the aforementioned problem by using off-the-shelf string comparison-based techniques, against which results from more complex techniques can be evaluated. Nine typed comparison methods were evaluated for binding using a set of 487 archetype elements. Their recall was calculated, and Friedman and Nemenyi tests were applied in order to assess whether any of the methods outperformed the others. Using the qGrams method along with the 'Text' information piece of archetype elements outperforms the other methods if a level of confidence of 90% is considered. A recall of 25.26% is obtained if just one SNOMED CT term is retrieved for each archetype element. This recall rises to 50.51% and 75.56% if 10 and 100 terms are retrieved, respectively, which represents a reduction of more than 99.99% of the SNOMED CT code set. The baseline has been established following the above-mentioned results. Moreover, it has been observed that although string comparison-based methods do not outperform more sophisticated techniques, they can still be an alternative for providing a reduced set of candidate terms for each archetype element, from which the ultimate term can be chosen later in the more-than-likely manual supervision task.
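
    A generic q-gram similarity of the kind referred to as the qGrams method can be sketched as follows (an off-the-shelf style implementation under assumed padding and Jaccard scoring, not necessarily the exact variant evaluated in the study):

    ```python
    # Hedged illustration of a q-gram string similarity of the kind used for the
    # term binding above (a generic implementation, not the study's exact method):
    # two strings are compared through the overlap of their character q-grams.
    def qgrams(text, q=3):
        text = f"{'#' * (q - 1)}{text.lower()}{'#' * (q - 1)}"   # pad so ends contribute q-grams
        return {text[i:i + q] for i in range(len(text) - q + 1)}

    def qgram_similarity(a, b, q=3):
        ga, gb = qgrams(a, q), qgrams(b, q)
        return len(ga & gb) / len(ga | gb)                       # Jaccard overlap of q-gram sets

    # Rank a few candidate SNOMED-style terms against an archetype element label
    candidates = ['Systolic blood pressure', 'Diastolic blood pressure', 'Heart rate']
    element = 'Systolic pressure'
    print(sorted(candidates, key=lambda c: qgram_similarity(element, c), reverse=True))
    ```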

  2. Contextualising Water Use in Residential Settings: A Survey of Non-Intrusive Techniques and Approaches

    PubMed Central

    Carboni, Davide; Gluhak, Alex; McCann, Julie A.; Beach, Thomas H.

    2016-01-01

    Water monitoring in households is important to ensure the sustainability of fresh water reserves on our planet. It provides stakeholders with the statistics required to formulate optimal strategies in residential water management. However, monitoring should not be prohibitively costly, and appliance-level water monitoring cannot practically be achieved by deploying sensors on every faucet or water-consuming device of interest, owing to the higher hardware costs and complexity, not to mention the risk of accidental leakages that can derive from the extra plumbing needed. Machine learning and data mining are promising techniques for analysing monitored data to obtain non-intrusive water usage disaggregation, because they can discern water usage from the aggregated data acquired at a single point of observation. This paper provides an overview of water usage disaggregation systems and related techniques adopted for water event classification. The state of the art of algorithms and testbeds used for fixture recognition is reviewed, and a discussion of the prominent challenges and future research is also included. PMID:27213397

  3. Interface modeling in incompressible media using level sets in Escript

    NASA Astrophysics Data System (ADS)

    Gross, L.; Bourgouin, L.; Hale, A. J.; Mühlhaus, H.-B.

    2007-08-01

    We use a finite element (FEM) formulation of the level set method to model geological fluid flow problems involving interface propagation. Interface problems are ubiquitous in geophysics. Here we focus on a Rayleigh-Taylor instability, namely the evolution of mantle plumes, and on the growth of lava domes. Both problems require the accurate description of the propagation of an interface between heavy and light materials (plume) or between highly viscous lava and low-viscosity air (lava dome), respectively. The implementation of the models is based on Escript, which is a Python module for the solution of partial differential equations (PDEs) using spatial discretization techniques such as FEM. It is designed to describe numerical models in the language of PDEs while using computational components implemented in C and C++ to achieve high performance for time-intensive numerical calculations. A critical step in the solution of geological flow problems is the solution of the velocity-pressure problem. We describe how the Escript module can be used for a high-level implementation of an efficient variant of the well-known Uzawa scheme. We begin with a brief outline of the Escript modules and then present illustrations of its usage for the numerical solutions of the problems mentioned above.
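
    For readers unfamiliar with the level set idea used here, the sketch below advects a one-dimensional signed level-set function with a simple first-order upwind scheme, so that its zero crossing tracks a moving interface. It is a generic NumPy illustration under assumed parameters, not Escript code and not the paper's FEM formulation.

```python
# Sketch: 1-D level-set advection with a first-order upwind scheme. The zero
# crossing of phi tracks the interface as it is carried by a prescribed
# velocity field. Illustrative only; not Escript's FEM-based solver.
import numpy as np

n, L = 200, 1.0
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
phi = x - 0.3                      # signed distance; interface initially at x = 0.3
u = np.full(n, 0.5)                # constant velocity field (assumed)
dt = 0.5 * dx / np.abs(u).max()    # CFL-limited time step

for _ in range(100):
    dphi = np.zeros(n)
    dphi[1:] = (phi[1:] - phi[:-1]) / dx   # backward difference (valid for u > 0)
    phi -= dt * u * dphi

interface = x[np.argmin(np.abs(phi))]
print(f"interface near x = {interface:.3f}")   # has moved downstream from 0.3
```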

  4. Communicability in consultation/liaison psychiatry: patient treatment and patient care.

    PubMed

    Rasmussen, G; Mogstad, T E

    1983-01-01

    Effective communication seems to necessitate a very conscious (supra-)professional attitude on the part of the liaison person, so that she or he continuously keeps in mind the possibilities and limitations in terms of communicability, regardless of the rank or level of the consumers. From the point of view of communicability in general, one will need: (a) a body of knowledge and techniques relevant to the needs and purposes of the consumers; (b) a varied set of understandable languages in terms of frames of reference and definitions of concepts, adjusted to the various levels of complexity needed; (c) liaison-person attitudes coherent with the chosen frames of reference; (d) a realistic view of the possibilities in different liaison set-ups; and (e) a conscious didactic strategy based on considerations of the aims of one's function. Among the factors listed above, we especially focus on our 'erroneous zones' and failures in our endeavors to forward psychosomatic ideas and attitudes, and on some concrete implications for patient care and nursing in connection with cases submitted to minor surgical treatment, demonstrating the importance of practical psychosomatic impact at the level of patient nursing and care.

  5. Stacked sparse autoencoder in hyperspectral data classification using spectral-spatial, higher order statistics and multifractal spectrum features

    NASA Astrophysics Data System (ADS)

    Wan, Xiaoqing; Zhao, Chunhui; Wang, Yanchun; Liu, Wu

    2017-11-01

    This paper proposes a novel classification paradigm for hyperspectral image (HSI) using feature-level fusion and deep learning-based methodologies. Operation is carried out in three main steps. First, during a pre-processing stage, wave atoms are introduced into the bilateral filter to smooth HSI, and this strategy can effectively attenuate noise and restore texture information. Meanwhile, high quality spectral-spatial features can be extracted from HSI by taking geometric closeness and photometric similarity among pixels into consideration simultaneously. Second, higher-order statistics techniques are introduced, for the first time, into hyperspectral data classification to characterize the phase correlations of spectral curves. Third, multifractal spectrum features are extracted to characterize the singularities and self-similarities of spectra shapes. Then, feature-level fusion is applied to the extracted spectral-spatial features along with the higher-order statistics and multifractal spectrum features. Finally, a stacked sparse autoencoder is utilized to learn more abstract and invariant high-level features from the multiple feature sets, and then a random forest classifier is employed to perform supervised fine-tuning and classification. Experimental results on two real hyperspectral data sets demonstrate that the proposed method outperforms some traditional alternatives.

  6. Visualization of multiple influences on ocellar flight control in giant honeybees with the data-mining tool Viscovery SOMine.

    PubMed

    Kastberger, G; Kranner, G

    2000-02-01

    Viscovery SOMine is a software tool for advanced analysis and monitoring of numerical data sets. It was developed for professional use in business, industry, and science and to support dependency analysis, deviation detection, unsupervised clustering, nonlinear regression, data association, pattern recognition, and animated monitoring. Based on the concept of self-organizing maps (SOMs), it employs a robust variant of unsupervised neural networks--namely, Kohonen's Batch-SOM, which is further enhanced with a new scaling technique for speeding up the learning process. This tool provides a powerful means by which to analyze complex data sets without prior statistical knowledge. The data representation contained in the trained SOM is systematically converted to be used in a spectrum of visualization techniques, such as evaluating dependencies between components, investigating geometric properties of the data distribution, searching for clusters, or monitoring new data. We have used this software tool to analyze and visualize multiple influences of the ocellar system on free-flight behavior in giant honeybees. Occlusion of ocelli will affect orienting reactivities in relation to flight target, level of disturbance, and position of the bee in the flight chamber; it will induce phototaxis and make orienting imprecise and dependent on motivational settings. Ocelli permit the adjustment of orienting strategies to environmental demands by enforcing abilities such as centering or flight kinetics and by providing independent control of posture and flight course.
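
    A compact sketch of a batch self-organizing map update - the general family of algorithm behind Kohonen's Batch-SOM - is shown below. The data, map size and neighbourhood schedule are illustrative assumptions; the proprietary Viscovery SOMine implementation and its scaling technique are not reproduced.

```python
# Sketch: one flavour of a batch SOM update on a small 1-D map. Data, map
# size and neighbourhood width are illustrative; this is not the
# (proprietary) Viscovery SOMine algorithm.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))              # toy data set: 500 samples, 3 features
n_units = 10
W = X[rng.choice(len(X), n_units)]         # initialise codebook from the data
grid = np.arange(n_units)                  # 1-D map topology

for sigma in np.linspace(3.0, 0.5, 20):    # shrink the neighbourhood over epochs
    bmu = np.argmin(((X[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
    # neighbourhood weight of each unit with respect to each sample's BMU
    h = np.exp(-((grid[None, :] - grid[bmu][:, None]) ** 2) / (2 * sigma ** 2))
    W = (h.T @ X) / h.sum(axis=0)[:, None]  # batch update: neighbourhood-weighted means
print(W.round(2))
```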

  7. Fourier transform methods in local gravity modeling

    NASA Technical Reports Server (NTRS)

    Harrison, J. C.; Dickinson, M.

    1989-01-01

    New algorithms were derived for computing terrain corrections, all components of the attraction of the topography at the topographic surface and the gradients of these attractions. These algorithms utilize fast Fourier transforms, but, in contrast to methods currently in use, all divergences of the integrals are removed during the analysis. Sequential methods employing a smooth intermediate reference surface were developed to avoid the very large transforms necessary when making computations at high resolution over a wide area. A new method for the numerical solution of Molodensky's problem was developed to mitigate the convergence difficulties that occur at short wavelengths with methods based on a Taylor series expansion. A trial field on a level surface is continued analytically to the topographic surface, and compared with that predicted from gravity observations. The difference is used to compute a correction to the trial field and the process iterated. Special techniques are employed to speed convergence and prevent oscillations. Three different spectral methods for fitting a point-mass set to a gravity field given on a regular grid at constant elevation are described. Two of the methods differ in the way that the spectrum of the point-mass set, which extends to infinite wave number, is matched to that of the gravity field which is band-limited. The third method is essentially a space-domain technique in which Fourier methods are used to solve a set of simultaneous equations.
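
    As a hedged illustration of the Fourier-domain style of computation described above, the sketch below performs upward continuation of a gridded gravity anomaly by multiplying its 2-D spectrum by exp(-|k| h). This is a standard textbook operation, not a reconstruction of the authors' specific algorithms; the grid size, spacing and height are assumed values.

```python
# Sketch: upward continuation of a gridded gravity anomaly via 2-D FFT,
# attenuating each wavenumber by exp(-|k| * h). A standard Fourier-domain
# operation, not a reproduction of the authors' specific algorithms.
import numpy as np

def upward_continue(grid, dx, dy, h):
    ny, nx = grid.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)    # radial wavenumber
    spec = np.fft.fft2(grid) * np.exp(-k * h)            # attenuation with height
    return np.real(np.fft.ifft2(spec))

g = np.random.default_rng(1).normal(size=(128, 128))     # synthetic anomaly grid
g_up = upward_continue(g, dx=100.0, dy=100.0, h=500.0)   # continue up 500 m
print(g_up.std() < g.std())                              # smoother field: True
```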

  8. Profiling cellular protein complexes by proximity ligation with dual tag microarray readout.

    PubMed

    Hammond, Maria; Nong, Rachel Yuan; Ericsson, Olle; Pardali, Katerina; Landegren, Ulf

    2012-01-01

    Patterns of protein interactions provide important insights in basic biology, and their analysis plays an increasing role in drug development and diagnostics of disease. We have established a scalable technique to compare two biological samples for the levels of all pairwise interactions among a set of targeted protein molecules. The technique is a combination of the proximity ligation assay with readout via dual tag microarrays. In the proximity ligation assay, protein identities are encoded as DNA sequences by attaching DNA oligonucleotides to antibodies directed against the proteins of interest. Upon binding by pairs of antibodies to proteins present in the same molecular complexes, ligation reactions give rise to reporter DNA molecules that contain the combined sequence information from the two DNA strands. The ligation reactions also serve to incorporate a sample barcode in the reporter molecules to allow for direct comparison between pairs of samples. The samples are evaluated using a dual tag microarray where information is decoded, revealing which pairs of tags have become joined. As a proof-of-concept we demonstrate that this approach can be used to detect a set of five proteins and their pairwise interactions both in cellular lysates and in fixed tissue culture cells. This paper provides a general strategy to analyze the extent of any pairwise interactions in large sets of molecules by decoding reporter DNA strands that identify the interacting molecules.

  9. Developing an objective evaluation method to estimate diabetes risk in community-based settings.

    PubMed

    Kenya, Sonjia; He, Qing; Fullilove, Robert; Kotler, Donald P

    2011-05-01

    Exercise interventions often aim to affect abdominal obesity and glucose tolerance, two significant risk factors for type 2 diabetes. Because of limited financial and clinical resources in community and university-based environments, intervention effects are often measured with interviews or questionnaires and correlated with weight loss or body fat indicated by body bioimpedance analysis (BIA). However, self-reported assessments are subject to high levels of bias and low levels of reliability. Because obesity and body fat are correlated with diabetes at different levels in various ethnic groups, data reflecting changes in weight or fat do not necessarily indicate changes in diabetes risk. To determine how exercise interventions affect diabetes risk in community and university-based settings, improved evaluation methods are warranted. We compared a noninvasive, objective measurement technique--regional BIA--with whole-body BIA for its ability to assess abdominal obesity and predict glucose tolerance in 39 women. To determine regional BIA's utility in predicting glucose, we tested the association between the regional BIA method and blood glucose levels. Regional BIA estimates of abdominal fat area were significantly correlated (r = 0.554, P < 0.003) with fasting glucose. When waist circumference and family history of diabetes were added to abdominal fat in multiple regression models, the association with glucose increased further (r = 0.701, P < 0.001). Regional BIA estimates of abdominal fat may predict fasting glucose better than whole-body BIA as well as provide an objective assessment of changes in diabetes risk achieved through physical activity interventions in community settings.
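
    The multiple regression reported above (fasting glucose on abdominal fat area, waist circumference and family history of diabetes) can be sketched as follows; all variable values below are synthetic placeholders, not the study's measurements.

```python
# Sketch: multiple linear regression of fasting glucose on regional-BIA
# abdominal fat area, waist circumference and family history of diabetes.
# All data are synthetic placeholders, not the study's measurements.
import numpy as np

rng = np.random.default_rng(2)
n = 39
fat_area = rng.normal(120, 30, n)          # abdominal fat area (cm^2), hypothetical
waist = rng.normal(90, 10, n)              # waist circumference (cm)
family_hx = rng.integers(0, 2, n)          # family history of diabetes, 0/1
glucose = 60 + 0.2 * fat_area + 0.1 * waist + 5 * family_hx + rng.normal(0, 5, n)

X = np.column_stack([np.ones(n), fat_area, waist, family_hx])
beta, *_ = np.linalg.lstsq(X, glucose, rcond=None)
pred = X @ beta
r = np.corrcoef(pred, glucose)[0, 1]       # multiple correlation coefficient
print(beta.round(3), round(r, 3))
```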

  10. Computed tomography automatic exposure control techniques in 18F-FDG oncology PET-CT scanning.

    PubMed

    Iball, Gareth R; Tout, Deborah

    2014-04-01

    Computed tomography (CT) automatic exposure control (AEC) systems are now used in all modern PET-CT scanners. A collaborative study was undertaken to compare AEC techniques of the three major PET-CT manufacturers for fluorine-18 fluorodeoxyglucose half-body oncology imaging. An audit of 70 patients was performed for half-body CT scans taken on a GE Discovery 690, Philips Gemini TF and Siemens Biograph mCT (all 64-slice CT). Patient demographic and dose information was recorded and image noise was calculated as the SD of Hounsfield units in the liver. A direct comparison of the AEC systems was made by scanning a Rando phantom on all three systems for a range of AEC settings. The variation in dose and image quality with patient weight was significantly different for all three systems, with the GE system showing the largest variation in dose with weight and Philips the least. Image noise varied with patient weight in Philips and Siemens systems but was constant for all weights in GE. The z-axis mA profiles from the Rando phantom demonstrate that these differences are caused by the nature of the tube current modulation techniques applied. The mA profiles varied considerably according to the AEC settings used. CT AEC techniques from the three manufacturers yield significantly different tube current modulation patterns and hence deliver different doses and levels of image quality across a range of patient weights. Users should be aware of how their system works and of steps that could be taken to optimize imaging protocols.

  11. Prediction of recombinant protein overexpression in Escherichia coli using a machine learning based model (RPOLP).

    PubMed

    Habibi, Narjeskhatoon; Norouzi, Alireza; Mohd Hashim, Siti Z; Shamsir, Mohd Shahir; Samian, Razip

    2015-11-01

    Recombinant protein overexpression, an important biotechnological process, is governed by complex and largely unknown biological rules, and it is therefore in need of an intelligent algorithm that can avoid resource-intensive, lab-based trial-and-error experiments when determining the expression level of a recombinant protein. The purpose of this study is to propose a predictive model to estimate the level of recombinant protein overexpression, for the first time in the literature, using a machine learning approach based on the sequence, expression vector, and expression host. The expression host was confined to Escherichia coli, which is the most popular bacterial host for overexpressing recombinant proteins. To provide a handle on the problem, the overexpression level was categorized as low, medium and high. A set of features which were likely to affect the overexpression level was generated based on known facts (e.g. gene length) and knowledge gathered from the related literature. Then, a representative sub-set of the features generated in the previous objective was determined using feature selection techniques. Finally, a predictive model was developed using a random forest classifier, which was able to adequately classify the multi-class, imbalanced, small dataset constructed. The results showed that the predictive model provided a promising accuracy of 80% on average in estimating the overexpression level of a recombinant protein. Copyright © 2015 Elsevier Ltd. All rights reserved.
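
    A hedged scikit-learn sketch of the general pipeline described - feature selection followed by a random forest on a small, imbalanced three-class problem - is given below; the feature matrix, labels and parameter choices are assumptions, not the authors' RPOLP model.

```python
# Sketch: feature selection + random forest for a three-class
# (0 = low, 1 = medium, 2 = high) overexpression-level problem. The feature
# matrix is synthetic; the paper's sequence- and vector-derived features and
# its exact model are not reproduced.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 50))                        # 120 proteins, 50 candidate features
y = rng.choice([0, 1, 2], size=120, p=[0.5, 0.3, 0.2])  # imbalanced class labels

model = make_pipeline(
    SelectKBest(mutual_info_classif, k=15),           # keep a representative feature subset
    RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0),
)
print(round(cross_val_score(model, X, y, cv=5).mean(), 3))
```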

  12. Quality of Care and Job Satisfaction in the European Home Care Setting: Research Protocol

    PubMed Central

    van der Roest, Henriëtte; van Hout, Hein; Declercq, Anja

    2016-01-01

    Introduction: Since the European population is ageing, a growing number of elderly will need home care. Consequently, high quality home care for the elderly remains an important challenge. Job satisfaction among care professionals is regarded as an important aspect of the quality of home care. Aim: This paper describes a research protocol to identify elements that have an impact on job satisfaction among care professionals and on quality of care for older people in the home care setting of six European countries. Methods: Data on elements at the macro-level (policy), meso-level (care organisations) and micro-level (clients) are of importance in determining job satisfaction and quality of care. Macro-level indicators will be identified in a previously published literature review. At meso- and micro-level, data will be collected by means of two questionnaires utilised with both care organisations and care professionals, and by means of interRAI Home Care assessments of clients. The client assessments will be used to calculate quality of care indicators. Subsequently, data will be analysed by means of linear and stepwise multiple regression analyses, correlations and multilevel techniques. Conclusions and Discussion: These results can guide health care policy makers in their decision making process in order to increase the quality of home care in their organisation, in their country or in Europe. PMID:28435423

  13. Economic impact of angioplasty salvage techniques, with an emphasis on coronary stents: a method incorporating costs, revenues, clinical effectiveness and payer mix.

    PubMed

    Vaitkus, P T; Witmer, W T; Brandenburg, R G; Wells, S K; Zehnacker, J B

    1997-10-01

    We sought to broaden assessment of the economic impact of percutaneous transluminal coronary angioplasty (PTCA) revascularization salvage strategies by taking into account costs, revenues, the offsetting effects of prevented clinical complications and the effects of payer mix. Previous economic analyses of PTCA have focused on the direct costs of treatment but have not accounted either for associated revenues or for the ability of costly salvage techniques such as coronary stenting to reduce even costlier complications. Procedural costs, revenues and contribution margins (i.e., "profit") were measured for 765 consecutive PTCA cases to assess the economic impact of salvage techniques (prolonged heparin administration, thrombolysis, intracoronary stenting or use of perfusion balloon catheters) and clinical complications (myocardial infarction, coronary artery bypass graft surgery [CABG] or acute vessel closure with repeat PTCA). To assess the economic impact of various salvage techniques for failed PTCA, we used actual 1995 financial data as well as models of various mixes of fee-for-service, diagnosis-related group (DRG) and capitated payers. Under fee-for-service arrangements, most salvage techniques were profitable for the hospital. Stents were profitable at almost any level of clinical effectiveness. Under DRG-based systems, most salvage techniques such as stenting produced a financial loss to the hospital because one complication (CABG) remained profitable. Under capitated arrangements, stenting and other salvage modalities were profitable only if they were clinically effective in preventing complications in > 50% of cases in which they were used. The economic impact of PTCA salvage techniques depends on their clinical effectiveness, costs and revenues. In reimbursement systems dominated by DRG payers, salvage techniques are not rewarded, whereas complications are. Under capitated systems, the level of clinical effectiveness needed to achieve cost savings is probably not achievable in current practice. Further studies are needed to define equitable reimbursement schedules that will promote clinically effective practice.

  14. Assessment of computer techniques for processing digital LANDSAT MSS data for lithological discrimination of Serra do Ramalho, State of Bahia

    NASA Technical Reports Server (NTRS)

    Paradella, W. R. (Principal Investigator); Vitorello, I.; Monteiro, M. D.

    1984-01-01

    Enhancement techniques and thematic classifications were applied to the metasediments of the Bambui Super Group (Upper Proterozoic) in the region of Serra do Ramalho, SW of the state of Bahia. Linear contrast stretch, band-ratios with contrast stretch, and color composites allow lithological discrimination. The effects of human activities and of vegetation cover mask and limit, in several ways, the lithological discrimination with digital MSS data. Principal component images, and color composites of linear contrast stretches of these products, show lithological discrimination through tonal gradations. This set of products allows the delineation of several metasedimentary sequences to a level superior to reconnaissance mapping. Supervised (maximum likelihood classifier) and nonsupervised (K-Means classifier) classifications of the limestone sequence, host to fluorite mineralization, show satisfactory results.

  15. Using an embedded reality approach to improve test reliability for NHPT tasks.

    PubMed

    Bowler, M; Amirabdollahian, F; Dautenhahn, K

    2011-01-01

    Research into the use of haptic and virtual reality technologies has increased greatly over the past decade, in terms of both quality and quantity. Methods to utilise haptic and virtual technologies alongside currently existing techniques for assessing impairment are underway and, owing to the commercially available equipment, have found some success for individuals who suffer upper limb impairment. This paper uses a clinically validated assessment technique for measuring motor impairment, the Nine Hole Peg Test, and creates three tasks with different levels of realism. The efficacy of these tasks is discussed, with particular attention paid to analysis in terms of removing factors that limit a virtual environment's use in a clinical setting, such as inter-subject variation. © 2011 IEEE

  16. The composite sequential clustering technique for analysis of multispectral scanner data

    NASA Technical Reports Server (NTRS)

    Su, M. Y.

    1972-01-01

    The clustering technique consists of two parts: (1) a sequential statistical clustering which is essentially a sequential variance analysis, and (2) a generalized K-means clustering. In this composite clustering technique, the output of (1) is a set of initial clusters which are input to (2) for further improvement by an iterative scheme. This unsupervised composite technique was employed for automatic classification of two sets of remote multispectral earth resource observations. The classification accuracy by the unsupervised technique is found to be comparable to that by traditional supervised maximum likelihood classification techniques. The mathematical algorithms for the composite sequential clustering program and a detailed computer program description with job setup are given.
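
    A minimal sketch of stage (2) - refining a set of initial clusters with K-means - is shown below using scikit-learn's explicit-initialization option. The initial centres stand in for the output of the sequential statistical clustering of stage (1), which is not implemented here, and the data are synthetic.

```python
# Sketch: stage (2) of a composite clustering scheme - refining initial
# clusters with K-means. The initial centres stand in for the output of a
# sequential statistical clustering stage; the 4-band data are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
pixels = np.vstack([rng.normal(loc, 0.5, size=(200, 4))        # 4-band MSS-like samples
                    for loc in (0.0, 3.0, 6.0)])

initial_centres = np.array([[0.2] * 4, [2.8] * 4, [6.1] * 4])   # stage (1) output, mocked
km = KMeans(n_clusters=3, init=initial_centres, n_init=1).fit(pixels)
print(np.bincount(km.labels_))                                  # roughly 200 pixels per cluster
```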

  17. Performability evaluation of the SIFT computer

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.; Furchtgott, D. G.; Wu, L. T.

    1979-01-01

    Performability modeling and evaluation techniques are applied to the SIFT computer as it might operate in the computational environment of an air transport mission. User-visible performance of the total system (SIFT plus its environment) is modeled as a random variable taking values in a set of levels of accomplishment. These levels are defined in terms of four attributes of total system behavior: safety, no change in mission profile, no operational penalties, and no economic loss. The base model is a stochastic process whose states describe the internal structure of SIFT as well as relevant conditions of the environment. Base model state trajectories are related to accomplishment levels via a capability function which is formulated in terms of a 3-level model hierarchy. Performability evaluation algorithms are then applied to determine the performability of the total system for various choices of computer and environment parameter values. Numerical results of those evaluations are presented and, in conclusion, some implications of this effort are discussed.

  18. Lifetime measurements using two-step laser excitation for high-lying even-parity levels and improved theoretical oscillator strengths in Y II

    NASA Astrophysics Data System (ADS)

    Palmeri, P.; Quinet, P.; Lundberg, H.; Engström, L.; Nilsson, H.; Hartman, H.

    2017-10-01

    We report new time-resolved laser-induced fluorescence lifetime measurements for 22 highly excited even-parity levels in singly ionized yttrium (Y II). To populate these levels, belonging to the configurations 4d6s, 5s6s, 4d5d, 5p2, 4d7s and 4d6d, a two-step laser excitation technique was used. Our previous pseudo-relativistic Hartree-Fock model (Biémont et al. 2011) was improved by extending the configuration interaction up to n = 10 to reproduce the new experimental lifetimes. A set of semi-empirical oscillator strengths extended to transitions falling in the spectral range λλ194-3995 nm, depopulating these 22 even-parity levels in Y II, is presented and compared to the values found in Kurucz's database (Kurucz 2011).

  19. Dentascan – Is the Investment Worth the Hype ???

    PubMed Central

    Shah, Monali A; Shah, Sneha S; Dave, Deepak

    2013-01-01

    Background: Open Bone Measurement (OBM) and Bone Sounding (BS) are the most reliable but invasive clinical methods for Alveolar Bone Level (ABL) assessment, causing discomfort to the patient. Routinely, IOPAs and OPGs are the commonest radiographic techniques used, but these tend to underestimate bone loss and obscure buccal/lingual defects. A novel technique like Dentascan (CBCT) eliminates this limitation by giving images in 3 planes – sagittal, coronal and axial. Aim: To compare and correlate the non-invasive 3D radiographic technique of Dentascan with BS and OBM, and with IOPA and OPG, in assessing the ABL. Settings and Design: Cross-sectional diagnostic study. Material and Methods: Two hundred and five sites were subjected to clinical and radiographic diagnostic techniques. The relative distance between the alveolar bone crest and a reference wire was measured. All the measurements were compared and tested against the OBM. Statistical Analysis: Student’s t-test, ANOVA, Pearson correlation coefficient. Results: There was a statistically significant difference between Dentascan and OBM; only BS showed agreement with OBM (p < 0.05). Dentascan correlated only weakly with OBM and BS lingually. All other techniques showed statistically significant differences between them (p = 0.00). Conclusion: Within the limitations of this study, only BS seems to be comparable with OBM, with no superior result of Dentascan over the conventional techniques, except for lingual measurements. PMID:24551722

  20. Use of communication techniques by Maryland dentists.

    PubMed

    Maybury, Catherine; Horowitz, Alice M; Wang, Min Qi; Kleinman, Dushanka V

    2013-12-01

    Health care providers' use of recommended communication techniques can increase patients' adherence to prevention and treatment regimens and improve patient health outcomes. The authors conducted a survey of Maryland dentists to determine the number and type of communication techniques they use on a routine basis. The authors mailed a 30-item questionnaire to a random sample of 1,393 general practice dentists and all 169 members of the Maryland chapter of the American Academy of Pediatric Dentistry. The overall response rate was 38.4 percent. Analysis included descriptive statistics, analysis of variance and ordinary least squares regression analysis to examine the association of dentists' characteristics with the number of communication techniques used. The significance level was set at P < .05. General dentists reported routinely using a mean of 7.9 of the 18 communication techniques and 3.6 of the seven basic techniques, whereas pediatric dentists reported using a mean of 8.4 and 3.8 of those techniques, respectively. General dentists who had taken a communication course outside of dental school were more likely than those who had not to use the 18 techniques (P < .01) but not the seven basic techniques (P < .05). Pediatric dentists who had taken a communication course outside of dental school were more likely than those who had not to use the 18 techniques (P < .05) and the seven basic techniques (P < .01). The number of communication techniques that dentists used routinely varied across the 18 techniques and was low for most techniques. Practical implications: Professional education is needed, both in dental school curricula and in continuing education courses, to increase the use of recommended communication techniques. Specifically, dentists and their team members should consider taking communication skills courses and conducting an overall evaluation of their practices for user friendliness.

  1. Gamification for health promotion: systematic review of behaviour change techniques in smartphone apps

    PubMed Central

    Lumsden, J; Rivas, C; Steed, L; Thiyagarajan, A; Sohanpal, R; Caton, H; Griffiths, C J; Munafò, M R; Taylor, S; Walton, R T

    2016-01-01

    Objective Smartphone games that aim to alter health behaviours are common, but there is uncertainty about how to achieve this. We systematically reviewed health apps containing gaming elements analysing their embedded behaviour change techniques. Methods Two trained researchers independently coded apps for behaviour change techniques using a standard taxonomy. We explored associations with user ratings and price. Data sources We screened the National Health Service (NHS) Health Apps Library and all top-rated medical, health and wellness and health and fitness apps (defined by Apple and Google Play stores based on revenue and downloads). We included free and paid English language apps using ‘gamification’ (rewards, prizes, avatars, badges, leaderboards, competitions, levelling-up or health-related challenges). We excluded apps targeting health professionals. Results 64 of 1680 (4%) health apps included gamification and met inclusion criteria; only 3 of these were in the NHS Library. Behaviour change categories used were: feedback and monitoring (n=60, 94% of apps), reward and threat (n=52, 81%), and goals and planning (n=52, 81%). Individual techniques were: self-monitoring of behaviour (n=55, 86%), non-specific reward (n=49, 82%), social support unspecified (n=48, 75%), non-specific incentive (n=49, 82%) and focus on past success (n=47, 73%). Median number of techniques per app was 14 (range: 5–22). Common combinations were: goal setting, self-monitoring, non-specific reward and non-specific incentive (n=35, 55%); goal setting, self-monitoring and focus on past success (n=33, 52%). There was no correlation between number of techniques and user ratings (p=0.07; rs=0.23) or price (p=0.45; rs=0.10). Conclusions Few health apps currently employ gamification and there is a wide variation in the use of behaviour change techniques, which may limit potential to improve health outcomes. We found no correlation between user rating (a possible proxy for health benefits) and game content or price. Further research is required to evaluate effective behaviour change techniques and to assess clinical outcomes. Trial registration number CRD42015029841. PMID:27707829

  2. Gamification for health promotion: systematic review of behaviour change techniques in smartphone apps.

    PubMed

    Edwards, E A; Lumsden, J; Rivas, C; Steed, L; Edwards, L A; Thiyagarajan, A; Sohanpal, R; Caton, H; Griffiths, C J; Munafò, M R; Taylor, S; Walton, R T

    2016-10-04

    Smartphone games that aim to alter health behaviours are common, but there is uncertainty about how to achieve this. We systematically reviewed health apps containing gaming elements analysing their embedded behaviour change techniques. Two trained researchers independently coded apps for behaviour change techniques using a standard taxonomy. We explored associations with user ratings and price. We screened the National Health Service (NHS) Health Apps Library and all top-rated medical, health and wellness and health and fitness apps (defined by Apple and Google Play stores based on revenue and downloads). We included free and paid English language apps using 'gamification' (rewards, prizes, avatars, badges, leaderboards, competitions, levelling-up or health-related challenges). We excluded apps targeting health professionals. 64 of 1680 (4%) health apps included gamification and met inclusion criteria; only 3 of these were in the NHS Library. Behaviour change categories used were: feedback and monitoring (n=60, 94% of apps), reward and threat (n=52, 81%), and goals and planning (n=52, 81%). Individual techniques were: self-monitoring of behaviour (n=55, 86%), non-specific reward (n=49, 82%), social support unspecified (n=48, 75%), non-specific incentive (n=49, 82%) and focus on past success (n=47, 73%). Median number of techniques per app was 14 (range: 5-22). Common combinations were: goal setting, self-monitoring, non-specific reward and non-specific incentive (n=35, 55%); goal setting, self-monitoring and focus on past success (n=33, 52%). There was no correlation between number of techniques and user ratings (p=0.07; rs=0.23) or price (p=0.45; rs=0.10). Few health apps currently employ gamification and there is a wide variation in the use of behaviour change techniques, which may limit potential to improve health outcomes. We found no correlation between user rating (a possible proxy for health benefits) and game content or price. Further research is required to evaluate effective behaviour change techniques and to assess clinical outcomes. CRD42015029841.

  3. Combining geographic information system, multicriteria evaluation techniques and fuzzy logic in siting MSW landfills

    NASA Astrophysics Data System (ADS)

    Gemitzi, Alexandra; Tsihrintzis, Vassilios A.; Voudrias, Evangelos; Petalas, Christos; Stravodimos, George

    2007-01-01

    This study presents a methodology for siting municipal solid waste landfills, coupling geographic information systems (GIS), fuzzy logic, and multicriteria evaluation techniques. Both exclusionary and non-exclusionary criteria are used. Factors, i.e., non-exclusionary criteria, are divided into two distinct groups which do not have the same level of trade-off. The first group comprises factors related to the physical environment, which cannot be expressed in terms of monetary cost and, therefore, do not easily trade off. The second group includes those factors related to human activities, i.e., socioeconomic factors, which can be expressed as financial cost, thus showing a high level of trade-off. GIS are used for geographic data acquisition and processing. The analytical hierarchy process (AHP) is the multicriteria evaluation technique used, enhanced with fuzzy factor standardization. Besides assigning weights to factors through the AHP, control over the level of risk and trade-off in the siting process is achieved through a second set of weights, i.e., order weights, applied to factors in each factor group on a pixel-by-pixel basis, thus taking into account the local site characteristics. The method has been applied to Evros prefecture (NE Greece), an area of approximately 4,000 km2. The siting methodology results in two intermediate suitability maps, one related to environmental and the other to socioeconomic criteria. Combination of the two intermediate maps results in the final composite suitability map for landfill siting.
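
    The AHP weighting step mentioned above can be illustrated with a small NumPy sketch that derives factor weights from a pairwise comparison matrix via its principal eigenvector and checks consistency; the comparison values are illustrative, not those used for Evros prefecture.

```python
# Sketch: AHP factor weights from a pairwise comparison matrix (principal
# eigenvector) plus the consistency ratio. The 3x3 matrix is illustrative.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],          # factor 1 vs. factors 2 and 3, etc.
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                            # normalised priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)    # consistency index
cr = ci / 0.58                          # Saaty's random index for n = 3
print(w.round(3), round(cr, 3))         # CR < 0.1 => acceptable consistency
```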

  4. Applying the algorithm "assessing quality using image registration circuits" (AQUIRC) to multi-atlas segmentation

    NASA Astrophysics Data System (ADS)

    Datteri, Ryan; Asman, Andrew J.; Landman, Bennett A.; Dawant, Benoit M.

    2014-03-01

    Multi-atlas registration-based segmentation is a popular technique in the medical imaging community, used to transform anatomical and functional information from a set of atlases onto a new patient that lacks this information. The accuracy of the projected information on the target image is dependent on the quality of the registrations between the atlas images and the target image. Recently, we have developed a technique called AQUIRC that aims at estimating the error of a non-rigid registration at the local level and has been shown to correlate with error in a simulated case. Herein, we extend this work by applying AQUIRC to atlas selection at the local level across multiple structures in cases in which non-rigid registration is difficult. AQUIRC is applied to 6 structures: the brainstem, optic chiasm, left and right optic nerves, and the left and right eyes. We compare the results of AQUIRC to those of popular techniques, including Majority Vote, STAPLE, Non-Local STAPLE, and Locally-Weighted Vote. We show that AQUIRC can be used as a method to combine multiple segmentations and increase the accuracy of the projected information on a target image, and that it is comparable to cutting-edge methods in the multi-atlas segmentation field.
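
    The simplest of the label-fusion baselines compared here, per-voxel majority vote over registered atlas segmentations, can be sketched as follows; the label volumes are small synthetic placeholders, and this is not the AQUIRC algorithm itself.

```python
# Sketch: per-voxel majority-vote fusion of atlas labels propagated onto a
# target image. The label volumes are small synthetic placeholders.
import numpy as np

rng = np.random.default_rng(5)
n_atlases, shape = 7, (4, 4, 4)
# registered atlas segmentations, labels 0 (background) or 1 (structure)
atlas_labels = rng.integers(0, 2, size=(n_atlases,) + shape)

def majority_vote(labels):
    # count votes per label at each voxel and keep the most frequent label
    label_values = np.unique(labels)
    votes = np.stack([(labels == lab).sum(axis=0) for lab in label_values])
    return label_values[np.argmax(votes, axis=0)]

fused = majority_vote(atlas_labels)
print(fused.shape, fused.max())
```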

  5. A comparison of image processing techniques for bird recognition.

    PubMed

    Nadimpalli, Uma D; Price, Randy R; Hall, Steven G; Bomma, Pallavi

    2006-01-01

    Bird predation is one of the major concerns for fish culture in open ponds. A novel method for dispersing birds is the use of autonomous vehicles. Image recognition software can improve their efficiency. Several image processing techniques for recognition of birds have been tested. A series of morphological operations were implemented. We divided images into 3 types, Type 1, Type 2, and Type 3, based on the level of difficulty of recognizing birds. Type 1 images were clear, Type 2 images were moderately clear, and Type 3 images were unclear. Local thresholding has been implemented using HSV (Hue, Saturation, and Value), GRAY, and RGB (Red, Green, and Blue) color models on all three sections of images, and the results were tabulated. Template matching using normal correlation and artificial neural networks (ANN) are the other methods that have been developed in this study in addition to image morphology. Template matching produced satisfactory results irrespective of the difficulty level of the images, but artificial neural networks produced accuracies of 100, 60, and 50% on Type 1, Type 2, and Type 3 images, respectively. The correct classification rate can be increased by further training. Future research will focus on testing the recognition algorithms in natural or aquacultural settings on autonomous boats. Applications of such techniques to industrial, agricultural, or related areas are additional future possibilities.
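
    A hedged sketch of template matching by normalised correlation - one standard realisation of the correlation-based matching mentioned above - is given below using OpenCV; the image, template and detection threshold are assumptions, not the study's data.

```python
# Sketch: template matching by normalised correlation with OpenCV. The image
# is synthetic noise and the template is a patch cut from it, so the peak
# score is ~1.0 at the patch location; the 0.8 threshold is an assumption.
import cv2
import numpy as np

rng = np.random.default_rng(6)
image = (rng.random((240, 320)) * 255).astype(np.uint8)
template = image[100:140, 150:180].copy()          # 40x30 patch standing in for a bird

scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(scores)
if max_val > 0.8:                                  # detection threshold (assumed)
    print("match at (x, y) =", max_loc, "score", round(max_val, 2))
```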

  6. Predictive Detection of Tuberculosis using Electronic Nose Technology

    NASA Astrophysics Data System (ADS)

    Gibson, Tim; Kolk, Arend; Reither, Klaus; Kuipers, Sjoukje; Hallam, Viv; Chandler, Rob; Dutta, Ritaban; Maboko, Leonard; Jung, Jutta; Klatser, Paul

    2009-05-01

    The adaptation and use of a Bloodhound® ST214 electronic nose to rapidly detect TB in sputum samples has been discussed in the past, with some promising results being obtained in 2007. Some of the specific VOCs associated with Mycobacterium tuberculosis organisms are now being discovered, and a paper was published in 2008, but the method of predicting the presence of TB in sputum samples using the VOC biomarkers has yet to be fully optimised. Nevertheless, with emphasis on the sampling techniques and with new data processing techniques to obtain consistent results, progress is being made. Sensitivity and specificity levels for field detection of TB have been set by the WHO at minimum levels of 85% and 95%, respectively, and the e-nose technique is working towards these figures. In a series of experiments carried out in Mbeya, Tanzania, Africa, data from a full 5 days of sampling were combined, giving a total of 248 sputum samples analysed. From the data obtained we can report results that show specificities and sensitivities in the 70-80% region when actually predicting the presence of TB in unknown sputum samples. The results are a further step forward in the rapid detection of TB in the clinics in developing countries and show continued promise for future development of an optimised instrument for TB prediction.

  7. A novel two-stage evaluation system based on a Group-G1 approach to identify appropriate emergency treatment technology schemes in sudden water source pollution accidents.

    PubMed

    Qu, Jianhua; Meng, Xianlin; Hu, Qi; You, Hong

    2016-02-01

    Sudden water source pollution resulting from hazardous materials has gradually become a major threat to the safety of the urban water supply. Over the past years, various treatment techniques have been proposed for the removal of the pollutants to minimize the threat of such pollution. Given the diversity of techniques available, the current challenge is how to scientifically select the most desirable alternative for different threat degrees. Therefore, a novel two-stage evaluation system was developed based on a circulation-correction improved Group-G1 method to determine the optimal emergency treatment technology scheme, considering the areas of contaminant elimination in both drinking water sources and water treatment plants. In stage 1, the threat degree caused by the pollution was predicted using a threat evaluation index system and was subdivided into four levels. Then, a technique evaluation index system containing four sets of criteria weights was constructed in stage 2 to obtain the optimum treatment schemes corresponding to the different threat levels. The applicability of the established evaluation system was tested on an actual cadmium contamination accident that occurred in 2012. The results show that this system is capable of facilitating scientific analysis in the evaluation and selection of emergency treatment technologies for drinking water source security.

  8. A Proposed Data Fusion Architecture for Micro-Zone Analysis and Data Mining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kevin McCarthy; Milos Manic

    Data Fusion requires the ability to combine or “fuse” data from multiple data sources. Time Series Analysis is a data mining technique used to predict future values from a data set based upon past values. Unlike other data mining techniques, however, Time Series places special emphasis on periodicity and how seasonal and other time-based factors tend to affect trends over time. One of the difficulties encountered in developing generic time series techniques is the wide variability of the data sets available for analysis. This presents challenges all the way from the data gathering stage to results presentation. This paper presents an architecture designed and used to facilitate the collection of disparate data sets well suited to Time Series analysis as well as other predictive data mining techniques. Results show this architecture provides a flexible, dynamic framework for the capture and storage of a myriad of dissimilar data sets and can serve as a foundation from which to build a complete data fusion architecture.

  9. Designing application software in wide area network settings

    NASA Technical Reports Server (NTRS)

    Makpangou, Mesaac; Birman, Ken

    1990-01-01

    Progress in methodologies for developing robust local area network software has not been matched by similar results for wide area settings. The design of application software spanning multiple local area environments is examined. For important classes of applications, simple design techniques are presented that yield fault tolerant wide area programs. An implementation of these techniques as a set of tools for use within the ISIS system is described.

  10. Yielding physically-interpretable emulators - A Sparse PCA approach

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Alsahaf, A.; Giuliani, M.; Castelletti, A.

    2015-12-01

    Projection-based techniques, such as Principal Orthogonal Decomposition (POD), are a common approach for surrogating high-fidelity process-based models with lower-order dynamic emulators. With POD, the dimensionality reduction is achieved by using observations, or 'snapshots' - generated with the high-fidelity model - to project the entire set of input and state variables of this model onto a smaller set of basis functions that account for most of the variability in the data. While the reduction efficiency and variance control of POD techniques are usually very high, the resulting emulators are structurally highly complex and can hardly be given a physically meaningful interpretation, as each basis is a projection of the entire set of inputs and states. In this work, we propose a novel approach based on Sparse Principal Component Analysis (SPCA) that combines the several assets of POD methods with the potential for ex-post interpretation of the emulator structure. SPCA reduces the number of non-zero coefficients in the basis functions by identifying a sparse matrix of coefficients. While the resulting set of basis functions may retain less variance of the snapshots, the presence of a few non-zero coefficients assists in the interpretation of the underlying physical processes. The SPCA approach is tested on the reduction of a 1D hydro-ecological model (DYRESM-CAEDYM) used to describe the main ecological and hydrodynamic processes in Tono Dam, Japan. An experimental comparison against a standard POD approach shows that SPCA achieves the same accuracy in emulating a given output variable - for the same level of dimensionality reduction - while yielding better insights into the main process dynamics.
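
    The contrast between dense and sparse bases can be illustrated with scikit-learn, which provides a SparsePCA estimator; the snapshot matrix below is synthetic, and this is not the DYRESM-CAEDYM emulator itself.

```python
# Sketch: sparse vs. standard principal components on a snapshot matrix.
# The snapshots are synthetic; this is not the DYRESM-CAEDYM emulator.
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(7)
snapshots = rng.normal(size=(300, 40))        # 300 snapshots, 40 state/input variables

pca = PCA(n_components=5).fit(snapshots)
spca = SparsePCA(n_components=5, alpha=1.0, random_state=0).fit(snapshots)

dense_nonzero = np.count_nonzero(pca.components_)
sparse_nonzero = np.count_nonzero(spca.components_)
# Sparse bases load on few variables, which is what makes them interpretable.
print(dense_nonzero, sparse_nonzero)
```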

  11. A Graphic Overlay Method for Selection of Osteotomy Site in Chronic Radial Head Dislocation: An Evaluation of 3D-printed Bone Models.

    PubMed

    Kim, Hui Taek; Ahn, Tae Young; Jang, Jae Hoon; Kim, Kang Hee; Lee, Sung Jae; Jung, Duk Young

    2017-03-01

    Three-dimensional (3D) computed tomography imaging is now being used to generate 3D models for planning orthopaedic surgery, but the process remains time consuming and expensive. For chronic radial head dislocation, we have designed a graphic overlay approach that employs selected 3D computer images and widely available software to simplify the process of osteotomy site selection. We studied 5 patients (2 traumatic and 3 congenital) with unilateral radial head dislocation. These patients were treated with surgery based on traditional radiographs, but they also had full sets of 3D CT imaging done both before and after their surgery: these 3D CT images form the basis for this study. From the 3D CT images, 3 sets of 3D-printed bone models were generated for each patient: 2 copies of the preoperative condition, and 1 copy of the postoperative condition. One set of the preoperative models was then actually osteotomized and fixed in the manner suggested by our graphic technique. Arcs of rotation of the 3 sets of 3D-printed bone models were then compared. Arcs of rotation of the 3 groups of bone models were significantly different, with the models osteotomized according to our graphic technique having the widest arcs. For chronic radial head dislocation, our graphic overlay approach simplifies the selection of the osteotomy site(s). Three-dimensional-printed bone models suggest that this approach could improve range of motion of the forearm in actual surgical practice. Level IV-therapeutic study.

  12. Endoscopic resection of subepithelial tumors

    PubMed Central

    Schmidt, Arthur; Bauder, Markus; Riecken, Bettina; Caca, Karel

    2014-01-01

    Management of subepithelial tumors (SETs) remains challenging. Endoscopic ultrasound (EUS) has improved differential diagnosis of these tumors but a definitive diagnosis on EUS findings alone can be achieved in the minority of cases. Complete endoscopic resection may provide a reasonable approach for tissue acquisition and may also be therapeutic in case of malignant lesions. Small SET restricted to the submucosa can be removed with established basic resection techniques. However, resection of SET arising from deeper layers of the gastrointestinal wall requires advanced endoscopic methods and harbours the risk of perforation. Innovative techniques such as submucosal tunneling and full thickness resection have expanded the frontiers of endoscopic therapy in the past years. This review will give an overview about endoscopic resection techniques of SET with a focus on novel methods. PMID:25512768

  13. Solving Large Problems Quickly: Progress in 2001-2003

    NASA Technical Reports Server (NTRS)

    Mowry, Todd C.; Colohan, Christopher B.; Brown, Angela Demke; Steffan, J. Gregory; Zhai, Antonia

    2004-01-01

    This document describes the progress we have made and the lessons we have learned in 2001 through 2003 under the NASA grant entitled "Solving Important Problems Faster". The long-term goal of this research is to accelerate large, irregular scientific applications which have enormous data sets and which are difficult to parallelize. To accomplish this goal, we are exploring two complementary techniques: (i) using compiler-inserted prefetching to automatically hide the I/O latency of accessing these large data sets from disk; and (ii) using thread-level data speculation to enable the optimistic parallelization of applications despite uncertainty as to whether data dependences exist between the resulting threads which would normally make them unsafe to execute in parallel. Overall, we made significant progress in 2001 through 2003, and the project has gone well.

  14. Building an ACT-R Reader for Eye-Tracking Corpus Data.

    PubMed

    Dotlačil, Jakub

    2018-01-01

    Cognitive architectures have often been applied to data from individual experiments. In this paper, I develop an ACT-R reader that can model a much larger set of data, eye-tracking corpus data. It is shown that the resulting model has a good fit to the data for the considered low-level processes. Unlike previous related works (most prominently, Engelmann, Vasishth, Engbert & Kliegl), the model achieves the fit by estimating free parameters of ACT-R using Bayesian estimation and Markov-Chain Monte Carlo (MCMC) techniques, rather than by relying on the mix of manual selection + default values. The method used in the paper is generalizable beyond this particular model and data set and could be used on other ACT-R models. Copyright © 2017 Cognitive Science Society, Inc.

  15. Study of the Gray Scale, Polychromatic, Distortion Invariant Neural Networks Using the Ipa Model.

    NASA Astrophysics Data System (ADS)

    Uang, Chii-Maw

    Research in the optical neural network field is primarily motivated by the fact that humans recognize objects better than conventional digital computers do, and by the inherently massively parallel nature of optics. This research represents a continuing effort over the past several years to exploit neurocomputing for pattern recognition. Based on the interpattern association (IPA) model and the Hamming net model, many new systems and applications are introduced. A gray-level discrete associative memory based on object decomposition/composition is proposed for recognizing gray-level patterns. This technique extends the processing ability from the binary mode to the gray-level mode, and thus the information capacity is increased. Two polychromatic optical neural networks using color liquid crystal television (LCTV) panels for color pattern recognition are introduced. By introducing a color encoding technique in conjunction with the interpattern associative algorithm, a color associative memory was realized. Based on the color decomposition and composition technique, a color exemplar-based Hamming net was built for color image classification. A shift-invariant neural network is presented through use of the translation-invariant property of the modulus of the Fourier transform and the hetero-associative interpattern association (IPA) memory. To extract the main features, a quadrantal sampling method is used to sample the data, which then replace the training patterns. The concept of hetero-associative memory is used to recall distorted objects. A shift- and rotation-invariant neural network using an interpattern hetero-association (IHA) model is presented. To preserve the shift- and rotation-invariant properties, a set of binarized, encoded circular harmonic expansion (CHE) functions in the Fourier domain is used as the training set. We use the shift and symmetry properties of the modulus of the Fourier spectrum to avoid the problem of centering the CHE functions. Almost all neural networks have both positive and negative weights, which increases the difficulty of optical implementation. A method to construct a unipolar IPA IWM is discussed. By searching for redundant interconnection links, an effective way to remove all negative links is discussed.

  16. A fast, automated, polynomial-based cosmic ray spike-removal method for the high-throughput processing of Raman spectra.

    PubMed

    Schulze, H Georg; Turner, Robin F B

    2013-04-01

    Raman spectra often contain undesirable, randomly positioned, intense, narrow-bandwidth, positive, unidirectional spectral features generated when cosmic rays strike charge-coupled device cameras. These must be removed prior to analysis, but doing so manually is not feasible for large data sets. We developed a quick, simple, effective, semi-automated procedure to remove cosmic ray spikes from spectral data sets that contain large numbers of relatively homogenous spectra. Although some inhomogeneous spectral data sets can be accommodated--it requires replacing excessively modified spectra with the originals and removing their spikes with a median filter instead--caution is advised when processing such data sets. In addition, the technique is suitable for interpolating missing spectra or replacing aberrant spectra with good spectral estimates. The method is applied to baseline-flattened spectra and relies on fitting a third-order (or higher) polynomial through all the spectra at every wavenumber. Pixel intensities in excess of a threshold of 3× the noise standard deviation above the fit are reduced to the threshold level. Because only two parameters (with readily specified default values) might require further adjustment, the method is easily implemented for semi-automated processing of large spectral sets.
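
    A NumPy sketch of the approach described - fitting a cubic across the stack of spectra at each wavenumber and clipping pixels more than 3 noise standard deviations above the fit - is given below; the spectra, the injected spike and the global noise estimate are simplifying assumptions, not the authors' implementation.

```python
# Sketch: polynomial-based spike clipping across a stack of similar spectra.
# At each wavenumber a cubic is fitted over the spectrum index, and pixels
# more than 3 noise SDs above the fit are reduced to that threshold.
# The spectra below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(8)
n_spec, n_wn = 50, 600
spectra = np.sin(np.linspace(0, 6, n_wn))[None, :] + rng.normal(0, 0.05, (n_spec, n_wn))
spectra[10, 300] += 5.0                      # inject a cosmic ray spike

idx = np.arange(n_spec)
coeffs = np.polyfit(idx, spectra, deg=3)     # one cubic per wavenumber column
fit = np.vander(idx, 4) @ coeffs             # evaluated fits, shape (n_spec, n_wn)

sigma = (spectra - fit).std()                # global noise estimate (simplification)
threshold = fit + 3 * sigma
cleaned = np.minimum(spectra, threshold)     # clip spikes down to the threshold
print(spectra[10, 300].round(2), cleaned[10, 300].round(2))
```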

  17. Defining the methodological challenges and opportunities for an effective science of sociotechnical systems and safety.

    PubMed

    Waterson, Patrick; Robertson, Michelle M; Cooke, Nancy J; Militello, Laura; Roth, Emilie; Stanton, Neville A

    2015-01-01

    An important part of the application of sociotechnical systems theory (STS) is the development of methods, tools and techniques to assess human factors and ergonomics workplace requirements. We focus in this paper on describing and evaluating current STS methods for workplace safety, as well as outlining a set of six case studies covering the application of these methods to a range of safety contexts. We also describe an evaluation of the methods in terms of ratings of their ability to address a set of theoretical and practical questions (e.g. the degree to which methods capture static/dynamic aspects of tasks and interactions between system levels). The outcomes from the evaluation highlight a set of gaps relating to the coverage and applicability of current methods for STS and safety (e.g. coverage of external influences on system functioning; method usability). The final sections of the paper describe a set of future challenges, as well as some practical suggestions for tackling these. We provide an up-to-date review of STS methods, a set of case studies illustrating their use and an evaluation of their strengths and weaknesses. The paper concludes with a 'roadmap' for future work.

  18. Automatic exposure control at single- and dual-heartbeat CTCA on a 320-MDCT volume scanner: effect of heart rate, exposure phase window setting, and reconstruction algorithm.

    PubMed

    Funama, Yoshinori; Utsunomiya, Daisuke; Taguchi, Katsuyuki; Oda, Seitaro; Shimonobo, Toshiaki; Yamashita, Yasuyuki

    2014-05-01

    To investigate whether electrocardiogram (ECG)-gated single- and dual-heartbeat computed tomography coronary angiography (CTCA) with automatic exposure control (AEC) yields images with uniform image noise at reduced radiation doses. Using an anthropomorphic chest CT phantom, we performed prospectively ECG-gated single- and dual-heartbeat CTCA on a second-generation 320-multidetector CT volume scanner. The exposure phase window was set at 75%, 70-80%, 40-80%, and 0-100%, and the heart rate at 60, 80, or corr80 bpm; images were reconstructed with filtered back projection (FBP) or iterative reconstruction (IR, adaptive iterative dose reduction 3D). We applied AEC and set the image noise level to 20 or 25 HU. For each technique we determined the image noise and the radiation dose to the phantom center. With half-scan reconstruction at 60 bpm, a 70-80% phase window and a 20-HU standard deviation (SD) setting, the image noise level and its variation along the z axis manifested similar curves with FBP and IR. With half-scan reconstruction, the radiation dose to the phantom center with the 70-80% phase window was 18.89 and 12.34 mGy for FBP and 4.61 and 3.10 mGy for IR at SD settings of 20 and 25 HU, respectively. At 80 bpm with two-segment reconstruction the dose was approximately twice that at 60 bpm at both SD settings. However, the increase in radiation dose at corr80 bpm was limited to 1.39 times that at 60 bpm. AEC at ECG-gated single- and dual-heartbeat CTCA controls the image noise at different radiation doses. Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  19. A new DMSP magnetometer and auroral boundary data set and estimates of field-aligned currents in dynamic auroral boundary coordinates

    NASA Astrophysics Data System (ADS)

    Kilcommons, Liam M.; Redmon, Robert J.; Knipp, Delores J.

    2017-08-01

    We have developed a method for reprocessing the multidecadal, multispacecraft Defense Meteorological Satellite Program Special Sensor Magnetometer (DMSP SSM) data set and have applied it to 15 spacecraft years of data (DMSP Flights 16-18, 2010-2014). This Level-2 data set improves on other available SSM data sets with recalculated spacecraft locations and magnetic perturbations, artifact signal removal, representations of the observations in geomagnetic coordinates, and in situ auroral boundaries. Spacecraft locations have been recalculated using ground-tracking information. Magnetic perturbations (measured field minus modeled main field) are recomputed; the updated locations ensure the appropriate model field is used. We characterize and remove a slowly varying signal in the magnetic field measurements, which is a combination of ring current signal and measurement artifacts. A final artifact remains after processing: step discontinuities in the baseline caused by activation/deactivation of spacecraft electronics. Using coincident data from the DMSP precipitating electrons and ions instrument (SSJ4/5), we detect the in situ auroral boundaries with an improvement to the Redmon et al. (2010) algorithm. We embed the location of the aurora and an accompanying figure of merit in the Level-2 SSM data product. Finally, we demonstrate the potential of this new data set by estimating field-aligned current (FAC) density using the Minimum Variance Analysis technique. The FAC estimates are then expressed in dynamic auroral boundary coordinates using the SSJ-derived boundaries, demonstrating a dawn-dusk asymmetry in average FAC location relative to the equatorward edge of the aurora. The new SSM data set is now available in several public repositories.
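
    Minimum Variance Analysis, as used above for the FAC estimates, reduces to an eigen-decomposition of the covariance matrix of the magnetic perturbation vectors. The sketch below shows only that generic step (not the authors' pipeline), assuming the perturbations are supplied as an (N, 3) array in a suitable coordinate frame.

```python
import numpy as np

def minimum_variance_analysis(db):
    """Eigen-decomposition of the covariance of magnetic perturbations db
    (shape (N, 3)); returns eigenvalues/eigenvectors ordered from minimum to
    maximum variance. The variance directions are used to orient current-sheet
    crossings when estimating field-aligned current density."""
    db = db - db.mean(axis=0)                 # remove the mean perturbation
    cov = db.T @ db / len(db)                 # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigh returns ascending eigenvalues
    return eigvals, eigvecs                   # columns of eigvecs are the variance directions
```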

  20. Impact of multisectoral health determinants on child mortality 1980-2010: An analysis by country baseline mortality.

    PubMed

    Cohen, Robert L; Murray, John; Jack, Susan; Arscott-Mills, Sharon; Verardi, Vincenzo

    2017-01-01

    Some health determinants require relatively stronger health system capacity and socioeconomic development than others to impact child mortality. Few quantitative analyses have analyzed how the impact of health determinants varies by mortality level. 149 low- and middle-income countries were stratified into high, moderate, low, and very low baseline levels of child mortality in 1990. Data for 52 health determinants were collected for these countries for 1980-2010. To quantify how changes in health determinants were associated with mortality decline, univariable and multivariable regression models were constructed. An advanced statistical technique that is new for child mortality analyses, MM-estimation with first differences and country clustering, controlled for outliers, fixed effects, and variation across decades. Some health determinants (immunizations, education) were consistently associated with child mortality reduction across all mortality levels. Others (staff availability, skilled birth attendance, fertility, water and sanitation) were associated with child mortality reduction mainly in low or very low mortality settings. The findings indicate that the impact of some health determinants on child mortality was only apparent with stronger health systems, public infrastructure and levels of socioeconomic development, whereas the impact of other determinants was apparent at all stages of development. Multisectoral progress was essential to mortality reduction at all baseline mortality levels. Policy-makers can use such analyses to direct investments in health and non-health sectors and to set five-year child mortality targets appropriate for their baseline mortality levels and local context.
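
    As a rough illustration of the first-difference, robust-regression setup (not the authors' MM-estimator), the sketch below uses statsmodels' M-estimation (RLM) on period-to-period changes; the data frame layout and variable names are hypothetical.

```python
import statsmodels.api as sm

def robust_first_difference_fit(df, outcome, determinants):
    """Regress changes in child mortality on changes in health determinants
    using robust M-estimation as a simpler stand-in for MM-estimation.

    df : pandas DataFrame with 'country' and 'year' columns plus the outcome
         and determinant columns (hypothetical layout).
    """
    ordered = df.sort_values(["country", "year"])
    diffs = ordered.groupby("country")[[outcome] + determinants].diff().dropna()
    X = sm.add_constant(diffs[determinants])
    model = sm.RLM(diffs[outcome], X, M=sm.robust.norms.TukeyBiweight())
    return model.fit()
```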

  1. Nurses' knowledge of inhaler technique in the inpatient hospital setting.

    PubMed

    De Tratto, Katie; Gomez, Christy; Ryan, Catherine J; Bracken, Nina; Steffen, Alana; Corbridge, Susan J

    2014-01-01

    High rates of inhaler misuse in patients with chronic obstructive pulmonary disease and asthma contribute to hospital readmissions and increased healthcare cost. The purpose of this study was to examine inpatient staff nurses' self-perception of their knowledge of proper inhaler technique compared with demonstrated technique and frequency of providing patients with inhaler technique teaching during hospitalization and at discharge. A prospective, descriptive study. A 495-bed urban academic medical center in the Midwest United States. A convenience sample of 100 nurses working on inpatient medical units. Participants completed a 5-item, 4-point Likert-scale survey evaluating self-perception of inhaler technique knowledge, frequency of providing patient education, and responsibility for providing education. Participants demonstrated inhaler technique to the investigators using both a metered dose inhaler (MDI) and Diskus device inhaler, and performance was measured via a validated checklist. Overall misuse rates were high for both MDI and Diskus devices. There was poor correlation between perceived ability and investigator-measured performance of inhaler technique. Frequency of education during hospitalization and at discharge was related to measured level of performance for the Diskus device but not for the MDI. Nurses are a key component of patient education in the hospital; however, nursing staff lack adequate knowledge of inhaler technique. Identifying gaps in nursing knowledge regarding proper inhaler technique and patient education about proper inhaler technique is important to design interventions that may positively impact patient outcomes. Interventions could include one-on-one education, Web-based education, unit-based education, or hospital-wide competency-based education. All should include return demonstration of appropriate technique.

  2. The effects of the Bowen technique on hamstring flexibility over time: a randomised controlled trial.

    PubMed

    Marr, Michelle; Baker, Julian; Lambon, Nicky; Perry, Jo

    2011-07-01

    The hamstring muscles are regularly implicated in recurrent injuries, movement dysfunction and low back pain. Links between limited flexibility and development of neuromusculoskeletal symptoms are frequently reported. The Bowen Technique is used to treat many conditions including lack of flexibility. The study set out to investigate the effect of the Bowen Technique on hamstring flexibility over time. An assessor-blind, prospective, randomised controlled trial was performed on 120 asymptomatic volunteers. Participants were randomly allocated into a control group or Bowen group. Three flexibility measurements occurred over one week, using an active knee extension test. The intervention group received a single Bowen treatment. A repeated measures univariate analysis of variance, across both groups for the three time periods, revealed significant within-subject and between-subject differences for the Bowen group. Continuing increases in flexibility levels were observed over one week. No significant change over time was noted for the control group. Copyright © 2010 Elsevier Ltd. All rights reserved.
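
    For readers who want to reproduce this kind of group-by-time analysis, a mixed-effects model is one common stand-in for the repeated measures ANOVA reported above; the sketch below (hypothetical column names: subject, group, time, knee_extension) is illustrative only and is not the authors' analysis.

```python
import statsmodels.formula.api as smf

def fit_flexibility_model(df):
    """Fixed effects for group, measurement time and their interaction, with a
    random intercept per participant; the interaction term captures whether the
    change in flexibility over time differs between Bowen and control groups."""
    model = smf.mixedlm(
        "knee_extension ~ C(group) * C(time)", df, groups=df["subject"]
    )
    return model.fit()
```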

  3. Comparison of Clustering Techniques for Residential Energy Behavior using Smart Meter Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Ling; Lee, Doris; Sim, Alex

    Current practice in whole time series clustering of residential meter data focuses on aggregated or subsampled load data at the customer level, which ignores day-to-day differences within customers. This information is critical to determine each customer’s suitability to various demand side management strategies that support intelligent power grids and smart energy management. Clustering daily load shapes provides fine-grained information on customer attributes and sources of variation for subsequent models and customer segmentation. In this paper, we apply 11 clustering methods to daily residential meter data. We evaluate their parameter settings and suitability based on 6 generic performance metrics and post-checking of resulting clusters. Finally, we recommend suitable techniques and parameters based on the goal of discovering diverse daily load patterns among residential customers. To the authors’ knowledge, this paper is the first robust comparative review of clustering techniques applied to daily residential load shape time series in the power systems’ literature.
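
    A minimal example of one of the many methods compared in the paper (not the authors' full benchmark) is k-means on peak-normalized daily load shapes with one generic quality metric, assuming each row holds 24 hourly readings for a customer-day:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_daily_shapes(daily_loads, n_clusters=8, random_state=0):
    """Cluster daily load shapes (rows = customer-days, columns = 24 hourly
    readings). Each day is normalized to its own peak so that shape, rather
    than magnitude, drives the clustering."""
    shapes = daily_loads / (daily_loads.max(axis=1, keepdims=True) + 1e-9)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state).fit(shapes)
    score = silhouette_score(shapes, km.labels_)   # one generic performance metric
    return km.labels_, km.cluster_centers_, score
```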

  4. Imaging of oxygen and hypoxia in cell and tissue samples.

    PubMed

    Papkovsky, Dmitri B; Dmitriev, Ruslan I

    2018-05-14

    Molecular oxygen (O2) is a key player in cell mitochondrial function, redox balance and oxidative stress, normal tissue function and many common disease states. Various chemical, physical and biological methods have been proposed for measurement, real-time monitoring and imaging of O2 concentration, the state of decreased O2 (hypoxia) and related parameters in cells and tissue. Here, we review the established and emerging optical microscopy techniques that allow visualization of O2 levels in cells and tissue samples, mostly in in vitro and ex vivo settings but also in vivo. Particular examples include fluorescent hypoxia stains, fluorescent protein reporter systems, phosphorescent probes and nanosensors of different types. These techniques allow high-resolution mapping of O2 gradients in live or post-mortem tissue, in 2D or 3D, qualitatively or quantitatively. They enable control and monitoring of oxygenation conditions and their correlation with other biomarkers of cell and tissue function. A comparison of these techniques and corresponding imaging setups, their analytical capabilities and typical applications is given.

  5. Dual-sensitivity profilometry with defocused projection of binary fringes.

    PubMed

    Garnica, G; Padilla, M; Servin, M

    2017-10-01

    A dual-sensitivity profilometry technique based on defocused projection of binary fringes is presented. Here, two sets of fringe patterns with a sinusoidal profile are produced by applying the same analog low-pass filter (projector defocusing) to binary fringes with a high- and a low-frequency spatial carrier. The high-frequency fringes have a binary square-wave profile, while the low-frequency binary fringes are produced with error-diffusion dithering. The binary nature of the fringes removes the need for calibration of the projector's nonlinear gamma. Working with the high-frequency carrier fringes, we obtain a high-quality wrapped phase. Working with the low-frequency carrier fringes, we obtain a lower-quality, nonwrapped phase map. The nonwrapped estimate is used as a stepping stone for dual-sensitivity temporal phase unwrapping, extending the applicability of the technique to discontinuous (piecewise continuous) surfaces. We propose a single defocusing level for faster acquisition of the high- and low-frequency fringe data. The proposed technique is validated with experimental results.
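
    The dual-sensitivity temporal unwrapping step can be written generically as follows: the non-wrapped low-frequency phase, scaled by the carrier-frequency ratio, selects the integer fringe order used to unwrap the high-frequency phase. This is the standard formulation under that assumption, not the authors' exact implementation.

```python
import numpy as np

def temporal_unwrap(phi_high_wrapped, phi_low, freq_ratio):
    """Unwrap a high-sensitivity wrapped phase map using a coarse,
    non-wrapped low-frequency phase map.

    phi_high_wrapped : wrapped phase from the high-frequency fringes, in (-pi, pi]
    phi_low          : non-wrapped phase from the low-frequency fringes
    freq_ratio       : ratio of the high to the low carrier frequency
    """
    fringe_order = np.round((freq_ratio * phi_low - phi_high_wrapped) / (2 * np.pi))
    return phi_high_wrapped + 2 * np.pi * fringe_order
```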

  6. Generalized image contrast enhancement technique based on Heinemann contrast discrimination model

    NASA Astrophysics Data System (ADS)

    Liu, Hong; Nodine, Calvin F.

    1994-03-01

    This paper presents a generalized image contrast enhancement technique which equalizes perceived brightness based on the Heinemann contrast discrimination model. This is a modified algorithm which presents an improvement over the previous study by Mokrane in its mathematically proven existence of a unique solution and in its easily tunable parameterization. The model uses a log-log representation of contrast luminosity between targets and the surround in a fixed luminosity background setting. The algorithm consists of two nonlinear gray-scale mapping functions which have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of gray scale distribution of the image, and can be uniquely determined once the previous three are given. Tests have been carried out to examine the effectiveness of the algorithm for increasing the overall contrast of images. It can be demonstrated that the generalized algorithm provides better contrast enhancement than histogram equalization. In fact, the histogram equalization technique is a special case of the proposed mapping.

  7. Correction of stain variations in nuclear refractive index of clinical histology specimens

    PubMed Central

    Uttam, Shikhar; Bista, Rajan K.; Hartman, Douglas J.; Brand, Randall E.; Liu, Yang

    2011-01-01

    For any technique to be adopted into a clinical setting, it is imperative that it seamlessly integrates with well-established clinical diagnostic workflow. We recently developed an optical microscopy technique—spatial-domain low-coherence quantitative phase microscopy (SL-QPM) that can extract the refractive index of the cell nucleus from the standard histology specimens on glass slides prepared via standard clinical protocols. This technique has shown great potential in detecting cancer with a better sensitivity than conventional pathology. A major hurdle in the clinical translation of this technique is the intrinsic variation among staining agents used in histology specimens, which limits the accuracy of refractive index measurements of clinical samples. In this paper, we present a simple and easily generalizable method to remove the effect of variations in staining levels on nuclear refractive index obtained with SL-QPM. We illustrate the efficacy of our correction method by applying it to variously stained histology samples from animal model and clinical specimens. PMID:22112118

  8. Novel Hybrid Scheduling Technique for Sensor Nodes with Mixed Criticality Tasks

    PubMed Central

    Micea, Mihai-Victor; Stangaciu, Cristina-Sorina; Stangaciu, Valentin; Curiac, Daniel-Ioan

    2017-01-01

    Sensor networks become increasingly a key technology for complex control applications. Their potential use in safety- and time-critical domains has raised the need for task scheduling mechanisms specially adapted to sensor node specific requirements, often materialized in predictable jitter-less execution of tasks characterized by different criticality levels. This paper offers an efficient scheduling solution, named Hybrid Hard Real-Time Scheduling (H2RTS), which combines a static, clock driven method with a dynamic, event driven scheduling technique, in order to provide high execution predictability, while keeping a high node Central Processing Unit (CPU) utilization factor. From the detailed, integrated schedulability analysis of the H2RTS, a set of sufficiency tests are introduced and demonstrated based on the processor demand and linear upper bound metrics. The performance and correct behavior of the proposed hybrid scheduling technique have been extensively evaluated and validated both on a simulator and on a sensor mote equipped with ARM7 microcontroller. PMID:28672856
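
    The processor-demand flavor of sufficiency test mentioned above can be illustrated with the standard demand-bound check (a generic EDF-style test, not the H2RTS-specific analysis): the cumulative execution demand of all jobs with deadlines up to t must never exceed t.

```python
def demand_bound(tasks, t):
    """Processor demand over an interval of length t for periodic tasks given
    as (C, T, D) = (worst-case execution time, period, relative deadline)."""
    return sum(((t - D) // T + 1) * C for C, T, D in tasks if t >= D)

def schedulable(tasks, horizon):
    """Sufficient check: demand never exceeds the available time up to the
    horizon. Checking every integer time point is crude but shows the idea."""
    return all(demand_bound(tasks, t) <= t for t in range(1, horizon + 1))

# Hypothetical task set: (C, T, D) in milliseconds
tasks = [(1, 5, 5), (2, 10, 10), (3, 20, 20)]
print(schedulable(tasks, horizon=20))   # True for this example
```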

  9. An innovative technique for the investigation of the 4-fold forbidden beta-decay of 50V

    NASA Astrophysics Data System (ADS)

    Pattavina, L.; Laubenstein, M.; Nagorny, S. S.; Nisi, S.; Pagnanini, L.; Pirro, S.; Rusconi, C.; Schäffner, K.

    2018-05-01

    For the first time a vanadium-based crystal was operated as a cryogenic particle detector. The scintillating low-temperature calorimetric technique was used for the characterization of a 22 g YVO4 crystal, aiming at the investigation of the 4-fold forbidden non-unique β- decay of 50V. The excellent bolometric performance of the compound, together with the high light output of the crystal, makes it an outstanding technique for the study of such an elusive rare process. The internal radioactive contaminations of the crystal are also investigated, showing that an improvement on the current status of material selection and purification is needed: 235U, 238U and 232Th are measured at levels of 28 mBq/kg, 1.3 Bq/kg and 28 mBq/kg, respectively. In this work, we also discuss a future upgrade of the experimental set-up which may pave the road for the detection of the rare 50V β- decay.

  10. Specific Sensory Techniques and Sensory Environmental Modifications for Children and Youth With Sensory Integration Difficulties: A Systematic Review.

    PubMed

    Bodison, Stefanie C; Parham, L Diane

    This systematic review examined the effectiveness of specific sensory techniques and sensory environmental modifications to improve participation of children with sensory integration (SI) difficulties. Abstracts of 11,436 articles published between January 2007 and May 2015 were examined. Studies were included if designs reflected high levels of evidence, participants demonstrated SI difficulties, and outcome measures addressed function or participation. Eight studies met inclusion criteria. Seven studies evaluated effects of specific sensory techniques for children with autism spectrum disorder (ASD) or attention deficit hyperactivity disorder: Qigong massage, weighted vests, slow swinging, and incorporation of multisensory activities into preschool routines. One study of sensory environmental modifications examined adaptations to a dental clinic for children with ASD. Strong evidence supported Qigong massage, moderate evidence supported sensory modifications to the dental care environment, and limited evidence supported weighted vests. The evidence is insufficient to draw conclusions regarding slow linear swinging and incorporation of multisensory activities into preschool settings. Copyright © 2018 by the American Occupational Therapy Association, Inc.

  11. Photogrammetry for Archaeology: Collecting Pieces Together

    NASA Astrophysics Data System (ADS)

    Chibunichev, A. G.; Knyaz, V. A.; Zhuravlev, D. V.; Kurkov, V. M.

    2018-05-01

    The complexity of retrieving and understanding archaeological data requires the application of different techniques, tools and sensors for information gathering, processing and documenting. Archaeological research now has an interdisciplinary nature, involving technologies based on different physical principles for retrieving information about archaeological findings. An important part of archaeological data is visual and spatial information, which allows reconstructing the appearance of the findings and the relations between them. Photogrammetry has great potential for accurate acquisition of spatial and visual data of different scale and resolution, allowing the creation of archaeological documents of a new type and quality. The aim of the presented study is to develop an approach for creating new forms of archaeological documents, along with a pipeline for producing them and collecting them into one holistic model describing an archaeological site. A set of techniques is developed for acquiring and integrating spatial and visual data at different levels of detail. The application of the developed techniques is demonstrated by documenting the Bosporus archaeological expedition of the Russian State Historical Museum.

  12. Materials surface contamination analysis

    NASA Technical Reports Server (NTRS)

    Workman, Gary L.; Arendale, William F.

    1992-01-01

    The original research objective was to demonstrate the ability of optical fiber spectrometry to determine contamination levels on solid rocket motor cases in order to identify surface conditions which may result in poor bonds during production. The capability of using the spectral features to identify contaminants with other sensors which might only indicate a potential contamination level provides a real enhancement to current inspection systems such as Optical Stimulated Electron Emission (OSEE). The optical fiber probe can easily fit into the same scanning fixtures as the OSEE. The initial data obtained using the Guided Wave Model 260 spectrophotometer was primarily focused on determining spectra of potential contaminants such as HD2 grease, silicones, etc. However, once we began taking data and applying multivariate analysis techniques, using a program that can handle very large data sets, i.e., Unscrambler 2, it became apparent that the techniques also might provide a nice scientific tool for determining oxidation and chemisorption rates under controlled conditions. As the ultimate power of the technique became recognized, considering that the chemical system which was most frequently studied in this work is water + D6AC steel, we became very interested in trying the spectroscopic techniques to solve a broad range of problems. The complexity of the observed spectra for the D6AC + water system is due to overlaps between the water peaks, the resulting chemisorbed species, and products of reaction which also contain OH stretching bands. Unscrambling these spectral features, without knowledge of the specific species involved, has proven to be a formidable task.
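
    One common multivariate step for spectral sets of this kind is principal component analysis; the sketch below is a generic illustration in Python (the report itself used the Unscrambler package), assuming the spectra are rows of a 2-D array.

```python
from sklearn.decomposition import PCA

def spectral_pca(spectra, n_components=3):
    """Decompose a set of spectra (rows) into principal components, helping
    to separate overlapping contributions such as water and OH-stretch bands."""
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(spectra)   # per-spectrum scores on each component
    return scores, pca.components_, pca.explained_variance_ratio_
```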

  13. Scanning Electron Microscopy and Energy-Dispersive X-Ray Microanalysis of Set CEM Cement after Application of Different Bleaching Agents.

    PubMed

    Samiei, Mohammad; Janani, Maryam; Vahdati, Amin; Alemzadeh, Yalda; Bahari, Mahmoud

    2017-01-01

    The present study evaluated the element distribution in completely set calcium-enriched mixture (CEM) cement after application of 35% carbamide peroxide, 40% hydrogen peroxide and sodium perborate as commercial bleaching agents using an energy-dispersive x-ray microanalysis (EDX) system. The surface structure was also observed using the scanning electron microscope (SEM). Twenty completely set CEM cement samples, measuring 4×4 mm², were prepared in the present in vitro study and randomly divided into 4 groups based on the preparation technique as follows: the control group; the 35% carbamide peroxide group, in contact for 30-60 min 4 times; the 40% hydrogen peroxide group, with a contact time of 15-20 min 3 times; and the sodium perborate group, where the powder and liquid were mixed and placed on the CEM cement surface 4 times. Data were analyzed at a significance level of 0.05 using one-way ANOVA and Tukey's post hoc tests. EDX showed similar element distributions of oxygen, sodium, calcium and carbon in CEM cement with the use of carbamide peroxide and hydrogen peroxide; however, the distribution of silicon was different (P<0.05). In addition, these bleaching agents resulted in significantly higher levels of oxygen and carbon (P<0.05) and a lower level of calcium (P<0.05) compared to the control group. SEM of the control group showed plate-like and globular structures. Sodium perborate was similar to the control group due to its weak oxidizing properties. Globular structures and numerous woodpecker holes were observed on the even surface in the carbamide peroxide group. The mean elemental distribution of completely set CEM cement was different when exposed to sodium perborate, carbamide peroxide and hydrogen peroxide.
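
    The one-way ANOVA with Tukey's post hoc comparison can be reproduced generically with SciPy and statsmodels; the element-percentage values below are purely hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical calcium weight-% measurements per group (illustrative values only)
control   = np.array([28.1, 27.6, 28.4, 27.9, 28.2])
carbamide = np.array([24.9, 25.3, 24.6, 25.1, 24.8])
hydrogen  = np.array([24.2, 24.7, 24.4, 24.9, 24.1])

f_stat, p_value = f_oneway(control, carbamide, hydrogen)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate([control, carbamide, hydrogen])
groups = ["control"] * 5 + ["carbamide"] * 5 + ["hydrogen"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.05))   # pairwise comparisons at the 0.05 level
```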

  14. Experiments on automatic classification of tissue malignancy in the field of digital pathology

    NASA Astrophysics Data System (ADS)

    Pereira, J.; Barata, R.; Furtado, Pedro

    2017-06-01

    Automated analysis of histological images helps diagnose and further classify breast cancer. Totally automated approaches can be used to pinpoint images for further analysis by the medical doctor. But tissue images are especially challenging for either manual or automated approaches, due to mixed patterns and textures, where malignant regions are sometimes difficult to detect unless they are in very advanced stages. Some of the major challenges are related to irregular and very diffuse patterns, as well as difficulty to define winning features and classifier models. Although it is also hard to segment correctly into regions, due to the diffuse nature, it is still crucial to take low-level features over individualized regions instead of the whole image, and to select those with the best outcomes. In this paper we report on our experiments building a region classifier with a simple subspace division and a feature selection model that improves results over image-wide and/or limited feature sets. Experimental results show modest accuracy for a set of classifiers applied over the whole image, while the conjunction of image division, per-region low-level extraction of features and selection of features, together with the use of a neural network classifier achieved the best levels of accuracy for the dataset and settings we used in the experiments. Future work involves deep learning techniques, adding structures semantics and embedding the approach as a tumor finding helper in a practical Medical Imaging Application.
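
    A simplified version of the pipeline described above (tile the image into regions, extract low-level features per region, select the best features, classify with a neural network) might look like the scikit-learn sketch below; the tiling size and feature set are placeholders, not the authors' configuration.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def tile_features(image, tile=64):
    """Split an 8-bit grayscale image into square tiles and compute simple
    low-level features per tile (mean, std, 16-bin intensity histogram)."""
    feats = []
    h, w = image.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            region = image[y:y + tile, x:x + tile]
            hist, _ = np.histogram(region, bins=16, range=(0, 255), density=True)
            feats.append(np.concatenate(([region.mean(), region.std()], hist)))
    return np.array(feats)

# X: per-region feature matrix, y: per-region malignancy labels (assumed available)
classifier = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=10),                           # keep the most discriminative features
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),  # simple neural network classifier
)
# classifier.fit(X_train, y_train); classifier.score(X_test, y_test)
```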

  15. Linking stroke mortality with air pollution, income, and greenness in northwest Florida: an ecological geographical study

    PubMed Central

    Hu, Zhiyong; Liebens, Johan; Rao, K Ranga

    2008-01-01

    Background Relatively few studies have examined the association between air pollution and stroke mortality. Inconsistent and inclusive results from existing studies on air pollution and stroke justify the need to continue to investigate the linkage between stroke and air pollution. No studies have been done to investigate the association between stroke and greenness. The objective of this study was to examine if there is association of stroke with air pollution, income and greenness in northwest Florida. Results Our study used an ecological geographical approach and dasymetric mapping technique. We adopted a Bayesian hierarchical model with a convolution prior considering five census tract specific covariates. A 95% credible set which defines an interval having a 0.95 posterior probability of containing the parameter for each covariate was calculated from Markov Chain Monte Carlo simulations. The 95% credible sets are (-0.286, -0.097) for household income, (0.034, 0.144) for traffic air pollution effect, (0.419, 1.495) for emission density of monitored point source polluters, (0.413, 1.522) for simple point density of point source polluters without emission data, and (-0.289,-0.031) for greenness. Household income and greenness show negative effects (the posterior densities primarily cover negative values). Air pollution covariates have positive effects (the 95% credible sets cover positive values). Conclusion High risk of stroke mortality was found in areas with low income level, high air pollution level, and low level of exposure to green space. PMID:18452609
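
    A 95% credible set of this kind is read directly off the posterior draws; the snippet below is a generic illustration using equal-tailed percentiles of MCMC samples. The draws are synthetic and chosen only to resemble the reported greenness interval.

```python
import numpy as np

def credible_interval(samples, level=0.95):
    """Equal-tailed credible interval from MCMC draws of one coefficient."""
    tail = 100 * (1 - level) / 2
    return np.percentile(samples, [tail, 100 - tail])

# Synthetic posterior draws for the greenness coefficient (illustration only)
greenness_draws = np.random.default_rng(0).normal(-0.16, 0.065, size=20_000)
lo, hi = credible_interval(greenness_draws)
print(f"95% credible set: ({lo:.3f}, {hi:.3f})")   # an all-negative interval indicates a protective effect
```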

  16. Surveillance metrics sensitivity study.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamada, Michael S.; Bierbaum, Rene Lynn; Robertson, Alix A.

    2011-09-01

    In September of 2009, a Tri-Lab team was formed to develop a set of metrics relating to the NNSA nuclear weapon surveillance program. The purpose of the metrics was to develop a more quantitative and/or qualitative metric(s) describing the results of realized or non-realized surveillance activities on our confidence in reporting reliability and assessing the stockpile. As a part of this effort, a statistical sub-team investigated various techniques and developed a complementary set of statistical metrics that could serve as a foundation for characterizing aspects of meeting the surveillance program objectives. The metrics are a combination of tolerance limit calculations and power calculations, intending to answer level-of-confidence type questions with respect to the ability to detect certain undesirable behaviors (catastrophic defects, margin insufficiency defects, and deviations from a model). Note that the metrics are not intended to gauge product performance but instead the adequacy of surveillance. This report gives a short description of four metrics types that were explored and the results of a sensitivity study conducted to investigate their behavior for various inputs. The results of the sensitivity study can be used to set the risk parameters that specify the level of stockpile problem that the surveillance program should be addressing.
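
    As a generic illustration of the power-calculation flavor of such metrics (not the report's actual formulas): with simple attribute sampling, the probability of observing at least one defect among n randomly sampled units, when the true defect fraction is p, quantifies the level of confidence that surveillance would detect a stockpile problem of that size.

```python
import math

def detection_probability(n, p):
    """Probability that a random sample of n units contains at least one
    defective unit when the population defect fraction is p."""
    return 1.0 - (1.0 - p) ** n

def required_sample_size(p, confidence=0.95):
    """Smallest n giving at least the stated confidence of seeing a defect."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

print(detection_probability(n=11, p=0.25))            # ~0.958
print(required_sample_size(p=0.25, confidence=0.95))  # 11
```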

  17. Surveillance Metrics Sensitivity Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bierbaum, R; Hamada, M; Robertson, A

    2011-11-01

    In September of 2009, a Tri-Lab team was formed to develop a set of metrics relating to the NNSA nuclear weapon surveillance program. The purpose of the metrics was to develop a more quantitative and/or qualitative metric(s) describing the results of realized or non-realized surveillance activities on our confidence in reporting reliability and assessing the stockpile. As a part of this effort, a statistical sub-team investigated various techniques and developed a complementary set of statistical metrics that could serve as a foundation for characterizing aspects of meeting the surveillance program objectives. The metrics are a combination of tolerance limit calculations and power calculations, intending to answer level-of-confidence type questions with respect to the ability to detect certain undesirable behaviors (catastrophic defects, margin insufficiency defects, and deviations from a model). Note that the metrics are not intended to gauge product performance but instead the adequacy of surveillance. This report gives a short description of four metrics types that were explored and the results of a sensitivity study conducted to investigate their behavior for various inputs. The results of the sensitivity study can be used to set the risk parameters that specify the level of stockpile problem that the surveillance program should be addressing.

  18. Search for dark matter decay of the free neutron from the UCNA experiment: n → χ + e + e –

    DOE PAGES

    Sun, X.; Adamek, E.; Allgeier, B.; ...

    2018-05-21

    It has been proposed recently that a previously unobserved neutron decay branch to a dark matter particle (χ) could account for the discrepancy in the neutron lifetime observed in experiments that use two different measurement techniques. One of the possible final states discussed includes a single χ along with an e+e– pair. We use data from the UCNA (Ultracold Neutron Asymmetry) experiment to set limits on this decay channel. Coincident electron-like events are detected with ~4π acceptance using a pair of detectors that observe a volume of stored ultracold neutrons. The summed kinetic energy (Ee+e–) from such events is used to set limits, as a function of the χ mass, on the branching fraction for this decay channel. For χ masses consistent with resolving the neutron lifetime discrepancy, we exclude this as the dominant dark matter decay channel at >>5σ level for 100 keV < Ee+e– < 644 keV. In conclusion, if the χ + e+e– final state is not the only one, we set limits on its branching fraction of <10^-4 for the above Ee+e– range at >90% confidence level.

  19. Place and Child Health: The Interaction of Population Density and Sanitation in Developing Countries.

    PubMed

    Hathi, Payal; Haque, Sabrina; Pant, Lovey; Coffey, Diane; Spears, Dean

    2017-02-01

    A long literature in demography has debated the importance of place for health, especially children's health. In this study, we assess whether the importance of dense settlement for infant mortality and child height is moderated by exposure to local sanitation behavior. Is open defecation (i.e., without a toilet or latrine) worse for infant mortality and child height where population density is greater? Is poor sanitation an important mechanism by which population density influences child health outcomes? We present two complementary analyses using newly assembled data sets, which represent two points in a trade-off between external and internal validity. First, we concentrate on external validity by studying infant mortality and child height in a large, international child-level data set of 172 Demographic and Health Surveys, matched to census population density data for 1,800 subnational regions. Second, we concentrate on internal validity by studying child height in Bangladeshi districts, using a new data set constructed with GIS techniques that allows us to control for fixed effects at a high level of geographic resolution. We find a statistically robust and quantitatively comparable interaction between sanitation and population density with both approaches: open defecation externalities are more important for child health outcomes where people live more closely together.
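
    The density-by-sanitation interaction tested above corresponds to a regression with an interaction term; a minimal statsmodels sketch follows, with hypothetical column names rather than the authors' specification.

```python
import statsmodels.formula.api as smf

def fit_interaction_model(df):
    """OLS of child height-for-age on the local open-defecation rate, log
    population density, and their interaction, with region-clustered errors.
    A negative interaction coefficient indicates that open defecation is more
    harmful for child height where people live more densely."""
    model = smf.ols(
        "height_for_age ~ open_defecation * log_pop_density + household_wealth",
        data=df,
    )
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["region"]})
```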

  20. Search for dark matter decay of the free neutron from the UCNA experiment: n → χ + e + e –

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, X.; Adamek, E.; Allgeier, B.

    It has been proposed recently that a previously unobserved neutron decay branch to a dark matter particle (χ) could account for the discrepancy in the neutron lifetime observed in experiments that use two different measurement techniques. One of the possible final states discussed includes a single χ along with an e+e– pair. We use data from the UCNA (Ultracold Neutron Asymmetry) experiment to set limits on this decay channel. Coincident electron-like events are detected with ~4π acceptance using a pair of detectors that observe a volume of stored ultracold neutrons. The summed kinetic energy (Ee+e–) from such events is used to set limits, as a function of the χ mass, on the branching fraction for this decay channel. For χ masses consistent with resolving the neutron lifetime discrepancy, we exclude this as the dominant dark matter decay channel at >>5σ level for 100 keV < Ee+e– < 644 keV. In conclusion, if the χ + e+e– final state is not the only one, we set limits on its branching fraction of <10^-4 for the above Ee+e– range at >90% confidence level.
