Sample records for processing methods applied

  1. Process safety improvement--quality and target zero.

    PubMed

    Van Scyoc, Karl

    2008-11-15

    Process safety practitioners have adopted quality management principles in the design of process safety management systems with positive effect, yet achieving safety objectives sometimes remains a distant target. Companies regularly apply tools and methods which have roots in quality and productivity improvement. The "plan, do, check, act" improvement loop, statistical analysis of incidents (non-conformities), and performance trending popularized by Dr. Deming are now commonly used in the context of process safety. Significant advancements in HSE performance are reported after applying methods viewed as fundamental for quality management. In pursuit of continual process safety improvement, the paper examines various quality improvement methods and explores how methods intended for product quality can additionally be applied to continual improvement of process safety. Methods such as Kaizen, Poka-yoke, and TRIZ, while long established for quality improvement, are quite unfamiliar in the process safety arena. These methods are discussed for application in improving both process safety leadership and field work team performance. Practical ways to advance process safety, based on these methods, are given.

  2. Applied Behavior Analysis and Statistical Process Control?

    ERIC Educational Resources Information Center

    Hopkins, B. L.

    1995-01-01

    Incorporating statistical process control (SPC) methods into applied behavior analysis is discussed. It is claimed that SPC methods would likely reduce applied behavior analysts' intimate contacts with problems and would likely yield poor treatment and research decisions. Cases and data presented by Pfadt and Wheeler (1995) are cited as examples.…

  3. Applying the Mixed Methods Instrument Development and Construct Validation Process: the Transformative Experience Questionnaire

    ERIC Educational Resources Information Center

    Koskey, Kristin L. K.; Sondergeld, Toni A.; Stewart, Victoria C.; Pugh, Kevin J.

    2018-01-01

    Onwuegbuzie and colleagues proposed the Instrument Development and Construct Validation (IDCV) process as a mixed methods framework for creating and validating measures. Examples applying IDCV are lacking. We provide an illustrative case integrating the Rasch model and cognitive interviews applied to the development of the Transformative…

  4. Estimating costs and performance of systems for machine processing of remotely sensed data

    NASA Technical Reports Server (NTRS)

    Ballard, R. J.; Eastwood, L. F., Jr.

    1977-01-01

    This paper outlines a method for estimating computer processing times and costs incurred in producing information products from digital remotely sensed data. The method accounts for both computation and overhead, and may be applied to any serial computer. The method is applied to estimate the cost and computer time involved in producing Level II Land Use and Vegetative Cover Maps for a five-state midwestern region. The results show that the amount of data to be processed overloads some example computer systems, but that the processing is feasible on others.

  5. Neuro-parity pattern recognition system and method

    DOEpatents

    Gross, Kenneth C.; Singer, Ralph M.; Van Alstine, Rollin G.; Wegerich, Stephan W.; Yue, Yong

    2000-01-01

    A method and system for monitoring a process and determining its condition. Initial data is sensed, a first set of virtual data is produced by applying a system state analysis to the initial data, a second set of virtual data is produced by applying a neural network analysis to the initial data, and a parity space analysis is applied to the first and second sets of virtual data and also to the initial data to provide a parity space decision about the condition of the process. A logic test can further be applied to produce a further system decision about the state of the process.
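
    The claimed data flow (two independent virtual estimates checked against the measurement in a parity space) can be illustrated with a toy numerical sketch. This is not the patented algorithm: the two estimator channels are simulated, and the absolute tolerance is an arbitrary illustrative choice.

    ```python
    import numpy as np

    def parity_decision(measured, est_state, est_nn, tol=0.2):
        """Toy parity-space check over three redundant channels: the raw
        measurement and two 'virtual' estimates of it. Flags samples where
        the channels disagree by more than an absolute tolerance."""
        signals = np.array([measured, est_state, est_nn])
        residuals = signals - signals.mean(axis=0)      # parity-style residuals
        return np.abs(residuals).max(axis=0) > tol      # boolean fault flags

    rng = np.random.default_rng(0)
    truth = np.sin(np.linspace(0, 6, 200))
    meas = truth + 0.05 * rng.normal(size=200)
    v_state = truth + 0.05 * rng.normal(size=200)   # stand-in for state-estimate channel
    v_nn = truth + 0.05 * rng.normal(size=200)      # stand-in for neural-network channel
    v_nn[150:] += 0.5                               # inject a drift fault
    flags = parity_decision(meas, v_state, v_nn)
    print(flags[:150].mean(), flags[150:].mean())   # ~0 before the fault, ~1 after
    ```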

  6. Applying an analytical method to study neutron behavior for dosimetry

    NASA Astrophysics Data System (ADS)

    Shirazi, S. A. Mousavi

    2016-12-01

    In this investigation, a new dosimetry process is studied by applying an analytical method. This novel process is associated with human liver tissue, which comprises water, glycogen, and other compounds. In this study, the organic compounds of the liver are decomposed into their constituent elements based upon the mass percentage and density of each element. The absorbed doses are computed by the analytical method for all constituent elements of liver tissue. This analytical method is introduced by applying mathematical equations based on neutron behavior and neutron collision rules. The results show that the absorbed doses converge for neutron energies below 15 MeV. This method can be applied to study the interaction of neutrons with other tissues and to estimate the absorbed dose for a wide range of neutron energies.

  7. A Multilevel Comprehensive Assessment of International Accreditation for Business Programmes-Based on AMBA Accreditation of GDUFS

    ERIC Educational Resources Information Center

    Jiang, Yong

    2017-01-01

    Traditional mathematical methods built around exactitude have limitations when applied to the processing of educational information, due to the uncertainty and imperfection of such information. Alternative mathematical methods, such as grey system theory, have been widely applied in processing incomplete information systems and have proven effective in a number of…

  8. Transforming Collaborative Process Models into Interface Process Models by Applying an MDA Approach

    NASA Astrophysics Data System (ADS)

    Lazarte, Ivanna M.; Chiotti, Omar; Villarreal, Pablo D.

    Collaborative business models among enterprises require defining collaborative business processes. Enterprises implement B2B collaborations to execute these processes. In B2B collaborations the integration and interoperability of processes and systems of the enterprises are required to support the execution of collaborative processes. From a collaborative process model, which describes the global view of the enterprise interactions, each enterprise must define the interface process that represents the role it performs in the collaborative process in order to implement the process in a Business Process Management System. Hence, in this work we propose a method for the automatic generation of the interface process model of each enterprise from a collaborative process model. This method is based on a Model-Driven Architecture to transform collaborative process models into interface process models. By applying this method, interface processes are guaranteed to be interoperable and defined according to a collaborative process.

  9. 32 CFR 22.105 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... applying existing technology to new products and processes in a general way. Advanced research is most... Category 6.3A) programs within Research, Development, Test and Evaluation (RDT&E). Applied research... technology such as new materials, devices, methods and processes. It typically is funded in Applied Research...

  10. Planungsmodelle und Planungsmethoden. Anhaltspunkte zur Strukturierung und Gestaltung von Planungsprozessen

    NASA Astrophysics Data System (ADS)

    Diller, Christian; Karic, Sarah; Oberding, Sarah

    2017-06-01

    This article addresses the question of the phases of the political planning process in which planners apply their methodological set of tools. To that end, the results of a research project are presented, which were gained by an examination of planning cases in learned journals. Firstly, it is argued which model of the planning process is most suitable to reflect the regarded cases and how it is positioned relative to models of the political process. Thereafter, it is analyzed which types of planning methods are applied in the several stages of the planning process. The central findings: although complex, many planning processes can be thoroughly pictured by a linear model with predominantly simple feedback loops. Even in times of the communicative turn, planners should take care to apply not only communicative methods but also the classical analytical-rational methods in their set of tools. These are helpful especially for understanding the political process before and after the actual planning phase.

  11. Experiences Using Formal Methods for Requirements Modeling

    NASA Technical Reports Server (NTRS)

    Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David

    1996-01-01

    This paper describes three case studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, the formal modeling provided a cost-effective enhancement of the existing verification and validation processes. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.

  12. Pre-clinical and Clinical Evaluation of High Resolution, Mobile Gamma Camera and Positron Imaging Devices

    DTIC Science & Technology

    2010-10-01

    [Excerpt consists of IEEE Xplore download notices and running headers only; recoverable reference: Studenski et al., "Acquisition and Processing Methods for a Bedside Cardiac SPECT Imaging System," IEEE Transactions on Nuclear Science, vol. 57, no. 1, February 2010.]

  13. Experiences Using Lightweight Formal Methods for Requirements Modeling

    NASA Technical Reports Server (NTRS)

    Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David

    1997-01-01

    This paper describes three case studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, formal methods enhanced the existing verification and validation processes, by testing key properties of the evolving requirements, and helping to identify weaknesses. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.

  14. [Work organisation improvement methods applied to activities of Blood Transfusion Establishments (BTE): Lean Manufacturing, VSM, 5S].

    PubMed

    Bertholey, F; Bourniquel, P; Rivery, E; Coudurier, N; Follea, G

    2009-05-01

    Continuous improvement of efficiency as well as new expectations from customers (quality and safety of blood products) and employees (working conditions) imply constant efforts in Blood Transfusion Establishments (BTE) to improve work organisations. The Lean method (from "lean" meaning "thin") aims at identifying wastes in the process (overproduction, waiting, over-processing, inventory, transport, motion) and then reducing them by establishing a mapping of the value chain (Value Stream Mapping). It consists in determining the added value of each step of the process from a customer perspective. Lean also consists in standardizing operations while involving all collaborators and giving them responsibility. The name 5S comes from the first letters of five operations of a Japanese management technique: to clear, rank, keep clean, standardize, make durable. The 5S method develops team working, inducing an evolution of the way management is performed. The Lean VSM method has been applied to blood processing (component laboratory) in the Pays de la Loire BTE. The Lean 5S method has been applied to blood processing, quality control, purchasing, warehousing, human resources and quality assurance in the Rhône-Alpes BTE. Feedback from both BTE shows that these methods improved: (1) the processes and working conditions from a quality perspective, (2) staff satisfaction, and (3) efficiency. These experiences, implemented in two BTE for different processes, confirm the applicability and usefulness of these methods for improving work organisations in BTE.

  15. Solid Phase Extraction (SPE) for Biodiesel Processing and Analysis

    DTIC Science & Technology

    2017-12-13

    …sources. There are several methods that can be applied to the development of separation techniques that may replace the necessary water wash steps in biodiesel refinement. Unfortunately, the most common methods are poorly suited or face high costs when applied to diesel purification. Distillation is…

  16. Statistical iterative reconstruction for streak artefact reduction when using multidetector CT to image the dento-alveolar structures.

    PubMed

    Dong, J; Hayakawa, Y; Kober, C

    2014-01-01

    When metallic prosthetic appliances and dental fillings exist in the oral cavity, the appearance of metal-induced streak artefacts is unavoidable in CT images. The aim of this study was to develop a method for artefact reduction using statistical reconstruction on multidetector row CT images. Adjacent CT images often depict similar anatomical structures. Therefore, images with weak artefacts were reconstructed using projection data from an artefact-free image in a neighbouring thin slice. Images with moderate and strong artefacts were then processed in sequence by successive iterative restoration, where the projection data were generated from the adjacent reconstructed slice. First, the basic maximum likelihood-expectation maximization algorithm was applied. Next, the ordered subset-expectation maximization algorithm was examined. Alternatively, a small region of interest was designated. Finally, a general-purpose graphics processing unit was applied in both situations. The algorithms reduced the metal-induced streak artefacts on multidetector row CT images when the sequential processing method was applied. Ordered subset-expectation maximization and the small region of interest reduced the processing duration without apparent detriment, and the general-purpose graphics processing unit delivered high performance. A statistical reconstruction method was applied for streak artefact reduction, and the alternative algorithms applied were effective. Both software and hardware tools, such as ordered subset-expectation maximization, a small region of interest, and a general-purpose graphics processing unit, achieved fast artefact correction.
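
    The record names maximum likelihood-expectation maximization (MLEM) as its basic algorithm. Below is a minimal sketch of the standard MLEM multiplicative update on a toy linear system; the system matrix, counts, and iteration budget are illustrative stand-ins for the paper's CT geometry.

    ```python
    import numpy as np

    def mlem(A, y, n_iter=50):
        """Basic MLEM update: x <- x / (A^T 1) * A^T (y / (A x)).
        A: (n_meas, n_pix) non-negative system matrix; y: measured counts."""
        x = np.ones(A.shape[1])                 # flat initial image
        sens = A.T @ np.ones(A.shape[0])        # sensitivity term (A^T 1)
        for _ in range(n_iter):
            proj = A @ x                        # forward projection
            proj[proj == 0] = 1e-12             # guard against division by zero
            x *= (A.T @ (y / proj)) / sens      # multiplicative correction
        return x

    # Toy example: 4 detector sums over a 3-pixel "image".
    A = np.array([[1., 1., 0.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
    x_true = np.array([2., 5., 3.])
    x_hat = mlem(A, A @ x_true, n_iter=200)
    print(np.round(x_hat, 2))  # approaches [2. 5. 3.]
    ```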

  17. An analytical approach to customer requirement information processing

    NASA Astrophysics Data System (ADS)

    Zhou, Zude; Xiao, Zheng; Liu, Quan; Ai, Qingsong

    2013-11-01

    'Customer requirements' (CRs) management is a key component of customer relationship management (CRM). By processing customer-focused information, CRs management plays an important role in enterprise systems (ESs). Although two main CRs analysis methods, quality function deployment (QFD) and Kano model, have been applied to many fields by many enterprises in the past several decades, the limitations such as complex processes and operations make them unsuitable for online businesses among small- and medium-sized enterprises (SMEs). Currently, most SMEs do not have the resources to implement QFD or Kano model. In this article, we propose a method named customer requirement information (CRI), which provides a simpler and easier way for SMEs to run CRs analysis. The proposed method analyses CRs from the perspective of information and applies mathematical methods to the analysis process. A detailed description of CRI's acquisition, classification and processing is provided.

  18. Flexible ITO-free organic solar cells applying aqueous solution-processed V2O5 hole transport layer: An outdoor stability study

    NASA Astrophysics Data System (ADS)

    Lima, F. Anderson S.; Beliatis, Michail J.; Roth, Bérenger; Andersen, Thomas R.; Bortoti, Andressa; Reyna, Yegraf; Castro, Eryza; Vasconcelos, Igor F.; Gevorgyan, Suren A.; Krebs, Frederik C.; Lira-Cantu, Mónica

    2016-02-01

    Solution processable semiconductor oxides have opened a new paradigm for the enhancement of the lifetime of thin film solar cells. Their fabrication by low-cost and environmentally friendly solution-processable methods makes them ideal barrier (hole and electron) transport layers. In this work, we fabricate flexible ITO-free organic solar cells (OPV) by printing methods applying an aqueous solution-processed V2O5 as the hole transport layer (HTL) and compared them to devices applying PEDOT:PSS. The transparent conducting electrode was PET/Ag/PEDOT/ZnO, and the OPV configuration was PET/Ag/PEDOT/ZnO/P3HT:PC60BM/HTL/Ag. Outdoor stability analyses carried out for more than 900 h revealed higher stability for devices fabricated with the aqueous solution-processed V2O5.

  19. Applicability and Limitations of Reliability Allocation Methods

    NASA Technical Reports Server (NTRS)

    Cruz, Jose A.

    2016-01-01

    The reliability allocation process may be described as the process of assigning reliability requirements to individual components within a system to attain the specified system reliability. For large systems, the allocation is often performed at different stages of system design and often begins at the conceptual stage. As the system design develops and more information about components and the operating environment becomes available, different allocation methods can be considered. Reliability allocation methods are usually divided into two categories: weighting factors and optimal reliability allocation. When properly applied, these methods can produce reasonable approximations. Reliability allocation techniques have limitations and implied assumptions that need to be understood by system engineers; applying them without understanding those limitations and assumptions can produce unrealistic results. This report addresses weighting factors and optimal reliability allocation techniques, and identifies the applicability and limitations of each reliability allocation technique.
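
    For the weighting-factors category, a common textbook scheme for a series system gives each component an exponent on the system reliability target, so the allocations multiply back to the target. A hedged sketch with made-up weights (e.g., normalized failure-rate estimates):

    ```python
    # Weighting-factor allocation for a series system: each component i gets
    # R_i = R_sys ** w_i with sum(w_i) = 1, so prod(R_i) = R_sys.
    # The component names and weights below are illustrative.
    R_sys = 0.95
    weights = {"pump": 0.5, "valve": 0.3, "sensor": 0.2}   # must sum to 1

    alloc = {name: R_sys ** w for name, w in weights.items()}
    for name, r in alloc.items():
        print(f"{name}: R = {r:.4f}")

    product = 1.0
    for r in alloc.values():
        product *= r
    print(f"check: product = {product:.4f}")  # equals R_sys
    ```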

  20. Rapid method for sampling metals for materials identification

    NASA Technical Reports Server (NTRS)

    Higgins, L. E.

    1971-01-01

    A nondamaging process similar to electrochemical machining is useful in obtaining metal samples from places inaccessible to conventional sampling methods or where such methods would be hazardous or contaminating to specimens. The process applies to industries where metals or metal alloys play a vital role.

  1. A signal processing method for the friction-based endpoint detection system of a CMP process

    NASA Astrophysics Data System (ADS)

    Chi, Xu; Dongming, Guo; Zhuji, Jin; Renke, Kang

    2010-12-01

    A signal processing method for the friction-based endpoint detection system of a chemical mechanical polishing (CMP) process is presented. The signal processing method uses the wavelet threshold denoising method to reduce the noise contained in the measured original signal, extracts the Kalman filter innovation from the denoised signal as the feature signal, and judges the CMP endpoint based on the feature of the Kalman filter innovation sequence during the CMP process. Applying the signal processing method, endpoint detection experiments for the Cu CMP process were carried out. The results show that the signal processing method can judge the endpoint of the Cu CMP process.
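
    The processing chain is concrete enough to sketch end to end: wavelet-threshold denoising of the friction signal, then a scalar Kalman filter whose innovation sequence spikes when the endpoint changes the mean friction level. The sketch below uses PyWavelets for the wavelet step; the filter parameters, signal model, and threshold rule are illustrative assumptions, not the paper's tuning.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def denoise(signal, wavelet="db4", level=4):
        """Soft wavelet-threshold denoising with the universal threshold."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate
        thr = sigma * np.sqrt(2 * np.log(len(signal)))
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(signal)]

    def kalman_innovation(z, q=1e-4, r=1e-2):
        """Scalar random-walk Kalman filter; returns the innovation sequence."""
        x, p, innov = z[0], 1.0, []
        for zk in z:
            p += q                       # predict
            k = p / (p + r)              # Kalman gain
            innov.append(zk - x)         # innovation: measurement minus prediction
            x += k * innov[-1]           # update state
            p *= 1 - k                   # update covariance
        return np.array(innov)

    rng = np.random.default_rng(1)
    friction = np.r_[np.full(500, 1.0), np.full(300, 0.8)] + 0.05 * rng.normal(size=800)
    innov = kalman_innovation(denoise(friction))
    print(int(np.argmax(np.abs(innov[10:])) + 10))  # spikes near sample 500 (endpoint)
    ```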

  2. Crop Row Detection in Maize Fields Inspired on the Human Visual Perception

    PubMed Central

    Romeo, J.; Pajares, G.; Montalvo, M.; Guerrero, J. M.; Guijarro, M.; Ribeiro, A.

    2012-01-01

    This paper proposes a new method, oriented to real-time image processing, for identifying crop rows in images of maize fields. The vision system is designed to be installed onboard a mobile agricultural vehicle, and is therefore subject to gyrations, vibrations, and undesired movements. The images are captured under perspective projection and are affected by these undesired effects. The image processing consists of two main processes: image segmentation and crop row detection. The first applies a threshold to separate green plants or pixels (crops and weeds) from the rest (soil, stones, and others). It is based on a fuzzy clustering process, which yields the threshold to be applied during normal operation. The crop row detection applies a method based on image perspective projection that searches for the maximum accumulation of segmented green pixels along straight alignments, which determine the expected crop lines in the images. The method is robust enough to work under the above-mentioned undesired effects, and compares favorably against the well-tested Hough transformation for line detection. PMID:22623899
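
    A compact mock-up of the two stages, using the common excess-green index as a stand-in for the paper's fuzzy-clustering threshold and a brute-force straight-line accumulator in place of its perspective-projection search:

    ```python
    import numpy as np

    def excess_green(rgb):
        """ExG = 2G - R - B, a common greenness index (stand-in for the
        paper's fuzzy-clustering threshold)."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        return 2 * g - r - b

    def best_row_line(mask, slopes=np.linspace(-0.5, 0.5, 41)):
        """Score straight lines x = x0 + slope*y by accumulated green pixels."""
        ys, xs = np.nonzero(mask)
        best = (0, 0.0, 0)                              # (score, slope, x0)
        for m in slopes:
            x0s = np.round(xs - m * ys).astype(int)     # intercept per pixel
            counts = np.bincount(x0s[x0s >= 0])
            if counts.size and counts.max() > best[0]:
                best = (int(counts.max()), float(m), int(counts.argmax()))
        return best

    # Synthetic 100x100 field: one crop row at x = 30 + 0.2*y, plus soil noise.
    rng = np.random.default_rng(2)
    img = rng.uniform(0, 0.2, (100, 100, 3))
    for y in range(100):
        img[y, int(30 + 0.2 * y)] = (0.1, 0.9, 0.1)     # green crop pixels
    mask = excess_green(img) > 0.5
    print(best_row_line(mask))                          # ~ (100, 0.2, 30)
    ```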

  3. Intelligent methods for the process parameter determination of plastic injection molding

    NASA Astrophysics Data System (ADS)

    Gao, Huang; Zhang, Yun; Zhou, Xundao; Li, Dequn

    2018-03-01

    Injection molding is one of the most widely used material processing methods in producing plastic products with complex geometries and high precision. The determination of process parameters is important in obtaining qualified products and maintaining product quality. This article reviews the recent studies and developments of the intelligent methods applied in the process parameter determination of injection molding. These intelligent methods are classified into three categories: case-based reasoning methods, expert system-based methods, and data fitting and optimization methods. A framework of process parameter determination is proposed after comprehensive discussions. Finally, the conclusions and future research topics are discussed.

  4. Multiple imputation of rainfall missing data in the Iberian Mediterranean context

    NASA Astrophysics Data System (ADS)

    Miró, Juan Javier; Caselles, Vicente; Estrela, María José

    2017-11-01

    Given the increasing need for complete rainfall data networks, diverse methods, progressively more advanced than traditional approaches, have been proposed in recent years for filling gaps in observed precipitation series. The present study validates 10 methods (6 linear, 2 non-linear and 2 hybrid) that allow multiple imputation, i.e., filling missing data in multiple incomplete series at the same time using a dense network of neighboring stations. These were applied to daily and monthly rainfall in two sectors of the Júcar River Basin Authority (eastern Iberian Peninsula), an area characterized by high spatial irregularity and difficult rainfall estimation. A classification of precipitation according to its genetic origin was applied as pre-processing, and a quantile-mapping adjustment as a post-processing technique. The results showed generally better performance for the non-linear and hybrid methods; notably, the non-linear PCA (NLPCA) method considerably outperforms the Self Organizing Maps (SOM) method among the non-linear approaches. Among the linear methods, the Regularized Expectation Maximization (RegEM) method was the best, but far behind NLPCA. Applying EOF filtering as post-processing of NLPCA (the hybrid approach) yielded the best results.
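
    Of the pipeline steps, the quantile-mapping adjustment is the most self-contained: imputed values are mapped through the observed empirical distribution to correct distributional bias. A minimal sketch on synthetic, rainfall-like data (the gamma and normal generators are illustrative assumptions, not the study's data):

    ```python
    import numpy as np

    def quantile_map(imputed, observed):
        """Map each imputed value through observed's empirical quantiles:
        x -> F_obs^{-1}(F_imp(x)). Corrects distributional bias left by
        the imputation model (e.g., variance shrinkage)."""
        ranks = np.searchsorted(np.sort(imputed), imputed, side="right") / len(imputed)
        return np.quantile(observed, np.clip(ranks, 0, 1))

    rng = np.random.default_rng(3)
    observed = rng.gamma(shape=0.6, scale=8.0, size=2000)   # skewed, rainfall-like
    imputed = rng.normal(observed.mean(), observed.std() * 0.5, size=500)  # too smooth
    corrected = quantile_map(imputed, observed)
    print(f"observed std {observed.std():.2f}, imputed std {imputed.std():.2f}, "
          f"corrected std {corrected.std():.2f}")   # corrected ~ observed spread
    ```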

  5. Development of new maskless manufacturing method for anti-reflection structure and application to large-area lens with curved surface

    NASA Astrophysics Data System (ADS)

    Yamamoto, Kazuya; Takaoka, Toshimitsu; Fukui, Hidetoshi; Haruta, Yasuyuki; Yamashita, Tomoya; Kitagawa, Seiichiro

    2016-03-01

    In general, thin-film coating is widely applied to optical lens surfaces to provide an anti-reflection function. In the normal production process, the lens is first manufactured by molding, and anti-reflection is then added by thin-film coating. In recent years, instead of thin-film coating, sub-wavelength structures added to the surface of the molding die have been widely studied and developed to maintain anti-reflection performance. Applying a sub-wavelength structure makes the coating process unnecessary and reduces man-hour costs. In addition to this cost merit, there are technical advantages: the adhesion of a coating depends on the plastic material, so it is not always possible to apply an anti-reflection function to an arbitrary surface, and a sub-wavelength structure can solve both problems. Manufacturing methods for anti-reflection structures can be divided into two main types: one uses resist patterning, and the other is a maskless method that does not require patterning. We have developed a new maskless method that needs no resist patterning, can impart an anti-reflection structure to large-area and curved lens surfaces, and can be expected to apply to various market segments. We report the developed technique and the characteristics of production lenses.

  6. Application of advanced multidisciplinary analysis and optimization methods to vehicle design synthesis

    NASA Technical Reports Server (NTRS)

    Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw

    1990-01-01

    Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.

  7. Validation of a pulsed electric field process to pasteurize strawberry puree

    USDA-ARS?s Scientific Manuscript database

    An inexpensive data acquisition method was developed to validate the exact number and shape of the pulses applied during pulsed electric fields (PEF) processing. The novel validation method was evaluated in conjunction with developing a pasteurization PEF process for strawberry puree. Both buffered...

  8. Research on the raw data processing method of the hydropower construction project

    NASA Astrophysics Data System (ADS)

    Tian, Zhichao

    2018-01-01

    In this paper, based on the characteristics of fixed (quota) data, various mathematical statistical analysis methods are compared and the improved Grubbs criterion is chosen to analyze the data; through this processing, unsuitable data are screened out. It is shown that this method can be applied to the processing of fixed raw data. This paper provides a reference for reasonably determining effective quota analysis data.

  9. A systematic and critical review on bioanalytical method validation using the example of simultaneous quantitation of antidiabetic agents in blood.

    PubMed

    Fachi, Mariana Millan; Leonart, Letícia Paula; Cerqueira, Letícia Bonancio; Pontes, Flavia Lada Degaut; de Campos, Michel Leandro; Pontarolo, Roberto

    2017-06-15

    A systematic and critical review was conducted on bioanalytical methods validated to quantify combinations of antidiabetic agents in human blood. The aim of this article was to verify how the validation process of bioanalytical methods is performed and the quality of the published records. The validation assays were evaluated according to international guidelines. The main problems in the validation process are pointed out and discussed to help researchers choose methods that are truly reliable and can be successfully applied for their intended use. The combination of oral antidiabetic agents was chosen because these are among the most studied drugs and several methods are present in the literature. Moreover, this article may be applied to the validation process of all bioanalytical methods.

  10. A Simple and Useful Method to Apply Exogenous NO Gas to Plant Systems: Bell Pepper Fruits as a Model.

    PubMed

    Palma, José M; Ruiz, Carmelo; Corpas, Francisco J

    2018-01-01

    Nitric oxide (NO) is involved in many physiological plant processes, including germination, growth and development of roots, flower setting and development, senescence, and fruit ripening. In the latter process, NO has been reported to play a role opposite to that of ethylene. Thus, treatment of fruits with NO may delay ripening, independently of whether they are climacteric or nonclimacteric. Different methods of applying NO to plant systems, involving sodium nitroprusside, NONOates, DETANO, or GSNO, have been reported to investigate the physiological and molecular consequences. In this chapter a method to treat plant materials with NO gas is provided, using bell pepper fruits as a model. This method is cheap, free of side effects, and easy to apply, since it only requires common chemicals and tools available in any biology laboratory.

  11. Edge Detection Method Based on Neural Networks for COMS MI Images

    NASA Astrophysics Data System (ADS)

    Lee, Jin-Ho; Park, Eun-Bin; Woo, Sun-Hee

    2016-12-01

    Communication, Ocean And Meteorological Satellite (COMS) Meteorological Imager (MI) images are processed from raw image data for radiometric and geometric correction. When intermediate image data are matched and compared with reference landmark images in the geometric correction process, various edge detection techniques can be applied. It is essential to have a precise and correct edge image in this process, since its matching with the reference is directly related to the accuracy of the ground station output images. An edge detection method based on neural networks is applied in the ground processing of MI images to obtain sharp edges in the correct positions. The simulation results are analyzed and characterized by comparing them with the results of conventional methods, such as Sobel and Canny filters.
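
    The conventional Sobel baseline the record compares against is easy to state exactly. A small numpy version on a synthetic step edge (not a COMS MI frame):

    ```python
    import numpy as np

    def sobel_magnitude(img):
        """Gradient magnitude from the standard 3x3 Sobel kernels."""
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
        ky = kx.T
        pad = np.pad(img, 1, mode="edge")
        gx = np.zeros_like(img, dtype=float)
        gy = np.zeros_like(img, dtype=float)
        for i in range(3):               # correlation as a sum of shifted copies
            for j in range(3):
                patch = pad[i:i + img.shape[0], j:j + img.shape[1]]
                gx += kx[i, j] * patch
                gy += ky[i, j] * patch
        return np.hypot(gx, gy)

    img = np.zeros((64, 64))
    img[:, 32:] = 1.0                    # vertical step edge
    edges = sobel_magnitude(img)
    print(np.nonzero(edges[32] > 1)[0])  # columns flagged at the step: [31 32]
    ```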

  12. Statistically qualified neuro-analytic failure detection method and system

    DOEpatents

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    2002-03-02

    An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic modification of the deterministic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
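
    The two-stage structure, fitting an analytic model to the known characteristics and then training a data-driven term on what that model misses, can be sketched compactly. Below, a random-feature least-squares fit stands in for the neural network; the patent's scaled equation error training and SPRT qualification are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    u = np.linspace(0, 10, 300)                         # process input (toy)
    y = 2.0 * u + 0.8 * np.sin(1.5 * u) + 0.05 * rng.normal(size=300)  # true process

    # Stage 1: analytic model of the known (here: linear) characteristics.
    a = np.polyfit(u, y, 1)
    y_analytic = np.polyval(a, u)

    # Stage 2: augment with a data-driven term trained on the residual.
    # (Random sinusoidal features + least squares stand in for the neural net.)
    W = rng.normal(scale=1.0, size=(1, 30))
    H = np.sin(u[:, None] @ W + rng.uniform(0, np.pi, 30))
    theta, *_ = np.linalg.lstsq(H, y - y_analytic, rcond=None)
    y_hybrid = y_analytic + H @ theta

    print(f"analytic RMSE: {np.sqrt(np.mean((y - y_analytic) ** 2)):.3f}")
    print(f"hybrid   RMSE: {np.sqrt(np.mean((y - y_hybrid) ** 2)):.3f}")
    ```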

  13. Occupancy mapping and surface reconstruction using local Gaussian processes with Kinect sensors.

    PubMed

    Kim, Soohwan; Kim, Jonghyuk

    2013-10-01

    Although RGB-D sensors have been successfully applied to visual SLAM and surface reconstruction, most applications aim at visualization. In this paper, we propose a novel method of building continuous occupancy maps and reconstructing surfaces in a single framework for both navigation and visualization. In particular, we apply a Bayesian nonparametric approach, Gaussian process classification, to occupancy mapping. However, it suffers from a high computational complexity of O(n^3) + O(n^2 m), where n and m are the numbers of training and test data respectively, limiting its use for large-scale mapping with huge training sets, which are common with high-resolution RGB-D sensors. Therefore, we partition both training and test data with a coarse-to-fine clustering method and apply Gaussian processes to each local cluster. In addition, we consider Gaussian processes as implicit functions and thus extract iso-surfaces from the scalar fields (continuous occupancy maps) using marching cubes. By doing so, we are able to build two types of map representation within a single framework of Gaussian processes. Experimental results with 2-D simulated data show that the accuracy of our approximated method is comparable to previous work, while the computational time is dramatically reduced. We also demonstrate our method with 3-D real data to show its feasibility in large-scale environments.
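
    The core computational trick, partitioning training points so each O(n^3) Gaussian-process solve runs on a small cluster, can be shown with plain numpy. For brevity this sketch regresses occupancy labels in [-1, 1] rather than running a full GP classifier, and uses a single assignment pass instead of the paper's coarse-to-fine clustering.

    ```python
    import numpy as np

    def gp_predict(X, y, Xq, ell=0.4, noise=0.1):
        """Exact GP regression with an RBF kernel (cost O(n^3) in len(X))."""
        def k(A, B):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-0.5 * d2 / ell**2)
        K = k(X, X) + noise**2 * np.eye(len(X))
        return k(Xq, X) @ np.linalg.solve(K, y)

    def local_gp_predict(X, y, Xq, n_clusters=8):
        """Partition the training data and run one small GP per cluster;
        each query point uses its nearest cluster."""
        centers = X[np.random.default_rng(5).choice(len(X), n_clusters, replace=False)]
        assign = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        q_assign = np.argmin(((Xq[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        out = np.empty(len(Xq))
        for c in range(n_clusters):
            m = q_assign == c
            if m.any():
                out[m] = gp_predict(X[assign == c], y[assign == c], Xq[m])
        return out

    rng = np.random.default_rng(6)
    X = rng.uniform(0, 4, (400, 2))
    y = np.where(X[:, 0] + X[:, 1] > 4, 1.0, -1.0)    # occupied beyond a diagonal wall
    Xq = np.array([[1.0, 1.0], [3.5, 3.5]])
    print(np.round(local_gp_predict(X, y, Xq), 2))    # ~[-1, 1]
    ```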

  14. Optimization under variability and uncertainty: a case study for NOx emissions control for a gasification system.

    PubMed

    Chen, Jianjun; Frey, H Christopher

    2004-12-15

    Methods for optimization of process technologies considering the distinction between variability and uncertainty are developed and applied to case studies of NOx control for Integrated Gasification Combined Cycle systems. Existing methods of stochastic optimization (SO) and stochastic programming (SP) are demonstrated. A comparison of SO and SP results provides the value of collecting additional information to reduce uncertainty. For example, an expected annual benefit of 240,000 dollars is estimated if uncertainty can be reduced before a final design is chosen. SO and SP are typically applied to uncertainty. However, when applied to variability, the benefit of dynamic process control is obtained. For example, an annual savings of 1 million dollars could be achieved if the system is adjusted to changes in process conditions. When variability and uncertainty are treated distinctively, a coupled stochastic optimization and programming method and a two-dimensional stochastic programming method are demonstrated via a case study. For the case study, the mean annual benefit of dynamic process control is estimated to be 700,000 dollars, with a 95% confidence range of 500,000 dollars to 940,000 dollars. These methods are expected to be of greatest utility for problems involving a large commitment of resources, for which small differences in designs can produce large cost savings.

  15. Method for controlling gas metal arc welding

    DOEpatents

    Smartt, Herschel B.; Einerson, Carolyn J.; Watkins, Arthur D.

    1989-01-01

    The heat input and mass input in a Gas Metal Arc welding process are controlled by a method that comprises calculating appropriate values for weld speed, filler wire feed rate and an expected value for the welding current by algorithmic function means, applying such values for weld speed and filler wire feed rate to the welding process, measuring the welding current, comparing the measured current to the calculated current, using said comparison to calculate corrections for the weld speed and filler wire feed rate, and applying corrections.
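
    The claim reads directly as a feedback loop: compute setpoints and an expected current, measure, compare, and correct. A hedged sketch with an invented linear plant and placeholder coefficients (the patent's actual algorithmic functions are not given in the abstract):

    ```python
    def setpoints(heat_input, mass_input):
        """Stand-in 'algorithmic function means': desired heat and mass input
        are mapped to weld speed, wire feed rate, and an expected current.
        All coefficients are illustrative placeholders."""
        wire_feed = 0.8 * mass_input
        speed = 120.0 / heat_input
        expected_current = 50.0 + 2.5 * wire_feed
        return speed, wire_feed, expected_current

    def measure_current(wire_feed):
        """Plant stand-in: the real arc draws slightly more current."""
        return 55.0 + 2.5 * wire_feed

    speed, wire_feed, i_expected = setpoints(heat_input=1.5, mass_input=100.0)
    for step in range(5):
        error = measure_current(wire_feed) - i_expected   # measured vs calculated
        wire_feed -= 0.1 * error                          # correct wire feed rate
        speed *= 1 - 0.001 * error                        # correct weld speed
        print(f"step {step}: current error {error:+.2f} A")
    # errors shrink geometrically: +5.00, +3.75, +2.81, ...
    ```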

  16. Towards Real Time Diagnostics of Hybrid Welding Laser/GMAW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timothy Mcjunkin; Dennis C. Kunerth; Corrie Nichol

    2013-07-01

    Methods are currently being developed towards more robust real-time feedback in the high-throughput process combining laser welding with gas metal arc welding. A combination of ultrasonic, eddy current, electronic monitoring, and visual techniques is being applied to the welding process. Initial simulation and bench-top evaluation of the proposed real-time techniques on weld samples are presented, along with the concepts for applying the techniques concurrently to the weld process. Consideration of the eventual code acceptance of the methods and system is also being researched as a component of this project. The goal is to detect defects or precursors to defects and correct them when possible during the weld process.

  17. Towards real time diagnostics of Hybrid Welding Laser/GMAW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McJunkin, T. R.; Kunerth, D. C.; Nichol, C. I.

    2014-02-18

    Methods are currently being developed towards more robust real-time feedback in the high-throughput process combining laser welding with gas metal arc welding. A combination of ultrasonic, eddy current, electronic monitoring, and visual techniques is being applied to the welding process. Initial simulation and bench-top evaluation of the proposed real-time techniques on weld samples are presented, along with the concepts for applying the techniques concurrently to the weld process. Consideration of the eventual code acceptance of the methods and system is also being researched as a component of this project. The goal is to detect defects or precursors to defects and correct them when possible during the weld process.

  18. Towards real time diagnostics of Hybrid Welding Laser/GMAW

    NASA Astrophysics Data System (ADS)

    McJunkin, T. R.; Kunerth, D. C.; Nichol, C. I.; Todorov, E.; Levesque, S.

    2014-02-01

    Methods are currently being developed towards more robust real-time feedback in the high-throughput process combining laser welding with gas metal arc welding. A combination of ultrasonic, eddy current, electronic monitoring, and visual techniques is being applied to the welding process. Initial simulation and bench-top evaluation of the proposed real-time techniques on weld samples are presented, along with the concepts for applying the techniques concurrently to the weld process. Consideration of the eventual code acceptance of the methods and system is also being researched as a component of this project. The goal is to detect defects or precursors to defects and correct them when possible during the weld process.

  19. Evaluation of multivariate calibration models with different pre-processing and processing algorithms for a novel resolution and quantitation of spectrally overlapped quaternary mixture in syrup

    NASA Astrophysics Data System (ADS)

    Moustafa, Azza A.; Hegazy, Maha A.; Mohamed, Dalia; Ali, Omnia

    2016-02-01

    A novel approach for the resolution and quantitation of a severely overlapped quaternary mixture of carbinoxamine maleate (CAR), pholcodine (PHL), ephedrine hydrochloride (EPH) and sunset yellow (SUN) in syrup was demonstrated utilizing different spectrophotometry-assisted multivariate calibration methods. The applied methods used different processing and pre-processing algorithms. The proposed methods were partial least squares (PLS), concentration residuals augmented classical least squares (CRACLS), and a novel method: continuous wavelet transform coupled with partial least squares (CWT-PLS). These methods were applied to a training set in the concentration ranges of 40-100 μg/mL, 40-160 μg/mL, 100-500 μg/mL and 8-24 μg/mL for the four components, respectively. The methods required no preliminary separation step or chemical pretreatment. The validity of the methods was evaluated by an external validation set. The selectivity of the developed methods was demonstrated by analyzing the drugs in their combined pharmaceutical formulation without any interference from additives. The obtained results were statistically compared with the official and reported methods, and no significant difference was observed regarding either accuracy or precision.
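
    The shared backbone of the three chemometric methods is a PLS regression from spectra to concentrations, calibrated on a training set and checked on an external validation set. A minimal scikit-learn sketch on synthetic four-component spectra (band positions, noise level, and component count are illustrative, not the paper's data):

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(7)
    wavelengths = np.linspace(200, 400, 120)
    # Synthetic pure-component spectra (Gaussian bands) for a 4-analyte mixture.
    pure = np.stack([np.exp(-0.5 * ((wavelengths - c) / 12) ** 2)
                     for c in (240, 270, 310, 350)])

    C_train = rng.uniform(0.2, 1.0, (25, 4))            # training concentrations
    X_train = C_train @ pure + 0.01 * rng.normal(size=(25, 120))

    pls = PLSRegression(n_components=4)
    pls.fit(X_train, C_train)

    C_test = rng.uniform(0.2, 1.0, (5, 4))              # external validation set
    X_test = C_test @ pure + 0.01 * rng.normal(size=(5, 120))
    rmsep = np.sqrt(np.mean((pls.predict(X_test) - C_test) ** 2, axis=0))
    print(np.round(rmsep, 3))                           # per-component RMSEP
    ```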

  20. Time-Domain Receiver Function Deconvolution using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Moreira, L. P.

    2017-12-01

    Receiver functions (RF) are a well-known method for crust modelling using passive seismological signals. Many different techniques have been developed to calculate RF traces, applying a deconvolution calculation to the radial and vertical seismogram components. A popular method uses a spectral division of both components, which requires human intervention to apply the water-level procedure to avoid instabilities from division by small numbers. One of the most used methods is an iterative procedure that estimates the RF peaks, applies a convolution with the vertical-component seismogram, and compares the result with the radial component. This method is suitable for automatic processing; however, several RF traces are invalid due to peak estimation failure. In this work, a deconvolution algorithm using a Genetic Algorithm (GA) to estimate the RF peaks is proposed. This method is processed entirely in the time domain, avoiding time-to-frequency calculations (and vice versa), and is fully suitable for automatic processing. Estimated peaks can be used to generate RF traces in a seismogram format for visualization. The RF trace quality is similar for high-magnitude events, but there are fewer failures in RF calculation for smaller events, increasing the overall performance for stations with a high number of events.
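
    The fitness the abstract implies, convolve a candidate RF spike train with the vertical component and compare against the radial, fits in a few lines. The bare-bones real-coded GA below is a generic illustration, not the author's implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n = 128
    vertical = rng.normal(size=n) * np.exp(-np.arange(n) / 25.0)  # toy source wavelet
    rf_true = np.zeros(n)
    rf_true[[10, 40]] = [1.0, 0.4]                                # two crustal phases
    radial = np.convolve(vertical, rf_true)[:n]

    def fitness(rf):
        """Negative misfit between the radial trace and (candidate RF * vertical)."""
        return -np.sum((radial - np.convolve(vertical, rf)[:n]) ** 2)

    pop = rng.normal(scale=0.1, size=(60, n))                     # initial population
    for gen in range(300):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-20:]]                   # truncation selection
        children = (parents[rng.integers(0, 20, size=40)] +
                    parents[rng.integers(0, 20, size=40)]) / 2    # arithmetic crossover
        children += rng.normal(scale=0.02, size=children.shape)   # mutation
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(ind) for ind in pop])]
    print(f"final misfit: {-fitness(best):.3f}")   # shrinks over generations
    print(np.argsort(np.abs(best))[-2:])           # largest peaks, ideally near 40 and 10
    ```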

  1. A note on the accuracy of spectral method applied to nonlinear conservation laws

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang; Wong, Peter S.

    1994-01-01

    The Fourier spectral method can achieve exponential accuracy both at the approximation level and for solving partial differential equations if the solutions are analytic. For a linear partial differential equation with a discontinuous solution, the Fourier spectral method produces poor point-wise accuracy without post-processing, but still maintains exponential accuracy for all moments against analytic functions. In this note we assess the accuracy of the Fourier spectral method applied to nonlinear conservation laws through a numerical case study. We find that the moments with respect to analytic functions are no longer very accurate. However, the numerical solution does contain accurate information which can be extracted by a post-processing based on Gegenbauer polynomials.

  2. Digital Signal Processing Based on a Clustering Algorithm for Ir/Au TES Microcalorimeter

    NASA Astrophysics Data System (ADS)

    Zen, N.; Kunieda, Y.; Takahashi, H.; Hiramoto, K.; Nakazawa, M.; Fukuda, D.; Ukibe, M.; Ohkubo, M.

    2006-02-01

    In recent years, cryogenic microcalorimeters exploiting the superconducting transition edge have been under development for possible application to astronomical X-ray observations. To improve the energy resolution of superconducting transition edge sensors (TES), several correction methods have been developed. Among them, a clustering method based on digital signal processing has recently been proposed. In this paper, we applied the clustering method to an Ir/Au bilayer TES. This method resulted in almost a 10% improvement in energy resolution. Conversely, from the point of view of imaging X-ray spectroscopy, we applied the clustering method to pixellated Ir/Au-TES devices. We thus show how a clustering method which sorts signals by their shapes is also useful for position identification.

  3. Interference detection and correction applied to incoherent-scatter radar power spectrum measurement

    NASA Technical Reports Server (NTRS)

    Ying, W. P.; Mathews, J. D.; Rastogi, P. K.

    1986-01-01

    A median-filter-based interference detection and correction technique is evaluated, and its application to Arecibo incoherent scatter radar D-region ionospheric power spectra is discussed. The method can be extended to other kinds of data when the statistics involved in the process remain valid.

  4. Data-based hybrid tension estimation and fault diagnosis of cold rolling continuous annealing processes.

    PubMed

    Liu, Qiang; Chai, Tianyou; Wang, Hong; Qin, Si-Zhao Joe

    2011-12-01

    The continuous annealing process line (CAPL) of cold rolling is an important unit to improve the mechanical properties of steel strips in steel making. In continuous annealing processes, strip tension is an important factor, which indicates whether the line operates steadily. Abnormal tension profile distribution along the production line can lead to strip break and roll slippage. Therefore, it is essential to estimate the whole tension profile in order to prevent the occurrence of faults. However, in real annealing processes, only a limited number of strip tension sensors are installed along the machine direction. Since the effects of strip temperature, gas flow, bearing friction, strip inertia, and roll eccentricity can lead to nonlinear tension dynamics, it is difficult to apply the first-principles induced model to estimate the tension profile distribution. In this paper, a novel data-based hybrid tension estimation and fault diagnosis method is proposed to estimate the unmeasured tension between two neighboring rolls. The main model is established by an observer-based method using a limited number of measured tensions, speeds, and currents of each roll, where the tension error compensation model is designed by applying neural networks principal component regression. The corresponding tension fault diagnosis method is designed using the estimated tensions. Finally, the proposed tension estimation and fault diagnosis method was applied to a real CAPL in a steel-making company, demonstrating the effectiveness of the proposed method.

  5. Supervision of Ethylene Propylene Diene M-Class (EPDM) Rubber Vulcanization and Recovery Processes Using Attenuated Total Reflection Fourier Transform Infrared (ATR FT-IR) Spectroscopy and Multivariate Analysis.

    PubMed

    Riba Ruiz, Jordi-Roger; Canals, Trini; Cantero, Rosa

    2017-01-01

    Ethylene propylene diene monomer (EPDM) rubber is widely used in diverse applications, in the automotive, industrial, and construction sectors among others. Due to its appealing features, the consumption of vulcanized EPDM rubber is growing significantly. However, environmental issues are forcing the application of devulcanization processes to facilitate recovery, which has led rubber manufacturers to implement strict quality controls. Consequently, it is important to develop methods for supervising the vulcanizing and recovery processes of such products. This paper deals with the supervision of EPDM compounds by means of Fourier transform mid-infrared (FT-IR) spectroscopy and suitable multivariate statistical methods. An expedited and nondestructive classification approach was applied to a sufficient number of EPDM samples with different applied processes, that is, samples with and without application of vulcanizing agents, vulcanized samples, and microwave-treated samples. First, the FT-IR spectra of the samples are acquired; next, they are processed by applying suitable feature extraction methods, i.e., principal component analysis and canonical variate analysis, to obtain the latent variables used for classifying test EPDM samples. Finally, the k-nearest-neighbor algorithm was used in the classification stage. Experimental results prove the accuracy of the proposed method and the potential of FT-IR spectroscopy in this area, since the classification accuracy can be as high as 100%.

  6. Evaluation of the Technical Adequacy of Three Methods for Identifying Specific Learning Disabilities Based on Cognitive Discrepancies

    ERIC Educational Resources Information Center

    Stuebing, Karla K.; Fletcher, Jack M.; Branum-Martin, Lee; Francis, David J.

    2012-01-01

    This study used simulation techniques to evaluate the technical adequacy of three methods for the identification of specific learning disabilities via patterns of strengths and weaknesses in cognitive processing. Latent and observed data were generated and the decision-making process of each method was applied to assess concordance in…

  7. A Novel Calibration-Minimum Method for Prediction of Mole Fraction in Non-Ideal Mixture.

    PubMed

    Shibayama, Shojiro; Kaneko, Hiromasa; Funatsu, Kimito

    2017-04-01

    This article proposes a novel concentration prediction model that requires little training data and is useful for rapid process understanding. Process analytical technology is currently popular, especially in the pharmaceutical industry, for enhancement of process understanding and process control. A calibration-free method, iterative optimization technology (IOT), was proposed to predict pure-component concentrations, because calibration methods such as partial least squares require a large number of training samples, leading to high costs. However, IOT cannot be applied to concentration prediction in non-ideal mixtures because its basic equation is derived from the Beer-Lambert law, which does not hold for non-ideal mixtures. We propose a novel method that realizes prediction of pure-component concentrations in mixtures from a small number of training samples, assuming that spectral changes arising from molecular interactions can be expressed as a function of concentration. The proposed method is named IOT with virtual molecular interaction spectra (IOT-VIS) because it takes spectral change into account as a virtual spectrum x_nonlin,i. It was confirmed through two case studies that the predictive accuracy of IOT-VIS was the highest among the existing IOT methods.
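
    Classical IOT rests on Beer-Lambert linearity: a mixture spectrum is a concentration-weighted sum of pure-component spectra, so mole fractions can be recovered by constrained least squares without a calibration set. The sketch below shows only that calibration-free baseline; the pure spectra are synthetic and the paper's virtual nonlinear term x_nonlin,i is not reproduced.

    ```python
    import numpy as np

    def iot_baseline(mixture, pures):
        """Solve min ||mixture - pures^T x||^2 with x >= 0 and sum(x) = 1,
        via projected gradient steps (a simple stand-in for IOT's optimizer)."""
        x = np.full(len(pures), 1.0 / len(pures))
        A = pures.T                                     # (n_wavelengths, n_components)
        step = 1.0 / np.linalg.norm(A.T @ A, 2)         # stable step size
        for _ in range(2000):
            x -= step * (A.T @ (A @ x - mixture))       # gradient step
            x = np.clip(x, 0, None)                     # enforce non-negativity
            x /= x.sum()                                # renormalize to mole fractions
        return x

    rng = np.random.default_rng(9)
    wl = np.linspace(1000, 1800, 200)
    pures = np.stack([np.exp(-0.5 * ((wl - c) / 40) ** 2) for c in (1200, 1400, 1600)])
    x_true = np.array([0.5, 0.3, 0.2])
    mixture = pures.T @ x_true + 0.002 * rng.normal(size=200)
    print(np.round(iot_baseline(mixture, pures), 3))    # ~[0.5, 0.3, 0.2]
    ```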

  8. Method for controlling gas metal arc welding

    DOEpatents

    Smartt, H.B.; Einerson, C.J.; Watkins, A.D.

    1987-08-10

    The heat input and mass input in a Gas Metal Arc welding process are controlled by a method that comprises calculating appropriate values for weld speed, filler wire feed rate and an expected value for the welding current by algorithmic function means, applying such values for weld speed and filler wire feed rate to the welding process, measuring the welding current, comparing the measured current to the calculated current, using said comparison to calculate corrections for the weld speed and filler wire feed rate, and applying corrections. 3 figs., 1 tab.

  9. A Surrogate Technique for Investigating Deterministic Dynamics in Discrete Human Movement.

    PubMed

    Taylor, Paul G; Small, Michael; Lee, Kwee-Yum; Landeo, Raul; O'Meara, Damien M; Millett, Emma L

    2016-10-01

    Entropy is an effective tool for investigation of human movement variability. However, before applying entropy, it can be beneficial to employ analyses to confirm that observed data are not solely the result of stochastic processes. This can be achieved by contrasting observed data with that produced using surrogate methods. Unlike continuous movement, no appropriate method has been applied to discrete human movement. This article proposes a novel surrogate method for discrete movement data, outlining the processes for determining its critical values. The proposed technique reliably generated surrogates for discrete joint angle time series, destroying fine-scale dynamics of the observed signal, while maintaining macro structural characteristics. Comparison of entropy estimates indicated observed signals had greater regularity than surrogates and were not only the result of stochastic but also deterministic processes. The proposed surrogate method is both a valid and reliable technique to investigate determinism in other discrete human movement time series.
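
    The test's shape, destroy fine-scale ordering while keeping the macro trajectory and then compare entropy of the observed series against its surrogates, can be illustrated generically. The window-shuffle surrogate and sample-entropy settings below are illustrative choices, not the authors' specific algorithm or critical values.

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r=0.2):
        """SampEn(m, r): -log(A/B) with tolerance r * std(x)."""
        x = np.asarray(x, float)
        tol = r * x.std()
        def count(mm):
            templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            d = np.abs(templ[:, None, :] - templ[None, :, :]).max(-1)
            return ((d <= tol).sum() - len(templ)) / 2   # exclude self-matches
        B, A = count(m), count(m + 1)
        return -np.log(A / B) if A > 0 and B > 0 else np.inf

    def small_shuffle(x, width=5, rng=None):
        """Generic surrogate: permute samples within local windows, destroying
        fine-scale dynamics while keeping the macro trajectory shape."""
        rng = rng or np.random.default_rng()
        y = x.copy()
        for s in range(0, len(x), width):
            rng.shuffle(y[s:s + width])
        return y

    rng = np.random.default_rng(10)
    t = np.linspace(0, 4 * np.pi, 300)
    signal = np.sin(t) + 0.05 * rng.normal(size=300)     # partly deterministic series
    surr = [small_shuffle(signal, rng=rng) for _ in range(19)]
    se_obs = sample_entropy(signal)
    se_surr = [sample_entropy(s) for s in surr]
    print(se_obs < min(se_surr))   # expected True: observed series is more regular
    ```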

  10. Scientist's Idealism Vs. User's Realism for Orthorectification of Full Radarsat-2/Compact RCM Polarimetric Data with DSM

    NASA Astrophysics Data System (ADS)

    Toutin, Thierry; Wang, Huili; Charbonneau, Francois; Schmitt, Carla

    2013-08-01

    This paper presents two methods for the orthorectification of full/compact polarimetric SAR data: the polarimetric processing is performed either in the image space (scientist's idealism) or in the ground space (user's realism), that is, before or after the geometric processing, respectively. Radarsat-2 (R2) fine quad-pol and simulated very high-resolution RCM data acquired with different look angles over a hilly study site were processed using an accurate lidar digital surface model. Quantitative evaluations between the two methods as a function of different geometric and radiometric parameters were performed to evaluate the impact of the orthorectification. The results demonstrated that the ground-space method can be safely applied to polarimetric R2 SAR data, with the exception of steep look angles combined with steep terrain slopes. On the other hand, the ground-space method cannot be applied to simulated compact RCM data, due to the 17 dB noise floor and oversampling.

  11. Applying simulation model to uniform field space charge distribution measurements by the PEA method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y.; Salama, M.M.A.

    1996-12-31

    Signals measured under uniform fields by the Pulsed Electroacoustic (PEA) method have been processed by a deconvolution procedure to obtain space charge distributions since 1988. To simplify data processing, a direct method has recently been proposed in which the deconvolution is eliminated. However, surface charge cannot be represented well by this method, because surface charge has a bandwidth extending from zero to infinity: the bandwidth of the charge distribution must be much narrower than the bandwidth of the PEA system transfer function in order to apply the direct method properly. When surface charges cannot be distinguished from space charge distributions, the accuracy and resolution of the obtained space charge distributions decrease. To overcome this difficulty, a simulation model is proposed. This paper shows the authors' attempts to apply the simulation model to obtain space charge distributions under plane-plane electrode configurations. Due to the page limitation of the paper, the charge distribution originated by the simulation model is compared to that obtained by the direct method with a set of simulated signals.

  12. 32 CFR 37.1220 - Applied research.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 1 2014-07-01 2014-07-01 false Applied research. 37.1220 Section 37.1220... REGULATIONS TECHNOLOGY INVESTMENT AGREEMENTS Definitions of Terms Used in This Part § 37.1220 Applied research... technology such as new materials, devices, methods and processes. It typically is funded in Research...

  13. 32 CFR 37.1220 - Applied research.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 1 2010-07-01 2010-07-01 false Applied research. 37.1220 Section 37.1220... REGULATIONS TECHNOLOGY INVESTMENT AGREEMENTS Definitions of Terms Used in This Part § 37.1220 Applied research... technology such as new materials, devices, methods and processes. It typically is funded in Research...

  14. 32 CFR 37.1220 - Applied research.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 1 2013-07-01 2013-07-01 false Applied research. 37.1220 Section 37.1220... REGULATIONS TECHNOLOGY INVESTMENT AGREEMENTS Definitions of Terms Used in This Part § 37.1220 Applied research... technology such as new materials, devices, methods and processes. It typically is funded in Research...

  15. 32 CFR 37.1220 - Applied research.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 1 2012-07-01 2012-07-01 false Applied research. 37.1220 Section 37.1220... REGULATIONS TECHNOLOGY INVESTMENT AGREEMENTS Definitions of Terms Used in This Part § 37.1220 Applied research... technology such as new materials, devices, methods and processes. It typically is funded in Research...

  16. 32 CFR 37.1220 - Applied research.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 1 2011-07-01 2011-07-01 false Applied research. 37.1220 Section 37.1220... REGULATIONS TECHNOLOGY INVESTMENT AGREEMENTS Definitions of Terms Used in This Part § 37.1220 Applied research... technology such as new materials, devices, methods and processes. It typically is funded in Research...

  17. 40 CFR 408.11 - Specialized definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... STANDARDS CANNED AND PRESERVED SEAFOOD PROCESSING POINT SOURCE CATEGORY Farm-Raised Catfish Processing... apply to this subpart. (b) The term oil and grease shall mean those components of a waste water amenable to measurement by the method described in Methods for Chemical Analysis of Water and Wastes, 1971...

  18. Lean manufacturing and Toyota Production System terminology applied to the procurement of vascular stents in interventional radiology.

    PubMed

    de Bucourt, Maximilian; Busse, Reinhard; Güttler, Felix; Wintzer, Christian; Collettini, Federico; Kloeters, Christian; Hamm, Bernd; Teichgräber, Ulf K

    2011-08-01

    OBJECTIVES: To apply the economic terminology of lean manufacturing and the Toyota Production System to the procurement of vascular stents in interventional radiology. METHODS: The economic- and process-driven terminology of lean manufacturing and the Toyota Production System is first presented, including information and product flow as well as value stream mapping (VSM), and then applied to an interdisciplinary setting of physicians, nurses and technicians from different medical departments to identify wastes in the process of endovascular stent procurement in interventional radiology. RESULTS: Using the so-called seven wastes approach of the Toyota Production System (waste of overproducing, waiting, transport, processing, inventory, motion and waste of defects and spoilage) as well as further waste characteristics (gross waste, process and method waste, and micro waste), wastes in the process of endovascular stent procurement in interventional radiology were identified and eliminated to create an overall smoother process from the procurement as well as from the medical perspective. CONCLUSION: Economic terminology of lean manufacturing and the Toyota Production System, especially VSM, can be used to visualise and better understand processes in the procurement of vascular stents in interventional radiology from an economic point of view.

  19. Fingerprint pattern restoration by digital image processing techniques.

    PubMed

    Wen, Che-Yen; Yu, Chiu-Chung

    2003-09-01

    Fingerprint evidence plays an important role in solving criminal problems. However, defective (lacking information needed for completeness) or contaminated (undesirable information included) fingerprint patterns make the identifying and recognizing processes difficult. Unfortunately, this is the usual case. In the recognizing process (enhancement of patterns, or elimination of "false alarms," so that a fingerprint pattern can be searched in the Automated Fingerprint Identification System (AFIS)), chemical and physical techniques have been proposed to improve pattern legibility. In the identifying process, a fingerprint examiner can enhance contaminated (but not defective) fingerprint patterns under guidelines provided by the Scientific Working Group on Friction Ridge Analysis, Study and Technology (SWGFAST), the Scientific Working Group on Imaging Technology (SWGIT), and an AFIS working group within the National Institute of Justice. Recently, image processing techniques have been successfully applied in forensic science; for example, we have applied image enhancement methods to improve the legibility of digital images such as fingerprints and vehicle plate numbers. In this paper, we propose a novel digital image restoration technique based on the AM (amplitude modulation)-FM (frequency modulation) reaction-diffusion method to restore defective or contaminated fingerprint patterns. This method shows its potential application to fingerprint pattern enhancement in the recognizing process (but not the identifying process). Synthetic and real images are used to show the capability of the proposed method, and the results of enhancing fingerprint patterns by the manual process and by our method are evaluated and compared.

  20. Speckle noise removal applied to ultrasound image of carotid artery based on total least squares model.

    PubMed

    Yang, Lei; Lu, Jun; Dai, Ming; Ren, Li-Jie; Liu, Wei-Zong; Li, Zhen-Zhou; Gong, Xue-Hao

    2016-10-06

    A speckle noise removal method for ultrasound images based on a total least squares model is proposed and applied to images of cardiovascular structures such as the carotid artery. Starting from the least squares principle, a total least squares model is established for the cardiac ultrasound despeckling task; orthogonal projection transformation is applied to the model output, realizing the denoising of the speckled image. Experimental results show that the improved algorithm greatly improves the resolution of the image and meets the needs of clinical diagnosis and treatment of the cardiovascular system of the head and neck. Furthermore, the success in imaging carotid arteries has strong implications for neurological complications such as stroke.
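
    The core total least squares solve underlying such a model can be sketched generically: for an overdetermined system A x ≈ b with errors in both A and b, the TLS solution comes from the SVD of the augmented matrix [A | b]. A textbook sketch of that step (not the authors' full despeckling pipeline):

```python
import numpy as np

def total_least_squares(A, b):
    """Classical TLS solution of A x ~ b via the SVD of [A | b];
    the solution lies in the right singular vector belonging to
    the smallest singular value."""
    n = A.shape[1]
    Z = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(Z)
    v = Vt[-1]                # right singular vector, smallest sigma
    return -v[:n] / v[n]      # normalize so the b-coefficient is -1
```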

  1. Compressive sensing method for recognizing cat-eye effect targets.

    PubMed

    Li, Li; Li, Hui; Dang, Ersheng; Liu, Bo

    2013-10-01

    This paper proposes a cat-eye effect target recognition method based on compressive sensing (CS): sample processing before reconstruction based on compressed sensing, or SPCS, for image processing. In this method, linear projections of the original image sequences are applied to remove dynamic background distractions and extract cat-eye effect targets. Furthermore, the corresponding imaging mechanism for acquiring active and passive image sequences is put forward. The method uses fewer images to recognize cat-eye effect targets, reduces data storage, and translates traditional target identification, based on original image processing, into the processing of measurement vectors. The experimental results show that the SPCS method is feasible and superior to the shape-frequency dual criteria method.
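
    The recognition here operates on measurement vectors before any reconstruction; for contrast, the reconstruction side of a CS system is commonly handled by a greedy solver. A generic orthogonal matching pursuit sketch (a stand-in illustration, not the SPCS method itself; all names are hypothetical):

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from
    compressive measurements y = Phi @ x."""
    residual, support = y.astype(float).copy(), []
    x = np.zeros(Phi.shape[1])
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x[support] = coef
    return x
```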

  2. Micro Dot Patterning on the Light Guide Panel Using Powder Blasting

    PubMed Central

    Jang, Ho Su; Cho, Myeong Woo; Park, Dong Sam

    2008-01-01

    This study is to develop a micromachining technology for a light guide panel (LGP) mold, whereby micro dot patterns are formed on a LGP surface by a single injection process instead of existing screen printing processes. The micro powder blasting technique is applied to form micro dot patterns on the LGP mold surface. The optimal conditions for masking, laminating, exposure, and developing processes to form the micro dot patterns are first experimentally investigated. A LGP mold with masked micro patterns is then machined using the micro powder blasting method and the machinability of the micro dot patterns is verified. A prototype LGP is test-injected using the developed LGP mold and a shape analysis of the patterns and performance testing of the injected LGP are carried out. As an additional approach, matte finishing, a special surface treatment method, is applied to the mold surface to improve the light diffusion characteristics, uniformity and brightness of the LGP. The results of this study show that the applied powder blasting method can be successfully used to manufacture LGPs with micro patterns by just a single injection using the developed mold and thereby replace existing screen printing methods. PMID:27879740

  3. 75 FR 28294 - Notice Pursuant to the National Cooperative Research and Production Act of 1993-The Applied...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-20

    ... Production Act of 1993--The Applied Nanotechnology Consortium Notice is hereby given that, on March 26, 2010... seq. (``the Act''), The Applied Nanotechnology Consortium (``TANC'') has filed written notifications... to nanotechnology: (a) Nanoparticle Production Methods/Processing of Nano Composites; (b) Laser...

  4. Applying Mixed Methods Techniques in Strategic Planning

    ERIC Educational Resources Information Center

    Voorhees, Richard A.

    2008-01-01

    In its most basic form, strategic planning is a process of anticipating change, identifying new opportunities, and executing strategy. The use of mixed methods, blending quantitative and qualitative analytical techniques and data, in the process of assembling a strategic plan can help to ensure a successful outcome. In this article, the author…

  5. Laser beam heat method reported

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Hachiro; Goto, Hidekazu

    1988-07-01

    An outline of research on a processing method utilizing laser-induced thermochemistry was presented, with the CO2 laser processing of ceramics in CF4 gas used as a practical example. It has become clear that laser processing of ceramics with high efficiency and high precision will be possible by utilizing thermochemical processes, but the present method is not believed to be the best one, and it is not yet clear that it can be applied to commercial processing. The processing characteristics of this method are thought to change greatly with the combination of atmospheric gas and material, so it is important to conduct tests on various combinations. Improvement and development should become possible by theoretically confirming the basic process, especially the thermochemical process between the solid surface and the atmospheric gas molecules. In practice, the thermochemical process on the solid surface is quite complicated; for example, it was confirmed that when thermochemically processing a Si monocrystal in CF4 gas, the processing speed changed by at least a factor of 10 with changes in the gas pressure and the density of mixed O2 gas. Conversely, the very fact that this method is complicated, with many unexplained points and room for research, conceals the possibility of its application to various fields; in this sense, the quantitative confirmation of its basic process is an important problem to be solved in the future.

  6. Detection of wood failure by image processing method: influence of algorithm, adhesive and wood species

    Treesearch

    Lanying Lin; Sheng He; Feng Fu; Xiping Wang

    2015-01-01

    Wood failure percentage (WFP) is an important index for evaluating the bond strength of plywood. Currently, the method used for detecting WFP is visual inspection, which lacks efficiency. In order to improve it, image processing methods are applied to wood failure detection. The present study used thresholding and K-means clustering algorithms in wood failure detection...
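
    As a rough sketch of the kind of computation described, wood failure percentage can be estimated by clustering pixel intensities into two classes and taking the area fraction of the wood class. This minimal 2-means example assumes wood fibers image brighter than adhesive; the study's actual class definitions and pre-processing are not reproduced:

```python
import numpy as np

def wood_failure_percentage(gray):
    """Estimate WFP from a grayscale image of the fractured bond
    surface via a simple 2-cluster K-means on pixel intensity."""
    pix = gray.ravel().astype(float)
    centers = np.array([pix.min(), pix.max()])       # spread initialization
    for _ in range(20):
        labels = np.abs(pix[:, None] - centers[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = pix[labels == k].mean()
    # assume the brighter cluster is exposed wood fiber
    return 100.0 * np.mean(labels == np.argmax(centers))
```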

  7. A method for the analysis of nonlinearities in aircraft dynamic response to atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Sidwell, K.

    1976-01-01

    An analytical method is developed which combines the equivalent linearization technique for the analysis of the response of nonlinear dynamic systems with the amplitude modulated random process (Press model) for atmospheric turbulence. The method is initially applied to a bilinear spring system. The analysis of the response shows good agreement with exact results obtained by the Fokker-Planck equation. The method is then applied to an example of control-surface displacement limiting in an aircraft with a pitch-hold autopilot.
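
    For context, equivalent linearization replaces the nonlinear restoring force f(x) with a linear one whose stiffness minimizes the mean-square error; in its standard textbook form (stated here for orientation, not quoted from the report):

```latex
k_{\mathrm{eq}} \;=\; \arg\min_{k}\, E\!\left[\left(f(x) - k\,x\right)^{2}\right]
\;=\; \frac{E\!\left[x\,f(x)\right]}{E\!\left[x^{2}\right]}
```

    with the expectations evaluated under the response distribution, which in the Press model is an amplitude-modulated Gaussian.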

  8. Collaborative simulation method with spatiotemporal synchronization process control

    NASA Astrophysics Data System (ADS)

    Zou, Yisheng; Ding, Guofu; Zhang, Weihua; Zhang, Jian; Qin, Shengfeng; Tan, John Kian

    2016-10-01

    When designing a complex mechatronic system, such as a high speed train, it is relatively difficult to simulate the entire system's dynamic behaviors effectively because it involves multiple disciplinary subsystems. Currently, the most practical approach to multi-disciplinary simulation is the interface-based coupling simulation method, but it faces a twofold challenge: spatial and temporal desynchronization across the multi-directional coupled simulation of subsystems. A new collaborative simulation method with spatiotemporal synchronization process control is proposed for coupling the simulation of a given complex mechatronic system across multiple subsystems on different platforms. The method consists of 1) a coupler-based coupling mechanism that defines the interfacing and interaction among subsystems, and 2) a simulation process control algorithm that realizes the coupled simulation in a spatiotemporally synchronized manner. The test results from a case study show that the proposed method 1) can be used to simulate subsystem interactions under different simulation conditions in an engineering system, and 2) effectively supports multi-directional coupled simulation among multi-disciplinary subsystems. The method has been successfully applied in the design and development of China's high speed trains, demonstrating that it can be applied to a wide range of engineering system design and simulation tasks with improved efficiency and effectiveness.

  9. Micro Dot Patterning on the Light Guide Panel Using Powder Blasting.

    PubMed

    Jang, Ho Su; Cho, Myeong Woo; Park, Dong Sam

    2008-02-08

    This study is to develop a micromachining technology for a light guide panel (LGP) mold, whereby micro dot patterns are formed on a LGP surface by a single injection process instead of existing screen printing processes. The micro powder blasting technique is applied to form micro dot patterns on the LGP mold surface. The optimal conditions for masking, laminating, exposure, and developing processes to form the micro dot patterns are first experimentally investigated. A LGP mold with masked micro patterns is then machined using the micro powder blasting method and the machinability of the micro dot patterns is verified. A prototype LGP is test-injected using the developed LGP mold and a shape analysis of the patterns and performance testing of the injected LGP are carried out. As an additional approach, matte finishing, a special surface treatment method, is applied to the mold surface to improve the light diffusion characteristics, uniformity and brightness of the LGP. The results of this study show that the applied powder blasting method can be successfully used to manufacture LGPs with micro patterns by just a single injection using the developed mold and thereby replace existing screen printing methods.

  10. Process Mining-Based Method of Designing and Optimizing the Layouts of Emergency Departments in Hospitals.

    PubMed

    Rismanchian, Farhood; Lee, Young Hoon

    2017-07-01

    This article proposes an approach to help designers analyze complex care processes and identify the optimal layout of an emergency department (ED) considering several objectives simultaneously. These objectives include minimizing the distances traveled by patients, maximizing design preferences, and minimizing the relocation costs. Rising demand for healthcare services leads to increasing demand for new hospital buildings as well as renovating existing ones. Operations management techniques have been successfully applied in both manufacturing and service industries to design more efficient layouts. However, high complexity of healthcare processes makes it challenging to apply these techniques in healthcare environments. Process mining techniques were applied to address the problem of complexity and to enhance healthcare process analysis. Process-related information, such as information about the clinical pathways, was extracted from the information system of an ED. A goal programming approach was then employed to find a single layout that would simultaneously satisfy several objectives. The layout identified using the proposed method improved the distances traveled by noncritical and critical patients by 42.2% and 47.6%, respectively, and minimized the relocation costs. This study has shown that an efficient placement of the clinical units yields remarkable improvements in the distances traveled by patients.

  11. Preferred skin color enhancement for photographic color reproduction

    NASA Astrophysics Data System (ADS)

    Zeng, Huanzhao; Luo, Ronnier

    2011-01-01

    Skin tones are the most important colors among the memory color categories, and reproducing skin colors pleasingly is an important factor in photographic color reproduction. Moving skin colors toward their preferred skin color center improves the color preference of skin color reproduction, and several methods that morph skin colors into a smaller preferred skin color region have been reported in the past. In this paper, a new approach is proposed to further improve the result of skin color enhancement. An ellipsoid skin color model is applied to compute skin color probabilities for skin color detection and to determine a weight for skin color adjustment. Preferred skin color centers determined through psychophysical experiments were applied for color adjustment, with separate centers for dark, medium, and light skin colors so that each is adjusted differently; skin colors are morphed toward their preferred color centers. A special processing step is applied to avoid contrast loss in highlights, and a 3-D interpolation method is applied to fix a potential contouring problem and to improve color processing efficiency. A psychophysical experiment validated that the preferred skin color enhancement method effectively identifies skin colors, improves skin color preference, and does not objectionably affect preferred skin colors in original images.
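
    The adjustment described can be pictured as a probability-weighted pull toward a preferred center. A minimal sketch under assumed parameters: the ellipsoid model here is a generic Mahalanobis-distance Gaussian, and the paper's color space, model coefficients, highlight handling and 3-D interpolation are omitted:

```python
import numpy as np

def enhance_skin(pixels, center, cov, preferred, strength=0.5):
    """Morph colors toward a preferred skin center, weighted by an
    ellipsoid skin-probability model (illustrative parameters only).
    pixels: (N, 3) colors; center/cov: skin ellipsoid; preferred: target."""
    d = pixels - center
    inv_cov = np.linalg.inv(cov)
    m2 = np.einsum('ij,jk,ik->i', d, inv_cov, d)   # squared Mahalanobis distance
    w = np.exp(-0.5 * m2)                          # skin probability in [0, 1]
    # shift each pixel toward the preferred center, proportionally to w
    return pixels + strength * w[:, None] * (preferred - pixels)
```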

  12. Enhancing healthcare process design with human factors engineering and reliability science, part 2: applying the knowledge to clinical documentation systems.

    PubMed

    Boston-Fleischhauer, Carol

    2008-02-01

    The demand to redesign healthcare processes that achieve efficient, effective, and safe results is never-ending. Part 1 of this 2-part series introduced human factors engineering and reliability science as important knowledge to enhance existing operational and clinical process design methods in healthcare organizations. In part 2, the author applies this knowledge to one of the most common operational processes in healthcare: clinical documentation. Specific implementation strategies and anticipated results are discussed, along with organizational challenges and recommended executive responses.

  13. Disintegration impact on sludge digestion process.

    PubMed

    Dauknys, Regimantas; Rimeika, Mindaugas; Jankeliūnaitė, Eglė; Mažeikienė, Aušra

    2016-11-01

    Anaerobic sludge digestion is a widely used method for sludge stabilization in wastewater treatment plants. This process can be improved by applying sludge disintegration methods. As sludge disintegration is not yet sufficiently investigated, an analysis of how the application of thermal hydrolysis affects the sludge digestion process was conducted based on full-scale data. The results showed that the maximum volatile suspended solids (VSS) destruction reached 65% regardless of whether thermal hydrolysis was applied, while the average VSS destruction increased by 14% when thermal hydrolysis was applied. To obtain maximum VSS reduction and biogas production, it is recommended to keep the VSS loading at the defined maximum of 5.7 kg VSS/m³/d when thermal hydrolysis is applied, and between 2.1-2.4 kg VSS/m³/d when sludge disintegration is not applied. Thermal hydrolysis thus allows approximately 2.5 times higher VSS loading than operation without disintegration, so a digester of 1.8 times smaller volume is required.

  14. Applying Quality Management Process-Improvement Principles to Learning in Reading Courses: An Improved Learning and Retention Method.

    ERIC Educational Resources Information Center

    Hahn, William G.; Bart, Barbara D.

    2003-01-01

    Business students were taught a total quality management-based outlining process for course readings and a tally method to measure learning efficiency. Comparison of 233 who used the process and 99 who did not showed that the group means of users' test scores were 12.4 points higher than those of nonusers. (Contains 25 references.) (SK)

  15. The Taguchi Method Application to Improve the Quality of a Sustainable Process

    NASA Astrophysics Data System (ADS)

    Titu, A. M.; Sandu, A. V.; Pop, A. B.; Titu, S.; Ciungu, T. C.

    2018-06-01

    Taguchi’s method has long been used to improve the quality of the processes and products under analysis. This research addresses an unusual case: the modeling of technical parameters in a process intended to be sustainable, improving process quality and assuring it by means of an experimental research method. Modern experimental techniques can be applied in any field, and this study reflects the benefits of the interaction between agricultural sustainability principles and the application of Taguchi’s method. The experimental approach used in this practical study combines engineering techniques with statistical experimental modeling to achieve a rapid reduction of quality costs, in effect seeking optimization of the existing processes and their main technical parameters. The paper is a purely technical study promoting an experiment based on the Taguchi method, considered effective since it rapidly achieves 70 to 90% of the desired optimization of the technical parameters; the missing 10 to 30% can be obtained with one or two complementary experiments, limited to the 2 to 4 technical parameters considered most influential. Applying Taguchi’s method, in engineering and beyond, allowed the simultaneous study, within the same experiment, of the influence factors considered most important in different combinations, while at the same time determining each factor's contribution.

  16. Two pass method and radiation interchange processing when applied to thermal-structural analysis of large space truss structures

    NASA Technical Reports Server (NTRS)

    Warren, Andrew H.; Arelt, Joseph E.; Lalicata, Anthony L.; Rogers, Karen M.

    1993-01-01

    A method of efficient and automated thermal-structural processing of very large space structures is presented. The method interfaces the finite element and finite difference techniques. It also results in a pronounced reduction of the quantity of computations, computer resources and manpower required for the task, while assuring the desired accuracy of the results.

  17. A Method for Identifying Contours in Processing Digital Images from Computer Tomograph

    NASA Astrophysics Data System (ADS)

    Roşu, Şerban; Pater, Flavius; Costea, Dan; Munteanu, Mihnea; Roşu, Doina; Fratila, Mihaela

    2011-09-01

    The first step in digital processing of two-dimensional computed tomography images is to identify the contour of component elements. This paper deals with the collective work of specialists in medicine and applied mathematics in computer science on elaborating new algorithms and methods in medical 2D and 3D imagery.

  18. The Data-to-Action Framework: A Rapid Program Improvement Process

    ERIC Educational Resources Information Center

    Zakocs, Ronda; Hill, Jessica A.; Brown, Pamela; Wheaton, Jocelyn; Freire, Kimberley E.

    2015-01-01

    Although health education programs may benefit from quality improvement methods, scant resources exist to help practitioners apply these methods for program improvement. The purpose of this article is to describe the Data-to-Action framework, a process that guides practitioners through rapid-feedback cycles in order to generate actionable data to…

  19. Quantitative Infrared Spectroscopy in Challenging Environments: Applications to Passive Remote Sensing and Process Monitoring

    DTIC Science & Technology

    2012-12-01

    IR remote sensing offers a measurement method to detect gaseous species in the outdoor environment. Two major obstacles limit the application of this... method in quantitative analysis: (1) the effect of both temperature and concentration on the measured spectral intensities and (2) the difficulty and... crucial. In this research, particle swarm optimization, a population-based optimization method, was applied. Digital filtering and wavelet processing methods

  20. An application of business process method to the clinical efficiency of hospital.

    PubMed

    Leu, Jun-Der; Huang, Yu-Tsung

    2011-06-01

    The concept of Total Quality Management (TQM) has come to be applied in healthcare over the last few years. The process management category in the Baldrige Health Care Criteria for Performance Excellence model is designed to evaluate the quality of medical services. However, a systematic approach for implementation support is necessary to achieve excellence in the healthcare business process. The Architecture of Integrated Information Systems (ARIS) is a business process architecture developed by IDS Scheer AG that has been applied in a variety of industrial applications. It starts with a business strategy to identify the core and support processes, and encompasses the whole life-cycle range, from business process design to information system deployment, which is compatible with the concept of the healthcare performance excellence criteria. In this research, we apply the basic ARIS framework to optimize the clinical processes of an emergency department in a mid-size hospital with 300 clinical beds, considering the characteristics of the healthcare organization. Implementation of the case is described, and 16 months of clinical data are then collected and used to study the performance and feasibility of the method. The experience gleaned in this case study can be used as a reference for mid-size hospitals with similar business models.

  1. On the residual stress modeling of shot-peened AISI 4340 steel: finite element and response surface methods

    NASA Astrophysics Data System (ADS)

    Asgari, Ali; Dehestani, Pouya; Poruraminaie, Iman

    2018-02-01

    Shot peening is a well-known process for inducing residual stress in the surface of industrial parts; the induced residual stress improves fatigue life. In this study, the effects of shot peening parameters such as shot diameter, shot speed, friction coefficient, and the number of impacts on the applied residual stress are evaluated. To assess the effect of these parameters, the shot peening process was first simulated by the finite element method; the effects of the process parameters on the residual stress were then evaluated by the response surface method as a statistical approach. Finally, a robust model is presented to predict the maximum residual stress induced by the shot peening process in AISI 4340 steel, and the optimum parameters for maximum residual stress are obtained. The results indicate that the effect of shot diameter on the induced residual stress increases with shot speed, and that increasing the friction coefficient does not always increase the residual stress.
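
    The response-surface step amounts to fitting a second-order polynomial in the process factors to the simulated residual stresses. A generic least-squares sketch of that fit; factor values and responses would come from the finite element runs, and none of the paper's data are reproduced:

```python
import numpy as np

def fit_quadratic_rsm(X, y):
    """Fit a second-order response surface
    y ~ b0 + sum(bi*xi) + sum(bij*xi*xj) by ordinary least squares.
    X: (n_runs, n_factors) factor settings; y: (n_runs,) responses."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]                      # linear terms
    cols += [X[:, i] * X[:, j]                               # square/interaction
             for i in range(k) for j in range(i, k)]
    A = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta, A @ beta        # coefficients and fitted responses
```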

  2. Image processing pipeline for segmentation and material classification based on multispectral high dynamic range polarimetric images.

    PubMed

    Martínez-Domingo, Miguel Ángel; Valero, Eva M; Hernández-Andrés, Javier; Tominaga, Shoji; Horiuchi, Takahiko; Hirai, Keita

    2017-11-27

    We propose a method for the capture of high dynamic range (HDR), multispectral (MS), polarimetric (Pol) images of indoor scenes using a liquid crystal tunable filter (LCTF). We have included the adaptive exposure estimation (AEE) method to fully automate the capturing process. We also propose a pre-processing method which can be applied for the registration of HDR images after they are already built as the result of combining different low dynamic range (LDR) images. This method is applied to ensure a correct alignment of the different polarization HDR images for each spectral band. We have focused our efforts on two main applications: object segmentation and classification into metal and dielectric classes. We have simplified the segmentation using mean shift combined with cluster averaging and region merging techniques, and we compare the performance of our segmentation with that of the Ncut and Watershed methods. For the classification task, we propose to use information not only in the highlight regions but also in their surrounding area, extracted from the degree of linear polarization (DoLP) maps. We present experimental results which show that the proposed image processing pipeline outperforms previous techniques developed specifically for MSHDRPol image cubes.
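
    The DoLP maps used for classification derive from the linear Stokes components. A minimal per-pixel sketch, assuming a four-polarizer-angle acquisition (the paper's LCTF capture differs in detail):

```python
import numpy as np

def dolp(i0, i45, i90, i135):
    """Degree of linear polarization from four polarizer-angle images
    (per pixel), via the linear Stokes components."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal/vertical difference
    s2 = i45 - i135                      # diagonal difference
    return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
```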

  3. Real-Time Parameter Estimation Method Applied to a MIMO Process and its Comparison with an Offline Identification Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaplanoglu, Erkan; Safak, Koray K.; Varol, H. Selcuk

    2009-01-12

    An experiment-based method is proposed for parameter estimation of a class of linear multivariable systems. The method was applied to a pressure-level control process. Experimental time-domain input/output data were utilized in a gray-box modeling approach; prior knowledge of the form of the system transfer function matrix elements is assumed. Continuous-time system transfer function matrix parameters were estimated in real time by the least-squares method. Simulation results of the experimentally determined system transfer function matrix compare very well with the experimental results. For comparison, and as an alternative to the proposed real-time estimation method, we also implemented an offline identification method using artificial neural networks and obtained fairly good results. The proposed methods can be implemented conveniently on a desktop PC equipped with a data acquisition board for parameter estimation of moderately complex linear multivariable systems.
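
    Real-time least-squares estimation of this kind is commonly implemented recursively. A textbook recursive least squares sketch for y = phi^T theta + noise; the paper's transfer-function parameterization and data are not reproduced, and the initial covariance and forgetting factor are illustrative:

```python
import numpy as np

class RecursiveLeastSquares:
    """Textbook RLS estimator for theta in y = phi @ theta + noise."""
    def __init__(self, n, lam=0.99):
        self.theta = np.zeros(n)
        self.P = 1e4 * np.eye(n)   # large initial covariance
        self.lam = lam             # forgetting factor

    def update(self, phi, y):
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)          # gain vector
        self.theta += k * (y - phi @ self.theta)    # innovation update
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta
```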

  4. Optimization of rotor shaft shrink fit method for motor using "Robust design"

    NASA Astrophysics Data System (ADS)

    Toma, Eiji

    2018-01-01

    This research is a collaborative investigation with a general-purpose motor manufacturer. To review the construction method used in the production process, we applied the parameter design method of quality engineering and sought to optimize the construction method. Conventionally, a press-fitting method has been adopted in the process of fitting the rotor core and shaft, the main components of the motor, but quality defects such as core-shaft deflection occurred at the time of press fitting. In this research, the optimization design of a "shrink fitting method by high-frequency induction heating," devised as a new construction method, showed the method to be feasible and made it possible to extract the optimum processing conditions.

  5. Bayesian non-parametric inference for stochastic epidemic models using Gaussian Processes.

    PubMed

    Xu, Xiaoguang; Kypraios, Theodore; O'Neill, Philip D

    2016-10-01

    This paper considers novel Bayesian non-parametric methods for stochastic epidemic models. Many standard modeling and data analysis methods use underlying assumptions (e.g. concerning the rate at which new cases of disease will occur) which are rarely challenged or tested in practice. To relax these assumptions, we develop a Bayesian non-parametric approach using Gaussian Processes, specifically to estimate the infection process. The methods are illustrated with both simulated and real data sets, the former illustrating that the methods can recover the true infection process quite well in practice, and the latter illustrating that the methods can be successfully applied in different settings. © The Author 2016. Published by Oxford University Press.

  6. Taguchi Method Applied in Optimization of Shipley SJR 5740 Positive Resist Deposition

    NASA Technical Reports Server (NTRS)

    Hui, A.; Blosiu, J. O.; Wiberg, D. V.

    1998-01-01

    Taguchi Methods of Robust Design present a way to optimize output process performance through an organized set of experiments using orthogonal arrays. Analysis of variance and the signal-to-noise ratio are used to evaluate the contribution of each of the controllable process parameters to the realization of the process optimization. In the photoresist deposition process, there are numerous controllable parameters that can affect the surface quality and thickness of the final photoresist layer.
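
    The signal-to-noise ratio mentioned is computed per orthogonal-array run from replicate responses. A minimal sketch of the larger-is-better form; the run data are hypothetical, and for a thickness target the nominal-is-best form 10*log10(ybar^2/s^2) would be used instead:

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi S/N ratio (larger-is-better) for the replicate
    responses y of one orthogonal-array run."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical replicates for two levels of one control factor;
# the level with the higher mean S/N would be preferred.
runs = {"A1": [98.2, 97.5], "A2": [99.1, 98.8]}
print({level: round(sn_larger_is_better(v), 2) for level, v in runs.items()})
```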

  7. High-throughput screening of high Monascus pigment-producing strain based on digital image processing.

    PubMed

    Xia, Meng-lei; Wang, Lan; Yang, Zhi-xia; Chen, Hong-zhang

    2016-04-01

    This work proposes a new method applying image processing and a support vector machine (SVM) to the screening of mold strains. Taking Monascus as an example, morphological characteristics of Monascus colonies were quantified by image processing, and the association between these characteristics and pigment production capability was determined by SVM. On this basis, a highly automated screening strategy was achieved. The accuracy of the proposed strategy is 80.6%, comparable to the existing methods (81.1% for microplate and 85.4% for flask). Meanwhile, the screening of 500 colonies takes only 20-30 min, the highest rate among all published results. By applying this automated method, 13 strains with high predicted production were obtained; the best one produced 2.8 times the pigment (226 U/mL) and 1.9 times the lovastatin (51 mg/L) of the parent strain. The current study provides an effective and promising method for strain improvement.
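
    The screening step reduces to training a classifier on colony-level image features. A minimal scikit-learn sketch with stand-in data; the real feature set, labels, and kernel settings come from the paper's pipeline and are not reproduced:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical colony-feature table: rows = colonies, columns =
# morphology features quantified by image processing (area, color, ...).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # 1 = high pigment producer

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X[:150], y[:150])                       # train on first 150 colonies
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```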

  8. Room acoustics analysis using circular arrays: an experimental study based on sound field plane-wave decomposition.

    PubMed

    Torres, Ana M; Lopez, Jose J; Pueo, Basilio; Cobos, Maximo

    2013-04-01

    Plane-wave decomposition (PWD) methods using microphone arrays have been shown to be a very useful tool within the applied acoustics community for their multiple applications in room acoustics analysis and synthesis. While many theoretical aspects of PWD have been previously addressed in the literature, the practical advantages of the PWD method to assess the acoustic behavior of real rooms have been barely explored so far. In this paper, the PWD method is employed to analyze the sound field inside a selected set of real rooms having a well-defined purpose. To this end, a circular microphone array is used to capture and process a number of impulse responses at different spatial positions, providing angle-dependent data for both direct and reflected wavefronts. The detection of reflected plane waves is performed by means of image processing techniques applied over the raw array response data and over the PWD data, showing the usefulness of image-processing-based methods for room acoustics analysis.
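
    A basic frequency-domain realization of plane-wave decomposition for a circular array is delay-and-sum steering over candidate arrival directions. A sketch under free-field assumptions (the paper's exact PWD formulation may differ; array geometry and spectra names are placeholders):

```python
import numpy as np

def pwd_map(X, freqs, mic_angles, radius, c=343.0, n_dirs=360):
    """Steered-response plane-wave decomposition for a circular array.
    X: (M, F) mic spectra; freqs: (F,) bin frequencies in Hz;
    mic_angles: (M,) mic positions in radians; radius in meters.
    Returns (n_dirs,) broadband power over candidate directions."""
    thetas = np.linspace(0, 2 * np.pi, n_dirs, endpoint=False)
    # a plane wave from direction theta reaches mic m with relative delay
    # tau = -(radius / c) * cos(theta - phi_m) w.r.t. the array center
    tau = -(radius / c) * np.cos(thetas[:, None] - mic_angles[None, :])
    steering = np.exp(2j * np.pi * freqs[None, None, :] * tau[:, :, None])
    y = np.einsum('mf,dmf->df', X, steering)   # align and sum the mics
    return np.sum(np.abs(y) ** 2, axis=1)
```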

  9. Incremental Transductive Learning Approaches to Schistosomiasis Vector Classification

    NASA Astrophysics Data System (ADS)

    Fusco, Terence; Bi, Yaxin; Wang, Haiying; Browne, Fiona

    2016-08-01

    Collecting epidemic disease data for analysis is a labour-intensive, time-consuming and expensive process, resulting in sparse sample data from which prediction models are developed. To address this sparse data issue, we present novel Incremental Transductive methods that circumvent the data collection process by applying previously acquired data to provide consistent, confidence-based labelling alternatives to field survey research. We investigated various reasoning approaches for semi-supervised machine learning, including Bayesian models for labelling data. The results show that using the proposed methods, we can label instances of data with a class of vector density at a high level of confidence. By applying the Liberal and Strict Training Approaches, we provide a labelling and classification alternative to standalone algorithms. The methods in this paper are components in the process of reducing the proliferation of the Schistosomiasis disease and its effects.

  10. Laser processing for manufacturing nanocarbon materials

    NASA Astrophysics Data System (ADS)

    Van, Hai Hoang

    CNTs have been considered an excellent candidate to revolutionize a broad range of applications, and many methods have been developed to manipulate the chemistry and structure of CNTs. Laser treatment, with its non-contact capability, offers many processing advantages, including solid-state treatment, an extremely fast processing rate, and high processing resolution. In addition, the outstanding monochromatic, coherent, and directional beam generates powerful energy absorption and the resultant extreme processing conditions. In this research, a unique laser scanning method was developed to process CNTs, controlling both oxidation and graphitization, and the achieved controllability was applied to address important issues of current CNT processing methods in three applications. First, the controllable oxidation of CNTs by the laser scanning method was applied to cut CNT films to produce high-performance cathodes for field-emission (FE) devices. The production method includes two self-developed techniques: the production of highly oriented and uniformly distributed CNT sheets, and a precise laser trimming process. Laser cutting is a unique route to cathodes with remarkable features, including an ultrathin freestanding structure (~200 nm), a very high aspect ratio, hybrid CNT-GNR emitter arrays, even emitter separation, and directional emitter alignment; this cathode structure was unachievable by other methods. The developed FE devices successfully solved the screening-effect issue encountered by current FE devices. Second, the laser-controlled oxidation method was further developed to sequentially remove the graphitic walls of CNTs. The oxidation was directed along the CNT axes by the laser scanning direction and was further assisted by the curvature stress and thermal expansion of the graphitic nanotubes, ultimately opening (namely unzipping) the tubular structure to produce GNRs. The developed laser scanning method thus optimally exploited the thermal laser-CNT interaction, successfully transforming CNTs into 2D GNRs. This solid-state laser unzipping process effectively addressed the contamination and scalability issues encountered by current unzipping methods, and the produced GNRs were uniquely featured with a freestanding structure and smooth surfaces. Third, if the scanning process is performed in an inert environment without oxygen, the CNTs do not oxidize; instead, the highly mobile carbon atoms of the heated CNTs reorganize the crystal structure, inducing graphitization that improves the crystallinity. Many observations of structural improvement of CNTs under laser irradiation have been reported, confirming the capability of lasers to heal graphitic defects. Laser methods are more time- and energy-efficient than other annealing methods because a laser can heat CNTs to graphitization in less than one second, and this subsecond heating avoids the undesired coalescence of CNTs encountered with other heating methods. In this research, the laser scanning method was applied to generate graphitization and heal the structural defects of CNTs; unlike previously reported laser methods, the scanning moves the locally annealed area along the CNT axes, migrating and coalescing the graphitic defects to achieve better healing results. The critical information describing the CNT structural transformation caused by the moving laser irradiation was drawn from the successful applications of the developed laser method. This knowledge inspires an important method for modifying general graphitic structures for applications such as carbon fiber production, CNT self-assembly, and CNT welding, a method that promises to be effective, facile, versatile, and adaptable for laboratory and industrial facilities.

  11. Review of surface steam sterilization for validation purposes.

    PubMed

    van Doornmalen, Joost; Kopinga, Klaas

    2008-03-01

    Sterilization is an essential step in the process of producing sterile medical devices. To guarantee sterility, the process of sterilization must be validated. Because there is no direct way to measure sterility, the techniques applied to validate the sterilization process are based on statistical principles. Steam sterilization is the most frequently applied sterilization method worldwide and can be validated either with indicators (chemical or biological) or with physical measurements. The steam sterilization conditions are described in the literature; starting from these conditions, criteria for the validation of steam sterilization are derived and can be described in terms of physical parameters. Physical validation of steam sterilization appears to be an adequate and efficient validation method that could be considered as an alternative to indicator validation. Moreover, physical validation can be used for effective troubleshooting in steam sterilization processes.

  12. Method of Determining the Filtration Properties of oil-Bearing Crops in the Process of Their Pressing by the Example of Rape-oil Extrusion

    NASA Astrophysics Data System (ADS)

    Slavnov, E. V.; Petrov, I. A.

    2014-07-01

    A method of determining the change in the filtration properties of oil-bearing crops in the process of their pressing by repeated dynamic loading is proposed. The use of this method is demonstrated by the example of rape-oil extrusion. It was established that a change in the mass concentration of the oil in a rape mix from 0.45 to 0.23 leads to a decrease in the permeability of the mix by a factor of 10^1.5-10^2, depending on the pressure applied to it. It is shown that the dependence of the permeability of this mix on the applied pressure is nonmonotone in character.

  13. Optical Fourier diffractometry applied to degraded bone structure recognition

    NASA Astrophysics Data System (ADS)

    Galas, Jacek; Godwod, Krzysztof; Szawdyn, Jacek; Sawicki, Andrzej

    1993-09-01

    Image processing and recognition methods are useful in many fields. This paper presents a hybrid optical and digital method applied to the recognition of pathological changes in bones caused by metabolic bone diseases. The trabecular bone structure, recorded by X-ray on photographic film, is analyzed in a new type of computer-controlled diffractometer. The set of image parameters extracted from the diffractogram is evaluated by statistical analysis. Synthetic image descriptors in discriminant space, constructed by discriminant analysis on the basis of three training groups of images (control, osteoporosis, and osteomalacia), allow bone samples with degraded structure to be recognized and the disease to be identified. About 89% of the images were classified correctly. After optimization, the method will be verified in medical investigations.

  14. Quantifying Vegetation Biophysical Variables from Imaging Spectroscopy Data: A Review on Retrieval Methods

    NASA Astrophysics Data System (ADS)

    Verrelst, Jochem; Malenovský, Zbyněk; Van der Tol, Christiaan; Camps-Valls, Gustau; Gastellu-Etchegorry, Jean-Philippe; Lewis, Philip; North, Peter; Moreno, Jose

    2018-06-01

    An unprecedented spectroscopic data stream will soon become available with forthcoming Earth-observing satellite missions equipped with imaging spectroradiometers. This data stream will open up a vast array of opportunities to quantify a diversity of biochemical and structural vegetation properties. The processing requirements for such large data streams require reliable retrieval techniques enabling the spatiotemporally explicit quantification of biophysical variables. With the aim of preparing for this new era of Earth observation, this review summarizes the state-of-the-art retrieval methods that have been applied in experimental imaging spectroscopy studies inferring all kinds of vegetation biophysical variables. Identified retrieval methods are categorized into: (1) parametric regression, including vegetation indices, shape indices and spectral transformations; (2) nonparametric regression, including linear and nonlinear machine learning regression algorithms; (3) physically based, including inversion of radiative transfer models (RTMs) using numerical optimization and look-up table approaches; and (4) hybrid regression methods, which combine RTM simulations with machine learning regression methods. For each of these categories, an overview of widely applied methods with application to mapping vegetation properties is given. In view of processing imaging spectroscopy data, a critical aspect involves the challenge of dealing with spectral multicollinearity. The ability to provide robust estimates, retrieval uncertainties and acceptable retrieval processing speed are other important aspects in view of operational processing. Recommendations towards new-generation spectroscopy-based processing chains for operational production of biophysical variables are given.

  15. A source number estimation method for single optical fiber sensor

    NASA Astrophysics Data System (ADS)

    Hu, Junpeng; Huang, Zhiping; Su, Shaojing; Zhang, Yimeng; Liu, Chunwu

    2015-10-01

    The single-channel blind source separation (SCBSS) technique is of great significance in many fields, such as optical fiber communication, sensor detection, and image processing, and realizing blind source separation (BSS) from the data received by a single optical fiber sensor has a wide range of applications. The performance of many BSS algorithms and signal processing methods degrades with inaccurate source number estimation. Many excellent algorithms have been proposed for source number estimation in array signal processing with multiple sensors, but they cannot be applied directly to the single-sensor condition. This paper presents a source number estimation method for data received by a single optical fiber sensor. Through delay processing, the single-sensor data are converted to multi-dimensional form and the data covariance matrix is constructed; the estimation algorithms used in array signal processing can then be utilized. The information theoretic criteria (ITC) based methods, represented by AIC and MDL, and Gerschgorin's disk estimation (GDE) are introduced to estimate the source number from the single optical fiber sensor's received signal. To improve the performance of these estimation methods at low signal-to-noise ratio (SNR), a smoothing process is applied to the data covariance matrix, reducing the fluctuation and uncertainty of its eigenvalues. Simulation results prove that ITC-based methods cannot estimate the source number effectively under colored noise. The GDE method, although it performs poorly at low SNR, is able to accurately estimate the number of sources with colored noise. The experiments also show that the proposed method can be applied to estimate the source number of single-sensor received data.
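
    A compact sketch of the described pipeline: delay-embed the single-sensor record, form the covariance matrix, smooth it, and apply the MDL criterion to the eigenvalues. Here the smoothing is forward-backward averaging, one standard choice; the paper's exact smoothing scheme, embedding dimension, and GDE variant are not reproduced:

```python
import numpy as np

def estimate_source_number(x, m=20):
    """Estimate the number of sources from a single-sensor record x.
    m = 20 is an illustrative embedding dimension."""
    Y = np.lib.stride_tricks.sliding_window_view(x, m)   # (N, m) snapshots
    N = Y.shape[0]
    R = (Y.conj().T @ Y) / N                             # sample covariance
    J = np.eye(m)[::-1]
    R = 0.5 * (R + J @ R.conj() @ J)                     # forward-backward average
    ev = np.sort(np.linalg.eigvalsh(R))[::-1]
    mdl = []
    for k in range(m):
        tail = np.maximum(ev[k:], 1e-12)
        # ratio of geometric to arithmetic mean of the noise eigenvalues
        ratio = np.exp(np.mean(np.log(tail))) / np.mean(tail)
        mdl.append(-N * (m - k) * np.log(ratio)
                   + 0.5 * k * (2 * m - k) * np.log(N))
    return int(np.argmin(mdl))
```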

  16. Development and in-line validation of a Process Analytical Technology to facilitate the scale up of coating processes.

    PubMed

    Wirges, M; Funke, A; Serno, P; Knop, K; Kleinebudde, P

    2013-05-05

    Incorporation of an active pharmaceutical ingredient (API) into the coating layer of film-coated tablets is a method mainly used to formulate fixed-dose combinations. Uniform and precise spray-coating of an API represents a substantial challenge, which could be overcome by applying Raman spectroscopy as a process analytical tool. In the pharmaceutical industry, Raman spectroscopy is still mainly used as a benchtop laboratory analytical method and is usually not implemented in the production process; many scientific approaches stop at the level of feasibility studies and do not take the step to production-scale process applications. The present work puts the scale up of an active coating process into focus, which is a step of highest importance during pharmaceutical development. Active coating experiments were performed at lab and production scale. Using partial least squares (PLS), a multivariate model was constructed by correlating in-line measured Raman spectral data with the coated amount of API. By transferring this model, implemented for a lab scale process, to a production scale process, the robustness of this analytical method and thus its applicability as a Process Analytical Technology (PAT) tool for correct endpoint determination in pharmaceutical manufacturing could be shown. Finally, this method was validated according to the European Medicines Agency (EMA) guideline with respect to the special requirements of the applied in-line model development strategy. Copyright © 2013 Elsevier B.V. All rights reserved.
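
    The multivariate calibration described follows the usual PLS recipe: regress the coated API amount on in-line spectra and choose the number of latent variables by cross-validation. A minimal sketch with stand-in data; the paper's spectral preprocessing and validation protocol are not reproduced, and all names and shapes are hypothetical:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Stand-in calibration data: rows of X are in-line Raman spectra,
# y is the coated API amount per sample.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))
y = 2.0 * X[:, 100] + rng.normal(scale=0.1, size=60)

pls = PLSRegression(n_components=3)
y_cv = cross_val_predict(pls, X, y, cv=5).ravel()   # cross-validated predictions
rmsecv = float(np.sqrt(np.mean((y - y_cv) ** 2)))
print(f"RMSECV = {rmsecv:.3f}")   # choose n_components that minimizes this
```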

  17. Characterization of biogenic ferrihydrite nanoparticles by means of SAXS, SRD and IBA methods

    NASA Astrophysics Data System (ADS)

    Balasoiu, M.; Kichanov, S.; Pantelica, A.; Pantelica, D.; Stolyar, S.; Iskhakov, R.; Aranghel, D.; Ionescu, P.; Badita, C. R.; Kurkin, S.; Orelovich, O.; Tiutiunikov, S.

    2018-03-01

    Investigations of biogenic ferrihydrite nanoparticles produced by the bacterium Klebsiella oxytoca, carried out by applying small-angle X-ray scattering, synchrotron radiation diffraction, and ion beam analysis methods, are reviewed. Different experimental data processing methods are used and analyzed.

  18. SEIPS-based process modeling in primary care.

    PubMed

    Wooldridge, Abigail R; Carayon, Pascale; Hundt, Ann Schoofs; Hoonakker, Peter L T

    2017-04-01

    Process mapping, often used as part of the human factors and systems engineering approach to improve care delivery and outcomes, should be expanded to represent the complex, interconnected sociotechnical aspects of health care. Here, we propose a new sociotechnical process modeling method to describe and evaluate processes, using the SEIPS model as the conceptual framework. The method produces a process map and supplementary table, which identify work system barriers and facilitators. In this paper, we present a case study applying this method to three primary care processes. We used purposeful sampling to select staff (care managers, providers, nurses, administrators and patient access representatives) from two clinics to observe and interview. We show the proposed method can be used to understand and analyze healthcare processes systematically and identify specific areas of improvement. Future work is needed to assess usability and usefulness of the SEIPS-based process modeling method and further refine it. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. SEIPS-Based Process Modeling in Primary Care

    PubMed Central

    Wooldridge, Abigail R.; Carayon, Pascale; Hundt, Ann Schoofs; Hoonakker, Peter

    2016-01-01

    Process mapping, often used as part of the human factors and systems engineering approach to improve care delivery and outcomes, should be expanded to represent the complex, interconnected sociotechnical aspects of health care. Here, we propose a new sociotechnical process modeling method to describe and evaluate processes, using the SEIPS model as the conceptual framework. The method produces a process map and supplementary table, which identify work system barriers and facilitators. In this paper, we present a case study applying this method to three primary care processes. We used purposeful sampling to select staff (care managers, providers, nurses, administrators and patient access representatives) from two clinics to observe and interview. We show the proposed method can be used to understand and analyze healthcare processes systematically and identify specific areas of improvement. Future work is needed to assess usability and usefulness of the SEIPS-based process modeling method and further refine it. PMID:28166883

  20. Multi-electrolyte-step anodic aluminum oxide method for the fabrication of self-organized nanochannel arrays

    PubMed Central

    2012-01-01

    Nanochannel arrays were fabricated by the self-organized multi-electrolyte-step anodic aluminum oxide (AAO) method in this study. The anodization conditions of the multi-electrolyte-step AAO method initially used a phosphoric acid solution as the electrolyte with a high applied voltage; the phosphoric acid was then replaced by an oxalic acid solution as the electrolyte with a low applied voltage. This method produced self-organized nanochannel arrays with good regularity and circularity, with less power loss and processing time than the multi-step AAO method. PMID:22333268

  1. New disinfection and sterilization methods.

    PubMed Central

    Rutala, W. A.; Weber, D. J.

    2001-01-01

    New disinfection methods include a persistent antimicrobial coating that can be applied to inanimate and animate objects (Surfacine), a high-level disinfectant with reduced exposure time (ortho-phthalaldehyde), and an antimicrobial agent that can be applied to animate and inanimate objects (superoxidized water). New sterilization methods include a chemical sterilization process for endoscopes that integrates cleaning (Endoclens), a rapid (4-hour) readout biological indicator for ethylene oxide sterilization (Attest), and a hydrogen peroxide plasma sterilizer that has a shorter cycle time and improved efficacy (Sterrad 50). PMID:11294738

  2. The Effects of Jigsaw Technique Based on Cooperative Learning on Prospective Science Teachers' Science Process Skill

    ERIC Educational Resources Information Center

    Karacop, Ataman; Diken, Emine Hatun

    2017-01-01

    The purpose of this study is to investigate the effects of a laboratory approach based on the jigsaw method of cooperative learning and of a confirmatory laboratory approach on university students' development of cognitive processes in science teaching laboratory applications, and to determine the students' opinions of the applied laboratory methods. The…

  3. Abstraction Techniques for Parameterized Verification

    DTIC Science & Technology

    2006-11-01

    approach for applying model checking to unbounded systems is to extract finite state models from them using conservative abstraction techniques. ... Applying model checking to complex pieces of code like device drivers depends on the use of abstraction methods. An abstraction method extracts a small finite model...

  4. Multi-ball and one-ball geolocation

    NASA Astrophysics Data System (ADS)

    Nelson, D. J.; Townsend, J. L.

    2017-05-01

    We present analysis methods that may be used to geolocate emitters using one or more moving receivers. While some of the methods we present may apply to a broader class of signals, our primary interest is locating and tracking ships from short pulsed transmissions, such as the maritime Automatic Identification System (AIS). The AIS signal is difficult to process and track, since the pulse duration is only 25 milliseconds and the pulses may only be transmitted every six to ten seconds. In this article, we address several problems, including accurate TDOA and FDOA estimation methods that do not require searching a two-dimensional surface such as the cross-ambiguity surface. As an example, we apply these methods to identify and process AIS pulses from a single emitter, making it possible to geolocate the AIS signal using a single moving receiver.
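
    As a baseline for the kind of estimate involved, the TDOA between two receivers can be obtained from the peak of their cross-correlation, computed efficiently in the frequency domain. This plain sketch does not implement the authors' methods, which specifically avoid a full two-dimensional cross-ambiguity search, and it ignores FDOA:

```python
import numpy as np

def estimate_tdoa(x1, x2, fs):
    """Estimate the time-difference-of-arrival between two receivers'
    recordings of the same pulse via FFT-based cross-correlation."""
    n = len(x1) + len(x2) - 1
    X1 = np.fft.rfft(x1, n)
    X2 = np.fft.rfft(x2, n)
    cc = np.fft.irfft(X1 * np.conj(X2), n)
    # rearrange so lags run from -(len(x2)-1) to +(len(x1)-1)
    cc = np.concatenate((cc[-(len(x2) - 1):], cc[:len(x1)]))
    lag = np.argmax(np.abs(cc)) - (len(x2) - 1)
    return lag / fs   # delay in seconds
```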

  5. Automated segmentation of serous pigment epithelium detachment in SD-OCT images

    NASA Astrophysics Data System (ADS)

    Sun, Zhuli; Shi, Fei; Xiang, Dehui; Chen, Haoyu; Chen, Xinjian

    2015-03-01

    Pigment epithelium detachment (PED) is an important clinical manifestation of multiple chorio-retinal disease processes, which can cause the loss of central vision. A 3-D method is proposed to automatically segment serous PED in SD-OCT images. The proposed method consists of five steps: first, a curvature anisotropic diffusion filter is applied to remove speckle noise. Second, the graph search method is applied for abnormal retinal layer segmentation associated with retinal pigment epithelium (RPE) deformation. During this process, Bruch's membrane, which is not visible in the SD-OCT images, is estimated with the convex hull algorithm. Third, the foreground and background seeds are automatically obtained from the retinal layer segmentation result. Fourth, the serous PED is segmented based on the graph cut method. Finally, a post-processing step is applied to remove false positive regions based on mathematical morphology. The proposed method was tested on 20 SD-OCT volumes from 20 patients diagnosed with serous PED. The average true positive volume fraction (TPVF), false positive volume fraction (FPVF), dice similarity coefficient (DSC) and positive predictive value (PPV) are 97.19%, 0.03%, 96.34% and 95.59%, respectively. Linear regression analysis shows a strong correlation (r = 0.975) comparing the segmented PED volumes with the ground truth labeled by an ophthalmology expert. The proposed method can provide clinicians with accurate quantitative information, including the shape, size and position of the PED regions, which can assist diagnosis and treatment.
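
    As a minimal sketch of the final step only (removal of false-positive regions by mathematical morphology), the fragment below cleans a binary 3-D mask; the toy mask and the 500-voxel size threshold are assumptions for illustration, and the earlier pipeline steps are not reproduced (Python):

      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(0)
      seg = rng.random((32, 64, 64)) > 0.995    # toy mask: isolated speckle
      seg[10:20, 20:40, 20:40] = True           # plus one large PED-like region

      opened = ndimage.binary_opening(seg)      # break thin bridges
      labels, n = ndimage.label(opened)         # 3-D connected components
      sizes = ndimage.sum(opened, labels, index=range(1, n + 1))
      keep = 1 + np.flatnonzero(sizes >= 500)   # drop components < 500 voxels
      cleaned = np.isin(labels, keep)
      print(f"{n} components -> {keep.size} kept")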

  6. Optimisation of shock absorber process parameters using failure mode and effect analysis and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mariajayaprakash, Arokiasamy; Senthilvelan, Thiyagarajan; Vivekananthan, Krishnapillai Ponnambal

    2013-07-01

    The various process parameters affecting the quality characteristics of the shock absorber during the process were identified using the Ishikawa diagram and by failure mode and effect analysis. The identified process parameters are welding process parameters (squeeze, heat control, wheel speed, and air pressure), damper sealing process parameters (load, hydraulic pressure, air pressure, and fixture height), washing process parameters (total alkalinity, temperature, pH value of rinsing water, and timing), and painting process parameters (flowability, coating thickness, pointage, and temperature). In this paper, the painting and washing process parameters are optimized by the Taguchi method. Though the defects are reasonably minimized by the Taguchi method, the genetic algorithm technique is then applied to the parameters optimized by the Taguchi method in order to achieve zero defects during the processes.
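
    For readers unfamiliar with the Taguchi step, the sketch below computes the signal-to-noise ratios used to rank factor levels; the replicated defect counts are invented, and a smaller-the-better response is assumed (Python):

      import numpy as np

      def sn_smaller_the_better(y):
          """SN = -10 log10(mean(y^2)); higher is better for defect counts."""
          y = np.asarray(y, dtype=float)
          return -10.0 * np.log10(np.mean(y ** 2))

      def sn_larger_the_better(y):
          """SN = -10 log10(mean(1/y^2)); used for responses to maximise."""
          y = np.asarray(y, dtype=float)
          return -10.0 * np.log10(np.mean(1.0 / y ** 2))

      # Hypothetical replicated defect counts for two levels of one factor:
      print(sn_smaller_the_better([3, 4, 2]))   # level 1
      print(sn_smaller_the_better([1, 2, 1]))   # level 2: higher SN, preferred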

  7. Method of measuring metal coating adhesion

    DOEpatents

    Roper, J.R.

    A method for measuring metal coating adhesion to a substrate material comprising the steps of preparing a test coupon of substrate material having the metal coating applied to one surface thereof, applying a second metal coating of gold or silver to opposite surfaces of the test coupon by hot hollow cathode process, applying a coating to one end of each of two pulling rod members, joining the coated ends of the pulling rod members to said opposite coated surfaces of the test coupon by a solid state bonding technique and finally applying instrumented static tensile loading to the pulling rod members until fracture of the metal coating adhesion to the substrate material occurs.

  8. Method of measuring metal coating adhesion

    DOEpatents

    Roper, John R.

    1985-01-01

    A method for measuring metal coating adhesion to a substrate material comprising the steps of preparing a test coupon of substrate material having the metal coating applied to one surface thereof, applying a second metal coating of gold or silver to opposite surfaces of the test coupon by hot hollow cathode process, applying a coating to one end of each of two pulling rod members, joining the coated ends of the pulling rod members to said opposite coated surfaces of the test coupon by a solid state bonding technique and finally applying instrumented static tensile loading to the pulling rod members until fracture of the metal coating adhesion to the substrate material occurs.

  9. Designing Class Methods from Dataflow Diagrams

    NASA Astrophysics Data System (ADS)

    Shoval, Peretz; Kabeli-Shani, Judith

    A method for designing the class methods of an information system is described. The method is part of FOOM - Functional and Object-Oriented Methodology. In the analysis phase of FOOM, two models defining the users' requirements are created: a conceptual data model - an initial class diagram; and a functional model - hierarchical OO-DFDs (object-oriented dataflow diagrams). Based on these models, a well-defined process of method design is applied. First, the OO-DFDs are converted into transactions, i.e., system processes that support a user task. The components and the process logic of each transaction are described in detail, using pseudocode. Then, each transaction is decomposed, according to well-defined rules, into class methods of various types: basic methods, application-specific methods, and main transaction (control) methods. Each method is attached to a proper class; messages between methods express the process logic of each transaction. The methods are defined using pseudocode or message charts.
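
    The decomposition into method types can be pictured with a toy example; the order-handling domain and all names below are invented for illustration, not taken from FOOM itself (Python):

      class Order:
          def __init__(self, order_id, qty):
              self.order_id, self.qty = order_id, qty

          # Basic method: elementary access to the object's state.
          def get_qty(self):
              return self.qty

          # Application-specific method: logic particular to this system.
          def apply_bulk_discount(self, price):
              return price * 0.9 if self.qty >= 100 else price

      class OrderHandler:
          # Main transaction (control) method: encodes the transaction's
          # process logic as a sequence of messages to other methods.
          def price_order(self, order, unit_price):
              total = order.get_qty() * unit_price
              return order.apply_bulk_discount(total)

      print(OrderHandler().price_order(Order("A-17", 120), 2.5))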

  10. Scientific computations section monthly report, November 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buckner, M.R.

    1993-12-30

    This progress report from the Savannah River Technology Center contains abstracts from papers from the computational modeling, applied statistics, applied physics, experimental thermal hydraulics, and packaging and transportation groups. Specific topics covered include: engineering modeling and process simulation, criticality methods and analysis, plutonium disposition.

  11. Computational fluid dynamics modelling of hydraulics and sedimentation in process reactors during aeration tank settling.

    PubMed

    Jensen, M D; Ingildsen, P; Rasmussen, M R; Laursen, J

    2006-01-01

    Aeration tank settling is a control method allowing settling in the process tank during high hydraulic load. The control method is patented. Aeration tank settling has been applied in several wastewater treatment plants using the present design of the process tanks. Some process tank designs have been shown to be more effective than others. To improve the design of less effective plants, computational fluid dynamics (CFD) modelling of hydraulics and sedimentation has been applied. This paper discusses the results at one particular plant experiencing problems with partial short-circuiting between the inlet and outlet, which disrupts the sludge blanket at the outlet and thereby reduces the retention of sludge in the process tank. The model has allowed us to establish a clear picture of the problems arising at the plant during aeration tank settling. In addition, several process tank design changes have been suggested and tested by means of computational fluid dynamics modelling, and the most promising design changes have been identified and reported.

  12. Measurement and analysis of applied power, forces and material response in friction stir welding of aluminum alloy 6061

    NASA Astrophysics Data System (ADS)

    Avila, Ricardo E.

    The process of Friction Stir Welding (FSW) 6061 aluminum alloy is investigated, with focus on the forces and power being applied in the process and the material response. The main objective is to relate measurements of the forces and power applied in the process with mechanical properties of the material during the dynamic process, based on mathematical modeling and aided by computer simulations, using the LS-DYNA software for finite element modeling. Results of measurements of applied forces and power are presented. The result obtained for applied power is used in the construction of a mechanical variational model of FSW, in which minimization of a functional for the applied torque is sought, leading to an expression for shear stress in the material. The computer simulations are performed by application of the Smoothed Particle Hydrodynamics (SPH) method, in which no structured finite element mesh is used to construct a spatial discretization of the model. The current implementation of SPH in LS-DYNA allows a structural solution using a plastic kinematic material model. This work produces information useful to improve understanding of the material flow in the process, and thus adds to current knowledge about the behavior of materials under processes of severe plastic deformation, particularly those processes in which deformation occurs mainly by application of shear stress, aided by thermoplastic strain localization and dynamic recrystallization.

  13. Novel Materials through Non-Hydrolytic Sol-Gel Processing: Negative Thermal Expansion Oxides and Beyond

    PubMed Central

    Lind, Cora; Gates, Stacy D.; Pedoussaut, Nathalie M.; Baiz, Tamam I.

    2010-01-01

    Low temperature methods have been applied to the synthesis of many advanced materials. Non-hydrolytic sol-gel (NHSG) processes offer an elegant route to stable and metastable phases at low temperatures. Excellent atomic level homogeneity gives access to polymorphs that are difficult or impossible to obtain by other methods. The NHSG approach is most commonly applied to the preparation of metal oxides, but can be easily extended to metal sulfides. Exploration of experimental variables allows control over product stoichiometry and crystal structure. This paper reviews the application of NHSG chemistry to the synthesis of negative thermal expansion oxides and selected metal sulfides.

  14. Quality assurance and management in microelectronics companies: ISO 9000 versus Six Sigma

    NASA Astrophysics Data System (ADS)

    Lupan, Razvan; Kobi, Abdessamad; Robledo, Christian; Bacivarov, Ioan; Bacivarov, Angelica

    2009-01-01

    A strategy for the implementation of the Six Sigma method as an improvement solution for the ISO 9000:2000 quality standard is proposed. Our approach is focused on integrating the DMAIC cycle of the Six Sigma method with the PDCA process approach, highly recommended by the ISO 9000:2000 standard. The Six Sigma steps applied to each part of the PDCA cycle are presented in detail, with some tools and training examples. Based on this analysis, the authors conclude that applying the Six Sigma philosophy to the quality standard implementation process is the best way to achieve optimal results in quality progress and therefore in customer satisfaction.

  15. An Evaluation Model Applied to a Mathematics-Methods Program Involving Three Characteristics of Teaching Style and Their Relationship to Pupil Achievement. Teacher Education Forum; Volume 3, Number 4.

    ERIC Educational Resources Information Center

    Dodd, Carol Ann

    This study explores a technique for evaluating teacher education programs in terms of teaching competencies, as applied to the Indiana University Mathematics Methods Program (MMP). The evaluation procedures formulated for the study include a process-product design in combination with a modification of Popham's performance test paradigm and Gage's…

  16. Water supply management using an extended group fuzzy decision-making method: a case study in north-eastern Iran

    NASA Astrophysics Data System (ADS)

    Minatour, Yasser; Bonakdari, Hossein; Zarghami, Mahdi; Bakhshi, Maryam Ali

    2015-09-01

    The purpose of this study was to develop a group fuzzy multi-criteria decision-making method to be applied in rating problems associated with water resources management. To this end, Chen's group fuzzy TOPSIS method was extended by a difference technique to handle the uncertainties of applying group decision making, and the extended group fuzzy TOPSIS method was combined with a consistency check. In the presented method, the linguistic judgments are first screened via a consistency-checking process, and these judgments are then used in the extended Chen's fuzzy TOPSIS method. Each expert's opinion is converted into crisp mathematical numbers and, to incorporate uncertainties, the opinions of the group are turned into fuzzy numbers using three mathematical operators. The proposed method is applied to select the optimal strategy for the rural water supply of Nohoor village in north-eastern Iran, as a case study and illustrated example. Sensitivity analyses over the results and a comparison of the results with the project's reality showed that the proposed method offers good results for water resources projects.
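
    The core ranking step can be illustrated with a crisp (non-fuzzy) TOPSIS sketch; the decision matrix, weights, and criterion directions are invented, and the paper's fuzzy-group extension and consistency check are not reproduced (Python):

      import numpy as np

      X = np.array([[7.0, 5.0, 3.0],            # rows: candidate strategies
                    [8.0, 3.0, 4.0],            # cols: criteria
                    [6.0, 6.0, 5.0]])
      w = np.array([0.5, 0.3, 0.2])             # criteria weights (sum to 1)
      benefit = np.array([False, True, False])  # True = larger is better

      V = (X / np.linalg.norm(X, axis=0)) * w   # weighted normalised matrix
      ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
      anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

      d_plus = np.linalg.norm(V - ideal, axis=1)
      d_minus = np.linalg.norm(V - anti, axis=1)
      closeness = d_minus / (d_plus + d_minus)
      print(closeness, closeness.argmax())      # best strategy by closeness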

  17. From IHE Audit Trails to XES Event Logs Facilitating Process Mining.

    PubMed

    Paster, Ferdinand; Helm, Emmanuel

    2015-01-01

    Recently, business intelligence approaches such as process mining have been applied to the healthcare domain. The goal of process mining is to gain process knowledge, check compliance, and identify room for improvement by investigating recorded event data. Previous approaches focused on process discovery using event data from various specific systems. IHE, as a globally recognized basis for healthcare information systems, defines in its ATNA profile how real-world events must be recorded in centralized event logs. The following approach presents how audit trails collected by means of ATNA can be transformed to enable process mining. Using the standardized audit trails provides the ability to apply these methods to all IHE-based information systems.

  18. A study in the founding of applied behavior analysis through its publications.

    PubMed

    Morris, Edward K; Altus, Deborah E; Smith, Nathaniel G

    2013-01-01

    This article reports a study of the founding of applied behavior analysis through its publications. Our methods included (a) hand searches of sources (e.g., journals, reference lists), (b) search terms (i.e., early, applied, behavioral, research, literature), (c) inclusion criteria (e.g., the field's applied dimension), and (d) challenges to their face and content validity. Our results were 36 articles published between 1959 and 1967 that we organized into 4 groups: 12 in 3 programs of research and 24 others. Our discussion addresses (a) limitations in our method (e.g., the completeness of our search), (b) challenges to the validity of our methods and results (e.g., convergent validity), and (c) priority claims about the field's founding. We conclude that the claims are irresolvable because identification of the founding publications depends significantly on methods and because the field's founding was an evolutionary process. We close with suggestions for future research.

  19. A Study in the Founding of Applied Behavior Analysis Through Its Publications

    PubMed Central

    Morris, Edward K.; Altus, Deborah E.; Smith, Nathaniel G.

    2013-01-01

    This article reports a study of the founding of applied behavior analysis through its publications. Our methods included (a) hand searches of sources (e.g., journals, reference lists), (b) search terms (i.e., early, applied, behavioral, research, literature), (c) inclusion criteria (e.g., the field's applied dimension), and (d) challenges to their face and content validity. Our results were 36 articles published between 1959 and 1967 that we organized into 4 groups: 12 in 3 programs of research and 24 others. Our discussion addresses (a) limitations in our method (e.g., the completeness of our search), (b) challenges to the validity of our methods and results (e.g., convergent validity), and (c) priority claims about the field's founding. We conclude that the claims are irresolvable because identification of the founding publications depends significantly on methods and because the field's founding was an evolutionary process. We close with suggestions for future research. PMID:25729133

  20. Quality assessment of raw and processed Arctium lappa L. through multicomponent quantification, chromatographic fingerprint, and related chemometric analysis.

    PubMed

    Qin, Kunming; Wang, Bin; Li, Weidong; Cai, Hao; Chen, Danni; Liu, Xiao; Yin, Fangzhou; Cai, Baochang

    2015-05-01

    In traditional Chinese medicine, raw and processed herbs are used to treat different diseases. Suitable quality assessment methods are crucial for discriminating between raw and processed herbs. The dried fruit of Arctium lappa L. and its processed products are widely used in traditional Chinese medicine, yet their therapeutic effects are different. In this study, a novel strategy using high-performance liquid chromatography and diode array detection coupled with multivariate statistical analysis to rapidly distinguish raw and processed Arctium lappa L. was proposed and validated. Four main components in a total of 30 batches of raw and processed Fructus Arctii samples were analyzed, and ten characteristic peaks were identified in the fingerprint common pattern. Furthermore, similarity evaluation, principal component analysis, and hierarchical cluster analysis were applied to demonstrate the distinction. The results suggested that the relative amounts of the chemical components of raw and processed Fructus Arctii samples are different. This new method has been successfully applied to detect raw and processed Fructus Arctii in marketed herbal medicinal products. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
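
    The chemometric step can be sketched as follows; the fingerprint matrix is random stand-in data, with rows playing the role of Fructus Arctii batches and columns the role of chromatographic peak areas (Python):

      import numpy as np
      from sklearn.decomposition import PCA
      from scipy.cluster.hierarchy import linkage, fcluster

      rng = np.random.default_rng(4)
      raw = rng.normal(0.0, 1.0, (15, 10))        # 15 raw batches, 10 peak areas
      processed = rng.normal(1.0, 1.0, (15, 10))  # 15 processed batches, shifted
      X = np.vstack([raw, processed])

      scores = PCA(n_components=2).fit_transform(X)    # for a PCA scores plot
      Z = linkage(X, method="ward")                    # HCA dendrogram data
      groups = fcluster(Z, t=2, criterion="maxclust")  # expect raw vs processed
      print(groups)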

  1. Research on the Intensive Material Management System of Biomass Power Plant

    NASA Astrophysics Data System (ADS)

    Zhang, Ruosi; Hao, Tianyi; Li, Yunxiao; Zhang, Fangqing; Ding, Sheng

    2017-05-01

    In view of the widespread problems of loose material management and a lack of standardization and real-time interaction in biomass power plants, a system based on the method of intensive management is proposed in this paper to control the whole material process of the power plant. By analysing the whole process of power plant material management and applying the Internet of Things, the method can simplify the management process. Through maximized use of resources and data mining, material utilization, circulation rate, and quality control management can be improved. The system has been applied in the Gaotang power plant, where it has greatly raised the level of materials management and economic effectiveness. It has important significance for the safe, cost-effective, and highly efficient operation of the plant.

  2. Nonparametric Inference of Doubly Stochastic Poisson Process Data via the Kernel Method

    PubMed Central

    Zhang, Tingting; Kou, S. C.

    2010-01-01

    Doubly stochastic Poisson processes, also known as the Cox processes, frequently occur in various scientific fields. In this article, motivated primarily by analyzing Cox process data in biophysics, we propose a nonparametric kernel-based inference method. We conduct a detailed study, including an asymptotic analysis, of the proposed method, and provide guidelines for its practical use, introducing a fast and stable regression method for bandwidth selection. We apply our method to real photon arrival data from recent single-molecule biophysical experiments, investigating proteins' conformational dynamics. Our result shows that conformational fluctuation is widely present in protein systems, and that the fluctuation covers a broad range of time scales, highlighting the dynamic and complex nature of proteins' structure. PMID:21258615
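
    A minimal version of the kernel idea, leaving aside the paper's asymptotic analysis and regression-based bandwidth selection: smooth the event times with a Gaussian kernel to estimate the intensity function. The simulated arrival times and the bandwidth are illustrative (Python):

      import numpy as np

      rng = np.random.default_rng(1)
      events = np.sort(rng.uniform(0.0, 10.0, size=200))  # photon arrival times

      def intensity(t, events, h):
          """lambda_hat(t) = (1/h) * sum_i K((t - t_i)/h), Gaussian K."""
          u = (t[:, None] - events[None, :]) / h
          return np.exp(-0.5 * u ** 2).sum(axis=1) / (h * np.sqrt(2 * np.pi))

      grid = np.linspace(0, 10, 501)
      lam = intensity(grid, events, h=0.5)
      print(lam.mean())  # near the true rate, here ~20 events per unit time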

  3. Nonparametric Inference of Doubly Stochastic Poisson Process Data via the Kernel Method.

    PubMed

    Zhang, Tingting; Kou, S C

    2010-01-01

    Doubly stochastic Poisson processes, also known as the Cox processes, frequently occur in various scientific fields. In this article, motivated primarily by analyzing Cox process data in biophysics, we propose a nonparametric kernel-based inference method. We conduct a detailed study, including an asymptotic analysis, of the proposed method, and provide guidelines for its practical use, introducing a fast and stable regression method for bandwidth selection. We apply our method to real photon arrival data from recent single-molecule biophysical experiments, investigating proteins' conformational dynamics. Our result shows that conformational fluctuation is widely present in protein systems, and that the fluctuation covers a broad range of time scales, highlighting the dynamic and complex nature of proteins' structure.

  4. Applying Knowledge Discovery in Databases in Public Health Data Set: Challenges and Concerns

    PubMed Central

    Volrathongchia, Kanittha

    2003-01-01

    In attempting to apply Knowledge Discovery in Databases (KDD) to generate a predictive model from a health care dataset that is currently available to the public, the first step is to pre-process the data to overcome the challenges of missing data, redundant observations, and records containing inaccurate data. This study will demonstrate how to use simple pre-processing methods to improve the quality of input data. PMID:14728545

  5. Participatory Design in Gerontechnology: A Systematic Literature Review.

    PubMed

    Merkel, Sebastian; Kucharski, Alexander

    2018-05-19

    Participatory design (PD) is widely used within gerontechnology, but there is no common understanding of which methods are used for what purposes. This review aims to examine what different forms of PD exist in the field of gerontechnology and how these can be categorized. We conducted a systematic literature review covering several databases. The search strategy was based on 3 elements: (1) participatory methods and approaches with (2) older persons aiming at developing (3) technology for older people. Our final review included 26 studies representing a variety of technologies designed/developed and methods/instruments applied. According to the technologies, the publications reviewed can be categorized into 3 groups: studies that (1) use already existing technology with the aim of finding new ways of use; (2) aim at creating new devices; (3) test and/or modify prototypes. The implementation of PD depends on why a participatory approach is applied, who is involved as future user(s), when those future users are involved, and how they are incorporated into the innovation process. There are multiple ways, methods, and instruments to integrate users into the innovation process, and which methods should be applied depends on the context. However, most studies do not evaluate whether participatory approaches lead to better acceptance and/or use of the co-developed products. Therefore, participatory design should follow a comprehensive strategy, starting with the users' needs and ending with an evaluation of whether the applied methods have led to better results.

  6. 40 CFR 412.2 - General definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... also cites the approved methods of analysis. (d) Process wastewater means water directly or indirectly... facilities; direct contact swimming, washing, or spray cooling of animals; or dust control. Process... process wastewater from the production area is or may be applied. (f) New source is defined at 40 CFR 122...

  7. 40 CFR 412.2 - General definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... also cites the approved methods of analysis. (d) Process wastewater means water directly or indirectly... facilities; direct contact swimming, washing, or spray cooling of animals; or dust control. Process... process wastewater from the production area is or may be applied. (f) New source is defined at 40 CFR 122...

  8. 40 CFR 412.2 - General definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... also cites the approved methods of analysis. (d) Process wastewater means water directly or indirectly... facilities; direct contact swimming, washing, or spray cooling of animals; or dust control. Process... process wastewater from the production area is or may be applied. (f) New source is defined at 40 CFR 122...

  9. Nonequilibrium radiative heating prediction method for aeroassist flowfields with coupling to flowfield solvers. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Hartung, Lin C.

    1991-01-01

    A method for predicting radiation absorption and emission coefficients in thermochemical nonequilibrium flows is developed. The method is called the Langley optimized radiative nonequilibrium code (LORAN). It applies the smeared band approximation for molecular radiation to produce moderately detailed results and is intended to fill the gap between detailed but costly prediction methods and very fast but highly approximate methods. The optimization of the method to provide efficient solutions allowing coupling to flowfield solvers is discussed. Representative results are obtained and compared to previous nonequilibrium radiation methods, as well as to ground- and flight-measured data. Reasonable agreement is found in all cases. A multidimensional radiative transport method is also developed for axisymmetric flows. Its predictions for wall radiative flux are 20 to 25 percent lower than those of the tangent slab transport method, as expected, though additional investigation of the symmetry and outflow boundary conditions is indicated. The method was applied to the peak heating condition of the aeroassist flight experiment (AFE) trajectory, with results comparable to predictions from other methods. The LORAN method was also applied in conjunction with the computational fluid dynamics (CFD) code LAURA to study the sensitivity of the radiative heating prediction to various models used in nonequilibrium CFD. This study suggests that radiation measurements can provide diagnostic information about the detailed processes occurring in a nonequilibrium flowfield because radiation phenomena are very sensitive to these processes.

  10. University of Tennessee Center for Space Transportation and Applied Research (CSTAR)

    NASA Astrophysics Data System (ADS)

    1995-10-01

    The Center for Space Transportation and Applied Research had projects with space applications in six major areas: laser materials processing, artificial intelligence/expert systems, space transportation, computational methods, chemical propulsion, and electric propulsion. The closeout status of all these projects is addressed.

  11. University of Tennessee Center for Space Transportation and Applied Research (CSTAR)

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The Center for Space Transportation and Applied Research had projects with space applications in six major areas: laser materials processing, artificial intelligence/expert systems, space transportation, computational methods, chemical propulsion, and electric propulsion. The closeout status of all these projects is addressed.

  12. Modelling Peri-Perceptual Brain Processes in a Deep Learning Spiking Neural Network Architecture.

    PubMed

    Gholami Doborjeh, Zohreh; Kasabov, Nikola; Gholami Doborjeh, Maryam; Sumich, Alexander

    2018-06-11

    Familiarity of marketing stimuli may affect consumer behaviour at a peri-perceptual processing level. The current study introduces a method for deep learning of electroencephalogram (EEG) data using a spiking neural network (SNN) approach that reveals the complexity of peri-perceptual processes of familiarity. The method is applied to data from 20 participants viewing familiar and unfamiliar logos. The results support the potential of SNN models as novel tools in the exploration of peri-perceptual mechanisms that respond differentially to familiar and unfamiliar stimuli. Specifically, the activation pattern of the time-locked response identified by the proposed SNN model at approximately 200 milliseconds post-stimulus suggests greater connectivity and more widespread dynamic spatio-temporal patterns for familiar than unfamiliar logos. The proposed SNN approach can be applied to study other peri-perceptual or perceptual brain processes in cognitive and computational neuroscience.

  13. Preparation of highly hydrophobic cotton fabrics by modification with bifunctional silsesquioxanes in the sol-gel process

    NASA Astrophysics Data System (ADS)

    Przybylak, Marcin; Maciejewski, Hieronim; Dutkiewicz, Agnieszka

    2016-11-01

    The surface modification of cotton fabrics was carried out using two types of bifunctional fluorinated silsesquioxanes with different ratios of functional groups. The modification was performed either by a one- or a two-step process. Two methods, sol-gel and dip coating, were used in different configurations. Heat treatment and a washing process were applied after modification. The wettability of the cotton fabric was evaluated by measuring water contact angles (WCA). Changes in the surface morphology were examined by scanning electron microscopy (SEM, SEM-LFD) and atomic force microscopy (AFM). Moreover, the modified fabrics were subjected to analysis of the elemental composition of the applied coatings using SEM-EDS techniques. Highly hydrophobic textiles were obtained in all cases studied, and one of the modifications imparted superhydrophobic properties. Most of the impregnated textiles remained hydrophobic even after multiple washing cycles, which shows that the studied modification is durable.

  14. Research on a lubricating grease print process for cylindrical cylinder

    NASA Astrophysics Data System (ADS)

    Yang, Liu; Zhang, Xuan; Wang, XianYan; Tan, XiaoYan

    2017-09-01

    In vehicle braking systems and transmission clutch systems, there is a class of cylindrical components that perform reciprocating motion. The main working mode is the reciprocating motion between rubber sealing parts and cylindrical parts, and the main factor affecting the service life of the product is the lubricating performance of the moving parts. The lubricating performance between the cylinders and the rubber sealing rings is therefore particularly important, as is the quality of the grease applied to the cylinder surface. The traditional method of applying grease manually has several defects: the grease is applied unevenly; applying tools such as brushes and cloths easily shed debris that affects the cleanliness of the products; skin contact can cause allergies; grease is wasted because the applied quantity is uncontrolled; and manual operation is inefficient. An automatic, quantitative, high-pressure applying device is introduced in this document to replace the traditional manual method; it can guarantee the quality of the grease applied to the cylinder surface and bring economic benefits to the company.

  15. Method to measure soil matrix infiltration in forest soil

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Lei, Tingwu; Qu, Liqin; Chen, Ping; Gao, Xiaofeng; Chen, Chao; Yuan, Lili; Zhang, Manliang; Su, Guangxu

    2017-09-01

    Infiltration of water into forest soil commonly involves infiltration through the matrix body and preferential passages. Determining the matrix infiltration process is important in partitioning water infiltrating into the soil through the soil body and macropores to evaluate the effects of soil and water conservation practices on hillslope hydrology and watershed sedimentation. A new method that employs a double-ring infiltrometer was applied in this study to determine the matrix infiltration process in forest soil. Field experiments were conducted in a forest field on the Loess Plateau at Tianshui Soil and Water Conservation Experimental Station. Nylon cloth was placed on the soil surface in the inner ring and between the inner and outer rings of the infiltrometers. A thin layer of fine sand was placed onto the nylon cloth to cover the macropores and ensure that water infiltrated the soil through the matrix only. Brilliant Blue tracers were applied to verify the exclusion of preferential flow in the measured soil body. The infiltration process was measured, computed, and recorded through procedures similar to those of conventional methods. Horizontal and vertical soil profiles were excavated to check the success of the experiment and ensure that preferential flow did not occur in the measured soil column and that infiltration was only through the soil matrix. The infiltration processes of the replicates of the five plots were roughly the same, thereby indicating the feasibility of the methodology for measuring soil matrix infiltration. The measured infiltration curves effectively captured the transient process of soil matrix infiltration. The Philip and Kostiakov models fitted the measured data well, and all the coefficients of determination were greater than 0.9. The excavated wetted soil bodies did not present evidence of preferential flow. Therefore, the proposed method can determine the infiltration process through the forest soil matrix. This method can also be applied to explore matrix infiltration in other land-use types.
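
    The model fitting reported above can be sketched with the Philip equation for cumulative infiltration, I(t) = S*sqrt(t) + A*t; the data points below are synthetic stand-ins for double-ring measurements (Python):

      import numpy as np
      from scipy.optimize import curve_fit

      def philip(t, S, A):
          """Philip model: sorptivity term S*sqrt(t) plus steady term A*t."""
          return S * np.sqrt(t) + A * t

      t = np.array([1, 2, 5, 10, 20, 40, 60], dtype=float)  # min
      I = np.array([0.8, 1.2, 2.0, 2.9, 4.2, 6.1, 7.6])     # cm, illustrative

      (S, A), _ = curve_fit(philip, t, I, p0=(1.0, 0.05))
      resid = I - philip(t, S, A)
      r2 = 1 - np.sum(resid**2) / np.sum((I - I.mean())**2)
      print(f"S={S:.3f} cm/min^0.5, A={A:.4f} cm/min, R^2={r2:.3f}")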

  16. Defect recognition in CFRP components using various NDT methods within a smart manufacturing process

    NASA Astrophysics Data System (ADS)

    Schumacher, David; Meyendorf, Norbert; Hakim, Issa; Ewert, Uwe

    2018-04-01

    The manufacturing process of carbon fiber reinforced polymer (CFRP) components is gaining a more and more significant role given the increasing amount of CFRPs used in industry today. The monitoring of the manufacturing process, and hence the reliability of the manufactured products, is one of the major challenges we need to face in the near future. Common defects which arise during the manufacturing process are, e.g., porosity and voids, which may lead to delaminations during operation and under load. Finding irregularities and classifying them as possible defects in an early stage of the manufacturing process is of high importance for the safety and reliability of the finished products, as well as of significant impact from an economic point of view. In this study we compare various NDT methods which were applied to similar CFRP laminate samples in order to detect and characterize regions of defective volume. Besides ultrasound, thermography, and eddy current, different X-ray methods like radiography, laminography, and computed tomography are used to investigate the samples. These methods are compared with the intention of evaluating their capability to reliably detect and characterize defective volume. Beyond the detection and evaluation of defects, we also investigate possibilities to combine various NDT methods within a smart manufacturing process in which the decision of which method shall be applied is inherent within the process. Is it possible to design an in-line or at-line testing process which can recognize defects reliably and reduce testing time and costs? This study aims to highlight opportunities for designing a smart NDT process synchronized with production, based on the concepts of smart production (Industry 4.0). A set of defective CFRP laminate samples and different NDT methods were used to demonstrate how effectively defects are recognized and how communication between interconnected NDT sensors and the manufacturing process could be organized.

  17. Integral-equation based methods for parameter estimation in output pulses of radiation detectors: Application in nuclear medicine and spectroscopy

    NASA Astrophysics Data System (ADS)

    Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar

    2018-04-01

    Model based analysis methods are relatively new approaches for processing the output data of radiation detectors in nuclear medicine imaging and spectroscopy. A class of such methods requires fast algorithms for fitting pulse models to experimental data. In order to apply integral-equation based methods for processing the preamplifier output pulses, this article proposes a fast and simple method for estimating the parameters of the well-known bi-exponential pulse model by solving an integral equation. The proposed method needs samples from only three points of the recorded pulse as well as its first and second order integrals. After optimizing the sampling points, the estimation results were calculated and compared with two traditional integration-based methods. Different noise levels (signal-to-noise ratios from 10 to 3000) were simulated for testing the functionality of the proposed method, then it was applied to a set of experimental pulses. Finally, the effect of quantization noise was assessed by studying different sampling rates. Promising results by the proposed method endorse it for future real-time applications.
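
    For concreteness, the bi-exponential pulse model can be written and fitted as below; this uses ordinary iterative least squares as a baseline, whereas the paper's estimator instead solves an integral equation from three samples plus first- and second-order integrals, which is not reproduced here. All numbers are synthetic (Python):

      import numpy as np
      from scipy.optimize import curve_fit

      def biexp(t, A, tau_r, tau_f):
          """Bi-exponential pulse: fast rise (tau_r), slow fall (tau_f)."""
          return A * (np.exp(-t / tau_f) - np.exp(-t / tau_r))

      t = np.linspace(0.0, 5e-6, 500)            # s
      true_params = (1.0, 50e-9, 1e-6)           # amplitude, rise, fall
      v = biexp(t, *true_params) + 0.01 * np.random.randn(t.size)

      popt, _ = curve_fit(biexp, t, v, p0=(0.8, 80e-9, 2e-6))
      print(popt)                                # roughly (1.0, 5e-8, 1e-6)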

  18. Compression-RSA: New approach of encryption and decryption method

    NASA Astrophysics Data System (ADS)

    Hung, Chang Ee; Mandangan, Arif

    2013-04-01

    The Rivest-Shamir-Adleman (RSA) cryptosystem is a well-known asymmetric cryptosystem that has been applied in a very wide range of areas. Much research, with different approaches, has been carried out to improve the security and performance of the RSA cryptosystem. The enhancement of the performance of the RSA cryptosystem is our main interest. In this paper, we propose a new method to increase the efficiency of RSA by shortening the plaintext before it undergoes the encryption process, without affecting the original content of the plaintext. The concept of the simple continued fraction, and a special relationship between it and the Euclidean algorithm, are applied in this newly proposed method. By reducing the number of plaintext-ciphertext blocks, the encryption and decryption of a secret message can be accelerated.
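
    The relationship the method builds on is that the partial quotients of the simple continued fraction of p/q are exactly the successive quotients produced by the Euclidean algorithm on (p, q), as the toy function below demonstrates (Python):

      def continued_fraction(p, q):
          """Return [a0, a1, ...] with p/q = a0 + 1/(a1 + 1/(...))."""
          coeffs = []
          while q:
              a, r = divmod(p, q)   # one Euclidean division step
              coeffs.append(a)
              p, q = q, r
          return coeffs

      print(continued_fraction(649, 200))   # [3, 4, 12, 4]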

  19. A decision support system using analytical hierarchy process (AHP) for the optimal environmental reclamation of an open-pit mine

    NASA Astrophysics Data System (ADS)

    Bascetin, A.

    2007-04-01

    The selection of an optimal reclamation method is one of the most important factors in open-pit design and production planning. It also affects economic considerations in open-pit design as a function of plan location and depth. Furthermore, the selection is a complex multi-person, multi-criteria decision problem. The group decision-making process can be improved by applying a systematic and logical approach to assess the priorities based on the inputs of several specialists from different functional areas within the mine company. The analytical hierarchy process (AHP) can be very useful in involving several decision makers with different conflicting objectives to arrive at a consensus decision. In this paper, the selection of an optimal reclamation method using an AHP-based model was evaluated for coal production in an open-pit coal mine located in the Seyitomer region in Turkey. The use of the proposed model indicates that it can be applied to improve group decision making in selecting a reclamation method that satisfies optimal specifications. It is also found that the decision process is systematic and that using the proposed model can reduce the time taken to select an optimal method.
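
    The numerical heart of AHP (priority weights from the principal eigenvector of a pairwise comparison matrix, plus a consistency check) can be sketched as below; the 4x4 judgments are invented, whereas in the study they would come from the mine specialists (Python):

      import numpy as np

      A = np.array([[1,   3,   5,   2  ],        # reciprocal pairwise judgments
                    [1/3, 1,   3,   1/2],
                    [1/5, 1/3, 1,   1/4],
                    [1/2, 2,   4,   1  ]], dtype=float)

      eigvals, eigvecs = np.linalg.eig(A)
      k = np.argmax(eigvals.real)
      w = np.abs(eigvecs[:, k].real)
      w /= w.sum()                               # priority vector

      n = A.shape[0]
      CI = (eigvals[k].real - n) / (n - 1)       # consistency index
      RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Saaty's random index
      CR = CI / RI                               # accept judgments if CR < 0.1
      print(w, CR)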

  20. Quality assurance of multiport image-guided minimally invasive surgery at the lateral skull base.

    PubMed

    Nau-Hermes, Maria; Schmitt, Robert; Becker, Meike; El-Hakimi, Wissam; Hansen, Stefan; Klenzner, Thomas; Schipper, Jörg

    2014-01-01

    For multiport image-guided minimally invasive surgery at the lateral skull base a quality management is necessary to avoid the damage of closely spaced critical neurovascular structures. So far there is no standardized method applicable independently from the surgery. Therefore, we adapt a quality management method, the quality gates (QG), which is well established in, for example, the automotive industry and apply it to multiport image-guided minimally invasive surgery. QG divide a process into different sections. Passing between sections can only be achieved if previously defined requirements are fulfilled which secures the process chain. An interdisciplinary team of otosurgeons, computer scientists, and engineers has worked together to define the quality gates and the corresponding criteria that need to be fulfilled before passing each quality gate. In order to evaluate the defined QG and their criteria, the new surgery method was applied with a first prototype at a human skull cadaver model. We show that the QG method can ensure a safe multiport minimally invasive surgical process at the lateral skull base. Therewith, we present an approach towards the standardization of quality assurance of surgical processes.

  1. Quality Assurance of Multiport Image-Guided Minimally Invasive Surgery at the Lateral Skull Base

    PubMed Central

    Nau-Hermes, Maria; Schmitt, Robert; Becker, Meike; El-Hakimi, Wissam; Hansen, Stefan; Klenzner, Thomas; Schipper, Jörg

    2014-01-01

    For multiport image-guided minimally invasive surgery at the lateral skull base a quality management is necessary to avoid the damage of closely spaced critical neurovascular structures. So far there is no standardized method applicable independently from the surgery. Therefore, we adapt a quality management method, the quality gates (QG), which is well established in, for example, the automotive industry and apply it to multiport image-guided minimally invasive surgery. QG divide a process into different sections. Passing between sections can only be achieved if previously defined requirements are fulfilled which secures the process chain. An interdisciplinary team of otosurgeons, computer scientists, and engineers has worked together to define the quality gates and the corresponding criteria that need to be fulfilled before passing each quality gate. In order to evaluate the defined QG and their criteria, the new surgery method was applied with a first prototype at a human skull cadaver model. We show that the QG method can ensure a safe multiport minimally invasive surgical process at the lateral skull base. Therewith, we present an approach towards the standardization of quality assurance of surgical processes. PMID:25105146

  2. Dynamic Exergy Method for Evaluating the Control and Operation of Oxy-Combustion Boiler Island Systems.

    PubMed

    Jin, Bo; Zhao, Haibo; Zheng, Chuguang; Liang, Zhiwu

    2017-01-03

    Exergy-based methods are widely applied to assess the performance of energy conversion systems; however, these methods mainly focus on a single steady state and have limited application for evaluating the impacts of control on system operation. To dynamically obtain the thermodynamic behavior and reveal the influences of control structures, layers, and loops on system energy performance, a dynamic exergy method is developed, improved, and applied to a complex oxy-combustion boiler island system for the first time. The three most common operating scenarios are studied, and the results show that the flow rate change process leads to less energy consumption than the oxygen purity and air in-leakage change processes. The variation of oxygen purity produces the largest impact on system operation, and the operating parameter sensitivity is not affected by the presence of process control. The control system saves energy during the flow rate and oxygen purity change processes, while it consumes energy during the air in-leakage change process. More attention should be paid to the oxygen purity change because it requires the largest control cost. In the control system, the supervisory control layer requires the greatest energy consumption and the largest control cost to maintain operating targets, while the steam control loops cause the main energy consumption.

  3. Unsupervised Approaches for Post-Processing in Computationally Efficient Waveform-Similarity-Based Earthquake Detection

    NASA Astrophysics Data System (ADS)

    Bergen, K.; Yoon, C. E.; OReilly, O. J.; Beroza, G. C.

    2015-12-01

    Recent improvements in computational efficiency for waveform correlation-based detection achieved by new methods such as Fingerprint and Similarity Thresholding (FAST) promise to allow large-scale blind search for similar waveforms in long-duration continuous seismic data. Waveform similarity search applied to datasets of months to years of continuous seismic data will identify significantly more events than traditional detection methods. With the anticipated increase in the number of detections and the associated increase in false positives, manual inspection of the detection results will become infeasible. This motivates the need for new approaches to process the output of similarity-based detection. We explore data mining techniques for improved detection post-processing. We approach this by considering similarity-detector output as a sparse similarity graph with candidate events as vertices and similarities as weighted edges. Image processing techniques are leveraged to define candidate events and combine results individually processed at multiple stations. Clustering and graph analysis methods are used to identify groups of similar waveforms and assign a confidence score to candidate detections. Anomaly detection and classification are applied to waveform data for additional false detection removal. A comparison of methods will be presented, and their performance will be demonstrated on a suspected induced and a non-induced earthquake sequence.
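
    One of the graph-based post-processing ideas above can be sketched by thresholding a similarity matrix and grouping candidate detections with connected components; the similarity scores here are random placeholders for a detector's output (Python):

      import numpy as np
      from scipy.sparse import csr_matrix
      from scipy.sparse.csgraph import connected_components

      rng = np.random.default_rng(2)
      n = 12                                    # candidate detections
      S = rng.random((n, n))
      S = np.triu(S, 1) + np.triu(S, 1).T       # symmetric toy similarities

      adj = csr_matrix(S >= 0.9)                # keep only strong edges
      n_groups, labels = connected_components(adj, directed=False)
      print(n_groups, labels)                   # groups of similar waveforms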

  4. Six Sigma methods applied to cryogenic coolers assembly line

    NASA Astrophysics Data System (ADS)

    Ventre, Jean-Marc; Germain-Lacour, Michel; Martin, Jean-Yves; Cauquil, Jean-Marc; Benschop, Tonny; Griot, René

    2009-05-01

    Six Sigma methods have been applied to the manufacturing process of a rotary Stirling cooler: the RM2. The name of the project is NoVa, as the main goal of the Six Sigma approach is to reduce variability (No Variability). The project has been based on the DMAIC guideline, following five stages: Define, Measure, Analyse, Improve, Control. The objective was set on the rate of coolers passing the performance test at the first attempt, with a goal value of 95%. A team was gathered involving people and skills acting on the RM2 manufacturing line. Measurement System Analysis (MSA) was applied to the test bench, and results after an R&R gage study show that measurement is one of the root causes of variability in the RM2 process. Two more root causes were identified by the team after process mapping analysis: the regenerator filling factor and the cleaning procedure. The causes of measurement variability were identified and eradicated, as shown by new results from the R&R gage study. Experimental results show that the regenerator filling factor impacts process variability and affects yield. An improved process was established after a new calibration process for the test bench, a new filling procedure for the regenerator, and an additional cleaning stage were implemented. The objective of 95% of coolers passing the performance test at the first attempt has been reached and maintained for a significant period. The RM2 manufacturing process is now managed according to Statistical Process Control based on control charts. The improvement in process capability has enabled the introduction of a sample testing procedure before delivery.

  5. Non-invasive imaging methods applied to neo- and paleontological cephalopod research

    NASA Astrophysics Data System (ADS)

    Hoffmann, R.; Schultz, J. A.; Schellhorn, R.; Rybacki, E.; Keupp, H.; Gerden, S. R.; Lemanis, R.; Zachow, S.

    2013-11-01

    Several non-invasive methods are common practice in the natural sciences today. Here we present how they can be applied to and contribute to current topics in cephalopod (paleo-)biology. Different methods are compared in terms of the time necessary to acquire the data, the amount of data, accuracy/resolution, the minimum and maximum size of objects that can be studied, the degree of post-processing needed, and availability. The main application of the methods is seen in the morphometry and volumetry of cephalopod shells, in order to improve our understanding of the diversity and disparity, functional morphology, and biology of extinct and extant cephalopods.

  6. Applying TM-polarization geoelectric exploration for study of low-contrast three-dimensional targets

    NASA Astrophysics Data System (ADS)

    Zlobinskiy, Arkadiy; Mogilatov, Vladimir; Shishmarev, Roman

    2018-03-01

    Using new field and theoretical data, it has been shown that applying the electromagnetic field of transverse magnetic (TM) polarization offers new opportunities for electrical prospecting by the transient electromagnetic method. Only by applying a pure TM-polarization field can low-contrast three-dimensional objects (the sought metalliferous deposits) be revealed in a host horizontally layered medium. This position has good theoretical grounds. A description is given of the transient electromagnetic method that uses only the TM-polarization field. The pure TM mode is excited by a special source, termed a circular electric dipole (CED). The results of three-dimensional simulation (by the finite element method) are discussed for three real geological situations, for which the application of electromagnetic fields of transverse electric (TE) and transverse magnetic (TM) polarizations is compared. It has been shown that applying the TE mode gives no positive results, while applying the TM-polarization field permits the problem to be tackled. Finally, the results of field work are presented, which showed the inefficiency of the classical TEM method, whereas applying the TM-polarization field makes it easy to identify the target.

  7. School Success as a Process of Structuration

    ERIC Educational Resources Information Center

    Tubin, Dorit

    2015-01-01

    Purpose: The purpose of the present study is to explore the process, routines, and structuration at successful schools leading their students to high achievements. Method: The approach of building a theory from case study research together with process perspective and an organizational routines model were applied to analyzing seven successful…

  8. Mining knowledge in noisy audio data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Czyzewski, A.

    1996-12-31

    This paper demonstrates a KDD method applied to audio data analysis; in particular, it presents the possibilities that result from replacing traditional methods of analysis and acoustic signal processing with KDD algorithms when restoring audio recordings affected by strong noise.

  9. Asynchronous multilevel adaptive methods for solving partial differential equations on multiprocessors - Performance results

    NASA Technical Reports Server (NTRS)

    Mccormick, S.; Quinlan, D.

    1989-01-01

    The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids (global and local) to provide adaptive resolution and fast solution of PDEs. Like all such methods, it offers parallelism by using possibly many disconnected patches per level, but is hindered by the need to handle these levels sequentially. The finest levels must therefore wait for processing to be essentially completed on all the coarser ones. A recently developed asynchronous version of FAC, called AFAC, completely eliminates this bottleneck to parallelism. This paper describes timing results for AFAC, coupled with a simple load balancing scheme, applied to the solution of elliptic PDEs on an Intel iPSC hypercube. These tests include performance of certain processes necessary in adaptive methods, including moving grids and changing refinement. A companion paper reports on numerical and analytical results for estimating convergence factors of AFAC applied to very large scale examples.

  10. A method for validation of finite element forming simulation on basis of a pointwise comparison of distance and curvature

    NASA Astrophysics Data System (ADS)

    Dörr, Dominik; Joppich, Tobias; Schirmaier, Fabian; Mosthaf, Tobias; Kärger, Luise; Henning, Frank

    2016-10-01

    Thermoforming of continuously fiber-reinforced thermoplastics (CFRTP) is ideally suited to thin-walled and complex-shaped products. By means of forming simulation, an initial validation of the producibility of a specific geometry, an optimization of the forming process, and the prediction of fiber reorientation due to forming are possible. Nevertheless, the applied methods need to be validated. Therefore, a method is presented which enables the calculation of error measures for the mismatch between simulation results and experimental tests, based on measurements with a conventional coordinate measuring device. As a quantitative measure describing the curvature is provided, the presented method is also suitable for numerical or experimental sensitivity studies on wrinkling behavior. The applied methods for forming simulation, implemented in Abaqus explicit, are presented and applied to a generic geometry. The same geometry is tested experimentally, and simulation and test results are compared by the proposed validation method.

  11. Investigations in adaptive processing of multispectral data

    NASA Technical Reports Server (NTRS)

    Kriegler, F. J.; Horwitz, H. M.

    1973-01-01

    Adaptive data processing procedures are applied to the problem of classifying objects in a scene scanned by a multispectral sensor. These procedures show a performance improvement over standard nonadaptive techniques. Some sources of error in classification are identified, and those correctable by adaptive processing are discussed. Experiments in adaptation of signature means by decision-directed methods are described. Some of these methods assume correlation between the trajectories of different signature means; for others this assumption is not made.

  12. A Q-Ising model application for linear-time image segmentation

    NASA Astrophysics Data System (ADS)

    Bentrem, Frank W.

    2010-10-01

    A computational method is presented which efficiently segments digital grayscale images by directly applying the Q-state Ising (or Potts) model. Since the Potts model was first proposed in 1952, physicists have studied lattice models to gain deep insights into magnetism and other disordered systems. For some time, researchers have realized that digital images may be modeled in much the same way as these physical systems (i.e., as a square lattice of numerical values). A major drawback in using Potts model methods for image segmentation is that conventional methods run in exponential time. Advances have been made via certain approximations to reduce the segmentation process to power-law time. However, in many applications (such as for sonar imagery), real-time processing requires much greater efficiency. This article contains a description of an energy minimization technique that applies four Potts (Q-Ising) models directly to the image and processes in linear time. The result is analogous to partitioning the system into regions of four classes of magnetism. This direct Potts segmentation technique is demonstrated on photographic, medical, and acoustic images.
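
    To make the energy being minimised concrete, the sketch below segments a noisy image with a small Potts model via iterated conditional modes, a simple (and slower) relative of the linear-time technique described; the image, class means, and smoothness weight are invented (Python):

      import numpy as np

      rng = np.random.default_rng(3)
      Q = 4                                     # label classes
      truth = rng.integers(0, Q, (64, 64))
      img = rng.normal(truth.astype(float), 0.4)  # noisy toy grayscale image
      means = np.arange(Q, dtype=float)         # assumed class gray levels
      beta = 1.5                                # Potts smoothness weight

      labels = np.abs(img[..., None] - means).argmin(-1)   # initial labelling
      for _ in range(10):                       # ICM sweeps
          p = np.pad(labels, 1, mode="edge")
          neigh = np.stack([p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]])
          data = (img[..., None] - means) ** 2  # data term for each label
          smooth = (neigh[..., None] != np.arange(Q)).sum(axis=0)
          labels = (data + beta * smooth).argmin(-1)
      print((labels == truth).mean())           # fraction of pixels recovered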

  13. Targeted and untargeted-metabolite profiling to track the compositional integrity of ginger during processing using digitally-enhanced HPTLC pattern recognition analysis.

    PubMed

    Ibrahim, Reham S; Fathy, Hoda

    2018-03-30

    Tracking the impact of commonly applied post-harvesting and industrial processing practices on the compositional integrity of ginger rhizome was implemented in this work. Untargeted metabolite profiling was performed using a digitally-enhanced HPTLC method, in which the chromatographic fingerprints were extracted using ImageJ software and then analysed with multivariate Principal Component Analysis (PCA) for pattern recognition. A targeted approach was applied using a new, validated, simple, and fast HPTLC image analysis method for the simultaneous quantification of the officially recognized markers 6-, 8-, 10-gingerol and 6-shogaol, in conjunction with chemometric Hierarchical Clustering Analysis (HCA). The results of both targeted and untargeted metabolite profiling revealed that the peeling, drying, and storage employed during processing have a great influence on the ginger chemo-profile, and the different forms of processed ginger should not be used interchangeably. Moreover, it is deemed necessary to consider the holistic metabolic profile for a comprehensive evaluation of ginger during processing. Copyright © 2018. Published by Elsevier B.V.

  14. Investigation of hydrogenation of toluene to methylcyclohexane in a trickle bed reactor by low-field nuclear magnetic resonance spectroscopy.

    PubMed

    Guthausen, Gisela; von Garnier, Agnes; Reimert, Rainer

    2009-10-01

    Low-field nuclear magnetic resonance (NMR) spectroscopy is applied to study the hydrogenation of toluene in a lab-scale reactor. A conventional benchtop NMR system was modified to achieve chemical shift resolution. After an off-line validity check of the approach, the reaction product is analyzed on-line during the process, applying chemometric data processing. The conversion of toluene to methylcyclohexane is compared with off-line gas chromatographic analysis. Both classic analytical and chemometric data processing were applied. As the results, which are obtained within a few tens of seconds, are equivalent within the experimental accuracy of both methods, low-field NMR spectroscopy was shown to provide an analytical tool for reaction characterization and immediate feedback.

  15. Evaluation of the clinical process in a critical care information system using the Lean method: a case study

    PubMed Central

    2012-01-01

    Background There are numerous applications for Health Information Systems (HIS) that support specific tasks in the clinical workflow. The Lean method has been used increasingly to optimize clinical workflows, by removing waste and shortening the delivery cycle time. There are a limited number of studies on Lean applications related to HIS. Therefore, we applied the Lean method to evaluate the clinical processes related to HIS, in order to evaluate its efficiency in removing waste and optimizing the process flow. This paper presents the evaluation findings of these clinical processes, with regards to a critical care information system (CCIS), known as IntelliVue Clinical Information Portfolio (ICIP), and recommends solutions to the problems that were identified during the study. Methods We conducted a case study under actual clinical settings, to investigate how the Lean method can be used to improve the clinical process. We used observations, interviews, and document analysis, to achieve our stated goal. We also applied two tools from the Lean methodology, namely the Value Stream Mapping and the A3 problem-solving tools. We used eVSM software to plot the Value Stream Map and A3 reports. Results We identified a number of problems related to inefficiency and waste in the clinical process, and proposed an improved process model. Conclusions The case study findings show that the Value Stream Mapping and the A3 reports can be used as tools to identify waste and integrate the process steps more efficiently. We also proposed a standardized and improved clinical process model and suggested an integrated information system that combines database and software applications to reduce waste and data redundancy. PMID:23259846

  16. Multilayer ultra thick resist development for MEMS

    NASA Astrophysics Data System (ADS)

    Washio, Yasushi; Senzaki, Takahiro; Masuda, Yasuo; Saito, Koji; Obiya, Hiroyuki

    2005-05-01

    MEMS (Micro-Electro-Mechanical Systems) are produced through a process technology called micro-machining. There are two distinct methods of manufacturing a MEMS product. One method is to form a permanent film through photolithography; the other is to form a non-permanent resist film by photolithography, followed by an etching or plating process. This three-dimensional ultra-fine processing technology is based on photolithography and is assembled from processes such as anodic bonding and post-lithography steps such as etching and plating. Currently, ORDYL PR-100 (dry film type) is used for the permanent resist process. TOK has also developed TMMR S2000 (liquid type) and TMMF S2000 (dry film type), along with a new process utilizing these resists. The electro-forming method based on photolithography was developed as one way of enabling high-resolution, high-aspect-ratio formation. In recent years, multilayer structures that were conventionally difficult to manufacture have become possible through our material and equipment (M&E) development project. As a material for electro-forming, a chemically amplified resist was confirmed to be optimal on the basis of its reaction mechanism, as it is easily removed by the cleaning solution. Moreover, multiple plating formations were enabled with this resist through a new process. As for equipment, TOK developed an applicator (capable of applying films of 500 μm or more) and a developer, which achieve high throughput and quality. Detailed plating formations with differing paths, as well as air wiring, are realizable through M&E. From the above results, in contrast to metallic mold plating, the resist-based electro-forming method enables high-resolution, high-aspect-ratio patterns to be formed at low cost. It is thought that applying this process opens up broad possibilities.

  17. A rule-based automatic sleep staging method.

    PubMed

    Liang, Sheng-Fu; Kuo, Chin-En; Hu, Yu-Han; Cheng, Yu-Shian

    2012-03-30

    In this paper, a rule-based automatic sleep staging method was proposed. Twelve features, including temporal and spectral analyses of the EEG, EOG, and EMG signals, were utilized. Normalization was applied to each feature to eliminate individual differences. A hierarchical decision tree with fourteen rules was constructed for sleep stage classification. Finally, a smoothing process considering the temporal contextual information was applied to ensure continuity. The overall agreement and kappa coefficient of the proposed method, applied to the all-night polysomnography (PSG) of seventeen healthy subjects and compared with manual scoring by the R&K rules, reach 86.68% and 0.79, respectively. This method could be integrated with a portable PSG system for at-home sleep evaluation in the near future. Copyright © 2012 Elsevier B.V. All rights reserved.
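
    The per-feature normalization and rule-based classification described can be caricatured in a few lines; the thresholds and three-stage rule below are invented for illustration and are far simpler than the paper's fourteen-rule tree.

    ```python
    import numpy as np

    def zscore(feature):
        """Per-subject z-score normalization to suppress individual differences."""
        return (feature - feature.mean()) / feature.std()

    def simple_stage_rule(delta_power, emg_power, eog_activity):
        """Toy decision rules in the spirit of a hierarchical tree:
        high delta -> deep sleep; low EMG with high EOG -> REM; else light."""
        if delta_power > 1.0:
            return "SWS"
        if emg_power < -0.5 and eog_activity > 0.5:
            return "REM"
        return "Light"

    rng = np.random.default_rng(2)
    delta, emg, eog = (zscore(rng.random(30)) for _ in range(3))
    stages = [simple_stage_rule(d, m, e) for d, m, e in zip(delta, emg, eog)]
    ```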

  18. Reaction of N,N'-dimethylformamide and divalent viologen molecule to generate an organic dopant for molybdenum disulfide

    NASA Astrophysics Data System (ADS)

    Fukui, A.; Miura, K.; Ichimiya, H.; Tsurusaki, A.; Kariya, K.; Yoshimura, T.; Ashida, A.; Fujimura, N.; Kiriya, D.

    2018-05-01

    Tuning the carrier concentration is essential for applying semiconducting materials in optoelectronic devices. Molybdenum disulfide (MoS2) is a semiconducting material composed of atomically thin (~0.7 nm thick) layers. To dope thin MoS2, a surface charge transfer method was successfully applied instead of conventional atom/ion injection processes. In this study, we report a simple preparation method for a molecular dopant applicable to the doping process. The method follows a previous report for producing a molecular dopant, benzyl viologen (BV), which shows electron doping of MoS2. To prepare dopant BV molecules, a reduction of commercially available divalent BV by sodium borohydride (NaBH4) is required; however, the reaction consumes a large amount of NaBH4, because NaBH4 reacts vigorously with the solvent water itself. We found a reaction process for BV in an organic solvent, N,N'-dimethylformamide (DMF), by adding a small amount of water in which the divalent BV was dissolved. The reaction is mild (proceeding at room temperature) and is autonomous once the DMF comes into contact with the divalent BV aqueous solution. The reaction can be monitored with a UV-Vis spectrometer, and kinetic analysis indicates two reaction steps between the divalent, monovalent, and neutral viologen species. The product was soluble in toluene and did not dissolve in water, indicating that it is similar to the reported dopant BV. The synthesized molecule was found to act as a dopant for MoS2 when applied in a metal-oxide-semiconductor field-effect-transistor (MOSFET) structure. The process is a general method, applicable to other viologen-related dopants, for tuning the electronic structure of 2D materials and thus facilitating atomically thin devices.

  19. Symetrica Measurements at PNNL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kouzes, Richard T.; Mace, Emily K.; Redding, Rebecca L.

    2009-01-26

    Symetrica is a small company based in Southampton, England, that has developed an algorithm for processing gamma ray spectra obtained from a variety of scintillation detectors. Their analysis method applied to NaI(Tl), BGO, and LaBr spectra results in deconvoluted spectra with the “resolution” improved by about a factor of three to four. This method has also been applied by Symetrica to plastic scintillator with the result that full energy peaks are produced. If this method is valid and operationally viable, it could lead to a significantly improved plastic scintillator based radiation portal monitor system.

  20. Exploration on practice teaching reform of Photoelectric Image Processing course under applied transformation

    NASA Astrophysics Data System (ADS)

    Cao, Binfang; Li, Xiaoqin; Liu, Changqing; Li, Jianqi

    2017-08-01

    With the ongoing applied transformation of local colleges, teachers urgently need to make corresponding changes in the teaching content and methods of different courses. This article discusses the practice teaching reform of the Photoelectric Image Processing course in the Optoelectronic Information Science and Engineering major. The Digital Signal Processing (DSP) platform is introduced into the experimental teaching; it mobilizes and inspires students and enhances their learning motivation and innovation through specific examples. Through this practice-oriented teaching process, the course has become one of the most popular among students, further driving their enthusiasm and confidence to participate in all kinds of electronics competitions.

  1. DISCO-SCA and Properly Applied GSVD as Swinging Methods to Find Common and Distinctive Processes

    PubMed Central

    Van Deun, Katrijn; Van Mechelen, Iven; Thorrez, Lieven; Schouteden, Martijn; De Moor, Bart; van der Werf, Mariët J.; De Lathauwer, Lieven; Smilde, Age K.; Kiers, Henk A. L.

    2012-01-01

    Background In systems biology it is common to obtain for the same set of biological entities information from multiple sources. Examples include expression data for the same set of orthologous genes screened in different organisms and data on the same set of culture samples obtained with different high-throughput techniques. A major challenge is to find the important biological processes underlying the data and to disentangle therein processes common to all data sources and processes distinctive for a specific source. Recently, two promising simultaneous data integration methods have been proposed to attain this goal, namely generalized singular value decomposition (GSVD) and simultaneous component analysis with rotation to common and distinctive components (DISCO-SCA). Results Both theoretical analyses and applications to biologically relevant data show that: (1) straightforward applications of GSVD yield unsatisfactory results, (2) DISCO-SCA performs well, (3) provided proper pre-processing and algorithmic adaptations, GSVD reaches a performance level similar to that of DISCO-SCA, and (4) DISCO-SCA is directly generalizable to more than two data sources. The biological relevance of DISCO-SCA is illustrated with two applications. First, in a setting of comparative genomics, it is shown that DISCO-SCA recovers a common theme of cell cycle progression and a yeast-specific response to pheromones. The biological annotation was obtained by applying Gene Set Enrichment Analysis in an appropriate way. Second, in an application of DISCO-SCA to metabolomics data for Escherichia coli obtained with two different chemical analysis platforms, it is illustrated that the metabolites involved in some of the biological processes underlying the data are detected by one of the two platforms only; therefore, platforms for microbial metabolomics should be tailored to the biological question. Conclusions Both DISCO-SCA and properly applied GSVD are promising integrative methods for finding common and distinctive processes in multisource data. Open source code for both methods is provided. PMID:22693578

  2. Path lumping: An efficient algorithm to identify metastable path channels for conformational dynamics of multi-body systems

    NASA Astrophysics Data System (ADS)

    Meng, Luming; Sheong, Fu Kit; Zeng, Xiangze; Zhu, Lizhe; Huang, Xuhui

    2017-07-01

    Constructing Markov state models from large-scale molecular dynamics simulation trajectories is a promising approach to dissect the kinetic mechanisms of complex chemical and biological processes. Combined with transition path theory, Markov state models can be applied to identify all pathways connecting any conformational states of interest. However, the identified pathways can be too complex to comprehend, especially for multi-body processes where numerous parallel pathways with comparable flux probability often coexist. Here, we have developed a path lumping method to group these parallel pathways into metastable path channels for analysis. We define the similarity between two pathways as the intercrossing flux between them and then apply the spectral clustering algorithm to lump these pathways into groups. We demonstrate the power of our method by applying it to two systems: a 2D-potential consisting of four metastable energy channels and the hydrophobic collapse process of two hydrophobic molecules. In both cases, our algorithm successfully reveals the metastable path channels. We expect this path lumping algorithm to be a promising tool for revealing unprecedented insights into the kinetic mechanisms of complex multi-body processes.
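
    The lumping step, spectral clustering of pathways by their intercrossing-flux similarity, can be sketched generically; the affinity matrix below is a random stand-in for the flux-based similarities, and the cluster count is arbitrary.

    ```python
    import numpy as np
    from sklearn.cluster import SpectralClustering

    # Hypothetical symmetric affinity matrix: entry (i, j) plays the role of
    # the intercrossing flux between pathways i and j (larger = more similar)
    rng = np.random.default_rng(3)
    A = rng.random((20, 20))
    A = (A + A.T) / 2
    np.fill_diagonal(A, 1.0)

    labels = SpectralClustering(
        n_clusters=4, affinity="precomputed", random_state=0
    ).fit_predict(A)
    print(labels)  # pathway -> metastable path channel assignment
    ```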

  3. Visual enhancement of unmixed multispectral imagery using adaptive smoothing

    USGS Publications Warehouse

    Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.

    2004-01-01

    Adaptive smoothing (AS) has been previously proposed as a method to smooth uniform regions of an image, retain contrast edges, and enhance edge boundaries. The method is an implementation of the anisotropic diffusion process which results in a gray scale image. This paper discusses modifications to the AS method for application to multi-band data which results in a color segmented image. The process was used to visually enhance the three most distinct abundance fraction images produced by the Lagrange constraint neural network learning-based unmixing of Landsat 7 Enhanced Thematic Mapper Plus multispectral sensor data. A mutual information-based method was applied to select the three most distinct fraction images for subsequent visualization as a red, green, and blue composite. A reported image restoration technique (partial restoration) was applied to the multispectral data to reduce unmixing error, although evaluation of the performance of this technique was beyond the scope of this paper. The modified smoothing process resulted in a color segmented image with homogeneous regions separated by sharpened, coregistered multiband edges. There was improved class separation with the segmented image, which has importance to subsequent operations involving data classification.
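
    As a rough illustration of the anisotropic diffusion behavior described (smoothing uniform regions while retaining contrast edges), here is a minimal Perona-Malik sketch; the parameters and periodic boundary handling are arbitrary choices, not the paper's.

    ```python
    import numpy as np

    def anisotropic_diffusion(img, n_iter=20, kappa=0.1, lam=0.2):
        """Perona-Malik diffusion with an edge-stopping conductance
        g = exp(-(|grad I| / kappa)^2), so strong edges diffuse little."""
        u = img.astype(float).copy()
        for _ in range(n_iter):
            # Differences to the four neighbors (periodic borders via roll)
            dn = np.roll(u, 1, axis=0) - u
            ds = np.roll(u, -1, axis=0) - u
            de = np.roll(u, 1, axis=1) - u
            dw = np.roll(u, -1, axis=1) - u
            u += lam * sum(np.exp(-(d / kappa) ** 2) * d
                           for d in (dn, ds, de, dw))
        return u

    smoothed = anisotropic_diffusion(np.random.rand(64, 64))
    ```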

  4. GIS Based Distributed Runoff Predictions in Variable Source Area Watersheds Employing the SCS-Curve Number

    NASA Astrophysics Data System (ADS)

    Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.

    2003-04-01

    Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be ubiquitously used in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed within an integrated GIS modeling environment a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and spatial extent of saturated areas and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated in an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provided a physically-based method that gives realistic results for watersheds with VSA hydrology.
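
    The underlying curve number relation is standard: S = 1000/CN − 10, Ia = 0.2S, and Q = (P − Ia)²/(P − Ia + S) for P > Ia. A minimal per-cell sketch might look like the following; how effective CN values are distributed over cells in the CN-VSA scheme is only hinted at in the comments.

    ```python
    def scs_cn_runoff(P, CN):
        """Classic SCS curve number runoff (inches): S = 1000/CN - 10,
        initial abstraction Ia = 0.2*S, Q = (P - Ia)^2 / (P - Ia + S)."""
        S = 1000.0 / CN - 10.0
        Ia = 0.2 * S
        return 0.0 if P <= Ia else (P - Ia) ** 2 / (P - Ia + S)

    # In a distributed CN-VSA scheme, each cell would first receive an
    # effective CN (e.g., ranked by a topographic wetness index) before
    # applying the equation cell by cell
    print(scs_cn_runoff(P=3.0, CN=75))  # 3-inch storm, moderate CN
    ```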

  5. Hierarchical Bayesian modeling of ionospheric TEC disturbances as non-stationary processes

    NASA Astrophysics Data System (ADS)

    Seid, Abdu Mohammed; Berhane, Tesfahun; Roininen, Lassi; Nigussie, Melessew

    2018-03-01

    We model the regular and irregular variation of ionospheric total electron content as stationary and non-stationary processes, respectively. We apply the method developed to a SCINDA GPS data set observed at Bahir Dar, Ethiopia (11.6°N, 37.4°E). We use hierarchical Bayesian inversion with Gaussian Markov random process priors, and we model the prior parameters in the hyperprior. We use Matérn priors via stochastic partial differential equations, and scaled Inv-χ² hyperpriors for the hyperparameters. For drawing posterior estimates, we use Markov chain Monte Carlo methods: Gibbs sampling for parameter estimation and Metropolis-within-Gibbs for hyperparameter estimation. This allows us to quantify model parameter estimation uncertainties as well. We demonstrate the applicability of the proposed method using a synthetic test case. Finally, we apply the method to the real GPS data set, which we decompose into regular and irregular variation components. The result shows that the approach can be used as an accurate ionospheric disturbance characterization technique that quantifies the total electron content variability with corresponding error uncertainties.

  6. A Rotor Tip Vortex Tracing Algorithm for Image Post-Processing

    NASA Technical Reports Server (NTRS)

    Overmeyer, Austin D.

    2015-01-01

    A neurite tracing algorithm, originally developed for medical image processing, was used to trace the location of the rotor tip vortex in density gradient flow visualization images. The tracing algorithm was applied to several representative test images to form case studies. The accuracy of the tracing algorithm was compared to two current methods including a manual point and click method and a cross-correlation template method. It is shown that the neurite tracing algorithm can reduce the post-processing time to trace the vortex by a factor of 10 to 15 without compromising the accuracy of the tip vortex location compared to other methods presented in literature.

  7. Towards Tunable Consensus Clustering for Studying Functional Brain Connectivity During Affective Processing.

    PubMed

    Liu, Chao; Abu-Jamous, Basel; Brattico, Elvira; Nandi, Asoke K

    2017-03-01

    In the past decades, neuroimaging of humans has gained a position of status within neuroscience, and data-driven approaches and functional connectivity analyses of functional magnetic resonance imaging (fMRI) data are increasingly favored to depict the complex architecture of human brains. However, the reliability of these findings is jeopardized by too many analysis methods and sometimes too few samples used, which leads to discord among researchers. We propose a tunable consensus clustering paradigm that aims at overcoming the clustering methods selection problem as well as reliability issues in neuroimaging by means of first applying several analysis methods (three in this study) on multiple datasets and then integrating the clustering results. To validate the method, we applied it to a complex fMRI experiment involving affective processing of hundreds of music clips. We found that brain structures related to visual, reward, and auditory processing have intrinsic spatial patterns of coherent neuroactivity during affective processing. The comparisons between the results obtained from our method and those from each individual clustering algorithm demonstrate that our paradigm has notable advantages over traditional single clustering algorithms in being able to evidence robust connectivity patterns even with complex neuroimaging data involving a variety of stimuli and affective evaluations of them. The consensus clustering method is implemented in the R package "UNCLES" available on http://cran.r-project.org/web/packages/UNCLES/index.html.
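
    A common way to realize consensus clustering, though not necessarily the exact paradigm of this paper, is a co-association matrix: run several base clusterers, record how often each pair of items lands in the same cluster, and cluster that matrix. A minimal sketch with hypothetical data:

    ```python
    import numpy as np
    from sklearn.cluster import (KMeans, SpectralClustering,
                                 AgglomerativeClustering)

    # Hypothetical feature matrix (rows = brain regions or voxels)
    X = np.random.rand(60, 10)
    k = 3

    # Several base clusterers; C[i, j] = fraction of methods that place
    # items i and j in the same cluster (the co-association matrix)
    partitions = [
        KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X),
        SpectralClustering(n_clusters=k, random_state=0).fit_predict(X),
        AgglomerativeClustering(n_clusters=k).fit_predict(X),
    ]
    C = np.mean([np.equal.outer(p, p) for p in partitions], axis=0)

    # Final consensus partition from the co-association "affinity"
    consensus = SpectralClustering(
        n_clusters=k, affinity="precomputed", random_state=0
    ).fit_predict(C)
    ```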

  8. Bidirectional light-scattering image processing method for high-concentration jet sprays

    NASA Astrophysics Data System (ADS)

    Shimizu, I.; Emori, Y.; Yang, W.-J.; Shimoda, M.; Suzuki, T.

    1985-01-01

    In order to study the distributions of droplet size and volume density in high-concentration jet sprays, a new technique is developed, which combines the forward and backward light scattering method and an image processing method. A pulsed ruby laser is used as the light source. The Mie scattering theory is applied to the results obtained from image processing on the scattering photographs. The time history is obtained for the droplet size and volume density distributions, and the method is demonstrated by diesel fuel sprays under various injecting conditions. The validity of the technique is verified by a good agreement in the injected fuel volume distributions obtained by the present method and by injection rate measurements.

  9. A New Automated Design Method Based on Machine Learning for CMOS Analog Circuits

    NASA Astrophysics Data System (ADS)

    Moradi, Behzad; Mirzaei, Abdolreza

    2016-11-01

    A new simulation-based automated CMOS analog circuit design method, which applies a multi-objective non-Darwinian evolutionary algorithm based on the Learnable Evolution Model (LEM), is proposed in this article. The multi-objective property of this automated design of CMOS analog circuits is governed by a modified Strength Pareto Evolutionary Algorithm (SPEA) incorporated in the LEM algorithm presented here. LEM includes a machine learning method, such as decision trees, that distinguishes between high- and low-fitness areas of the design space. The learning process can detect the right directions for the evolution and lead to large steps in the evolution of the individuals. The learning phase shortens the evolution process and markedly reduces the number of individual evaluations. The expert designer's knowledge of the circuit is applied in the design process in order to reduce the design space as well as the design time. Circuit evaluation is performed by the HSPICE simulator. In order to improve the design accuracy, the bsim3v3 CMOS transistor model is adopted in the proposed design method. The proposed method is tested on three different operational amplifier circuits, and its performance is verified by comparing it with the evolutionary strategy algorithm and other similar methods.
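
    The LEM idea, learning a classifier that separates high- from low-fitness designs and sampling only in the predicted high-fitness region, can be caricatured as follows; the fitness function stands in for HSPICE evaluation, and everything else is an assumption, not the authors' algorithm.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(4)

    def fitness(pop):
        """Hypothetical stand-in for simulator-evaluated circuit performance."""
        return -np.sum((pop - 0.7) ** 2, axis=1)

    pop = rng.random((200, 5))          # 200 candidate designs, 5 device sizes
    for generation in range(10):
        f = fitness(pop)
        hi = f >= np.quantile(f, 0.7)   # split into high/low-fitness groups
        tree = DecisionTreeClassifier(max_depth=3).fit(pop, hi)
        # Sample many random candidates and keep those the tree predicts lie
        # in the high-fitness region: the "learning" step that replaces blind
        # Darwinian variation with hypothesis-driven jumps
        cand = rng.random((2000, 5))
        good = cand[tree.predict(cand)]
        pop = good[:200] if len(good) >= 200 else np.vstack([good, pop])[:200]
    print(fitness(pop).max())
    ```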

  10. Structural redundancy of data from wastewater treatment systems. Determination of individual balance equations.

    PubMed

    Spindler, A

    2014-06-15

    Although data reconciliation is intensively applied in process engineering, almost none of its powerful methods are employed for the validation of operational data from wastewater treatment plants. This is partly due to some prerequisites that are difficult to meet, including steady state, known variances of process variables and the absence of gross errors. However, an algorithm can be derived from the classical approaches to data reconciliation that makes it possible to find a comprehensive set of equations describing redundancy in the data once measured and unmeasured variables (flows and concentrations) are defined. This is a precondition for methods of data validation based on individual mass balances, such as CUSUM charts. The procedure can also be applied to verify the necessity of existing or additional measurements with respect to improving the data's redundancy. Results are given for a large wastewater treatment plant. The introduction aims at establishing a link between methods known from data reconciliation in process engineering and their application in wastewater treatment. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Business Models for Training and Performance Improvement Departments

    ERIC Educational Resources Information Center

    Carliner, Saul

    2004-01-01

    Although typically applied to entire enterprises, the concept of business models applies to training and performance improvement groups. Business models are "the method by which firm[s] build and use [their] resources to offer.. value." Business models affect the types of projects, services offered, skills required, business processes, and type of…

  12. Applying Comprehensive Environmental Assessment to Research Planning for Multiwalled Carbon Nanotubes: Refinements to Inform Future Stakeholder Engagement

    EPA Science Inventory

    We previously described our collective judgment methods to engage expert stakeholders in the Comprehensive Environmental Assessment (CEA) workshop process applied to nano-TiO2 and nano-Ag research planning. We identified several lessons learned in engaging stakeholders to identif...

  13. Automated anatomical labeling method for abdominal arteries extracted from 3D abdominal CT images

    NASA Astrophysics Data System (ADS)

    Oda, Masahiro; Hoang, Bui Huy; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Mori, Kensaku

    2012-02-01

    This paper presents an automated anatomical labeling method for abdominal arteries. In abdominal surgery, understanding the blood vessel structure associated with a target organ is very important. Because branching patterns of blood vessels differ among individuals, a system is required that can assist in understanding the blood vessel structure and the anatomical names of a patient's blood vessels. Previous anatomical labeling methods for abdominal arteries deal with either the upper or the lower abdominal arteries. In this paper, we present an automated anatomical labeling method covering both the upper and lower abdominal arteries extracted from CT images. We obtain a tree structure of artery regions and calculate feature values for each branch, including the diameter, curvature, direction, and running vectors of the branch. The target arteries are grouped based on branching conditions, and the following processes are applied separately to each group. We compute candidate artery names by using classifiers that are trained to output artery names. A correction process based on majority voting is then applied to the candidate anatomical names to determine the final names. We applied the proposed method to 23 cases of 3D abdominal CT images. Experimental results showed that the proposed method is able to label the entire set of major abdominal arteries, with recall and precision rates of 79.01% and 80.41%, respectively.

  14. Electron-driven processes in polyatomic molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKoy, Vincent

    2017-03-20

    This project developed and applied scalable computational methods to obtain information about low-energy electron collisions with larger polyatomic molecules. Such collisions are important in modeling radiation damage to living systems, in spark ignition and combustion, and in plasma processing of materials. The focus of the project was to develop efficient methods that could be used to obtain both fundamental scientific insights and data of practical value to applications.

  15. Software sensors for bioprocesses.

    PubMed

    Bogaerts, Ph; Vande Wouwer, A

    2003-10-01

    State estimation is a significant problem in biotechnological processes, due to the general lack of hardware sensor measurements of the variables describing the process dynamics. The objective of this paper is to review a number of software sensor design methods, including extended Kalman filters, receding-horizon observers, asymptotic observers, and hybrid observers, which can be efficiently applied to bioprocesses. These several methods are illustrated with simulation and real-life case studies.
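
    As a much-simplified stand-in for the estimators reviewed (a scalar Kalman filter rather than a full extended Kalman filter), the sketch below fuses sparse, noisy biomass measurements with an assumed exponential-growth model; all rates and noise levels are invented.

    ```python
    import numpy as np

    # Toy software sensor: track biomass X between rare hardware measurements
    mu, dt = 0.1, 0.1          # assumed growth rate (1/h) and time step (h)
    q, r = 1e-4, 0.05 ** 2     # process and measurement noise variances
    x_hat, P = 0.5, 0.1        # initial state estimate and its variance

    rng = np.random.default_rng(5)
    x_true = 0.5
    for k in range(100):
        x_true *= np.exp(mu * dt)                 # "real" bioprocess
        A = np.exp(mu * dt)                       # model propagation
        x_hat, P = A * x_hat, A * P * A + q       # prediction step
        if k % 10 == 0:                           # sparse noisy measurement
            y = x_true + rng.normal(0, np.sqrt(r))
            K = P / (P + r)                       # Kalman gain
            x_hat, P = x_hat + K * (y - x_hat), (1 - K) * P
    print(x_hat, x_true)
    ```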

  16. Vacuum electrolysis of quartz

    DOEpatents

    King, James Claude

    1976-01-13

    The disclosure is directed to a method for processing quartz used in fabricating crystal resonators such that transient frequency change of resonators exposed to pulse irradiation is virtually eliminated. The method involves heating the crystal quartz in a hydrogen-free atmosphere while simultaneously applying an electric field in the Z-axis direction of the crystal. The electric field is maintained during the cool-down phase of the process.

  17. Signal processing methods for in-situ creep specimen monitoring

    NASA Astrophysics Data System (ADS)

    Guers, Manton J.; Tittmann, Bernhard R.

    2018-04-01

    Previous work investigated using guided waves for monitoring creep deformation during accelerated life testing. The basic objective was to relate observed changes in the time-of-flight to changes in the environmental temperature and specimen gage length. The work presented in this paper investigated several signal processing strategies for possible application in the in-situ monitoring system. Signal processing methods for both group velocity (wave-packet envelope) and phase velocity (peak tracking) time-of-flight were considered. Although the Analytic Envelope found via the Hilbert transform is commonly applied for group velocity measurements, erratic behavior in the indicated time-of-flight was observed when this technique was applied to the in-situ data. The peak tracking strategies tested had generally linear trends, and tracking local minima in the raw waveform ultimately showed the most consistent results.
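
    The analytic-envelope strategy is easy to sketch with standard tools: build the analytic signal with scipy's Hilbert transform and take the envelope maximum as a group time-of-flight. The synthetic wave packet, sampling rate, and the local-extremum "peak tracking" line are all illustrative assumptions.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    fs = 1e6                                  # 1 MHz sampling (assumed)
    t = np.arange(0, 2e-3, 1 / fs)
    # Synthetic guided-wave packet arriving near 0.8 ms, plus noise
    sig = np.exp(-((t - 8e-4) / 5e-5) ** 2) * np.sin(2 * np.pi * 1e5 * t)
    sig += 0.05 * np.random.randn(t.size)

    envelope = np.abs(hilbert(sig))           # analytic envelope
    tof_group = t[np.argmax(envelope)]        # group-velocity time of flight
    tof_phase = t[np.argmin(sig)]             # e.g., track one local extremum
    print(tof_group, tof_phase)
    ```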

  18. Optical aberration correction for simple lenses via sparse representation

    NASA Astrophysics Data System (ADS)

    Cui, Jinlin; Huang, Wei

    2018-04-01

    Simple lenses with spherical surfaces are lightweight, inexpensive, highly flexible, and easily manufactured. However, they suffer from optical aberrations that limit high-quality photography. In this study, we propose a set of computational photography techniques based on sparse signal representation to remove optical aberrations, thereby allowing the recovery of images captured through a single-lens camera. The primary advantage of the proposed method is that many prior point spread functions, calibrated at different depths, can be used to restore images in a short time; this strategy can be applied generally to nonblind deconvolution methods to mitigate the excessive processing time caused by the large number of point spread functions. The optical design software CODE V is used to examine the reliability of the proposed method by simulation. The simulation results reveal that the suggested method outperforms traditional methods, and the performance of a single-lens camera is significantly enhanced both qualitatively and perceptually. In particular, the prior information obtained from CODE V can be used to process real images from a single-lens camera, which provides a convenient and accurate alternative for obtaining the point spread functions of single-lens cameras.

  19. Estimation of waste water treatment plant methane emissions: methodology and results from a short campaign

    NASA Astrophysics Data System (ADS)

    Yver-Kwok, C. E.; Müller, D.; Caldow, C.; Lebegue, B.; Mønster, J. G.; Rella, C. W.; Scheutz, C.; Schmidt, M.; Ramonet, M.; Warneke, T.; Broquet, G.; Ciais, P.

    2013-10-01

    This paper describes different methods to estimate methane emissions at different scales, applied to a waste water treatment plant (WWTP) located in Valence, France. We show that Fourier Transform Infrared (FTIR) measurements as well as Cavity Ring Down Spectroscopy (CRDS) can be used to measure emissions from the process scale to the regional scale. To estimate the total emissions, we investigate a tracer release method (using C2H2) and the Radon tracer method (using 222Rn). For process-scale emissions, both tracer release and chamber techniques were used. We show that the tracer release method is suitable for quantifying facility-scale and some process-scale emissions, while the Radon tracer method encompasses not only the treatment station but also a large area around it; the Radon tracer method is thus more representative of the regional emissions around the city. Uncertainties for each method are described. Applying the methods to CH4 emissions, we find that the main source of emissions of the plant was not identified with certainty during this short campaign, although the primary source is likely to be the solid sludge. Overall, the waste water treatment plant represents a small part (3%) of the methane emissions of the city of Valence and its surroundings, which is in agreement with the national inventories.
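
    The tracer release principle reduces to a ratio calculation: with a known tracer release rate, the methane emission rate follows from the ratio of the measured downwind enhancements, scaled by the molar mass ratio. All numbers below are illustrative.

    ```python
    # Tracer release sketch: tracer (e.g., C2H2) is released at a known rate
    # co-located with the source; the CH4 emission rate follows from the
    # ratio of the integrated downwind mole-fraction enhancements.
    M_CH4, M_C2H2 = 16.04, 26.04          # molar masses, g/mol
    q_tracer = 0.50                       # known tracer release rate, kg/h
    d_ch4 = 85.0                          # integrated CH4 enhancement, ppb
    d_tracer = 40.0                       # integrated C2H2 enhancement, ppb

    q_ch4 = q_tracer * (d_ch4 / d_tracer) * (M_CH4 / M_C2H2)
    print(f"Estimated CH4 emission: {q_ch4:.2f} kg/h")
    ```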

  20. Applying Standard Interfaces to a Process-Control Language

    NASA Technical Reports Server (NTRS)

    Berthold, Richard T.

    2005-01-01

    A method of applying open-operating-system standard interfaces to the NASA User Interface Language (UIL) has been devised. UIL is a computing language that can be used in monitoring and controlling automated processes: for example, the Timeliner computer program, written in UIL, is a general-purpose software system for monitoring and controlling sequences of automated tasks in a target system. In providing the major elements of connectivity between UIL and the target system, the present method offers advantages over the prior method. Most notably, unlike in the prior method, the software description of the target system can be made independent of the applicable compiler software and need not be linked to the applicable executable compiler image. Also unlike in the prior method, it is not necessary to recompile the source code and relink the source code to a new executable compiler image. Abstraction of the description of the target system to a data file can be defined easily, with intuitive syntax, and knowledge of the source-code language is not needed for the definition.

  1. Intelligent monitoring and control of semiconductor manufacturing equipment

    NASA Technical Reports Server (NTRS)

    Murdock, Janet L.; Hayes-Roth, Barbara

    1991-01-01

    The use of AI methods to monitor and control semiconductor fabrication in a state-of-the-art manufacturing environment called the Rapid Thermal Multiprocessor is described. Semiconductor fabrication involves many complex processing steps with limited opportunities to measure process and product properties. By applying additional process and product knowledge to that limited data, AI methods augment classical control methods by detecting abnormalities and trends, predicting failures, diagnosing, planning corrective action sequences, explaining diagnoses or predictions, and reacting to anomalous conditions that classical control systems typically would not correct. Research methodology and issues are discussed, and two diagnosis scenarios are examined.

  2. Applying integrals of motion to the numerical solution of differential equations

    NASA Technical Reports Server (NTRS)

    Vezewski, D. J.

    1980-01-01

    A method is developed for using the integrals of systems of nonlinear, ordinary, differential equations in a numerical integration process to control the local errors in these integrals and reduce the global errors of the solution. The method is general and can be applied to either scalar or vector integrals. A number of example problems, with accompanying numerical results, are used to verify the analysis and support the conjecture of global error reduction.

  3. Applying integrals of motion to the numerical solution of differential equations

    NASA Technical Reports Server (NTRS)

    Jezewski, D. J.

    1979-01-01

    A method is developed for using the integrals of systems of nonlinear, ordinary differential equations in a numerical integration process to control the local errors in these integrals and reduce the global errors of the solution. The method is general and can be applied to either scalar or vector integrals. A number of example problems, with accompanying numerical results, are used to verify the analysis and support the conjecture of global error reduction.

  4. A method for smoothing segmented lung boundary in chest CT images

    NASA Astrophysics Data System (ADS)

    Yim, Yeny; Hong, Helen

    2007-03-01

    To segment low-density lung regions in chest CT images, most methods use the difference in the gray-level values of pixels. However, radiodense pulmonary vessels and pleural nodules that contact the surrounding anatomy are often excluded from the segmentation result. To smooth the lung boundary segmented by gray-level processing in chest CT images, we propose a new method using a scan line search. Our method consists of three main steps. First, the lung boundary is extracted by our automatic segmentation method. Second, the segmented lung contour is smoothed in each axial CT slice. We propose a scan line search to track the points on the lung contour and efficiently find rapidly changing curvature. Finally, to provide a consistent appearance between lung contours in adjacent axial slices, 2D closing in the coronal plane is applied within a pre-defined subvolume. The performance of our method was evaluated in terms of visual inspection, accuracy, and processing time. The results show that the smoothness of the lung contour was considerably increased by compensating for pulmonary vessels and pleural nodules.

  5. Certification for civil flight decks and the human-computer interface

    NASA Technical Reports Server (NTRS)

    Mcclumpha, Andrew J.; Rudisill, Marianne

    1994-01-01

    This paper will address the issue of human factor aspects of civil flight deck certification, with emphasis on the pilot's interface with automation. In particular, three questions will be asked that relate to this certification process: (1) are the methods, data, and guidelines available from human factors to adequately address the problems of certifying as safe and error tolerant the complex automated systems of modern civil transport aircraft; (2) do aircraft manufacturers effectively apply human factors information during the aircraft flight deck design process; and (3) do regulatory authorities effectively apply human factors information during the aircraft certification process?

  6. Final Report for Contract N00014-89-J-1967 for the Time Period from 1 May 1989 to 31 December 1990 (Texas Univ. at Austin. Applied Research Labs.)

    DTIC Science & Technology

    1991-04-23

    in this section. In our investigation of higher order processing methods for remote acoustic sensing we sought to understand the principles of laser...magnitude less than those presently detected in laboratory measurements. An initial study of several potential higher order processing techniques was...incoherent. The use of higher order processing methods to provide some level of discrimination against noise thus appears tractable. Finally, the effects

  7. Developments in hydrogenation technology for fine-chemical and pharmaceutical applications.

    PubMed

    Machado, R M; Heier, K R; Broekhuis, R R

    2001-11-01

    The continuous innovation in hydrogenation technology is testimony to its growing importance in the manufacture of specialty and fine chemicals. New developments in equipment, process intensification and catalysis represent major themes that have undergone recent advances. Developments in chiral catalysis, methods to support and fix homogeneous catalysts, novel reactor and mixing technology, high-throughput screening, supercritical processing, spectroscopic and electrochemical online process monitoring, monolithic and structured catalysts, and sonochemical activation methods illustrate the scope and breadth of evolving technology applied to hydrogenation.

  8. Endoscopic ultrasound-guided fine-needle aspiration with liquid-based cytologic preparation in the diagnosis of primary pancreatic lymphoma.

    PubMed

    Rossi, Esther Diana; Larghi, Alberto; Verna, Elizabeth C; Martini, Maurizio; Galasso, Domenico; Carnuccio, Antonella; Larocca, Luigi Maria; Costamagna, Guido; Fadda, Guido

    2010-11-01

    The diagnosis and subtyping of lymphoma on specimens collected by endoscopic ultrasound fine-needle aspiration (EUS-FNA) can be extremely difficult. When a cytopathologist is available for on-site evaluation, the diagnosis may be achieved by applying flow cytometric techniques. We describe our experience with immunocytochemistry (ICC) and molecular biology studies applied to EUS-FNA specimens processed with a liquid-based cytologic (LBC) preparation for the diagnosis of primary pancreatic lymphoma (PPL). Three patients with a pancreatic mass underwent EUS-FNA. The collected specimens were processed with the ThinPrep method for the cytologic diagnosis and eventual additional investigations. A morphologic picture consistent with PPL was found on the LBC specimens of the 3 patients. Subsequent ICC and molecular biology studies for immunoglobulin heavy chain gene rearrangement established the diagnosis of pancreatic large B-cell non-Hodgkin lymphoma in 2 patients and of non-Hodgkin lymphoma with plasmablastic/immunoblastic differentiation in the remaining one. An LBC preparation can be used to diagnose and subtype PPL by applying ICC and molecular biology techniques to specimens collected with EUS-FNA. This can serve as an additional processing method for EUS-FNA specimens in centers where on-site cytopathologist expertise is not available.

  9. Large-cell Monte Carlo renormalization of irreversible growth processes

    NASA Technical Reports Server (NTRS)

    Nakanishi, H.; Family, F.

    1985-01-01

    Monte Carlo sampling is applied to a recently formulated direct-cell renormalization method for irreversible, disorderly growth processes. Large-cell Monte Carlo renormalization is carried out for various nonequilibrium problems based on the formulation dealing with relative probabilities. Specifically, the method is demonstrated by application to the 'true' self-avoiding walk and the Eden model of growing animals for d = 2, 3, and 4 and to the invasion percolation problem for d = 2 and 3. The results are asymptotically in agreement with expectations; however, unexpected complications arise, suggesting the possibility of crossovers and, in any case, demonstrating the danger of using small cells alone, because of the very slow convergence as the cell size b is extrapolated to infinity. The difficulty of applying the present method to the diffusion-limited-aggregation model is commented on.

  10. Ranking of options of real estate use by expert assessments mathematical processing

    NASA Astrophysics Data System (ADS)

    Lepikhina, O. Yu; Skachkova, M. E.; Mihaelyan, T. A.

    2018-05-01

    The article is devoted to the development of the real estate assessment concept. For conditions in which multiple variants of real estate use are possible, a method based on calculating an integral indicator of the efficiency of each variant is proposed. To calculate the weights of the efficiency criteria, an expert method, the Analytic Hierarchy Process, and its mathematical support are used. The method allows ranking alternative types of real estate use according to their efficiency. It was applied to one of the land parcels located in the Primorsky district of Saint Petersburg.
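
    A standard AHP step, recovering criterion weights as the principal eigenvector of a pairwise comparison matrix, can be sketched as follows; the comparison values are invented for illustration.

    ```python
    import numpy as np

    # Pairwise comparison matrix for three efficiency criteria (Saaty scale)
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = vecs[:, k].real
    w = w / w.sum()                        # normalized criterion weights

    # Consistency check: CI = (lambda_max - n) / (n - 1), RI = 0.58 for n = 3
    lam = vals[k].real
    CI = (lam - len(A)) / (len(A) - 1)
    print(w, CI / 0.58)
    ```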

  11. Heuristics as a Basis for Assessing Creative Potential: Measures, Methods, and Contingencies

    ERIC Educational Resources Information Center

    Vessey, William B.; Mumford, Michael D.

    2012-01-01

    Studies of creative thinking skills have generally measured a single aspect of creativity, divergent thinking. A number of other processes involved in creative thought have been identified. Effective execution of these processes is held to depend on the strategies applied in process execution, or heuristics. In this article, we review prior…

  12. 3D food printing: a new dimension in food production processes

    USDA-ARS?s Scientific Manuscript database

    3D food printing, also known as food layered manufacture (FLM), is an exciting new method of digital food production that applies the process of additive manufacturing to food fabrication. In the 3D food printing process, a food product is first scanned or designed with computer-aided design softwa...

  13. Rapid and Checkable Electrical Post-Treatment Method for Organic Photovoltaic Devices

    PubMed Central

    Park, Sangheon; Seo, Yu-Seong; Shin, Won Suk; Moon, Sang-Jin; Hwang, Jungseek

    2016-01-01

    Post-treatment processes improve the performance of organic photovoltaic devices by changing the microscopic morphology and the configuration of the vertical phase separation in the active layer. Thermal annealing and solvent vapor (or chemical) treatment processes have been extensively used to improve the performance of bulk-heterojunction (BHJ) organic photovoltaic (OPV) devices. In this work, we introduce a new post-treatment process in which only an electrical voltage is applied to the BHJ-OPV devices. We used the commercially available P3HT [Poly(3-hexylthiophene)] and PC61BM (Phenyl-C61-Butyric acid Methyl ester) photovoltaic materials as donor and acceptor, respectively. We monitored the voltage and current applied to the device to determine when the post-treatment process was complete. This electrical treatment process is simpler and faster than other post-treatment methods, and the performance of the electrically treated solar cell is comparable to that of a reference (thermally annealed) device. Our results indicate that the proposed treatment process can be used to efficiently fabricate high-performance BHJ-OPV devices. PMID:26932767

  14. Metabolomic approach for discrimination of processed ginseng genus (Panax ginseng and Panax quinquefolius) using UPLC-QTOF MS

    PubMed Central

    Park, Hee-Won; In, Gyo; Kim, Jeong-Han; Cho, Byung-Goo; Han, Gyeong-Ho; Chang, Il-Moo

    2013-01-01

    Discriminating between two herbal medicines (Panax ginseng and Panax quinquefolius) with similar chemical and physical properties but different therapeutic effects is a very serious and difficult problem. Differentiation between the two processed ginseng genera is even more difficult because their appearance is very similar. An ultraperformance liquid chromatography-quadrupole time-of-flight mass spectrometry (UPLC-QTOF MS)-based metabolomic technique was applied for the metabolite profiling of 40 samples of processed P. ginseng and processed P. quinquefolius. Currently known biomarkers such as ginsenoside Rf and F11 have been used for analysis with a UPLC-photodiode array detector; however, this method was not able to fully discriminate between the two processed ginseng genera. Thus, an optimized UPLC-QTOF-based metabolic profiling method was adapted for the analysis and evaluation of the two processed ginseng genera. As a result, all known biomarkers were identified by the proposed metabolomics, and additional potential biomarkers were extracted from the large amounts of global analysis data. It is therefore expected that such metabolomics techniques will be widely applied in the ginseng research field. PMID:24558312

  15. Coupling Computer-Aided Process Simulation and ...

    EPA Pesticide Factsheets

    A methodology is described for developing a gate-to-gate life cycle inventory (LCI) of a chemical manufacturing process to support the application of life cycle assessment in the design and regulation of sustainable chemicals. The inventories were derived by first applying process design and simulation to develop a process flow diagram describing the energy and basic material flows of the system. Additional techniques developed by the U.S. Environmental Protection Agency for estimating uncontrolled emissions from chemical processing equipment were then applied to obtain a detailed emission profile for the process. Finally, land use for the process was estimated using a simple sizing model. The methodology was applied to a case study of acetic acid production based on the Cativa™ process. The results reveal improvements in the qualitative LCI for acetic acid production compared to commonly used databases and top-down methodologies. The modeling techniques improve the quantitative LCI results for inputs and uncontrolled emissions. With provisions for applying appropriate emission controls, the proposed method can provide an estimate of the LCI that can be used for subsequent life cycle assessments. As part of its mission, the Agency is tasked with overseeing the use of chemicals in commerce. This can include consideration of a chemical's potential impact on health and safety, resource conservation, clean air and climate change, clean water, and sustainable

  16. Non-invasive imaging methods applied to neo- and paleo-ontological cephalopod research

    NASA Astrophysics Data System (ADS)

    Hoffmann, R.; Schultz, J. A.; Schellhorn, R.; Rybacki, E.; Keupp, H.; Gerden, S. R.; Lemanis, R.; Zachow, S.

    2014-05-01

    Several non-invasive methods are common practice in the natural sciences today. Here we present how they can be applied to, and contribute to, current topics in cephalopod (paleo-)biology. Different methods are compared in terms of the time necessary to acquire the data, the amount of data, accuracy/resolution, the minimum/maximum size of objects that can be studied, the degree of post-processing needed, and availability. The main application of the methods is seen in the morphometry and volumetry of cephalopod shells. In particular, we present a method for precise buoyancy calculation. To this end, cephalopod shells were scanned together with different reference bodies, an approach developed in the medical sciences. It is necessary to know the volume of the reference bodies, which should have absorption properties similar to those of the object of interest. Exact volumes can be obtained from surface scanning. Depending on the dimensions of the study object, different computed tomography techniques were applied.

  17. Intercomparison of Multiscale Modeling Approaches in Simulating Subsurface Flow and Transport

    NASA Astrophysics Data System (ADS)

    Yang, X.; Mehmani, Y.; Barajas-Solano, D. A.; Song, H. S.; Balhoff, M.; Tartakovsky, A. M.; Scheibe, T. D.

    2016-12-01

    Hybrid multiscale simulations that couple models across scales are critical to advance predictions of the larger system behavior using understanding of fundamental processes. In the current study, three hybrid multiscale methods are intercompared: the multiscale loose-coupling method, the multiscale finite volume (MsFV) method, and the multiscale mortar method. The loose-coupling method enables a parallel workflow structure based on the Swift scripting environment that manages the complex process of executing coupled micro- and macro-scale models without being intrusive to the at-scale simulators. The MsFV method applies microscale and macroscale models over overlapping subdomains of the modeling domain and enforces continuity of concentration and transport fluxes between models via restriction and prolongation operators. The mortar method is a non-overlapping domain decomposition approach capable of coupling all permutations of pore- and continuum-scale models with each other. In doing so, Lagrange multipliers are used at interfaces shared between the subdomains so as to establish continuity of species/fluid mass flux. Subdomain computations can be performed either concurrently or non-concurrently depending on the algorithm used. All the above methods have been proven to be accurate and efficient in studying flow and transport in porous media. However, there have been no field-scale applications or benchmarking among the various hybrid multiscale approaches. To address this challenge, we apply all three hybrid multiscale methods to simulate water flow and transport in a conceptualized 2D modeling domain of the hyporheic zone, where strong interactions between groundwater and surface water exist across multiple scales. In all three multiscale methods, fine-scale simulations are applied to a thin layer of riverbed alluvial sediments while the macroscopic simulations are used for the larger subsurface aquifer domain. Different numerical coupling methods are then applied between scales and inter-compared. Comparisons are drawn in terms of velocity distributions, solute transport behavior, algorithm-induced numerical error, and computing cost. The intercomparison work provides support for confidence in a variety of hybrid multiscale methods and motivates further development and applications.

  18. High order volume-preserving algorithms for relativistic charged particles in general electromagnetic fields

    NASA Astrophysics Data System (ADS)

    He, Yang; Sun, Yajuan; Zhang, Ruili; Wang, Yulei; Liu, Jian; Qin, Hong

    2016-09-01

    We construct high-order symmetric volume-preserving methods for the relativistic dynamics of a charged particle by the splitting technique with processing. By expanding the phase space to include the time t, we give a more general construction of volume-preserving methods that can be applied to systems with time-dependent electromagnetic fields. The newly derived methods provide numerical solutions with good accuracy and conservative properties over long simulation times. Furthermore, because of the use of an accuracy-enhancing processing technique, the explicit methods attain high-order accuracy and are more efficient than methods derived from standard compositions. These results are verified by numerical experiments. Linear stability analysis of the methods shows that the high-order processed method allows a larger time step size in numerical integrations.

  19. A comparison of heuristic and model-based clustering methods for dietary pattern analysis.

    PubMed

    Greve, Benjamin; Pigeot, Iris; Huybrechts, Inge; Pala, Valeria; Börnhorst, Claudia

    2016-02-01

    Cluster analysis is widely applied to identify dietary patterns. A new method based on Gaussian mixture models (GMM) seems to be more flexible compared with the commonly applied k-means and Ward's method. In the present paper, these clustering approaches are compared to find the most appropriate one for clustering dietary data. The clustering methods were applied to simulated data sets with different cluster structures to compare their performance knowing the true cluster membership of observations. Furthermore, the three methods were applied to FFQ data assessed in 1791 children participating in the IDEFICS (Identification and Prevention of Dietary- and Lifestyle-Induced Health Effects in Children and Infants) Study to explore their performance in practice. The GMM outperformed the other methods in the simulation study in 72% up to 100% of cases, depending on the simulated cluster structure. Comparing the computationally less complex k-means and Ward's methods, the performance of k-means was better in 64-100% of cases. Applied to real data, all methods identified three similar dietary patterns which may be roughly characterized as a 'non-processed' cluster with a high consumption of fruits, vegetables and wholemeal bread, a 'balanced' cluster with only slight preferences for single foods and a 'junk food' cluster. The simulation study suggests that clustering via GMM should be preferred due to its higher flexibility regarding cluster volume, shape and orientation. The k-means method seems to be a good alternative, being easier to use while giving similar results when applied to real data.
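
    The flexibility argument can be demonstrated on synthetic data: on elongated, correlated clusters, a full-covariance GMM typically beats the spherical-cluster assumption implicit in k-means. A minimal sketch (not the paper's simulation design):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.mixture import GaussianMixture
    from sklearn.metrics import adjusted_rand_score

    # Two elongated, overlapping clusters with correlated features
    rng = np.random.default_rng(6)
    c1 = rng.multivariate_normal([0, 0], [[4.0, 1.8], [1.8, 1.0]], 300)
    c2 = rng.multivariate_normal([4, 0], [[4.0, -1.8], [-1.8, 1.0]], 300)
    X = np.vstack([c1, c2])
    y = np.repeat([0, 1], 300)

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    gm = GaussianMixture(n_components=2, covariance_type="full",
                         random_state=0).fit_predict(X)
    print("k-means ARI:", adjusted_rand_score(y, km))
    print("GMM ARI:    ", adjusted_rand_score(y, gm))
    ```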

  20. Discretization of Continuous Time Discrete Scale Invariant Processes: Estimation and Spectra

    NASA Astrophysics Data System (ADS)

    Rezakhah, Saeid; Maleki, Yasaman

    2016-07-01

    By imposing a flexible sampling scheme, we provide a discretization of continuous-time discrete scale invariant (DSI) processes, which is itself a subsidiary discrete-time DSI process. Then, by introducing a simple random measure, we provide a second continuous-time DSI process which properly approximates the first one. This enables us to establish a bilateral relation between the covariance functions of the subsidiary process and the new continuous-time process. The time-varying spectral representation of such a continuous-time DSI process is characterized, and its spectrum is estimated. Also, a new method for estimating the time-dependent Hurst parameter of such processes is provided, which gives a more accurate estimate. The performance of this estimation method is studied via simulation. Finally, the method is applied to real data from the S&P 500 and Dow Jones indices for some specific periods.

  1. Microwave processing heats up

    USDA-ARS?s Scientific Manuscript database

    Microwaves are a common appliance in many households. In the United States microwave heating is the third most popular domestic heating method for foods. Microwave heating is also a commercial food processing technology that has been applied for cooking, drying, and tempering foods. Its use in ...

  2. Weak "A" blood subgroup discrimination by a rheo-optical method: a new application of laser backscattering

    NASA Astrophysics Data System (ADS)

    Rasia, Rodolfo J.; Rasia-Valverde, Juana R.; Stoltz, Jean F.

    1996-01-01

    Laser backscattering is an excellent tool to investigate the size and concentration of suspended particles. It was successfully applied to the analysis of erythrocyte aggregation. A method is proposed that applies laser backscattering to the evaluation of the strength of the immunologic erythrocyte agglutination by approximating the energy required for the mechanical dissociation of agglutinates. Mills and Snabre have proposed a theory of laser backscattering for erythrocyte aggregation analysis. It is applied here to analyze the dissociation process of erythrocyte agglutinates performed by imposing a constant shear rate on the agglutinate suspension in a Couette viscometer until a dispersion of isolated red cells is attained. Experimental verifications of the method were performed on erythrocytes of the ABO group reacting against an anti-A test serum in twofold serial dilutions. The spent energy is approximated by a numerical process carried out on the backscattered intensity data registered during mechanical dissociation. The velocities of agglutination and dissociation lead to the calculation of dissociation parameters. These values are used to evaluate the strength of the immunological reaction and to discriminate weak subgroups of the ABO system.

  3. An incremental DPMM-based method for trajectory clustering, modeling, and retrieval.

    PubMed

    Hu, Weiming; Li, Xi; Tian, Guodong; Maybank, Stephen; Zhang, Zhongfei

    2013-05-01

    Trajectory analysis is the basis for many applications, such as indexing of motion events in videos, activity recognition, and surveillance. In this paper, the Dirichlet process mixture model (DPMM) is applied to trajectory clustering, modeling, and retrieval. We propose an incremental version of a DPMM-based clustering algorithm and apply it to cluster trajectories. An appropriate number of trajectory clusters is determined automatically. When trajectories belonging to new clusters arrive, the new clusters can be identified online and added to the model without any retraining using the previous data. A time-sensitive Dirichlet process mixture model (tDPMM) is applied to each trajectory cluster for learning the trajectory pattern which represents the time-series characteristics of the trajectories in the cluster. Then, a parameterized index is constructed for each cluster. A novel likelihood estimation algorithm for the tDPMM is proposed, and a trajectory-based video retrieval model is developed. The tDPMM-based probabilistic matching method and the DPMM-based model growing method are combined to make the retrieval model scalable and adaptable. Experimental comparisons with state-of-the-art algorithms demonstrate the effectiveness of our algorithm.
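
    A flavour of the DPMM's ability to choose the number of clusters automatically can be seen with scikit-learn's truncated Dirichlet-process mixture. This is a hedged sketch on toy feature vectors; the paper's incremental and time-sensitive (tDPMM) variants are not reproduced here.

    ```python
    # Hedged sketch: a truncated Dirichlet-process mixture chooses the number
    # of clusters automatically; toy vectors stand in for trajectory features.
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(0)
    # toy "trajectories": three motion patterns, 100-dimensional feature vectors
    trajs = np.vstack([rng.normal(loc=c, scale=0.1, size=(40, 100))
                       for c in (-1.0, 0.0, 1.0)])

    dpmm = BayesianGaussianMixture(
        n_components=10,                 # truncation level, not the final count
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="diag",          # keeps the 100-dim fit well-conditioned
        random_state=0).fit(trajs)
    print("clusters actually used:", np.unique(dpmm.predict(trajs)).size)
    ```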

  4. A survey of methods for the evaluation of tissue engineering scaffold permeability.

    PubMed

    Pennella, F; Cerino, G; Massai, D; Gallo, D; Falvo D'Urso Labate, G; Schiavi, A; Deriu, M A; Audenino, A; Morbiducci, Umberto

    2013-10-01

    The performance of porous scaffolds for tissue engineering (TE) applications is evaluated, in general, in terms of porosity, pore size and distribution, and pore tortuosity. These descriptors are often confounding when they are applied to characterize transport phenomena within porous scaffolds. On the contrary, permeability is a more effective parameter in (1) estimating mass and species transport through the scaffold and (2) describing its topological features, thus allowing a better evaluation of the overall scaffold performance. However, the evaluation of TE scaffold permeability suffers from a lack of uniformity and standards in measurement and testing procedures, which makes the comparison of results obtained in different laboratories unfeasible. In this review paper we summarize the most important features influencing TE scaffold permeability, linking them to the theoretical background. An overview of methods applied for TE scaffold permeability evaluation is given, presenting experimental test benches and computational methods applied (1) to integrate experimental measurements and (2) to support the TE scaffold design process. Both experimental and computational limitations in the permeability evaluation process are also discussed.

  5. Methods utilized in evaluating the profitability of commercial space processing

    NASA Technical Reports Server (NTRS)

    Bloom, H. L.; Schmitt, P. T.

    1976-01-01

    Profitability analysis is applied to commercial space processing on the basis of business concept definition and assessment and the relationship between ground and space functions. Throughput analysis is demonstrated by analysis of the space manufacturing of surface acoustic wave devices. The paper describes a financial analysis model for space processing and provides key profitability measures for space processed isoenzymes.

  6. Life Prediction/Reliability Data of Glass-Ceramic Material Determined for Radome Applications

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Gyekenyesi, John P.

    2002-01-01

    Brittle ceramic materials are candidates for a variety of structural applications over a wide range of temperatures. However, the process of slow crack growth, occurring in any loading configuration, limits the service life of structural components. Therefore, it is important to accurately determine the slow crack growth parameters required for component life prediction using an appropriate test methodology. This test methodology also should be useful in determining the influence of component processing and composition variables on the slow crack growth behavior of newly developed or existing materials, thereby allowing the component processing and composition to be tailored and optimized to specific needs. Through the American Society for Testing and Materials (ASTM), the authors recently developed two test methods to determine the life prediction parameters of ceramics. The two test standards, ASTM C 1368 for room temperature and ASTM C 1465 for elevated temperatures, were published in the 2001 Annual Book of ASTM Standards, Vol. 15.01. Briefly, the test method employs constant stress-rate (or dynamic fatigue) testing to determine flexural strengths as a function of the applied stress rate. The merit of this test method lies in its simplicity: strengths are measured in a routine manner in flexure at four or more applied stress rates with an appropriate number of test specimens at each applied stress rate. The slow crack growth parameters necessary for life prediction are then determined from a simple relationship between the strength and the applied stress rate. Extensive life prediction testing was conducted at the NASA Glenn Research Center using the developed ASTM C 1368 test method to determine the life prediction parameters of a glass-ceramic material that the Navy will use for radome applications.

  7. An Introduction to the BFS Method and Its Use to Model Binary NiAl Alloys

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Noebe, Ronald D.; Ferrante, J.; Amador, C.

    1998-01-01

    We introduce the Bozzolo-Ferrante-Smith (BFS) method for alloys as a computationally efficient tool for aiding in the process of alloy design. An intuitive description of the BFS method is provided, followed by a formal discussion of its implementation. The method is applied to the study of the defect structure of NiAl binary alloys. The groundwork is laid for a detailed progression to higher order NiAl-based alloys linking theoretical calculations and computer simulations based on the BFS method and experimental work validating each step of the alloy design process.

  8. An integrated lean-methods approach to hospital facilities redesign.

    PubMed

    Nicholas, John

    2012-01-01

    Lean production methods for eliminating waste and improving processes in manufacturing are now being applied in healthcare. As the author shows, the methods are appropriate for redesigning hospital facilities. When used in an integrated manner and employing teams of mostly clinicians, the methods produce facility designs that are custom-fit to patient needs and caregiver work processes, and reduce operational costs. The author reviews lean methods and an approach for integrating them in the redesign of hospital facilities. A case example of the redesign of an emergency department shows the feasibility and benefits of the approach.

  9. Valuing national effects of digital health investments: an applied method.

    PubMed

    Hagens, Simon; Zelmer, Jennifer; Frazer, Cassandra; Gheorghiu, Bobby; Leaver, Chad

    2015-01-01

    This paper describes an approach which has been applied to value national outcomes of investments by federal, provincial and territorial governments, clinicians and healthcare organizations in digital health. Hypotheses are used to develop a model, which is revised and populated based upon the available evidence. Quantitative national estimates and qualitative findings are produced and validated through structured peer review processes. This methodology has been applied in four studies since 2008.

  10. High magnetic field ohmically decoupled non-contact technology

    DOEpatents

    Wilgen, John [Oak Ridge, TN; Kisner, Roger [Knoxville, TN; Ludtka, Gerard [Oak Ridge, TN; Ludtka, Gail [Oak Ridge, TN; Jaramillo, Roger [Knoxville, TN

    2009-05-19

    Methods and apparatus are described for high magnetic field ohmically decoupled non-contact treatment of conductive materials in a high magnetic field. A method includes applying a high magnetic field to at least a portion of a conductive material; and applying an inductive magnetic field to at least a fraction of the conductive material to induce a surface current within the fraction of the conductive material, the surface current generating a substantially bi-directional force that defines a vibration. The high magnetic field and the inductive magnetic field are substantially confocal, the fraction of the conductive material is located within the portion of the conductive material and ohmic heating from the surface current is ohmically decoupled from the vibration. An apparatus includes a high magnetic field coil defining an applied high magnetic field; an inductive magnetic field coil coupled to the high magnetic field coil, the inductive magnetic field coil defining an applied inductive magnetic field; and a processing zone located within both the applied high magnetic field and the applied inductive magnetic field. The high magnetic field and the inductive magnetic field are substantially confocal, and ohmic heating of a conductive material located in the processing zone is ohmically decoupled from a vibration of the conductive material.

  11. Parallel steady state studies on a milliliter scale accelerate fed-batch bioprocess design for recombinant protein production with Escherichia coli.

    PubMed

    Schmideder, Andreas; Cremer, Johannes H; Weuster-Botz, Dirk

    2016-11-01

    In general, fed-batch processes are applied for recombinant protein production with Escherichia coli (E. coli). However, state of the art methods for identifying suitable reaction conditions suffer from severe drawbacks, i.e. direct transfer of process information from parallel batch studies is often defective, and sequential fed-batch studies are time-consuming and cost-intensive. In this study, continuously operated stirred-tank reactors on a milliliter scale were applied to identify suitable reaction conditions for fed-batch processes. Isopropyl β-d-1-thiogalactopyranoside (IPTG) induction strategies were varied in parallel-operated stirred-tank bioreactors to study the effects on the continuous production of the recombinant protein photoactivatable mCherry (PAmCherry) with E. coli. The best-performing induction strategies were transferred from the continuous processes on a milliliter scale to liter scale fed-batch processes. Inducing recombinant protein expression by dynamically increasing the IPTG concentration to 100 µM led to an increase in the product concentration of 21% (8.4 g L⁻¹) compared to an implemented high-performance production process with the most frequently applied induction strategy of a single addition of 1000 µM IPTG. Thus, identifying feasible reaction conditions for fed-batch processes in parallel continuous studies on a milliliter scale was shown to be a powerful, novel method to accelerate bioprocess design in a cost-reducing manner. © 2016 American Institute of Chemical Engineers. Biotechnol. Prog., 32:1426-1435, 2016.

  12. Enantioselective reductive transformation of climbazole: A concept towards quantitative biodegradation assessment in anaerobic biological treatment processes.

    PubMed

    Brienza, Monica; Chiron, Serge

    2017-06-01

    An efficient chiral liquid chromatography-high resolution mass spectrometry analytical method has been validated for the determination of climbazole (CBZ) enantiomers in wastewater and sludge, with quantification limits below 1 ng/L and 2 ng/g, respectively. On the basis of this newly developed analytical method, the stereochemistry of CBZ was investigated over time in sludge biotic and sterile batch experiments under anoxic dark and light conditions and during wastewater biological treatment by subsurface flow constructed wetlands. CBZ stereoselective degradation was exclusively observed under biotic conditions, confirming the specificity of enantiomeric fraction variations to biodegradation processes. Abiotic CBZ enantiomerization was insignificant at circumneutral pH, and CBZ was always biotransformed into CBZ-alcohol due to the specific and enantioselective reduction of the ketone function of CBZ into a secondary alcohol function. This transformation was almost quantitative, and biodegradation gave a good first order kinetic fit for both enantiomers. The possibility of applying the Rayleigh equation to enantioselective CBZ biodegradation processes was investigated. The results of enantiomeric enrichment allowed for a quantitative assessment of in situ biodegradation processes due to a good fit (R² > 0.96) of the anoxic/anaerobic CBZ biodegradation to the Rayleigh dependency in all the biotic microcosms, and the approach was also applied in subsurface flow constructed wetlands. This work extended the concept of applying the Rayleigh equation towards quantitative biodegradation assessment of organic contaminants to enantioselective processes operating under anoxic/anaerobic conditions. Copyright © 2017 Elsevier Ltd. All rights reserved.
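
    For readers unfamiliar with the Rayleigh approach to enantiomeric enrichment, the sketch below fits the commonly assumed form ln(ER_t/ER_0) = ε·ln(C_t/C_0) to synthetic first-order decay data; the rate constants and concentrations are invented for illustration and are not the paper's values.

    ```python
    # Minimal sketch (assumed Rayleigh form for enantiomeric enrichment):
    # ln(ER_t/ER_0) = eps * ln(C_t/C_0), fitted to synthetic first-order data.
    import numpy as np

    t = np.linspace(0, 10, 8)
    cE1 = 100 * np.exp(-0.30 * t)   # faster-degraded enantiomer (invented rate)
    cE2 = 100 * np.exp(-0.20 * t)   # slower-degraded enantiomer (invented rate)
    C = cE1 + cE2                   # total concentration
    ER = cE1 / cE2                  # enantiomer ratio

    x, y = np.log(C / C[0]), np.log(ER / ER[0])
    eps, _ = np.polyfit(x, y, 1)    # slope = enrichment factor
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    print(f"enantiomeric enrichment factor ~ {eps:.3f}, R^2 = {r2:.3f}")
    ```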

  13. Osteochondral integration of multiply incised pure cartilage allograft: repair method of focal chondral defects in a porcine model.

    PubMed

    Bardos, Tamas; Farkas, Boglarka; Mezes, Beata; Vancsodi, Jozsef; Kvell, Krisztian; Czompoly, Tamas; Nemeth, Peter; Bellyei, Arpad; Illes, Tamas

    2009-11-01

    A focal cartilage lesion has limited capacity to heal, and the repair modalities used at present are still unable to provide a universal solution. Pure cartilage graft implantation appears to be a simple option, but it has not been applied widely as cartilage will not reattach easily to the subchondral bone. We used a multiple-incision technique (processed chondrograft) to increase cartilage graft surface. We hypothesized that pure cartilage graft with augmented osteochondral fusion capacity may be used for cartilage repair and we compared this method with other repair techniques. Controlled laboratory study. Full-thickness focal cartilage defects were created on the medial femoral condyle of 9-month-old pigs; defects were repaired using various methods including bone marrow stimulation, autologous chondrocyte implantation, and processed chondrograft. After the repair, at weeks 6 and 24, macroscopic and histologic evaluation was carried out. Compared with other methods, processed chondrograft was found to be similarly effective in cartilage repair. Defects without repair and defects treated with bone marrow stimulation appeared slightly irregular with fibrocartilage filling. Autologous chondrocyte implantation produced hyalinelike cartilage, although its cellular organization was distinguishable from the surrounding articular cartilage. Processed chondrograft demonstrated good osteochondral integration, and the resulting tissue appeared to be hyaline cartilage. The applied cartilage surface processing method allows acceptable osteochondral integration, and the repair tissue appears to have good macroscopic and histologic characteristics. If further studies confirm its efficacy, this technique could be considered for human application in the future.

  14. Optimal filtering and Bayesian detection for friction-based diagnostics in machines.

    PubMed

    Ray, L R; Townsend, J R; Ramasubramanian, A

    2001-01-01

    Non-model-based diagnostic methods typically rely on measured signals that must be empirically related to process behavior or incipient faults. The difficulty in interpreting a signal that is indirectly related to the fundamental process behavior is significant. This paper presents an integrated non-model and model-based approach to detecting when process behavior varies from a proposed model. The method, which is based on nonlinear filtering combined with maximum likelihood hypothesis testing, is applicable to dynamic systems whose constitutive model is well known, and whose process inputs are poorly known. Here, the method is applied to friction estimation and diagnosis during motion control in a rotating machine. A nonlinear observer estimates friction torque in a machine from shaft angular position measurements and the known input voltage to the motor. The resulting friction torque estimate can be analyzed directly for statistical abnormalities, or it can be directly compared to friction torque outputs of an applicable friction process model in order to diagnose faults or model variations. Nonlinear estimation of friction torque provides a variable on which to apply diagnostic methods that is directly related to model variations or faults. The method is evaluated experimentally by its ability to detect normal load variations in a closed-loop controlled motor driven inertia with bearing friction and an artificially-induced external line contact. Results show an ability to detect statistically significant changes in friction characteristics induced by normal load variations over a wide range of underlying friction behaviors.

  15. Principal process analysis of biological models.

    PubMed

    Casagranda, Stefano; Touzeau, Suzanne; Ropers, Delphine; Gouzé, Jean-Luc

    2018-06-14

    Understanding the dynamical behaviour of biological systems is challenged by their large number of components and interactions. While efforts have been made in this direction to reduce model complexity, they often prove insufficient to grasp which and when model processes play a crucial role. Answering these questions is fundamental to unravel the functioning of living organisms. We design a method for dealing with model complexity, based on the analysis of dynamical models by means of Principal Process Analysis. We apply the method to a well-known model of circadian rhythms in mammals. The knowledge of the system trajectories allows us to decompose the system dynamics into processes that are active or inactive with respect to a certain threshold value. Process activities are graphically represented by Boolean and Dynamical Process Maps. We detect model processes that are always inactive, or inactive on some time interval. Eliminating these processes reduces the complex dynamics of the original model to the much simpler dynamics of the core processes, in a succession of sub-models that are easier to analyse. We quantify by means of global relative errors the extent to which the simplified models reproduce the main features of the original system dynamics and apply global sensitivity analysis to test the influence of model parameters on the errors. The results obtained prove the robustness of the method. The analysis of the sub-model dynamics allows us to identify the source of circadian oscillations. We find that the negative feedback loop involving proteins PER, CRY, CLOCK-BMAL1 is the main oscillator, in agreement with previous modelling and experimental studies. In conclusion, Principal Process Analysis is a simple-to-use method, which constitutes an additional and useful tool for analysing the complex dynamical behaviour of biological systems.
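
    The core idea of Principal Process Analysis can be illustrated on a toy model: each additive term ("process") of the ODE right-hand side is evaluated along the simulated trajectory and flagged active when its relative weight exceeds a threshold. The one-variable model and the threshold δ = 0.1 below are assumptions for illustration, not the circadian model of the paper.

    ```python
    # Illustrative sketch of process-activity decomposition on a toy ODE.
    import numpy as np
    from scipy.integrate import solve_ivp

    processes = [lambda t, x: 2.0,              # constant production
                 lambda t, x: -0.5 * x,         # linear degradation
                 lambda t, x: -x**2 / (1 + x)]  # saturable loss

    def rhs(t, y):
        x = y[0]
        return [sum(p(t, x) for p in processes)]

    sol = solve_ivp(rhs, (0.0, 20.0), [0.01], dense_output=True)

    delta = 0.1                                  # relative-activity threshold
    for t in np.linspace(0.0, 20.0, 5):
        x = sol.sol(t)[0]
        vals = np.abs([p(t, x) for p in processes])
        active = vals / vals.sum() > delta       # one row of a Boolean process map
        print(f"t={t:5.1f}  active processes: {np.flatnonzero(active)}")
    ```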

  16. Thermal protection of β-carotene in re-assembled casein micelles during different processing technologies applied in food industry.

    PubMed

    Sáiz-Abajo, María-José; González-Ferrero, Carolina; Moreno-Ruiz, Ana; Romo-Hualde, Ana; González-Navarro, Carlos J

    2013-06-01

    β-Carotene is a carotenoid usually applied in the food industry as a precursor of vitamin A or as a colourant. β-Carotene is a labile compound easily degraded by light, heat and oxygen. Casein micelles were used as nanostructures to encapsulate, stabilise and protect β-carotene from degradation during processing in the food industry. A self-assembly method was applied to re-assemble nanomicelles containing β-carotene. The protective effect of the nanostructures against degradation during the most common industrial treatments (sterilisation, pasteurisation, high hydrostatic pressure and baking) was proven. Casein micelles protected β-carotene from degradation during heat stabilisation, high pressure processing and the processes most commonly used in the food industry, including baking. This opens new possibilities for introducing thermolabile ingredients in bakery products. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Method and apparatus for use in making an object

    NASA Technical Reports Server (NTRS)

    Derkacs, Thomas (Inventor); Fetheroff, Charles W. (Inventor); Matay, Istvan M. (Inventor); Toth, Istvan J. (Inventor)

    1982-01-01

    Although the method and apparatus of the present invention can be utilized to apply either a uniform or a nonuniform covering of material over many different workpieces, the apparatus (20) is advantageously utilized to apply a thermal barrier covering (64) to an airfoil (22) which is used in a turbine engine. The airfoil is held by a gripper assembly (86) while a spray gun (24) is effective to apply the covering over the airfoil. When a portion of the covering has been applied, a sensor (28) is utilized to detect the thickness of the covering. A control apparatus (32) compares the thickness of the covering of material which has been applied with the desired thickness and is subsequently effective to regulate the operation of the spray gun to adaptively apply a covering of a desired thickness with an accuracy of at least plus or minus 0.0015 inches (1.5 mils) despite unanticipated process variations.

  18. A new automated assessment method for contrast-detail images by applying support vector machine and its robustness to nonlinear image processing.

    PubMed

    Takei, Takaaki; Ikeda, Mitsuru; Imai, Kuniharu; Yamauchi-Kawaura, Chiyo; Kato, Katsuhiko; Isoda, Haruo

    2013-09-01

    The automated contrast-detail (C-D) analysis methods developed so far cannot be expected to work well on images processed with nonlinear methods, such as noise reduction methods. Therefore, we have devised a new automated C-D analysis method by applying a support vector machine (SVM), and tested its robustness to nonlinear image processing. We acquired the CDRAD (a commercially available C-D test object) images at a tube voltage of 120 kV and a milliampere-second product (mAs) of 0.5-5.0. A partial diffusion equation based technique was used as the noise reduction method. Three radiologists and three university students participated in the observer performance study. The training data for our SVM method were the classification data scored by one radiologist for the CDRAD images acquired at 1.6 and 3.2 mAs and their noise-reduced images. We also compared the performance of our SVM method with the CDRAD Analyser algorithm. The mean C-D diagrams (that is, a plot of the mean of the smallest visible hole diameter vs. hole depth) obtained from our SVM method agreed well with the ones averaged across the six human observers for both original and noise-reduced CDRAD images, whereas the mean C-D diagrams from the CDRAD Analyser algorithm disagreed with the ones from the human observers for both original and noise-reduced CDRAD images. In conclusion, our proposed SVM method for C-D analysis will work well for images processed with the nonlinear noise reduction method as well as for the original radiographic images.
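
    A minimal sketch of the classification step, assuming stand-in features (hole contrast and diameter) and synthetic visibility labels in place of the paper's image-derived training data.

    ```python
    # Hedged sketch: an SVM classifying C-D cells as visible/not visible from
    # stand-in features; the labels mimic one observer's scores, invented here.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(1)
    X = rng.uniform([0.1, 0.3], [1.0, 8.0], size=(300, 2))  # (contrast, diameter mm)
    y = (X[:, 0] * X[:, 1] + rng.normal(0, 0.4, 300) > 1.5).astype(int)  # visibility

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)).fit(X, y)
    print("predicted visible?", clf.predict([[0.5, 4.0], [0.2, 0.5]]))
    ```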

  19. Application of Taguchi optimization on the cassava starch wastewater electrocoagulation using batch recycle method

    NASA Astrophysics Data System (ADS)

    Sudibyo, Hermida, L.; Suwardi

    2017-11-01

    Tapioca wastewater is very difficult to treat; hence, many tapioca factories cannot treat it well. One method able to overcome this problem is electrocoagulation. This process shows high performance when conducted as a batch recycle process with an aluminum bipolar electrode. However, the operating conditions have a significant effect on tapioca wastewater treatment using the batch recycle process. In this research, the Taguchi method was successfully applied to determine the optimum conditions and the interactions between parameters in the electrocoagulation process. The results show that current density, conductivity, electrode distance, and pH have a significant effect on the turbidity removal of cassava starch wastewater.
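
    As an illustration of the Taguchi analysis step, the sketch below uses a hypothetical L8 orthogonal array and invented turbidity-removal data to compute larger-is-better signal-to-noise ratios per factor level; it is not the study's design or data.

    ```python
    # Hedged sketch: Taguchi larger-is-better S/N ratios per factor level
    # for a hypothetical L8 array with invented turbidity-removal results.
    import numpy as np

    # L8 design; columns = current density, conductivity, electrode distance, pH
    L8 = np.array([[1, 1, 1, 1], [1, 1, 1, 2], [1, 2, 2, 1], [1, 2, 2, 2],
                   [2, 1, 2, 1], [2, 1, 2, 2], [2, 2, 1, 1], [2, 2, 1, 2]])
    removal = np.array([78, 74, 88, 85, 81, 79, 92, 90])  # % removal (invented)
    sn = 20 * np.log10(removal)                           # larger-is-better S/N

    for j, factor in enumerate(["current density", "conductivity",
                                "electrode distance", "pH"]):
        means = [sn[L8[:, j] == lvl].mean() for lvl in (1, 2)]
        print(f"{factor}: S/N level1={means[0]:.2f}, level2={means[1]:.2f}")
    ```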

  20. Fault detection of Tennessee Eastman process based on topological features and SVM

    NASA Astrophysics Data System (ADS)

    Zhao, Huiyang; Hu, Yanzhu; Ai, Xinbo; Hu, Yu; Meng, Zhen

    2018-03-01

    Fault detection in industrial processes is a popular research topic. Although the distributed control system (DCS) has been introduced to monitor the state of industrial processes, it still cannot satisfy all the requirements for fault detection in all industrial systems. In this paper, we propose a novel method based on topological features and the support vector machine (SVM) for fault detection in industrial processes. The proposed method takes global information of the measured variables into account through a complex network model and uses an SVM to predict whether a system has developed a fault. The proposed method can be divided into four steps: network construction, network analysis, model training and model testing. Finally, we apply the model to the Tennessee Eastman process (TEP). The results show that this method works well and can be a useful supplement for fault detection in industrial processes.
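
    A rough sketch of such a pipeline under stated assumptions (a correlation-threshold network per data window, three simple topological features, a default RBF-SVM); surrogate windows stand in for the Tennessee Eastman data.

    ```python
    # Hedged sketch: correlation network per window -> topological features -> SVM.
    import numpy as np
    import networkx as nx
    from sklearn.svm import SVC

    def topo_features(window, thresh=0.6):
        corr = np.corrcoef(window.T)                 # variable-by-variable correlation
        adj = (np.abs(corr) > thresh).astype(int)
        np.fill_diagonal(adj, 0)
        G = nx.from_numpy_array(adj)
        degrees = [d for _, d in G.degree()]
        return [nx.density(G), nx.average_clustering(G), float(np.mean(degrees))]

    rng = np.random.default_rng(0)
    def window(faulty):
        w = rng.normal(size=(200, 10))               # 200 samples x 10 variables
        if faulty:
            w[:, 5:] += w[:, :5]                     # fault couples variable groups
        return w

    X = np.array([topo_features(window(f)) for f in [False] * 30 + [True] * 30])
    y = np.array([0] * 30 + [1] * 30)
    clf = SVC().fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```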

  1. Hurst Estimation of Scale Invariant Processes with Stationary Increments and Piecewise Linear Drift

    NASA Astrophysics Data System (ADS)

    Modarresi, N.; Rezakhah, S.

    The characteristic feature of discrete scale invariant (DSI) processes is the invariance of their finite dimensional distributions under dilation by a certain scaling factor. A DSI process with piecewise linear drift and stationary increments inside prescribed scale intervals is introduced and studied. To identify the structure of the process, we first determine the scale intervals and their linear drifts and eliminate them. Then, a new method for the estimation of the Hurst parameter of such DSI processes is presented and applied to some period of the Dow Jones indices. This method is based on a fixed number of equally spaced samples inside successive scale intervals. We also present an efficient method for estimating the Hurst parameter of self-similar processes with stationary increments, and compare its performance with the celebrated FA, DFA and DMA methods on simulated fractional Brownian motion (fBm) data.
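
    As a point of reference for the benchmark estimators mentioned above, here is a compact DFA sketch for the Hurst exponent; the input is crude surrogate data (white Gaussian noise, for which the expected exponent is about 0.5), not fBm generated as in the paper.

    ```python
    # Compact detrended fluctuation analysis (DFA) sketch for the Hurst exponent.
    import numpy as np

    def dfa_hurst(x, scales=(8, 16, 32, 64, 128)):
        y = np.cumsum(x - np.mean(x))                 # integrated profile
        F = []
        for s in scales:
            n = len(y) // s
            segs = y[:n * s].reshape(n, s)
            t = np.arange(s)
            # detrend each segment with a linear fit, collect RMS fluctuation
            res = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segs]
            F.append(np.sqrt(np.mean(np.concatenate(res) ** 2)))
        return np.polyfit(np.log(scales), np.log(F), 1)[0]  # log-log slope

    x = np.random.default_rng(0).normal(size=4000)    # white noise, H ~ 0.5
    print(f"DFA exponent ~ {dfa_hurst(x):.2f}")
    ```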

  2. Applying Process Improvement Methods to Clinical and Translational Research: Conceptual Framework and Case Examples

    PubMed Central

    Selker, Harry P.; Leslie, Laurel K.

    2015-01-01

    There is growing appreciation that process improvement holds promise for improving quality and efficiency across the translational research continuum, but frameworks for such programs are not often described. The purpose of this paper is to present a framework and case examples of a Research Process Improvement Program implemented at Tufts CTSI. To promote research process improvement, we developed online training seminars, workshops, and in-person consultation models to describe core process improvement principles and methods, demonstrate the use of improvement tools, and illustrate the application of these methods in case examples. We implemented these methods, as well as relational coordination theory, with junior researchers, pilot funding awardees, our CTRC, and CTSI resource and service providers. The program focuses on capacity building to address common process problems and quality gaps that threaten the efficient, timely and successful completion of clinical and translational studies. PMID:26332869

  3. Knowledge Management in Preserving Ecosystems: The Case of Seoul

    ERIC Educational Resources Information Center

    Lee, Jeongseok

    2009-01-01

    This study explores the utility of employing knowledge management as a framework for understanding how public managers perform ecosystem management. It applies the grounded theory method to build a model. The model is generated by applying the concept of knowledge process to an investigation of how the urban ecosystem is publicly managed by civil…

  4. EEG feature selection method based on decision tree.

    PubMed

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

    This paper aims to solve the automated feature selection problem in brain computer interfaces (BCI). In order to automate the feature selection process, we proposed a novel EEG feature selection method based on decision trees (DT). During the electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on the decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are a series of non-linear signals, a generalized linear classifier named the support vector machine (SVM) was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on decision trees to BCI Competition II dataset Ia, and the experiment showed encouraging results.
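
    A hedged sketch of the pipeline described above (PCA features, decision-tree-based selection, then an SVM), on random stand-in data rather than the BCI Competition II Ia recordings.

    ```python
    # Illustrative pipeline: PCA -> tree-importance feature selection -> SVM.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.feature_selection import SelectFromModel
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(268, 896))      # trials x flattened EEG samples (stand-in)
    y = rng.integers(0, 2, size=268)     # two-class labels (stand-in)

    pipe = make_pipeline(
        PCA(n_components=30),
        SelectFromModel(DecisionTreeClassifier(random_state=0)),  # keep informative PCs
        SVC(kernel="linear"),
    ).fit(X, y)
    print("training accuracy:", pipe.score(X, y))
    ```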

  5. Applying a foil queue micro-electrode in micro-EDM to fabricate a 3D micro-structure

    NASA Astrophysics Data System (ADS)

    Xu, Bin; Guo, Kang; Wu, Xiao-yu; Lei, Jian-guo; Liang, Xiong; Guo, Deng-ji; Ma, Jiang; Cheng, Rong

    2018-05-01

    Applying a 3D micro-electrode in micro electrical discharge machining (micro-EDM) can fabricate a 3D micro-structure with an up and down reciprocating method. However, this processing method has some shortcomings, such as a low success rate and a complex process for the fabrication of 3D micro-electrodes. Focusing on these shortcomings, this paper proposes a novel 3D micro-EDM process based on a foil queue micro-electrode. Firstly, a 3D micro-electrode is discretized into several foil micro-electrodes, which together constitute a foil queue micro-electrode. Then, based on the planned process path, the foil micro-electrodes are applied in micro-EDM sequentially, and the micro-EDM results of the individual foil micro-electrodes superimpose to form the 3D micro-structure. However, a step effect will occur on the 3D micro-structure surface, which has an adverse effect on the 3D micro-structure. To tackle this problem, this paper proposes to reduce this adverse effect by rounded corner wear at the end of the foil micro-electrode and studies the impact of machining parameters on rounded corner wear and the step effect on the micro-structure surface. Finally, using a wire cutting voltage of 80 V, a current of 0.5 A and a pulse width modulation ratio of 1:4, the foil queue micro-electrode was fabricated by wire electrical discharge machining. Then, using a pulse width of 100 ns, a pulse interval of 200 ns, a voltage of 100 V and 304# stainless steel as the workpiece material, the foil queue micro-electrode was applied in micro-EDM to process a 3D micro-structure with hemispherical features, which verified the feasibility of this process.

  6. Automatic Query Formulations in Information Retrieval.

    ERIC Educational Resources Information Center

    Salton, G.; And Others

    1983-01-01

    Introduces methods designed to reduce role of search intermediaries by generating Boolean search formulations automatically using term frequency considerations from natural language statements provided by system patrons. Experimental results are supplied and methods are described for applying automatic query formulation process in practice.…

  7. Applying quantum principles to psychology

    NASA Astrophysics Data System (ADS)

    Busemeyer, Jerome R.; Wang, Zheng; Khrennikov, Andrei; Basieva, Irina

    2014-12-01

    This article starts out with a detailed example illustrating the utility of applying quantum probability to psychology. Then it describes several alternative mathematical methods for mapping fundamental quantum concepts (such as state preparation, measurement, state evolution) to fundamental psychological concepts (such as stimulus, response, information processing). For state preparation, we consider both pure states and densities with mixtures. For measurement, we consider projective measurements and positive operator valued measurements. The advantages and disadvantages of each method with respect to applications in psychology are discussed.

  8. [Application of the mixed programming with Labview and Matlab in biomedical signal analysis].

    PubMed

    Yu, Lu; Zhang, Yongde; Sha, Xianzheng

    2011-01-01

    This paper introduces the method of mixed programming with Labview and Matlab, and applies this method in a pulse wave pre-processing and feature detecting system. The method has been proved suitable, efficient and accurate, which has provided a new kind of approach for biomedical signal analysis.

  9. Errors in patient specimen collection: application of statistical process control.

    PubMed

    Dzik, Walter Sunny; Beckman, Neil; Selleng, Kathleen; Heddle, Nancy; Szczepiorkowski, Zbigniew; Wendel, Silvano; Murphy, Michael

    2008-10-01

    Errors in the collection and labeling of blood samples for pretransfusion testing increase the risk of transfusion-associated patient morbidity and mortality. Statistical process control (SPC) is a recognized method to monitor the performance of a critical process. An easy-to-use SPC method was tested to determine its feasibility as a tool for monitoring quality in transfusion medicine. SPC control charts were adapted to a spreadsheet presentation. Data tabulating the frequency of mislabeled and miscollected blood samples from 10 hospitals in five countries from 2004 to 2006 were used to demonstrate the method. Control charts were produced to monitor process stability. The participating hospitals found the SPC spreadsheet very suitable to monitor the performance of the sample labeling and collection and applied SPC charts to suit their specific needs. One hospital monitored subcategories of sample error in detail. A large hospital monitored the number of wrong-blood-in-tube (WBIT) events. Four smaller-sized facilities, each following the same policy for sample collection, combined their data on WBIT samples into a single control chart. One hospital used the control chart to monitor the effect of an educational intervention. A simple SPC method is described that can monitor the process of sample collection and labeling in any hospital. SPC could be applied to other critical steps in the transfusion processes as a tool for biovigilance and could be used to develop regional or national performance standards for pretransfusion sample collection. A link is provided to download the spreadsheet for free.
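
    A spreadsheet-style p-chart of the kind described can be computed in a few lines. The monthly counts below are invented for illustration; the limits are the usual 3-sigma binomial limits around the pooled proportion.

    ```python
    # Minimal p-chart sketch for the fraction of mislabeled samples per month.
    import numpy as np

    mislabeled = np.array([12, 9, 15, 11, 8, 14, 10, 13])            # events (invented)
    collected = np.array([4100, 3950, 4300, 4020, 3880, 4150, 4060, 4200])

    p = mislabeled / collected
    p_bar = mislabeled.sum() / collected.sum()                       # center line
    sigma = np.sqrt(p_bar * (1 - p_bar) / collected)                 # per-month sigma
    ucl = p_bar + 3 * sigma
    lcl = np.clip(p_bar - 3 * sigma, 0, None)

    for i, (pi, u, l) in enumerate(zip(p, ucl, lcl), 1):
        flag = "OUT" if (pi > u or pi < l) else "ok"
        print(f"month {i}: p={pi:.4f}  limits=({l:.4f}, {u:.4f})  {flag}")
    ```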

  10. Probabilistic framework for product design optimization and risk management

    NASA Astrophysics Data System (ADS)

    Keski-Rahkonen, J. K.

    2018-05-01

    Probabilistic methods have gradually gained ground within engineering practices, but currently it is still the industry standard to use deterministic safety margin approaches for dimensioning components and qualitative methods to manage product risks. These methods are suitable for baseline design work, but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on how to predict failure probabilities for mechanical components and, furthermore, on how to optimize reliability through life cycle cost analysis. This paper reviews the literature for existing methods and tries to harness their best features and simplify the process to be applicable in practical engineering work. The recommended process applies the Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it adds to the existing literature by introducing a practical framework for using probabilistic models in quantitative risk management and product life cycle cost optimization. The main focus is on mechanical failure modes due to the well-developed methods used to predict these types of failures. However, the same framework can be applied to any type of failure mode as long as predictive models can be developed.
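
    A minimal sketch of the load-resistance Monte Carlo step, assuming lognormal load and resistance distributions; the distributions and their parameters are illustrative assumptions, not values from the paper.

    ```python
    # Monte Carlo estimate of the failure probability P(R < L) for a
    # load-resistance model with assumed lognormal distributions.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 1_000_000
    resistance = rng.lognormal(mean=np.log(500), sigma=0.08, size=n)  # MPa (assumed)
    load = rng.lognormal(mean=np.log(350), sigma=0.15, size=n)        # MPa (assumed)

    p_fail = np.mean(resistance < load)
    print(f"estimated failure probability: {p_fail:.2e}")
    # p_fail * cost_of_failure would then feed the life cycle cost optimization
    ```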

  11. Applying high resolution remote sensing image and DEM to falling boulder hazard assessment

    NASA Astrophysics Data System (ADS)

    Huang, Changqing; Shi, Wenzhong; Ng, K. C.

    2005-10-01

    Assessing boulder fall hazard generally requires obtaining information about the boulders. The conventional approach of extensive mapping and surveying fieldwork is time-consuming, laborious and dangerous. This paper therefore proposes applying image processing technology to extract boulders and assess boulder fall hazard from high resolution remote sensing images. The method can replace the conventional approach and extract boulder information with high accuracy, including boulder size, shape and height, and the slope and aspect of its position. With this boulder information, the assessment, prevention and mitigation of boulder fall hazards can be supported.

  12. Nondestructive online testing method for friction stir welding using acoustic emission

    NASA Astrophysics Data System (ADS)

    Levikhina, Anastasiya

    2017-12-01

    The paper reviews the possibility of applying the method of acoustic emission for online monitoring of the friction stir welding process. It is shown that acoustic emission allows the detection of weld defects and their location in real time. The energy of an acoustic signal and the median frequency are suggested to be used as informative parameters. The method of calculating the median frequency with the use of a short time Fourier transform is applied for the identification of correlations between the defective weld structure and properties of the acoustic emission signals received during welding.

  13. Automatic extraction of planetary image features

    NASA Technical Reports Server (NTRS)

    LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)

    2013-01-01

    A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, a watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features, such as, small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as close contours in the gradient to be segmented.
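
    The rock-extraction step might look roughly as follows with scikit-image; note that the Sobel gradient is used here as a stand-in for the Canny-based gradient named in the patent, and the image is synthetic.

    ```python
    # Hedged sketch: watershed segmentation on an edge-strength map to
    # isolate small bright regions ("rocks") in a synthetic image.
    import numpy as np
    from skimage.filters import sobel
    from skimage.segmentation import watershed
    from skimage.feature import peak_local_max

    rng = np.random.default_rng(0)
    image = rng.normal(0.1, 0.02, (128, 128))
    for r, c in rng.integers(10, 118, size=(12, 2)):   # paste synthetic "rocks"
        image[r - 2:r + 3, c - 2:c + 3] += 0.5

    gradient = sobel(image)                            # stand-in for the Canny gradient
    coords = peak_local_max(image, min_distance=5, threshold_abs=0.3)
    markers = np.zeros(image.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    labels = watershed(gradient, markers)              # close gradient contours become regions
    print("segments found:", labels.max())
    ```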

  14. Capillary zone electrophoresis method for a highly glycosylated and sialylated recombinant protein: development, characterization and application for process development.

    PubMed

    Zhang, Le; Lawson, Ken; Yeung, Bernice; Wypych, Jette

    2015-01-06

    A purity method based on capillary zone electrophoresis (CZE) has been developed for the separation of isoforms of a highly glycosylated protein. The separation was found to be driven by the number of sialic acids attached to each isoform. The method has been characterized using orthogonal assays and shown to have excellent specificity, precision and accuracy. We have demonstrated the CZE method is a useful in-process assay to support cell culture and purification development of this glycoprotein. Compared to isoelectric focusing (IEF), the CZE method provides more quantitative results and higher sample throughput with excellent accuracy, qualities that are required for process development. In addition, the CZE method has been applied in the stability testing of purified glycoprotein samples.

  15. RESTFul based heterogeneous Geoprocessing workflow interoperation for Sensor Web Service

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Chen, Nengcheng; Di, Liping

    2012-10-01

    Advanced sensors on board satellites offer detailed Earth observations. A workflow is one approach for designing, implementing and constructing a flexible and live link between these sensors' resources and users. It can coordinate, organize and aggregate the distributed sensor Web services to meet the requirement of a complex Earth observation scenario. A RESTFul based workflow interoperation method is proposed to integrate heterogeneous workflows into an interoperable unit. The Atom protocols are applied to describe and manage workflow resources. The XML Process Definition Language (XPDL) and Business Process Execution Language (BPEL) workflow standards are applied to structure a workflow that accesses sensor information and one that processes it separately. Then, a scenario for nitrogen dioxide (NO2) from a volcanic eruption is used to investigate the feasibility of the proposed method. The RESTFul based workflows interoperation system can describe, publish, discover, access and coordinate heterogeneous Geoprocessing workflows.

  16. The Goddard Profiling Algorithm (GPROF): Description and Current Applications

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Yang, Song; Stout, John E.; Grecu, Mircea

    2004-01-01

    Atmospheric scientists use different methods for interpreting satellite data. In the early days of satellite meteorology, the analysis of cloud pictures from satellites was primarily subjective. As computer technology improved, satellite pictures could be processed digitally, and mathematical algorithms were developed and applied to the digital images in different wavelength bands to extract information about the atmosphere in an objective way. The kind of mathematical algorithm one applies to satellite data may depend on the complexity of the physical processes that lead to the observed image, and how much information is contained in the satellite images both spatially and at different wavelengths. Imagery from satellite-borne passive microwave radiometers has limited horizontal resolution, and the observed microwave radiances are the result of complex physical processes that are not easily modeled. For this reason, a type of algorithm called a Bayesian estimation method is utilized to interpret passive microwave imagery in an objective, yet computationally efficient manner.
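
    Schematically, a Bayesian retrieval of this kind weights database candidates by how well their simulated brightness temperatures match the observation. The sketch below is a one-variable caricature with invented numbers, not the operational GPROF code.

    ```python
    # Schematic Bayesian retrieval: database average weighted by a Gaussian
    # likelihood of the observed brightness temperature (all values invented).
    import numpy as np

    rng = np.random.default_rng(3)
    db_rain = rng.gamma(2.0, 2.0, size=500)             # candidate rain rates, mm/h
    db_tb = 280 - 8 * db_rain + rng.normal(0, 2, 500)   # simulated Tb per candidate, K
    obs_tb, sigma = 240.0, 3.0                          # observed Tb and error std, K

    w = np.exp(-0.5 * ((obs_tb - db_tb) / sigma) ** 2)  # likelihood weights
    estimate = np.sum(w * db_rain) / np.sum(w)          # posterior-mean style estimate
    print(f"retrieved rain rate ~ {estimate:.2f} mm/h")
    ```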

  17. A trust region approach with multivariate Padé model for optimal circuit design

    NASA Astrophysics Data System (ADS)

    Abdel-Malek, Hany L.; Ebid, Shaimaa E. K.; Mohamed, Ahmed S. A.

    2017-11-01

    Since the optimization process requires a significant number of consecutive function evaluations, it is recommended to replace the function by an easily evaluated approximation model during the optimization process. The model suggested in this article is based on a multivariate Padé approximation. This model is constructed using data points of ?, where ? is the number of parameters. The model is updated over a sequence of trust regions. This model avoids the slow convergence of linear models of ? and has features of quadratic models that need interpolation data points of ?. The proposed approach is tested by applying it to several benchmark problems. Yield optimization using such a direct method is applied to some practical circuit examples. Minimax solution leads to a suitable initial point to carry out the yield optimization process. The yield is optimized by the proposed derivative-free method for active and passive filter examples.

  18. Towards better process understanding: chemometrics and multivariate measurements in manufacturing of solid dosage forms.

    PubMed

    Matero, Sanni; van Den Berg, Frans; Poutiainen, Sami; Rantanen, Jukka; Pajander, Jari

    2013-05-01

    The manufacturing of tablets involves many unit operations that possess multivariate and complex characteristics. The interactions between material characteristics and process related variation are presently not comprehensively analyzed, due to univariate detection methods. As a consequence, current best practice to control a typical process is to not allow process-related factors to vary, i.e. to lock the production parameters. The problem related to the lack of sufficient process understanding is still there: the variation within process and material properties is an intrinsic feature and cannot be compensated for with constant process parameters. Instead, a more comprehensive approach based on the use of multivariate tools for investigating processes should be applied. In the pharmaceutical field these methods are referred to as Process Analytical Technology (PAT) tools, which aim to achieve a thorough understanding of and control over the production process. PAT provides the framework for measurement as well as data analysis and control for in-depth understanding, leading to more consistent and safer drug products with fewer batch rejections. In the optimal situation, by applying these techniques, destructive end-product testing could be avoided. In this paper the most prominent multivariate data analysis and measuring tools within tablet manufacturing and basic research on operations are reviewed. Copyright © 2013 Wiley Periodicals, Inc.

  19. Classifying multiple types of hand motions using electrocorticography during intraoperative awake craniotomy and seizure monitoring processes—case studies

    PubMed Central

    Xie, Tao; Zhang, Dingguo; Wu, Zehan; Chen, Liang; Zhu, Xiangyang

    2015-01-01

    In this work, some case studies were conducted to classify several kinds of hand motions from electrocorticography (ECoG) signals during intraoperative awake craniotomy and extraoperative seizure monitoring processes. Four subjects (P1, P2 with intractable epilepsy during seizure monitoring and P3, P4 with brain tumor during awake craniotomy) participated in the experiments. Subjects performed three types of hand motions (grasp, thumb-finger motion and index-finger motion) contralateral to the motor cortex covered with ECoG electrodes. Two methods were used for signal processing. Method I: an autoregressive (AR) model with the Burg method was applied to extract features, an additional waveform length (WL) feature was also considered, and linear discriminant analysis (LDA) was used as the classifier. Method II: stationary subspace analysis (SSA) was applied for data preprocessing, and the common spatial pattern (CSP) was used for feature extraction before the LDA decoding process. Applying Method I, the three-class accuracies of P1~P4 were 90.17, 96.00, 91.77, and 92.95% respectively. For Method II, the three-class accuracies of P1~P4 were 72.00, 93.17, 95.22, and 90.36% respectively. This study verified the possibility of decoding multiple hand motion types during an awake craniotomy, which is the first step toward dexterous neuroprosthetic control during surgical implantation, in order to verify the optimal placement of electrodes. The accuracy during awake craniotomy was comparable to results during seizure monitoring. This study also indicated that ECoG is a promising approach for precise identification of the eloquent cortex during awake craniotomy, and might form a promising BCI system that could benefit both patients and neurosurgeons. PMID:26483627
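
    A hedged sketch of Method I on surrogate signals: AR coefficients (a plain least-squares fit stands in for the Burg recursion used in the paper) plus the waveform-length feature, decoded with LDA.

    ```python
    # Illustrative Method-I-style pipeline: AR + waveform-length features -> LDA.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def ar_coeffs(x, order=4):
        # least-squares AR fit; the paper uses the Burg method instead
        X = np.column_stack([x[order - k - 1:-k - 1] for k in range(order)])
        return np.linalg.lstsq(X, x[order:], rcond=None)[0]

    def features(x):
        wl = np.sum(np.abs(np.diff(x)))          # waveform length
        return np.append(ar_coeffs(x), wl)

    rng = np.random.default_rng(0)
    def trial(f):                                # surrogate trial dominated by frequency f
        t = np.arange(512) / 512
        return np.sin(2 * np.pi * f * t) + 0.3 * rng.normal(size=512)

    X = np.array([features(trial(f)) for f in (8, 12, 20) for _ in range(30)])
    y = np.repeat([0, 1, 2], 30)                 # three surrogate motion classes
    print("LDA accuracy:", LinearDiscriminantAnalysis().fit(X, y).score(X, y))
    ```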

  20. Automated Chromium Plating Line for Gun Barrels

    DTIC Science & Technology

    1979-09-01

    consistent pretreatments and bath dwell times. Some of the advantages of automated processing include increased productivity (average of 20%) due to...when automated processing procedures are used. The current method of applying chromium electrodeposits to gun tubes is a manual, batch operation...currently practiced with rotary swaged gun tubes would substantially reduce the difficulties in automated processing.

  1. Evaluation of the clinical process in a critical care information system using the Lean method: a case study.

    PubMed

    Yusof, Maryati Mohd; Khodambashi, Soudabeh; Mokhtar, Ariffin Marzuki

    2012-12-21

    There are numerous applications for Health Information Systems (HIS) that support specific tasks in the clinical workflow. The Lean method has been used increasingly to optimize clinical workflows, by removing waste and shortening the delivery cycle time. There are a limited number of studies on Lean applications related to HIS. Therefore, we applied the Lean method to evaluate the clinical processes related to HIS, in order to evaluate its efficiency in removing waste and optimizing the process flow. This paper presents the evaluation findings of these clinical processes, with regards to a critical care information system (CCIS), known as IntelliVue Clinical Information Portfolio (ICIP), and recommends solutions to the problems that were identified during the study. We conducted a case study under actual clinical settings, to investigate how the Lean method can be used to improve the clinical process. We used observations, interviews, and document analysis, to achieve our stated goal. We also applied two tools from the Lean methodology, namely the Value Stream Mapping and the A3 problem-solving tools. We used eVSM software to plot the Value Stream Map and A3 reports. We identified a number of problems related to inefficiency and waste in the clinical process, and proposed an improved process model. The case study findings show that the Value Stream Mapping and the A3 reports can be used as tools to identify waste and integrate the process steps more efficiently. We also proposed a standardized and improved clinical process model and suggested an integrated information system that combines database and software applications to reduce waste and data redundancy.

  2. The extractive metallurgy of gold

    NASA Astrophysics Data System (ADS)

    Kongolo, K.; Mwema, M. D.

    1998-12-01

    Mössbauer spectroscopy has been successfully used in investigation of the gold compounds present in ores and the gold species which occur during the process metallurgy of this metal. This paper is a survey of the basic recovery methods and techniques used in extractive metallurgy of gold. Process fundamentals on mineral processing, ore leaching, zinc dust cementation, adsorption on activated carbon, electrowinning and refining are examined. The recovery of gold as a by-product of the copper industry is also described. Alternative processing methods are indicated in order to shed light on new interesting research topics where Mössbauer spectroscopy could be applied.

  3. Some Aspects in Photogrammetry Education at the Department of Geodesy and Cadastre of the VGTU

    NASA Astrophysics Data System (ADS)

    Ruzgienė, Birutė

    2008-03-01

    Education in photogrammetry is very important when applying photogrammetric methods for terrain mapping purposes, spatial data modelling, solving engineering tasks, measuring architectural monuments, etc. Over time, traditional photogrammetric technologies have been changing to a modern, fully digital photogrammetric workflow. The number of potential users of photogrammetric methods tends to increase because of the high degree of automation in photograph (image) processing. The main subjects in the photogrammetry (particularly digital photogrammetry) educational process are discussed. Different methods and digital systems are demonstrated with examples of aerial photogrammetry products. The main objective is to explore the possibilities for training in photogrammetric measurements. Special attention is paid to stereo plotting from aerial photography, applying an analytical technology modified for teaching. The integration of the functionality of Digital Photogrammetric Systems and Digital Image Processing is analysed as well, with the intention of extending the application areas and the possibilities for using modern technologies in urban mapping and land cadastre. The practical presentation of photo geometry restitution is implemented as a significant part of the studies. Interactive teaching of the main photogrammetric procedures and controlling systems is highly desirable and without doubt improves the quality of the educational process.

  4. A comprehensive strategy in the development of a cyclodextrin-modified microemulsion electrokinetic chromatographic method for the assay of diclofenac and its impurities: Mixture-process variable experiments and quality by design.

    PubMed

    Orlandini, S; Pasquini, B; Caprini, C; Del Bubba, M; Squarcialupi, L; Colotta, V; Furlanetto, S

    2016-09-30

    A comprehensive strategy involving the use of a mixture-process variable (MPV) approach and Quality by Design principles has been applied in the development of a capillary electrophoresis method for the simultaneous determination of the anti-inflammatory drug diclofenac and its five related substances. The selected operative mode consisted of microemulsion electrokinetic chromatography with the addition of methyl-β-cyclodextrin. The critical process parameters included both the mixture components (MCs) of the microemulsion and the process variables (PVs). The MPV approach allowed the simultaneous investigation of the effects of MCs and PVs on the critical resolution between diclofenac and its 2-deschloro-2-bromo analogue and on analysis time. MPV experiments were used both in the screening phase and in the Response Surface Methodology, making it possible to draw MCs and PVs contour plots and to find important interactions between MCs and PVs. Robustness testing was carried out by MPV experiments and validation was performed following International Conference on Harmonisation guidelines. The method was applied to a real sample of diclofenac gastro-resistant tablets. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Where do Students Go Wrong in Applying the Scientific Method?

    NASA Astrophysics Data System (ADS)

    Rubbo, Louis; Moore, Christopher

    2015-04-01

    Non-science majors completing a liberal arts degree are frequently required to take a science course. Ideally with the completion of a required science course, liberal arts students should demonstrate an improved capability in the application of the scientific method. In previous work we have demonstrated that this is possible if explicit instruction is spent on the development of scientific reasoning skills. However, even with explicit instruction, students still struggle to apply the scientific process. Counter to our expectations, the difficulty is not isolated to a single issue such as stating a testable hypothesis, designing an experiment, or arriving at a supported conclusion. Instead students appear to struggle with every step in the process. This talk summarizes our work looking at and identifying where students struggle in the application of the scientific method. This material is based upon work supported by the National Science Foundation under Grant No. 1244801.

  6. Teaching group theory using Rubik's cubes

    NASA Astrophysics Data System (ADS)

    Cornock, Claire

    2015-10-01

    Because it sits within a course at the applied end of the spectrum of maths degrees, the pure mathematics provision at Sheffield Hallam University has an applied spin. Pure topics are taught through practical examples such as knots, cryptography and automata. Rubik's cubes are used to teach group theory within a final-year pure elective based on physical examples. Abstract concepts, such as subgroups, homomorphisms and equivalence relations, are explored with the cubes first. In addition, conclusions about the cubes can be reached through algebraic approaches in a process of discovery. The teaching, learning and assessment methods are explored in this paper, along with the challenges and limitations of the methods. The physical use of Rubik's cubes within the classroom and the examination is presented, along with the use of peer support groups in this process. Students generally respond positively to the teaching methods and the use of the cubes.

  7. LC-MS/MS Identification of Species-Specific Muscle Peptides in Processed Animal Proteins.

    PubMed

    Marchis, Daniela; Altomare, Alessandra; Gili, Marilena; Ostorero, Federica; Khadjavi, Amina; Corona, Cristiano; Ru, Giuseppe; Cappelletti, Benedetta; Gianelli, Silvia; Amadeo, Francesca; Rumio, Cristiano; Carini, Marina; Aldini, Giancarlo; Casalone, Cristina

    2017-12-06

    An innovative analytical strategy has been applied to identify signature peptides able to distinguish among processed animal proteins (PAPs) derived from bovine, pig, fish, and milk products. Proteomics was first used to elucidate the proteome of each source. Starting from the identified proteins and using a funnel-based approach, a set of abundant and well-characterized peptides with suitable physico-chemical properties (signature peptides), specific for each source, was selected. An on-target LC-ESI-MS/MS method (MRM mode) was set up using standard peptides and was then applied to selectively identify the PAP source and to distinguish bovine carcass proteins from milk proteins. We believe that the method described meets the request of the European Commission, which has developed a strategy for gradually lifting the "total ban" toward a "species-to-species ban" and therefore requires official methods for species-specific discrimination in feed.

  8. Online low-field NMR spectroscopy for process control of an industrial lithiation reaction-automated data analysis.

    PubMed

    Kern, Simon; Meyer, Klas; Guhl, Svetlana; Gräßer, Patrick; Paul, Andrea; King, Rudibert; Maiwald, Michael

    2018-05-01

    Monitoring specific chemical properties is the key to chemical process control. Today, mainly optical online methods are applied, which require time- and cost-intensive calibration effort. NMR spectroscopy, being a direct comparison method that needs no calibration, has a high potential for enabling closed-loop process control while exhibiting short set-up times. Compact NMR instruments make NMR spectroscopy accessible in industrial and rough environments for process monitoring and advanced process control strategies. We present a fully automated data analysis approach based entirely on physically motivated spectral models as first-principles information (indirect hard modeling, IHM) and apply it to a pharmaceutical lithiation reaction in the framework of the European Union's Horizon 2020 project CONSENS. Online low-field NMR (LF NMR) data were analyzed by IHM with low calibration effort, compared to a multivariate PLS-R (partial least squares regression) approach, and both were validated using online high-field NMR (HF NMR) spectroscopy. Graphical abstract: NMR sensor module for monitoring the aromatic coupling of 1-fluoro-2-nitrobenzene (FNB) with aniline to 2-nitrodiphenylamine (NDPA) using lithium bis(trimethylsilyl)amide (Li-HMDS) in continuous operation; online 43.5 MHz low-field NMR (LF) was compared to 500 MHz high-field NMR spectroscopy (HF) as the reference method.

  9. Negotiating a Systems Development Method

    NASA Astrophysics Data System (ADS)

    Karlsson, Fredrik; Hedström, Karin

    Systems development methods (or methods) are often applied in tailored versions to fit the actual situation. Method tailoring is viewed in most of the existing literature as either (a) a highly rational process with the method engineer as the driver, where the project members are passive information providers, or (b) an unstructured process where the systems developer makes individual choices, a selection process without any driver. The purpose of this chapter is to illustrate that important design decisions during method tailoring are made by project members through negotiation. The study has been carried out using the perspective of actor-network theory. Our narratives depict method tailoring as more complex than (a) and (b): the driver role rotates between the project members, and design decisions are based on influences from several project members. However, these design decisions are not consensus decisions.

  10. A comparison of ensemble post-processing approaches that preserve correlation structures

    NASA Astrophysics Data System (ADS)

    Schefzik, Roman; Van Schaeybroeck, Bert; Vannitsem, Stéphane

    2016-04-01

    Despite addressing the major sources of uncertainty, ensemble forecasts exhibit biases and dispersion errors and are therefore known to benefit from calibration, or statistical post-processing. For instance, the ensemble model output statistics (EMOS) method, also known as the non-homogeneous regression approach (Gneiting et al., 2005), strongly improves forecast skill. EMOS is based on fitting and adjusting a parametric probability density function (PDF). However, EMOS and other common post-processing approaches apply to a single weather quantity at a single location for a single look-ahead time; they are therefore unable to take spatial, inter-variable and temporal dependence structures into account. Recently, much research effort has been invested both in designing post-processing methods that overcome this drawback and in verification methods that can detect dependence structures. New verification methods are applied to two classes of post-processing methods, both generating physically coherent ensembles. The first class uses ensemble copula coupling (ECC), which starts from EMOS but adjusts the rank structure (Schefzik et al., 2013). The second class is a member-by-member post-processing (MBM) approach that maps each raw ensemble member to a corrected one (Van Schaeybroeck and Vannitsem, 2015). We compare variants of the EMOS-ECC and MBM classes and highlight a specific theoretical connection between them. All post-processing variants are applied in the context of the ensemble system of the European Centre for Medium-Range Weather Forecasts (ECMWF) and compared using multivariate verification tools, including the energy score, the variogram score (Scheuerer and Hamill, 2015) and the band depth rank histogram (Thorarinsdottir et al., 2015). References: Gneiting, Raftery, Westveld and Goldman, 2005: Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation. Mon. Wea. Rev., 133, 1098-1118. Scheuerer and Hamill, 2015: Variogram-based proper scoring rules for probabilistic forecasts of multivariate quantities. Mon. Wea. Rev., 143, 1321-1334. Schefzik, Thorarinsdottir and Gneiting, 2013: Uncertainty quantification in complex simulation models using ensemble copula coupling. Statistical Science, 28, 616-640. Thorarinsdottir, Scheuerer and Heinz, 2015: Assessing the calibration of high-dimensional ensemble forecasts using rank histograms. arXiv:1310.0236. Van Schaeybroeck and Vannitsem, 2015: Ensemble post-processing using member-by-member approaches: theoretical aspects. Q. J. R. Meteorol. Soc., 141, 807-818.
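
    The univariate EMOS step that both classes build on can be made concrete with a short sketch. The following is a minimal illustration, assuming a Gaussian predictive PDF N(a + b·mean, c + d·variance) fitted by minimum CRPS estimation on synthetic data; it is a sketch of the general technique, not the implementation used in the study.

    ```python
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize

    def crps_normal(mu, sigma, y):
        # Closed-form CRPS of a normal forecast N(mu, sigma^2) for observation y.
        z = (y - mu) / sigma
        return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

    def fit_emos(ens, obs):
        """Fit N(a + b*mean, c + d*var) by minimum CRPS estimation."""
        m, v = ens.mean(axis=1), ens.var(axis=1)

        def cost(p):
            a, b, c, d = p
            sigma = np.sqrt(np.maximum(c + d * v, 1e-6))  # keep variance positive
            return crps_normal(a + b * m, sigma, obs).mean()

        return minimize(cost, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead").x

    # Toy example: a biased, underdispersive 10-member ensemble.
    rng = np.random.default_rng(0)
    truth = rng.normal(size=500)
    ens = truth[:, None] + 0.5 + 0.3 * rng.normal(size=(500, 10))
    print(fit_emos(ens, truth))  # a should absorb the +0.5 bias
    ```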

  11. A framework of knowledge creation processes in participatory simulation of hospital work systems.

    PubMed

    Andersen, Simone Nyholm; Broberg, Ole

    2017-04-01

    Participatory simulation (PS) is a method to involve workers in simulating and designing their own future work system. Existing PS studies have focused on analysing the outcome, and minimal attention has been devoted to the process of creating this outcome. In order to study this process, we suggest applying a knowledge creation perspective. The aim of this study was to develop a framework describing the process of how ergonomics knowledge is created in PS. Video recordings from three projects applying PS of hospital work systems constituted the foundation of process mining analysis. The analysis resulted in a framework revealing the sources of ergonomics knowledge creation as sequential relationships between the activities of simulation participants sharing work experiences; experimenting with scenarios; and reflecting on ergonomics consequences. We argue that this framework reveals the hidden steps of PS that are essential when planning and facilitating PS that aims at designing work systems. Practitioner Summary: When facilitating participatory simulation (PS) in work system design, achieving an understanding of the PS process is essential. By applying a knowledge creation perspective and process mining, we investigated the knowledge-creating activities constituting the PS process. The analysis resulted in a framework of the knowledge-creating process in PS.

  12. Improved compression molding process

    NASA Technical Reports Server (NTRS)

    Heier, W. C.

    1967-01-01

    Modified compression molding process produces plastic molding compounds that are strong, homogeneous, free of residual stresses, and have improved ablative characteristics. The conventional method is modified by applying a vacuum to the mold during the molding cycle, using a volatile sink, and exercising precise control of the mold closure limits.

  13. Leak Detection by Acoustic Emission Monitoring. Phase 1. Feasibility Study

    DTIC Science & Technology

    1994-05-26

    various signal processing and noise discrimination techniques during the Data Processing task. C. TEST DESCRIPTION 1. Laboratory Tests Following normal...success in applying these methods to discriminating between the AE bursts generated by two close AE sources in a section of an aircraft structure

  14. Rapid method for controlling the correct labeling of products containing common octopus (Octopus vulgaris) and main substitute species (Eledone cirrhosa and Dosidicus gigas) by fast real-time PCR.

    PubMed

    Espiñeira, Montserrat; Vieites, Juan M

    2012-12-15

    The TaqMan real-time PCR has the highest potential for automation, therefore representing the currently most suitable method for screening, allowing the detection of fraudulent or unintentional mislabeling of species. This work describes the development of a real-time polymerase chain reaction (RT-PCR) system for the detection and identification of common octopus (Octopus vulgaris) and its main substitute species (Eledone cirrhosa and Dosidicus gigas). The technique is notable for combining simplicity, speed, sensitivity and specificity in a homogeneous assay. The method can be applied to all kinds of products: fresh, frozen and processed, including those that have undergone intensive transformation. The methodology was validated to check how the degree of food processing affects the method and the detection of each species. Moreover, it was applied to 34 commercial samples to evaluate the labeling of products made from these species. The methodology developed here is useful for checking compliance with labeling regulations for seafood products, for verifying traceability in commercial trade, and for fisheries control.

  15. Application of Quality by Design Approach to Bioanalysis: Development of a Method for Elvitegravir Quantification in Human Plasma.

    PubMed

    Baldelli, Sara; Marrubini, Giorgio; Cattaneo, Dario; Clementi, Emilio; Cerea, Matteo

    2017-10-01

    The application of Quality by Design (QbD) principles in clinical laboratories can help to develop an analytical method through a systematic approach, providing a significant advance over the traditional heuristic and empirical methodology. In this work, we applied for the first time the QbD concept in the development of a method for drug quantification in human plasma using elvitegravir as the test molecule. The goal of the study was to develop a fast and inexpensive quantification method, with precision and accuracy as requested by the European Medicines Agency guidelines on bioanalytical method validation. The method was divided into operative units, and for each unit critical variables affecting the results were identified. A risk analysis was performed to select critical process parameters that should be introduced in the design of experiments (DoEs). Different DoEs were used depending on the phase of advancement of the study. Protein precipitation and high-performance liquid chromatography-tandem mass spectrometry were selected as the techniques to be investigated. For every operative unit (sample preparation, chromatographic conditions, and detector settings), a model based on factors affecting the responses was developed and optimized. The obtained method was validated and clinically applied with success. To the best of our knowledge, this is the first investigation thoroughly addressing the application of QbD to the analysis of a drug in a biological matrix applied in a clinical laboratory. The extensive optimization process generated a robust method compliant with its intended use. The performance of the method is continuously monitored using control charts.

  16. How Qualitative Methods Can be Used to Inform Model Development.

    PubMed

    Husbands, Samantha; Jowett, Susan; Barton, Pelham; Coast, Joanna

    2017-06-01

    Decision-analytic models play a key role in informing healthcare resource allocation decisions. However, there are ongoing concerns with the credibility of models. Modelling methods guidance can encourage good practice within model development, but its value is dependent on its ability to address the areas that modellers find most challenging. Further, it is important that modelling methods and related guidance are continually updated in light of any new approaches that could potentially enhance model credibility. The objective of this article was to highlight the ways in which qualitative methods have been used and recommended to inform decision-analytic model development and enhance modelling practices. With reference to the literature, the article discusses two key ways in which qualitative methods can be, and have been, applied. The first approach involves using qualitative methods to understand and inform general and future processes of model development, and the second, using qualitative techniques to directly inform the development of individual models. The literature suggests that qualitative methods can improve the validity and credibility of modelling processes by providing a means to understand existing modelling approaches that identifies where problems are occurring and further guidance is needed. It can also be applied within model development to facilitate the input of experts to structural development. We recommend that current and future model development would benefit from the greater integration of qualitative methods, specifically by studying 'real' modelling processes, and by developing recommendations around how qualitative methods can be adopted within everyday modelling practice.

  17. Method of removing the effects of electrical shorts and shunts created during the fabrication process of a solar cell

    DOEpatents

    Nostrand, Gerald E.; Hanak, Joseph J.

    1979-01-01

    A method of removing the effects of electrical shorts and shunts created during the fabrication process, and of improving the performance of a solar cell with a thick-film cermet electrode opposite the incident surface, by applying a reverse bias voltage of sufficient magnitude to burn out the electrical shorts and shunts but less than the breakdown voltage of the solar cell.

  18. An applied study using systems engineering methods to prioritize green systems options

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sonya M; Macdonald, John M

    2009-01-01

    For many years, there have been questions about the effectiveness of applying different green solutions. If you're building a home and wish to use green technologies, where do you start? While all technologies sound promising, which will perform the best over time? All this has to be considered within the cost and schedule of the project. The amount of information available on the topic can be overwhelming. We seek to examine if Systems Engineering methods can be used to help people choose and prioritize technologies that fit within their project and budget. Several methods are used to gain perspective into how to select the green technologies, such as the Analytic Hierarchy Process (AHP) and Kepner-Tregoe. In our study, subjects applied these methods to analyze cost, schedule, and trade-offs. Results will document whether the experimental approach is applicable to defining system priorities for green technologies.

  19. Method and Tool for Design Process Navigation and Automatic Generation of Simulation Models for Manufacturing Systems

    NASA Astrophysics Data System (ADS)

    Nakano, Masaru; Kubota, Fumiko; Inamori, Yutaka; Mitsuyuki, Keiji

    Manufacturing system designers should concentrate on designing and planning manufacturing systems instead of spending their efforts on creating the simulation models to verify the design. This paper proposes a method and its tool to navigate the designers through the engineering process and generate the simulation model automatically from the design results. The design agent also supports collaborative design projects among different companies or divisions with distributed engineering and distributed simulation techniques. The idea was implemented and applied to a factory planning process.

  20. Recovery and purification process development for monoclonal antibody production

    PubMed Central

    Ma, Junfen; Winter, Charles; Bayer, Robert

    2010-01-01

    Hundreds of therapeutic monoclonal antibodies (mAbs) are currently in development, and many companies have multiple antibodies in their pipelines. Current methodology used in recovery processes for these molecules is reviewed here. Basic unit operations such as harvest, Protein A affinity chromatography and additional polishing steps are surveyed. Alternative processes such as flocculation, precipitation and membrane chromatography are discussed. We also cover platform approaches to purification methods development and the use of high-throughput screening methods, and offer a view on future developments in purification methodology as applied to mAbs. PMID:20647768

  1. Numerical simulation of the control of the three-dimensional transition process in boundary layers

    NASA Technical Reports Server (NTRS)

    Kral, L. D.; Fasel, H. F.

    1990-01-01

    Surface heating techniques to control the three-dimensional laminar-turbulent transition process are numerically investigated for a water boundary layer. The Navier-Stokes and energy equations are solved using a fully implicit finite difference/spectral method. The spatially evolving boundary layer is simulated. Results of both passive and active methods of control are shown for small amplitude two-dimensional and three-dimensional disturbance waves. Control is also applied to the early stages of the secondary instability process using passive or active control techniques.

  2. Estimation of adsorption isotherm and mass transfer parameters in protein chromatography using artificial neural networks.

    PubMed

    Wang, Gang; Briskot, Till; Hahn, Tobias; Baumann, Pascal; Hubbuch, Jürgen

    2017-03-03

    Mechanistic modeling has repeatedly been applied with success in process development and control of protein chromatography. For each combination of adsorbate and adsorbent, the mechanistic models have to be calibrated. Some of the model parameters, such as system characteristics, can be determined reliably by applying well-established experimental methods, whereas others cannot be measured directly. In common practice of protein chromatography modeling, these parameters are identified with time-consuming methods such as frontal analysis combined with gradient experiments, curve fitting, or the combined Yamamoto approach. For new components in the chromatographic system, these traditional calibration approaches have to be conducted anew. In the presented work, a novel method for the calibration of mechanistic models based on artificial neural network (ANN) modeling was applied. An in silico screening of possible model parameter combinations was performed to generate learning material for the ANN model. Once the ANN model was trained to recognize chromatograms and to respond with the corresponding model parameter set, it was used to calibrate the mechanistic model from measured chromatograms. The ANN model's capability for parameter estimation was tested by predicting gradient elution chromatograms. The time-consuming model parameter estimation process itself could be reduced to milliseconds. The functionality of the method was successfully demonstrated in a study calibrating the transport-dispersive model (TDM) and the stoichiometric displacement model (SDM) for a protein mixture.
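
    A minimal sketch of the calibration idea follows: a toy forward model (a single Gaussian peak standing in for the mechanistic TDM/SDM) generates an in silico screening set, and an ANN is trained to invert chromatograms into parameters. The forward model, network size and parameter names are illustrative assumptions, not the paper's setup.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    t = np.linspace(0, 10, 200)  # retention-time grid

    def forward_model(k_eq, k_kin):
        # Toy stand-in for a mechanistic column model: peak position and
        # width depend on an "equilibrium" and a "kinetic" parameter.
        mu = 1.0 + 8.0 * k_eq
        sigma = 0.1 + 0.5 * k_kin
        return np.exp(-0.5 * ((t - mu) / sigma) ** 2)

    # In silico screening: simulate chromatograms over the parameter space.
    rng = np.random.default_rng(1)
    params = rng.uniform(0, 1, size=(2000, 2))
    chromatograms = np.array([forward_model(*p) for p in params])

    # Train the ANN to map chromatogram -> parameters (the inverse problem).
    ann = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=1)
    ann.fit(chromatograms, params)

    # A "measured" chromatogram with noise; the ANN returns a parameter
    # estimate that would calibrate the mechanistic model in milliseconds.
    true_p = np.array([0.4, 0.6])
    measured = forward_model(*true_p) + rng.normal(0, 0.01, t.size)
    print(ann.predict(measured.reshape(1, -1)))  # approximately [0.4, 0.6]
    ```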

  3. Parallel plan execution with self-processing networks

    NASA Technical Reports Server (NTRS)

    Dautrechy, C. Lynne; Reggia, James A.

    1989-01-01

    A critical issue for space operations is how to develop and apply advanced automation techniques to reduce the cost and complexity of working in space. In this context, it is important to examine how recent advances in self-processing networks can be applied for planning and scheduling tasks. For this reason, the feasibility of applying self-processing network models to a variety of planning and control problems relevant to spacecraft activities is being explored. Goals are to demonstrate that self-processing methods are applicable to these problems, and that MIRRORS/II, a general purpose software environment for implementing self-processing models, is sufficiently robust to support development of a wide range of application prototypes. Using MIRRORS/II and marker passing modelling techniques, a model of the execution of a Spaceworld plan was implemented. This is a simplified model of the Voyager spacecraft which photographed Jupiter, Saturn, and their satellites. It is shown that plan execution, a task usually solved using traditional artificial intelligence (AI) techniques, can be accomplished using a self-processing network. The fact that self-processing networks were applied to other space-related tasks, in addition to the one discussed here, demonstrates the general applicability of this approach to planning and control problems relevant to spacecraft activities. It is also demonstrated that MIRRORS/II is a powerful environment for the development and evaluation of self-processing systems.

  4. Light emitting diode with high aspect ratio submicron roughness for light extraction and methods of forming

    DOEpatents

    Li, Ting [Ventura, CA

    2011-04-26

    The surface morphology of an LED light emitting surface is changed by applying a reactive ion etch (RIE) process to the light emitting surface. High aspect ratio, submicron roughness is formed on the light emitting surface by transferring a thin film metal hard-mask having submicron patterns to the surface prior to applying a reactive ion etch process. The submicron patterns in the metal hard-mask can be formed using a low cost, commercially available nano-patterned template which is transferred to the surface with the mask. After subsequently binding the mask to the surface, the template is removed and the RIE process is applied for a time duration sufficient to change the morphology of the surface. The modified surface contains non-symmetric, submicron structures having a high aspect ratio which increase the efficiency of the device.

  5. Applied vs Basic Research: On Maintaining Your Balance with a Foot in Each Camp.

    ERIC Educational Resources Information Center

    Martin, David W.

    The paper discusses a number of issues concerning the practical usefulness of cognitive psychology research, and presents a case study of pilot training methods to illustrate a model of research processes that produces outcomes which contribute to both basic and applied research goals. Research studies are described as varying in the degree to…

  6. ProcessGene-Connect: SOA Integration between Business Process Models and Enactment Transactions of Enterprise Software Systems

    NASA Astrophysics Data System (ADS)

    Wasser, Avi; Lincoln, Maya

    In recent years, both practitioners and applied researchers have become increasingly interested in methods for integrating business process models and enterprise software systems through the deployment of enabling middleware. Integrative BPM research has mainly focused on the conversion of workflow notations into enacted application procedures, and less effort has been invested in enhancing the connectivity between design-level, non-workflow business process models and related enactment systems such as ERP, SCM and CRM. This type of integration is useful at several stages of an IT system lifecycle, from design and implementation through change management, upgrades and rollout. The paper presents an integration method that utilizes SOA for connecting business process models with corresponding enterprise software systems. The method is then demonstrated through an Oracle E-Business Suite procurement process and its ERP transactions.

  7. Proposal of Heuristic Algorithm for Scheduling of Print Process in Auto Parts Supplier

    NASA Astrophysics Data System (ADS)

    Matsumoto, Shimpei; Okuhara, Koji; Ueno, Nobuyuki; Ishii, Hiroaki

    We are interested in the print process within the manufacturing operations of an auto parts supplier as a real-world problem. The purpose of this research is to apply our scheduling technique, developed in the university, to the actual print process in a mass customization environment. Rationalization of the print process depends on lot sizing. The manufacturing lead time of the print process is long, and in the present method production is planned according to workers' experience and intuition. The construction of an efficient production system is an urgent problem. Therefore, in this paper, in order to shorten the entire manufacturing lead time and to reduce stock, we re-examine the usual lot-sizing rule based on a heuristic technique, and we propose an improved method that can produce a more efficient schedule.

  8. Scaling Laws Applied to a Modal Formulation of the Aeroservoelastic Equations

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S.

    2002-01-01

    A method of scaling is described that easily converts the aeroelastic equations of motion of a full-sized aircraft into ones of a wind-tunnel model. To implement the method, a set of rules is provided for the conversion process involving matrix operations with scale factors. In addition, a technique for analytically incorporating a spring mounting system into the aeroelastic equations is also presented. As an example problem, a finite element model of a full-sized aircraft is introduced from the High Speed Research (HSR) program to exercise the scaling method. With a set of scale factor values, a brief outline is given of a procedure to generate the first-order aeroservoelastic analytical model representing the wind-tunnel model. To verify the scaling process as applied to the example problem, the root-locus patterns from the full-sized vehicle and the wind-tunnel model are compared to see if the root magnitudes scale with the frequency scale factor value. Selected time-history results are given from a numerical simulation of an active-controlled wind-tunnel model to demonstrate the utility of the scaling process.

  9. Prediction in Health Domain Using Bayesian Networks Optimization Based on Induction Learning Techniques

    NASA Astrophysics Data System (ADS)

    Felgaer, Pablo; Britos, Paola; García-Martínez, Ramón

    A Bayesian network is a directed acyclic graph in which each node represents a variable and each arc a probabilistic dependency; Bayesian networks provide a compact form for representing knowledge and flexible methods of reasoning. Obtaining one from data is a learning process that is divided into two steps: structural learning and parametric learning. In this paper we define an automatic learning method that optimizes Bayesian networks applied to classification, using a hybrid method of learning that combines the advantages of the induction techniques of decision trees (TDIDT-C4.5) with those of Bayesian networks. The resulting method is applied to prediction in the health domain.
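
    Of the two learning steps, parametric learning for a fixed structure reduces to estimating conditional probability tables (CPTs) from relative frequencies. A minimal sketch on toy data follows; the variables and the structure are invented for illustration and are not from the paper.

    ```python
    import numpy as np
    import pandas as pd

    # Toy data for a fixed structure  age -> risk <- smoker  (structural
    # learning would normally search for these arcs; here we only do the
    # parametric step).
    rng = np.random.default_rng(3)
    n = 1000
    age = rng.integers(0, 2, n)       # 0 = young, 1 = old
    smoker = rng.integers(0, 2, n)
    p_risk = 0.1 + 0.3 * age + 0.4 * smoker
    risk = (rng.random(n) < p_risk).astype(int)
    df = pd.DataFrame({"age": age, "smoker": smoker, "risk": risk})

    # Parametric learning: the CPT P(risk=1 | age, smoker) as relative
    # frequencies of the data; recovers roughly 0.1/0.4/0.5/0.8.
    cpt = df.groupby(["age", "smoker"])["risk"].mean()
    print(cpt)
    ```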

  10. [An automatic peak detection method for LIBS spectrum based on continuous wavelet transform].

    PubMed

    Chen, Peng-Fei; Tian, Di; Qiao, Shu-Jun; Yang, Guang

    2014-07-01

    Spectrum peak detection in laser-induced breakdown spectroscopy (LIBS) is an essential step, but the presence of background and noise seriously disturbs the accuracy of peak positions. The present paper proposes a method for automatic peak detection in LIBS spectra that enhances the ability to find overlapping peaks and improves adaptivity. We introduce the ridge peak detection method based on the continuous wavelet transform to LIBS, discuss the choice of the mother wavelet, and optimize the scale factor and the shift factor. The method also improves on ridge peak detection with a ridge-correction step. The experimental results show that, compared with other peak detection methods (the direct comparison method, the derivative method and the ridge peak search method), our method has a significant advantage in its ability to distinguish overlapping peaks and in the precision of peak detection, and it can be applied to data processing in LIBS.
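
    Ridge-based CWT peak searching is available off the shelf, for example in SciPy. The sketch below is a generic illustration on synthetic data, not the paper's implementation (which adds the ridge-correction step and tunes the mother wavelet and the scale and shift factors).

    ```python
    import numpy as np
    from scipy.signal import find_peaks_cwt

    # Synthetic "spectrum": two overlapping peaks on a sloping background + noise.
    x = np.linspace(0, 100, 1000)
    rng = np.random.default_rng(2)
    spectrum = (np.exp(-0.5 * ((x - 40) / 1.5) ** 2)
                + 0.7 * np.exp(-0.5 * ((x - 44) / 1.5) ** 2)
                + 0.002 * x + 0.02 * rng.normal(size=x.size))

    # Ridge-based CWT peak search: a peak must persist as a ridge line across
    # a range of wavelet scales, which suppresses noise and the slow background.
    widths = np.arange(5, 40)            # scales, in samples
    peak_idx = find_peaks_cwt(spectrum, widths, min_snr=2)
    print(x[peak_idx])                   # should resolve the peaks near 40 and 44
    ```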

  11. Business Process Improvement Applied to Written Temporary Duty Travel Orders within the United States Air Force

    DTIC Science & Technology

    1993-12-01

    Generally Accepted Process While neither DoD Directives nor USAF Regulations specify exact mandatory TDY order processing methods, most USAF units...functional input. Finally, TDY order processing functional experts at Hanscom, Los Angeles and McClellan AFBs provided inputs based on their experiences...current electronic auditing capabilities. 81 DTPS Initiative. This DFAS-initiated action to standardize TDY order processing throughout DoD is currently

  12. Iodine retention during evaporative volume reduction

    DOEpatents

    Godbee, H.W.; Cathers, G.I.; Blanco, R.E.

    1975-11-18

    An improved method for retaining radioactive iodine in aqueous waste solutions during volume reduction is disclosed. The method applies to evaporative volume reduction processes whereby the decontaminated (evaporated) water can be returned safely to the environment. The method generally comprises isotopically diluting the waste solution with a nonradioactive iodide and maintaining the solution at a high pH during evaporation.

  13. Design and Implementation of Real-Time Vehicular Camera for Driver Assistance and Traffic Congestion Estimation

    PubMed Central

    Son, Sanghyun; Baek, Yunju

    2015-01-01

    As society has developed, the number of vehicles has increased and road conditions have become complicated, increasing the risk of crashes. Therefore, a service that provides safe vehicle control and various types of information to the driver is urgently needed. In this study, we designed and implemented a real-time traffic information system and a smart camera device for smart driver assistance systems. We selected a commercial device for the smart driver assistance systems, and applied a computer vision algorithm to perform image recognition. For application to the dynamic region of interest, dynamic frame skip methods were implemented to perform parallel processing in order to enable real-time operation. In addition, we designed and implemented a model to estimate congestion by analyzing traffic information. The performance of the proposed method was evaluated using images of a real road environment. We found that the processing time improved by 15.4 times when all the proposed methods were applied in the application. Further, we found experimentally that there was little or no change in the recognition accuracy when the proposed method was applied. Using the traffic congestion estimation model, we also found that the average error rate of the proposed model was 5.3%. PMID:26295230
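
    The dynamic ROI and dynamic frame-skip ideas can be sketched in a few lines: restrict the expensive recognition step to a road region, and skip frames whenever the inter-frame change is small. This is a loose illustration of the technique, not the paper's device code; the file name, ROI choice and thresholds are assumptions.

    ```python
    import cv2

    cap = cv2.VideoCapture("road.mp4")   # hypothetical input clip
    prev_gray, skip, max_skip = None, 0, 4
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if skip > 0:
            skip -= 1                    # dynamic frame skip: reuse last result
            continue
        roi = frame[frame.shape[0] // 2:, :]   # dynamic ROI: lower half (road)
        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            diff = cv2.absdiff(gray, prev_gray).mean()
            skip = max_skip if diff < 2.0 else 0   # little motion -> skip more
        prev_gray = gray
        # ... run the (expensive) recognition step on `roi` here ...
    cap.release()
    ```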

  14. Design and Implementation of Real-Time Vehicular Camera for Driver Assistance and Traffic Congestion Estimation.

    PubMed

    Son, Sanghyun; Baek, Yunju

    2015-08-18

    As society has developed, the number of vehicles has increased and road conditions have become complicated, increasing the risk of crashes. Therefore, a service that provides safe vehicle control and various types of information to the driver is urgently needed. In this study, we designed and implemented a real-time traffic information system and a smart camera device for smart driver assistance systems. We selected a commercial device for the smart driver assistance systems, and applied a computer vision algorithm to perform image recognition. For application to the dynamic region of interest, dynamic frame skip methods were implemented to perform parallel processing in order to enable real-time operation. In addition, we designed and implemented a model to estimate congestion by analyzing traffic information. The performance of the proposed method was evaluated using images of a real road environment. We found that the processing time improved by 15.4 times when all the proposed methods were applied in the application. Further, we found experimentally that there was little or no change in the recognition accuracy when the proposed method was applied. Using the traffic congestion estimation model, we also found that the average error rate of the proposed model was 5.3%.

  15. Empirical Force Fields for Mechanistic Studies of Chemical Reactions in Proteins.

    PubMed

    Das, A K; Meuwly, M

    2016-01-01

    Following chemical reactions in atomistic detail is one of the most challenging aspects of current computational approaches to chemistry. In this chapter the application of adiabatic reactive MD (ARMD) and its multistate version (MS-ARMD) is discussed. Both methods allow the study of bond-breaking and bond-forming processes in chemical and biological systems. Particular emphasis is put on practical aspects of applying the methods to investigate the dynamics of chemical reactions. The chapter closes with an outlook on possible generalizations of the methods discussed.

  16. System approach to modeling of industrial technologies

    NASA Astrophysics Data System (ADS)

    Toropov, V. S.; Toropov, E. S.

    2018-03-01

    The authors present a system of methods for modeling and improving industrial technologies. The system consists of an information part and a software part. The information part is structured information about industrial technologies, organized according to a template. The template has several essential categories used to improve the technological process and eliminate weaknesses in the process chain. The base category is the physical effect that takes place when the technological process proceeds. The software part of the system can apply various methods of creative search to the content stored in the information part. These methods pay particular attention to energy transformations in the technological process. Applying the system will allow a systematic approach to improving technologies and to obtaining new technical solutions.

  17. Fuzzy neural network methodology applied to medical diagnosis

    NASA Technical Reports Server (NTRS)

    Gorzalczany, Marian B.; Deutsch-Mcleish, Mary

    1992-01-01

    This paper presents a technique for building expert systems that combines the fuzzy-set approach with artificial neural network structures. The technique can effectively deal with the two types of medical knowledge that usually contribute to the process of medical diagnosis: a nonfuzzy kind and a fuzzy kind. Nonfuzzy numerical data are obtained from medical tests. Fuzzy linguistic rules describing the diagnosis process are provided by a human expert. The proposed method has been successfully applied in veterinary medicine as a support system in the diagnosis of canine liver diseases.

  18. Global Sensitivity Analysis for Process Identification under Model Uncertainty

    NASA Astrophysics Data System (ADS)

    Ye, M.; Dai, H.; Walker, A. P.; Shi, L.; Yang, J.

    2015-12-01

    The environmental system consists of various physical, chemical, and biological processes, and environmental models are built to simulate these processes and their interactions. For model building, improvement, and validation, it is necessary to identify important processes so that limited resources can be used to better characterize them. While global sensitivity analysis has been widely used to identify important processes, process identification has always been based on deterministic process conceptualization, which uses a single model to represent a process. However, environmental systems are complex, and it often happens that a single process can be simulated by multiple alternative models. Ignoring model uncertainty in process identification may lead to biased identification, in that processes identified as important may not be so in the real world. This study addresses this problem by developing a new method of global sensitivity analysis for process identification. The new method is based on the concepts of Sobol sensitivity analysis and model averaging. Similar to the Sobol analysis used to identify important parameters, our new method evaluates the change in variance when a process is fixed at its different conceptualizations. The variance accounts for both parametric and model uncertainty using model averaging. The method is demonstrated in a synthetic groundwater modeling study that considers a recharge process and a parameterization process, each with two alternative models. The important processes for groundwater flow and transport are evaluated using the new method. The method is mathematically general and can be applied to a wide range of environmental problems.
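
    One compact way to write the idea (our notation, not necessarily the authors') is a Sobol-style first-order index taken over process conceptualizations rather than over parameters:

    ```latex
    \[
    S_{K} \;=\; \frac{\operatorname{Var}_{K}\!\left[\,\mathrm{E}(\Delta \mid K)\,\right]}{\operatorname{Var}(\Delta)},
    \]
    ```

    where K ranges over the alternative models of the process, Δ is the model output, and E(Δ | K) is itself a model-averaged expectation over the parameters (and over the other processes' alternative models), so that both parametric and model uncertainty enter the variance.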

  19. The Finnish healthcare services lean management.

    PubMed

    Hihnala, Susanna; Kettunen, Lilja; Suhonen, Marjo; Tiirinki, Hanna

    2018-02-05

    Purpose: The purpose of this paper is to discuss health services managers' experiences of management in a special health-care unit and of development efforts from the point of view of the Lean method. Additionally, the aim is to deepen knowledge of the managers' work and of the nature of Lean method development processes in the workplace. The research focuses on those aspects and results of the Lean method that are currently being used in health-care environments. Design/methodology/approach: The data were collected through thematic interviews. The participants were nurse managers (n = 7) and medical managers (n = 7) who applied Lean management in their work at the University Hospital in the Northern Ostrobothnia Health Care District. The data were analysed with a qualitative content analysis. Findings: The findings concern a common set of values in specialized health-care services, the development of activities, and challenges for management in using the Lean method to improve personal management skills. Practical implications: Managers in specialized health-care services can develop and manage systematically with the help of the Lean method. This emphasizes assumptions, from the point of view of management, about systems development when the organization uses the Lean method. The research outcomes originate from specialized health-care settings in Finland in which the Lean method and its associated management principles have been implemented and applied to the delivery of health care. Originality/value: The study shows that the research results and in-depth knowledge of Lean method principles can be applied to health-care management and development processes. The research also describes health services managers' experiences of using the Lean method. In the future, these results can be used to improve Lean management skills, identify personal professional competencies and develop the skills required in development processes. The research findings can also be used in the training of health services managers in the health-care industry worldwide and help them cope with the pressure of repeated change.

  20. 2-D and 3-D Diffraction Stack Migration Method Using GPR: A Case Study in Canakkale (Turkey)

    NASA Astrophysics Data System (ADS)

    Çaǧlar Yalçiner, Cahit

    In this study, the ground-penetrating radar (GPR) method was applied to clandestine cemetery detection in Çanakkale (Dardanelles), west Turkey. The investigated area is a historical site that housed tent hospitals during World War I and was also used to bury soldiers who died during treatment in those hospitals. Because of agricultural activity, grave stones were moved by local people, and thus most of the graves were lost in the field. Forty-five GPR profiles were acquired with a GPR system (RAMAC) equipped with a shielded antenna of 250 MHz central frequency. After the main processing steps on the raw data, migration was applied to improve section resolution and the realism of the subsurface images. Although the anomalous zones are visible in the GPR results before migration, they became much more visible after migration in both the profiles and the 3D illustrations; the migrated GPR data were therefore preferred for locating the buried martyrdoms.
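
    The principle behind diffraction-stack migration can be stated compactly: under a constant wave speed v, a point scatterer at depth d beneath surface position x0 produces a two-way travel-time hyperbola in the radargram, and migration sums energy along such hyperbolas to collapse them back to their apexes. The standard constant-velocity relation (generic, not specific to this case study) is

    ```latex
    \[
    t(x) \;=\; \frac{2}{v}\sqrt{d^{2} + (x - x_{0})^{2}}
          \;=\; \sqrt{t_{0}^{2} + \frac{4\,(x - x_{0})^{2}}{v^{2}}},
    \qquad t_{0} = \frac{2d}{v},
    \]
    ```

    which is why unmigrated graves appear as hyperbolic tails while migrated sections show compact, correctly located targets.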

  1. Mapping Hydrothermal Alterations in the Muteh Gold Mining Area in Iran by using ASTER satellite Imagery data

    NASA Astrophysics Data System (ADS)

    Asadi Haroni, Hooshang; Hassan Tabatabaei, Seyed

    2016-04-01

    The Muteh gold mining area is located about 160 km NW of the town of Isfahan. Gold mineralization is of mesothermal type and is associated with silicic, sericitic and carbonate alterations, as well as with hematite and goethite. Image processing and interpretation were applied to ASTER satellite imagery covering about 400 km2 of the Muteh gold mining area to identify the hydrothermal alterations and iron oxides associated with gold mineralization. After preprocessing steps such as radiometric and geometric corrections, the image processing methods of Principal Components Analysis (PCA), Least Squares Fit (Ls-Fit) and Spectral Angle Mapper (SAM) were applied to the ASTER data to identify hydrothermal alterations and iron oxides. In this research, reference spectra of minerals such as chlorite, hematite, clay minerals and phengite, identified from laboratory spectral analysis of collected samples, were used to map the hydrothermal alterations. Finally, the identified hydrothermal alterations and iron oxides were validated by visiting and sampling some of the mapped sites.
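
    Of the methods named, SAM is the simplest to state: each pixel is assigned to the reference mineral spectrum with the smallest spectral angle. The sketch below is a generic SAM classifier; the array shapes and the angle threshold are illustrative assumptions, not the study's processing chain.

    ```python
    import numpy as np

    def spectral_angle(pixel, reference):
        """Angle (radians) between a pixel spectrum and a reference spectrum."""
        cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def sam_classify(cube, references, max_angle=0.1):
        """cube: (rows, cols, bands); references: (n_classes, bands).
        Returns the index of the best-matching reference per pixel, or -1."""
        rows, cols, bands = cube.shape
        flat = cube.reshape(-1, bands)
        angles = np.array([[spectral_angle(p, r) for r in references] for p in flat])
        best = angles.argmin(axis=1)
        best[angles.min(axis=1) > max_angle] = -1   # leave poor matches unclassified
        return best.reshape(rows, cols)
    ```

    Because the angle ignores the spectra's magnitudes, SAM is relatively insensitive to illumination differences, which is one reason it is popular for alteration mapping.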

  2. A Behavior-Analytic Conceptualization of the Side Effects of Psychotropic Medication

    ERIC Educational Resources Information Center

    Valdovinos, Maria G.; Kennedy, Craig H.

    2004-01-01

    A range of behavior--much deemed problematic by society--is treated with behavioral methods or psychotropic medications. Although the processes associated with behavioral interventions have been investigated using conceptual, experimental, and applied analyses, less is known about the behavioral processes associated with the use of psychotropic…

  3. Method and system for enabling real-time speckle processing using hardware platforms

    NASA Technical Reports Server (NTRS)

    Ortiz, Fernando E. (Inventor); Kelmelis, Eric (Inventor); Durbano, James P. (Inventor); Curt, Peterson F. (Inventor)

    2012-01-01

    An accelerator for the speckle atmospheric compensation algorithm may enable real-time speckle processing of video feeds, allowing the speckle algorithm to be applied in numerous real-time applications. The accelerator may be implemented in various forms, including hardware, software, and/or machine-readable media.

  4. Analysis of full disc Ca II K spectroheliograms. I. Photometric calibration and centre-to-limb variation compensation

    NASA Astrophysics Data System (ADS)

    Chatzistergos, Theodosios; Ermolli, Ilaria; Solanki, Sami K.; Krivova, Natalie A.

    2018-01-01

    Context. Historical Ca II K spectroheliograms (SHG) are unique in representing long-term variations of the solar chromospheric magnetic field. They usually suffer from numerous problems and lack photometric calibration. Thus accurate processing of these data is required to get meaningful results from their analysis. Aims: In this paper we aim at developing an automatic processing and photometric calibration method that provides precise and consistent results when applied to historical SHG. Methods: The proposed method is based on the assumption that the centre-to-limb variation of the intensity in quiet Sun regions does not vary with time. We tested the accuracy of the proposed method on various sets of synthetic images that mimic problems encountered in historical observations. We also tested our approach on a large sample of images randomly extracted from seven different SHG archives. Results: The tests carried out on the synthetic data show that the maximum relative errors of the method are generally <6.5%, while the average error is <1%, even if rather poor quality observations are considered. In the absence of strong artefacts the method returns images that differ from the ideal ones by <2% in any pixel. The method gives consistent values for both plage and network areas. We also show that our method returns consistent results for images from different SHG archives. Conclusions: Our tests show that the proposed method is more accurate than other methods presented in the literature. Our method can also be applied to process images from photographic archives of solar observations at other wavelengths than Ca II K.
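
    The core assumption, a time-invariant quiet-Sun centre-to-limb variation, suggests compensating each image by an azimuthal profile in mu, the cosine of the heliocentric angle. The following is a crude single-image sketch of that idea using a per-ring median; the method described in the paper is considerably more elaborate, and the function name and ring count here are illustrative.

    ```python
    import numpy as np

    def remove_clv(img, xc, yc, r_sun, n_rings=50):
        """Divide a full-disc image by an azimuthal median radial profile,
        a crude stand-in for quiet-Sun centre-to-limb compensation."""
        y, x = np.indices(img.shape)
        rho = np.hypot(x - xc, y - yc) / r_sun          # fractional disc radius
        on_disc = rho < 1.0
        mu = np.sqrt(1.0 - rho[on_disc] ** 2)           # cosine of heliocentric angle

        ring = np.minimum((mu * n_rings).astype(int), n_rings - 1)
        disc = img[on_disc].astype(float)
        clv = np.empty_like(disc)
        for i in range(n_rings):
            sel = ring == i
            if sel.any():
                # the median per mu-ring is robust to bright plage/network pixels
                clv[sel] = np.median(disc[sel])

        contrast = np.full(img.shape, np.nan)
        contrast[on_disc] = disc / clv - 1.0            # quiet Sun maps to ~0
        return contrast
    ```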

  5. Monitoring in real-time focal adhesion protein dynamics in response to a discrete mechanical stimulus

    NASA Astrophysics Data System (ADS)

    von Bilderling, Catalina; Caldarola, Martín; Masip, Martín E.; Bragas, Andrea V.; Pietrasanta, Lía I.

    2017-01-01

    The adhesion of cells to the extracellular matrix is a hierarchical, force-dependent, multistage process that evolves at several temporal scales. An understanding of this complex process requires a precise measurement of forces and its correlation with protein responses in living cells. We present a method to quantitatively assess live cell responses to a local and specific mechanical stimulus. Our approach combines atomic force microscopy with fluorescence imaging. Using this approach, we evaluated the recruitment of adhesion proteins such as vinculin, focal adhesion kinase, paxillin, and zyxin triggered by applying forces in the nN regime to live cells. We observed in real time the development of nascent adhesion sites, evident from the accumulation of early adhesion proteins at the position where the force was applied. We show that the method can be used to quantify the recruitment characteristic times for adhesion proteins in the formation of focal complexes. We also found a spatial remodeling of the mature focal adhesion protein zyxin as a function of the applied force. Our approach allows the study of a variety of complex biological processes involved in cellular mechanotransduction.

  6. The Application of Intensive Longitudinal Methods to Investigate Change: Stimulating the Field of Applied Family Research.

    PubMed

    Bamberger, Katharine T

    2016-03-01

    The use of intensive longitudinal methods (ILM)-rapid in situ assessment at micro timescales-can be overlaid on RCTs and other study designs in applied family research. Particularly, when done as part of a multiple timescale design-in bursts over macro timescales-ILM can advance the study of the mechanisms and effects of family interventions and processes of family change. ILM confers measurement benefits in accurately assessing momentary and variable experiences and captures fine-grained dynamic pictures of time-ordered processes. Thus, ILM allows opportunities to investigate new research questions about intervention effects on within-subject (i.e., within-person, within-family) variability (i.e., dynamic constructs) and about the time-ordered change process that interventions induce in families and family members beginning with the first intervention session. This paper discusses the need and rationale for applying ILM to family intervention evaluation, new research questions that can be addressed with ILM, example research using ILM in the related fields of basic family research and the evaluation of individual-based interventions. Finally, the paper touches on practical challenges and considerations associated with ILM and points readers to resources for the application of ILM.

  7. The Application of Intensive Longitudinal Methods to Investigate Change: Stimulating the Field of Applied Family Research

    PubMed Central

    Bamberger, Katharine T.

    2015-01-01

    The use of intensive longitudinal methods (ILM)—rapid in situ assessment at micro timescales—can be overlaid on RCTs and other study designs in applied family research. Especially when done as part of a multiple timescale design—in bursts over macro timescales, ILM can advance the study of the mechanisms and effects of family interventions and processes of family change. ILM confers measurement benefits in accurately assessing momentary and variable experiences and captures fine-grained dynamic pictures of time-ordered processes. Thus, ILM allows opportunities to investigate new research questions about intervention effects on within-subject (i.e., within-person, within-family) variability (i.e., dynamic constructs) and about the time-ordered change process that interventions induce in families and family members beginning with the first intervention session. This paper discusses the need and rationale for applying ILM to intervention evaluation, new research questions that can be addressed with ILM, example research using ILM in the related fields of basic family research and the evaluation of individual-based (rather than family-based) interventions. Finally, the paper touches on practical challenges and considerations associated with ILM and points readers to resources for the application of ILM. PMID:26541560

  8. Monitoring in real-time focal adhesion protein dynamics in response to a discrete mechanical stimulus.

    PubMed

    von Bilderling, Catalina; Caldarola, Martín; Masip, Martín E; Bragas, Andrea V; Pietrasanta, Lía I

    2017-01-01

    The adhesion of cells to the extracellular matrix is a hierarchical, force-dependent, multistage process that evolves at several temporal scales. An understanding of this complex process requires a precise measurement of forces and its correlation with protein responses in living cells. We present a method to quantitatively assess live cell responses to a local and specific mechanical stimulus. Our approach combines atomic force microscopy with fluorescence imaging. Using this approach, we evaluated the recruitment of adhesion proteins such as vinculin, focal adhesion kinase, paxillin, and zyxin triggered by applying forces in the nN regime to live cells. We observed in real time the development of nascent adhesion sites, evident from the accumulation of early adhesion proteins at the position where the force was applied. We show that the method can be used to quantify the recruitment characteristic times for adhesion proteins in the formation of focal complexes. We also found a spatial remodeling of the mature focal adhesion protein zyxin as a function of the applied force. Our approach allows the study of a variety of complex biological processes involved in cellular mechanotransduction.

  9. Processable Aromatic Polyimide Thermoplastic Blends

    NASA Technical Reports Server (NTRS)

    Baucom, Robert M; Johnston, Norman J.; St. Clair, Terry L.; Nelson, James B.; Gleason, John R.; Proctor, K. Mason

    1988-01-01

    Method developed for preparing readily-processable thermoplastic polyimides by blending linear, high-molecular-weight, polyamic acid solutions in ether solvents with ultrafine, semicrystalline, thermoplastic polyimide powders. Slurries formed are used to make prepregs. Consolidation of prepregs into finished composites characterized by excellent melt flow during processing. Applied to film, fiber, fabric, metal, polymer, or composite surfaces. Used to make various stable slurries from which prepregs are prepared.

  10. Implementing Quality Criteria in Designing and Conducting a Sequential Quan → Qual Mixed Methods Study of Student Engagement with Learning Applied Research Methods Online

    ERIC Educational Resources Information Center

    Ivankova, Nataliya V.

    2014-01-01

    In spite of recent methodological developments related to quality assurance in mixed methods research, practical examples of how to implement quality criteria in designing and conducting sequential QUAN → QUAL mixed methods studies to ensure the process is systematic and rigorous remain scarce. This article discusses a three-step…

  11. A Mixed Prioritization Operators Strategy Using A Single Measurement Criterion For AHP Application Development

    NASA Astrophysics Data System (ADS)

    Yuen, Kevin Kam Fung

    2009-10-01

    The most appropriate prioritization method is still one of the unsettled issues of the Analytic Hierarchy Process (AHP), although many studies have been made and applied. Interestingly, many AHP applications use only Saaty's Eigenvector method, even though many studies have found that this method may produce rank reversals and have proposed various prioritization methods as alternatives. Some of these methods have been shown to be better than the Eigenvector method, yet they seem not to attract the attention of researchers. In this paper, eight important prioritization methods are reviewed. A Mixed Prioritization Operators Strategy (MPOS) is developed to select the vector produced by the most appropriate prioritization operator. To verify the new method, a case study of high school selection is revisited using the proposed approach. The contribution is that MPOS is useful for solving prioritization problems in the AHP.
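
    As a point of reference for the operators reviewed, Saaty's Eigenvector method takes the priority vector to be the normalized principal eigenvector of the pairwise comparison matrix. A minimal sketch, including the associated consistency index, might look as follows; the comparison values are illustrative.

    ```python
    import numpy as np

    def ahp_priorities(A):
        """Saaty Eigenvector prioritization: the principal right eigenvector of
        the pairwise comparison matrix, normalized to sum to 1."""
        w, v = np.linalg.eig(A)
        k = np.argmax(w.real)                 # principal eigenvalue
        p = np.abs(v[:, k].real)
        p /= p.sum()
        n = A.shape[0]
        ci = (w[k].real - n) / (n - 1)        # consistency index (0 if consistent)
        return p, ci

    # Reciprocal pairwise comparisons of three criteria.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    priorities, ci = ahp_priorities(A)
    print(priorities, ci)
    ```

    Alternative operators (e.g. the geometric mean of each row, normalized) can be dropped into the same interface, which is essentially what a mixed-operator strategy selects among.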

  12. Field deployable processing methods for stay-in-place ultrasonic transducers

    NASA Astrophysics Data System (ADS)

    Malarich, Nathan; Lissenden, Cliff J.; Tittmann, Bernhard R.

    2018-04-01

    Condition monitoring provides key data for managing the operation and maintenance of mechanical equipment in the power generation, chemical processing, and manufacturing industries. Ultrasonic transducers provide active monitoring capabilities by wall thickness measurements, elastic property determination, crack detection, and other means. In many cases the components operate in harsh environments that may include high temperature, radiation, and hazardous chemicals. Thus, it is desirable to have permanently affixed ultrasonic transducers for condition monitoring in harsh environments. Spray-on transducers provide direct coupling between the active element and the substrate, and can be applied to curved surfaces. We describe a deposition methodology for ultrasonic transducers that can be applied in the field. First, piezoceramic powders mixed into a sol-gel are air-spray deposited onto the substrate. Powder constituents are selected based on the service environment in which the condition monitoring will be performed. Then the deposited coating is pyrolyzed and partially densified using an induction heating system with a custom work coil designed to match the substrate geometry. The next step, applying the electrodes, is more challenging than might be expected because of the porosity of the piezoelectric coating and the potential reactivity of elements in the adjacent layers. After connecting lead wires to the electrodes the transducer is poled and a protective coating can be applied prior to use. Processing of a PZT-bismuth titanate transducer on a large steel substrate is described along with alternate methods.

  13. A review of recent developments in parametric based acoustic emission techniques applied to concrete structures

    NASA Astrophysics Data System (ADS)

    Vidya Sagar, R.; Raghu Prasad, B. K.

    2012-03-01

    This article presents a review of recent developments in parametric based acoustic emission (AE) techniques applied to concrete structures. It recapitulates the significant milestones achieved by previous researchers, including various methods and models developed in AE testing of concrete structures. The aim is to provide an overview of the specific features of parametric based AE techniques for concrete structures carried out over the years. Emphasis is given to traditional parameter-based AE techniques applied to concrete structures. A significant amount of research on AE techniques applied to concrete structures has already been published, and considerable attention has been given to those publications. Some recent studies, such as AE energy analysis and b-value analysis used to assess damage of concrete bridge beams, have also been discussed. The formation of the fracture process zone and the AE energy released during the fracture process in concrete beam specimens have been summarised. A large body of experimental data on AE characteristics of concrete has accumulated over the last three decades. This review of parametric based AE techniques applied to concrete structures may help researchers and engineers to better understand the failure mechanisms of concrete and to develop more useful methods and approaches for diagnostic inspection of structural elements and failure prediction/prevention of concrete structures.

  14. A new method for weakening the combined effect of residual errors on multibeam bathymetric data

    NASA Astrophysics Data System (ADS)

    Zhao, Jianhu; Yan, Jun; Zhang, Hongmei; Zhang, Yuqing; Wang, Aixue

    2014-12-01

    Multibeam bathymetric systems (MBS) have been widely applied in marine surveying to provide high-resolution seabed topography. However, some factors degrade the precision of bathymetry, including the sound velocity, the vessel attitude, the misalignment angle of the transducer and so on. Although these factors are corrected strictly in bathymetric data processing, the final bathymetric result is still affected by their residual errors. In deep water, the result usually cannot meet the requirements of high-precision seabed topography. The combined effect of these residual errors is systematic, and it is difficult to separate and weaken the effect using traditional single-error correction methods. Therefore, this paper puts forward a new method for weakening the effect of residual errors based on the frequency-spectrum characteristics of seabed topography and multibeam bathymetric data. The method involves four steps: separation of the low-frequency and high-frequency parts of the bathymetric data, reconstruction of the trend of the actual seabed topography, merging of the actual trend and the extracted microtopography, and accuracy evaluation. Experimental results show that the proposed method can weaken the combined effect of residual errors on multibeam bathymetric data and efficiently improve the accuracy of the final post-processing results. We suggest that the method be widely applied to MBS data processing in deep water.
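
    The abstract does not give the authors' exact filter, but the first step, splitting bathymetry into a low-frequency trend and high-frequency microtopography, can be illustrated with a Gaussian low-pass filter; a minimal sketch on a synthetic grid:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_bathymetry(depth_grid, sigma=15.0):
    """Split a gridded bathymetric surface into a low-frequency trend and
    high-frequency microtopography with a Gaussian low-pass filter; sigma
    (in grid cells) sets the trend/detail cutoff."""
    trend = gaussian_filter(depth_grid, sigma=sigma)   # low-frequency part
    micro = depth_grid - trend                         # high-frequency residual
    return trend, micro

# Synthetic grid: smooth slope plus fine-scale roughness.
ny, nx = 256, 256
y, x = np.mgrid[0:ny, 0:nx]
seabed = -500.0 - 0.2 * x + 0.5 * np.random.randn(ny, nx)
trend, micro = split_bathymetry(seabed)
print(float(np.std(micro)))    # roughness left after removing the trend
```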

  15. Realizing drug repositioning by adapting a recommendation system to handle the process.

    PubMed

    Ozsoy, Makbule Guclin; Özyer, Tansel; Polat, Faruk; Alhajj, Reda

    2018-04-12

    Drug repositioning is the process of identifying new targets for known drugs. It can be used to overcome problems associated with traditional drug discovery by adapting existing drugs to treat newly discovered diseases. Thus, it may reduce the risk, cost and time required to identify and verify new drugs. Drug repositioning has recently received more attention from industry and academia. To tackle this problem, researchers have applied many different computational methods and have used various features of drugs and diseases. In this study, we contribute to the ongoing research efforts by combining multiple features, namely chemical structures, protein interactions and side-effects, to predict new indications of target drugs. To achieve our target, we realize drug repositioning as a recommendation process, which leads to a new perspective in tackling the problem. The utilized recommendation method is based on Pareto dominance and collaborative filtering, and it can integrate multiple data sources and multiple features. For the computational part, we applied several settings and compared their performance. Evaluation results show that the proposed method can achieve more concentrated predictions with high precision, where nearly half of the predictions are true. Compared to other state-of-the-art methods described in the literature, the proposed method is better at making correct predictions, having higher precision. The reported results demonstrate the applicability and effectiveness of recommendation methods for drug repositioning.

  16. Rapid Structured Volume Grid Smoothing and Adaption Technique

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    2006-01-01

    A rapid, structured volume grid smoothing and adaption technique, based on signal processing methods, was developed and applied to the Shuttle Orbiter at hypervelocity flight conditions in support of the Columbia Accident Investigation. Because of the fast pace of the investigation, computational aerothermodynamicists, applying hypersonic viscous flow solving computational fluid dynamic (CFD) codes, refined and enhanced a grid for an undamaged baseline vehicle to assess a variety of damage scenarios. Of the many methods available to modify a structured grid, most are time-consuming and require significant user interaction. By casting the grid data into different coordinate systems, specifically two computational coordinates with arclength as the third coordinate, signal processing methods are used for filtering the data [Taubin, CG v/29 1995]. Using a reverse transformation, the processed data are used to smooth the Cartesian coordinates of the structured grids. By coupling the signal processing method with existing grid operations within the Volume Grid Manipulator tool, problems related to grid smoothing are solved efficiently and with minimal user interaction. Examples of these smoothing operations are illustrated for reductions in grid stretching and volume grid adaptation. In each of these examples, other techniques existed at the time of the Columbia accident, but the incorporation of signal processing techniques reduced the time to perform the corrections by nearly 60%. This reduction in time therefore enabled the assessment of approximately twice as many damage scenarios as previously possible during the allocated investigation time.
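
    The cited Taubin filter alternates a shrinking and an expanding Laplacian step so that a signal is low-pass filtered without overall shrinkage. A minimal one-dimensional sketch follows; the actual tool applies this to structured volume grids cast into computational coordinates with arclength, which is not reproduced here.

```python
import numpy as np

def taubin_smooth(x, lam=0.5, mu=-0.53, passes=50):
    """Taubin lambda|mu smoothing (Taubin, 1995): alternate a shrinking
    Laplacian step (lam > 0) with an expanding one (mu < -lam) so the
    signal is low-pass filtered without overall shrinkage."""
    x = np.asarray(x, dtype=float).copy()
    for _ in range(passes):
        for factor in (lam, mu):
            lap = np.zeros_like(x)
            lap[1:-1] = 0.5 * (x[:-2] + x[2:]) - x[1:-1]  # umbrella operator
            x += factor * lap                             # endpoints held fixed
    return x

# Noisy grid-line coordinate along one computational direction.
s = np.linspace(0.0, 1.0, 200)
coords = np.sin(2 * np.pi * s) + 0.05 * np.random.randn(200)
smoothed = taubin_smooth(coords)
```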

  17. Rapid Structured Volume Grid Smoothing and Adaption Technique

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    2004-01-01

    A rapid, structured volume grid smoothing and adaption technique, based on signal processing methods, was developed and applied to the Shuttle Orbiter at hypervelocity flight conditions in support of the Columbia Accident Investigation. Because of the fast pace of the investigation, computational aerothermodynamicists, applying hypersonic viscous flow solving computational fluid dynamic (CFD) codes, refined and enhanced a grid for an undamaged baseline vehicle to assess a variety of damage scenarios. Of the many methods available to modify a structured grid, most are time-consuming and require significant user interaction. By casting the grid data into different coordinate systems, specifically two computational coordinates with arclength as the third coordinate, signal processing methods are used for filtering the data [Taubin, CG v/29 1995]. Using a reverse transformation, the processed data are used to smooth the Cartesian coordinates of the structured grids. By coupling the signal processing method with existing grid operations within the Volume Grid Manipulator tool, problems related to grid smoothing are solved efficiently and with minimal user interaction. Examples of these smoothing operations are illustrated for reductions in grid stretching and volume grid adaptation. In each of these examples, other techniques existed at the time of the Columbia accident, but the incorporation of signal processing techniques reduced the time to perform the corrections by nearly 60%. This reduction in time therefore enabled the assessment of approximately twice as many damage scenarios as previously possible during the allocated investigation time.

  18. Green perspective in food industry production line design: A review

    NASA Astrophysics Data System (ADS)

    Xian, C. Y.; Sin, T. C.; Liyana, M. R. N.; Awang, A.; Fathullah, M.

    2017-09-01

    The design of green manufacturing processes in food industries is currently a hot research topic in the multidisciplinary area of applied chemistry, biology and technology. Several processes, such as freezing, cutting, drying, tempering, bleaching, sterilization, extraction and filtering, have been applied efficiently in the food industry. Owing to the rapid development of food and peripheral technology, new physical or auxiliary processing methods can preserve a food's inherent nutrients, texture, color and freshness while also reducing environmental pollution and energy consumption in food processing. Hence, this review paper studies and summarizes the effects of green manufacturing processes in food industries in terms of waste reduction, materials and sustainable manufacturing. In any case, all food processing equipment must comply with strict standards and regulations to ensure the quality and safety of food products for consumers.

  19. A note on evaluating VAN earthquake predictions

    NASA Astrophysics Data System (ADS)

    Tselentis, G.-Akis; Melis, Nicos S.

    The evaluation of the success level of an earthquake prediction method should not be based on approaches that apply generalized strict statistical laws while ignoring the specific nature of the earthquake phenomenon. Fault rupture processes cannot be compared to gambling processes. The outcome of the present note is that even an ideal earthquake prediction method would still appear to be a matter of a “chancy” association between precursors and earthquakes if we apply the procedure proposed by Mulargia and Gasperini [1992] in evaluating VAN earthquake predictions. Each individual VAN prediction has to be evaluated separately, always taking into account the specific circumstances and information available. The success level of epicenter prediction should depend on the earthquake magnitude, while magnitude and time predictions may depend on earthquake clustering and the tectonic regime, respectively.

  20. A comparison between EEG source localization and fMRI during the processing of emotional visual stimuli

    NASA Astrophysics Data System (ADS)

    Hu, Jin; Tian, Jie; Pan, Xiaohong; Liu, Jiangang

    2007-03-01

    The purpose of this paper is to compare EEG source localization and fMRI during emotional processing. 108 pictures for EEG (categorized as positive, negative and neutral) and 72 pictures for fMRI were presented to 24 healthy, right-handed subjects. The fMRI data were analyzed using statistical parametric mapping with SPM2. LORETA was applied to grand-averaged ERP data to localize intracranial sources. Statistical analysis was implemented to compare the spatiotemporal activation seen with fMRI and EEG. The fMRI results are in accordance with EEG source localization to some extent, although some mismatch in localization between the two methods was also observed. In future work we intend to apply simultaneous recording of EEG and fMRI to this study.

  1. Kinetics analysis and quantitative calculations for the successive radioactive decay process

    NASA Astrophysics Data System (ADS)

    Zhou, Zhiping; Yan, Deyue; Zhao, Yuliang; Chai, Zhifang

    2015-01-01

    The general radioactive decay kinetics equations with branching were developed and their analytical solutions were derived by the Laplace transform method. The time dependence of all the nuclide concentrations can be easily obtained by applying the equations to any known radioactive decay series. Taking the thorium radioactive decay series as an example, the evolution of the concentrations of the various nuclides in the family over time was obtained by quantitative numerical calculation. The method can be applied to the quantitative prediction and analysis of daughter nuclides in successive decays with branching in complicated radioactive processes, such as the natural radioactive decay series, nuclear reactors, nuclear waste disposal, nuclear spallation, synthesis and identification of superheavy nuclides, radioactive ion beam physics and chemistry, etc.
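
    The paper derives analytical solutions via Laplace transforms; purely as an illustration of the same kinetics, here is a numerical sketch of a hypothetical three-member decay chain with branching (all decay constants and the branching ratio are invented values):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical three-member chain with branching:
# A -> B (fraction f), A -> C (fraction 1 - f), B -> C, C stable.
lam_A, lam_B = 1.0e-2, 5.0e-3        # decay constants (1/s), invented
f = 0.7                              # branching ratio of A to B

def decay_rhs(t, N):
    NA, NB, NC = N
    return [-lam_A * NA,
            f * lam_A * NA - lam_B * NB,
            (1 - f) * lam_A * NA + lam_B * NB]

sol = solve_ivp(decay_rhs, (0.0, 2000.0), [1.0e20, 0.0, 0.0],
                t_eval=np.linspace(0.0, 2000.0, 201), rtol=1e-8)
NA, NB, NC = sol.y                   # nuclide concentrations versus time
print(f"N_B peaks at t = {sol.t[np.argmax(NB)]:.0f} s")
```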

  2. Application of Nursing Process and Its Affecting Factors among Nurses Working in Mekelle Zone Hospitals, Northern Ethiopia

    PubMed Central

    Hagos, Fisseha; Alemseged, Fessehaye; Balcha, Fikadu; Berhe, Semarya; Aregay, Alemseged

    2014-01-01

    Background. The nursing process is considered an appropriate method to explain the nursing essence, its scientific bases, technologies and humanist assumptions that encourage critical thinking and creativity, and it permits solving problems in professional practice. Objective. To assess the application of the nursing process and its affecting factors in Mekelle Zone Hospitals. Methods. A cross-sectional design employing quantitative and qualitative methods was conducted in Mekelle Zone hospitals in March 2011. Qualitative data were collected from 14 head nurses of six hospitals, and quantitative data were collected from 200 nurses selected by simple random sampling from the six hospitals in proportion to their size. SPSS version 16.1 and thematic analysis were used for the quantitative and qualitative data, respectively. Results. The majority of the respondents, 180 (90%), have poor knowledge of the nursing process, while 99.5% have a positive attitude towards it. All of the respondents said that they did not use the nursing process during provision of care to their patients at the time of the study. Most (75%) of the respondents said that the nurse-to-patient ratio was not adequate to apply the nursing process. Conclusion and Recommendation. The nursing process is not yet applied in any of the six hospitals. The findings revealed that the knowledge of nurses on the nursing process is not adequate to put it into practice and that a high patient-to-nurse ratio affects its application. The studied hospitals should consider the application of the nursing process critically by motivating nurses and by monitoring and evaluating its progress. PMID:24649360

  3. MZmine 2: Modular framework for processing, visualizing, and analyzing mass spectrometry-based molecular profile data

    PubMed Central

    2010-01-01

    Background Mass spectrometry (MS) coupled with online separation methods is commonly applied for differential and quantitative profiling of biological samples in metabolomic as well as proteomic research. Such approaches are used for systems biology, functional genomics, and biomarker discovery, among others. An ongoing challenge of these molecular profiling approaches, however, is the development of better data processing methods. Here we introduce a new generation of a popular open-source data processing toolbox, MZmine 2. Results A key concept of the MZmine 2 software design is the strict separation of core functionality and data processing modules, with emphasis on easy usability and support for high-resolution spectra processing. Data processing modules take advantage of embedded visualization tools, allowing for immediate previews of parameter settings. Newly introduced functionality includes the identification of peaks using online databases, MSn data support, improved isotope pattern support, scatter plot visualization, and a new method for peak list alignment based on the random sample consensus (RANSAC) algorithm. The performance of the RANSAC alignment was evaluated using synthetic datasets as well as actual experimental data, and the results were compared to those obtained using other alignment algorithms. Conclusions MZmine 2 is freely available under a GNU GPL license and can be obtained from the project website at: http://mzmine.sourceforge.net/. The current version of MZmine 2 is suitable for processing large batches of data and has been applied to both targeted and non-targeted metabolomic analyses. PMID:20650010
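
    MZmine 2 itself is a Java toolbox; purely as an illustration of the RANSAC idea behind its alignment module, the sketch below robustly fits a retention-time drift model between two hypothetical peak lists, ignoring the mismatched pair that an ordinary least-squares fit would chase:

```python
import numpy as np
from sklearn.linear_model import RANSACRegressor, LinearRegression

# Hypothetical matched peak retention times (minutes) from two runs; one
# entry is a mismatch that a plain least-squares fit would chase.
rt_run1 = np.array([1.2, 2.5, 3.1, 4.8, 6.0, 7.3, 8.9, 10.2, 11.5])
rt_run2 = np.array([1.25, 2.58, 3.18, 4.93, 6.12, 7.44, 3.00, 10.41, 11.72])

# Robustly fit the drift model rt2 = a * rt1 + b; RANSAC repeatedly fits
# on random subsets and keeps the model with the largest inlier set.
model = RANSACRegressor(LinearRegression(), residual_threshold=0.2)
model.fit(rt_run1.reshape(-1, 1), rt_run2)
print("inliers:", model.inlier_mask_)
print("aligned:", model.predict(rt_run1.reshape(-1, 1)))
```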

  4. Effectiveness of higher order thinking skills (HOTS) based i-Think map concept towards primary students

    NASA Astrophysics Data System (ADS)

    Ping, Owi Wei; Ahmad, Azhar; Adnan, Mazlini; Hua, Ang Kean

    2017-05-01

    Higher Order Thinking Skills (HOTS) is a new concept of education reform based on Bloom's taxonomy. The concept concentrates on students' understanding in the learning process based on their own methods. HOTS questions can train students to think creatively, critically and innovatively. The aim of this study was to identify students' proficiency in solving HOTS mathematics questions by using the i-Think map. This research took place in Sabak Bernam, Selangor. The method applied is a quantitative approach involving nearly all of the Standard Five students. A pre-test and post-test were conducted before and after the intervention of using the i-Think map to solve HOTS questions. The results indicate a significant improvement in the post-test, which shows that applying the i-Think map enhances students' ability to solve HOTS questions. Survey analysis showed that 90% of the students agreed that the i-Think map helped them analyse the questions carefully and use keywords in the map to solve them. In conclusion, this process helps students minimize mistakes when solving the questions. Teachers should therefore guide students in applying the appropriate i-Think map and methods to analyse questions by finding the keywords.

  5. Fluorescence Spectroscopy and Chemometric Modeling for Bioprocess Monitoring

    PubMed Central

    Faassen, Saskia M.; Hitzmann, Bernd

    2015-01-01

    On-line sensors for the detection of crucial process parameters are desirable for the monitoring, control and automation of processes in the biotechnology, food and pharma industries. Fluorescence spectroscopy is a highly developed, non-invasive technique that enables on-line measurement of substrate and product concentrations or the identification of characteristic process states. During a cultivation process, significant changes occur in the fluorescence spectra. By means of chemometric modeling, prediction models can be calculated and applied for process supervision and control to increase the quality and productivity of bioprocesses. A range of applications for different microorganisms and analytes has been proposed in recent years. This contribution provides an overview of different analysis methods for the measured fluorescence spectra and of the model-building chemometric methods used for various microbial cultivations. Most of these processes are observed using the BioView® Sensor, thanks to its robustness and insensitivity to adverse process conditions. Beyond that, the PLS method is the most frequently used chemometric method for the calculation of process models and prediction of process variables. PMID:25942644
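
    Since the review singles out PLS as the most frequently used chemometric method, here is a minimal calibration sketch on synthetic fluorescence spectra; all data are generated for illustration, not taken from the BioView® sensor:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Chemometric calibration sketch: predict an analyte concentration from
# fluorescence spectra with partial least squares (PLS). Synthetic data:
# each spectrum is a Gaussian emission band scaled by concentration.
rng = np.random.default_rng(0)
n_samples, n_wavelengths = 120, 200
concentration = rng.uniform(0, 10, n_samples)
profile = np.exp(-0.5 * ((np.arange(n_wavelengths) - 80) / 15.0) ** 2)
spectra = (np.outer(concentration, profile)
           + 0.05 * rng.standard_normal((n_samples, n_wavelengths)))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, concentration,
                                          random_state=0)
pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
print(f"R^2 on held-out spectra: {pls.score(X_te, y_te):.3f}")
```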

  6. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
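
    The patent text above describes key-driven permutation of the processing order and embedding in the low-order bits; the sketch below illustrates that general idea with plain low-bit replacement under a keyed permutation. The modular coding that halves the introduced error, the patent's actual contribution, is not reproduced here.

```python
import numpy as np

def embed(host, bits, key, nbits=2):
    """Hide auxiliary bits in the low-order bits of noisy host samples,
    visiting samples in a key-determined pseudorandom order. Plain
    low-bit replacement only; the patented modular coding is omitted."""
    host = host.copy()
    order = np.random.default_rng(key).permutation(host.size)
    for i, pos in enumerate(order[: len(bits) // nbits]):
        chunk = bits[i * nbits:(i + 1) * nbits]
        value = int("".join(map(str, chunk)), 2)
        host[pos] = (host[pos] >> nbits << nbits) | value  # replace low bits
    return host

def extract(stego, n_chunks, key, nbits=2):
    order = np.random.default_rng(key).permutation(stego.size)
    return [int(stego[pos]) & ((1 << nbits) - 1) for pos in order[:n_chunks]]

pixels = np.random.default_rng(0).integers(0, 256, 1024, dtype=np.uint16)
payload = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(pixels, payload, key=42)
print(extract(stego, len(payload) // 2, key=42))   # -> [2, 3, 0, 2]
```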

  7. On Systems Thinking and Ways of Building It in Learning

    ERIC Educational Resources Information Center

    Abdyrova, Aitzhan; Galiyev, Temir; Yessekeshova, Maral; Aldabergenova, Saule; Alshynbayeva, Zhuldyz

    2016-01-01

    The article focuses on the issue of shaping learners' systems thinking skills in the context of traditional education using specially elaborated system methods that are implemented based on the standard textbook. Applying these methods naturally complements the existing learning process and contributes to an efficient development of learners'…

  8. Do Different Training Conditions Facilitate Team Implementation? A Quasi-Experimental Mixed Methods Study

    ERIC Educational Resources Information Center

    Nielsen, Karina; Randall, Raymond; Christensen, Karl B.

    2017-01-01

    A mixed methods approach was applied to examine the effects of a naturally occurring teamwork intervention supported with training. The first objective was to integrate qualitative process evaluation and quantitative effect evaluation to examine "how" and "why" the training influence intervention outcomes. The intervention (N =…

  9. The Schrodinger Eigenvalue March

    ERIC Educational Resources Information Center

    Tannous, C.; Langlois, J.

    2011-01-01

    A simple numerical method for the determination of Schrodinger equation eigenvalues is introduced. It is based on a marching process that starts from an arbitrary point, proceeds in two opposite directions simultaneously and stops after a tolerance criterion is met. The method is applied to solving several 1D potential problems including symmetric…

  10. Comparison of fuzzy AHP and fuzzy TODIM methods for landfill location selection.

    PubMed

    Hanine, Mohamed; Boutkhoum, Omar; Tikniouine, Abdessadek; Agouti, Tarik

    2016-01-01

    Landfill location selection is a multi-criteria decision problem and has a strategic importance for many regions. The conventional methods for landfill location selection are insufficient in dealing with the vague or imprecise nature of linguistic assessment. To resolve this problem, fuzzy multi-criteria decision-making methods are proposed. The aim of this paper is to use fuzzy TODIM (the acronym for Interactive and Multi-criteria Decision Making in Portuguese) and the fuzzy analytic hierarchy process (AHP) methods for the selection of landfill location. The proposed methods have been applied to a landfill location selection problem in the region of Casablanca, Morocco. After determining the criteria affecting the landfill location decisions, fuzzy TODIM and fuzzy AHP methods are applied to the problem and results are presented. The comparisons of these two methods are also discussed.
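
    As an illustration of the fuzzy AHP side of this comparison, the sketch below computes criterion weights from triangular fuzzy judgments using Buckley's geometric-mean approach with centroid defuzzification; the judgment values are invented, not the paper's Casablanca data:

```python
import numpy as np

# Triangular fuzzy pairwise judgments (l, m, u) for three criteria;
# reciprocal entries are (1/u, 1/m, 1/l). All values are invented.
tfn = np.array([
    [[1, 1, 1],       [2, 3, 4],     [4, 5, 6]],
    [[1/4, 1/3, 1/2], [1, 1, 1],     [1, 2, 3]],
    [[1/6, 1/5, 1/4], [1/3, 1/2, 1], [1, 1, 1]],
], dtype=float)

geo = tfn.prod(axis=1) ** (1.0 / 3.0)     # fuzzy geometric mean per criterion
l, m, u = geo[:, 0], geo[:, 1], geo[:, 2]
w_l, w_m, w_u = l / u.sum(), m / m.sum(), u / l.sum()  # fuzzy division
crisp = (w_l + w_m + w_u) / 3.0           # centroid defuzzification
print(crisp / crisp.sum())                # normalized crisp weights
```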

  11. STATISTICAL ANALYSIS OF SNAP 10A THERMOELECTRIC CONVERTER ELEMENT PROCESS DEVELOPMENT VARIABLES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fitch, S.H.; Morris, J.W.

    1962-12-15

    Statistical analysis, primarily analysis of variance, was applied to evaluate several factors involved in the development of suitable fabrication and processing techniques for the production of lead telluride thermoelectric elements for the SNAP 10A energy conversion system. The analysis methods are described as to their application for determining the effects of various processing steps, establishing the value of individual operations, and evaluating the significance of test results. The elimination of unnecessary or detrimental processing steps was accomplished and the number of required tests was substantially reduced by application of these statistical methods to the SNAP 10A production development effort. (auth)
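
    The core tool here, analysis of variance, can be sketched in a few lines; the measurement values below are invented stand-ins for a thermoelectric element property under three processing treatments:

```python
from scipy import stats

# Hypothetical Seebeck-coefficient measurements (uV/K) for elements
# produced with three candidate pressing treatments; values invented.
treatment_a = [182, 185, 179, 188, 184]
treatment_b = [191, 195, 189, 193, 190]
treatment_c = [183, 181, 186, 180, 184]

# One-way analysis of variance: does the processing step significantly
# affect the measured property?
f_stat, p_value = stats.f_oneway(treatment_a, treatment_b, treatment_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```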

  12. The Fractional Step Method Applied to Simulations of Natural Convective Flows

    NASA Technical Reports Server (NTRS)

    Westra, Douglas G.; Heinrich, Juan C.; Saxon, Jeff (Technical Monitor)

    2002-01-01

    This paper describes research done to apply the Fractional Step Method to finite-element simulations of natural convective flows in pure liquids, permeable media, and in a directionally solidified metal alloy casting. The Fractional Step Method has been applied commonly to high Reynolds number flow simulations, but is less common for low Reynolds number flows, such as natural convection in liquids and in permeable media. The Fractional Step Method offers increased speed and reduced memory requirements by allowing non-coupled solution of the pressure and the velocity components. The Fractional Step Method has particular benefits for predicting flows in a directionally solidified alloy, since other methods presently employed are not very efficient. Previously, the most suitable method for predicting flows in a directionally solidified binary alloy was the penalty method, which requires direct matrix solvers due to the penalty term. The Fractional Step Method allows iterative solution of the finite element stiffness matrices, thereby allowing more efficient solution of the matrices. The Fractional Step Method also lends itself to parallel processing, since the velocity component stiffness matrices can be built and solved independently of each other. The finite-element simulations of a directionally solidified casting are used to predict macrosegregation in directionally solidified castings. In particular, the finite-element simulations predict the existence of 'channels' within the processing mushy zone and subsequently 'freckles' within the fully processed solid, which are known to result from macrosegregation, or what is often referred to as thermo-solutal convection. These freckles cause material property non-uniformities in directionally solidified castings; therefore many of these castings are scrapped. The phenomenon of natural convection in an alloy undergoing directional solidification, or thermo-solutal convection, will be explained. The development of the momentum and continuity equations for natural convection in a fluid, a permeable medium, and in a binary alloy undergoing directional solidification will be presented. Finally, results for natural convection in a pure liquid, natural convection in a medium with a constant permeability, and for directional solidification will be presented.
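
    As a compact illustration of the non-coupled pressure/velocity solution the abstract describes, the sketch below performs one Chorin-style fractional step for 2-D incompressible flow on a periodic domain, using spectral derivatives so the pressure Poisson equation is solved by a single FFT. This is a generic projection step under those simplifying assumptions, not the paper's finite-element formulation:

```python
import numpy as np

# One Chorin-style fractional step for 2-D incompressible flow on a
# periodic domain. Spectral derivatives make the pressure Poisson solve a
# single FFT; the projection subtracts the pressure gradient so the
# velocity field is divergence-free again.
n, dt, nu = 64, 1e-3, 1e-2
L = 2 * np.pi
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2_inv = np.zeros_like(k2)
k2_inv[k2 > 0] = 1.0 / k2[k2 > 0]

def deriv(f, kk):
    return np.real(np.fft.ifft2(1j * kk * np.fft.fft2(f)))

def step(u, v):
    # 1) provisional velocity: advection + diffusion, pressure ignored
    lap_u = np.real(np.fft.ifft2(-k2 * np.fft.fft2(u)))
    lap_v = np.real(np.fft.ifft2(-k2 * np.fft.fft2(v)))
    us = u + dt * (-u * deriv(u, kx) - v * deriv(u, ky) + nu * lap_u)
    vs = v + dt * (-u * deriv(v, kx) - v * deriv(v, ky) + nu * lap_v)
    # 2) pressure Poisson equation in Fourier space: -k2 * p_hat = div_hat / dt
    div_hat = 1j * kx * np.fft.fft2(us) + 1j * ky * np.fft.fft2(vs)
    p_hat = -div_hat * k2_inv / dt
    # 3) projection: subtract grad(p) to restore a divergence-free field
    u_new = us - dt * np.real(np.fft.ifft2(1j * kx * p_hat))
    v_new = vs - dt * np.real(np.fft.ifft2(1j * ky * p_hat))
    return u_new, v_new

x = np.linspace(0.0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u, v = np.sin(X) * np.cos(Y), -np.cos(X) * np.sin(Y)  # Taylor-Green vortex
u, v = step(u, v)
```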

  13. Generalized interferometry - I: theory for interstation correlations

    NASA Astrophysics Data System (ADS)

    Fichtner, Andreas; Stehly, Laurent; Ermert, Laura; Boehm, Christian

    2017-02-01

    We develop a general theory for interferometry by correlation that (i) properly accounts for heterogeneously distributed sources of continuous or transient nature, (ii) fully incorporates any type of linear and nonlinear processing, such as one-bit normalization, spectral whitening and phase-weighted stacking, (iii) operates for any type of medium, including 3-D elastic, heterogeneous and attenuating media, (iv) enables the exploitation of complete correlation waveforms, including seemingly unphysical arrivals, and (v) unifies the earthquake-based two-station method and ambient noise correlations. Our central theme is not to equate interferometry with Green function retrieval, and to extract information directly from processed interstation correlations, regardless of their relation to the Green function. We demonstrate that processing transforms the actual wavefield sources and actual wave propagation physics into effective sources and effective wave propagation. This transformation is uniquely determined by the processing applied to the observed data, and can be easily computed. The effective forward model, that links effective sources and propagation to synthetic interstation correlations, may not be perfect. A forward modelling error, induced by processing, describes the extent to which processed correlations can actually be interpreted as proper correlations, that is, as resulting from some effective source and some effective wave propagation. The magnitude of the forward modelling error is controlled by the processing scheme and the temporal variability of the sources. Applying adjoint techniques to the effective forward model, we derive finite-frequency Fréchet kernels for the sources of the wavefield and Earth structure, that should be inverted jointly. The structure kernels depend on the sources of the wavefield and the processing scheme applied to the raw data. Therefore, both must be taken into account correctly in order to make accurate inferences on Earth structure. Not making any restrictive assumptions on the nature of the wavefield sources, our theory can be applied to earthquake and ambient noise data, either separately or combined. This allows us (i) to locate earthquakes using interstation correlations and without knowledge of the origin time, (ii) to unify the earthquake-based two-station method and noise correlations without the need to exclude either of the two data types, and (iii) to eliminate the requirement to remove earthquake signals from noise recordings prior to the computation of correlation functions. In addition to the basic theory for acoustic wavefields, we present numerical examples for 2-D media, an extension to the most general viscoelastic case, and a method for the design of optimal processing schemes that eliminate the forward modelling error completely. This work is intended to provide a comprehensive theoretical foundation of full-waveform interferometry by correlation, and to suggest improvements to current passive monitoring methods.
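
    Two of the processing operations named above, one-bit normalization and spectral whitening, followed by plain interstation correlation, can be sketched as follows; this shows the kind of processing the theory models as effective sources and effective wave propagation, not the theory itself:

```python
import numpy as np

def onebit_whiten(trace):
    """Two processing steps named in the abstract: spectral whitening
    (flatten the amplitude spectrum), then one-bit normalization."""
    spec = np.fft.rfft(trace)
    spec /= np.abs(spec) + 1e-12                 # spectral whitening
    whitened = np.fft.irfft(spec, n=len(trace))
    return np.sign(whitened)                     # one-bit normalization

def interstation_correlation(tr1, tr2):
    """FFT-based cross-correlation of two processed station records."""
    a, b = onebit_whiten(tr1), onebit_whiten(tr2)
    n = 2 * len(a)
    corr = np.fft.irfft(np.fft.rfft(a, n) * np.conj(np.fft.rfft(b, n)))
    return np.fft.fftshift(corr)                 # zero lag at the center

rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)
record1 = noise
record2 = np.roll(noise, 25)                     # same wavefield, delayed
cc = interstation_correlation(record1, record2)
print(int(np.argmax(cc)) - len(cc) // 2)         # recovered lag: -25
```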

  14. Big Data Usage Patterns in the Health Care Domain: A Use Case Driven Approach Applied to the Assessment of Vaccination Benefits and Risks

    PubMed Central

    Liyanage, H.; Liaw, S-T.; Kuziemsky, C.; Mold, F.; Krause, P.; Fleming, D.; Jones, S.

    2014-01-01

    Summary Background Generally the benefits and risks of vaccines can be determined from studies carried out as part of regulatory compliance, followed by surveillance of routine data; however, there are some rarer and more long-term events that require new methods. Big data generated by increasingly affordable personalised computing and by pervasive computing devices is growing rapidly, and low-cost, high-volume cloud computing makes the processing of these data inexpensive. Objective To describe how big data and related analytical methods might be applied to assess the benefits and risks of vaccines. Method We reviewed the literature on the use of big data to improve health, applied to generic vaccine use cases that illustrate the benefits and risks of vaccination. We defined a use case as the interaction between a user and an information system to achieve a goal. We used flu vaccination and pre-school childhood immunisation as exemplars. Results We reviewed three big data use cases relevant to assessing vaccine benefits and risks: (i) big data processing using crowd-sourcing, distributed big data processing, and predictive analytics; (ii) data integration from heterogeneous big data sources, e.g. the increasing range of devices in the “internet of things”; and (iii) real-time monitoring for the direct monitoring of epidemics as well as vaccine effects via social media and other data sources. Conclusions Big data raises new ethical dilemmas, though its analysis methods can bring complementary real-time capabilities for monitoring epidemics and assessing vaccine benefit-risk balance. PMID:25123718

  15. An Ensemble Framework Coping with Instability in the Gene Selection Process.

    PubMed

    Castellanos-Garzón, José A; Ramos, Juan; López-Sánchez, Daniel; de Paz, Juan F; Corchado, Juan M

    2018-03-01

    This paper proposes an ensemble framework for gene selection, which is aimed at addressing instability problems presented in the gene filtering task. The complex process of gene selection from gene expression data faces different instability problems arising from the informative gene subsets found by different filter methods. This makes the identification of significant genes by experts difficult. The instability of results can come from filter methods, gene classifier methods, different datasets of the same disease and multiple valid groups of biomarkers. Even though there are many proposals, the complexity imposed by this problem remains a challenge today. This work proposes a framework involving five stages of gene filtering to discover biomarkers for diagnosis and classification tasks. This framework performs a process of stable feature selection, facing the problems above and, thus, providing a more suitable and reliable solution for clinical and research purposes. Our proposal involves a process of multistage gene filtering, in which several ensemble strategies for gene selection were added in such a way that different classifiers simultaneously assess gene subsets to face instability. Firstly, we apply an ensemble of recent gene selection methods to obtain diversity in the genes found (stability according to filter methods). Next, we apply an ensemble of known classifiers to filter genes relevant to all classifiers at a time (stability according to classification methods). The results were evaluated on two different datasets of the same disease (pancreatic ductal adenocarcinoma), in search of stability according to the disease, and promising results were achieved.
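
    The paper's five-stage framework is not reproduced here, but the core ensemble idea, aggregating the rankings of several filter methods so that only genes ranked highly by all of them survive, can be sketched on synthetic data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif, mutual_info_classif

# Toy expression matrix: 100 samples x 500 "genes", 10 informative.
X, y = make_classification(n_samples=100, n_features=500,
                           n_informative=10, random_state=0)

# Each filter method ranks the genes independently (rank 0 = best)...
rank_f = np.argsort(np.argsort(-f_classif(X, y)[0]))
rank_mi = np.argsort(np.argsort(-mutual_info_classif(X, y, random_state=0)))

# ...and the ensemble keeps genes ranked highly by both methods, one
# simple rank-aggregation strategy for stabilizing the selected subset.
consensus = (rank_f + rank_mi) / 2.0
selected = np.argsort(consensus)[:20]      # 20 most stable candidates
print(sorted(selected.tolist()))
```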

  16. Microwave-assisted synthesis of carbon supported metal/metal oxide nanocomposites and their application in water purification

    NASA Astrophysics Data System (ADS)

    Gunawan, Gunawan

    A novel, easy, and cost-effective method for synthesizing carbon supported metal/metal oxide nanocomposites has been studied. Carbon supported metal/metal oxide nanocomposites have niche applications in the areas of catalysis, fuel cells, electrodes, and more. The method utilizes a commercial microwave and features the addition of a developed graphite-jacket technique with renewable carbon resources, tannin and lignin. The method has been successfully used to synthesize carbon/nickel, carbon/iron oxide, and carbon/nickel phosphide nanocomposites, demonstrating its versatility in the synthesis of carbon nanocomposites. The process is much simpler than the available methods for synthesizing carbon nanocomposites. The synthesized nanocomposites were characterized using several techniques, such as electron microscopy, X-ray powder diffraction, surface area analysis, thermogravimetric analysis, and spectrophotometric studies. One application of the carbon nanocomposites is in wastewater remediation. The synthesized carbon/iron oxide nanocomposite proved useful for removing arsenic (As) and phosphorus (P) from contaminated water. The adsorption behavior of the nanocomposite was studied critically in order to understand the process of removing pollutants from contaminated water. The study shows that the nanocomposites are capable of removing As and P from contaminated water. Kinetic and adsorption isotherm studies were applied to understand the adsorption of As and P onto the adsorbent. Several models, such as the pseudo-first- and second-order kinetic models, Elovich's equation, and the Weber-Morris intraparticle diffusion model, were used to explain the kinetic aspects of the adsorption process. For the adsorption isotherm study, the Langmuir and Freundlich isotherm models were applied.
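
    As an illustration of the kinetic analysis mentioned above, the sketch below fits the pseudo-second-order model to invented batch-adsorption data; the same pattern applies to the Elovich equation or the isotherm models:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical batch-adsorption data: uptake q_t (mg/g) versus time (min).
t = np.array([5, 10, 20, 40, 60, 90, 120, 180], dtype=float)
q_t = np.array([3.1, 5.2, 7.8, 10.1, 11.2, 11.9, 12.3, 12.6])

def pseudo_second_order(t, qe, k2):
    """Integrated pseudo-second-order model: q_t = k2*qe^2*t / (1 + k2*qe*t)."""
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

(qe, k2), _ = curve_fit(pseudo_second_order, t, q_t, p0=(12.0, 0.01))
print(f"qe = {qe:.2f} mg/g, k2 = {k2:.4f} g/(mg*min)")
```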

  17. Developing an expert panel process to refine health outcome definitions in observational data.

    PubMed

    Fox, Brent I; Hollingsworth, Joshua C; Gray, Michael D; Hollingsworth, Michael L; Gao, Juan; Hansen, Richard A

    2013-10-01

    Drug safety surveillance using observational data requires valid adverse event, or health outcome of interest (HOI) measurement. The objectives of this study were to develop a method to review HOI definitions in claims databases using (1) web-based digital tools to present de-identified patient data, (2) a systematic expert panel review process, and (3) a data collection process enabling analysis of concepts-of-interest that influence panelists' determination of HOI. De-identified patient data were presented via an interactive web-based dashboard to enable case review and determine if specific HOIs were present or absent. Criteria for determining HOIs and their severity were provided to each panelist. Using a modified Delphi method, six panelist pairs independently reviewed approximately 200 cases across each of three HOIs (acute liver injury, acute kidney injury, and acute myocardial infarction) such that panelist pairs independently reviewed the same cases. Panelists completed an assessment within the dashboard for each case that included their assessment of the presence or absence of the HOI, HOI severity (if present), and data contributing to their decision. Discrepancies within panelist pairs were resolved during a consensus process. Dashboard development was iterative, focusing on data presentation and recording panelists' assessments. Panelists reported quickly learning how to use the dashboard. The assessment module was used consistently. The dashboard was reliable, enabling an efficient review process for panelists. Modifications were made to the dashboard and review process when necessary to facilitate case review. Our methods should be applied to other health outcomes of interest to further refine the dashboard and case review process. The expert review process was effective and was supported by the web-based dashboard. Our methods for case review and classification can be applied to future methods for case identification in observational data sources.

  18. Detector signal correction method and system

    DOEpatents

    Carangelo, Robert M.; Duran, Andrew J.; Kudman, Irwin

    1995-07-11

    Corrective factors are applied so as to remove anomalous features from the signal generated by a photoconductive detector, and thereby to render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factors may be applied through the use of digital electronic data processing means, analog circuitry, or a combination of the two.

  19. Detector signal correction method and system

    DOEpatents

    Carangelo, R.M.; Duran, A.J.; Kudman, I.

    1995-07-11

    Corrective factors are applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factors may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects. 5 figs.

  20. Low cost MATLAB-based pulse oximeter for deployment in research and development applications.

    PubMed

    Shokouhian, M; Morling, R C S; Kale, I

    2013-01-01

    Problems such as motion artifact and the effects of ambient light have forced developers to design different signal processing techniques and algorithms to increase the reliability and accuracy of the conventional pulse oximeter device. To evaluate the robustness of these techniques, they are applied either to recorded data or are implemented on chip to be applied to real-time data. Recorded data are the most common basis for evaluation; however, they are not as reliable as real-time measurements. On the other hand, hardware implementation can be both expensive and time consuming. This paper presents a low cost MATLAB-based pulse oximeter that can be used for rapid evaluation of newly developed signal processing techniques and algorithms. Flexibility in applying different signal processing techniques, provision of both processed and unprocessed data, and low implementation cost are the important features of this design, which make it ideal for research and development purposes as well as commercial, hospital and healthcare applications.
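
    A minimal sketch of the core computation such an instrument performs, the textbook "ratio of ratios" SpO2 estimate, written here in Python on synthetic photoplethysmogram traces; the linear calibration coefficients are the commonly quoted approximation, not a clinically validated curve:

```python
import numpy as np

def spo2_ratio_of_ratios(red, infrared):
    """Textbook 'ratio of ratios' SpO2 estimate from red/IR PPG traces:
    R = (AC_red/DC_red) / (AC_ir/DC_ir), mapped through an empirical
    linear calibration. The coefficients 110 and 25 are the commonly
    quoted approximation, not a clinically validated curve."""
    ac = lambda x: np.std(x)              # pulsatile (AC) component
    dc = lambda x: np.mean(x)             # baseline (DC) component
    R = (ac(red) / dc(red)) / (ac(infrared) / dc(infrared))
    return 110.0 - 25.0 * R

# Synthetic 5-second photoplethysmogram segment (made-up amplitudes).
t = np.arange(0.0, 5.0, 0.01)
red = 1.00 + 0.02 * np.sin(2 * np.pi * 1.2 * t)       # ~72 bpm pulse
infrared = 1.00 + 0.04 * np.sin(2 * np.pi * 1.2 * t)
print(f"SpO2 ~ {spo2_ratio_of_ratios(red, infrared):.1f}%")  # ~97.5%
```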

  1. Applying Process Improvement Methods to Clinical and Translational Research: Conceptual Framework and Case Examples.

    PubMed

    Daudelin, Denise H; Selker, Harry P; Leslie, Laurel K

    2015-12-01

    There is growing appreciation that process improvement holds promise for improving quality and efficiency across the translational research continuum but frameworks for such programs are not often described. The purpose of this paper is to present a framework and case examples of a Research Process Improvement Program implemented at Tufts CTSI. To promote research process improvement, we developed online training seminars, workshops, and in-person consultation models to describe core process improvement principles and methods, demonstrate the use of improvement tools, and illustrate the application of these methods in case examples. We implemented these methods, as well as relational coordination theory, with junior researchers, pilot funding awardees, our CTRC, and CTSI resource and service providers. The program focuses on capacity building to address common process problems and quality gaps that threaten the efficient, timely and successful completion of clinical and translational studies.

  2. Application of computational methods to analyse and investigate physical and chemical processes of high-temperature mineralizing of condensed substances in gas stream

    NASA Astrophysics Data System (ADS)

    Markelov, A. Y.; Shiryaevskii, V. L.; Kudrinskiy, A. A.; Anpilov, S. V.; Bobrakov, A. N.

    2017-11-01

    A computational method for analysing the physical and chemical processes of high-temperature mineralizing of low-level radioactive waste in a gas stream during plasma treatment in shaft furnaces is introduced. It is shown that the thermodynamic simulation method adequately describes the changes in the composition of the pyrogas withdrawn from the shaft furnace under different waste treatment regimes. This offers the possibility of developing environmentally and economically viable technologies and small, low-cost facilities for plasma treatment of radioactive waste at currently operating nuclear power plants.

  3. [Effect of gas-turbine green discoloring and drying processing methods on herbal quality of tetraploid Lonicerae Japonicae Flos].

    PubMed

    Hu, Xuan; Li, Wei-dong; Li, Ou; Hao, Jiang-bo; Liu, Jia-kun

    2012-09-01

    To study the effect of the gas-turbine green discoloring and drying processing method on the quality of various Lonicerae Japonicae Flos herbs. A DIKMA Diamonsil(TM) C18 column (4.6 mm × 250 mm, 5 μm) was used on a Waters 1525 HPLC system and eluted with acetonitrile and 0.1% phosphoric acid as the mobile phase. The flow rate was 1.0 mL·min(-1), the column temperature was 25 °C, and the detection wavelength was 355 nm. After being processed by the gas-turbine green discoloring and drying method, tetraploid Lonicerae Japonicae Flos showed a green color. The contents of chlorogenic acid and galuteolin were 5.31% and 0.105%, significantly higher, by 18.0% and 32.1% respectively, than those of diploid Lonicerae Japonicae Flos processed by the same method. The content of chlorogenic acid in tetraploid Lonicerae Japonicae Flos processed by the gas-turbine green discoloring and drying method was also remarkably higher than that of tetraploid and diploid Lonicerae Japonicae Flos processed by the traditional method of natural drying. The gas-turbine green discoloring and drying processing method is a new type of drying method suitable for tetraploid Lonicerae Japonicae Flos. Under gas-turbine green discoloring and drying processing, tetraploid Lonicerae Japonicae Flos shows much higher quality than diploid Lonicerae Japonicae Flos, suggesting that it is a good variety worth popularizing and applying.

  4. Toward an applied technology for quality measurement in health care.

    PubMed

    Berwick, D M

    1988-01-01

    Cost containment, financial incentives to conserve resources, the growth of for-profit hospitals, an aggressive malpractice environment, and demands from purchasers are among the forces today increasing the need for improved methods that measure quality in health care. At the same time, increasingly sophisticated databases and the existence of managed care systems yield new opportunities to observe and correct quality problems. Research on targets of measurement (structure, process, and outcome) and methods of measurement (implicit, explicit, and sentinel methods) has not yet produced managerially useful applied technology for quality measurement in real-world settings. Such an applied technology would have to be cheaper, faster, more flexible, better reported, and more multidimensional than the majority of current research on quality assurance. In developing a new applied technology for the measurement of health care quality, quantitative disciplines have much to offer, such as decision support systems, criteria based on rigorous decision analyses, utility theory, tools for functional status measurement, and advances in operations research.

  5. Multi-objective optimization for an automated and simultaneous phase and baseline correction of NMR spectral data

    NASA Astrophysics Data System (ADS)

    Sawall, Mathias; von Harbou, Erik; Moog, Annekathrin; Behrens, Richard; Schröder, Henning; Simoneau, Joël; Steimers, Ellen; Neymeyr, Klaus

    2018-04-01

    Spectral data preprocessing is an integral and sometimes inevitable part of chemometric analyses. For Nuclear Magnetic Resonance (NMR) spectra a possible first preprocessing step is a phase correction which is applied to the Fourier transformed free induction decay (FID) signal. This preprocessing step can be followed by a separate baseline correction step. Especially if series of high-resolution spectra are considered, then automated and computationally fast preprocessing routines are desirable. A new method is suggested that applies the phase and the baseline corrections simultaneously in an automated form without manual input, which distinguishes this work from other approaches. The underlying multi-objective optimization or Pareto optimization provides improved results compared to consecutively applied correction steps. The optimization process uses an objective function which applies strong penalty constraints and weaker regularization conditions. The new method includes an approach for the detection of zero baseline regions. The baseline correction uses a modified Whittaker smoother. The functionality of the new method is demonstrated for experimental NMR spectra. The results are verified against gravimetric data. The method is compared to alternative preprocessing tools. Additionally, the simultaneous correction method is compared to a consecutive application of the two correction steps.
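
    The paper couples phase and baseline correction in one Pareto optimization; the plain Whittaker smoother it modifies can be sketched as follows (an Eilers-style formulation on a synthetic spectrum, not the authors' modified version):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def whittaker_smooth(y, lam=1.0e7, d=2):
    """Plain Whittaker smoother: minimize ||y - z||^2 + lam * ||D z||^2,
    where D is the d-th order difference matrix. Larger lam gives a
    smoother estimate z."""
    n = len(y)
    D = sparse.eye(n, format="csc")
    for _ in range(d):
        D = D[1:] - D[:-1]                    # build d-th order differences
    A = (sparse.eye(n) + lam * (D.T @ D)).tocsc()
    return spsolve(A, y)

# Synthetic 1-D spectrum: two peaks on a curved baseline plus noise.
x = np.linspace(0.0, 10.0, 1000)
baseline = 0.5 + 0.05 * (x - 5.0) ** 2
peaks = np.exp(-((x - 3.0) ** 2) / 0.01) + 0.7 * np.exp(-((x - 7.0) ** 2) / 0.02)
y = baseline + peaks + 0.01 * np.random.randn(x.size)
z = whittaker_smooth(y)   # heavily smoothed trend; peaks are averaged out
```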

  6. Process improvement methods increase the efficiency, accuracy, and utility of a neurocritical care research repository.

    PubMed

    O'Connor, Sydney; Ayres, Alison; Cortellini, Lynelle; Rosand, Jonathan; Rosenthal, Eric; Kimberly, W Taylor

    2012-08-01

    Reliable and efficient data repositories are essential for the advancement of research in Neurocritical care. Various factors, such as the large volume of patients treated within the neuro ICU, their differing length and complexity of hospital stay, and the substantial amount of desired information can complicate the process of data collection. We adapted the tools of process improvement to the data collection and database design of a research repository for a Neuroscience intensive care unit. By the Shewhart-Deming method, we implemented an iterative approach to improve the process of data collection for each element. After an initial design phase, we re-evaluated all data fields that were challenging or time-consuming to collect. We then applied root-cause analysis to optimize the accuracy and ease of collection, and to determine the most efficient manner of collecting the maximal amount of data. During a 6-month period, we iteratively analyzed the process of data collection for various data elements. For example, the pre-admission medications were found to contain numerous inaccuracies after comparison with a gold standard (sensitivity 71% and specificity 94%). Also, our first method of tracking patient admissions and discharges contained higher than expected errors (sensitivity 94% and specificity 93%). In addition to increasing accuracy, we focused on improving efficiency. Through repeated incremental improvements, we reduced the number of subject records that required daily monitoring from 40 to 6 per day, and decreased daily effort from 4.5 to 1.5 h/day. By applying process improvement methods to the design of a Neuroscience ICU data repository, we achieved a threefold improvement in efficiency and increased accuracy. Although individual barriers to data collection will vary from institution to institution, a focus on process improvement is critical to overcoming these barriers.

  7. On the electrophysiology of aesthetic processing.

    PubMed

    Jacobsen, Thomas

    2013-01-01

    One important method that can be applied for gaining an understanding of the underpinning of aesthetics in the brain is that of electrophysiology. Cognitive electrophysiology, in particular, allows the identification of components in a mental processing architecture. The present chapter reviews findings in the neurocognitive psychology of aesthetics, or neuroaesthetics, that have been obtained with the method of event-related brain potentials, as derived from the human electroencephalogram. The cognitive-perceptual bases as well as affective substages of aesthetic processing have been investigated and those are described here. The event-related potential method allows for the identification of mental processing modes in cognitive and aesthetic processing. It also provides an assessment of the mental chronometry of cognitive and affective stages in aesthetic appreciation. As the work described here shows, distinct processes in the brain are engaged in aesthetic judgments.

  8. Application of fuzzy neural network technologies in management of transport and logistics processes in Arctic

    NASA Astrophysics Data System (ADS)

    Levchenko, N. G.; Glushkov, S. V.; Sobolevskaya, E. Yu; Orlov, A. P.

    2018-05-01

    The method of modeling transport and logistics processes using fuzzy neural network technologies is considered. Analysis of the implemented fuzzy neural network model of the information management system for transnational multimodal transportation showed that this method is well suited to managing transport and logistics processes in Arctic and Subarctic conditions. The modular architecture of the model can be expanded by incorporating additional modules as working conditions in the Arctic and Subarctic present ever more demanding tasks. The architecture allows the information management system to grow without affecting the system or the method itself. The model has a wide range of possible applications, including analysis of the situation and behavior of interacting elements; dynamic monitoring and diagnostics of management processes; simulation of real events and processes; and prediction and prevention of critical situations.

  9. Improvement of pre-treatment method for 36Cl/Cl measurement of Cl in natural groundwater by AMS

    NASA Astrophysics Data System (ADS)

    Nakata, Kotaro; Hasegawa, Takuma

    2011-02-01

    Estimation of 36Cl/Cl by accelerator mass spectrometry (AMS) is a useful method to trace hydrological processes in groundwater. For accurate estimation, separation of SO4(2-) from Cl- in groundwater is required because 36S affects AMS measurement of 36Cl. Previous studies utilized the difference in solubility between BaSO4 and BaCl2 (BaSO4 method) to chemically separate SO4(2-) from Cl-. However, the accuracy of the BaSO4 method largely depends on operator skill, and consequently Cl- recovery is typically incomplete (70-80%). In addition, the method is time consuming (>1 week), and cannot be applied directly to dilute solutions. In this study, a method based on ion-exchange column chromatography (column method) was developed for separation of Cl- and SO4(2-). Optimum conditions were determined for the diameter and height of the column, type and amount of resin, type and concentration of eluent, and flow rate. The recovery of Cl- was almost 100%, which allowed complete separation from SO4(2-). The separation procedure was short (<6 h), and was successfully applied to dilute (1 mg/L Cl) solution. Extracted pore water and diluted seawater samples were processed by the column and BaSO4 methods, and then analyzed by AMS to estimate 36S counts and 36Cl/Cl values. 36S counts in samples processed by the column method were stable and lower than those from the BaSO4 method. The column method has the following advantages over the BaSO4 method: (1) complete and stable separation of Cl- and SO4(2-), (2) less operator influence on results, (3) short processing time (<6 h), (4) high (almost 100%) recovery of Cl-, and (5) concentration of Cl- and separation from SO4(2-) in one system for dilute solutions.

  10. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty and ignore the model uncertainty in process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating model averaging methods into the framework of variance-based global sensitivity analysis, given that model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models with different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general and can be applied to a wide range of problems in hydrology and beyond.
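
    A toy version of the idea, a first-order, variance-based index where the uncertain factor is the choice of process model (each with its own random parameters), can be sketched as follows; the models, parameters and outputs are all invented stand-ins for the recharge/geology example, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Toy stand-in for the paper's setting: two "processes" (recharge,
# geology), each represented by one of two alternative models with its
# own random parameters. All models and numbers are invented.
def simulate(recharge_model, geology_model):
    if recharge_model == 0:
        r = 0.30 * rng.uniform(0.8, 1.2, n)          # linear recharge model
    else:
        r = 0.25 * rng.uniform(0.8, 1.2, n) ** 2     # nonlinear alternative
    if geology_model == 0:
        k = rng.lognormal(0.0, 0.3, n)               # homogeneous conductivity
    else:
        k = rng.lognormal(0.2, 0.6, n)               # heterogeneous alternative
    return r / k                                     # toy model output

outputs = {(rm, gm): simulate(rm, gm) for rm in (0, 1) for gm in (0, 1)}
total_var = np.concatenate(list(outputs.values())).var()

def process_sensitivity(axis):
    """Variance of the conditional mean of the output given one process's
    model choice (equal model weights), normalized by the total variance:
    a first-order, variance-based process importance index."""
    means = [np.concatenate([o for key, o in outputs.items()
                             if key[axis] == m]).mean() for m in (0, 1)]
    return np.var(means) / total_var

print("recharge process sensitivity:", round(process_sensitivity(0), 3))
print("geology process sensitivity: ", round(process_sensitivity(1), 3))
```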

  11. Discrete State Change Model of Manufacturing Quality to Aid Assembly Process Design

    NASA Astrophysics Data System (ADS)

    Koga, Tsuyoshi; Aoyama, Kazuhiro

    This paper proposes a representation model of the quality state change in an assembly process that can be used in a computer-aided process design system. In order to formalize the state change of the manufacturing quality in the assembly process, the functions, operations, and quality changes in the assembly process are represented as a network model that can simulate discrete events. This paper also develops a design method for the assembly process. The design method calculates the space of quality state change and outputs a better assembly process (better operations and better sequences) that can be used to obtain the intended quality state of the final product. A computational redesigning algorithm of the assembly process that considers the manufacturing quality is developed. The proposed method can be used to design an improved manufacturing process by simulating the quality state change. A prototype system for planning an assembly process is implemented and applied to the design of an auto-breaker assembly process. The result of the design example indicates that the proposed assembly process planning method outputs a better manufacturing scenario based on the simulation of the quality state change.

  12. The Application of Lean Thinking to the Care of Patients With Bone and Brain Metastasis With Radiation Therapy

    PubMed Central

    Kim, Christopher S.; Hayman, James A.; Billi, John E.; Lash, Kathy; Lawrence, Theodore S.

    2007-01-01

    Purpose Patients with bone and brain metastases are among the most symptomatic nonemergency patients treated by radiation oncologists. Treatment should begin as soon as possible after the request is generated. We tested the hypothesis that the operational improvement method based on lean thinking could help streamline the treatment of our patients referred for bone and brain metastases. Methods University of Michigan Health System has adopted lean thinking as a consistent approach to quality and process improvement. We applied the principles and tools of lean thinking, especially value as defined by the customer, value stream mapping processes, and one piece flow, to improve the process of delivering care to patients referred for bone or brain metastases. Results and Conclusion The initial evaluation of the process revealed that it was rather chaotic and highly variable. Implementation of the lean thinking principles permitted us to improve the process by cutting the number of individual steps to begin treatment from 27 to 16 and minimize variability by applying standardization. After an initial learning period, the percentage of new patients with brain or bone metastases receiving consultation, simulation, and treatment within the same day rose from 43% to nearly 95%. By implementing the ideas of lean thinking, we improved the delivery of clinical care for our patients with bone or brain metastases. We believe these principles can be applied to much of the care administered throughout our and other health care delivery areas. PMID:20859409

  13. Unfolding and unfoldability of digital pulses in the z-domain

    NASA Astrophysics Data System (ADS)

    Regadío, Alberto; Sánchez-Prieto, Sebastián

    2018-04-01

    The unfolding (or deconvolution) technique is used in the development of digital pulse processing systems applied to particle detection. This technique is applied to digital signals obtained by digitization of analog signals that represent the combined response of the particle detectors and the associated signal conditioning electronics. This work describes a technique to determine whether a signal is unfoldable. For unfoldable signals, the characteristics of the unfolding system (unfolder) are presented. Finally, examples of the method applied to a real experimental setup are discussed.
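
    A rough illustration of unfoldability in the z-domain, stated here as a common textbook criterion rather than the paper's exact condition: a discrete pulse has a stable causal inverse when all zeros of its z-transform lie strictly inside the unit circle (minimum phase), in which case unfolding reduces to inverse filtering.

    ```python
    import numpy as np
    from scipy.signal import lfilter

    def is_unfoldable(h):
        # A pulse h[n] (FIR shaper response) has a stable causal inverse
        # when h[0] is nonzero and all zeros of its z-transform lie
        # inside the unit circle (minimum-phase condition; an assumption
        # standing in for the paper's unfoldability criterion).
        if abs(h[0]) < 1e-12:
            return False
        return np.all(np.abs(np.roots(h)) < 1.0)

    def unfold(h, y):
        # Inverse filtering: X(z) = Y(z) / H(z), implemented as an IIR
        # filter with numerator 1 and denominator h.
        return lfilter([1.0], h, y)

    # Toy shaper: truncated exponential pulse (zeros at radius 0.6).
    h = 0.6 ** np.arange(8)
    x = np.zeros(50); x[5] = 1.0      # original detector impulse
    y = np.convolve(x, h)[:50]        # folded (shaped) signal
    print(is_unfoldable(h), np.allclose(unfold(h, y), x))
    ```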

  14. Nowcasting Cloud Fields for U.S. Air Force Special Operations

    DTIC Science & Technology

    2017-03-01

    application of Bayes' Rule offers many advantages over Kernel Density Estimation (KDE) and other commonly used statistical post-processing methods... reflectance and probability of cloud. A statistical post-processing technique is applied using Bayesian estimation to train the system from a set of past... Keywords: nowcasting, low cloud forecasting, cloud reflectance, ISR, Bayesian estimation, statistical post-processing, machine learning

  15. The Old Brain, the New Mirror: Matching Teaching and Learning Styles in Foreign Language Class (Based on Neuro-Linguistic Programming).

    ERIC Educational Resources Information Center

    Knowles, John K.

    The process of matching teaching materials and methods to the student's learning style and ability level in foreign language classes is explored. The Neuro-Linguistic Programming (NLP) model offers a diagnostic process for the identification of style. This process can be applied to the language learning setting as a way of presenting material to…

  16. SCOUP: a probabilistic model based on the Ornstein-Uhlenbeck process to analyze single-cell expression data during differentiation.

    PubMed

    Matsumoto, Hirotaka; Kiryu, Hisanori

    2016-06-08

    Single-cell technologies make it possible to quantify the comprehensive states of individual cells, and have the power to shed light on cellular differentiation in particular. Although several methods have been developed to fully analyze single-cell expression data, there is still room for improvement in the analysis of differentiation. In this paper, we propose a novel method, SCOUP, to elucidate the differentiation process. Unlike previous dimension-reduction-based approaches, SCOUP describes the dynamics of gene expression throughout differentiation directly, including the degree of differentiation of a cell (in pseudo-time) and cell fate. SCOUP is superior to previous methods with respect to pseudo-time estimation, especially for single-cell RNA-seq. SCOUP also estimates cell lineage more accurately than previous methods, especially for cells at an early stage of bifurcation. In addition, SCOUP can be applied to various downstream analyses. As an example, we propose a novel correlation calculation method for elucidating regulatory relationships among genes. We apply this method to single-cell RNA-seq data and detect a candidate key regulator for differentiation, as well as clusters in a correlation network, that are not detected with conventional correlation analysis. We developed a stochastic process-based method, SCOUP, to analyze single-cell expression data throughout differentiation. SCOUP can estimate pseudo-time and cell lineage more accurately than previous methods. We also propose a novel correlation calculation method based on SCOUP. SCOUP is a promising approach for further single-cell analysis and is available at https://github.com/hmatsu1226/SCOUP.
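
    At the core of SCOUP is the Ornstein-Uhlenbeck process: expression drifts toward a lineage-specific optimum as the cell advances along pseudo-time. A minimal Euler-Maruyama simulation of that dynamic follows; all parameter values are illustrative, not SCOUP's estimates.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_ou(theta, mu, sigma, x0, t):
        # Euler-Maruyama integration of dX = theta*(mu - X)dt + sigma dW,
        # the Ornstein-Uhlenbeck dynamics assumed for the expression of
        # one gene along pseudo-time t.
        x = np.empty(len(t)); x[0] = x0
        for i in range(1, len(t)):
            dt = t[i] - t[i - 1]
            x[i] = x[i - 1] + theta * (mu - x[i - 1]) * dt \
                   + sigma * np.sqrt(dt) * rng.normal()
        return x

    t = np.linspace(0.0, 10.0, 200)
    # Two lineages drift from a common progenitor state toward different
    # expression optima (values are illustrative).
    lineage_a = simulate_ou(theta=0.8, mu=5.0, sigma=0.5, x0=1.0, t=t)
    lineage_b = simulate_ou(theta=0.8, mu=0.5, sigma=0.5, x0=1.0, t=t)
    ```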

  17. A comparative study of Conroy and Monte Carlo methods applied to multiple quadratures and multiple scattering

    NASA Technical Reports Server (NTRS)

    Deepak, A.; Fluellen, A.

    1978-01-01

    An efficient numerical method of multiple quadratures, the Conroy method, is applied to the problem of computing multiple scattering contributions in the radiative transfer through realistic planetary atmospheres. A brief error analysis of the method is given and comparisons are drawn with the more familiar Monte Carlo method. Both methods are stochastic problem-solving models of a physical or mathematical process and utilize the sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy method, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. The methods are illustrated by two simple examples: one, of multidimensional integration involving two independent variables, and the other, of computing the second order scattering contribution to the sky radiance.
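
    The contrast between the two sampling schemes can be sketched on a simple two-variable integral. Here a rank-1 Fibonacci lattice stands in for Conroy's closed symmetric point pattern (an assumption for illustration; Conroy's actual point sets are constructed differently), against plain random sampling.

    ```python
    import numpy as np
    from scipy.special import erf

    f = lambda u, v: np.exp(-(u ** 2 + v ** 2))        # test integrand
    exact = (np.sqrt(np.pi) / 2 * erf(1.0)) ** 2        # over [0,1]^2

    N = 987                                             # Fibonacci number
    rng = np.random.default_rng(0)

    # Monte Carlo: points scattered randomly over the unit square.
    u, v = rng.random(N), rng.random(N)
    mc = f(u, v).mean()

    # Systematic sampling: a rank-1 Fibonacci lattice whose points form a
    # regular pattern filling the square (610 and 987 are consecutive
    # Fibonacci numbers).
    k = np.arange(N)
    lattice = f(k / N, (k * 610 / N) % 1.0).mean()

    print(abs(mc - exact), abs(lattice - exact))
    ```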

  18. Inside the Black Box: Revealing the Process in Applying a Grounded Theory Analysis

    ERIC Educational Resources Information Center

    Rich, Peter

    2012-01-01

    Qualitative research methods have long set an example of rich description, in which data and researchers' hermeneutics work together to inform readers of findings in specific contexts. Among published works, insight into the analytical process is most often represented in the form of methodological propositions or research results. This paper…

  19. Reading Students' Minds: Design Assessment in Distance Education

    ERIC Educational Resources Information Center

    Jones, Derek

    2014-01-01

    This paper considers the design of assessment for students of design according to behaviourist versus experiential pedagogical approaches, relating these to output-oriented as opposed to process-oriented assessment methods. It is part case study and part recognition of the importance of process in design education and how this might be applied in…

  20. Coding the Composing Process: A Guide for Teachers and Researchers.

    ERIC Educational Resources Information Center

    Perl, Sondra

    Designed for teachers and researchers interested in the study of the composing process, this guide introduces a method of analysis that can be applied to data from a range of different cases. Specifically, the guide offers a simple, direct coding scheme for describing the movements occurring during composing that involves four procedures: teaching…

  1. METHOD-SPECIFIC PRECISION AND BIAS RELATIONSHIPS DEVELOPED FROM DATA SUBMITTED DURING USEPA DRINKING WATER LABORATORY PERFORMANCE EVALUATION STUDIES

    EPA Science Inventory

    This paper documents the process used by the United States Environmental Protection Agency (USEPA) to estimate the mean and standard deviation of data reported by in-control drinking water laboratories during Water Supply (WS) studies. This process is then applied to the data re...

  2. METHOD-SPECIFIC PRECISION AND BIAS RELATIONSHIPS DEVELOPED FROM DATA SUBMITTED DURING USEPA WASTEWATER LABORATORY PERFORMANCE EVALUATION STUDIES

    EPA Science Inventory

    This paper documents the process used by the United States Environmental Protection Agency (USEPA) to estimate the mean and standard deviation of data reported by in-control wastewater laboratories during Water Pollution (WP) studies. This process is then applied to the data rep...

  3. A Formal Construction of Term Classes. Technical Report No. TR73-18.

    ERIC Educational Resources Information Center

    Yu, Clement T.

    The computational complexity of a formal process for the construction of term classes for information retrieval is examined. While the process is proven to be difficult computationally, heuristic methods are applied. Experimental results are obtained to illustrate the maximum possible improvement in system performance of retrieval using the formal…

  4. Formation of wood-plastic composites coupled with forest products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meister, J.J.; Zhang, Siyi

    We have developed a method to formulate (wood/paper)-plastic composites and developed a process to prepare materials with maximum strength, durability, and rigidity. We are applying the experience gained from our research to the preparation of wood reinforced, plastic blends. The steps in the process of making wood/plastic composites are described.

  5. Applying the Decoding the Disciplines Process to Teaching Structural Mechanics: An Autoethnographic Case Study

    ERIC Educational Resources Information Center

    Tingerthal, John Steven

    2013-01-01

    Using case study methodology and autoethnographic methods, this study examines a process of curricular development known as "Decoding the Disciplines" (Decoding) by documenting the experience of its application in a construction engineering mechanics course. Motivated by the call to integrate what is known about teaching and learning…

  6. Pasteurization of strawberry puree using a pilot plant pulsed electric fields (PEF) system

    USDA-ARS?s Scientific Manuscript database

    The processing of strawberry puree by pulsed electric fields (PEF) in a pilot plant system has never been evaluated. In addition, a method does not exist to validate the exact number and shape of the pulses applied during PEF processing. Both buffered peptone water (BPW) and fresh strawberry puree (...

  7. Improving Survey Methods with Cognitive Interviews in Small- and Medium-Scale Evaluations

    ERIC Educational Resources Information Center

    Ryan, Katherine; Gannon-Slater, Nora; Culbertson, Michael J.

    2012-01-01

    Findings derived from self-reported, structured survey questionnaires are commonly used in evaluation and applied research to inform policy-making and program decisions. Although there are a variety of issues related to the quality of survey evidence (e.g., sampling precision), the validity of response processes--how respondents process thoughts…

  8. Restorative Justice as Reflective Practice and Applied Pedagogy on College Campuses

    ERIC Educational Resources Information Center

    Rinker, Jeremy A.; Jonason, Chelsey

    2014-01-01

    Restorative justice (RJ) is both a methodology for dealing with conflict and a process for modeling more positive human relations after social harm. As both method and process, the benefits of developing restorative practices on college campuses go well beyond just the many positive community-oriented outcomes of facilitated conflict resolution…

  9. Expanding the Targeting Process into the Space Domain

    DTIC Science & Technology

    2008-06-01

    planning and operations. The process is a continuous method by which information is converted into intelligence and made available to users... Targeting personnel and organizations consume intelligence produced by various agencies and organizations. Actionable and predictive intelligence applies to... intelligence and operations communities (Figure 1).

  10. Applying the Network Simulation Method for testing chaos in a resistively and capacitively shunted Josephson junction model

    NASA Astrophysics Data System (ADS)

    Bellver, Fernando Gimeno; Garratón, Manuel Caravaca; Soto Meca, Antonio; López, Juan Antonio Vera; Guirao, Juan L. G.; Fernández-Martínez, Manuel

    In this paper, we explore the chaotic behavior of resistively and capacitively shunted Josephson junctions via the so-called Network Simulation Method. Such a numerical approach establishes a formal equivalence between physical transport processes and electrical networks, and hence it can be applied to efficiently deal with a wide range of differential systems. The generality underlying that electrical equivalence allows the circuit theory to be applied to several scientific and technological problems. In this work, the Fast Fourier Transform has been applied for chaos detection purposes, and the calculations have been carried out in PSpice, an electrical circuit software package. Overall, such a numerical approach makes it possible to solve Josephson differential models quickly. An empirical application regarding the study of the Josephson model completes the paper.
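
    A plain numerical stand-in for the study (not the Network Simulation Method itself, which maps the equations onto an equivalent circuit in PSpice): integrate the dimensionless RCSJ equation and inspect the FFT of the junction voltage. The parameter values below are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Dimensionless RCSJ model:
    #   beta_c * phi'' + phi' + sin(phi) = i_dc + i_ac * sin(Omega * t).
    beta_c, i_dc, i_ac, Omega = 25.0, 0.3, 0.7, 0.25

    def rhs(t, y):
        phi, v = y
        return [v, (i_dc + i_ac * np.sin(Omega * t) - v - np.sin(phi)) / beta_c]

    t = np.linspace(0.0, 2000.0, 2 ** 14)
    sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0], t_eval=t, rtol=1e-8)

    # FFT of the voltage (phi') after discarding the transient: a broad,
    # continuous spectrum rather than sharp lines at Omega and its
    # harmonics is the signature used for chaos detection.
    v = sol.y[1][len(t) // 2:]
    spectrum = np.abs(np.fft.rfft(v - v.mean()))
    freqs = np.fft.rfftfreq(len(v), d=t[1] - t[0])
    ```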

  11. Period Estimation for Sparsely-sampled Quasi-periodic Light Curves Applied to Miras

    NASA Astrophysics Data System (ADS)

    He, Shiyuan; Yuan, Wenlong; Huang, Jianhua Z.; Long, James; Macri, Lucas M.

    2016-12-01

    We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal in period, we implement a hybrid method that applies the quasi-Newton algorithm to the Gaussian process parameters and searches the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period-luminosity relations.
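
    A stripped-down version of the grid-search step, with the Gaussian process replaced by white noise for brevity (so this is a sketch of the search strategy, not the paper's full likelihood):

    ```python
    import numpy as np

    def grid_period_search(t, y, periods):
        # For each trial period, fit m + a*sin + b*cos by linear least
        # squares and keep the period with the smallest residual. The
        # paper's estimator additionally models residuals with a Gaussian
        # process and integrates out nuisance parameters.
        best_p, best_rss = None, np.inf
        for p in periods:
            w = 2 * np.pi / p
            X = np.column_stack([np.ones_like(t), np.sin(w * t), np.cos(w * t)])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss = np.sum((y - X @ beta) ** 2)
            if rss < best_rss:
                best_p, best_rss = p, rss
        return best_p

    # Sparse, irregular sampling of a quasi-periodic light curve.
    rng = np.random.default_rng(2)
    t = np.sort(rng.uniform(0, 1500, 60))
    y = 12 + np.sin(2 * np.pi * t / 310.0) + 0.1 * rng.normal(size=60)
    print(grid_period_search(t, y, np.linspace(100, 600, 5001)))  # ~310
    ```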

  12. Functional evaluations in the monitoring of the river ecosystem processes: the Adige River as a case study.

    PubMed

    Braioni, M G; Salmoiraghi, G; Bracco, F; Villani, M; Braioni, A; Girelli, L

    2002-03-12

    A model of analysis and environmental evaluation was applied to 11 stretches of the Adige River, where an innovative procedure was carried out to interpret ecological results. Within each stretch, the most suitable methods were used to assess the quality and processes of flood plains, banks, water column, bed, and interstitial environment. Indices were applied to evaluate the wild state and ecological quality of the banks (wild state index, buffer strip index) and the landscape quality of wide areas of the fluvial corridor (environmental landscape index). The biotic components (i.e., macrozoobenthos, phytoplankton and zooplankton, interstitial hyporheic fauna, vegetation in the riparian areas) were analysed by both quantitative and functional methods (such as productivity, litter processing and colonisation). The results achieved were then translated into five classes of functional evaluation. These qualitative assessments have thus preserved a high level of precision and sensitivity in quantifying both the quality of the environmental conditions and the integrity of the ecosystem processes. Read together with urban planning data, they indicate what actions are needed to restore and rehabilitate the Adige River corridor.

  13. Automated segmentation of three-dimensional MR brain images

    NASA Astrophysics Data System (ADS)

    Park, Jonggeun; Baek, Byungjun; Ahn, Choong-Il; Ku, Kyo Bum; Jeong, Dong Kyun; Lee, Chulhee

    2006-03-01

    Brain segmentation is a challenging problem due to the complexity of the brain. In this paper, we propose an automated brain segmentation method for 3D magnetic resonance (MR) brain images, which are represented as a sequence of 2D brain images. The proposed method consists of three steps: pre-processing, removal of non-brain regions (e.g., the skull, meninges, and other organs), and spinal cord restoration. In pre-processing, we perform adaptive thresholding, which takes into account the variable intensities of MR brain images acquired under various conditions. In the segmentation step, we iteratively apply 2D morphological operations and masking to the sequences of 2D sagittal, coronal, and axial planes in order to remove non-brain tissues. Next, the final 3D brain region is obtained by applying an OR operation to the segmentation results of the three planes. Finally, we reconstruct the spinal cord truncated during the previous processes. Experiments are performed with fifteen 3D MR brain image sets with 8-bit gray scale. Experimental results show that the proposed algorithm is fast and provides robust, satisfactory results.
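
    A compressed sketch of the per-plane morphological step and the OR combination, using scipy.ndimage. The fixed threshold and structuring choices are assumptions standing in for the paper's adaptive thresholding.

    ```python
    import numpy as np
    from scipy import ndimage

    def plane_mask(slice2d, thresh):
        # Adaptive thresholding would go here; a fixed illustrative value
        # is used instead. Opening removes thin non-brain structures,
        # then the largest connected component is kept.
        binary = slice2d > thresh
        binary = ndimage.binary_opening(binary, iterations=2)
        labels, n = ndimage.label(binary)
        if n == 0:
            return np.zeros_like(binary)
        sizes = ndimage.sum(binary, labels, range(1, n + 1))
        return labels == (np.argmax(sizes) + 1)

    def brain_mask(volume, thresh=80):
        # Apply the 2D step along sagittal, coronal, and axial planes,
        # then OR the three per-plane results.
        m0 = np.stack([plane_mask(s, thresh) for s in volume])
        m1 = np.stack([plane_mask(s, thresh)
                       for s in volume.transpose(1, 0, 2)]).transpose(1, 0, 2)
        m2 = np.stack([plane_mask(s, thresh)
                       for s in volume.transpose(2, 0, 1)]).transpose(1, 2, 0)
        return m0 | m1 | m2
    ```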

  14. Signal processing techniques for damage detection with piezoelectric wafer active sensors and embedded ultrasonic structural radar

    NASA Astrophysics Data System (ADS)

    Yu, Lingyu; Bao, Jingjing; Giurgiutiu, Victor

    2004-07-01

    An embedded ultrasonic structural radar (EUSR) algorithm is developed that uses a piezoelectric wafer active sensor (PWAS) array to detect defects within a large area of a thin-plate specimen. Signal processing techniques are used to extract the time of flight of the wave packages, and thereby to determine the location of the defects with the EUSR algorithm. In our research, the transient tone-burst wave propagation signals are generated and collected by the embedded PWAS. Then, with signal processing, the frequency contents of the signals and the time of flight of individual frequencies are determined. This paper starts with an introduction of the embedded ultrasonic structural radar algorithm. Then we describe the signal processing methods used to extract the time of flight of the wave packages: wavelet denoising, cross correlation, and the Hilbert transform. Although the hardware can provide an averaging function to eliminate noise arising during signal collection, wavelet denoising is included to ensure better signal quality in severe real-world environments. For better recognition of the time of flight, the cross-correlation method is used. The Hilbert transform is applied to the signals after cross correlation in order to extract their envelope. Signal processing and EUSR are both implemented in a user-friendly graphical interface program in LabVIEW. We conclude with a description of our vision for applying EUSR signal analysis to structural health monitoring and embedded nondestructive evaluation. To this end, we envisage an automatic damage detection application utilizing embedded PWAS, EUSR, and advanced signal processing.
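
    The cross-correlation plus Hilbert-envelope step can be sketched in a few lines; the tone-burst parameters below are illustrative.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def time_of_flight(excitation, received, fs):
        # Cross-correlate the received signal with the tone-burst
        # excitation, then take the Hilbert envelope of the correlation;
        # the envelope peak gives the wave-package arrival time.
        xc = np.correlate(received, excitation, mode='full')
        xc = xc[len(excitation) - 1:]          # keep non-negative lags
        envelope = np.abs(hilbert(xc))
        return np.argmax(envelope) / fs

    # Toy example: Hanning-windowed tone burst echoed with a 200-sample
    # delay plus noise.
    fs, fc, n = 1e6, 50e3, 100
    t = np.arange(n) / fs
    burst = np.hanning(n) * np.sin(2 * np.pi * fc * t)
    received = np.zeros(1000)
    received[200:200 + n] += burst
    received += 0.05 * np.random.default_rng(3).normal(size=1000)
    print(time_of_flight(burst, received, fs))   # ~ 200/fs s, the echo delay
    ```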

  15. Space processing economics

    NASA Technical Reports Server (NTRS)

    Bredt, J. H.

    1974-01-01

    Two types of space processing operations may be considered economically justified; they are manufacturing operations that make profits and experiment operations that provide needed applied research results at lower costs than those of alternative methods. Some examples from the Skylab experiments suggest that applied research should become cost effective soon after the space shuttle and Spacelab become operational. In space manufacturing, the total cost of space operations required to process materials must be repaid by the value added to the materials by the processing. Accurate estimates of profitability are not yet possible because shuttle operational costs are not firmly established and the markets for future products are difficult to estimate. However, approximate calculations show that semiconductor products and biological preparations may be processed on a scale consistent with market requirements and at costs that are at least compatible with profitability using the Shuttle/Spacelab system.

  16. Improvement of Microtremor Data Filtering and Processing Methods Used in Determining the Fundamental Frequency of Urban Areas

    NASA Astrophysics Data System (ADS)

    Mousavi Anzehaee, Mohammad; Adib, Ahmad; Heydarzadeh, Kobra

    2015-10-01

    The manner of microtremor data collection and filtering, and also the method used for processing, have a considerable effect on the accuracy of estimating dynamic soil parameters. In this paper, a running variance method was used to improve the automatic detection of data sections affected by local perturbations. In this method, the running variance of the microtremor data is computed using a sliding window, and the resulting signal is used to remove the perturbed ranges from the original data. Additionally, to determine the fundamental frequency of a site, this study proposes a method based on statistical characteristics: the probability density graph, and the average and standard deviation of all frequencies corresponding to the maximum peaks in the H/V spectra of all data windows, are used to differentiate the real peaks from false peaks resulting from perturbations. The methods have been applied to data recorded for the City of Meybod in central Iran. Experimental results show that the applied methods are able to successfully reduce the effects of extensive local perturbations on microtremor data and, eventually, to estimate the fundamental frequency more accurately than other common methods.
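
    A minimal sketch of the running-variance screen; the window length and rejection threshold are assumptions for illustration.

    ```python
    import numpy as np

    def running_variance(x, window):
        # Sliding-window variance via cumulative sums:
        # var = E[x^2] - E[x]^2 over each window.
        c1 = np.cumsum(np.insert(x, 0, 0.0))
        c2 = np.cumsum(np.insert(x ** 2, 0, 0.0))
        n = window
        mean = (c1[n:] - c1[:-n]) / n
        mean_sq = (c2[n:] - c2[:-n]) / n
        return np.maximum(mean_sq - mean ** 2, 0.0)

    def clean_sections(x, window=500, factor=3.0):
        # Flag windows whose variance exceeds `factor` times the median
        # running variance (an assumed threshold) and drop the
        # corresponding samples, keeping only quiet microtremor data.
        rv = running_variance(x, window)
        bad = rv > factor * np.median(rv)
        mask = np.ones(len(x), dtype=bool)
        for i in np.flatnonzero(bad):
            mask[i:i + window] = False
        return x[mask]
    ```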

  17. A Systematic Approach to Applying Lean Techniques to Optimize an Office Process at the Y-12 National Security Complex

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Credille, Jennifer; Owens, Elizabeth

    This capstone offers the introduction of Lean concepts to an office activity to demonstrate the versatility of Lean. Traditionally, Lean has been associated with process improvements as applied to an industrial atmosphere. However, this paper will demonstrate that implementing Lean concepts within an office activity can result in significant process improvements. Lean first emerged with the conception of the Toyota Production System. This innovative concept was designed to improve productivity in the automotive industry by eliminating waste and variation. Lean has also been applied to office environments; however, the limited literature reveals that most Lean applications within an office are restricted to one or two techniques. Our capstone confronts these restrictions by introducing a systematic approach that utilizes multiple Lean concepts. The approach incorporates: system analysis, system reliability, system requirements, and system feasibility. The methodical Lean outline provides tools for a successful outcome, which ensures the process is thoroughly dissected and can be achieved for any process in any work environment.

  18. Parametric system identification of catamaran for improving controller design

    NASA Astrophysics Data System (ADS)

    Timpitak, Surasak; Prempraneerach, Pradya; Pengwang, Eakkachai

    2018-01-01

    This paper presents the estimation of a simplified dynamic model for only the surge and yaw motions of a catamaran, using system identification (SI) techniques to determine the associated unknown parameters. These methods will enhance the design process for the motion control system of an Unmanned Surface Vehicle (USV). The simulation results demonstrate an effective way to solve for damping forces and to determine added masses by applying least-squares and AutoRegressive Exogenous (ARX) methods. Both methods are then evaluated according to the estimated parametric errors from the vehicle's dynamic model. The ARX method, which yields better estimation accuracy, can then be applied to identify unknown parameters as well as to help improve the controller design of a real unmanned catamaran.
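
    An ARX fit reduces to ordinary least squares on lagged outputs and inputs. A minimal sketch with a first-order toy model; the orders and data are illustrative, not the catamaran's.

    ```python
    import numpy as np

    def fit_arx(y, u, na=2, nb=2):
        # ARX model: y[k] = -a1*y[k-1] - ... - a_na*y[k-na]
        #                 +  b1*u[k-1] + ... + b_nb*u[k-nb] + e[k].
        # Stack lagged outputs and inputs into a regressor matrix and
        # solve for the parameters by linear least squares.
        n = max(na, nb)
        rows = []
        for k in range(n, len(y)):
            rows.append(np.concatenate([-y[k - na:k][::-1],
                                        u[k - nb:k][::-1]]))
        Phi = np.array(rows)
        theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
        return theta[:na], theta[na:]          # (a, b) coefficients

    # Toy surge dynamics: y[k] = 0.7*y[k-1] + 0.2*u[k-1] + noise.
    rng = np.random.default_rng(4)
    u = rng.normal(size=500)
    y = np.zeros(500)
    for k in range(1, 500):
        y[k] = 0.7 * y[k - 1] + 0.2 * u[k - 1] + 0.01 * rng.normal()
    a, b = fit_arx(y, u, na=1, nb=1)
    print(a, b)    # a ~ [-0.7], b ~ [0.2]
    ```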

  19. ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.

    PubMed

    Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping

    2018-04-27

    A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.

  20. Subject-level reliability analysis of fast fMRI with application to epilepsy.

    PubMed

    Hao, Yongfu; Khoo, Hui Ming; von Ellenrieder, Nicolas; Gotman, Jean

    2017-07-01

    Recent studies have applied the new magnetic resonance encephalography (MREG) sequence to the study of interictal epileptic discharges (IEDs) in the electroencephalogram (EEG) of epileptic patients. However, there are no criteria to quantitatively evaluate different processing methods and thus ensure proper use of the new sequence. We evaluated different processing steps of this new sequence under the common generalized linear model (GLM) framework by assessing the reliability of results. A bootstrap sampling technique was first used to generate multiple replicated data sets; a GLM with different processing steps was then applied to obtain activation maps, and the reliability of these maps was assessed. We applied our analysis in an event-related GLM related to IEDs. Higher reliability was achieved by using a GLM with a head motion confound regressor with 24 components rather than the usual 6, with an autoregressive model of order 5, and with a canonical hemodynamic response function (HRF) rather than variable-latency or patient-specific HRFs. Comparison of activation with the IED field also favored the canonical HRF, consistent with the reliability analysis. The reliability analysis helps to optimize the processing methods for this fast fMRI sequence, in a context in which we do not know the ground truth of activation areas. Magn Reson Med 78:370-382, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
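
    The bootstrap-reliability idea can be sketched as follows: ordinary resampling of scans, ignoring the temporal autocorrelation that the paper handles with an AR(5) model; the detection threshold is also an assumption.

    ```python
    import numpy as np

    def bootstrap_reliability(X, y, n_boot=500, seed=0):
        # For each bootstrap replicate, resample rows of the GLM design
        # matrix X and data y with replacement, refit by OLS, and record
        # whether the effect of interest (first regressor) is detected.
        # Reliability is the fraction of replicates agreeing with the
        # full-data result.
        rng = np.random.default_rng(seed)
        n = len(y)

        def detect(Xb, yb):
            beta, *_ = np.linalg.lstsq(Xb, yb, rcond=None)
            dof = len(yb) - Xb.shape[1]
            sigma2 = np.sum((yb - Xb @ beta) ** 2) / dof
            se = np.sqrt(sigma2 * np.linalg.inv(Xb.T @ Xb)[0, 0])
            return abs(beta[0] / se) > 1.96    # crude threshold (assumption)

        ref = detect(X, y)
        hits = 0
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)
            hits += detect(X[idx], y[idx]) == ref
        return hits / n_boot

    rng0 = np.random.default_rng(1)
    X = np.column_stack([rng0.normal(size=200), np.ones(200)])
    y = 0.5 * X[:, 0] + rng0.normal(size=200)
    print(bootstrap_reliability(X, y))
    ```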

  1. A consistent modelling methodology for secondary settling tanks in wastewater treatment.

    PubMed

    Bürger, Raimund; Diehl, Stefan; Nopens, Ingmar

    2011-03-01

    The aim of this contribution is partly to build consensus on a consistent modelling methodology (CMM) for complex real processes in wastewater treatment by combining classical concepts with results from applied mathematics, and partly to apply it to the clarification-thickening process in the secondary settling tank. In the CMM, the real process should be approximated by a mathematical model (process model; ordinary or partial differential equation (ODE or PDE)), which in turn is approximated by a simulation model (numerical method) implemented on a computer. These steps have often not been carried out correctly. The secondary settling tank was chosen as a case since it is one of the most complex processes in a wastewater treatment plant, and simulation models developed decades ago have no guarantee of satisfying fundamental mathematical and physical properties. Nevertheless, such methods are still used in commercial tools to date. This becomes particularly relevant as state-of-the-art practice moves towards plant-wide modelling, where all submodels interact and errors propagate through the model, severely hampering any calibration effort and, hence, the predictive purpose of the model. The CMM is described by applying it first to a simple conversion process in the biological reactor, yielding an ODE solver, and then to the solid-liquid separation in the secondary settling tank, yielding a PDE solver. The time has come to incorporate established mathematical techniques into environmental engineering, and wastewater treatment modelling in particular, and to use proven, reliable, and consistent simulation models. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Fragment-based Quantum Mechanical/Molecular Mechanical Simulations of Thermodynamic and Kinetic Process of the Ru2+-Ru3+ Self-Exchange Electron Transfer.

    PubMed

    Zeng, Xiancheng; Hu, Xiangqian; Yang, Weitao

    2012-12-11

    A fragment-based fractional number of electrons (FNE) approach is developed to study entire electron transfer (ET) processes, from the electron donor region to the acceptor region, in the condensed phase. Both regions are described by the density-fragment interaction (DFI) method, while the FNE serves as an efficient ET order parameter to simulate the electron transfer process. In combination with the QM/MM energy expression, the DFI-FNE method is demonstrated to describe ET processes robustly, with the Ru2+-Ru3+ self-exchange ET as a proof-of-concept example. This method allows for systematic calculations of redox free energies, reorganization energies, electronic couplings, and absolute ET rate constants within the Marcus regime.

  3. Identifying pathogenic processes by integrating microarray data with prior knowledge

    PubMed Central

    2014-01-01

    Background: It is of great importance to identify molecular processes and pathways that are involved in disease etiology. Although there has been extensive use of various high-throughput methods for this task, pathogenic pathways are still not completely understood. Often the set of genes or proteins identified as altered in genome-wide screens shows poor overlap with canonical disease pathways. These findings are difficult to interpret, yet crucial in order to improve the understanding of the molecular processes underlying the disease progression. We present a novel method for identifying groups of connected molecules from a set of differentially expressed genes. These groups represent functional modules sharing common cellular function and involve signaling and regulatory events. Specifically, our method makes use of Bayesian statistics to identify groups of co-regulated genes based on the microarray data, where external information about molecular interactions and connections is used as priors in the group assignments. Markov chain Monte Carlo sampling is used to search for the most reliable grouping. Results: Simulation results showed that the method improved the ability to identify correct groups compared to traditional clustering, especially for small sample sizes. Applied to a microarray heart failure dataset, the method found one large cluster with several genes important for the structure of the extracellular matrix and a smaller group with many genes involved in carbohydrate metabolism. The method was also applied to a microarray dataset on melanoma cancer patients with or without metastasis, where the main cluster was dominated by genes related to keratinocyte differentiation. Conclusion: Our method found clusters overlapping with known pathogenic processes, but also pointed to new connections extending beyond the classical pathways. PMID:24758699

  4. [Clinical decision making: Fostering critical thinking in the nursing diagnostic process through case studies].

    PubMed

    Müller-Staub, Maria; Stuker-Studer, Ursula

    2006-10-01

    Case studies, based on actual patients' situations, provide a method of clinical decision making to foster critical thinking in nurses. This paper describes the method and process of group case studies applied in continuing education settings. The method is based on Balint's case supervision and was further developed and combined with the nursing diagnostic process. A case study contains different phases: pre-phase, selection phase, case delineation, and case work. The case provider narratively tells the situation of a patient. This allows the group to analyze and cluster signs and symptoms, to state nursing diagnoses, and to derive nursing interventions. Results of the case study are validated by applying the theoretical background and the critical appraisal of the case provider. Learning effects of the case studies were evaluated by means of qualitative questionnaires and analyzed according to Mayring. Findings revealed the following categories: a) patients' problems are perceived in a patient-centred way, accurate nursing diagnoses are stated, and effective nursing interventions implemented; b) professional nursing tasks are perceived more purposefully and named more precisely; c) the professional nursing relationship, communication, and respectful behaviour with patients were perceived in differentiated ways. The theoretical framework is described in the paper "Clinical decision making and critical thinking in the nursing diagnostic process" (Müller-Staub, 2006).

  5. Modeling virtual organizations with Latent Dirichlet Allocation: a case for natural language processing.

    PubMed

    Gross, Alexander; Murthy, Dhiraj

    2014-10-01

    This paper explores a variety of methods for applying the Latent Dirichlet Allocation (LDA) automated topic modeling algorithm to the modeling of the structure and behavior of virtual organizations found within modern social media and social networking environments. As the field of Big Data reveals, an increase in the scale of social data available presents new challenges which are not tackled by merely scaling up hardware and software. Rather, they necessitate new methods and, indeed, new areas of expertise. Natural language processing provides one such method. This paper applies LDA to the study of scientific virtual organizations whose members employ social technologies. Because of the vast data footprint in these virtual platforms, we found that natural language processing was needed to 'unlock' and render visible latent, previously unseen conversational connections across large textual corpora (spanning profiles, discussion threads, forums, and other social media incarnations). We introduce variants of LDA and ultimately argue that natural language processing is a critical interdisciplinary methodology for making better sense of social 'Big Data'; using LDA, we were able to successfully model nested discussion topics from forums and blog posts. Importantly, we found that LDA can move us beyond the state of the art in conventional Social Network Analysis techniques. Copyright © 2014 Elsevier Ltd. All rights reserved.
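
    For readers unfamiliar with the technique, a minimal LDA run looks like the following. The toy corpus and the scikit-learn implementation are stand-ins; the paper does not specify its tooling.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Tiny corpus standing in for forum threads and blog posts.
    docs = [
        "telescope survey data release imaging pipeline",
        "grant proposal budget collaboration meeting agenda",
        "imaging pipeline calibration survey telescope",
        "meeting minutes budget travel collaboration",
    ]

    counts = CountVectorizer().fit(docs)
    X = counts.transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

    # Print the top words per latent topic.
    terms = counts.get_feature_names_out()
    for k, comp in enumerate(lda.components_):
        top = [terms[i] for i in comp.argsort()[-4:][::-1]]
        print(f"topic {k}: {top}")
    ```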

  6. Comparison of infrared and 3D digital image correlation techniques applied for mechanical testing of materials

    NASA Astrophysics Data System (ADS)

    Krstulović-Opara, Lovre; Surjak, Martin; Vesenjak, Matej; Tonković, Zdenko; Kodvanj, Janoš; Domazet, Željko

    2015-11-01

    To investigate the applicability of infrared thermography as a tool for acquiring dynamic yielding in metals, a comparison of infrared thermography with three-dimensional digital image correlation has been made. Dynamic tension tests and three-point bending tests of aluminum alloys were performed to evaluate the results obtained by IR thermography, in order to identify the capabilities and limits of these two methods. Both approaches detect plastification zone migrations during the yielding process. The results of the tension test and the three-point bending test proved the validity of the IR approach as a method for evaluating the dynamic yielding process when used on complex structures such as cellular porous materials. The stability of the yielding process in the three-point bending test, in contrast to the fluctuation of the plastification front in the tension test, is of great importance for the validation of numerical constitutive models. The research proved the strong performance, robustness, and reliability of the IR approach when used to evaluate yielding during dynamic loading processes, while the 3D DIC method proved to be superior in the low-velocity loading regimes. This research, based on two basic tests, confirmed the conclusions and suggestions presented in our previous research on porous materials, where middle-wave infrared thermography was applied.

  7. Speech processing using maximum likelihood continuity mapping

    DOEpatents

    Hogden, John E.

    2000-01-01

    Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.

  8. The development of a method of producing etch resistant wax patterns on solar cells

    NASA Technical Reports Server (NTRS)

    Pastirik, E.

    1980-01-01

    A potentially attractive technique for wax masking of solar cells prior to etching processes was studied. This technique made use of a reuseable wax composition which was applied to the solar cell in patterned form by means of a letterpress printing method. After standard wet etching was performed, wax removal by means of hot water was investigated. Application of the letterpress wax printing process to silicon was met with a number of difficulties. The most serious shortcoming of the process was its inability to produce consistently well-defined printed patterns on the hard silicon cell surface.

  9. Speech processing using maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, J.E.

    Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.

  10. Some comments on Hurst exponent and the long memory processes on capital markets

    NASA Astrophysics Data System (ADS)

    Sánchez Granero, M. A.; Trinidad Segovia, J. E.; García Pérez, J.

    2008-09-01

    The analysis of long memory processes in capital markets has been one of the topics in finance, since the existence of market memory could imply rejection of the efficient market hypothesis. The study of these processes in finance is carried out through the Hurst exponent, and the most classical method applied is R/S analysis. In this paper we discuss the efficiency of this methodology, as well as some of its more important modifications, for detecting long memory. We also propose the application of a classical geometrical method with slight modifications, and we compare both approaches.
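
    The R/S statistic itself is straightforward to compute. A minimal sketch of classical rescaled-range analysis follows; the window-doubling scheme is one common choice, assumed here.

    ```python
    import numpy as np

    def hurst_rs(x, min_n=8):
        # Classical rescaled-range (R/S) analysis: for window length n,
        # compute R/S = (max - min of the cumulative mean-adjusted sum)
        # divided by the window standard deviation, average over windows,
        # and estimate H as the slope of log(R/S) against log(n).
        ns, rs = [], []
        n = min_n
        while n <= len(x) // 2:
            vals = []
            for start in range(0, len(x) - n + 1, n):
                w = x[start:start + n]
                z = np.cumsum(w - w.mean())
                s = w.std()
                if s > 0:
                    vals.append((z.max() - z.min()) / s)
            ns.append(n); rs.append(np.mean(vals))
            n *= 2
        H, _ = np.polyfit(np.log(ns), np.log(rs), 1)
        return H

    # White noise has no long memory: H should come out near 0.5.
    print(hurst_rs(np.random.default_rng(5).normal(size=2 ** 12)))
    ```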

  11. Does Choice of Multicriteria Method Matter? An Experiment in Water Resources Planning

    NASA Astrophysics Data System (ADS)

    Hobbs, Benjamin F.; Chankong, Vira; Hamadeh, Wael; Stakhiv, Eugene Z.

    1992-07-01

    Many multiple criteria decision making methods have been proposed and applied to water planning. Their purpose is to provide information on tradeoffs among objectives and to help users articulate value judgments in a systematic, coherent, and documentable manner. The wide variety of available techniques confuses potential users, causing inappropriate matching of methods with problems. Experiments in which water planners apply more than one multicriteria procedure to realistic problems can help dispel this confusion by testing method appropriateness, ease of use, and validity. We summarize one such experiment where U.S. Army Corps of Engineers personnel used several methods to screen urban water supply plans. The methods evaluated include goal programming, ELECTRE I, additive value functions, multiplicative utility functions, and three techniques for choosing weights (direct rating, indifference tradeoff, and the analytical hierarchy process). Among the conclusions we reach are the following. First, experienced planners generally prefer simpler, more transparent methods. Additive value functions are favored. Yet none of the methods are endorsed by a majority of the participants; many preferred to use no formal method at all. Second, there is strong evidence that rating, the most commonly applied weight selection method, is likely to lead to weights that fail to represent the trade-offs that users are willing to make among criteria. Finally, we show that decisions can be as or more sensitive to the method used as to which person applies it. Therefore, if who chooses is important, then so too is how a choice is made.

  12. Processing Electromyographic Signals to Recognize Words

    NASA Technical Reports Server (NTRS)

    Jorgensen, C. C.; Lee, D. D.

    2009-01-01

    A recently invented speech-recognition method applies to words that are articulated by means of the tongue and throat muscles but are otherwise not voiced or, at most, are spoken sotto voce. This method could satisfy a need for speech recognition under circumstances in which normal audible speech is difficult, poses a hazard, is disturbing to listeners, or compromises privacy. The method could also be used to augment traditional speech recognition by providing an additional source of information about articulator activity. The method can be characterized as intermediate between (1) conventional speech recognition through processing of voice sounds and (2) a method, not yet developed, of processing electroencephalographic signals to extract unspoken words directly from thoughts. This method involves computational processing of digitized electromyographic (EMG) signals from muscle innervation acquired by surface electrodes under a subject's chin near the tongue and on the side of the subject s throat near the larynx. After preprocessing, digitization, and feature extraction, EMG signals are processed by a neural-network pattern classifier, implemented in software, that performs the bulk of the recognition task as described.

  13. Isotope Identification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karpius, Peter Joseph

    2017-09-18

    The objectives of this training module are to examine the process of using gamma spectroscopy for radionuclide identification; to apply pattern recognition to gamma spectra; to identify methods of verifying energy calibration; and to discuss potential causes of isotope misidentification.

  14. Generalised Pareto distribution: impact of rounding on parameter estimation

    NASA Astrophysics Data System (ADS)

    Pasarić, Z.; Cindrić, K.

    2018-05-01

    Problems that occur when common methods (e.g. maximum likelihood and L-moments) for fitting a generalised Pareto (GP) distribution are applied to discrete (rounded) data sets are revealed by analysing real dry spell duration series. The analysis is subsequently performed on generalised Pareto time series obtained by systematic Monte Carlo (MC) simulations. The solution depends on the following: (1) the actual amount of rounding, as determined by the actual data range (measured by the scale parameter, σ) vs. the rounding increment (Δx), combined with (2) applying a certain (sufficiently high) threshold and considering the series of excesses instead of the original series. For a moderate amount of rounding (e.g. σ/Δx ≥ 4), which is commonly met in practice (at least for dry spell data), and where no threshold is applied, the classical methods work reasonably well. If cutting at the threshold is applied to rounded data, which is actually essential when dealing with a GP distribution, then classical methods applied in the standard way can lead to erroneous estimates, even if the rounding itself is moderate. In this case, it is necessary to adjust the theoretical location parameter for the series of excesses. The other solution is to add appropriate uniform noise to the rounded data (so-called jittering). This, in a sense, reverses the process of rounding; thereafter, it is straightforward to apply the common methods. Finally, if the rounding is too coarse (e.g. σ/Δx ≲ 1), then none of the above recipes works, and specific methods for rounded data should be applied.
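
    The jittering remedy is easy to reproduce. A sketch under the stated assumptions, using scipy's genpareto parameterization and an arbitrarily chosen threshold:

    ```python
    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(6)

    # Continuous GP sample (e.g. dry spell durations), then rounding.
    raw = genpareto.rvs(c=0.2, scale=5.0, size=5000, random_state=rng)
    dx = 1.0
    rounded = np.round(raw / dx) * dx

    # Jittering: add uniform noise on (-dx/2, dx/2) to "undo" rounding.
    jittered = rounded + rng.uniform(-dx / 2, dx / 2, size=rounded.size)

    # Apply a threshold, form excesses, and fit with location fixed at 0.
    u = 5.0
    excesses = jittered[jittered > u] - u
    c_hat, _, scale_hat = genpareto.fit(excesses, floc=0)
    print(c_hat, scale_hat)
    ```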

  15. Finding Major Patterns of Aging Process by Data Synchronization

    NASA Astrophysics Data System (ADS)

    Miyano, Takaya; Tsutsui, Takako

    We developed a method for extracting feature patterns from multivariate data using a network of coupled phase oscillators governed by an analogue of the Kuramoto model for collective synchronization. Our method may be called data synchronization. We applied data synchronization to the care-needs certification data of the Japanese public long-term care insurance program, provided by Otsu City, a historic city near Kyoto, to find the trend of the major patterns of the aging process for elderly people needing nursing care.
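
    A minimal Kuramoto simulation shows the mechanism: oscillators with similar natural frequencies phase-lock into clusters. Encoding each record's feature value as a natural frequency, as below, is an illustrative assumption, not necessarily the paper's exact construction.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def kuramoto(omega, K, steps=2000, dt=0.01):
        # Euler integration of the Kuramoto model:
        # d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i).
        n = len(omega)
        theta = rng.uniform(0, 2 * np.pi, n)
        for _ in range(steps):
            diff = theta[None, :] - theta[:, None]
            theta = theta + dt * (omega + (K / n) * np.sin(diff).sum(axis=1))
        return theta % (2 * np.pi)

    # Two groups of records with similar (standardized) feature values:
    # with coupling, each group phase-locks, and the resulting phase
    # clusters expose the major patterns in the data.
    omega = np.concatenate([rng.normal(0.0, 0.05, 20),
                            rng.normal(1.0, 0.05, 20)])
    print(np.sort(kuramoto(omega, K=1.5)))
    ```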

  16. SNMR pulse sequence phase cycling

    DOEpatents

    Walsh, David O; Grunewald, Elliot D

    2013-11-12

    Technologies applicable to SNMR pulse sequence phase cycling are disclosed, including SNMR acquisition apparatus and methods, SNMR processing apparatus and methods, and combinations thereof. SNMR acquisition may include transmitting two or more SNMR pulse sequences and applying a phase shift to a pulse in at least one of the pulse sequences, according to any of a variety of cycling techniques. SNMR processing may include combining SNMR data from a plurality of pulse sequences comprising pulses of different phases, so that desired signals are preserved and undesired signals are canceled.

  17. Application of Mean of Absolute Deviation Method for the Selection of Best Nonlinear Component Based on Video Encryption

    NASA Astrophysics Data System (ADS)

    Anees, Amir; Khan, Waqar Ahmad; Gondal, Muhammad Asif; Hussain, Iqtadar

    2013-07-01

    The aim of this work is to make use of the mean of absolute deviation (MAD) method for the evaluation process of substitution boxes used in the advanced encryption standard. In this paper, we use the MAD technique to analyze some popular and prevailing substitution boxes used in encryption processes. In particular, MAD is applied to advanced encryption standard (AES), affine power affine (APA), Gray, Lui J., Residue Prime, S8 AES, SKIPJACK, and Xyi substitution boxes.

  18. Detection of fuze defects by image-processing methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, M.J.

    1988-03-01

    This paper describes experimental studies of the detection of mechanical defects by the application of computer-processing methods to real-time radiographic images of fuze assemblies. The experimental results confirm that a new algorithm developed at Materials Research Laboratory has potential for the automatic inspection of these assemblies and of others that contain discrete components. The algorithm was applied to images that contain a range of grey levels and has been found to be tolerant to image variations encountered under simulated production conditions.

  19. Satellite radar altimetry over ice. Volume 1: Processing and corrections of Seasat data over Greenland

    NASA Technical Reports Server (NTRS)

    Zwally, H. Jay; Brenner, Anita C.; Major, Judith A.; Martin, Thomas V.; Bindschadler, Robert A.

    1990-01-01

    The data-processing methods and ice data products derived from Seasat radar altimeter measurements over the Greenland ice sheet and surrounding sea ice are documented. The corrections derived and applied to the Seasat radar altimeter data over ice are described in detail, including the editing and retracking algorithm to correct for height errors caused by lags in the automatic range tracking circuit. The methods for radial adjustment of the orbits and estimation of the slope-induced errors are given.

  20. Investigation of sulphur isotope variation due to different processes applied during uranium ore concentrate production.

    PubMed

    Krajkó, Judit; Varga, Zsolt; Wallenius, Maria; Mayer, Klaus; Konings, Rudy

    The applicability and limitations of the sulphur isotope ratio as a nuclear forensic signature have been studied. The leaching methods typically applied in uranium mining processes were simulated for five uranium ore samples, and the n(34S)/n(32S) ratios were measured. The sulphur isotope ratio variation during uranium ore concentrate (UOC) production was also followed using two real-life sample sets obtained from industrial UOC production facilities. Once the major source of sulphur is revealed, its appropriate application for origin assessment can be established. Our results confirm the previous assumption that process reagents have a significant effect on the n(34S)/n(32S) ratio; thus the sulphur isotope ratio is in most cases a process-related signature.

  1. Directional dual-tree rational-dilation complex wavelet transform.

    PubMed

    Serbes, Gorkem; Gulcur, Halil Ozcan; Aydin, Nizamettin

    2014-01-01

    The dyadic discrete wavelet transform (DWT) has been used successfully in processing signals with non-oscillatory transient behaviour. However, due to the low Q-factor of their wavelet atoms, dyadic DWTs are less effective in processing oscillatory signals such as embolic signals (ESs). ESs are extracted from quadrature Doppler signals, which are the output of Doppler ultrasound systems. In order to process ESs, a pre-processing operation known as phase filtering must first be employed to obtain directional signals from the quadrature Doppler signals. Only then can wavelet-based methods be applied to these directional signals for further analysis. In this study, a directional dual-tree rational-dilation complex wavelet transform, which can be applied directly to quadrature signals and can extract directional information during analysis, is introduced.

  2. Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Luo, Yabo; Waden, Yongo P.

    2017-06-01

    Ordinarily, the Job Shop Scheduling Problem (JSSP) is known as an NP-hard problem whose uncertainty and complexity cannot be handled by linear methods. Current studies on the JSSP therefore concentrate mainly on applying different methods to improve heuristics for optimizing it. However, many obstacles to efficient optimization of the JSSP remain, namely low efficiency and poor reliability, which can easily trap the optimization process in local optima. To address this, a study on an Ant Colony Optimization (ACO) algorithm combined with constraint-handling tactics is carried out in this paper. The problem is subdivided into three parts: (1) analysis of processing time tolerance-based constraint features in the JSSP, performed with a constraint-satisfaction model; (2) satisfaction of the constraints by considering consistency technology and a constraint-spreading algorithm in order to improve the performance of the ACO algorithm, from which the JSSP model based on the improved ACO algorithm is constructed; and (3) demonstration of the effectiveness of the proposed method, in terms of reliability and efficiency, through comparative experiments performed on benchmark problems. The results obtained by the proposed method are better, and the applied technique can be used in optimizing the JSSP.

  3. [Detecting fire smoke based on the multispectral image].

    PubMed

    Wei, Ying-Zhuo; Zhang, Shao-Wu; Liu, Yan-Wei

    2010-04-01

    Smoke detection in the early stage of a fire is very important for preventing forest fires. Because traditional technologies based on video and image processing are easily affected by dynamic background information, they suffer three limitations: low anti-interference ability, a high false detection rate, and difficulty distinguishing fire smoke from water fog. A novel method for detecting smoke based on multispectral images is proposed in the present paper. Using a multispectral digital imaging technique, multispectral image series of fire smoke and water fog were obtained over the band range of 400 to 720 nm, and the images were divided into bins. The Euclidean distance among the bins was taken as a measure of the difference between spectrograms. After obtaining the spectral feature vectors of a dynamic region, the regions of fire smoke and water fog were extracted according to the spectrogram feature difference between target and background. Indoor and outdoor experiments show that the smoke detection method based on multispectral images can be applied to smoke detection and can effectively distinguish fire smoke from water fog. Combined with video image processing methods, the multispectral image detection method can also be applied to forest fire surveillance, reducing the false alarm rate in forest fire detection.
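
    A sketch of the comparison step: build a normalized mean spectrum for a candidate region and measure its Euclidean distance to reference signatures. The data layout, reference names, and threshold are illustrative assumptions.

    ```python
    import numpy as np

    def spectral_signature(cube, mask):
        # Mean spectrum of the pixels in a region; `cube` has shape
        # (bands, height, width), covering roughly 400-720 nm.
        spec = cube[:, mask].mean(axis=1)
        return spec / np.linalg.norm(spec)

    def classify_region(cube, region_mask, references, threshold=0.15):
        # Compare the region's normalized spectrum against reference
        # signatures (e.g. "smoke", "water_fog") by Euclidean distance;
        # the threshold value is an assumption for illustration.
        sig = spectral_signature(cube, region_mask)
        dists = {name: np.linalg.norm(sig - ref)
                 for name, ref in references.items()}
        best = min(dists, key=dists.get)
        return best if dists[best] < threshold else "unknown"

    # Tiny synthetic demo.
    bands, h, w = 33, 8, 8
    cube = np.random.default_rng(8).random((bands, h, w))
    mask = np.zeros((h, w), bool); mask[2:5, 2:5] = True
    refs = {"smoke": spectral_signature(cube, mask)}
    print(classify_region(cube, mask, refs))   # trivially "smoke"
    ```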

  4. Estimating actual evapotranspiration from remote sensing imagery using R: the package 'TriangleMethod'.

    NASA Astrophysics Data System (ADS)

    Gampe, David; Huber García, Verena; Marzahn, Philip; Ludwig, Ralf

    2017-04-01

    Actual evapotranspiration (ETa) is an essential variable for assessing water availability, drought risk, and food security, among others. Measurements of ETa are, however, limited to a small footprint, which hampers spatially explicit analysis and application, and they are very often not available at all. To overcome this data scarcity, ETa can be assessed by various remote sensing approaches such as the Triangle Method (Jiang & Islam, 1999), in which ETa is estimated using the Normalized Difference Vegetation Index (NDVI) and land surface temperature (LST). In this study, the R package 'TriangleMethod' was developed to efficiently calculate NDVI, process LST, and finally derive ETa from the input data set. The package contains all necessary calculation steps and allows easy processing of a large database of remote sensing images. By default, parameterizations for the Landsat TM and ETM+ sensors are implemented; however, the algorithms can easily be extended to additional sensors. The auxiliary variables required to estimate ETa with this method, such as elevation, solar radiation, and air temperature at overpass time, can be processed as gridded information to allow a better representation of the study area. The package was successfully applied in various studies in Spain, Palestine, Costa Rica, and Canada.

  5. A Principled Approach to the Specification of System Architectures for Space Missions

    NASA Technical Reports Server (NTRS)

    McKelvin, Mark L. Jr.; Castillo, Robert; Bonanne, Kevin; Bonnici, Michael; Cox, Brian; Gibson, Corrina; Leon, Juan P.; Gomez-Mustafa, Jose; Jimenez, Alejandro; Madni, Azad

    2015-01-01

    Modern space systems are increasing in complexity and scale at an unprecedented pace. Consequently, innovative methods, processes, and tools are needed to cope with the increasing complexity of architecting these systems. A key challenge in practice is the ability to scale the processes, methods, and tools used to architect complex space systems. Traditionally, the process of specifying space system architectures has largely relied on capturing the architecture in informal descriptions embedded within loosely coupled design documents and domain expertise. Such informal descriptions often lead to misunderstandings between design teams, ambiguous specifications, difficulty in maintaining consistency as the architecture evolves throughout the system development life cycle, and costly design iterations. Traditional methods are therefore becoming increasingly inefficient in coping with ever-increasing system complexity. We apply the principles of component-based design and platform-based design to the development of the system architecture for a practical space system to demonstrate the feasibility of our approach using SysML. Our results show that we are able to apply a systematic design method to manage system complexity, thus enabling effective data management, semantic coherence and traceability across different levels of abstraction in the design chain. Just as important, our approach enables interoperability among heterogeneous tools in a concurrent engineering, model-based design environment.

  6. [FMEA applied to the radiotherapy patient care process].

    PubMed

    Meyrieux, C; Garcia, R; Pourel, N; Mège, A; Bodez, V

    2012-10-01

    Failure modes and effects analysis (FMEA) is a risk analysis method used at the Radiotherapy Department of Institute Sainte-Catherine as part of a strategy of continuously improving the quality and safety of treatments. The method comprises several steps: definition of the main processes; for each of them, description of every step of prescription, treatment preparation and treatment application; identification of the possible risks, their consequences and their origins; identification of existing safety elements that may prevent these risks; and grading of the risks to assign a criticality score, resulting in a numerical ranking of the risks. Finally, the impact of proposed corrective actions was estimated by a new grading round. For each process studied, a detailed map of the risks was obtained, facilitating the identification of priority actions to be undertaken. For example, five steps in patient treatment planning presented an unacceptable level of risk, 62 a moderate level and 31 an acceptable level. The FMEA method, used in the industrial domain and applied here to health care, is an effective tool for the management of risks in patient care. However, the time and training required to implement this method should not be underestimated. Copyright © 2012 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.
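
    A minimal sketch of the criticality-grading step common to FMEA: each failure mode is scored and the scores are multiplied into a criticality (Risk Priority Number). The three-factor scoring, the 1-10 scales, the thresholds, and the example failure modes below are generic FMEA conventions and invented values, not details from the paper.

      # Criticality (RPN) = severity x occurrence x detectability,
      # each graded on an assumed 1-10 scale.
      failure_modes = [
          # (step, severity, occurrence, detectability)
          ("wrong dose prescription entered", 9, 2, 4),
          ("patient mis-positioned at treatment", 7, 3, 3),
          ("outdated imaging used for planning", 8, 2, 2),
      ]

      def criticality(severity, occurrence, detectability):
          return severity * occurrence * detectability

      for step, s, o, d in sorted(failure_modes,
                                  key=lambda m: -criticality(*m[1:])):
          rpn = criticality(s, o, d)
          level = ("unacceptable" if rpn >= 72 else
                   "moderate" if rpn >= 36 else "acceptable")   # assumed thresholds
          print(f"{step}: RPN={rpn} ({level})")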

  7. Utility of Computational Methods to Identify the Apoptosis Machinery in Unicellular Eukaryotes

    PubMed Central

    Durand, Pierre Marcel; Coetzer, Theresa Louise

    2008-01-01

    Apoptosis is the phenotypic result of an active, regulated process of self-destruction. Following various cellular insults, apoptosis has been demonstrated in numerous unicellular eukaryotes, but very little is known about the genes and proteins that initiate and execute this process in this group of organisms. A bioinformatic approach presents an array of powerful methods to direct investigators in the identification of the apoptosis machinery in protozoans. In this review, we discuss some of the available computational methods and illustrate how they may be applied using the identification of a Plasmodium falciparum metacaspase gene as an example. PMID:19812769

  8. A general dead-time correction method based on live-time stamping. Application to the measurement of short-lived radionuclides.

    PubMed

    Chauvenet, B; Bobin, C; Bouchard, J

    2017-12-01

    Dead-time correction formulae are established in the general case of superimposed non-homogeneous Poisson processes. Based on the same principles as conventional live-timed counting, this method exploits the additional information made available by digital signal processing systems, especially the possibility of storing the time stamps of live-time intervals. No approximation needs to be made to obtain these formulae. Estimates of the variances of the corrected rates are also presented. The method is applied to the activity measurement of short-lived radionuclides. Copyright © 2017 Elsevier Ltd. All rights reserved.
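
    A minimal sketch of the basic live-time idea (not the paper's general formulae): if the acquisition system stamps the start and end of each live interval, the dead-time-corrected rate is simply the registered counts divided by the summed live time. The interval stamps and count below are invented.

      # Live-time stamped acquisition: (live_start, live_end) intervals in seconds,
      # plus the number of events registered while the system was live.
      live_intervals = [(0.0, 0.8), (1.0, 1.9), (2.1, 3.0)]   # invented stamps
      counts = 5400

      real_time = live_intervals[-1][1] - live_intervals[0][0]
      live_time = sum(end - start for start, end in live_intervals)

      corrected_rate = counts / live_time          # dead-time-corrected rate (s^-1)
      dead_time_fraction = 1.0 - live_time / real_time

      print(f"corrected rate: {corrected_rate:.1f} /s, "
            f"dead-time fraction: {dead_time_fraction:.1%}")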

  9. Detecting the sampling rate through observations

    NASA Astrophysics Data System (ADS)

    Shoji, Isao

    2018-09-01

    This paper proposes a method to detect the sampling rate of discrete time series of diffusion processes. Using the maximum likelihood estimates of the parameters of a diffusion process, we establish a criterion based on the Kullback-Leibler divergence and thereby estimate the sampling rate. Simulation studies are conducted to check whether the method can detect the sampling rate from data, and their results show good detection performance. In addition, the method is applied to a financial time series sampled on a daily basis, and the detected sampling rate is shown to differ from the conventional rate.
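
    For intuition only (not the paper's exact criterion): for an Ornstein-Uhlenbeck diffusion the transition density over a step of size dt is Gaussian, so candidate sampling rates can be scored by the Kullback-Leibler divergence between the empirical one-step innovation distribution and the model transition density at each candidate dt. All parameter values below are invented.

      import numpy as np

      # Ornstein-Uhlenbeck diffusion dX = -THETA*X dt + SIGMA dW: over a step of
      # size dt the exact transition is Gaussian with autocorrelation exp(-THETA*dt).
      THETA, SIGMA = 1.5, 0.8   # stand-ins for the maximum-likelihood estimates

      def step_params(dt):
          a = np.exp(-THETA * dt)
          return a, SIGMA**2 / (2 * THETA) * (1 - a**2)

      def kl_gaussian(mu0, var0, mu1, var1):
          """KL( N(mu0, var0) || N(mu1, var1) )."""
          return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1)

      def detect_dt(x, candidates):
          """Score each candidate spacing by how well its implied innovations fit."""
          best_dt, best_kl = None, np.inf
          for dt in candidates:
              a, var = step_params(dt)
              innov = x[1:] - a * x[:-1]   # model-implied one-step innovations
              kl = kl_gaussian(innov.mean(), innov.var(), 0.0, var)
              if kl < best_kl:
                  best_dt, best_kl = dt, kl
          return best_dt

      # Simulate a path at true spacing 0.1 and try to recover that spacing.
      rng = np.random.default_rng(1)
      a, var = step_params(0.1)
      x = np.zeros(20000)
      for i in range(1, len(x)):
          x[i] = a * x[i - 1] + rng.normal(0.0, np.sqrt(var))

      print("detected sampling interval:", detect_dt(x, [0.05, 0.1, 0.2, 0.5]))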

  10. Design Tool Using a New Optimization Method Based on a Stochastic Process

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio

    Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, such methods depend on the initial conditions and risk falling into local solutions. In this paper, we propose a new optimization method based on the concept of path integrals used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require experience-based tuning. We applied the new optimization method to a hang glider design, optimizing both the hang glider itself and its flight trajectory. The numerical results show that the performance of the method is sufficient for practical use.

  11. Assessing and evaluating multidisciplinary translational teams: a mixed methods approach.

    PubMed

    Wooten, Kevin C; Rose, Robert M; Ostir, Glenn V; Calhoun, William J; Ameredes, Bill T; Brasier, Allan R

    2014-03-01

    A case report illustrates how multidisciplinary translational teams can be assessed using outcome, process, and developmental types of evaluation using a mixed-methods approach. Types of evaluation appropriate for teams are considered in relation to relevant research questions and assessment methods. Logic models are applied to scientific projects and team development to inform choices between methods within a mixed-methods design. Use of an expert panel is reviewed, culminating in consensus ratings of 11 multidisciplinary teams and a final evaluation within a team-type taxonomy. Based on team maturation and scientific progress, teams were designated as (a) early in development, (b) traditional, (c) process focused, or (d) exemplary. Lessons learned from data reduction, use of mixed methods, and use of expert panels are explored.

  12. An open data mining framework for the analysis of medical images: application on obstructive nephropathy microscopy images.

    PubMed

    Doukas, Charalampos; Goudas, Theodosis; Fischer, Simon; Mierswa, Ingo; Chatziioannou, Aristotle; Maglogiannis, Ilias

    2010-01-01

    This paper presents an open image-mining framework that provides access to tools and methods for the characterization of medical images. Several image processing and feature extraction operators have been implemented and exposed through Web Services. Rapid-Miner, an open source data mining system, has been utilized for applying classification operators and creating the essential processing workflows. The proposed framework has been applied to the detection of salient objects in obstructive nephropathy microscopy images. Initial classification results are quite promising, demonstrating the feasibility of automated characterization of kidney biopsy images.

  13. Evaluating Payments for Environmental Services: Methodological Challenges

    PubMed Central

    2016-01-01

    Over the last fifteen years, Payments for Environmental Services (PES) schemes have become very popular environmental policy instruments, but the academic literature has begun to question their additionality. The literature attempts to estimate the causal effect of these programs by applying impact evaluation (IE) techniques. However, PES programs are complex instruments, and IE methods cannot be applied directly without adjustments. Based on a systematic review of the literature, this article proposes a framework for the methodological process of designing an IE for PES schemes. It reviews and discusses the methodological choices at each step of the process and proposes guidelines for practitioners. PMID:26910850

  14. Teaching to Think: Applying the Socratic Method outside the Law School Setting

    ERIC Educational Resources Information Center

    Peterson, Evan

    2009-01-01

    An active learning process has the potential to provide educational benefits above and beyond what students might receive from more traditional, passive approaches. The Socratic Method is a unique approach to active learning that facilitates critical thinking, open-mindedness, and teamwork. By posing a series of guided questions to students, an…

  15. Neurolinguistic Foundations to Methods of Teaching a Second Language.

    ERIC Educational Resources Information Center

    Walsh, Terrence M.; Diller, Karl C.

    Applied linguistic theory is examined in light of neuroscientific knowledge, especially in regard to the structure and function of the cerebral cortex, in order to illuminate the process and methods of teaching or learning language. Wernicke's Area and Broca's Area are parts of the brain that have been associated with language function.…

  16. RELIABILITY STUDY OF THE U.S. EPA'S METHODS 101A - DETERMINATION OF PARTICULATE AND GASEOUS MERCURY EMISSIONS

    EPA Science Inventory

    EPA Method 101A applies to the determination of particulate and gaseous mercury emissions from sewage sludge incinerators and other sources. Concern has been expressed that ammonia or hydrogen chloride (HCl), when present in the emissions, interferes in the analytical processes and p...

  17. The Key Factors of an Active Learning Method in a Microprocessors Course

    ERIC Educational Resources Information Center

    Carpeno, A.; Arriaga, J.; Corredor, J.; Hernandez, J.

    2011-01-01

    The creation of the European Higher Education Area (EHEA) is promoting a change toward a new model of education focused on the student. It is impelling methodological innovation processes in many European universities, leading more teachers to apply methods based on active and cooperative learning in their classrooms. However, the successful…

  18. Quantitative Evaluation of Management Courses: Part 1

    ERIC Educational Resources Information Center

    Cunningham, Cyril

    1973-01-01

    The author describes how he developed a method of evaluating and comparing management courses of different types and lengths by applying an ordinal system of relative values using a process of transmutation. (MS)

  19. Lean in Air Permitting Guide

    EPA Pesticide Factsheets

    The Lean in Air Permitting Guide is designed to help air program managers at public agencies better understand the potential value and results that can be achieved by applying Lean improvement methods to air permitting processes.

  20. Development of a perfusion reversed-phase high performance liquid chromatography method for the characterisation of maize products using multivariate analysis.

    PubMed

    Rodriguez-Nogales, J M; Garcia, M C; Marina, M L

    2006-02-03

    A perfusion reversed-phase high performance liquid chromatography (RP-HPLC) method has been designed to allow rapid (3.4 min) separations of maize proteins with high resolution. Several factors, such as extraction conditions, temperature, detection wavelength, and the type and concentration of the ion-pairing agent, were optimised. A fine optimisation of the gradient elution was also performed by applying experimental design. Commercial maize products for human consumption (flours, precooked flours, fried snacks and extruded snacks) were characterised for the first time by perfusion RP-HPLC, and their chromatographic profiles allowed a differentiation among products related to the different technological processes used for their preparation. Furthermore, applying discriminant analysis made it possible to group the samples according to the technological process undergone by the maize products, obtaining a good prediction for 92% of the samples.

  1. Principal Component Analysis of Thermographic Data

    NASA Technical Reports Server (NTRS)

    Winfree, William P.; Cramer, K. Elliott; Zalameda, Joseph N.; Howell, Patricia A.; Burke, Eric R.

    2015-01-01

    Principal Component Analysis (PCA) has been shown to be effective for reducing thermographic NDE data. While a reliable technique for enhancing the visibility of defects in thermal data, PCA can be computationally intense and time-consuming when applied to the large data sets typical in thermography. Additionally, PCA can experience problems when very large defects are present (defects that dominate the field of view), since the calculation of the eigenvectors is then governed by the presence of the defect, not the "good" material. To increase the processing speed and to minimize the negative effects of large defects, an alternative method of PCA is being pursued in which a fixed set of eigenvectors, generated from an analytic model of the thermal response of the material under examination, is used to process the thermal data from composite materials. This method has been applied for the characterization of flaws.
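
    A minimal sketch of PCA reduction of a thermographic sequence, including projection onto a fixed set of temporal eigenvectors as the abstract describes; the data shapes, the toy exponential cooling model, and the component count are assumptions.

      import numpy as np

      def temporal_eigenvectors(seq):
          """Temporal eigenvectors of a sequence of shape (frames, height, width)."""
          frames, h, w = seq.shape
          X = seq.reshape(frames, h * w)
          X = X - X.mean(axis=0)          # remove each pixel's temporal mean
          U, _, _ = np.linalg.svd(X, full_matrices=False)
          return U                        # columns are temporal eigenvectors

      def project_fixed(seq, U_fixed, n_components=3):
          """Project measured data onto fixed (e.g. model-derived) eigenvectors."""
          frames, h, w = seq.shape
          X = seq.reshape(frames, h * w)
          X = X - X.mean(axis=0)
          comps = U_fixed[:, :n_components].T @ X
          return comps.reshape(n_components, h, w)   # component images

      # Fixed eigenvectors from an analytic cooling model, applied to noisy data.
      t = np.arange(50)[:, None, None]
      model_seq = np.exp(-t / 10.0) * np.ones((50, 64, 64))
      rng = np.random.default_rng(0)
      measured = model_seq + rng.normal(0.0, 0.01, model_seq.shape)

      U = temporal_eigenvectors(model_seq)
      images = project_fixed(measured, U)
      print(images.shape)   # (3, 64, 64): component images used to reveal flaws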

  2. Elucidating rhizosphere processes by mass spectrometry - A review.

    PubMed

    Rugova, Ariana; Puschenreiter, Markus; Koellensperger, Gunda; Hann, Stephan

    2017-03-01

    The presented review discusses state-of-the-art mass spectrometric methods that have been developed and applied for the investigation of chemical processes in the soil-root interface, the so-called rhizosphere. The physical and chemical characteristics of rhizosphere soil are to a great extent influenced by a complex mixture of compounds released from plant roots, i.e. root exudates, which have a high impact on nutrient and trace element dynamics in the soil-root interface as well as on microbial activities and soil physico-chemical characteristics. Chemical characterization and accurate quantification of the compounds present in the rhizosphere are major prerequisites for a better understanding of rhizosphere processes and require the development and application of advanced sampling procedures in combination with highly selective and sensitive analytical techniques. In recent years, targeted and non-targeted mass spectrometry-based methods have emerged, and their combination with specific separation methods for various elements and compounds across a wide polarity range has been successfully applied in several studies. With this review we critically discuss the work conducted within the last decade in the context of rhizosphere research and elemental or molecular mass spectrometry, emphasizing different separation techniques such as GC, LC and CE. Moreover, selected applications such as metal detoxification and nutrient acquisition are discussed with regard to the mass spectrometric techniques applied in studies of root exudates in plant-bacteria interactions. Additionally, a more recent isotope-probing technique is highlighted as a novel mass spectrometry-based application. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. MVL spatiotemporal analysis for model intercomparison in EPS: application to the DEMETER multi-model ensemble

    NASA Astrophysics Data System (ADS)

    Fernández, J.; Primo, C.; Cofiño, A. S.; Gutiérrez, J. M.; Rodríguez, M. A.

    2009-08-01

    In a recent paper, Gutiérrez et al. (Nonlinear Process Geophys 15(1):109-114, 2008) introduced a new characterization of spatiotemporal error growth—the so called mean-variance logarithmic (MVL) diagram—and applied it to study ensemble prediction systems (EPS); in particular, they analyzed single-model ensembles obtained by perturbing the initial conditions. In the present work, the MVL diagram is applied to multi-model ensembles analyzing also the effect of model formulation differences. To this aim, the MVL diagram is systematically applied to the multi-model ensemble produced in the EU-funded DEMETER project. It is shown that the shared building blocks (atmospheric and ocean components) impose similar dynamics among different models and, thus, contribute to poorly sampling the model formulation uncertainty. This dynamical similarity should be taken into account, at least as a pre-screening process, before applying any objective weighting method.
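
    A minimal sketch of the MVL construction introduced by Gutiérrez et al. (2008): at each time, take the logarithm of the absolute difference between a perturbed field and a reference field, then plot its spatial mean against its spatial variance as the error grows. The toy fields and growth rate below are invented.

      import numpy as np

      def mvl_point(reference, perturbed, eps=1e-300):
          """One (mean, variance) point of the MVL diagram from two fields."""
          logdiff = np.log(np.abs(perturbed - reference) + eps)
          return logdiff.mean(), logdiff.var()

      # Toy setup: perturbation errors doubling at every step.
      rng = np.random.default_rng(0)
      ref = rng.normal(size=(64, 64))
      for step in range(1, 11):
          pert = ref + rng.normal(scale=1e-6 * 2.0**step, size=ref.shape)
          m, v = mvl_point(ref, pert)
          print(f"step {step:2d}: mean={m:8.2f}  variance={v:5.2f}")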

  4. Developing a model for the adequate description of electronic communication in hospitals.

    PubMed

    Saboor, Samrend; Ammenwerth, Elske

    2011-01-01

    Adequate information and communication technology (ICT) systems can help to improve communication in hospitals. Changes to the ICT infrastructure of hospitals must be planned carefully. To support comprehensive planning, we presented a classification of 81 common errors of electronic communication at the MIE 2008 congress. Our objective now was to develop a data model that defines specific requirements for an adequate description of electronic communication processes. We first applied the method of explicating qualitative content analysis to the error categorization in order to determine the essential process details. After this, we applied the method of subsuming qualitative content analysis to the results of the first step. The result is a data model for the adequate description of electronic communication, comprising 61 entities and 91 relationships. The data model comprises and organizes all details necessary for the detection of the respective errors. It can either be used to extend the capabilities of existing modeling methods or serve as a basis for the development of a new approach.

  5. Optimising operational amplifiers by evolutionary algorithms and gm/Id method

    NASA Astrophysics Data System (ADS)

    Tlelo-Cuautle, E.; Sanabria-Borbon, A. C.

    2016-10-01

    The evolutionary algorithm called the non-dominated sorting genetic algorithm (NSGA-II) is applied herein to the optimisation of operational transconductance amplifiers. NSGA-II is accelerated by applying the gm/Id method to estimate reduced search spaces associated with the widths (W) and lengths (L) of the metal-oxide-semiconductor field-effect transistors (MOSFETs), and to guarantee their appropriate bias-level conditions. In addition, we introduce an integer encoding for the W/L sizes of the MOSFETs to avoid a post-processing step of rounding off their values to multiples of the integrated-circuit fabrication technology. Finally, from the feasible solutions generated by NSGA-II, we introduce a second optimisation stage to guarantee that the final feasible W/L sizes support process, voltage and temperature (PVT) variations. The optimisation results lead us to conclude that the gm/Id method and integer encoding are quite useful for accelerating the convergence of the evolutionary algorithm NSGA-II, while the second optimisation stage guarantees the robustness of the feasible solutions to PVT variations.
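
    A minimal sketch of the integer-encoding idea: genes are integers that map directly to multiples of the fabrication grid, so every decoded W/L is legal by construction and no rounding-off step is needed. The 0.18 µm grid, the size bounds, and the gene layout are invented for illustration; the paper's actual encoding may differ.

      import numpy as np

      GRID = 0.18e-6            # assumed fabrication grid (metres)
      W_RANGE = (1, 100)        # gene bounds: W in multiples of the grid (assumed)
      L_RANGE = (1, 20)         # gene bounds: L in multiples of the grid (assumed)

      def decode(genes):
          """Map integer genes directly to legal W/L sizes: no rounding needed."""
          return genes[::2] * GRID, genes[1::2] * GRID

      rng = np.random.default_rng(0)
      n_transistors = 5
      genes = np.empty(2 * n_transistors, dtype=int)
      genes[::2] = rng.integers(*W_RANGE, n_transistors, endpoint=True)
      genes[1::2] = rng.integers(*L_RANGE, n_transistors, endpoint=True)

      W, L = decode(genes)
      print("W (um):", W * 1e6)
      print("L (um):", L * 1e6)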

  6. Multi-crease Self-folding by Global Heating.

    PubMed

    Miyashita, Shuhei; Onal, Cagdas D; Rus, Daniela

    2015-01-01

    This study demonstrates a new approach to autonomous folding for the body of a 3D robot from a 2D sheet, using heat. We approach this challenge by folding a 0.27-mm sheetlike material into a structure. We utilize the thermal deformation of a contractive sheet sandwiched by rigid structural layers. During this baking process, the heat applied on the entire sheet induces contraction of the contracting layer and thus forms an instructed bend in the sheet. To attain the targeted folding angles, the V-fold spans method is used. The targeted angle θout can be kinematically encoded into crease geometry. The realization of this angle in the folded structure can be approximately controlled by a contraction angle θin. The process is non-reversible, is reliable, and is relatively fast. Our method can be applied simultaneously to all the folds in multi-crease origami structures. We demonstrate the use of this method to create a lightweight mobile robot.

  7. Statistical Model Selection for TID Hardness Assurance

    NASA Technical Reports Server (NTRS)

    Ladbury, R.; Gorelick, J. L.; McClure, S.

    2010-01-01

    Radiation Hardness Assurance (RHA) methodologies against Total Ionizing Dose (TID) degradation impose rigorous statistical treatments for data from a part's Radiation Lot Acceptance Test (RLAT) and/or its historical performance. However, no similar methods exist for using "similarity" data, that is, data for similar parts fabricated in the same process as the part under qualification, despite the greater difficulty and potential risk in interpreting similarity data. In this work, we develop methods to disentangle part-to-part, lot-to-lot and part-type-to-part-type variation. The methods we develop apply not just to qualification decisions, but also to quality control and the detection of process changes and other "out-of-family" behavior. We begin by discussing the data used in the study and the challenges of developing a statistic that provides a meaningful measure of degradation across multiple part types, each with its own performance specifications. We then develop analysis techniques and apply them to the different data sets.

  8. Measurement of thermally ablated lesions in sonoelastographic images using level set methods

    NASA Astrophysics Data System (ADS)

    Castaneda, Benjamin; Tamez-Pena, Jose Gerardo; Zhang, Man; Hoyt, Kenneth; Bylund, Kevin; Christensen, Jared; Saad, Wael; Strang, John; Rubens, Deborah J.; Parker, Kevin J.

    2008-03-01

    The capability of sonoelastography to detect lesions based on elasticity contrast can be applied to monitor the creation of thermally ablated lesions. Currently, segmentation of lesions depicted in sonoelastographic images is performed manually, which can be a time-consuming process prone to significant intra- and inter-observer variability. This work presents a semi-automated segmentation algorithm for sonoelastographic data. The user starts by planting a seed in the perceived center of the lesion. Fast marching methods use this information to create an initial estimate of the lesion. Subsequently, level set methods refine its final shape by attaching the segmented contour to edges in the image while maintaining smoothness. The algorithm is applied to in vivo sonoelastographic images of twenty-five thermally ablated lesions created in porcine livers. The estimated areas are compared to results from manual segmentation and gross pathology images. Results show that the algorithm outperforms manual segmentation in accuracy and in inter- and intra-observer variability. The processing time per image is significantly reduced.

  9. Non-invasive body temperature measurement of wild chimpanzees using fecal temperature decline.

    PubMed

    Jensen, Siv Aina; Mundry, Roger; Nunn, Charles L; Boesch, Christophe; Leendertz, Fabian H

    2009-04-01

    New methods are required to increase our understanding of pathologic processes in wild mammals. We developed a noninvasive field method to estimate the body temperature of wild living chimpanzees habituated to humans, based on statistically fitting the temperature decline of feces after defecation. The method was established using control measurements of human rectal temperature and the subsequent change in fecal temperature over time. The method was then applied to temperature data collected from wild chimpanzee feces. In humans, we found good correspondence between the temperature estimated by the method and the rectal temperature actually measured (maximum deviation 0.22 °C). The method was successfully applied, and the average estimated temperature of the chimpanzees was 37.2 °C. This simple-to-use field method reliably estimates the body temperature of wild chimpanzees and probably also of other large mammals.
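
    A minimal sketch of the idea, assuming Newtonian cooling of the fecal sample toward ambient temperature: fit the decline and extrapolate back to the moment of defecation to estimate body temperature. The ambient temperature, measurement times, and readings are invented.

      import numpy as np
      from scipy.optimize import curve_fit

      def cooling(t, t_body, k, t_ambient=24.0):
          """Newtonian cooling from t_body toward an assumed 24 C ambient."""
          return t_ambient + (t_body - t_ambient) * np.exp(-k * t)

      # Invented field data: fecal temperature (C) at minutes after defecation.
      t_min = np.array([1.0, 2.0, 3.0, 5.0, 8.0])
      temp = np.array([36.1, 35.2, 34.4, 33.0, 31.2])

      popt, _ = curve_fit(cooling, t_min, temp, p0=(37.0, 0.1))
      t_body, k = popt
      print(f"estimated body temperature: {t_body:.1f} C "
            f"(cooling constant {k:.3f}/min)")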

  10. Pre-release plastic packaging of MEMS and IMEMS devices

    DOEpatents

    Peterson, Kenneth A.; Conley, William R.

    2002-01-01

    A method is disclosed for pre-release plastic packaging of MEMS and IMEMS devices. The method can include encapsulating the MEMS device in a transfer-molded plastic package. Next, a perforation can be made in the package to provide access to the MEMS elements. The non-ablative material removal process can include wet etching, dry etching, mechanical machining, water jet cutting, and ultrasonic machining, or any combination thereof. Finally, the MEMS elements can be released by using either a wet etching or a dry plasma etching process. The MEMS elements can be protected with a parylene protective coating. After releasing the MEMS elements, an anti-stiction coating can be applied. The perforating step can be applied to both sides of the device or package. A cover lid can be attached to the face of the package after releasing any MEMS elements. The cover lid can include a window providing optical access. The method can be applied to any plastic-packaged microelectronic device that requires access to the environment, including chemical, pressure, or temperature-sensitive microsensors; CCD chips, photocells, laser diodes, VCSELs, and UV-EPROMs. The present method places the high-risk packaging steps ahead of the release of the fragile portions of the device. It also protects the die in shipment between the molding house and the house that will release the MEMS elements and subsequently treat the surfaces.

  11. Color image enhancement of medical images using alpha-rooting and zonal alpha-rooting methods on 2D QDFT

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.; John, Aparna; Agaian, Sos S.

    2017-03-01

    The 2-D quaternion discrete Fourier transform (2-D QDFT) is the Fourier transform applied to color images when the color images are considered in the quaternion space. Quaternion numbers are four-dimensional hyper-complex numbers, and the quaternion representation of a color image allows the color at each pixel to be treated as a single unit. In the quaternion approach to color image enhancement, each color is seen as a vector, which captures the merging effect of the combination of the primary colors. Conventionally, color images are processed by applying the respective algorithm to each channel separately and then composing the color image from the processed channels. In this article, the alpha-rooting and zonal alpha-rooting methods are used with the 2-D QDFT. In the alpha-rooting method, the alpha-root of the transformed frequency values of the 2-D QDFT is determined before taking the inverse transform. In the zonal alpha-rooting method, the frequency spectrum of the 2-D QDFT is divided into different zones and alpha-rooting is applied with different alpha values for different zones. The choice of alpha values is optimized with a genetic algorithm. The visual perception of the 3-D medical images is improved by changing the reference gray line.
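
    For illustration, a per-channel 2-D FFT version of alpha-rooting; this is a simplification, since the paper applies alpha-rooting to the quaternion transform of the whole color image and optimizes alpha with a genetic algorithm, whereas the alpha value here is just an assumed constant.

      import numpy as np

      def alpha_rooting(channel, alpha=0.92):
          """Enhance one image channel by alpha-rooting its 2-D spectrum.

          Each Fourier coefficient F is replaced by F * |F|**(alpha - 1), which
          for alpha < 1 boosts low-magnitude (typically high-frequency)
          coefficients relative to the dominant ones.
          """
          F = np.fft.fft2(channel)
          F_enh = F * np.power(np.abs(F) + 1e-12, alpha - 1.0)
          out = np.real(np.fft.ifft2(F_enh))
          return np.clip(out, 0.0, 1.0)

      # Toy usage on a random RGB image normalized to [0, 1].
      rng = np.random.default_rng(0)
      img = rng.random((128, 128, 3))
      enhanced = np.stack([alpha_rooting(img[..., c]) for c in range(3)], axis=-1)
      print(enhanced.shape)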

  12. Method and system for photoconductive detector signal correction

    DOEpatents

    Carangelo, Robert M.; Hamblen, David G.; Brouillette, Carl R.

    1992-08-04

    A corrective factor is applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factor may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects.

  13. Method and system for photoconductive detector signal correction

    DOEpatents

    Carangelo, R.M.; Hamblen, D.G.; Brouillette, C.R.

    1992-08-04

    A corrective factor is applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factor may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects. 5 figs.

  14. Newly invented biobased materials from low-carbon, diverted waste fibers: research methods, testing, and full-scale application in a case study structure

    Treesearch

    Julee A Herdt; John Hunt; Kellen Schauermann

    2016-01-01

    This project demonstrates newly invented, biobased construction materials developed by applying low-carbon biomass waste sources through the authors' engineered fiber processes and technology. If manufactured and applied at large scale, the project inventions can divert large volumes of cellulose waste into high-performance, low-embodied-energy environmental construction...

  15. Date canning: a new approach for the long time preservation of date.

    PubMed

    Homayouni, Aziz; Azizi, Aslan; Keshtiban, Ata Khodavirdivand; Amini, Amir; Eslami, Ahad

    2015-04-01

    Dramatic growth in date (Phoenix dactylifera L.) production makes it important to apply proper methods to preserve this nutritious fruit for a long time. Numerous methods have been used to achieve this goal in recent years; they can be classified into non-thermal (fumigation, ozonation, irradiation and packaging) and thermal (heat treatment, cold storage, dehydration, jam, etc.) processing methods. In this paper these methods are reviewed and novel methods for date preservation are presented.

  16. Six Lessons We Learned Applying Six Sigma

    NASA Technical Reports Server (NTRS)

    Carroll, Napoleon; Casleton, Christa H.

    2005-01-01

    As Chief Financial Officer of Kennedy Space Center (KSC), I'm not only responsible for financial planning and accounting but also for building strong partnerships with the CFO's customers, who include Space Shuttle and International Space Station operations as well as all who manage the KSC Spaceport. My never-ending goal is to design, manage and continuously improve our core business processes so that they deliver world-class products and services to the CFO's customers. I became interested in Six Sigma as Christa Casleton (KSC's first Six Sigma Black Belt) applied Six Sigma tools and methods to our Plan and Account for Travel Costs process. Her analysis was fresh, innovative and thorough but, even more impressive, was her approach to ensuring ongoing, continuous process improvement. Encouraged by the results, I launched two more process improvement initiatives aimed at applying Six Sigma principles to CFO processes that not only touch most of my employees but also have direct customer impact. As many of you know, Six Sigma is a measurement scale that compares the output of a process with customer requirements. That's straightforward, but it demands that you not only understand your processes but also know your products and the critical customer requirements. The objective is to isolate and eliminate the causes of process variation so that the customer sees consistently high quality.

  17. The application of contraction theory to an iterative formulation of electromagnetic scattering

    NASA Technical Reports Server (NTRS)

    Brand, J. C.; Kauffman, J. F.

    1985-01-01

    Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures, and a computational method for ensuring convergence is developed. A short history of the spectral (or k-space) formulation is presented, with an emphasis on its application to periodic surfaces. To ensure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. The convergence properties of previously presented iterative solutions to one-dimensional problems are examined using contraction theory, and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems, including an infinite grating of thin wires, with the solution data compared to previous works.

  18. Wind Plant Performance Prediction (WP3) Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craig, Anna

    The methods for analysis of operational wind plant data are highly variable across the wind industry, leading to high uncertainties in the validation and bias-correction of preconstruction energy estimation methods. Lack of credibility in the preconstruction energy estimates leads to significant impacts on project financing and therefore on the final levelized cost of energy for the plant. In this work, the variation in the evaluation of a wind plant's operational energy production as a result of variations in the processing methods applied to the operational data is examined. Preliminary results indicate that the selection of the filters applied to the data and the filter parameters can have significant impacts on the final computed assessment metrics.

  19. Effects of processing method and age of leaves on phytochemical profiles and bioactivity of coffee leaves.

    PubMed

    Chen, Xiu-Min; Ma, Zhili; Kitts, David D

    2018-05-30

    The use of coffee leaves as a novel beverage has recently received consumer interest, but little is known about how processing methods affect the quality of the final product. We applied tea (white, green, oolong and black tea) processing methods to coffee leaves and then investigated their effects on phytochemical composition and related antioxidant and anti-inflammatory properties. Japanese-style green tea processing of young leaves and black tea processing of mature (BTP-M) coffee leaves produced contrasting effects on phenolic content and the associated antioxidant activity and nitric oxide (NO) inhibitory activity in IFN-γ- and LPS-induced Raw 264.7 cells. BTP-M coffee leaves also produced significantly (P < .05) higher responses in NO, iNOS, COX-2, and a number of cytokines in non-induced Raw 264.7 cells. Our findings show that the age of coffee leaves and the type of processing method affect phytochemical profiles sufficiently to produce characteristic antioxidant and anti-inflammatory activities. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  20. The Fiber Grating Sensors Applied in the Deformation Measurement of Shipborne Antenna Basement

    NASA Astrophysics Data System (ADS)

    Liu, Yong; Chen, Jiahong; Zhao, Wenhua

    2016-02-01

    The optical fiber grating sensor is a novel passive fiber-optic device whose reflected optical spectrum is linearly related to strain. It is broadly applied in the structural monitoring industry. The shipborne antenna basement is the basic supporting structure for radar tracking movement. Bending deformation of the basement caused by changes in ship attitude influences the antenna tracking precision. According to the structure of the shipborne antenna basement, a distributed strain testing method based on fiber grating sensors is proposed to measure the bending deformation under bending force. A strain-angle model is built and the regularity of the strain distribution is obtained. The finite element method is used to analyze the deformation of the antenna basement. A measuring experiment on a contractible basement mould was carried out to verify the availability of the method. The result of the experiment proves that the model is effective for deformation measurement. It provides an optimized method for the distribution of the fiber grating sensors in the actual measuring process.
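
    A minimal sketch of the linear wavelength-to-strain relation underlying FBG measurement, delta_lambda / lambda_0 = (1 - p_e) * strain, with a typical photo-elastic coefficient assumed (p_e of about 0.22); the Bragg wavelength and the shift value are invented, and temperature effects are ignored.

      P_E = 0.22                 # assumed effective photo-elastic coefficient
      LAMBDA_0 = 1550.0e-9       # nominal Bragg wavelength (m), invented

      def strain_from_shift(delta_lambda):
          """Microstrain from a measured Bragg wavelength shift (m)."""
          return delta_lambda / (LAMBDA_0 * (1.0 - P_E)) * 1e6

      # A 0.12 nm shift corresponds to roughly 99 microstrain under these assumptions.
      print(f"{strain_from_shift(0.12e-9):.1f} microstrain")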

  1. Minimum depth of investigation for grounded-wire TEM due to self-transients

    NASA Astrophysics Data System (ADS)

    Zhou, Nannan; Xue, Guoqiang

    2018-05-01

    The grounded-wire transient electromagnetic method (TEM) has been widely used for near-surface metalliferous prospecting, oil and gas exploration, and hydrogeological surveying. However, it is commonly observed that the TEM signal is contaminated by the self-transient process that occurs at the early stage of data acquisition. Correspondingly, there exists a minimum depth of investigation, above which the observed signal is not applicable for reliable data processing and interpretation. Therefore, to achieve a more comprehensive understanding of the TEM method, it is necessary to study the self-transient process and to develop an approach for quantifying the minimum detection depth. In this paper, we first analyze the temporal behavior of the equivalent circuit of the TEM method and present a theoretical equation for estimating the self-induction voltage based on the inductance of the transmitting wire. Then, numerical modeling is applied to build the relationship between the minimum depth of investigation and various properties, including the resistivity of the earth, the offset, and the source length. This provides guidance for the design of survey parameters when the grounded-wire TEM method is applied to shallow detection. Finally, the approach is verified through application to a coal field in China.
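
    A rough back-of-the-envelope sketch of the self-induction idea (V = L * dI/dt during transmitter turn-off); the inductance, current, and ramp time below are invented placeholders, not values from the paper, which derives its estimate from the inductance of the actual transmitting wire.

      L_WIRE = 2.0e-3       # assumed inductance of the grounded wire (henries)
      I_TX = 10.0           # transmitter current (A), invented
      T_RAMP = 50e-6        # current turn-off ramp time (s), invented

      v_self = L_WIRE * I_TX / T_RAMP   # approximate self-induction voltage
      print(f"self-induction voltage ~ {v_self:.0f} V")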

  2. A method of increasing the depth of the plastically deformed layer in the roller burnishing process

    NASA Astrophysics Data System (ADS)

    Kowalik, Marek; Trzepiecinski, Tomasz

    2018-05-01

    This paper analyzes the determination of the depth of the plastically deformed layer in the process of roller burnishing a shaft using a newly developed method in which a braking moment is applied to the roller. It is possible to increase the depth of the plastically deformed layer by applying the braking moment to the roller during the burnishing process. The theoretical considerations presented are based on the Hertz-Bielayev and Huber-Mises theories and permit the calculation of the depth of plastic deformation of the top layer of the burnished shaft. The theoretical analysis has been verified experimentally and by numerical calculations based on the finite element method using the Msc.MARC program. Experimental tests were carried out on ring-shaped samples made of C45 carbon steel. The samples were burnished at different values of roller force and braking moment. A significant increase was found in the depth of the plastically deformed surface layer of roller-burnished shafts. Exploiting the strain hardening of the steel allows the technology presented here to increase the fatigue life of the shafts.

  3. Improving the corrosion resistance of plasma nitrocarburized AISI 316 stainless steel by postoxidation

    NASA Astrophysics Data System (ADS)

    Yenilmez, A.; Karakan, M.; Çelik, İ.

    2017-01-01

    Austenitic stainless steels are widely used in several industries, such as chemistry, food, health and space, due to their excellent corrosion resistance. However, in addition to corrosion resistance, good mechanical and tribological properties, such as wear resistance and friction behavior, are required in the production and engineering of machines, equipment and mechanical parts of this type. In this study, ferritic (FNC) and austenitic (ANC) nitrocarburizing were applied to AISI 316 stainless steel specimens, which have excellent corrosion resistance, in a plasma environment for a fixed time (4 h) under a constant gas mixture atmosphere. In order to restore the corrosion resistance degraded by nitrocarburizing, a plasma postoxidation process (45 min) was applied. After the duplex treatment, the specimens were examined by structural analysis with XRD and SEM, corrosion analysis with the polarization method, and surface hardness measurement with the microhardness method. The surface hardness of the AISI 316 stainless steel increased with the nitrocarburizing process, but the corrosion resistance was degraded by FNC (570 °C) and ANC (670 °C) nitrocarburizing. With the subsequent postoxidation treatment, the corrosion resistance improved and approached its value before the process.

  4. Rough case-based reasoning system for continuous casting

    NASA Astrophysics Data System (ADS)

    Su, Wenbin; Lei, Zhufeng

    2018-04-01

    Continuous casting occupies a pivotal position in the iron and steel industry. Rough set theory and case-based reasoning (CBR) were combined in the research and implementation of quality assurance for continuous casting billets, to improve the efficiency and accuracy of determining the processing parameters. The object-oriented method was applied to represent the continuous casting cases. The weights of the attributes were calculated by an algorithm based on rough set theory, and a retrieval mechanism for the continuous casting cases was designed. Some cases were adopted to test the retrieval mechanism; by analyzing the results, the influence of the retrieval attributes on determining the processing parameters was revealed. A comprehensive evaluation model was established using attribute recognition theory. According to the features of the defects, different methods were adopted to describe the quality condition of the continuous casting billet. By using the system, knowledge is not only inherited but also applied to adjust the processing parameters through case-based reasoning, so as to assure the quality of the continuous casting and improve the intelligence level of the process.
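
    A minimal sketch of the weighted case retrieval such a system relies on; the attributes, weights, and cases below are invented, and the paper derives its weights from rough-set analysis rather than fixing them by hand.

      import numpy as np

      # Invented case base: (casting speed m/min, superheat K, mould level %).
      cases = np.array([
          [1.2, 25.0, 50.0],
          [1.0, 30.0, 55.0],
          [1.4, 20.0, 48.0],
      ])
      case_params = ["params-A", "params-B", "params-C"]   # stored solutions

      # Attribute weights; in the paper these come from rough-set theory.
      weights = np.array([0.5, 0.3, 0.2])

      def retrieve(query):
          """Return the stored parameters of the most similar past case."""
          scale = cases.max(axis=0) - cases.min(axis=0)    # normalize attributes
          d = np.sqrt((weights * ((cases - query) / scale) ** 2).sum(axis=1))
          return case_params[int(np.argmin(d))]

      print(retrieve(np.array([1.25, 24.0, 51.0])))        # -> "params-A"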

  5. Processing Characteristics and Properties of the Cellular Products Made by Using Special Foaming Agents

    NASA Astrophysics Data System (ADS)

    Garbacz, Tomasz; Dulebova, Ludmila

    2012-12-01

    The paper describes the manufacturing of extruded products by the cellular extrusion method and presents specifications of the blowing agents used in the extrusion process as well as the process conditions. The cellular extrusion of thermoplastic materials is aimed at obtaining cellular shapes and coatings with reduced density, presenting no hollows on the surface of the extruded product and displaying minimal contraction, while maintaining properties similar to those of products extruded by the conventional method. In order to obtain a cellular structure, the properties of the extruded product are modified by applying a suitable plastic or introducing auxiliary agents.

  6. Product Quality Improvement Using FMEA for Electric Parking Brake (EPB)

    NASA Astrophysics Data System (ADS)

    Dumitrescu, C. D.; Gruber, G. C.; Tişcă, I. A.

    2016-08-01

    One of the most frequently used methods to improve product quality is FMEA (Failure Modes and Effects Analysis). Various FMEA variants are known in the literature, depending on the mode of application and on the targets; among them are Process Failure Modes and Effects Analysis and Failure Mode, Effects and Criticality Analysis (FMECA). Whatever option is chosen by the work team, the goal of the method is the same: to optimize product design activities in research, design processes, the implementation of manufacturing processes, and the exploitation of the product by its beneficiaries. According to a market survey of parts suppliers to vehicle manufacturers, the FMEA method is used by 75% of them. One purpose of applying the method is to detect any errors after research and product development are considered complete; another is to initiate appropriate measures to avoid mistakes. Achieving these two goals means that errors can be avoided already in the design phase of the product, thereby avoiding the emergence of additional costs in later stages of product manufacturing. The application of the FMEA method uses standardized forms, with whose help the initial assemblies of the product structure, in which all components are viewed as error-free, are established. This work applies the FMEA method to optimize the quality of the components of the Electric Parking Brake (EPB). This is a component attached to the wheel system which replaces the conventional mechanical parking brake of a vehicle while ensuring comfort, functionality and durability and saving space in the passenger compartment. The paper describes the levels at which the FMEA was applied, the working arrangements at the four distinct levels of analysis, and how the Risk Priority Number is determined; it also presents the analysis of the risk factors and the measures imposed to reduce or completely eliminate risk in this complex product.

  7. Hue-preserving and saturation-improved color histogram equalization algorithm.

    PubMed

    Song, Ki Sun; Kang, Hee; Kang, Moon Gi

    2016-06-01

    In this paper, an algorithm is proposed to improve contrast and saturation without color degradation. The local histogram equalization (HE) method offers better performance than the global HE method, but the local HE method sometimes produces undesirable results due to its block-based processing. The proposed contrast-enhancement (CE) algorithm reflects the characteristics of the global HE method within the local HE method to avoid these artifacts while enhancing both global and local contrast. There are two ways to apply a CE algorithm to color images: processing the luminance channel only, or processing each channel separately. However, these approaches incur excessive or reduced saturation and color degradation problems. The proposed algorithm solves these problems by using channel-adaptive equalization and the similarity of the ratios between the channels. Experimental results show that the proposed algorithm enhances contrast and saturation while preserving hue, and performs better than existing methods in terms of objective evaluation metrics.
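
    A minimal sketch of the classic hue-preserving trick this family of methods builds on: equalize the luminance channel, then scale each RGB channel by the same luminance ratio so inter-channel ratios (and hence hue) are unchanged. This illustrates the general principle only, not the paper's channel-adaptive algorithm.

      import numpy as np

      def hue_preserving_he(img):
          """img: float RGB in [0, 1], shape (H, W, 3)."""
          # Luminance (Rec. 601 weights) and its histogram equalization.
          y = img @ np.array([0.299, 0.587, 0.114])
          hist, bins = np.histogram(y, bins=256, range=(0.0, 1.0))
          cdf = hist.cumsum() / hist.sum()
          y_eq = np.interp(y, bins[:-1], cdf)
          # Scale every channel by the same ratio: channel ratios, hence hue, stay.
          ratio = y_eq / np.maximum(y, 1e-6)
          return np.clip(img * ratio[..., None], 0.0, 1.0)

      rng = np.random.default_rng(0)
      img = rng.random((64, 64, 3)) * 0.5      # dark toy image
      print(hue_preserving_he(img).mean(), img.mean())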

  8. Bridging the gap between finance and clinical operations with activity-based cost management.

    PubMed

    Storfjell, J L; Jessup, S

    1996-12-01

    Activity-based cost management (ABCM) is an exciting management tool that links financial information with operations. By determining the costs of specific activities and processes, nurse managers can determine the true costs of services more accurately than with traditional cost accounting methods, and can then target processes for improvement and monitor them for change. The authors describe the ABCM process applied to nursing management situations.

  9. Modern methods of surveyor observations in opencast mining under complex hydrogeological conditions.

    NASA Astrophysics Data System (ADS)

    Usoltseva, L. A.; Lushpei, V. P.; Mursin, VA

    2017-10-01

    The article considers the possibility of applying modern surveying methods to the safety of open-pit mining operations, in order to improve industrial safety in the Primorsky Territory, as well as their use in the educational process. Industrial safety in the management of surface mining depends largely on the assessment methods applied and on the methods used to evaluate the stability of pit walls and dump slopes under complex mining and hydrogeological conditions.

  10. Business Process Design Method Based on Business Event Model for Enterprise Information System Integration

    NASA Astrophysics Data System (ADS)

    Kobayashi, Takashi; Komoda, Norihisa

    Traditional business process design methods, of which the use case is the most typical, provide no useful framework for designing the activity sequence. As a result, design efficiency and quality vary widely with the designer's experience and skill. In this paper, to solve this problem, we propose a model of business events and their state transitions (a basic business event model) based on the language/action perspective, a result from the cognitive science domain. In business process design using this model, we decide event occurrence conditions so that the events synchronize with one another. We also propose a design pattern for deciding event occurrence conditions (a business event improvement strategy). Lastly, we apply the business process design method based on the business event model and the business event improvement strategy to a credit card issuing process and estimate its effect.

  11. Case-based medical informatics

    PubMed Central

    Pantazi, Stefan V; Arocha, José F; Moehr, Jochen R

    2004-01-01

    Background The "applied" nature distinguishes applied sciences from theoretical sciences. To emphasize this distinction, we begin with a general, meta-level overview of the scientific endeavor. We introduce the knowledge spectrum and four interconnected modalities of knowledge. In addition to the traditional differentiation between implicit and explicit knowledge we outline the concepts of general and individual knowledge. We connect general knowledge with the "frame problem," a fundamental issue of artificial intelligence, and individual knowledge with another important paradigm of artificial intelligence, case-based reasoning, a method of individual knowledge processing that aims at solving new problems based on the solutions to similar past problems. We outline the fundamental differences between Medical Informatics and theoretical sciences and propose that Medical Informatics research should advance individual knowledge processing (case-based reasoning) and that natural language processing research is an important step towards this goal that may have ethical implications for patient-centered health medicine. Discussion We focus on fundamental aspects of decision-making, which connect human expertise with individual knowledge processing. We continue with a knowledge spectrum perspective on biomedical knowledge and conclude that case-based reasoning is the paradigm that can advance towards personalized healthcare and that can enable the education of patients and providers. We center the discussion on formal methods of knowledge representation around the frame problem. We propose a context-dependent view on the notion of "meaning" and advocate the need for case-based reasoning research and natural language processing. In the context of memory based knowledge processing, pattern recognition, comparison and analogy-making, we conclude that while humans seem to naturally support the case-based reasoning paradigm (memory of past experiences of problem-solving and powerful case matching mechanisms), technical solutions are challenging. Finally, we discuss the major challenges for a technical solution: case record comprehensiveness, organization of information on similarity principles, development of pattern recognition and solving ethical issues. Summary Medical Informatics is an applied science that should be committed to advancing patient-centered medicine through individual knowledge processing. Case-based reasoning is the technical solution that enables a continuous individual knowledge processing and could be applied providing that challenges and ethical issues arising are addressed appropriately. PMID:15533257

  12. Quasi-periodic Pulse Amplitude Modulation in the Accreting Millisecond Pulsar IGR J00291+5934

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bult, Peter; Doesburgh, Marieke van; Klis, Michiel van der

    We introduce a new method for analyzing the aperiodic variability of coherent pulsations in accreting millisecond X-ray pulsars (AMXPs). Our method involves applying a complex frequency correction to the time-domain light curve, allowing the aperiodic modulation of the pulse amplitude to be robustly extracted in the frequency domain. We discuss the statistical properties of the resulting modulation spectrum and show how it can be correlated with the non-pulsed emission to determine whether the periodic and aperiodic variability are coupled processes. Using this method, we study the 598.88 Hz coherent pulsations of the AMXP IGR J00291+5934 as observed with the Rossi X-ray Timing Explorer and XMM-Newton. We demonstrate that our method easily confirms the known coupling between the pulsations and a strong 8 mHz quasi-periodic oscillation (QPO) in XMM-Newton observations. Applying our method to the RXTE observations, we further show, for the first time, that the much weaker 20 mHz QPO and its harmonic are also coupled with the pulsations. We discuss the implications of this coupling and indicate how it may be used to extract new information on the underlying accretion process.
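
    A minimal sketch of one way to realize a complex frequency correction of this kind: multiply the light curve by exp(-2*pi*i*nu*t) at the pulse frequency, low-pass by binning, and Fourier-analyze the resulting amplitude series to obtain a modulation spectrum. The sampling rate, bin size, and toy signal are invented; the authors' actual procedure may differ in detail.

      import numpy as np

      NU = 598.88        # pulse frequency (Hz)
      FS = 4096.0        # sampling rate of the toy light curve (Hz), invented
      T = 256.0          # duration (s), invented

      t = np.arange(0.0, T, 1.0 / FS)
      # Toy light curve: a pulsation whose amplitude is modulated at 8 mHz.
      amp = 1.0 + 0.5 * np.sin(2 * np.pi * 0.008 * t)
      rate = 100.0 + amp * np.cos(2 * np.pi * NU * t)

      # Complex frequency correction: demodulate at the pulse frequency.
      demod = rate * np.exp(-2j * np.pi * NU * t)

      # Low-pass by averaging into 0.25 s bins -> complex pulse-amplitude series.
      BIN = int(0.25 * FS)
      n_bins = len(demod) // BIN
      a = 2.0 * demod[: n_bins * BIN].reshape(n_bins, BIN).mean(axis=1)

      # Modulation spectrum of the pulse amplitude.
      amp_series = np.abs(a) - np.abs(a).mean()
      power = np.abs(np.fft.rfft(amp_series)) ** 2
      freqs = np.fft.rfftfreq(n_bins, d=0.25)
      print(f"peak modulation frequency: {freqs[1:][np.argmax(power[1:])]:.4f} Hz")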

  13. Interactive design optimization of magnetorheological-brake actuators using the Taguchi method

    NASA Astrophysics Data System (ADS)

    Erol, Ozan; Gurocak, Hakan

    2011-10-01

    This research explored an optimization method that automates the process of designing a magnetorheological (MR)-brake while still keeping the designer in the loop. MR-brakes apply resistive torque by increasing the viscosity of an MR fluid inside the brake. This electronically controllable brake can provide a very large torque-to-volume ratio, which is very desirable for an actuator. However, the design process is quite complex and time-consuming due to the many parameters involved. In this paper, we adapted the popular Taguchi method, widely used in manufacturing, to the problem of designing a complex MR-brake. Unlike other existing methods, this approach can automatically identify the dominant parameters of the design, which reduces the search space and the time it takes to find the best possible design. While automating the search for a solution, it also lets the designer see the dominant parameters and choose to investigate only their interactions with the design output. The new method was applied to the re-design of MR-brakes. It reduced the design time from a week or two down to a few minutes. Usability experiments also indicated significantly better brake designs by novice users.
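
    A minimal sketch of the Taguchi bookkeeping such a tool automates: run an orthogonal array of parameter settings, compute a larger-is-better signal-to-noise ratio for each run (here the braking torque), and average the S/N per factor level to rank factor dominance. The L4 array, the factor names, and the torque values are invented.

      import numpy as np

      # L4 orthogonal array: 3 two-level factors (coil turns, gap, fluid), 4 runs.
      L4 = np.array([[0, 0, 0],
                     [0, 1, 1],
                     [1, 0, 1],
                     [1, 1, 0]])
      torque = np.array([5.1, 4.3, 7.8, 8.2])   # invented response per run (N*m)

      # Larger-is-better S/N ratio per run: -10*log10(mean(1/y^2)).
      sn = -10.0 * np.log10(1.0 / torque**2)

      for f, name in enumerate(["coil turns", "gap", "fluid"]):
          lo = sn[L4[:, f] == 0].mean()
          hi = sn[L4[:, f] == 1].mean()
          print(f"{name}: effect = {abs(hi - lo):.2f} dB")   # bigger = more dominant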

  14. Quasi-Periodic Pulse Amplitude Modulation in the Accreting Millisecond Pulsar IGR J00291+5934

    NASA Technical Reports Server (NTRS)

    Bult, Peter; van Doesburgh, Marieke; van der Klis, Michiel

    2017-01-01

    We introduce a new method for analyzing the aperiodic variability of coherent pulsations in accreting millisecond X-ray pulsars (AMXPs). Our method involves applying a complex frequency correction to the time-domain light curve, allowing the aperiodic modulation of the pulse amplitude to be robustly extracted in the frequency domain. We discuss the statistical properties of the resulting modulation spectrum and show how it can be correlated with the non-pulsed emission to determine whether the periodic and aperiodic variability are coupled processes. Using this method, we study the 598.88 Hz coherent pulsations of the AMXP IGR J00291+5934 as observed with the Rossi X-ray Timing Explorer and XMM-Newton. We demonstrate that our method easily confirms the known coupling between the pulsations and a strong 8 mHz quasi-periodic oscillation (QPO) in XMM-Newton observations. Applying our method to the RXTE observations, we further show, for the first time, that the much weaker 20 mHz QPO and its harmonic are also coupled with the pulsations. We discuss the implications of this coupling and indicate how it may be used to extract new information on the underlying accretion process.

  15. Preferred color correction for digital LCD TVs

    NASA Astrophysics Data System (ADS)

    Kim, Kyoung Tae; Kim, Choon-Woo; Ahn, Ji-Young; Kang, Dong-Woo; Shin, Hyun-Ho

    2009-01-01

    Instead of colorimetric color reproduction, preferred color correction is applied to digital TVs to improve subjective image quality. The first step of preferred color correction is to survey the preferred color coordinates of memory colors, which can be achieved by off-line human visual tests. The next step is to extract pixels of memory colors representing skin, grass, and sky. For the detected pixels, colors are shifted towards the desired coordinates identified in advance. This correction process may result in undesirable contours on the boundaries between corrected and uncorrected areas. For digital TV applications, the extraction and correction must be applied in every frame of the moving images. This paper presents a preferred color correction method in LCH color space. Values of chroma and hue are corrected independently, and undesirable contours on the boundaries of correction are minimized. The proposed method changes the coordinates of memory-color pixels towards the target color coordinates; the amount of correction is determined from the averaged coordinates of the extracted pixels, so the method maintains the relative color differences within memory-color areas. Performance of the proposed method is evaluated using paired comparison. Results of experiments indicate that the proposed method can reproduce perceptually pleasing images to viewers.
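
    A minimal sketch of the per-frame correction step, assuming pixels already classified as one memory color and expressed in LCH coordinates; the weighting scheme shown (shift both channels by a fraction of the distance between the region mean and the target) is one plausible reading of the abstract, and hue wrap-around is ignored for brevity.

        import numpy as np

        def correct_lch(C, H, C_target, H_target, strength=0.5):
            """Shift chroma and hue of memory-color pixels towards a
            preferred target; C, H are arrays over the detected pixels."""
            # Correction is tied to the *average* of the detected pixels,
            # so relative differences inside the region are preserved.
            dC = strength * (C_target - C.mean())
            dH = strength * (H_target - H.mean())
            return C + dC, H + dH   # chroma and hue corrected independently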

  16. Development of the GPM Observatory Thermal Vacuum Test Model

    NASA Technical Reports Server (NTRS)

    Yang, Kan; Peabody, Hume

    2012-01-01

    A software-based thermal modeling process was documented for generating the thermal panel settings necessary to simulate worst-case on-orbit flight environments in an observatory-level thermal vacuum test setup. The method for creating such a thermal model involved four major steps: (1) determining the major thermal zones for test as indicated by the major dissipating components on the spacecraft, then mapping the major heat flows between these components; (2) finding the flight-equivalent sink temperatures for these test thermal zones; (3) determining the thermal test ground support equipment (GSE) design and initial thermal panel settings based on the equivalent sink temperatures; and (4) adjusting the panel settings in the test model to match heat flows and temperatures with the flight model. The observatory test thermal model developed from this process allows quick predictions of the performance of the thermal vacuum test design. In this work, the method described above was applied to the Global Precipitation Measurement (GPM) core observatory spacecraft, a joint project between NASA and the Japan Aerospace Exploration Agency (JAXA) which is currently being integrated at NASA Goddard Space Flight Center for launch in early 2014. Preliminary results show that the heat flows and temperatures of the test thermal model match fairly well with the flight thermal model, indicating that the test model can accurately simulate on-orbit conditions. However, further analysis is needed to determine the best test configuration possible to validate the GPM thermal design before the start of environmental testing later this year. Also, while this analysis method has been applied solely to GPM, it should be emphasized that the same process can be applied to any mission to develop an effective test setup and panel settings which accurately simulate on-orbit thermal environments.

  17. Terrestrial laser scanning for biomass assessment and tree reconstruction: improved processing efficiency

    NASA Astrophysics Data System (ADS)

    Alboabidallah, Ahmed; Martin, John; Lavender, Samantha; Abbott, Victor

    2017-09-01

    Terrestrial Laser Scanning (TLS) processing for biomass mapping involves large data volumes, and often includes relatively slow 3D object fitting steps that increase the processing time. This study aimed to test new features that can speed up the overall processing time. A new type of 3D voxel is used, where the horizontal layers are parallel to the Digital Terrain Model. This voxel type allows procedures to extract tree diameters using just one layer, but still gives direct tree-height estimations. Layer intersection is used to emphasize the trunks as upright standing objects, which are detected in the spatially segmented intersection of the breast-height voxels and then extended upwards and downwards. The diameters were calculated by fitting elliptical cylinders to the laser points in the detected trunk segments. Non-trunk segments, used in sub-tree structures, were found using the parent-child relationships between successive layers. The branches were reconstructed by skeletonizing each sub-tree branch, and the biomass was distributed statistically amongst the weighted skeletons. The procedure was applied to nine plots within the UK. The average correlation coefficients between reconstructed and directly measured tree diameters, heights and branches were R2 = 0.92, 0.97 and 0.59, compared to 0.91, 0.95, and 0.63 when cylindrical fitting was used. The average time to apply the method was reduced from 5 hrs 18 min per plot for the conventional methods to 2 hrs 24 min when the same hardware and software libraries were used with the 3D voxels. These results indicate that this 3D voxel method can produce results of similar accuracy much more quickly, which would improve efficiency if applied to projects with large-volume TLS datasets.
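
    The DTM-parallel voxel layers can be illustrated in a few lines (a sketch under assumed array layouts, not the authors' code): each point's layer index is computed relative to the ground elevation beneath it, so "breast height" corresponds to the same layer index everywhere, regardless of terrain slope.

        import numpy as np

        def dtm_relative_voxels(points, dtm, x0, y0, cell, dz):
            """Assign each TLS point (x, y, z) to a voxel whose horizontal
            layers run parallel to the Digital Terrain Model. `dtm` is a 2D
            array of ground elevations on a grid with origin (x0, y0) and
            spacing `cell`; `dz` is the layer thickness."""
            ix = ((points[:, 0] - x0) / cell).astype(int)
            iy = ((points[:, 1] - y0) / cell).astype(int)
            ground = dtm[iy, ix]                    # ground height under each point
            layer = np.floor((points[:, 2] - ground) / dz).astype(int)
            return ix, iy, layer

        # The breast-height layer (~1.3 m above ground) is then simply
        # layer == int(1.3 / dz) for every trunk, whatever the slope.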

  18. EMI / EMC Design for Class D Payloads (Resource Prospector / NIRVSS)

    NASA Technical Reports Server (NTRS)

    Forgione, Josh; Benton, Joshua Eric; Thompson, Sarah; Colaprete, Anthony

    2015-01-01

    EMI/EMC techniques are applied to a Class D instrument (NIRVSS) to achieve low-noise performance and reduce the risk of EMI/EMC test failures and/or issues during system integration and test. The basic techniques are not terribly expensive or complex, but they do require close coordination between electrical and mechanical staff early in the design process. Low-cost methods to test subsystems on the bench without renting an EMI chamber are discussed. Applied to the NIRVSS instrument, this approach achieved improvements of up to 59 dB in conducted emissions measurements between hardware revisions.

  19. Evaluating Change Processes: Assessing Extent of Implementation (Constructs, Methods and Implications)

    ERIC Educational Resources Information Center

    Hall, Gene E.

    2013-01-01

    Purpose: In far too many cases the initiatives to change schools by introducing new programs, processes and reforms have not resulted in attainment of the desired outcomes. A major reason for limited outcomes suggested in this paper is that there has been a failure to learn from and apply constructs and measures related to understanding,…

  20. Transfer of Expertise: An Eye Tracking and Think Aloud Study Using Dynamic Medical Visualizations

    ERIC Educational Resources Information Center

    Gegenfurtner, Andreas; Seppanen, Marko

    2013-01-01

    Expertise research has produced mixed results regarding the problem of transfer of expertise. Is expert performance context-bound or can the underlying processes be applied to more general situations? The present study tests whether expert performance and its underlying processes transfer to novel tasks within a domain. A mixed method study using…

  1. The Use of Eye Movements in the Study of Multimedia Learning

    ERIC Educational Resources Information Center

    Hyona, Jukka

    2010-01-01

    This commentary focuses on the use of the eye-tracking methodology to study cognitive processes during multimedia learning. First, some general remarks are made about how the method is applied to investigate visual information processing, followed by a reflection on the eye movement measures employed in the studies published in this special issue.…

  2. Application of binomial and multinomial probability statistics to the sampling design process of a global grain tracing and recall system

    USDA-ARS?s Scientific Manuscript database

    Small, coded, pill-sized tracers embedded in grain are proposed as a method for grain traceability. A sampling process for a grain traceability system was designed and investigated by applying probability statistics using a science-based sampling approach to collect an adequate number of tracers fo...

  3. Health-Related Intensity Profiles of Physical Education Classes at Different Phases of the Teaching/Learning Process

    ERIC Educational Resources Information Center

    Bronikowski, Michal; Bronikowska, Malgorzata; Kantanista, Adam; Ciekot, Monika; Laudanska-Krzeminska, Ida; Szwed, Szymon

    2009-01-01

    Study aim: To assess the intensities of three types of physical education (PE) classes corresponding to the phases of the teaching/learning process: Type 1--acquiring and developing skills, Type 2--selecting and applying skills, tactics and compositional principles and Type 3--evaluating and improving performance skills. Material and methods: A…

  4. Students' Self-Reflections on Their Personality Scores Applied to the Processes of Learning and Achievement

    ERIC Educational Resources Information Center

    Mcilroy, David; Todd, Valerie; Palmer-Conn, Sue; Poole, Karen

    2016-01-01

    Research on personality in the educational context has primarily focused on quantitative approaches, so this study used a mixed methods approach to capture the broader aspects of students' learning processes. Goals were to ensure that student responses were reliable and normal (quantitative data), and to examine qualitative reflections on…

  5. Meta-analysis using Dirichlet process.

    PubMed

    Muthukumarana, Saman; Tiwari, Ram C

    2016-02-01

    This article develops a Bayesian approach for meta-analysis using the Dirichlet process. The key aspect of the Dirichlet process in meta-analysis is the ability to assess evidence of statistical heterogeneity, or variation in the underlying effects across studies, while relaxing the distributional assumptions. We assume that the study effects are generated from a Dirichlet process. Under a Dirichlet process model, the study effect parameters have support on a discrete space and enable borrowing of information across studies while facilitating clustering among studies. We illustrate the proposed method by applying it to a dataset from the Program for International Student Assessment on 30 countries. Results from the data analysis, simulation studies, and the log pseudo-marginal likelihood model selection procedure indicate that the Dirichlet process model performs better than conventional alternative methods.
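
    A small sketch of the modeling assumption, via a truncated stick-breaking draw: study effects sampled from a single Dirichlet process realization repeat the same atoms, which is exactly the clustering and borrowing-of-information behavior described. All parameters below are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        def dp_stick_breaking(alpha, mu0, sigma0, n_atoms=100):
            """Truncated stick-breaking draw from a Dirichlet process with
            concentration alpha and a Normal(mu0, sigma0) base measure."""
            beta = rng.beta(1.0, alpha, size=n_atoms)
            w = beta * np.cumprod(np.concatenate(([1.0], 1.0 - beta[:-1])))
            atoms = rng.normal(mu0, sigma0, size=n_atoms)
            return w, atoms

        # Thirty "study effects" drawn from one DP realization:
        w, atoms = dp_stick_breaking(alpha=1.0, mu0=0.0, sigma0=1.0)
        effects = rng.choice(atoms, size=30, p=w / w.sum())
        print(np.unique(np.round(effects, 3)))  # repeated values = shared clusters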

  6. Monitoring of waste disposal in deep geological formations

    NASA Astrophysics Data System (ADS)

    German, V.; Mansurov, V.

    2003-04-01

    In this paper, a kinetic approach is advanced for describing the rock failure process and for microseismic monitoring of waste disposal. On the basis of a two-stage model of the failure process, the predictability of rock fracture is demonstrated. Requirements for the monitoring system, such as real-time data registration and processing and its precision range, are formulated. A method for delineating failure nuclei in rock masses is presented. The method, implemented in a software program for forecasting strong seismic events, is based on direct use of the fracture concentration criterion. It is applied to the database of microseismic events of the North Ural Bauxite Mine. The results of this application, including efficiency, stability, and the possibility of forecasting rockbursts, are discussed.

  7. Unsupervised Feature Learning With Winner-Takes-All Based STDP

    PubMed Central

    Ferré, Paul; Mamalet, Franck; Thorpe, Simon J.

    2018-01-01

    We present a novel strategy for unsupervised feature learning in image applications inspired by the Spike-Timing-Dependent-Plasticity (STDP) biological learning rule. We show equivalence between rank order coding Leaky-Integrate-and-Fire neurons and ReLU artificial neurons when applied to non-temporal data. We apply this to images using rank-order coding, which allows us to perform a full network simulation with a single feed-forward pass using GPU hardware. Next, we introduce a binary STDP learning rule compatible with training on batches of images. Two mechanisms to stabilize the training are also presented: a Winner-Takes-All (WTA) framework which selects the most relevant patches to learn from along the spatial dimensions, and a simple feature-wise normalization as a homeostatic process. This learning process allows us to train multi-layer architectures of convolutional sparse features. We apply our method to extract features from the MNIST, ETH80, CIFAR-10, and STL-10 datasets and show that these features are relevant for classification. We finally compare these results with several other state-of-the-art unsupervised learning methods. PMID:29674961

  8. The Tool for Designing Engineering Systems Using a New Optimization Method Based on a Stochastic Process

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio

    Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, these methods depend on initial conditions and risk falling into local solutions. In this paper, we propose a new optimization method based on the concept of the path integral method used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require experience-based tuning techniques. We applied the new optimization method to the design of a hang glider; in this problem, not only the hang glider design but also its flight trajectory were optimized. The numerical calculation results showed that the method has sufficient performance.
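
    A toy illustration of obtaining the solution as a stochastic average, in the spirit of path-integral-style weighted sampling (our reading of the idea, not the authors' algorithm); all constants are arbitrary.

        import numpy as np

        rng = np.random.default_rng(1)

        def stochastic_average_minimize(cost, x0, sigma=1.0, T=1.0,
                                        n_samples=500, n_iter=40):
            """Estimate the minimizer as an expected value over sampled
            candidates, weighted by exp(-cost/T), then anneal."""
            x = np.asarray(x0, dtype=float)
            for _ in range(n_iter):
                cand = x + sigma * rng.standard_normal((n_samples, x.size))
                w = np.exp(-np.apply_along_axis(cost, 1, cand) / T)
                x = (w[:, None] * cand).sum(0) / w.sum()   # stochastic average
                sigma *= 0.9
                T *= 0.9
            return x

        def cost(v):
            # Multimodal test function; no special initial guess needed.
            return (v**2).sum() + np.sin(5.0 * v).sum()

        print(stochastic_average_minimize(cost, x0=[3.0, -2.0]))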

  9. Proceedings ICASS 2017

    NASA Astrophysics Data System (ADS)

    Fu, Qiang; Schaaf, Peter

    2018-07-01

    This special issue of the high-impact international peer-reviewed journal Applied Surface Science represents the proceedings of the 2nd International Conference on Applied Surface Science (ICASS), held 12-16 June 2017 in Dalian, China. The conference provided a forum for researchers in all areas of applied surface science to present their work. The main topics of the conference are in line with the most popular areas of research reported in Applied Surface Science. Thus, this issue includes current research on the role and use of surfaces in chemical and physical processes, related to catalysis, electrochemistry, surface engineering and functionalization, biointerfaces, semiconductors, 2D-layered materials, surface nanotechnology, energy, new/functional materials and nanotechnology. The various techniques and characterization methods are also discussed. Hence, scientific research on the atomic and molecular level of material properties investigated with specific surface analytical techniques and/or computational methods is essential for any further progress in these fields.

  10. Comparison of groundwater recharge estimation techniques in an alluvial aquifer system with an intermittent/ephemeral stream (Queensland, Australia)

    NASA Astrophysics Data System (ADS)

    King, Adam C.; Raiber, Matthias; Cox, Malcolm E.; Cendón, Dioni I.

    2017-09-01

    This study demonstrates the importance of the conceptual hydrogeological model for the estimation of groundwater recharge rates in an alluvial system interconnected with an ephemeral or intermittent stream in south-east Queensland, Australia. The losing/gaining condition of these streams is typically subject to temporal and spatial variability, and knowledge of these hydrological processes is critical for the interpretation of recharge estimates. Recharge rate estimates of 76-182 mm/year were determined using the water budget method. The water budget method provides useful broad approximations of recharge and discharge fluxes. The chloride mass balance (CMB) method and the tritium method were used on 17 and 13 sites respectively, yielding recharge rates of 1-43 mm/year (CMB) and 4-553 mm/year (tritium method). However, the conceptual hydrogeological model confirms that the results from the CMB method at some sites are not applicable in this setting because of overland flow and channel leakage. The tritium method was appropriate here and could be applied to other alluvial systems, provided that channel leakage and diffuse infiltration of rainfall can be accurately estimated. The water-table fluctuation (WTF) method was also applied to data from 16 bores; recharge estimates ranged from 0 to 721 mm/year. The WTF method was not suitable where bank storage processes occurred.
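
    The chloride mass balance step, for instance, reduces to a one-line computation in its simplest steady-state form; the numbers below are illustrative, not the study's data. The formula assumes all chloride input arrives by diffuse rainfall infiltration, which is exactly the assumption that overland flow and channel leakage violate at some sites.

        # CMB: recharge R = P * Cl_p / Cl_gw, with P the mean annual
        # precipitation, Cl_p the chloride concentration in rainfall and
        # Cl_gw the chloride concentration in groundwater.
        P = 700.0       # mm/year (illustrative)
        Cl_p = 5.0      # mg/L in precipitation (illustrative)
        Cl_gw = 120.0   # mg/L in groundwater (illustrative)
        R = P * Cl_p / Cl_gw
        print(f"CMB recharge estimate: {R:.1f} mm/year")   # ~29 mm/year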

  11. A detailed protocol for chromatin immunoprecipitation in the yeast Saccharomyces cerevisiae.

    PubMed

    Grably, Melanie; Engelberg, David

    2010-01-01

    Critical cellular processes such as DNA replication, DNA damage repair, and transcription are mediated and regulated by DNA-binding proteins. Many efforts have therefore been invested in developing methods that monitor the dynamics of protein-DNA association. As older techniques such as DNA footprinting and electrophoretic mobility shift assays (EMSA) could be applied mostly in vitro, the development of the chromatin immunoprecipitation (ChIP) method, which allows quantitative measurement of protein-bound DNA most accurately in vivo, revolutionized our ability to understand the mechanisms underlying the aforementioned processes. Furthermore, this powerful tool can be applied at the genomic scale, providing a global picture of the protein-DNA complexes across the entire genome. The procedure is conceptually simple; it involves rapid crosslinking of proteins to DNA by the addition of formaldehyde to the culture, shearing the DNA, and immunoprecipitating the protein of interest while covalently bound to its DNA targets. Following decrosslinking, DNA that was coimmunoprecipitated can be amplified by PCR or can serve as a probe on a genomic microarray to identify all DNA fragments that were bound to the protein. Although simple in principle, the method is not trivial to implement, and the results might be misleading if proper controls are not included in the experiment. In this chapter, we therefore provide a highly detailed protocol for the ChIP assay as applied successfully in our laboratory. We pay special attention to every small detail, so that any investigator can readily and successfully apply this important and powerful technology.

  12. Method for producing solid or hollow spherical particles of chosen chemical composition and of uniform size

    DOEpatents

    Hendricks, Charles D.

    1988-01-01

    A method is provided for producing commercially large quantities of high melting temperature solid or hollow spherical particles of a predetermined chemical composition and having a uniform and controlled size distribution. An end (18, 50, 90) of a solid or hollow rod (20, 48, 88) of the material is rendered molten by a laser beam (14, 44, 82). Because of this, there is no possibility of the molten rod material becoming contaminated with extraneous material. In various aspects of the invention, an electric field is applied to the molten rod end (18, 90), and/or the molten rod end (50, 90) is vibrated. In a further aspect of the invention, a high-frequency component is added to the electric field applied to the molten end of the rod (90). By controlling the internal pressure of the rod, the rate at which the rod is introduced into the laser beam, the environment of the process, the vibration amplitude and frequency of the molten rod end, the electric field intensity applied to the molten rod end, and the frequency and intensity of the component added to the electric field, the uniformity and size distribution of the solid or hollow spherical particles (122) produced by the inventive method are controlled. The polarity of the electric field applied to the molten rod end can be chosen to eliminate backstreaming electrons, which tend to produce run-away heating in the rod, from the process.

  13. Fault detection and diagnosis using neural network approaches

    NASA Technical Reports Server (NTRS)

    Kramer, Mark A.

    1992-01-01

    Neural networks can be used to detect and identify abnormalities in real-time process data. Two basic approaches can be used, the first based on training networks using data representing both normal and abnormal modes of process behavior, and the second based on statistical characterization of the normal mode only. Given data representative of process faults, radial basis function networks can effectively identify failures. This approach is often limited by the lack of fault data, but can be facilitated by process simulation. The second approach employs elliptical and radial basis function neural networks and other models to learn the statistical distributions of process observables under normal conditions. Analytical models of failure modes can then be applied in combination with the neural network models to identify faults. Special methods can be applied to compensate for sensor failures, to produce real-time estimation of missing or failed sensors based on the correlations codified in the neural network.
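
    A rough sketch of the second approach (characterizing the normal mode only), with assumed data and thresholds: radial basis functions centered on normal operating data score incoming points, and low activation flags a fault. The width and 1% threshold are illustrative choices.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(2)
        normal = rng.normal(0.0, 1.0, size=(500, 3))     # normal-mode sensor data

        centers = KMeans(n_clusters=10, n_init=10,
                         random_state=0).fit(normal).cluster_centers_
        width = normal.std()                             # one shared RBF width

        def rbf_score(x):
            """Total RBF activation of the normal-mode model at point x."""
            d = np.linalg.norm(centers - x, axis=1)
            return np.exp(-(d / width) ** 2).sum()

        threshold = np.quantile([rbf_score(x) for x in normal], 0.01)
        print(rbf_score(np.array([4.0, 4.0, 4.0])) < threshold)  # True -> fault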

  14. Polarization-based and specular-reflection-based noncontact latent fingerprint imaging and lifting

    NASA Astrophysics Data System (ADS)

    Lin, Shih-Schön; Yemelyanov, Konstantin M.; Pugh, Edward N., Jr.; Engheta, Nader

    2006-09-01

    In forensic science the finger marks left unintentionally by people at a crime scene are referred to as latent fingerprints. Most existing techniques to detect and lift latent fingerprints require application of a certain material directly onto the exhibit. The chemical and physical processing applied to the fingerprint potentially degrades or prevents further forensic testing on the same evidence sample. Many existing methods also have deleterious side effects. We introduce a method to detect and extract latent fingerprint images without applying any powder or chemicals on the object. Our method is based on the optical phenomena of polarization and specular reflection together with the physiology of fingerprint formation. The recovered image quality is comparable to existing methods. In some cases, such as the sticky side of tape, our method shows unique advantages.
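
    The polarization cue can be made concrete with the standard degree-of-linear-polarization computation from intensity images taken through a polarizer at three angles; this illustrates the physical principle, not the authors' exact pipeline.

        import numpy as np

        def degree_of_polarization(I0, I45, I90):
            """Per-pixel degree of linear polarization from intensities
            measured through a polarizer at 0, 45 and 90 degrees."""
            S0 = I0 + I90                  # total intensity
            S1 = I0 - I90                  # Stokes parameter S1
            S2 = 2.0 * I45 - S0            # Stokes parameter S2
            return np.sqrt(S1**2 + S2**2) / np.maximum(S0, 1e-9)

        # Specular reflection from the smooth regions between fingerprint
        # residue is partially polarized, so high-DoP pixels can highlight
        # the latent print without applying powder or chemicals.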

  15. Method and system for determining precursors of health abnormalities from processing medical records

    DOEpatents

    None, None

    2013-06-25

    Medical reports are converted to document vectors in computing apparatus and sampled by applying a maximum variation sampling function including a fitness function to the document vectors to reduce a number of medical records being processed and to increase the diversity of the medical records being processed. Linguistic phrases are extracted from the medical records and converted to s-grams. A Haar wavelet function is applied to the s-grams over the preselected time interval; and the coefficient results of the Haar wavelet function are examined for patterns representing the likelihood of health abnormalities. This confirms certain s-grams as precursors of the health abnormality and a parameter can be calculated in relation to the occurrence of such a health abnormality.

  16. System configured for applying a modifying agent to a non-equidimensional substrate

    DOEpatents

    Janikowski, Stuart K.; Argyle, Mark D.; Fox, Robert V.; Propp, W. Alan; Toth, William J.; Ginosar, Daniel M.; Allen, Charles A.; Miller, David L.

    2007-07-10

    The present invention is related to systems and methods for modifying various non-equidimensional substrates with modifying agents. The system comprises a processing chamber configured for passing the non-equidimensional substrate therethrough, wherein the processing chamber is further configured to accept a treatment mixture into the chamber during movement of the non-equidimensional substrate through the processing chamber. The treatment mixture can comprise the modifying agent in a carrier medium, wherein the carrier medium is selected from the group consisting of a supercritical fluid, a near-critical fluid, a superheated fluid, a superheated liquid, and a liquefied gas. Thus, the modifying agent can be applied to the non-equidimensional substrate upon contact between the treatment mixture and the non-equidimensional substrate.

  17. Applied photo interpretation for airbrush cartography

    NASA Technical Reports Server (NTRS)

    Inge, J. L.; Bridges, P. M.

    1976-01-01

    New techniques of cartographic portrayal have been developed for the compilation of maps of lunar and planetary surfaces. Conventional photo interpretation methods utilizing size, shape, shadow, tone, pattern, and texture are applied to computer processed satellite television images. The variety of the image data allows the illustrator to interpret image details by inter-comparison and intra-comparison of photographs. Comparative judgements are affected by illumination, resolution, variations in surface coloration, and transmission or processing artifacts. The validity of the interpretation process is tested by making a representational drawing by an airbrush portrayal technique. Production controls insure the consistency of a map series. Photo interpretive cartographic portrayal skills are used to prepare two kinds of map series and are adaptable to map products of different kinds and purposes.

  18. System configured for applying a modifying agent to a non-equidimensional substrate

    DOEpatents

    Janikowski, Stuart K.; Toth, William J.; Ginosar, Daniel M.; Allen, Charles A.; Argyle, Mark D.; Fox, Robert V.; Propp, W. Alan; Miller, David L.

    2003-09-23

    The present invention is related to systems and methods for modifying various non-equidimensional substrates with modifying agents. The system comprises a processing chamber configured for passing the non-equidimensional substrate therethrough, wherein the processing chamber is further configured to accept a treatment mixture into the chamber during movement of the non-equidimensional substrate through the processing chamber. The treatment mixture can comprise the modifying agent in a carrier medium, wherein the carrier medium is selected from the group consisting of a supercritical fluid, a near-critical fluid, a superheated fluid, a superheated liquid, and a liquefied gas. Thus, the modifying agent can be applied to the non-equidimensional substrate upon contact between the treatment mixture and the non-equidimensional substrate.

  19. Eddy current characterization of magnetic treatment of nickel 200

    NASA Technical Reports Server (NTRS)

    Chern, E. J.

    1993-01-01

    Eddy current methods have been applied to characterize the effect of magnetic treatments on component service-life extension. Coil impedance measurements were acquired and analyzed on nickel 200 specimens that had been subjected to various mechanical and magnetic engineering processes: annealing, applied strain, magnetic field, shot peening, and magnetic field after peening. Experimental results demonstrated a functional relationship between coil impedance (resistance and reactance) and the engineering processes the specimens had undergone. It was shown that magnetic treatment does induce changes in the electromagnetic properties of nickel 200, which then exhibit evidence of stress relief. However, further fundamental studies are necessary for a thorough understanding of the exact mechanism of the magnetic field processing effect on machine-tool service life.

  20. Eddy current characterization of magnetic treatment of materials

    NASA Technical Reports Server (NTRS)

    Chern, E. James

    1992-01-01

    Eddy current impedance measuring methods have been applied to study the effect of magnetic treatment of materials on service-life extension. Eddy current impedance measurements were performed on Nickel 200 specimens that had been subjected to various mechanical and magnetic engineering processes: annealing, applied strain, magnetic field, shot peening, and magnetic field after peening. Experimental results demonstrated a functional relationship between coil impedance (resistance and reactance) and the engineering processes the specimens had undergone. It was shown that magnetic treatment does induce changes in a material's electromagnetic properties and does exhibit evidence of stress relief. However, further fundamental studies are necessary for a thorough understanding of the exact mechanism of the magnetic-field processing effect on machine tool service life.

  1. Regression Analysis and Calibration Recommendations for the Characterization of Balance Temperature Effects

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Volden, T.

    2018-01-01

    Analysis and use of temperature-dependent wind tunnel strain-gage balance calibration data are discussed in the paper. First, three different methods are presented and compared that may be used to process temperature-dependent strain-gage balance data. The first method uses an extended set of independent variables in order to process the data and predict balance loads. The second method applies an extended load iteration equation during the analysis of balance calibration data. The third method uses temperature-dependent sensitivities for the data analysis. Physical interpretations of the most important temperature-dependent regression model terms are provided that relate temperature compensation imperfections and the temperature-dependent nature of the gage factor to sets of regression model terms. Finally, balance calibration recommendations are listed so that temperature-dependent calibration data can be obtained and successfully processed using the reviewed analysis methods.
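
    A toy version of the first method (an extended set of independent variables), under assumed names: R is the gage output, F the applied load, T the temperature, and the F*T column captures a temperature-dependent sensitivity. The synthetic data are illustrative only.

        import numpy as np

        rng = np.random.default_rng(3)
        F = rng.uniform(-1, 1, 200)          # applied load (normalized)
        T = rng.uniform(15, 35, 200)         # temperature, deg C
        # Fake gage response with a temperature-dependent sensitivity term.
        R = 0.8 * F + 0.002 * T + 0.01 * F * T + 0.01 * rng.standard_normal(200)

        # Extended regression model: intercept, load, temperature, load*temp.
        X = np.column_stack([np.ones_like(F), F, T, F * T])
        coef, *_ = np.linalg.lstsq(X, R, rcond=None)
        print(coef)   # recovers ~[0, 0.8, 0.002, 0.01]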

  2. Numerical techniques for high-throughput reflectance interference biosensing

    NASA Astrophysics Data System (ADS)

    Sevenler, Derin; Ünlü, M. Selim

    2016-06-01

    We have developed a robust and rapid computational method for processing the raw spectral data collected from thin film optical interference biosensors. We have applied this method to Interference Reflectance Imaging Sensor (IRIS) measurements and observed a 10,000-fold improvement in processing time, unlocking a variety of clinical and scientific applications. Interference biosensors have advantages over similar technologies in certain applications, for example highly multiplexed measurements of molecular kinetics. However, processing raw IRIS data into useful measurements has been prohibitively time consuming for high-throughput studies. Here we describe the implementation of a lookup table (LUT) technique that provides accurate results in far less time than naive methods. We also discuss an additional benefit: the LUT method can be used with a wider range of interference layer thicknesses and experimental configurations that are incompatible with methods requiring fits of the spectral response.
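
    A sketch of the LUT idea under simplifying assumptions (a two-beam interference model with a single unknown film thickness; the real IRIS response is more involved): the spectra are tabulated once up front, and each pixel then needs only a nearest-neighbor table scan instead of a nonlinear fit.

        import numpy as np

        lam = np.linspace(450e-9, 650e-9, 64)       # probed wavelengths (m)
        d_grid = np.linspace(0.0, 500e-9, 5001)     # candidate thicknesses (m)
        n_film = 1.45                               # assumed film index

        # Precompute the table once: rows = thicknesses, cols = wavelengths.
        lut = 0.5 + 0.5 * np.cos(4 * np.pi * n_film * d_grid[:, None] / lam[None, :])

        def thickness(spectrum):
            """Nearest-neighbor lookup: one table scan per pixel."""
            err = ((lut - spectrum[None, :]) ** 2).sum(axis=1)
            return d_grid[np.argmin(err)]

        test = 0.5 + 0.5 * np.cos(4 * np.pi * n_film * 123e-9 / lam)
        print(thickness(test))                      # ~1.23e-07 m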

  3. Gaussian processes: a method for automatic QSAR modeling of ADME properties.

    PubMed

    Obrezanova, Olga; Csanyi, Gabor; Gola, Joelle M R; Segall, Matthew D

    2007-01-01

    In this article, we discuss the application of the Gaussian Process method for the prediction of absorption, distribution, metabolism, and excretion (ADME) properties. Based on a Bayesian probabilistic approach, the method is widely used in the field of machine learning but has rarely been applied in quantitative structure-activity relationship and ADME modeling. The method is suitable for modeling nonlinear relationships, does not require subjective determination of the model parameters, works for a large number of descriptors, and is inherently resistant to overtraining. The performance of Gaussian Processes compares well with, and often exceeds, that of artificial neural networks. Due to these features, the Gaussian Processes technique is eminently suitable for automatic model generation, one of the demands of modern drug discovery. Here, we describe the basic concept of the method in the context of regression problems and illustrate its application to the modeling of several ADME properties: blood-brain barrier, hERG inhibition, and aqueous solubility at pH 7.4. We also compare Gaussian Processes with other modeling techniques.
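
    A minimal scikit-learn stand-in showing the flavor of the approach (not the authors' software): kernel hyperparameters are set by maximizing the marginal likelihood, so no subjective parameter tuning is needed, and every prediction comes with an uncertainty. Descriptors and targets are synthetic.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(4)
        X = rng.uniform(-3, 3, size=(60, 5))                # 5 molecular descriptors
        y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60) # e.g. log solubility

        gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                      normalize_y=True).fit(X, y)
        mean, std = gp.predict(X[:5], return_std=True)      # prediction + uncertainty
        print(mean, std)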

  4. Filaments from the galaxy distribution and from the velocity field in the local universe

    NASA Astrophysics Data System (ADS)

    Libeskind, Noam I.; Tempel, Elmo; Hoffman, Yehuda; Tully, R. Brent; Courtois, Hélène

    2015-10-01

    The cosmic web that characterizes the large-scale structure of the Universe can be quantified by a variety of methods. For example, large redshift surveys can be used in combination with point process algorithms to extract long curvilinear filaments in the galaxy distribution. Alternatively, given a full 3D reconstruction of the velocity field, kinematic techniques can be used to decompose the web into voids, sheets, filaments and knots. In this Letter, we look at how two such algorithms - the Bisous model and the velocity shear web - compare with each other in the local Universe (within 100 Mpc), finding good agreement. This is both remarkable and comforting, given that the two methods are radically different in ideology and applied to completely independent and different data sets. Unsurprisingly, the methods are in better agreement when applied to unbiased and complete data sets, like cosmological simulations, than when applied to observational samples. We conclude that more observational data is needed to improve on these methods, but that both methods are most likely properly tracing the underlying distribution of matter in the Universe.

  5. The crowding factor method applied to parafoveal vision

    PubMed Central

    Ghahghaei, Saeideh; Walker, Laura

    2016-01-01

    Crowding increases with eccentricity and is most readily observed in the periphery. During natural, active vision, however, central vision plays an important role. Measures of critical distance to estimate crowding are difficult in central vision, as these distances are small. Any overlap of flankers with the target may create an overlay masking confound. The crowding factor method avoids this issue by simultaneously modulating target size and flanker distance and using a ratio to compare crowded to uncrowded conditions. This method was developed and applied in the periphery (Petrov & Meleshkevich, 2011b). In this work, we apply the method to characterize crowding in parafoveal vision (<3.5 visual degrees) with spatial uncertainty. We find that eccentricity and hemifield have less impact on crowding than in the periphery, yet radial/tangential asymmetries are clearly preserved. There are considerable idiosyncratic differences observed between participants. The crowding factor method provides a powerful tool for examining crowding in central and peripheral vision, which will be useful in future studies that seek to understand visual processing under natural, active viewing conditions. PMID:27690170

  6. Research on intelligent machine self-perception method based on LSTM

    NASA Astrophysics Data System (ADS)

    Wang, Qiang; Cheng, Tao

    2018-05-01

    In this paper, we exploit the advantages of LSTM in feature extraction and in processing high-dimensional, complex nonlinear data, and apply it to the autonomous perception of intelligent machines. Compared with a traditional multi-layer neural network, this model has memory and can handle time-series information of any length. Since the multi-physical-domain signals of processing machines follow a temporal order, with contextual relationships between successive states, this deep learning method offers strong versatility and adaptability for realizing the self-perception of intelligent processing machines. The experimental results show that the proposed method can markedly improve sensing accuracy under various working conditions of the intelligent machine, and that the algorithm can well support the intelligent processing machine in realizing self-perception.
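
    A minimal sketch of the kind of model described, using Keras as a stand-in (the paper's exact architecture and signals are not given): an LSTM maps a multi-channel machine-signal sequence to a working-condition class. Shapes, channel count, and class count are assumed.

        import numpy as np
        import tensorflow as tf

        x = np.random.randn(256, 100, 6).astype("float32")  # 100 steps, 6 signals
        y = np.random.randint(0, 4, size=256)               # 4 working conditions

        model = tf.keras.Sequential([
            tf.keras.layers.LSTM(64, input_shape=(None, 6)),  # any sequence length
            tf.keras.layers.Dense(4, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(x, y, epochs=2, batch_size=32, verbose=0)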

  7. Mutual information estimation reveals global associations between stimuli and biological processes

    PubMed Central

    Suzuki, Taiji; Sugiyama, Masashi; Kanamori, Takafumi; Sese, Jun

    2009-01-01

    Background Although microarray gene expression analysis has become popular, it remains difficult to interpret the biological changes caused by stimuli or variation of conditions. Clustering of genes and associating each group with biological functions are often-used methods. However, such methods only detect partial changes within cell processes. Herein, we propose a method for discovering global changes within a cell by associating observed conditions of gene expression with gene functions. Results To elucidate the association, we introduce a novel feature selection method called Least-Squares Mutual Information (LSMI), which computes mutual information without density estimation and can therefore detect nonlinear associations within a cell. We demonstrate the effectiveness of LSMI through comparison with existing methods. The results of the application to yeast microarray datasets reveal that non-natural stimuli affect various biological processes, whereas others show no significant relation to specific cell processes. Furthermore, we discover that biological processes can be categorized into four types according to the responses to various stimuli: DNA/RNA metabolism, gene expression, protein metabolism, and protein localization. Conclusion We proposed a novel feature selection method called LSMI, and applied LSMI to mining the association between conditions of yeast and biological processes through microarray datasets. In fact, LSMI allows us to elucidate the global organization of cellular process control. PMID:19208155

  8. An efficient interior-point algorithm with new non-monotone line search filter method for nonlinear constrained programming

    NASA Astrophysics Data System (ADS)

    Wang, Liwei; Liu, Xinggao; Zhang, Zeyin

    2017-02-01

    An efficient primal-dual interior-point algorithm using a new non-monotone line search filter method is presented for nonlinear constrained programming, which is widely applied in engineering optimization. The new non-monotone line search technique is introduced to lead to relaxed step acceptance conditions and improved convergence performance. It can also avoid the choice of the upper bound on the memory, which brings obvious disadvantages to traditional techniques. Under mild assumptions, the global convergence of the new non-monotone line search filter method is analysed, and fast local convergence is ensured by second order corrections. The proposed algorithm is applied to the classical alkylation process optimization problem and the results illustrate its effectiveness. Some comprehensive comparisons to existing methods are also presented.

  9. Determination and discrimination of biodiesel fuels by gas chromatographic and chemometric methods

    NASA Astrophysics Data System (ADS)

    Milina, R.; Mustafa, Z.; Bojilov, D.; Dagnon, S.; Moskovkina, M.

    2016-03-01

    A pattern recognition method (PRM) was applied to gas chromatographic (GC) data on the fatty acid methyl ester (FAME) composition of commercial and laboratory-synthesized biodiesel fuels from vegetable oils, including sunflower, rapeseed, corn and palm oils. Two GC quantitative methods for calculating individual FAMEs were compared: area % and internal standard. Both methods were applied to the analysis of two certified reference materials. The statistical processing of the obtained results demonstrates the accuracy and precision of the two methods and allows them to be compared. Either method can be used for further chemometric investigations of biodiesel fuels by their FAME profiles. PRM results for the FAME profiles of samples from different vegetable oils show successful recognition of biodiesels according to feedstock. The information obtained can be used for selecting feedstock to produce biodiesels with certain properties, for assessing their interchangeability, and for fuel spillage and remedial actions in the environment.

  10. Matrix effects in pesticide multi-residue analysis by liquid chromatography-mass spectrometry.

    PubMed

    Kruve, Anneli; Künnapas, Allan; Herodes, Koit; Leito, Ivo

    2008-04-11

    Three sample preparation methods: Luke method (AOAC 985.22), QuEChERS (quick, easy, cheap, effective, rugged and safe) and matrix solid-phase dispersion (MSPD) were applied to different fruits and vegetables for analysis of 14 pesticide residues by high-performance liquid chromatography with electrospray ionization-mass spectrometry (HPLC/ESI/MS). Matrix effect, recovery and process efficiency of the sample preparation methods applied to different fruits and vegetables were compared. The Luke method was found to produce least matrix effect. On an average the best recoveries were obtained with the QuEChERS method. MSPD gave unsatisfactory recoveries for some basic pesticide residues. Comparison of matrix effects for different apple varieties showed high variability for some residues. It was demonstrated that the amount of co-extracting compounds that cause ionization suppression of aldicarb depends on the apple variety as well as on the sample preparation method employed.
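
    The comparison rests on three standard quantities; the abstract does not spell out its formulas, so the sketch below uses the definitions conventional in LC-MS work (neat standard A, post-extraction spike B, pre-extraction spike C), with illustrative peak areas.

        # Standard definitions for matrix effect, recovery, and process
        # efficiency; the peak areas are made-up example numbers.
        A = 1000.0   # analyte in neat solvent
        B = 620.0    # analyte spiked into blank extract after extraction
        C = 540.0    # analyte spiked into the sample before extraction

        matrix_effect      = 100.0 * B / A   # <100% = ionization suppression
        recovery           = 100.0 * C / B   # extraction efficiency
        process_efficiency = 100.0 * C / A   # overall, = ME x RE / 100
        print(matrix_effect, recovery, process_efficiency)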

  11. Measurement of gas diffusion coefficient in liquid-saturated porous media using magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Song, Yongchen; Hao, Min; Zhao, Yuechao; Zhang, Liang

    2014-12-01

    In this study, the dual-chamber pressure decay method and magnetic resonance imaging (MRI) were used to dynamically visualize the gas diffusion process in liquid-saturated porous media, and the concentration-distance relationships for gas diffusing into liquid-saturated porous media at different times were obtained by quantitative analysis of the MR images. A non-iterative finite volume method was successfully applied to calculate the local gas diffusion coefficient in liquid-saturated porous media. The results agreed very well with the conventional pressure decay method, demonstrating that the method is feasible for determining the local diffusion coefficient of gas in liquid-saturated porous media at different times during the diffusion process.

  12. Improvement for enhancing effectiveness of universal power system (UPS) continuous testing process

    NASA Astrophysics Data System (ADS)

    Sriratana, Lerdlekha

    2018-01-01

    This experiment aims to enhance the effectiveness of the Universal Power System (UPS) continuous testing process of the Electrical and Electronic Institute by applying work scheduling and time study methods. Initially, the standard time of the testing process had not been established, which resulted in inaccurate testing targets and observable time waste. To monitor and reduce wasted time and improve the efficiency of the testing process, a Yamazumi chart and job scheduling theory (the North West Corner Rule) were applied to develop a new work process. After the improvements, the overall efficiency of the process increased from 52.8% to 65.6%, a gain of 12.7 percentage points. Moreover, wasted time was reduced from 828.3 minutes to 653.6 minutes (21%), while the number of units tested per batch increased from 3 to 4. Therefore, the number of units tested per month would increase from 12 to 20, which would also contribute to a 72% increase in the net income of the UPS testing process.
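
    For reference, the North West Corner Rule named above reduces to a few lines: allocate from the top-left cell, exhausting a row's supply or a column's demand at each step. The supplies and demands below are illustrative, not the institute's data.

        import numpy as np

        def north_west_corner(supply, demand):
            """Initial allocation for a transportation/scheduling problem."""
            supply, demand = list(supply), list(demand)
            alloc = np.zeros((len(supply), len(demand)))
            i = j = 0
            while i < len(supply) and j < len(demand):
                q = min(supply[i], demand[j])   # allocate as much as possible
                alloc[i, j] = q
                supply[i] -= q
                demand[j] -= q
                if supply[i] == 0:
                    i += 1                      # row exhausted, move down
                else:
                    j += 1                      # column satisfied, move right
            return alloc

        print(north_west_corner([20, 30, 25], [10, 35, 30]))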

  13. Application of kernel functions for accurate similarity search in large chemical databases.

    PubMed

    Wang, Xiaohong; Huan, Jun; Smalter, Aaron; Lushington, Gerald H

    2010-04-29

    Similarity search in chemical structure databases is an important problem with many applications in chemical genomics, drug design, and efficient chemical probe screening, among others. It is widely believed that structure-based methods provide an efficient way to perform the query. Recently, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models, graph kernel functions cannot be applied to large chemical compound databases due to the high computational complexity and the difficulties in indexing similarity search for large databases. To bridge graph kernel functions and similarity search in chemical databases, we applied a novel kernel-based similarity measurement, developed in our team, to measure the similarity of graph-represented chemicals. In our method, we utilize a hash table to support the new graph kernel function definition, efficient storage, and fast search. We have applied our method, named G-hash, to large chemical databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest-neighbor (k-NN) classification. Moreover, the similarity measurement and the index structure are scalable to large chemical databases, with smaller indexing size and faster query processing time compared to state-of-the-art indexing methods such as Daylight fingerprints, C-tree and GraphGrep. Efficient similarity query processing for large chemical databases is challenging, since we need to balance running-time efficiency and similarity search accuracy. Our previous similarity search method, G-hash, provides a new way to perform similarity search in chemical databases. An experimental study validates the utility of G-hash in chemical databases.

  14. Removal of arsenate from groundwater by electrocoagulation method.

    PubMed

    Ali, Imran; Khan, Tabrez A; Asim, Mohd

    2012-06-01

    Arsenic, a toxic metalloid in drinking water, has become a major threat to human beings and other organisms. In the present work, attempts were made to remove arsenate from synthetic water as well as natural water of Ballia district, India, by the electrocoagulation method. Efforts were also made to optimize various parameters, namely initial arsenate concentration, pH, applied voltage, processing time, and working temperature. Electrocoagulation is a fast, inexpensive, selective, accurate, reproducible, and eco-friendly method for arsenate removal from groundwater. The present paper describes an electrocoagulation method for arsenate removal from groundwater using iron and zinc as anode and cathode, respectively. The maximum arsenate removal was 98.8%, obtained at an arsenate concentration of 2.0 mg L(-1), pH 7.0, applied voltage of 3.0 V, processing time of 10.0 min, and working temperature of 30°C. The relative standard deviation, coefficient of determination (r(2)), and confidence limits ranged from 1.50% to 1.59%, 0.9996 to 0.9998, and 96.0% to 99.0%, respectively. The treated water was clear, colorless, and odorless, without any secondary contamination. The developed and validated method was applied for arsenate removal from two groundwater samples of Ballia district, U.P., India, having arsenate concentrations of 0.563 and 0.805 mg L(-1). The reported method is capable of removing arsenate completely (100% removal) from the groundwater of Ballia district. There was no change in groundwater quality after the removal of arsenate, and the treated water was safe for drinking, bathing, and recreation purposes. Therefore, this method may be the method of choice for arsenate removal from natural groundwater.

  15. Mutual information based feature selection for medical image retrieval

    NASA Astrophysics Data System (ADS)

    Zhi, Lijia; Zhang, Shaomin; Li, Yan

    2018-04-01

    In this paper, the authors propose a mutual information based method for lung CT image retrieval. The method is designed to adapt to different datasets and different retrieval tasks. For practical application, it avoids using a large amount of training data; instead, with a well-designed training process and robust fundamental features and measurements, the method achieves promising performance while keeping training computation economical. Experimental results show that the method has potential practical value for clinical routine application.
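
    A hedged sketch of mutual-information-based feature ranking using scikit-learn as a stand-in (the paper's exact features and training design are not given): the features most informative about relevance labels would then drive the retrieval similarity measure. Data are synthetic.

        import numpy as np
        from sklearn.feature_selection import mutual_info_classif

        rng = np.random.default_rng(5)
        X = rng.standard_normal((200, 40))            # 40 candidate image features
        y = (X[:, 3] + 0.5 * X[:, 17] > 0).astype(int)  # labels depend on 2 of them

        mi = mutual_info_classif(X, y, random_state=0)  # MI of each feature with y
        top = np.argsort(mi)[::-1][:5]
        print(top)   # features 3 and 17 should rank near the top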

  16. Development of a process-oriented vulnerability concept for water travel time in karst aquifers-case study of Tanour and Rasoun springs catchment area.

    NASA Astrophysics Data System (ADS)

    Hamdan, Ibraheem; Sauter, Martin; Ptak, Thomas; Wiegand, Bettina; Margane, Armin; Toll, Mathias

    2017-04-01

    Key words: karst aquifer, water travel time, vulnerability assessment, Jordan. Understanding groundwater pathways and movement through karst aquifers, and the karst aquifer's response to precipitation events, especially in arid to semi-arid areas, is fundamental to evaluating pollution risks from point and non-point sources. In spite of the great importance of karst aquifers for drinking water, they are highly sensitive to contamination events due to the fast connections between the land surface and the groundwater through karst features, which makes groundwater quality issues within karst systems very complicated. Within this study, different methods and approaches were developed and applied to characterize the karst aquifer system of the Tanour and Rasoun springs (NW Jordan) and the flow dynamics within the aquifer, and to develop a process-oriented method for vulnerability assessment based on the monitoring of different spatially variable parameters of water travel time in the karst aquifer. In general, this study aims to achieve two main objectives: 1. characterization of the karst aquifer system and flow dynamics; 2. development of a process-oriented method for vulnerability assessment based on spatially variable parameters of travel time. To achieve these aims, different approaches and methods were applied, starting from an understanding of the geological and hydrogeological characteristics of the karst aquifer and its vulnerability to pollutants, through the use of different methods, procedures and monitored parameters to determine the water travel time within the aquifer and to investigate its response to precipitation events, to the study of the aquifer's response to pollution events. The integrated breakthrough signals obtained from the applied methods and procedures, including the use of stable isotopes of oxygen and hydrogen, the monitoring of multiple qualitative and quantitative parameters using automated probes and data loggers, and the development of a travel-time physics-based vulnerability assessment method, show good agreement, demonstrating an applicable approach for determining water travel time in karst aquifers and for investigating their response to precipitation and pollution events.

  17. The image enhancement and region of interest extraction of lobster-eye X-ray dangerous material inspection system

    NASA Astrophysics Data System (ADS)

    Zhan, Qi; Wang, Xin; Mu, Baozhong; Xu, Jie; Xie, Qing; Li, Yaran; Chen, Yifan; He, Yanan

    2016-10-01

    Dangerous materials inspection is an important technique for confirming dangerous-materials crimes. It has a significant impact on the prohibition of such crimes and on limiting the spread of dangerous materials. The Lobster-Eye Optical Imaging System is a dangerous-materials detection device that mainly takes advantage of backscattered X-rays. The strength of the system is its ability to work with access to only one side of an object, and to detect dangerous materials without disturbing the surroundings of the target material. The device uses Compton-scattered X-rays to create computerized outlines of suspected objects during the security detection process. Due to the grid structure of the bionic objective glass, which imitates the eye of a lobster, the grids contribute the main image noise during the imaging process. At the same time, when used to inspect structured or dense materials, the image is plagued by superposition artifacts and limited by attenuation and noise. With the goal of achieving high-quality images suitable for dangerous-materials detection and further analysis, we developed effective image processing methods for the system. The first stage is denoising and edge-contrast enhancement, during which we apply a deconvolution algorithm to remove the grids and other noise; after this processing we achieve a high signal-to-noise-ratio image. The second stage reconstructs the image from low-dose X-ray exposure conditions, for which we developed an interpolation method. The last stage is region-of-interest (ROI) extraction, which helps identify dangerous materials mixed with complex backgrounds. The methods demonstrated in this paper have the potential to improve the sensitivity and quality of X-ray backscatter system imaging.
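
    One common way to implement the deconvolution step named above is frequency-domain Wiener deconvolution; the sketch below assumes a toy PSF standing in for the lobster-eye grid response and an illustrative noise constant K, not the authors' calibrated values.

        import numpy as np

        def wiener_deconvolve(img, psf, K=0.01):
            """Classic Wiener deconvolution: divide out the PSF in the
            frequency domain, regularized by the noise constant K."""
            H = np.fft.fft2(psf, s=img.shape)
            G = np.fft.fft2(img)
            F = np.conj(H) / (np.abs(H)**2 + K) * G
            return np.real(np.fft.ifft2(F))

        img = np.random.rand(128, 128)          # placeholder backscatter image
        psf = np.zeros((5, 5))                  # toy grid-like blur kernel
        psf[2, :] = 1.0
        psf /= psf.sum()
        print(wiener_deconvolve(img, psf).shape)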

  18. From user needs to system specifications: multi-disciplinary thematic seminars as a collaborative design method for development of health information systems.

    PubMed

    Scandurra, I; Hägglund, M; Koch, S

    2008-08-01

    This paper presents a new multi-disciplinary method for user needs analysis and requirements specification in the context of health information systems based on established theories from the fields of participatory design and computer supported cooperative work (CSCW). Whereas conventional methods imply a separate, sequential needs analysis for each profession, the "multi-disciplinary thematic seminar" (MdTS) method uses a collaborative design process. Application of the method in elderly homecare resulted in prototypes that were well adapted to the intended user groups. Vital information in the points of intersection between different care professions was elicited and a holistic view of the entire care process was obtained. Health informatics-usability specialists and clinical domain experts are necessary to apply the method. Although user needs acquisition can be time-consuming, MdTS was perceived to efficiently identify in-context user needs, and transformed these directly into requirements specifications. Consequently the method was perceived to expedite the entire ICT implementation process.

  19. The application of quality risk management to the bacterial endotoxins test: use of hazard analysis and critical control points.

    PubMed

    Annalaura, Carducci; Giulia, Davini; Stefano, Ceccanti

    2013-01-01

    Risk analysis is widely used in the pharmaceutical industry to manage production processes, validation activities, training, and other activities. Several methods of risk analysis are available (for example, failure mode and effects analysis, fault tree analysis), and one or more should be chosen and adapted to the specific field where they will be applied. Among the methods available, hazard analysis and critical control points (HACCP) is a methodology that has been applied since the 1960s, and whose areas of application have expanded over time from food to the pharmaceutical industry. It can be easily and successfully applied to several processes because its main feature is the identification, assessment, and control of hazards. It can also be integrated with other tools, such as the fishbone diagram and flowcharting. The aim of this article is to show how HACCP can be used to manage an analytical process, to propose how to conduct the necessary steps, and to provide data templates necessary for documentation and useful for following current good manufacturing practices. In the quality control process, risk analysis is a useful tool for enhancing the uniformity of technical choices and their documented rationale. Accordingly, it allows for more effective and economical laboratory management, is capable of increasing the reliability of analytical results, and enables auditors and authorities to better understand choices that have been made. The aim of this article is to show how hazard analysis and critical control points can be used to manage bacterial endotoxins testing and other analytical processes in a formal, clear, and detailed manner.

  20. The Development of Online Tutorial Program Design Using Problem-Based Learning in Open Distance Learning System

    ERIC Educational Resources Information Center

    Said, Asnah; Syarif, Edy

    2016-01-01

    This research aimed to evaluate the design of an online tutorial program applying problem-based learning in the Research Methods course currently implemented in the Open Distance Learning (ODL) system. Students must take the Research Methods course to prepare themselves for academic writing projects. Problem-based learning basically emphasizes the process of…

  1. The Feasibility of Applying PBL Teaching Method to Surgery Teaching of Chinese Medicine

    ERIC Educational Resources Information Center

    Tang, Qianli; Yu, Yuan; Jiang, Qiuyan; Zhang, Li; Wang, Qingjian; Huang, Mingwei

    2008-01-01

    The traditional classroom teaching mode is based on the content of the subject, takes the teacher as the center, and gives priority to classroom instruction. The PBL (Problem-Based Learning) teaching method, by contrast, breaks with the traditional mode, combining basic science with clinical practice and covering the process from discussion to self-study to…

  2. Tools for 3D scientific visualization in computational aerodynamics at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val

    1989-01-01

    Hardware, software, and techniques used by the Fluid Dynamics Division (NASA) for performing visualization of computational aerodynamics, which can be applied to the visualization of flow fields from computer simulations of fluid dynamics about the Space Shuttle, are discussed. The three visualization techniques applied (post-processing, tracking, and steering) are described, as well as the post-processing software packages used: PLOT3D, SURF (Surface Modeller), GAS (Graphical Animation System), and FAST (Flow Analysis Software Toolkit). Using post-processing methods, a flow simulation was executed on a supercomputer and, after the simulation was complete, the results were processed for viewing. It is shown that a high-resolution, high-performance three-dimensional workstation combined with specially developed display and animation software provides a good tool for analyzing flow-field solutions obtained from supercomputers.

  3. Process for the preparation of organoclays

    DOEpatents

    Chaiko, David J.

    2003-01-01

    A method for preparing organoclays from smectites for use as rheological control agents and in the preparation of nanocomposites. Typically, the clay is dispersed in water, and a substantially monomolecular layer of a water soluble polymer is applied to the surfaces of the clay. A surfactant is also applied to the clay to modify the surface hydrophilic/hydrophobic balance of the clay, and the organoclay is separated out for subsequent use.

  4. A Novel College Network Resource Management Method using Cloud Computing

    NASA Astrophysics Data System (ADS)

    Lin, Chen

    At present, college information construction consists mainly of campus network construction and management information systems, and many problems arise during this informatization process. Cloud computing is a development of distributed processing, parallel processing, and grid computing: data are stored in the cloud, software and services are placed in the cloud and built on top of various standards and protocols, and they can be accessed from many kinds of devices. This article introduces cloud computing and its functions, analyzes the existing problems of college network resource management, and then applies cloud computing technology and methods to the construction of a college information sharing platform.

  5. Fingerprint image enhancement by differential hysteresis processing.

    PubMed

    Blotta, Eduardo; Moler, Emilce

    2004-05-10

    A new method to enhance defective fingerprint images through digital image processing tools is presented in this work. When fingerprints have been taken carelessly, they may be blurred and in some cases mostly illegible, as in the case presented here, and their classification and comparison becomes nearly impossible. A combination of spatial-domain filters, including a technique called differential hysteresis processing (DHP), is applied to improve this kind of image. This set of filtering methods proved satisfactory in a wide range of cases, uncovering hidden details that helped to identify persons. Dactyloscopy experts from the Policia Federal Argentina and the EAAF have validated these results.
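
    The DHP algorithm itself is not described in the abstract. As a loose illustration of the hysteresis idea in image enhancement, the sketch below combines unsharp masking with hysteresis thresholding from scikit-image; the filter choices and threshold values are assumptions, not the published DHP method.

    ```python
    # Hysteresis-style enhancement sketch; not the published DHP algorithm.
    import numpy as np
    from skimage import filters

    def enhance(image: np.ndarray, low: float = 0.1, high: float = 0.35) -> np.ndarray:
        """Sharpen ridges, then keep weak detail only where connected to strong detail."""
        sharpened = filters.unsharp_mask(image, radius=2, amount=1.5)
        edges = filters.sobel(sharpened)
        # Hysteresis: pixels above `high` are kept; pixels above `low` are kept
        # only if connected to a pixel above `high`.
        mask = filters.apply_hysteresis_threshold(edges, low, high)
        return np.where(mask, sharpened, image)
    ```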

  6. Multiphase Simulated Annealing Based on Boltzmann and Bose-Einstein Distribution Applied to Protein Folding Problem.

    PubMed

    Frausto-Solis, Juan; Liñán-García, Ernesto; Sánchez-Hernández, Juan Paulo; González-Barbosa, J Javier; González-Flores, Carlos; Castilla-Valdez, Guadalupe

    2016-01-01

    A new hybrid Multiphase Simulated Annealing algorithm using Boltzmann and Bose-Einstein distributions (MPSABBE) is proposed. MPSABBE was designed to solve instances of the Protein Folding Problem (PFP). The approach has four phases: (i) a Multiquenching Phase (MQP), (ii) a Boltzmann Annealing Phase (BAP), (iii) a Bose-Einstein Annealing Phase (BEAP), and (iv) a Dynamical Equilibrium Phase (DEP). BAP and BEAP are simulated annealing search procedures based on the Boltzmann and Bose-Einstein distributions, respectively. DEP is also a simulated annealing search procedure, applied at the final temperature of BEAP, and can be seen as a second Bose-Einstein phase. MQP is a search process that runs from extremely high to high temperatures with a very fast cooling schedule and is not very restrictive in accepting new solutions; BAP and BEAP run from high to low and from low to very low temperatures, respectively, and are more restrictive in accepting new solutions. DEP uses a particular heuristic to detect stochastic equilibrium, applying a least-squares method during its execution. The MPSABBE parameters are tuned with an analytical method that considers the maximal and minimal deterioration of problem instances. MPSABBE was tested on several PFP instances, and the results show that using both distributions outperforms the classical SA, which uses the Boltzmann distribution alone.
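
    The abstract does not give the acceptance rules. One plausible reading, sketched below, is a standard SA loop whose acceptance probability follows either the Boltzmann factor or a Bose-Einstein-style occupancy factor; the `bose_einstein` form and the geometric cooling schedule are assumptions for illustration, not the published MPSABBE phases.

    ```python
    # Minimal SA loop with swappable acceptance distributions; a sketch, not MPSABBE.
    import math
    import random

    def boltzmann(delta_e: float, temp: float) -> float:
        return math.exp(-delta_e / temp)

    def bose_einstein(delta_e: float, temp: float) -> float:
        # Assumed occupancy-style factor, clamped to a valid probability.
        return min(1.0, 1.0 / (math.exp(delta_e / temp) - 1.0 + 1e-12))

    def anneal(energy, neighbor, x0, t_hi, t_lo, accept, alpha=0.95):
        """Cool geometrically from t_hi to t_lo, accepting uphill moves stochastically."""
        x, temp = x0, t_hi
        while temp > t_lo:
            cand = neighbor(x)
            delta = energy(cand) - energy(x)
            if delta <= 0 or random.random() < accept(delta, temp):
                x = cand
            temp *= alpha
        return x

    # Usage: a toy 1-D landscape standing in for a protein-energy function.
    best = anneal(lambda x: (x - 3) ** 2,
                  lambda x: x + random.uniform(-1, 1),
                  x0=0.0, t_hi=100.0, t_lo=1e-3, accept=boltzmann)
    ```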

  7. Parameters estimation using the first passage times method in a jump-diffusion model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khaldi, K., E-mail: kkhaldi@umbb.dz; LIMOSE Laboratory, Boumerdes University, 35000; Meddahi, S., E-mail: samia.meddahi@gmail.com

    2016-06-02

    This paper makes two main contributions: (1) it presents a new method, the first passage time (FPT) method generalized to all passage times (the GPT method), for estimating the parameters of a stochastic jump-diffusion process; and (2) using a time series of gold share prices, it compares the estimates and forecasts obtained with the GPT method against those obtained by the method of moments and by the FPT method applied to the Merton Jump-Diffusion (MJD) model.
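
    Estimation details are not in the abstract. For context, the sketch below simulates a Merton jump-diffusion price path, the model whose parameters such methods estimate; all parameter values are illustrative.

    ```python
    # Merton jump-diffusion path simulation; parameter values are illustrative.
    import numpy as np

    def merton_jd(s0, mu, sigma, lam, jump_mu, jump_sigma, t=1.0, n=252, seed=0):
        """Simulate S_t with drift mu, volatility sigma, and Poisson(lam) normal log-jumps."""
        rng = np.random.default_rng(seed)
        dt = t / n
        dw = rng.standard_normal(n) * np.sqrt(dt)    # Brownian increments
        jumps = rng.poisson(lam * dt, n)             # number of jumps in each step
        # Sum of k i.i.d. normal log-jumps is N(k * jump_mu, k * jump_sigma**2).
        jump_sizes = jumps * jump_mu + np.sqrt(jumps) * jump_sigma * rng.standard_normal(n)
        log_s = np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * dw + jump_sizes)
        return s0 * np.exp(np.concatenate(([0.0], log_s)))

    path = merton_jd(s0=100.0, mu=0.05, sigma=0.2, lam=0.5, jump_mu=-0.1, jump_sigma=0.15)
    ```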

  8. Digital intermediate frequency QAM modulator using parallel processing

    DOEpatents

    Pao, Hsueh-Yuan [Livermore, CA]; Tran, Binh-Nien [San Ramon, CA]

    2008-05-27

    This digital Intermediate Frequency (IF) modulator applies to various modulation types and offers a simple, low-cost way to implement high-speed digital IF modulation using field-programmable gate arrays (FPGAs). The architecture eliminates multipliers and sequential processing by storing the pre-computed modulated cosine and sine carriers in ROM look-up tables (LUTs). The high-speed input data stream is processed in parallel using the corresponding LUTs, which reduces the required processing speed and allows the use of low-cost FPGAs.
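
    As a software analogue of the LUT architecture described above, the sketch below pre-computes modulated carrier samples in tables and forms a QAM output with look-ups instead of runtime multiplies; the table sizes, IF ratio, and 16-QAM mapping are assumptions for illustration, not the patented design.

    ```python
    # LUT-based IF QAM sketch: pre-computed modulated carriers replace multipliers.
    # Table sizes, IF ratio, and the 16-QAM constellation are illustrative assumptions.
    import numpy as np

    SAMPLES_PER_SYMBOL = 8                      # carrier phase samples per symbol
    LEVELS = np.array([-3, -1, 1, 3])           # one 16-QAM rail (4 levels per axis)

    phase = 2 * np.pi * np.arange(SAMPLES_PER_SYMBOL) / SAMPLES_PER_SYMBOL
    # ROM LUTs: rows = amplitude level, cols = carrier phase sample.
    COS_LUT = np.outer(LEVELS, np.cos(phase))   # pre-modulated I carriers
    SIN_LUT = np.outer(LEVELS, np.sin(phase))   # pre-modulated Q carriers

    def modulate(i_symbols: np.ndarray, q_symbols: np.ndarray) -> np.ndarray:
        """Each (I, Q) level index selects a LUT row; no runtime multiplies needed."""
        return (COS_LUT[i_symbols] - SIN_LUT[q_symbols]).ravel()

    # Usage: random 16-QAM symbol indices (0..3 per rail).
    rng = np.random.default_rng(1)
    if_signal = modulate(rng.integers(0, 4, 64), rng.integers(0, 4, 64))
    ```

    In hardware the two tables would sit in ROM and the combination would be a single subtraction per sample, which mirrors the multiplier-free property the abstract describes.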

  9. The efficiency of backward magnetic-pulse processing

    NASA Astrophysics Data System (ADS)

    Kudasov, Yu. B.; Maslov, D. A.; Surdin, O. M.

    2017-01-01

    The dependence of the efficiency of magnetic-pulse processing of materials on the pulsed magnetic-field shape has been studied. It is shown that, by using a pulse train instead of a single pulse in the fast-rising component of a magnetic field applied during the backward forming process, it is possible to increase the specific mechanical impulse transferred to a workpiece and, thus, improve the efficiency of processing. Possible applications of the proposed method to removing dents from car chassis and aircraft parts are considered.

  10. Study of Thermal Electrical Modified Etching for Glass and Its Application in Structure Etching

    PubMed Central

    Zhan, Zhan; Li, Wei; Yu, Lingke; Wang, Lingyun; Sun, Daoheng

    2017-01-01

    In this work, an accelerated etching method for glass named thermal electrical modified etching (TEM etching) is investigated. Based on the identification of this effect in anodic bonding, a novel method for glass structure micromachining using TEM etching is proposed. To validate the method, TEM-etched glasses are prepared and their morphology is examined, demonstrating the feasibility of the new method for micro/nano structure micromachining. Furthermore, two kinds of edge effect in the TEM and etching processes are analyzed. Additionally, a parameter study of TEM etching covering transferred charge, applied pressure, and etching roughness is conducted to evaluate the method. The study shows that TEM etching is a promising manufacturing method for glass, offering low process temperature, three-dimensional self-control ability, and low equipment requirements. PMID:28772521

  11. Assessing and Evaluating Multidisciplinary Translational Teams: A Mixed Methods Approach

    PubMed Central

    Wooten, Kevin C.; Rose, Robert M.; Ostir, Glenn V.; Calhoun, William J.; Ameredes, Bill T.; Brasier, Allan R.

    2014-01-01

    A case report illustrates how multidisciplinary translational teams can be assessed with outcome, process, and developmental types of evaluation within a mixed methods approach. Types of evaluation appropriate for teams are considered in relation to relevant research questions and assessment methods. Logic models are applied to scientific projects and team development to inform choices between methods within a mixed methods design. Use of an expert panel is reviewed, culminating in consensus ratings of 11 multidisciplinary teams and a final evaluation within a team type taxonomy. Based on team maturation and scientific progress, teams were designated as: a) early in development, b) traditional, c) process focused, or d) exemplary. Lessons learned from data reduction, use of mixed methods, and use of expert panels are explored. PMID:24064432

  12. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE PAGES

    Dai, Heng; Ye, Ming; Walker, Anthony P.; ...

    2017-03-28

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
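
    The idea can be sketched compactly. Assuming equal model weights and toy process models, the sketch below estimates a process sensitivity index as the share of total output variance attributable to one process's competing models and their parameters; the model forms, weights, and sample sizes are illustrative, not the paper's groundwater models.

    ```python
    # Toy process sensitivity index: variance of the conditional mean over one
    # process (its competing models AND their parameters) divided by total variance.
    # Model forms and weights are illustrative, not the paper's hydrological models.
    import numpy as np

    rng = np.random.default_rng(2)

    recharge_models = [lambda p: 0.8 * p, lambda p: 0.6 * p + 5.0]   # two hypotheses
    geology_models = [lambda k: 2.0 * k, lambda k: k ** 1.2]         # two hypotheses

    def output(r_model, p, g_model, k):
        """Stand-in system model combining the two processes."""
        return recharge_models[r_model](p) + geology_models[g_model](k)

    def recharge_index(n_outer=2000, n_inner=200):
        cond_means, all_y = [], []
        for _ in range(n_outer):
            r_model = rng.integers(0, 2)      # sample a recharge model (equal weights)
            p = rng.uniform(10, 20)           # sample its parameter
            ys = [output(r_model, p, rng.integers(0, 2), rng.uniform(1, 3))
                  for _ in range(n_inner)]    # average out the geology process
            cond_means.append(np.mean(ys))
            all_y.extend(ys)
        return np.var(cond_means) / np.var(all_y)

    print(f"recharge process sensitivity index ~ {recharge_index():.2f}")
    ```

    Swapping the roles of the outer and inner loops gives the corresponding index for the geology process.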

  14. A review of methods for assessment of the rate of gastric emptying in the dog and cat: 1898-2002.

    PubMed

    Wyse, C A; McLellan, J; Dickie, A M; Sutton, D G M; Preston, T; Yam, P S

    2003-01-01

    Gastric emptying is the process by which food is delivered to the small intestine at a rate and in a form that optimizes intestinal absorption of nutrients. The rate of gastric emptying is subject to alteration by physiological, pharmacological, and pathological conditions. Gastric emptying of solids is of greater clinical significance because disordered gastric emptying is rarely detectable in the liquid phase. Imaging techniques have the disadvantage of requiring restraint of the animal and access to expensive equipment. Radiographic methods require administration of test meals that are not similar to food. Scintigraphy is the gold-standard method for assessment of gastric emptying but requires administration of a radioisotope. Magnetic resonance imaging has not yet been applied for assessment of gastric emptying in small animals. Ultrasonography is a potentially useful, but subjective, method for assessment of gastric emptying in dogs. Gastric tracer methods require insertion of gastric or intestinal cannulae and are rarely applied outside of the research laboratory. The paracetamol absorption test has been applied for assessment of liquid-phase gastric emptying in the dog, but requires IV cannulation. The gastric emptying breath test is a noninvasive method for assessment of gastric emptying that has been applied in dogs and cats. This method can be carried out away from the veterinary hospital, but the effects of physiological and pathological abnormalities on the test are not known. Advances in technology will facilitate the development of reliable methods for assessment of gastric emptying in small animals.

  15. System and methods for determining masking signals for applying empirical mode decomposition (EMD) and for demodulating intrinsic mode functions obtained from application of EMD

    DOEpatents

    Senroy, Nilanjan [New Delhi, IN]; Suryanarayanan, Siddharth [Littleton, CO]

    2011-03-15

    A computer-implemented method of signal processing is provided. The method includes generating one or more masking signals based upon a computed Fourier transform of a received signal. The method further includes determining one or more intrinsic mode functions (IMFs) of the received signal by performing a masking-signal-based empirical mode decomposition (EMD) using the one or more masking signals.
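
    A rough sketch of the masking-EMD idea, assuming the PyEMD package (pip install EMD-signal) for the decomposition itself: the mask frequency is taken from the FFT peak, and IMFs from `x + mask` and `x - mask` are averaged. The amplitude factor and peak-picking rule are assumptions, not the patented procedure.

    ```python
    # Masking-signal EMD sketch; the mask construction is an assumption, and the
    # decomposition relies on the PyEMD package rather than the patented method.
    import numpy as np
    from PyEMD import EMD

    def masked_emd(x: np.ndarray, fs: float, amp_factor: float = 1.6) -> np.ndarray:
        """Build a masking sinusoid from the FFT peak, then average +/- mask IMFs."""
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        spectrum = np.abs(np.fft.rfft(x))
        f_peak = freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin
        t = np.arange(len(x)) / fs
        mask = amp_factor * np.std(x) * np.sin(2 * np.pi * f_peak * t)
        emd = EMD()
        imfs_plus = emd(x + mask)
        imfs_minus = emd(x - mask)
        n = min(len(imfs_plus), len(imfs_minus))
        return (imfs_plus[:n] + imfs_minus[:n]) / 2.0  # mask cancels on average
    ```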

  16. The dream interview method in addiction recovery. A treatment guide.

    PubMed

    Flowers, L K; Zweben, J E

    1996-01-01

    The Dream Interview Method is a recently developed tool for dream interpretation that can facilitate work on addiction issues at all stages of recovery. This paper describes the method in detail and discusses examples of its application in a group composed of individuals in varying stages of the recovery process. It permits the therapist to accelerate the development of insight, and once the method is learned, it can be applied in self-help formats.

  17. Evaluation of the efficiency of continuous wavelet transform as processing and preprocessing algorithm for resolution of overlapped signals in univariate and multivariate regression analyses; an application to ternary and quaternary mixtures

    NASA Astrophysics Data System (ADS)

    Hegazy, Maha A.; Lotfy, Hayam M.; Mowaka, Shereen; Mohamed, Ekram Hany

    2016-07-01

    Wavelets have been adapted for a vast number of signal-processing applications due to the amount of information that can be extracted from a signal. In this work, a comparative study was conducted on the efficiency of the continuous wavelet transform (CWT) as a signal-processing tool in univariate regression and as a pre-processing tool in multivariate analysis using partial least squares (CWT-PLS). Both were applied to the complex spectral signals of ternary and quaternary mixtures. The CWT-PLS method succeeded in the simultaneous determination of a quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PAR), and p-aminophenol (PAP, the major impurity of paracetamol). The univariate CWT, by contrast, failed to determine the quaternary mixture simultaneously; it could determine only PAR and PAP, and the ternary mixtures of DRO, CAF, and PAR and of CAF, PAR, and PAP. Different wavelet families were tested during the CWT calculations. The univariate CWT method was validated according to the ICH guidelines. For the development of the CWT-PLS model, a calibration set was prepared by means of an orthogonal experimental design, and its absorption spectra were recorded and processed by CWT. The CWT-PLS model was constructed by regression between the wavelet coefficients and the concentration matrices, and it was validated by both cross validation and external validation sets. Both methods were successfully applied to the determination of the studied drugs in pharmaceutical formulations.
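
    A compact sketch of the CWT-PLS pipeline under stated assumptions: spectra are transformed with a Mexican-hat CWT via PyWavelets and the coefficients regressed on concentrations with scikit-learn's PLS. The wavelet choice, scales, and component count are illustrative, not the paper's tuned settings.

    ```python
    # CWT-PLS sketch: wavelet coefficients as PLS features.
    # Wavelet, scales, and component count are illustrative assumptions.
    import numpy as np
    import pywt
    from sklearn.cross_decomposition import PLSRegression

    def cwt_features(spectra: np.ndarray, scales=np.arange(1, 31)) -> np.ndarray:
        """CWT each spectrum with a Mexican-hat wavelet; flatten the coefficients."""
        feats = []
        for s in spectra:
            coeffs, _ = pywt.cwt(s, scales, "mexh")
            feats.append(coeffs.ravel())
        return np.asarray(feats)

    # Toy calibration set: 20 mixtures x 100 wavelengths, 4 analyte concentrations.
    rng = np.random.default_rng(3)
    spectra = rng.random((20, 100))
    concentrations = rng.random((20, 4))

    pls = PLSRegression(n_components=4)
    pls.fit(cwt_features(spectra), concentrations)
    predicted = pls.predict(cwt_features(spectra))
    ```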

  18. Process Consultancy. The Demand for Consultancy on Group Processes in the Open University--Implications of Change (Report No. 59). Process Consultancy within the Open University 1981-1991 (Report No. 61).

    ERIC Educational Resources Information Center

    Nicodemus, Robert

    The two reports combined here provide introductory information on consultancy work at Great Britain's Open University Institute of Educational Technology. The approach at the institution was influenced by the theories and methods developed at the Tavistock Institute of Human Relations and applied to group relations training. It is noted that the…

  19. Global analysis of bacterial transcription factors to predict cellular target processes.

    PubMed

    Doerks, Tobias; Andrade, Miguel A; Lathe, Warren; von Mering, Christian; Bork, Peer

    2004-03-01

    Whole-genome sequences are now available for >100 bacterial species, giving unprecedented power to comparative genomics approaches. We have applied genome-context methods to predict target processes that are regulated by transcription factors (TFs). Of 128 orthologous groups of proteins annotated as TFs, to date, 36 are functionally uncharacterized; in our analysis we predict a probable cellular target process or biochemical pathway for half of these functionally uncharacterized TFs.

  20. FLUORESCENT IN SITU HYBRIDIZATION AND MICROAUTORADIOGRAPHY APPLIED TO ECOPHYSIOLOGY IN SOIL

    EPA Science Inventory

    Soil microbial communities perform many important processes, including nutrient cycling, plant-microorganism interactions, and degradation of xenobiotics. The study of microbial communities, however, has been limited by cultural methods, which may greatly underestimate diversity…
