NASA Astrophysics Data System (ADS)
Chi, Xu; Dongming, Guo; Zhuji, Jin; Renke, Kang
2010-12-01
A signal processing method for the friction-based endpoint detection system of a chemical mechanical polishing (CMP) process is presented. The signal processing method uses the wavelet threshold denoising method to reduce the noise contained in the measured original signal, extracts the Kalman filter innovation from the denoised signal as the feature signal, and judges the CMP endpoint based on the behavior of the Kalman filter innovation sequence during the CMP process. Applying the signal processing method, endpoint detection experiments for the Cu CMP process were carried out. The results show that the signal processing method can judge the endpoint of the Cu CMP process.
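A minimal Python sketch of the two-stage pipeline described above, assuming a friction signal loaded from a file: wavelet threshold denoising (here via PyWavelets) followed by a scalar Kalman filter whose innovation sequence is monitored for the endpoint. The wavelet choice, noise model constants, threshold rule and file name are illustrative assumptions, not values from the paper.

```python
# Sketch only: wavelet denoising + Kalman innovation monitoring for CMP endpoint detection.
import numpy as np
import pywt

def denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))          # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def kalman_innovations(z, q=1e-5, r=1e-2):
    """Random-walk state model; returns the innovation (prediction residual) sequence."""
    x, p = z[0], 1.0
    innov = np.zeros(len(z))
    for k, zk in enumerate(z):
        p = p + q                      # predict
        innov[k] = zk - x              # innovation = measurement - prediction
        gain = p / (p + r)
        x = x + gain * innov[k]        # update
        p = (1.0 - gain) * p
    return innov

friction = np.loadtxt("friction_signal.txt")                  # hypothetical measured signal
innov = kalman_innovations(denoise(friction))
baseline = np.std(innov[:200])                                # assume the first samples are pre-endpoint
endpoint_index = int(np.argmax(np.abs(innov) > 3.0 * baseline))
print("endpoint detected at sample", endpoint_index)
```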
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hohimer, J.P.
The use of laser-based analytical methods in nuclear-fuel processing plants is considered. The species and locations for accountability, process control, and effluent control measurements in the Coprocessing, Thorex, and reference Purex fuel processing operations are identified and the conventional analytical methods used for these measurements are summarized. The laser analytical methods based upon Raman, absorption, fluorescence, and nonlinear spectroscopy are reviewed and evaluated for their use in fuel processing plants. After a comparison of the capabilities of the laser-based and conventional analytical methods, the promising areas of application of the laser-based methods in fuel processing plants are identified.
Karimi, Davood; Ward, Rabab K
2016-10-01
Image models are central to all image processing tasks. The great advancements in digital image processing would not have been made possible without powerful models which, themselves, have evolved over time. In the past decade, "patch-based" models have emerged as one of the most effective models for natural images. Patch-based methods have outperformed other competing methods in many image processing tasks. These developments have come at a time when greater availability of powerful computational resources and growing concerns over the health risks of ionizing radiation encourage research on image processing algorithms for computed tomography (CT). The goal of this paper is to explain the principles of patch-based methods and to review some of their recent applications in CT. We first review the central concepts in patch-based image processing and explain some of the state-of-the-art algorithms, with a focus on aspects that are more relevant to CT. Then, we review some of the recent applications of patch-based methods in CT. Patch-based methods have already transformed the field of image processing, leading to state-of-the-art results in many applications. More recently, several studies have proposed patch-based algorithms for various image processing tasks in CT, from denoising and restoration to iterative reconstruction. Although these studies have reported good results, the true potential of patch-based methods for CT has not yet been fully appreciated. Patch-based methods can play a central role in image reconstruction and processing for CT. They have the potential to lead to substantial improvements in the current state of the art.
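As a concrete illustration of the patch-based idea the review surveys, the sketch below denoises a 2D image by replacing each overlapping patch with a similarity-weighted average of reference patches and re-assembling the image from the overlaps. The patch size, smoothing parameter and random reference sampling are illustrative assumptions and far simpler than the CT algorithms discussed in the paper.

```python
# Sketch only: a basic patch-averaging denoiser (conceptually related to non-local means).
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def patch_denoise(img, patch=7, h=0.05, n_ref=2000, seed=0):
    rng = np.random.default_rng(seed)
    img = img.astype(float)
    P = extract_patches_2d(img, (patch, patch))               # all overlapping patches
    flat = P.reshape(len(P), -1)
    ref = flat[rng.choice(len(flat), size=min(n_ref, len(flat)), replace=False)]
    out = np.empty_like(flat)
    for i, p in enumerate(flat):
        d2 = np.mean((ref - p) ** 2, axis=1)                  # squared distance to references
        w = np.exp(-d2 / (h ** 2))                            # similarity weights
        out[i] = (w[:, None] * ref).sum(axis=0) / w.sum()     # weighted patch average
    return reconstruct_from_patches_2d(out.reshape(P.shape), img.shape)
```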
Intelligent methods for the process parameter determination of plastic injection molding
NASA Astrophysics Data System (ADS)
Gao, Huang; Zhang, Yun; Zhou, Xundao; Li, Dequn
2018-03-01
Injection molding is one of the most widely used material processing methods in producing plastic products with complex geometries and high precision. The determination of process parameters is important in obtaining qualified products and maintaining product quality. This article reviews the recent studies and developments of the intelligent methods applied in the process parameter determination of injection molding. These intelligent methods are classified into three categories: case-based reasoning methods, expert system-based methods, and data fitting and optimization methods. A framework of process parameter determination is proposed after comprehensive discussions. Finally, the conclusions and future research topics are discussed.
System and method for deriving a process-based specification
NASA Technical Reports Server (NTRS)
Hinchey, Michael Gerard (Inventor); Rouff, Christopher A. (Inventor); Rash, James Larry (Inventor)
2009-01-01
A system and method for deriving a process-based specification for a system is disclosed. The process-based specification is mathematically inferred from a trace-based specification. The trace-based specification is derived from a non-empty set of traces or natural language scenarios. The process-based specification is mathematically equivalent to the trace-based specification. Code is generated, if applicable, from the process-based specification. A process, or phases of a process, using the features disclosed can be reversed and repeated to allow for an interactive development and modification of legacy systems. The process is applicable to any class of system, including, but not limited to, biological and physical systems, electrical and electro-mechanical systems in addition to software, hardware and hybrid hardware-software systems.
Shao, Jing-Yuan; Qu, Hai-Bin; Gong, Xing-Chu
2018-05-01
In this work, two algorithms for design space calculation (the overlapping method and the probability-based method) were compared using data collected from the extraction process of Codonopsis Radix as an example. In the probability-based method, experimental error was simulated to calculate the probability of reaching the standard. The effects of several parameters on the calculated design space were studied, including the number of simulations, the step length, and the acceptable probability threshold. For the extraction process of Codonopsis Radix, 10 000 simulations and a calculation step length of 0.02 lead to a satisfactory design space. In general, the overlapping method is easy to understand and can be realized by several kinds of commercial software without coding programs, but it does not indicate the reliability of the process evaluation indexes when operating within the design space. The probability-based method is more complex in calculation, but it provides the reliability needed to ensure that the process indexes reach the standard within the acceptable probability threshold. In addition, with the probability-based method there is no abrupt change in probability at the edge of the design space. Therefore, the probability-based method is recommended for design space calculation. Copyright © by the Chinese Pharmaceutical Association.
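A minimal sketch of the probability-based calculation described above: for each grid point of two process parameters, normally distributed experimental error is simulated and the probability of meeting the quality standard is estimated; grid points whose probability exceeds the acceptable threshold form the design space. The quadratic quality model, error level and threshold of 0.9 are illustrative assumptions, not the fitted Codonopsis Radix models.

```python
# Sketch only: Monte Carlo, probability-based design space over a 2-parameter grid.
import numpy as np

def meets_standard(x1, x2):
    # hypothetical fitted quality model and acceptance limit
    y = 2.0 + 1.5 * x1 + 0.8 * x2 - 0.6 * x1 * x2
    return y >= 2.8

def probability_map(lo=0.0, hi=1.0, step=0.02, n_sim=10_000, sd_err=0.15, seed=1):
    grid = np.arange(lo, hi, step)
    prob = np.zeros((len(grid), len(grid)))
    rng = np.random.default_rng(seed)
    for i, a in enumerate(grid):
        for j, b in enumerate(grid):
            e1 = rng.normal(0.0, sd_err, n_sim)       # simulated experimental error
            e2 = rng.normal(0.0, sd_err, n_sim)
            prob[i, j] = meets_standard(a + e1, b + e2).mean()
    return grid, prob

grid, prob = probability_map()
design_space = prob >= 0.9                             # acceptable probability threshold
print("fraction of grid inside the design space:", design_space.mean())
```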
EEG feature selection method based on decision tree.
Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun
2015-01-01
This paper aims to solve the automated feature selection problem in brain computer interfaces (BCI). In order to automate the feature selection process, we proposed a novel EEG feature selection method based on a decision tree (DT). During electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on the decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are a series of non-linear signals, a generalized linear classifier, the support vector machine (SVM), was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on the decision tree to BCI Competition II dataset Ia, and the experiment showed encouraging results.
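A compact scikit-learn sketch of the pipeline the abstract outlines: PCA feature extraction, decision-tree-driven selection of the most important components, and SVM classification. The file names, number of components and number of retained features are illustrative assumptions; the BCI Competition II Ia data would need to be obtained and formatted separately.

```python
# Sketch only: PCA features -> decision-tree feature selection -> SVM classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X = np.load("eeg_epochs.npy")        # hypothetical (n_trials, n_samples) EEG matrix
y = np.load("eeg_labels.npy")        # hypothetical class labels

feats = PCA(n_components=20).fit_transform(X)                     # feature extraction
tree = DecisionTreeClassifier(random_state=0).fit(feats, y)
selected = np.argsort(tree.feature_importances_)[::-1][:8]        # top features per the tree

acc = cross_val_score(SVC(kernel="rbf"), feats[:, selected], y, cv=5).mean()
print(f"cross-validated accuracy with selected features: {acc:.3f}")
```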
Quality data collection and management technology of aerospace complex product assembly process
NASA Astrophysics Data System (ADS)
Weng, Gang; Liu, Jianhua; He, Yongxi; Zhuang, Cunbo
2017-04-01
Aiming at solving the problems of difficult management and poor traceability of discrete assembly process quality data, a data collection and management method is proposed that takes the assembly process and the BOM as its core. The method includes a data collection approach based on workflow technology, a data model based on the BOM, and quality traceability of the assembly process. Finally, an assembly process quality data management system is developed, and effective control and management of quality information for the complex product assembly process is realized.
Real-time biscuit tile image segmentation method based on edge detection.
Matić, Tomislav; Aleksi, Ivan; Hocenski, Željko; Kraus, Dieter
2018-05-01
In this paper we propose a novel real-time Biscuit Tile Segmentation (BTS) method for images from a ceramic tile production line. The BTS method is based on signal change detection and contour tracing, with the main goal of separating tile pixels from the background in images captured on the production line. Usually, human operators visually inspect and classify produced ceramic tiles. Computer vision and image processing techniques can automate the visual inspection process if they fulfill real-time requirements. An important step in this process is real-time segmentation of tile pixels. The BTS method is implemented for parallel execution on a GPU device to satisfy the real-time constraints of the tile production line. The BTS method outperforms 2D threshold-based methods, 1D edge detection methods and contour-based methods. The proposed BTS method is in use in the biscuit tile production line. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
How Many Batches Are Needed for Process Validation under the New FDA Guidance?
Yang, Harry
2013-01-01
The newly updated FDA Guidance for Industry on Process Validation: General Principles and Practices ushers in a life cycle approach to process validation. While the guidance no longer considers the use of traditional three-batch validation appropriate, it does not prescribe the number of validation batches for a prospective validation protocol, nor does it provide specific methods to determine it. This potentially could leave manufacturers in a quandary. In this paper, I develop a Bayesian method to address the issue. By combining process knowledge gained from Stage 1 Process Design (PD) with expected outcomes of Stage 2 Process Performance Qualification (PPQ), the number of validation batches for PPQ is determined to provide a high level of assurance that the process will consistently produce future batches meeting quality standards. Several examples based on simulated data are presented to illustrate the use of the Bayesian method in helping manufacturers make risk-based decisions for Stage 2 PPQ, and they highlight the advantages of the method over traditional Frequentist approaches. The discussions in the paper lend support for a life cycle and risk-based approach to process validation recommended in the new FDA guidance.
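A minimal sketch of one Bayesian way to pose the batch-number question, assuming a Beta prior on the probability that a batch meets specifications (summarizing Stage 1 process design knowledge): find the smallest number of successful PPQ batches for which the posterior probability that the pass rate exceeds a target reaches the desired assurance. The prior, target and assurance values are illustrative assumptions and do not reproduce the paper's worked examples.

```python
# Sketch only: Beta-binomial assurance calculation for the number of PPQ batches.
from scipy import stats

def batches_needed(a=8.0, b=2.0, target=0.80, assurance=0.95, n_max=30):
    """Smallest n such that P(pass rate > target | n passing batches) >= assurance."""
    for n in range(1, n_max + 1):
        posterior = stats.beta(a + n, b)          # prior Beta(a, b) updated with n successes
        if posterior.sf(target) >= assurance:
            return n
    return None

print("PPQ batches required:", batches_needed())
```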
Compressive sensing method for recognizing cat-eye effect targets.
Li, Li; Li, Hui; Dang, Ersheng; Liu, Bo
2013-10-01
This paper proposes a cat-eye effect target recognition method with compressive sensing (CS) and presents a recognition method (sample processing before reconstruction based on compressed sensing, or SPCS) for image processing. In this method, linear projections of the original image sequences are applied to remove dynamic background distractions and extract cat-eye effect targets. Furthermore, the corresponding imaging mechanism for acquiring active and passive image sequences is put forward. This method uses fewer images to recognize cat-eye effect targets, reduces data storage, and translates traditional target identification, based on original image processing, into measurement vector processing. The experimental results show that the SPCS method is feasible and superior to the shape-frequency dual criteria method.
Space-based optical image encryption.
Chen, Wen; Chen, Xudong
2010-12-20
In this paper, we propose a new method based on a three-dimensional (3D) space-based strategy for optical image encryption. The two-dimensional (2D) processing of a plaintext in conventional optical encryption methods is extended to 3D space-based processing. Each pixel of the plaintext is considered as one particle in the proposed space-based optical image encryption, and the diffraction of all particles forms an object wave in phase-shifting digital holography. The effectiveness and advantages of the proposed method are demonstrated by numerical results. The proposed method can provide a new optical encryption strategy instead of the conventional 2D processing, and may open up a new research perspective for optical image encryption.
NASA Astrophysics Data System (ADS)
Ţîţu, M. A.; Pop, A. B.; Ţîţu, Ș
2017-06-01
This paper presents a study on the modelling and optimization of certain variables using the Taguchi Method, with a view to modelling and optimizing the process of pressing tappets into anchors, a process conducted in an organization that promotes knowledge-based management. The paper promotes practical concepts of the Taguchi Method and describes the way in which the objective functions are obtained and used during the modelling and optimization of the process of pressing tappets into the anchors.
Business Process-Based Resource Importance Determination
NASA Astrophysics Data System (ADS)
Fenz, Stefan; Ekelhart, Andreas; Neubauer, Thomas
Information security risk management (ISRM) heavily depends on realistic impact values representing the resources’ importance in the overall organizational context. Although a variety of ISRM approaches have been proposed, well-founded methods that provide an answer to the following question are still missing: How can business processes be used to determine resources’ importance in the overall organizational context? We answer this question by measuring the actual importance level of resources based on business processes. Therefore, this paper presents our novel business process-based resource importance determination method which provides ISRM with an efficient and powerful tool for deriving realistic resource importance figures solely from existing business processes. The conducted evaluation has shown that the calculation results of the developed method comply with the results gained in traditional workshop-based assessments.
Berger, Sebastian T; Ahmed, Saima; Muntel, Jan; Cuevas Polo, Nerea; Bachur, Richard; Kentsis, Alex; Steen, Judith; Steen, Hanno
2015-10-01
We describe a 96-well plate compatible membrane-based proteomic sample processing method, which enables the complete processing of 96 samples (or multiples thereof) within a single workday. This method uses a large-pore hydrophobic PVDF membrane that efficiently adsorbs proteins, resulting in fast liquid transfer through the membrane and significantly reduced sample processing times. Low liquid transfer speeds have prevented the useful 96-well plate implementation of FASP as a widely used membrane-based proteomic sample processing method. We validated our approach on whole-cell lysate and urine and cerebrospinal fluid as clinically relevant body fluids. Without compromising peptide and protein identification, our method uses a vacuum manifold and circumvents the need for digest desalting, making our processing method compatible with standard liquid handling robots. In summary, our new method maintains the strengths of FASP and simultaneously overcomes one of the major limitations of FASP without compromising protein identification and quantification. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.
SEIPS-based process modeling in primary care.
Wooldridge, Abigail R; Carayon, Pascale; Hundt, Ann Schoofs; Hoonakker, Peter L T
2017-04-01
Process mapping, often used as part of the human factors and systems engineering approach to improve care delivery and outcomes, should be expanded to represent the complex, interconnected sociotechnical aspects of health care. Here, we propose a new sociotechnical process modeling method to describe and evaluate processes, using the SEIPS model as the conceptual framework. The method produces a process map and supplementary table, which identify work system barriers and facilitators. In this paper, we present a case study applying this method to three primary care processes. We used purposeful sampling to select staff (care managers, providers, nurses, administrators and patient access representatives) from two clinics to observe and interview. We show the proposed method can be used to understand and analyze healthcare processes systematically and identify specific areas of improvement. Future work is needed to assess usability and usefulness of the SEIPS-based process modeling method and further refine it. Copyright © 2016 Elsevier Ltd. All rights reserved.
System and method for integrating hazard-based decision making tools and processes
Hodgin, C Reed [Westminster, CO
2012-03-20
A system and method for inputting, analyzing, and disseminating information necessary for identified decision-makers to respond to emergency situations. This system and method provides consistency and integration among multiple groups, and may be used for both initial consequence-based decisions and follow-on consequence-based decisions. The system and method in a preferred embodiment also provides tools for accessing and manipulating information that are appropriate for each decision-maker, in order to achieve more reasoned and timely consequence-based decisions. The invention includes processes for designing and implementing a system or method for responding to emergency situations.
Simulation Methods for Poisson Processes in Nonstationary Systems.
1978-08-01
... for simulation of nonhomogeneous Poisson processes is stated with a log-linear rate function. The method is based on an identity relating the ... and relatively efficient new method for simulation of one-dimensional and two-dimensional nonhomogeneous Poisson processes is described. The method is ...
An object-oriented description method of EPMM process
NASA Astrophysics Data System (ADS)
Jiang, Zuo; Yang, Fan
2017-06-01
In order to use mature object-oriented tools and languages in software process modelling and to make the software process model more consistent with industrial standards, it is necessary to study object-oriented modelling of the software process. Based on the formal process definition in EPMM, considering that Petri nets are mainly a formal modelling tool, and combining Petri net modelling with the object-oriented modelling idea, this paper provides an implementation method to convert the Petri-net-based EPMM into object models based on an object-oriented description.
Method and apparatus for decoupled thermo-catalytic pollution control
Tabatabaie-Raissi, Ali; Muradov, Nazim Z.; Martin, Eric
2006-07-11
A new method for the design and scale-up of thermocatalytic processes is disclosed. The method is based on optimizing process energetics by decoupling the process energetics from the DRE (destruction and removal efficiency) for target contaminants. The technique is applicable to high temperature thermocatalytic reactor design and scale-up. The method is based on the implementation of polymeric and other low-pressure-drop supports for thermocatalytic media as well as multifunctional catalytic media in conjunction with a novel rotating fluidized particle bed reactor.
Robust adaptive multichannel SAR processing based on covariance matrix reconstruction
NASA Astrophysics Data System (ADS)
Tan, Zhen-ya; He, Feng
2018-04-01
With the combination of digital beamforming (DBF) processing, multichannel synthetic aperture radar (SAR) systems in azimuth show great promise for high-resolution and wide-swath imaging, whereas conventional processing methods do not take the nonuniformity of the scattering coefficient into consideration. This paper proposes a robust adaptive multichannel SAR processing method which first utilizes the Capon spatial spectrum estimator to obtain the spatial spectrum distribution over all ambiguous directions, and then reconstructs the interference-plus-noise covariance matrix based on its definition to acquire the multichannel SAR processing filter. The performance of processing under a nonuniform scattering coefficient is improved by this novel method, and it is robust against array errors. Experiments with real measured data demonstrate the effectiveness and robustness of the proposed method.
Stochastic simulation by image quilting of process-based geological models
NASA Astrophysics Data System (ADS)
Hoffimann, Júlio; Scheidt, Céline; Barfod, Adrian; Caers, Jef
2017-09-01
Process-based modeling offers a way to represent realistic geological heterogeneity in subsurface models. The main limitation lies in conditioning such models to data. Multiple-point geostatistics can use these process-based models as training images and address the data conditioning problem. In this work, we further develop image quilting as a method for 3D stochastic simulation capable of mimicking the realism of process-based geological models with minimal modeling effort (i.e. parameter tuning) and at the same time conditioning them to a variety of data. In particular, we develop a new probabilistic data aggregation method for image quilting that bypasses traditional ad-hoc weighting of auxiliary variables. In addition, we propose a novel criterion for template design in image quilting that generalizes the entropy plot for continuous training images. The criterion is based on the new concept of voxel reuse, a stochastic and quilting-aware function of the training image. We compare our proposed method with other established simulation methods on a set of process-based training images of varying complexity, including a real-case example of stochastic simulation of the buried-valley groundwater system in Denmark.
ERIC Educational Resources Information Center
Paleeri, Sankaranarayanan
2015-01-01
Transaction methods and approaches to value education have to change from lecturing to process-based methods in line with the development of the constructivist approach. Process-based methods provide creative interpretation and active participation on the student's side. Teachers have to organize suitable activities to transact values through process…
Method of production of pure hydrogen near room temperature from aluminum-based hydride materials
Pecharsky, Vitalij K.; Balema, Viktor P.
2004-08-10
The present invention provides a cost-effective method of producing pure hydrogen gas from hydride-based solid materials. The hydride-based solid material is mechanically processed in the presence of a catalyst to obtain pure gaseous hydrogen. Unlike previous methods, hydrogen may be obtained from the solid material without heating, and without the addition of a solvent during processing. The described method of hydrogen production is useful for energy conversion and production technologies that consume pure gaseous hydrogen as a fuel.
Expert system for web based collaborative CAE
NASA Astrophysics Data System (ADS)
Hou, Liang; Lin, Zusheng
2006-11-01
An expert system for web-based collaborative CAE was developed based on knowledge engineering, a relational database and commercial FEA (finite element analysis) software. The architecture of the system was illustrated. In this system, the experts' experiences, theories, typical examples and other related knowledge, which are used in the pre-processing stage of FEA, were categorized into analysis process knowledge and object knowledge. Then, the integrated knowledge model based on the object-oriented method and the rule-based method was described. The integrated reasoning process based on CBR (case-based reasoning) and rule-based reasoning was presented. Finally, the analysis process of this expert system in a web-based CAE application was illustrated, and an analysis example of a machine tool column was given to prove the validity of the system.
Proportional reasoning as a heuristic-based process: time constraint and dual task considerations.
Gillard, Ellen; Van Dooren, Wim; Schaeken, Walter; Verschaffel, Lieven
2009-01-01
The present study interprets the overuse of proportional solution methods from a dual process framework. Dual process theories claim that analytic operations involve time-consuming executive processing, whereas heuristic operations are fast and automatic. In two experiments to test whether proportional reasoning is heuristic-based, the participants solved "proportional" problems, for which proportional solution methods provide correct answers, and "nonproportional" problems known to elicit incorrect answers based on the assumption of proportionality. In Experiment 1, the available solution time was restricted. In Experiment 2, the executive resources were burdened with a secondary task. Both manipulations induced an increase in proportional answers and a decrease in correct answers to nonproportional problems. These results support the hypothesis that the choice for proportional methods is heuristic-based.
NASA Astrophysics Data System (ADS)
Sidelnikov, O. S.; Redyuk, A. A.; Sygletos, S.
2017-12-01
We consider neural network-based schemes of digital signal processing. It is shown that the use of a dynamic neural network-based scheme of signal processing ensures an increase in the optical signal transmission quality in comparison with that provided by other methods for nonlinear distortion compensation.
Modeling of Bulk Evaporation and Condensation
NASA Technical Reports Server (NTRS)
Anghaie, S.; Ding, Z.
1996-01-01
This report describes the modeling and mathematical formulation of the bulk evaporation and condensation involved in liquid-vapor phase change processes. An internal energy formulation, for these phase change processes that occur under the constraint of constant volume, was studied. Compared to the enthalpy formulation, the internal energy formulation has a more concise and compact form. The velocity and time scales of the interface movement were obtained through scaling analysis and verified by performing detailed numerical experiments. The convection effect induced by the density change was analyzed and found to be negligible compared to the conduction effect. Two iterative methods for updating the value of the vapor phase fraction, the energy based (E-based) and temperature based (T-based) methods, were investigated. Numerical experiments revealed that for the evaporation and condensation problems the E-based method is superior to the T-based method in terms of computational efficiency. The internal energy formulation and the E-based method were used to compute the bulk evaporation and condensation processes under different conditions. The evolution of the phase change processes was investigated. This work provided a basis for the modeling of thermal performance of multi-phase nuclear fuel elements under variable gravity conditions, in which the buoyancy convection due to gravity effects and internal heating are involved.
A simple method for processing data with least square method
NASA Astrophysics Data System (ADS)
Wang, Chunyan; Qi, Liqun; Chen, Yongxiang; Pang, Guangning
2017-08-01
The least squares method is widely used in data processing and error estimation. It has become an essential technique for parameter estimation, data processing, regression analysis and experimental data fitting, and a criterion tool for statistical inference. In measurement data analysis, processing usually follows the least squares principle, i.e., matrices are used to solve for the final estimate and to improve its accuracy. In this paper, a new method for solving the least squares problem is presented which is based on algebraic computation and is relatively straightforward and easy to understand. The practicability of this method is demonstrated by a concrete example.
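A small numerical illustration in the spirit of the abstract, assuming a straight-line fit to made-up data: the familiar matrix (normal-equation) solution and an equivalent direct algebraic computation give the same estimates.

```python
# Sketch only: least squares for y = slope * x + intercept, solved two ways.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.3, 5.9, 8.2, 9.8])

# Matrix form: solve the normal equations (A^T A) beta = A^T y
A = np.column_stack([x, np.ones_like(x)])
beta = np.linalg.solve(A.T @ A, A.T @ y)

# Equivalent direct algebraic computation
n = len(x)
slope = (n * (x * y).sum() - x.sum() * y.sum()) / (n * (x * x).sum() - x.sum() ** 2)
intercept = y.mean() - slope * x.mean()

print("matrix solution:", beta, " algebraic solution:", (slope, intercept))
```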
Design Tool Using a New Optimization Method Based on a Stochastic Process
NASA Astrophysics Data System (ADS)
Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio
Conventional optimization methods are based on a deterministic approach since their purpose is to find out an exact solution. However, such methods have initial condition dependence and the risk of falling into local solution. In this paper, we propose a new optimization method based on the concept of path integrals used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require techniques based on experiences. We applied the new optimization method to a hang glider design. In this problem, both the hang glider design and its flight trajectory were optimized. The numerical calculation results prove that performance of the method is sufficient for practical use.
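A toy sketch of the stochastic-average idea described above: candidate solutions are sampled from a stochastic process and the next estimate is an exponentially weighted (Boltzmann-like) expected value rather than a single best point, so no particular initial guess is privileged. The test function, temperature and sampling schedule are illustrative assumptions, not the hang glider design problem from the paper.

```python
# Sketch only: optimization by stochastic averaging with path-integral-style weights.
import numpy as np

def cost(x):
    # toy multimodal objective with minimum near (1, 0)
    return (x[..., 0] - 1.0) ** 2 + 2.0 * np.sin(3.0 * x[..., 1]) ** 2 + x[..., 1] ** 2

def stochastic_average_opt(dim=2, n_iter=50, n_samples=500, sigma=1.0, lam=0.1, seed=0):
    rng = np.random.default_rng(seed)
    mean = np.zeros(dim)                               # no special initial condition required
    for _ in range(n_iter):
        samples = mean + sigma * rng.standard_normal((n_samples, dim))
        c = cost(samples)
        w = np.exp(-(c - c.min()) / lam)               # Boltzmann-like weights
        mean = (w[:, None] * samples).sum(axis=0) / w.sum()   # stochastic average
        sigma *= 0.95                                  # gradually sharpen the process
    return mean

print("estimated optimum:", stochastic_average_opt())
```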
Learning-based controller for biotechnology processing, and method of using
Johnson, John A.; Stoner, Daphne L.; Larsen, Eric D.; Miller, Karen S.; Tolle, Charles R.
2004-09-14
The present invention relates to process control where some of the controllable parameters are difficult or impossible to characterize. The present invention relates to, but is not limited to, process control of such systems in biotechnology. Additionally, the present invention relates to process control in biotechnology-based minerals processing. In the inventive method, an application of the present invention manipulates a minerals bioprocess to find local extrema (maxima or minima) for selected output variables/process goals by using a learning-based controller for bioprocess oxidation of minerals during hydrometallurgical processing. The learning-based controller operates with or without human supervision and works to find process optima without previously defined optima, due to the non-characterized nature of the process being manipulated.
Variance reduction for Fokker–Planck based particle Monte Carlo schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorji, M. Hossein, E-mail: gorjih@ifd.mavt.ethz.ch; Andric, Nemanja; Jenny, Patrick
Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, the variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational based schemes were derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea here is to synthesize an additional stochastic process with a known solution, which is simultaneously solved together with the main one. By correlating the two processes, the statistical errors can be dramatically reduced, especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
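The correlated-process idea can be illustrated with a generic control-variate sketch: an auxiliary stochastic process with a known mean is driven by the same random increments as the main process, and subtracting the appropriately weighted auxiliary estimator cancels much of the statistical noise. The Ornstein-Uhlenbeck processes below are illustrative stand-ins, not the Fokker-Planck particle dynamics of the paper.

```python
# Sketch only: variance reduction with a correlated auxiliary process (control variate).
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps, dt = 10_000, 200, 0.01
theta_main, theta_aux, sigma = 1.3, 1.0, 0.5

x = np.zeros(n_particles)            # main process; quantity of interest is its mean
y = np.zeros(n_particles)            # auxiliary process with known mean (zero)
for _ in range(n_steps):
    dw = np.sqrt(dt) * rng.standard_normal(n_particles)   # shared noise increments
    x += -theta_main * x * dt + sigma * dw
    y += -theta_aux * y * dt + sigma * dw

plain = x.mean()                                           # direct estimator
beta = np.cov(x, y)[0, 1] / np.var(y)                      # control-variate weight
controlled = (x - beta * (y - 0.0)).mean()                 # variance-reduced estimator
print("plain:", plain, " controlled:", controlled)
```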
NASA Astrophysics Data System (ADS)
Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio
Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, these methods depend on initial conditions and risk falling into local solutions. In this paper, we propose a new optimization method based on the concept of the path integral method used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require techniques based on experience. We applied the new optimization method to the design of a hang glider. In this problem, not only the hang glider design but also its flight trajectory were optimized. The numerical calculation results showed that the method has sufficient performance.
Global Sensitivity Analysis for Process Identification under Model Uncertainty
NASA Astrophysics Data System (ADS)
Ye, M.; Dai, H.; Walker, A. P.; Shi, L.; Yang, J.
2015-12-01
The environmental system consists of various physical, chemical, and biological processes, and environmental models are always built to simulate these processes and their interactions. For model building, improvement, and validation, it is necessary to identify important processes so that limited resources can be used to better characterize the processes. While global sensitivity analysis has been widely used to identify important processes, the process identification is always based on deterministic process conceptualization that uses a single model for representing a process. However, environmental systems are complex, and it often happens that a single process may be simulated by multiple alternative models. Ignoring the model uncertainty in process identification may lead to biased identification in that identified important processes may not be so in the real world. This study addresses this problem by developing a new method of global sensitivity analysis for process identification. The new method is based on the concept of Sobol sensitivity analysis and model averaging. Similar to the Sobol sensitivity analysis to identify important parameters, our new method evaluates variance change when a process is fixed at its different conceptualizations. The variance considers both parametric and model uncertainty using the method of model averaging. The method is demonstrated using a synthetic study of groundwater modeling that considers a recharge process and a parameterization process. Each process has two alternative models. Important processes of groundwater flow and transport are evaluated using our new method. The method is mathematically general, and can be applied to a wide range of environmental problems.
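A toy numerical sketch of the idea, under assumed models and weights: each process has two alternative conceptualizations with prior model weights, the output is evaluated for every combination over a shared parameter sample, and the variance of the conditional means (a Sobol-style first-order index over the discrete model choice) measures how important the process conceptualization is.

```python
# Sketch only: process-importance index over alternative model conceptualizations.
import numpy as np

rng = np.random.default_rng(0)
p = rng.normal(size=50_000)                                # shared parametric uncertainty

recharge = [lambda p: 1.0 + 0.5 * p, lambda p: 1.4 + 0.2 * p]    # two conceptualizations
geology = [lambda p: 0.8 * p, lambda p: 0.3 * p ** 2]            # two conceptualizations
w_r, w_g = np.array([0.5, 0.5]), np.array([0.6, 0.4])            # prior model weights

def head(r, g):
    # toy groundwater response to recharge and geology terms
    return 2.0 * r + g

out = np.array([[head(R(p), G(p)) for G in geology] for R in recharge])   # shape (2, 2, n)
w = np.outer(w_r, w_g)[..., None]                                         # joint model weights

mu_total = (w * out).sum(axis=(0, 1)).mean()
var_total = ((w * (out - mu_total) ** 2).sum(axis=(0, 1))).mean()

mu_r = (w_g[None, :, None] * out).sum(axis=1).mean(axis=1)   # E[output | recharge model]
s_recharge = (w_r * (mu_r - mu_total) ** 2).sum() / var_total
print("first-order importance of the recharge conceptualization:", round(s_recharge, 3))
```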
Radiology information system: a workflow-based approach.
Zhang, Jinyan; Lu, Xudong; Nie, Hongchao; Huang, Zhengxing; van der Aalst, W M P
2009-09-01
Introducing workflow management technology in healthcare seems promising for dealing with the problem that current healthcare information systems cannot provide sufficient support for process management, although several challenges still exist. The purpose of this paper is to study a method of developing a workflow-based information system in a radiology department as a use case. First, a workflow model of a typical radiology process was established. Second, based on the model, the system could be designed and implemented as a group of loosely coupled components. Each component corresponded to one task in the process and could be assembled by the workflow management system. The legacy systems could be taken as special components, which also corresponded to tasks and were integrated by transferring non-workflow-aware interfaces to standard ones. Finally, a workflow dashboard was designed and implemented to provide an integral view of radiology processes. The workflow-based Radiology Information System was deployed in the radiology department of Zhejiang Chinese Medicine Hospital in China. The results showed that it could be adjusted flexibly in response to the needs of changing processes, and that it enhanced process management in the department. It can also provide a more workflow-aware integration method compared with other methods such as IHE-based ones. The workflow-based approach is a new method of developing radiology information systems with more flexibility, more process management functionality and more workflow-aware integration. The work of this paper is an initial endeavor at introducing workflow management technology in healthcare.
Kim, Huiyong; Hwang, Sung June; Lee, Kwang Soon
2015-02-03
Among various CO2 capture processes, the aqueous amine-based absorption process is considered the most promising for near-term deployment. However, the performance evaluation of newly developed solvents still requires complex and time-consuming procedures, such as pilot plant tests or the development of a rigorous simulator. The absence of accurate and simple calculation methods for the energy performance at an early stage of process development has lengthened, and increased the expense of, the development of economically feasible CO2 capture processes. In this paper, a novel but simple method to reliably calculate the regeneration energy in a standard amine-based carbon capture process is proposed. Careful examination of stripper behavior and exploitation of the energy balance equations around the stripper allow the regeneration energy to be calculated using only vapor-liquid equilibrium and caloric data. The reliability of the proposed method was confirmed by comparison with rigorous simulations for two well-known solvents, monoethanolamine (MEA) and piperazine (PZ). The proposed method can predict the regeneration energy at various operating conditions with greater simplicity, greater speed, and higher accuracy than those proposed in previous studies. This enables faster and more precise screening of various solvents and faster optimization of process variables, and can eventually accelerate the development of economically deployable CO2 capture processes.
Knowledge information management toolkit and method
Hempstead, Antoinette R.; Brown, Kenneth L.
2006-08-15
A system is provided for managing user entry and/or modification of knowledge information into a knowledge base file having an integrator support component and a data source access support component. The system includes processing circuitry, memory, a user interface, and a knowledge base toolkit. The memory communicates with the processing circuitry and is configured to store at least one knowledge base. The user interface communicates with the processing circuitry and is configured for user entry and/or modification of knowledge pieces within a knowledge base. The knowledge base toolkit is configured for converting knowledge in at least one knowledge base from a first knowledge base form into a second knowledge base form. A method is also provided.
Aligning observed and modelled behaviour based on workflow decomposition
NASA Astrophysics Data System (ADS)
Wang, Lu; Du, YuYue; Liu, Wei
2017-09-01
When business processes are mostly supported by information systems, the availability of event logs generated from these systems, as well as the requirement for appropriate process models, is increasing. Business processes can be discovered, monitored and enhanced by extracting process-related information. However, some events cannot be correctly identified because of the explosion in the amount of event logs. Therefore, a new process mining technique based on a workflow decomposition method is proposed in this paper. Petri nets (PNs) are used to describe business processes, and conformance checking of event logs and process models is then investigated. A decomposition approach is proposed to divide large process models and event logs into several separate parts that can be analysed independently, while an alignment approach based on a state equation method in PN theory enhances the performance of conformance checking. Both approaches are implemented in the process mining framework ProM. The correctness and effectiveness of the proposed methods are illustrated through experiments.
Lunar-base construction equipment and methods evaluation
NASA Technical Reports Server (NTRS)
Boles, Walter W.; Ashley, David B.; Tucker, Richard L.
1993-01-01
A process for evaluating lunar-base construction equipment and methods concepts is presented. The process is driven by the need for more quantitative, systematic, and logical methods for assessing further research and development requirements in an area where uncertainties are high, dependence upon terrestrial heuristics is questionable, and quantitative methods are seldom applied. Decision theory concepts are used in determining the value of accurate information and the process is structured as a construction-equipment-and-methods selection methodology. Total construction-related, earth-launch mass is the measure of merit chosen for mathematical modeling purposes. The work is based upon the scope of the lunar base as described in the National Aeronautics and Space Administration's Office of Exploration's 'Exploration Studies Technical Report, FY 1989 Status'. Nine sets of conceptually designed construction equipment are selected as alternative concepts. It is concluded that the evaluation process is well suited for assisting in the establishment of research agendas in an approach that is first broad, with a low level of detail, followed by more-detailed investigations into areas that are identified as critical due to high degrees of uncertainty and sensitivity.
Method and system for environmentally adaptive fault tolerant computing
NASA Technical Reports Server (NTRS)
Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)
2010-01-01
A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.
Comparison of pre-processing methods for multiplex bead-based immunoassays.
Rausch, Tanja K; Schillert, Arne; Ziegler, Andreas; Lüking, Angelika; Zucht, Hans-Dieter; Schulz-Knappe, Peter
2016-08-11
High throughput protein expression studies can be performed using bead-based protein immunoassays, such as the Luminex® xMAP® technology. Technical variability is inherent to these experiments and may lead to systematic bias and reduced power. To reduce technical variability, data pre-processing is performed. However, no recommendations exist for the pre-processing of Luminex® xMAP® data. We compared 37 different data pre-processing combinations of transformation and normalization methods in 42 samples on 384 analytes obtained from a multiplex immunoassay based on the Luminex® xMAP® technology. We evaluated the performance of each pre-processing approach with 6 different performance criteria. Three performance criteria were plots. All plots were evaluated by 15 independent and blinded readers. Four different combinations of transformation and normalization methods performed well as pre-processing procedure for this bead-based protein immunoassay. The following combinations of transformation and normalization were suitable for pre-processing Luminex® xMAP® data in this study: weighted Box-Cox followed by quantile or robust spline normalization (rsn), asinh transformation followed by loess normalization and Box-Cox followed by rsn.
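A minimal sketch of one of the combinations found suitable, assuming a samples-by-analytes matrix of fluorescence intensities: a Box-Cox transformation per analyte followed by quantile normalization across samples. The random input matrix stands in for real Luminex MFI data, and the weighting of the paper's weighted Box-Cox variant is not reproduced.

```python
# Sketch only: Box-Cox transformation per analyte, then quantile normalization across samples.
import numpy as np
from scipy import stats

mfi = np.random.default_rng(0).lognormal(mean=5.0, sigma=1.0, size=(42, 384))  # samples x analytes

# Box-Cox per analyte (requires strictly positive intensities)
transformed = np.column_stack([stats.boxcox(mfi[:, j])[0] for j in range(mfi.shape[1])])

# Quantile normalization: force every sample to share the same value distribution
ranks = np.argsort(np.argsort(transformed, axis=1), axis=1)
reference = np.sort(transformed, axis=1).mean(axis=0)       # mean quantile profile
normalized = reference[ranks]
```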
NASA Astrophysics Data System (ADS)
Zäh, Ralf-Kilian; Mosbach, Benedikt; Hollwich, Jan; Faupel, Benedikt
2017-02-01
To ensure the competitiveness of manufacturing companies it is indispensable to optimize their manufacturing processes. Slight variations of process parameters and machine settings should have only marginal effects on the product quality. Therefore, the largest possible processing window is required. Such parameters are, for example, the movement of the laser beam across the component in laser keyhole welding. That's why it is necessary to keep the formation of welding seams within specified limits. The quality of laser welding processes is therefore ensured by using post-process methods, like ultrasonic inspection, or special in-process methods. These in-process systems only achieve a simple evaluation which shows whether the weld seam is acceptable or not. Furthermore, in-process systems use no feedback for changing the control variables such as the speed of the laser or adjustment of the laser power. In this paper the research group presents current results in the research field of online monitoring, online controlling and model predictive control in laser welding processes to increase the product quality. To record the characteristics of the welding process, tested online methods are used during the process. Based on the measurement data, a state space model is ascertained, which includes all the control variables of the system. Using simulation tools, the model predictive controller (MPC) is designed for the model and integrated into an NI Real-Time system.
Optimization evaluation of cutting technology based on mechanical parts
NASA Astrophysics Data System (ADS)
Wang, Yu
2018-04-01
The relationship between the mechanical manufacturing process and carbon emissions is studied on the basis of the mechanical manufacturing process flow. A carbon emission calculation formula suitable for the mechanical manufacturing process is derived. Based on this, a green evaluation method for the cold machining process of mechanical parts is proposed. Application verification and data analysis of the proposed evaluation method are carried out through an example. The results show that there is a strong relationship between the mechanical manufacturing process data and carbon emissions.
NASA Astrophysics Data System (ADS)
Karlitasari, L.; Suhartini, D.; Benny
2017-01-01
The process of determining employee remuneration at PT Sepatu Mas Idaman currently still uses a Microsoft Excel-based spreadsheet in which the criterion values must be calculated for every employee. This can introduce doubt during the assessment process and causes the process to take much longer. The determination of employee remuneration is conducted by the assessment team based on several predetermined criteria. The criteria used in the assessment process are the ability to work, human relations, job responsibility, discipline, creativity, work, achievement of targets, and absence. To make the determination of employee remuneration more efficient and effective, the Simple Additive Weighting (SAW) method is used. The SAW method can help in decision making for a given case: the alternative whose calculation generates the greatest value is chosen as the best alternative. Besides SAW, another method used was the CPI method, one of the decision-making calculation methods based on a performance index. The SAW method was 89-93% faster than the CPI method. It is therefore expected that this application can serve as evaluation material for training and development needs so that employee performance becomes more optimal.
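A minimal sketch of the SAW calculation referred to above: benefit criteria are normalized against their column maximum, cost criteria (here, absence) against their column minimum, and the weighted sums rank the employees. The scores, weights and criterion types are made-up illustrations, not the company's data.

```python
# Sketch only: Simple Additive Weighting (SAW) ranking of employees.
import numpy as np

criteria = ["ability", "human_relations", "responsibility", "discipline",
            "creativity", "work", "target", "absence"]
weights = np.array([0.20, 0.10, 0.15, 0.15, 0.10, 0.10, 0.15, 0.05])
is_cost = np.array([False] * 7 + [True])          # absence counts against the employee

scores = np.array([[80, 75, 90, 85, 70, 80, 88, 2],    # employee A
                   [85, 80, 70, 90, 75, 85, 80, 1],    # employee B
                   [78, 85, 88, 80, 90, 78, 85, 4]],   # employee C
                  dtype=float)

# Benefit criteria: value / column max; cost criteria: column min / value
norm = np.where(is_cost, scores.min(axis=0) / scores, scores / scores.max(axis=0))
preference = norm @ weights
print("preference values:", preference.round(3), "-> best employee index:", int(np.argmax(preference)))
```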
A Focusing Method in the Calibration Process of Image Sensors Based on IOFBs
Fernández, Pedro R.; Lázaro, José L.; Gardel, Alfredo; Cano, Ángel E.; Bravo, Ignacio
2010-01-01
A focusing procedure in the calibration process of image sensors based on Incoherent Optical Fiber Bundles (IOFBs) is described using the information extracted from the fibers. This procedure differs from any other currently known focusing method due to the non-spatial in-out correspondence between fibers, which produces a natural codification of the image to transmit. Focus measurement is essential prior to carrying out calibration in order to guarantee accurate processing and decoding. Four algorithms have been developed to estimate the focus measure: two methods based on the mean grey level, and the other two based on variance. In this paper, a few simple focus measures are defined and compared. Some experimental results referring to the focus measure and the accuracy of the developed methods are discussed in order to demonstrate its effectiveness. PMID:22315526
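A small sketch of the two families of focus measures compared in the paper, one built on the mean grey level of the brightest fiber spots and one on the grey-level variance; a well-focused bundle concentrates light into the fiber cores, which raises both statistics. The image file name and percentile threshold are illustrative assumptions.

```python
# Sketch only: mean-grey-level and variance-based focus measures for an IOFB frame.
import numpy as np
import imageio.v3 as iio

frame = iio.imread("iofb_frame.png").astype(float)      # hypothetical IOFB image

def focus_mean(img, percentile=90):
    """Mean grey level of the brightest pixels (approximate fiber cores)."""
    thr = np.percentile(img, percentile)
    return img[img >= thr].mean()

def focus_variance(img):
    """Normalized grey-level variance of the whole frame."""
    return img.var() / max(img.mean(), 1e-9)

print("mean-based measure:", focus_mean(frame), " variance-based measure:", focus_variance(frame))
```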
Optoelectronic imaging of speckle using image processing method
NASA Astrophysics Data System (ADS)
Wang, Jinjiang; Wang, Pengfei
2018-01-01
A detailed image processing procedure for laser speckle interferometry is proposed as an example for a postgraduate course. Several image processing methods were used together to deal with the optoelectronic imaging system: partial differential equations (PDEs) are used to reduce the effect of noise, the thresholding segmentation is also based on a heat equation with PDEs, the center line is extracted based on the image skeleton with branches removed automatically, the phase level is calculated by a spline interpolation method, and the fringe phase can be unwrapped. Finally, the image processing method was used to automatically measure bubbles in rubber under negative pressure, which could be used in tire inspection.
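A condensed sketch of the fringe-processing chain described above, on a synthetic fringe pattern: heat-equation (isotropic diffusion) smoothing to suppress speckle-like noise, Otsu thresholding (substituted here for the PDE-based segmentation) and morphological skeletonization to extract the fringe center lines; spline interpolation of the phase and unwrapping are omitted. The pattern, iteration count and the Otsu substitution are assumptions for illustration.

```python
# Sketch only: PDE smoothing -> thresholding -> skeleton of a synthetic fringe pattern.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

def diffuse(img, n_iter=30, dt=0.2):
    """Explicit heat-equation smoothing, a simple PDE-based denoiser."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u += dt * lap
    return u

y, x = np.mgrid[0:256, 0:256]
fringes = 0.5 + 0.5 * np.cos(0.15 * x)                              # synthetic fringe pattern
noisy = fringes + 0.3 * np.random.default_rng(0).normal(size=fringes.shape)

smooth = diffuse(noisy)
binary = smooth > threshold_otsu(smooth)                            # fringe segmentation
center_lines = skeletonize(binary)                                  # fringe skeleton
```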
Alum, Absar; Rock, Channah; Abbaszadegan, Morteza
2014-01-01
For land application, biosolids are classified as Class A or Class B based on the levels of bacterial, viral, and helminth pathogens in residual biosolids. The current EPA methods for the detection of these groups of pathogens in biosolids include discrete steps. Therefore, a separate sample is processed independently to quantify the number of each group of the pathogens in biosolids. The aim of the study was to develop a unified method for simultaneous processing of a single biosolids sample to recover bacterial, viral, and helminth pathogens. In the first stage of developing the simultaneous method, nine eluents were compared for their efficiency to recover viruses from a 100 g spiked biosolids sample. In the second stage, the three top performing eluents were thoroughly evaluated for the recovery of bacteria, viruses, and helminths. For all three groups of pathogens, the glycine-based eluent provided higher recovery than the beef extract-based eluent. Additional experiments were performed to optimize the performance of the glycine-based eluent with respect to various procedural factors, such as the solids-to-eluent ratio, stir time, and centrifugation conditions. Lastly, the new method was directly compared with the EPA methods for the recovery of the three groups of pathogens spiked in duplicate samples of biosolids collected from different sources. For viruses, the new method yielded up to 10% higher recoveries than the EPA method. For bacteria and helminths, recoveries were 74% and 83% by the new method compared to 34% and 68% by the EPA method, respectively. The unified sample processing method significantly reduces the time required for processing biosolids samples for different groups of pathogens; it is less impacted by the intrinsic variability of samples, while providing higher yields (P = 0.05) and greater consistency than the current EPA methods.
On the upscaling of process-based models in deltaic applications
NASA Astrophysics Data System (ADS)
Li, L.; Storms, J. E. A.; Walstra, D. J. R.
2018-03-01
Process-based numerical models are increasingly used to study the evolution of marine and terrestrial depositional environments. Whilst a detailed description of small-scale processes provides an accurate representation of reality, application on geological timescales is restrained by the associated increase in computational time. In order to reduce the computational time, a number of acceleration methods are combined and evaluated for a schematic supply-driven delta (static base level) and an accommodation-driven delta (variable base level). The performance of the combined acceleration methods is evaluated by comparing the morphological indicators such as distributary channel networking and delta volumes derived from the model predictions for various levels of acceleration. The results of the accelerated models are compared to the outcomes from a series of simulations to capture autogenic variability. Autogenic variability is quantified by re-running identical models on an initial bathymetry with 1 cm added noise. The overall results show that the variability of the accelerated models fall within the autogenic variability range, suggesting that the application of acceleration methods does not significantly affect the simulated delta evolution. The Time-scale compression method (the acceleration method introduced in this paper) results in an increased computational efficiency of 75% without adversely affecting the simulated delta evolution compared to a base case. The combination of the Time-scale compression method with the existing acceleration methods has the potential to extend the application range of process-based models towards geologic timescales.
Impact Assessment and Environmental Evaluation of Various Ammonia Production Processes
NASA Astrophysics Data System (ADS)
Bicer, Yusuf; Dincer, Ibrahim; Vezina, Greg; Raso, Frank
2017-05-01
In the current study, conventional resource-based ammonia generation routes are comparatively studied through a comprehensive life cycle assessment. The selected ammonia generation options range from the mostly used steam methane reforming to partial oxidation of heavy oil. The chosen ammonia synthesis process is the most common commercially available Haber-Bosch process. The essential energy input for the methods is obtained from various conventional resources such as coal, nuclear, natural gas and heavy oil. Using the life cycle assessment methodology, the environmental impacts of the selected methods are identified and quantified from cradle to gate. The life cycle assessment outcomes of the conventional resource-based ammonia production routes show that the nuclear electrolysis-based ammonia generation method yields the lowest global warming and climate change impacts, while the coal-based electrolysis options cause greater environmental impacts. The calculated greenhouse gas emission from nuclear-based electrolysis is 0.48 kg CO2 equivalent per kg of ammonia, while it is 13.6 kg CO2 per kg of ammonia for the coal-based electrolysis method.
NASA Astrophysics Data System (ADS)
Ganiev, R. F.; Reviznikov, D. L.; Rogoza, A. N.; Slastushenskiy, Yu. V.; Ukrainskiy, L. E.
2017-03-01
A comprehensive approach to the investigation of nonlinear wave processes in the human cardiovascular system is described, based on a combination of high-precision methods for measuring the pulse wave, mathematical methods for processing the empirical data, and direct numerical modeling of hemodynamic processes in an arterial tree.
Kramberger, Petra; Urbas, Lidija; Štrancar, Aleš
2015-01-01
Downstream processing of nanoplexes (viruses, virus-like particles, bacteriophages) is characterized by complexity of the starting material, number of purification methods to choose from, regulations that are setting the frame for the final product and analytical methods for upstream and downstream monitoring. This review gives an overview on the nanoplex downstream challenges and chromatography based analytical methods for efficient monitoring of the nanoplex production.
Aristov, Alexander; Nosova, Ekaterina
2017-04-01
The paper focuses on research aimed at creating and testing a new approach to evaluating the processes of aggregation and sedimentation of red blood cells for use in clinical laboratory diagnostics. The proposed method is based on photometric analysis of a blood sample formed as a sessile drop. The results of clinical evaluation of this method are given in the paper. Analysis of the processes occurring in the sessile-drop sample during blood cell sedimentation is described. The results of experimental studies evaluating the effect of the droplet sample's focusing properties on light radiation transmittance are presented. It is shown that this method significantly reduces the sample volume and provides sufficiently high sensitivity to the studied processes.
Fault Diagnosis for Rotating Machinery: A Method based on Image Processing
Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie
2016-01-01
Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery has gained wide attention for its significance in preventing catastrophic accidents and guaranteeing sufficient maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-discipline method based on image processing for fault diagnosis of rotating machinery. Different from traditional analysis methods in one-dimensional space, this study employs computing methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the following image-based feature extraction. Then, an emerging image-processing approach for feature extraction, speeded-up robust features, is employed to automatically extract fault features from the transformed bi-spectrum contour map and form a high-dimensional feature vector. To reduce the dimensionality of the feature vector, thus highlighting the main fault features and reducing subsequent computing resources, t-Distributed Stochastic Neighbor Embedding is adopted. Finally, a probabilistic neural network is introduced for fault identification. Two typical types of rotating machinery, an axial piston hydraulic pump and a self-priming centrifugal pump, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed image-processing-based method achieves a high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery. PMID:27711246
Zhang, Lei; Yue, Hong-Shui; Ju, Ai-Chun; Ye, Zheng-Liang
2016-10-01
Currently, near infrared spectroscopy (NIRS) has been considered an efficient tool for achieving process analytical technology (PAT) in the manufacture of traditional Chinese medicine (TCM) products. In this article, the NIRS-based process analytical system for the production of salvianolic acid for injection is introduced. The design of the process analytical system is described in detail, including the selection of monitored processes and testing mode, and potential risks that should be avoided. Moreover, the development of related technologies is also presented, including the establishment of monitoring methods for the elution of the polyamide resin and macroporous resin chromatography processes, as well as the rapid analysis method for finished products. Based on the authors' experience of research and work, several issues in the application of NIRS to process monitoring and control in TCM production are then raised, and some potential solutions are also discussed. The issues include building the technical team for the process analytical system, the design of the process analytical system in the manufacture of TCM products, standardization of the NIRS-based analytical methods, and improving the management of the process analytical system. Finally, the prospects for the application of NIRS in the TCM industry are put forward. Copyright© by the Chinese Pharmaceutical Association.
Indigenous lunar construction materials
NASA Technical Reports Server (NTRS)
Rogers, Wayne; Sture, Stein
1991-01-01
The objectives are the following: to investigate the feasibility of using local lunar resources for construction of a lunar base structure; to develop a material processing method and integrate the method with the design and construction of a pressurized habitation structure; to estimate specifications of the support equipment necessary for material processing and construction; and to provide parameters for systems models of lunar base construction, supply, and operations. The topics are presented in viewgraph form and include the following: comparison of various lunar structures; guidelines for material processing methods; cast lunar regolith; examples of cast basalt components; cast regolith process; processing equipment; mechanical properties of cast basalt; material properties and structural design; and future work.
Method of plasma etching Ga-based compound semiconductors
Qiu, Weibin; Goddard, Lynford L.
2012-12-25
A method of plasma etching Ga-based compound semiconductors includes providing a process chamber and a source electrode adjacent to the process chamber. The process chamber contains a sample comprising a Ga-based compound semiconductor. The sample is in contact with a platen which is electrically connected to a first power supply, and the source electrode is electrically connected to a second power supply. The method includes flowing SiCl4 gas into the chamber, flowing Ar gas into the chamber, and flowing H2 gas into the chamber. RF power is supplied independently to the source electrode and the platen. A plasma is generated based on the gases in the process chamber, and regions of a surface of the sample adjacent to one or more masked portions of the surface are etched to create a substantially smooth etched surface including features having substantially vertical walls beneath the masked portions.
Yin, Shen; Gao, Huijun; Qiu, Jianbin; Kaynak, Okyay
2017-11-01
Data-driven fault detection plays an important role in industrial systems due to its applicability in case of unknown physical models. In fault detection, disturbances must be taken into account as an inherent characteristic of processes. Nevertheless, fault detection for nonlinear processes with deterministic disturbances still receives little attention, especially in the data-driven field. To solve this problem, a just-in-time learning-based data-driven (JITL-DD) fault detection method for nonlinear processes with deterministic disturbances is proposed in this paper. JITL-DD employs a JITL scheme for process description with local model structures to cope with process dynamics and nonlinearity. The proposed method provides a data-driven fault detection solution for nonlinear processes with deterministic disturbances, and offers inherent online adaptation and high fault detection accuracy. Two nonlinear systems, i.e., a numerical example and a sewage treatment process benchmark, are employed to show the effectiveness of the proposed method.
Jericó, Marli de Carvalho; Castilho, Valéria
2010-09-01
This exploratory case study was performed aiming at implementing the Activity-Based Costing (ABC) method in the sterile processing department (SPD) of a major teaching hospital. Data collection was performed throughout 2006. Documentary research techniques and non-participant closed observation were used. The ABC implementation allowed for learning the activity-based cost of both the chemical and physical disinfection cycle/load, US$ 9.95 and US$ 12.63, respectively, as well as the cost of sterilization by steam under pressure (autoclave), US$ 31.37, and of low temperature steam and gaseous formaldehyde sterilization (LTSF), US$ 255.28. The information provided by the ABC method has optimized the overall understanding of the cost driver process and provided the foundation for assessing performance and improvement in the SPD processes.
Fault detection of Tennessee Eastman process based on topological features and SVM
NASA Astrophysics Data System (ADS)
Zhao, Huiyang; Hu, Yanzhu; Ai, Xinbo; Hu, Yu; Meng, Zhen
2018-03-01
Fault detection in industrial processes is a popular research topic. Although the distributed control system (DCS) has been introduced to monitor the state of industrial processes, it still cannot satisfy all the requirements for fault detection in all industrial systems. In this paper, we propose a novel method based on topological features and a support vector machine (SVM) for fault detection in industrial processes. The proposed method takes the global information of the measured variables into account through a complex network model and predicts whether a system has generated faults using the SVM. The proposed method can be divided into four steps, i.e., network construction, network analysis, model training and model testing. Finally, we apply the model to the Tennessee Eastman process (TEP). The results show that this method works well and can be a useful supplement for fault detection in industrial processes.
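The abstract outlines a four-step pipeline (network construction, network analysis, model training, model testing) without implementation details. The sketch below is an illustrative reconstruction under assumed choices, not the authors' code: it builds a correlation-threshold network over the measured variables with networkx, extracts a few topological features, and trains a scikit-learn SVM; the correlation threshold, feature set, and SVM parameters are placeholders.

```python
# Illustrative sketch (not the authors' exact pipeline): build a correlation
# network over measured process variables, extract simple topological features,
# and train an SVM to flag faulty operating windows.
import numpy as np
import networkx as nx
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def topological_features(window, threshold=0.7):
    """Map a (samples x variables) window to network-based features."""
    corr = np.corrcoef(window, rowvar=False)          # variable-to-variable correlation
    adj = (np.abs(corr) > threshold).astype(int)      # threshold into an adjacency matrix
    np.fill_diagonal(adj, 0)
    g = nx.from_numpy_array(adj)
    degrees = [d for _, d in g.degree()]
    return np.array([
        np.mean(degrees),                              # average degree
        nx.density(g),                                 # edge density
        nx.average_clustering(g),                      # clustering coefficient
        nx.number_connected_components(g),             # fragmentation of the network
    ])

def train_detector(windows, labels):
    # windows: list of (samples x variables) arrays; labels: 0 = normal, 1 = faulty
    X = np.vstack([topological_features(w) for w in windows])
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    model.fit(X, labels)
    return model
```

A detector trained this way would then be applied to sliding windows of TEP measurements; the features listed here stand in for whatever network statistics the authors actually used.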
Multi-criteria evaluation methods in the production scheduling
NASA Astrophysics Data System (ADS)
Kalinowski, K.; Krenczyk, D.; Paprocka, I.; Kempa, W.; Grabowik, C.
2016-08-01
The paper presents a discussion on the practical application of different methods of multi-criteria evaluation in the process of scheduling in manufacturing systems. Among the methods, two main groups are specified: methods based on a distance function (using a metacriterion) and methods that create a Pareto set of possible solutions. The basic criteria used for scheduling are also described. The overall procedure of the evaluation process in production scheduling is presented. It takes into account the actions in the whole scheduling process and the participation of a human decision maker (HDM). The specified HDM decisions are related to creating and editing the set of evaluation criteria, selecting the multi-criteria evaluation method, interacting in the search process, using informal criteria and making final changes to the schedule before implementation. Depending on need, process scheduling may be completely or partially automated: full automation is possible with a metacriterion-based objective function, whereas if a Pareto set is generated, the final decision has to be made by the HDM.
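To make the two evaluation families named above concrete, here is a minimal sketch assuming three "lower is better" criteria (makespan, tardiness, cost) with hypothetical weights: a weighted-sum metacriterion that fully automates the choice, and a Pareto filter that leaves the final pick to the HDM.

```python
# Minimal sketch of the two evaluation families: a weighted metacriterion
# (distance-function style) and Pareto filtering of candidate schedules.
# Criteria values and weights are hypothetical; all criteria are "lower is better".

def metacriterion(schedule_scores, weights):
    """Aggregate criteria (e.g. makespan, tardiness, cost) into one number."""
    return sum(w * s for w, s in zip(weights, schedule_scores))

def pareto_set(candidates):
    """Return schedules not dominated by any other candidate."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

candidates = [(120, 5, 900), (110, 9, 950), (130, 2, 880)]   # (makespan, tardiness, cost)
best_automatic = min(candidates, key=lambda c: metacriterion(c, (0.5, 0.3, 0.2)))
shortlist_for_hdm = pareto_set(candidates)                   # final pick left to the HDM
```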
NASA Astrophysics Data System (ADS)
Kobayashi, Takashi; Komoda, Norihisa
Traditional business process design methods, of which the use case is the most typical, offer no useful framework for designing the activity sequence. Therefore, the design efficiency and quality vary widely with the designer's experience and skill. In this paper, to solve this problem, we propose a model of business events and their state transitions (a basic business event model) based on the language/action perspective, a result from the cognitive science domain. In business process design using this model, we decide the event occurrence conditions so that the events synchronize with each other. We also propose a design pattern for deciding the event occurrence conditions (a business event improvement strategy). Lastly, we apply the business process design method based on the business event model and the business event improvement strategy to a credit card issuing process and evaluate its effect.
NASA Astrophysics Data System (ADS)
Sembiring, M. T.; Wahyuni, D.; Sinaga, T. S.; Silaban, A.
2018-02-01
Cost allocation in the manufacturing industry, particularly in palm oil mills, is still widely practiced based on estimation, which leads to cost distortion. In addition, the processing time determined by the company is not in accordance with the actual processing time at each work station. Hence, the purpose of this study is to eliminate non-value-added activities so that processing time can be shortened and production cost reduced. The Activity-Based Costing method is used in this research to calculate production cost with consideration of value-added and non-value-added activities. The results of this study show processing time reductions of 35.75% at the Weighing Bridge Station, 29.77% at the Sorting Station, 5.05% at the Loading Ramp Station, and 0.79% at the Sterilizer Station. The cost of goods manufactured for crude palm oil is IDR 5,236.81/kg calculated by the traditional method, IDR 4,583.37/kg calculated by the Activity-Based Costing method before implementation of activity improvement, and IDR 4,581.71/kg after implementation of activity improvement. Meanwhile, the cost of goods manufactured for palm kernel is IDR 2,159.50/kg calculated by the traditional method, IDR 4,584.63/kg calculated by the Activity-Based Costing method before implementation of activity improvement, and IDR 4,582.97/kg after implementation of activity improvement.
List, Johann-Mattis; Pathmanathan, Jananan Sylvestre; Lopez, Philippe; Bapteste, Eric
2016-08-20
For a long time biologists and linguists have been noticing surprising similarities between the evolution of life forms and languages. Most of the proposed analogies have been rejected. Some, however, have persisted, and some even turned out to be fruitful, inspiring the transfer of methods and models between biology and linguistics up to today. Most proposed analogies were based on a comparison of the research objects rather than the processes that shaped their evolution. Focusing on process-based analogies, however, has the advantage of minimizing the risk of overstating similarities, while at the same time reflecting the common strategy to use processes to explain the evolution of complexity in both fields. We compared important evolutionary processes in biology and linguistics and identified processes specific to only one of the two disciplines as well as processes which seem to be analogous, potentially reflecting core evolutionary processes. These new process-based analogies support novel methodological transfer, expanding the application range of biological methods to the field of historical linguistics. We illustrate this by showing (i) how methods dealing with incomplete lineage sorting offer an introgression-free framework to analyze highly mosaic word distributions across languages; (ii) how sequence similarity networks can be used to identify composite and borrowed words across different languages; (iii) how research on partial homology can inspire new methods and models in both fields; and (iv) how constructive neutral evolution provides an original framework for analyzing convergent evolution in languages resulting from common descent (Sapir's drift). Apart from new analogies between evolutionary processes, we also identified processes which are specific to either biology or linguistics. This shows that general evolution cannot be studied from within one discipline alone. In order to get a full picture of evolution, biologists and linguists need to complement their studies, trying to identify cross-disciplinary and discipline-specific evolutionary processes. The fact that we found many process-based analogies favoring transfer from biology to linguistics further shows that certain biological methods and models have a broader scope than previously recognized. This opens fruitful paths for collaboration between the two disciplines. This article was reviewed by W. Ford Doolittle and Eugene V. Koonin.
Anomaly Detection in Nanofibrous Materials by CNN-Based Self-Similarity.
Napoletano, Paolo; Piccoli, Flavio; Schettini, Raimondo
2018-01-12
Automatic detection and localization of anomalies in nanofibrous materials help to reduce the cost of the production process and the time of the post-production visual inspection process. Amongst all the monitoring methods, those exploiting Scanning Electron Microscope (SEM) imaging are the most effective. In this paper, we propose a region-based method for the detection and localization of anomalies in SEM images, based on Convolutional Neural Networks (CNNs) and self-similarity. The method evaluates the degree of abnormality of each subregion of an image under consideration by computing a CNN-based visual similarity with respect to a dictionary of anomaly-free subregions belonging to a training set. The proposed method outperforms the state of the art.
A New Parallel Approach for Accelerating the GPU-Based Execution of Edge Detection Algorithms
Emrani, Zahra; Bateni, Soroosh; Rabbani, Hossein
2017-01-01
Real-time image processing is used in a wide variety of applications, such as medical care and industrial processes. In medical care, this technique makes it possible to display important patient information graphically, which can supplement and support the treatment process. Medical decisions made based on real-time images are more accurate and reliable. According to recent research, graphics processing unit (GPU) programming is a useful method for improving the speed and quality of medical image processing and is one way of achieving real-time image processing. Edge detection is an early stage in most image processing methods for the extraction of features and object segments from a raw image. The Canny method, Sobel and Prewitt filters, and the Roberts' Cross technique are some examples of edge detection algorithms that are widely used in image processing and machine vision. In this work, these algorithms are implemented using the Compute Unified Device Architecture (CUDA), Open Source Computer Vision (OpenCV), and Matrix Laboratory (MATLAB) platforms. An existing parallel method for the Canny approach has been modified to run in a fully parallel manner by replacing the breadth-first search procedure with a parallel method. These algorithms have been compared by testing them on a database of optical coherence tomography images. The comparison of results shows that the proposed implementation of the Canny method on GPU using the CUDA platform improves the speed of execution by 2–100× compared to the central processing unit-based implementation using the OpenCV and MATLAB platforms. PMID:28487831
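For reference, the operators compared in the study can be reproduced on the CPU with OpenCV's Python API as sketched below; this does not cover the paper's CUDA parallelization or the parallel replacement of the breadth-first search step, and the input file name and thresholds are placeholders.

```python
# Sketch of the edge detectors compared above, using OpenCV's CPU Python API.
# The paper's contribution is the CUDA/GPU parallelization; this only shows
# what the operators compute on an OCT-like grayscale image.
import cv2
import numpy as np

img = cv2.imread("oct_slice.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input file

# Sobel: horizontal and vertical gradients combined into a magnitude image
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
sobel_edges = cv2.convertScaleAbs(np.sqrt(gx**2 + gy**2))

# Canny: Gaussian smoothing, gradient, non-maximum suppression, hysteresis
canny_edges = cv2.Canny(img, threshold1=50, threshold2=150)
```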
Intelligent design of permanent magnet synchronous motor based on CBR
NASA Astrophysics Data System (ADS)
Li, Cong; Fan, Beibei
2018-05-01
Aiming at problems in the design process of the permanent magnet synchronous motor (PMSM), such as the complexity of the design process, over-reliance on designers' experience, and the lack of accumulation and inheritance of design knowledge, a CBR-based design method for PMSM is proposed. In this paper, a case-based reasoning (CBR) case-similarity calculation is proposed for retrieving a suitable initial scheme. This method can help designers, by referencing previous design cases, to produce a conceptual PMSM solution quickly. The case-retain process gives the system a self-enriching function, which improves the design ability of the system with continued use.
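A minimal sketch of the CBR retrieval step is given below, assuming a weighted, range-normalized attribute similarity; the attribute names, ranges, and weights are hypothetical, and the abstract does not specify the authors' actual similarity measure.

```python
# Illustrative CBR retrieval step: score stored PMSM design cases against a new
# design requirement using a weighted, range-normalized similarity. The attribute
# names, ranges, and weights below are hypothetical.
def similarity(query, case, weights, ranges):
    score = 0.0
    for attr, w in weights.items():
        score += w * (1.0 - abs(query[attr] - case[attr]) / ranges[attr])
    return score / sum(weights.values())

case_base = [
    {"rated_power_kW": 15, "rated_speed_rpm": 3000, "voltage_V": 380, "poles": 4},
    {"rated_power_kW": 30, "rated_speed_rpm": 1500, "voltage_V": 380, "poles": 8},
]
weights = {"rated_power_kW": 0.4, "rated_speed_rpm": 0.3, "voltage_V": 0.2, "poles": 0.1}
ranges  = {"rated_power_kW": 100, "rated_speed_rpm": 6000, "voltage_V": 660, "poles": 12}

query = {"rated_power_kW": 22, "rated_speed_rpm": 3000, "voltage_V": 380, "poles": 4}
best_case = max(case_base, key=lambda c: similarity(query, c, weights, ranges))
# best_case is reused as the initial design scheme and then adapted and retained.
```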
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
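The following sketch illustrates only the general idea of a parametric (Gaussian) likelihood approximation over summary statistics embedded in a Metropolis-Hastings sampler; it is not the FORMIND configuration, and run_forest_model and log_prior are placeholders for one stochastic model run and the prior density.

```python
# Sketch of a parametric likelihood approximation inside Metropolis-Hastings,
# in the spirit of the approach described above (not the FORMIND setup itself).
# run_forest_model(theta) is a placeholder returning a vector of summary
# statistics from one stochastic run; obs_summary holds the same statistics
# computed from the (virtual or field) inventory data.
import numpy as np
from scipy.stats import multivariate_normal

def synthetic_loglik(theta, obs_summary, n_reps=50):
    # repeat the stochastic model run and fit a Gaussian to the summary statistics
    sims = np.array([run_forest_model(theta) for _ in range(n_reps)])
    mu, cov = sims.mean(axis=0), np.cov(sims, rowvar=False)
    return multivariate_normal.logpdf(obs_summary, mean=mu, cov=cov, allow_singular=True)

def metropolis(obs_summary, theta0, log_prior, n_iter=5000, step=0.05):
    theta = np.asarray(theta0, dtype=float)
    ll = synthetic_loglik(theta, obs_summary) + log_prior(theta)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * np.random.randn(theta.size)
        ll_prop = synthetic_loglik(prop, obs_summary) + log_prior(prop)
        if np.log(np.random.rand()) < ll_prop - ll:   # Metropolis accept/reject
            theta, ll = prop, ll_prop
        chain.append(theta.copy())
    return np.array(chain)
```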
Indicators and Metrics for Evaluating the Sustainability of Chemical Processes
A metric-based method, called GREENSCOPE, has been developed for evaluating process sustainability. Using lab-scale information and engineering assumptions, the method evaluates full-scale representations of processes in environmental, efficiency, energy and economic areas. The m...
NASA Astrophysics Data System (ADS)
Syahputra, M. F.; Chairani, R.; Seniman; Rahmat, R. F.; Abdullah, D.; Napitupulu, D.; Setiawan, M. I.; Albra, W.; Erliana, C. I.; Andayani, U.
2018-03-01
Sperm morphology analysis is still a standard laboratory procedure in diagnosing infertility in men. Manual identification of sperm shape is not accurate; the difficulty of discerning barely visible sperm in digital microscope images is a frequent weakness of the identification process, and it takes a long time. Therefore, an application system is needed for identifying male fertility through sperm abnormalities based on sperm morphology (teratospermia). The method used is the invariant moment method. This study uses 15 testing and 20 training sperm images. The process of male fertility identification through sperm abnormalities based on sperm morphology (teratospermia) achieves an accuracy rate of 80.77% and takes 0.4369 seconds per identification.
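The abstract names the invariant moment method but gives no implementation detail; a minimal sketch using OpenCV's Hu moments on an Otsu-thresholded image is given below, with the file names, log-scaling step, and nearest-neighbour matching chosen only for illustration.

```python
# Sketch of invariant-moment feature extraction with OpenCV (Hu moments); the
# abstract does not detail segmentation or the classifier, so the thresholding
# and the nearest-neighbour matching here are illustrative placeholders.
import cv2
import numpy as np

def hu_features(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    hu = cv2.HuMoments(cv2.moments(mask)).flatten()
    # log-scale the moments so their magnitudes are comparable
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

train = {"normal": hu_features("normal_sperm.png"),
         "abnormal": hu_features("abnormal_sperm.png")}
query = hu_features("unknown_sperm.png")
label = min(train, key=lambda k: np.linalg.norm(train[k] - query))
```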
NASA Astrophysics Data System (ADS)
Elag, M.; Goodall, J. L.
2013-12-01
Hydrologic modeling often requires the re-use and integration of models from different disciplines to simulate complex environmental systems. Component-based modeling introduces a flexible approach for integrating physical-based processes across disciplinary boundaries. Several hydrologic-related modeling communities have adopted the component-based approach for simulating complex physical systems by integrating model components across disciplinary boundaries in a workflow. However, it is not always straightforward to create these interdisciplinary models due to the lack of sufficient knowledge about a hydrologic process. This shortcoming is a result of using informal methods for organizing and sharing information about a hydrologic process. A knowledge-based ontology provides such standards and is considered the ideal approach for overcoming this challenge. The aims of this research are to present the methodology used in analyzing the basic hydrologic domain in order to identify hydrologic processes, the ontology itself, and how the proposed ontology is integrated with the Water Resources Component (WRC) ontology. The proposed ontology standardizes the definitions of a hydrologic process, the relationships between hydrologic processes, and their associated scientific equations. The objective of the proposed Hydrologic Process (HP) Ontology is to advance the idea of creating a unified knowledge framework for components' metadata by introducing a domain-level ontology for hydrologic processes. The HP ontology is a step toward an explicit and robust domain knowledge framework that can be evolved through the contribution of domain users. Analysis of the hydrologic domain is accomplished using the Formal Concept Approach (FCA), in which the infiltration process, an important hydrologic process, is examined. Two infiltration methods, the Green-Ampt and Philip's methods, were used to demonstrate the implementation of information in the HP ontology. Furthermore, a SPARQL service is provided for semantic-based querying of the ontology.
NASA Astrophysics Data System (ADS)
Iwamura, Koji; Kuwahara, Shinya; Tanimizu, Yoshitaka; Sugimura, Nobuhiro
Recently, new distributed architectures of manufacturing systems have been proposed, aiming at realizing more flexible control structures of the manufacturing systems. Much research has been carried out to deal with the distributed architectures for planning and control of the manufacturing systems. However, human operators have not yet been considered as autonomous components of the distributed manufacturing systems. A real-time scheduling method is proposed, in this research, to select suitable combinations of the human operators, the resources and the jobs for the manufacturing processes. The proposed scheduling method consists of the following three steps. In the first step, the human operators select their favorite manufacturing processes, which they will carry out in the next time period, based on their preferences. In the second step, the machine tools and the jobs select suitable combinations for the next machining processes. In the third step, the automated guided vehicles and the jobs select suitable combinations for the next transportation processes. The second and third steps are carried out by using the utility-value-based method and the dispatching-rule-based method proposed in previous research. Some case studies have been carried out to verify the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Fan, Xiaofeng; Wang, Jiangfeng
2016-06-01
The atomization of liquid fuel is an intricate dynamic process from continuous phase to discrete phase. Fuel spray in supersonic flow is modeled with an Eulerian-Lagrangian computational fluid dynamics methodology. The method combines two distinct techniques into an integrated numerical simulation method for the atomization processes. The traditional finite volume method based on a stationary (Eulerian) Cartesian grid is used to resolve the flow field, and multi-component Navier-Stokes equations are adopted in the present work, accounting for the mass exchange and heat transfer associated with the vaporization process. The marker-based moving (Lagrangian) grid is utilized to depict the behavior of atomized liquid sprays injected into a gaseous environment, and the discrete droplet model is adopted. To verify the current approach, the proposed method is applied to simulate processes of liquid atomization in supersonic cross flow. Three classic breakup models, the TAB model, wave model and K-H/R-T hybrid model, are discussed. The numerical results are compared quantitatively from multiple perspectives, including spray penetration height and droplet size distribution. In addition, the complex flow field structures induced by the presence of the liquid spray are illustrated and discussed. It is validated that the marker-based Eulerian-Lagrangian method is effective and reliable.
ERIC Educational Resources Information Center
Lin, Yi-Chun; Hsieh, Ya-Hui; Hou, Huei-Tse
2015-01-01
The development of a usability evaluation method for educational systems or applications, called the self-report-based sequential analysis, is described herein. The method aims to extend the current practice by proposing self-report-based sequential analysis as a new usability method, which integrates the advantages of self-report in survey…
Hybrid PSO-ASVR-based method for data fitting in the calibration of infrared radiometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Sen; Li, Chengwei, E-mail: heikuanghit@163.com
2016-06-15
The present paper describes a hybrid particle swarm optimization-adaptive support vector regression (PSO-ASVR)-based method for data fitting in the calibration of an infrared radiometer. The proposed hybrid PSO-ASVR-based method is based on PSO in combination with Adaptive Processing and Support Vector Regression (SVR). The optimization technique involves setting parameters in the ASVR fitting procedure, which significantly improves the fitting accuracy. However, its use in the calibration of infrared radiometers has not yet been widely explored. Bearing this in mind, the PSO-ASVR-based method, which is based on statistical learning theory, is successfully used here to obtain the relationship between the radiation of a standard source and the response of an infrared radiometer. The main advantages of this method are the flexible adjustment mechanism in data processing and the optimization mechanism in the kernel parameter setting of the SVR. Numerical examples and applications to the calibration of an infrared radiometer are performed to verify the performance of the PSO-ASVR-based method compared to conventional data fitting methods.
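As an illustration of the PSO-plus-SVR idea (the adaptive-processing component of ASVR is not reproduced here), the sketch below tunes the C, gamma and epsilon parameters of a scikit-learn RBF SVR with a small particle swarm scored by cross-validated mean squared error; the swarm settings and search ranges are assumptions.

```python
# Minimal sketch of PSO-tuned SVR for a calibration curve (standard-source
# radiance vs. detector response). Swarm settings and search ranges are
# illustrative, not the authors' values.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

def fitness(params, X, y):
    C, gamma, eps = params
    svr = SVR(kernel="rbf", C=C, gamma=gamma, epsilon=eps)
    return cross_val_score(svr, X, y, cv=5, scoring="neg_mean_squared_error").mean()

def pso_svr(X, y, n_particles=20, n_iter=40, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array([1e-1, 1e-3, 1e-4]), np.array([1e3, 1e1, 1e0])   # C, gamma, epsilon
    pos = rng.uniform(lo, hi, size=(n_particles, 3))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[pbest_val.argmax()]
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([fitness(p, X, y) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()]
    C, gamma, eps = gbest
    return SVR(kernel="rbf", C=C, gamma=gamma, epsilon=eps).fit(X, y)
```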
Determining e-Portfolio Elements in Learning Process Using Fuzzy Delphi Analysis
ERIC Educational Resources Information Center
Mohamad, Syamsul Nor Azlan; Embi, Mohamad Amin; Nordin, Norazah
2015-01-01
The present article introduces the Fuzzy Delphi method results obtained in a study on determining e-Portfolio elements in the learning process for the art and design context. This method is based on qualified experts who assure the validity of the collected information. In particular, the confirmation of elements is based on experts' opinion and…
Research on the raw data processing method of the hydropower construction project
NASA Astrophysics Data System (ADS)
Tian, Zhichao
2018-01-01
In this paper, based on the characteristics of the fixed quota data, various mathematical-statistical analysis methods are compared and the improved Grubbs criterion is chosen to analyze the data, screening out unsuitable data in the course of processing. It is proved that this method can be applied to the processing of fixed raw data. This paper provides a reference for reasonably determining effective quota analysis data.
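The abstract does not detail the "improved" variant of the Grubbs criterion, so the sketch below shows only the classical two-sided Grubbs test, applied iteratively to screen outliers from a small data series; the significance level and the example values are placeholders.

```python
# Sketch of an iterative two-sided Grubbs outlier screen for quota/raw data.
# Only the classical criterion is shown; the paper's "improved" variant is not
# specified in the abstract.
import numpy as np
from scipy import stats

def grubbs_filter(values, alpha=0.05):
    data = list(map(float, values))
    outliers = []
    while len(data) > 2:
        x = np.asarray(data)
        n = len(x)
        idx = np.argmax(np.abs(x - x.mean()))
        g = abs(x[idx] - x.mean()) / x.std(ddof=1)
        # critical value of the two-sided Grubbs statistic at level alpha
        t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
        g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
        if g <= g_crit:
            break
        outliers.append(data.pop(idx))
    return np.asarray(data), outliers

clean, rejected = grubbs_filter([10.2, 10.4, 10.1, 10.3, 14.9, 10.2])
```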
NASA Astrophysics Data System (ADS)
Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi
This paper proposes a classification method using a Bayesian analytical approach to classify time series data from an agent-based simulation of the international emissions trading market, and compares it with a discrete Fourier transform analytical method. The purpose is to demonstrate analytical methods for mapping time series data such as market prices. These analytical methods have revealed the following results: (1) the classification methods indicate the distance of the mapping from the time series data, which is easier to understand and reason about than the raw time series; (2) these methods can analyze uncertain time series data using this distance via agent-based simulation, including stationary and non-stationary processes; and (3) the Bayesian analytical method can distinguish a 1% difference in the emission reduction targets of the agents.
Two Undergraduate Process Modeling Courses Taught Using Inductive Learning Methods
ERIC Educational Resources Information Center
Soroush, Masoud; Weinberger, Charles B.
2010-01-01
This manuscript presents a successful application of inductive learning in process modeling. It describes two process modeling courses that use inductive learning methods such as inquiry learning and problem-based learning, among others. The courses include a novel collection of multi-disciplinary complementary process modeling examples. They were…
Sabushimike, Donatien; Na, Seung You; Kim, Jin Young; Bui, Ngoc Nam; Seo, Kyung Sik; Kim, Gil Gyeom
2016-01-01
The detection of a moving target using an IR-UWB radar involves the core task of separating the waves reflected by the static background from those reflected by the moving target. This paper investigates the capacity of the low-rank and sparse matrix decomposition approach to separate the background and the foreground in UWB-radar-based moving target detection. Robust PCA models are criticized for being batch-data-oriented, which makes them inconvenient in realistic environments where frames need to be processed as they are recorded in real time. In this paper, a novel method based on overlapping-window processing is proposed to cope with online processing. The method consists of processing a small batch of frames which is continually updated, without changing its size, as new frames are captured. We prove that RPCA (via its Inexact Augmented Lagrange Multiplier (IALM) model) can successfully separate the two subspaces, which enhances the accuracy of target detection. The overlapping-window processing method converges to the same optimal solution as its batch counterpart (i.e., processing batched data with RPCA), and both methods prove the robustness and efficiency of RPCA over the classic PCA and the commonly used exponential averaging method. PMID:27598159
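A compact sketch of the core decomposition step, RPCA via an inexact augmented Lagrange multiplier loop, is given below for a frame matrix M whose columns are radar scans; the overlapping-window construction and update described in the abstract are omitted, and the parameter defaults are typical textbook values rather than the authors' tuned ones.

```python
# Sketch of the low-rank + sparse decomposition step (RPCA via an inexact
# augmented Lagrange multiplier loop) on a frame matrix M whose columns are
# radar scans. L approximates the static background, S the moving-target returns.
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def rpca_ialm(M, lam=None, mu=None, rho=1.5, tol=1e-7, max_iter=500):
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 1.25 / np.linalg.norm(M, 2)
    Y = M / max(np.linalg.norm(M, 2), np.abs(M).max() / lam)   # dual variable init
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(soft_threshold(sig, 1.0 / mu)) @ Vt
        # sparse update: entrywise soft thresholding
        S = soft_threshold(M - L + Y / mu, lam / mu)
        Z = M - L - S
        Y = Y + mu * Z
        mu *= rho
        if np.linalg.norm(Z, "fro") / np.linalg.norm(M, "fro") < tol:
            break
    return L, S
```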
Improving wavelet denoising based on an in-depth analysis of the camera color processing
NASA Astrophysics Data System (ADS)
Seybold, Tamara; Plichta, Mathias; Stechele, Walter
2015-02-01
While denoising is an extensively studied task in signal processing research, most denoising methods are designed and evaluated using readily processed image data, e.g. the well-known Kodak data set. The noise model is usually additive white Gaussian noise (AWGN). This kind of test data does not correspond to today's real-world image data taken with a digital camera. Using such unrealistic data to test, optimize and compare denoising algorithms may lead to incorrect parameter tuning or suboptimal choices in research on real-time camera denoising algorithms. In this paper we derive a precise analysis of the noise characteristics for the different steps in the color processing. Based on real camera noise measurements and simulation of the processing steps, we obtain a good approximation of the noise characteristics. We further show how this approximation can be used in standard wavelet denoising methods. We improve wavelet hard thresholding and bivariate thresholding based on our noise analysis results. Both the visual quality and objective quality metrics show the advantage of the proposed method. As the method is implemented using look-up tables that are calculated before the denoising step, it can be implemented with very low computational complexity and can process HD video sequences in real time on an FPGA.
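A generic wavelet hard-thresholding routine using PyWavelets is sketched below; the paper's contribution, a signal-dependent threshold derived from the measured camera noise propagated through the color processing (implemented as look-up tables), is reduced here to a placeholder noise_sigma() estimate.

```python
# Generic wavelet hard-thresholding sketch with PyWavelets. The key idea in the
# paper is making the threshold follow the signal-dependent noise left after the
# camera color processing; here that is a placeholder noise_sigma() function.
import numpy as np
import pywt

def noise_sigma(subband):
    # placeholder: robust MAD estimate; the paper instead derives sigma from a
    # measured camera noise model propagated through the color-processing steps
    return np.median(np.abs(subband)) / 0.6745

def wavelet_hard_denoise(img, wavelet="db2", level=3, k=3.0):
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    out = [coeffs[0]]                                 # keep the approximation band
    for (cH, cV, cD) in coeffs[1:]:
        out.append(tuple(
            pywt.threshold(c, k * noise_sigma(c), mode="hard") for c in (cH, cV, cD)
        ))
    return pywt.waverec2(out, wavelet)
```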
A KPI-based process monitoring and fault detection framework for large-scale processes.
Zhang, Kai; Shardt, Yuri A W; Chen, Zhiwen; Yang, Xu; Ding, Steven X; Peng, Kaixiang
2017-05-01
Large-scale processes, consisting of multiple interconnected subprocesses, are commonly encountered in industrial systems, whose performance needs to be determined. A common approach to this problem is to use a key performance indicator (KPI)-based approach. However, the different KPI-based approaches are not developed with a coherent and consistent framework. Thus, this paper proposes a framework for KPI-based process monitoring and fault detection (PM-FD) for large-scale industrial processes, which considers the static and dynamic relationships between process and KPI variables. For the static case, a least squares-based approach is developed that provides an explicit link with least-squares regression, which gives better performance than partial least squares. For the dynamic case, using the kernel representation of each subprocess, an instrument variable is used to reduce the dynamic case to the static case. This framework is applied to the TE benchmark process and the hot strip mill rolling process. The results show that the proposed method can detect faults better than previous methods. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Xu, Min; Zhang, Lei; Yue, Hong-Shui; Pang, Hong-Wei; Ye, Zheng-Liang; Ding, Li
2017-10-01
An on-line monitoring method was established for the extraction process of Schisandrae Chinensis Fructus, a formula medicinal material of Yiqi Fumai lyophilized injection, by combining near infrared spectroscopy with multivariate data analysis technology. The multivariate statistical process control (MSPC) model was established based on 5 normal production batches, and 2 test batches were monitored by PC score, DModX and Hotelling T2 control charts. The results showed that the MSPC model had a good monitoring ability for the extraction process. The application of the MSPC model to the actual production process could effectively achieve on-line monitoring of the extraction process of Schisandrae Chinensis Fructus, and can reflect changes in material properties during the production process in real time. This established process monitoring method could provide a reference for the application of process analytical technology in the process quality control of traditional Chinese medicine injections. Copyright© by the Chinese Pharmaceutical Association.
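A simplified version of the PCA-based MSPC statistics (Hotelling T2 and a DModX/Q-style residual) can be sketched as below; it fits a PCA model to spectra from normal batches and scores new spectra, with control limits taken as empirical quantiles rather than the exact formulas used in commercial MSPC software. The component count and limit percentile are assumptions.

```python
# Sketch of PCA-based MSPC statistics (Hotelling T2 and a Q/DModX-style residual)
# built from normal-batch NIR spectra and applied to a new batch.
import numpy as np
from sklearn.decomposition import PCA

def fit_mspc(X_normal, n_components=3):
    mean, std = X_normal.mean(axis=0), X_normal.std(axis=0) + 1e-12
    Xs = (X_normal - mean) / std
    pca = PCA(n_components=n_components).fit(Xs)
    return {"mean": mean, "std": std, "pca": pca}

def mspc_statistics(model, X_new):
    Xs = (X_new - model["mean"]) / model["std"]
    scores = model["pca"].transform(Xs)
    t2 = np.sum(scores**2 / model["pca"].explained_variance_, axis=1)   # Hotelling T2
    resid = Xs - model["pca"].inverse_transform(scores)
    q = np.sum(resid**2, axis=1)                                        # DModX-like residual
    return t2, q

# Control limits could be set from the normal batches, e.g.
# t2_lim, q_lim = np.percentile(t2_normal, 99), np.percentile(q_normal, 99)
```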
New hydrate formation methods in a liquid-gas medium
NASA Astrophysics Data System (ADS)
Chernov, A. A.; Pil'Nik, A. A.; Elistratov, D. S.; Mezentsev, I. V.; Meleshkin, A. V.; Bartashevich, M. V.; Vlasenko, M. G.
2017-01-01
Conceptually new methods of hydrate formation are proposed. The first one is based on the shock wave impact on a water-bubble medium. It is shown that the hydrate formation rate in this process is typically very high. A gas hydrate of carbon dioxide was produced. The process was experimentally studied using various initial conditions, as well as different external action magnitudes. The obtained experimental data are in good agreement with the proposed model. Other methods are based on the process of boiling liquefied gas in an enclosed volume of water (explosive boiling of a hydrating agent and the organization of cyclic boiling-condensation process). The key features of the methods are the high hydrate formation rate combined with a comparatively low power consumption leading to a great expected efficiency of the technologies based on them. The set of experiments was carried out. Gas hydrates of refrigerant R134a, carbon dioxide and propane were produced. The investigation of decomposition of a generated gas hydrate sample was made. The criteria of intensification of the hydrate formation process are formulated.
Conceptual design of distillation-based hybrid separation processes.
Skiborowski, Mirko; Harwardt, Andreas; Marquardt, Wolfgang
2013-01-01
Hybrid separation processes combine different separation principles and constitute a promising design option for the separation of complex mixtures. Particularly, the integration of distillation with other unit operations can significantly improve the separation of close-boiling or azeotropic mixtures. Although the design of single-unit operations is well understood and supported by computational methods, the optimal design of flowsheets of hybrid separation processes is still a challenging task. The large number of operational and design degrees of freedom requires a systematic and optimization-based design approach. To this end, a structured approach, the so-called process synthesis framework, is proposed. This article reviews available computational methods for the conceptual design of distillation-based hybrid processes for the separation of liquid mixtures. Open problems are identified that must be addressed to finally establish a structured process synthesis framework for such processes.
NASA Astrophysics Data System (ADS)
Zhu, Ming; Liu, Tingting; Wang, Shu; Zhang, Kesheng
2017-08-01
Existing two-frequency reconstructive methods can only capture primary (single) molecular relaxation processes in excitable gases. In this paper, we present a reconstructive method based on the novel decomposition of frequency-dependent acoustic relaxation spectra to capture the entire molecular multimode relaxation process. This decomposition of acoustic relaxation spectra is developed from the frequency-dependent effective specific heat, indicating that a multi-relaxation process is the sum of the interior single-relaxation processes. Based on this decomposition, we can reconstruct the entire multi-relaxation process by capturing the relaxation times and relaxation strengths of N interior single-relaxation processes, using the measurements of acoustic absorption and sound speed at 2N frequencies. Experimental data for the gas mixtures CO2-N2 and CO2-O2 validate our decomposition and reconstruction approach.
Sánchez-Herrera, Marissa; Martínez-Cano, Evelia; Maldonado-Santoyo, María; Aparicio-Fernández, Xochitl
2014-06-01
The present study was conducted to analyze the chemical composition, total phenolics content and antioxidant capacity of two traditional Mexican whole corn (Zea mays) based meals, "traditional pinole" and "seven grain pinole", and compare them with information available for ready-to-eat cereal products based on refined corn and whole grain cereals. Proximate analyses (moisture, ash, fat, protein and fiber) were carried out according to the procedures of AOAC; sugars content was determined by an HPLC method; calcium and iron were quantified using atomic absorption spectroscopy. Total phenolic compounds were determined by the Folin-Ciocalteu spectrophotometric method; the antiradical capacity was determined by the DPPH colorimetric method and total antioxidant capacity was determined by the FRAP method. Traditional and seven grain pinole presented higher energy content and nutrient density (protein and fat) than processed cereals. Calcium content was higher in processed cereals than in pinole; seven grain pinole presented the highest concentration of iron. Polyphenolic concentration was higher in both kinds of pinole compared to processed cereals; traditional pinole presented the highest antioxidant activity measured by the DPPH and FRAP methods. The results provide evidence of the important nutrient and antioxidant content of traditional and seven grain pinole compared to processed cereals based on corn and other grains. Their incorporation into the regular diet is recommended as a healthy food, with a good protein level, low sugar content and good antioxidant capacity.
Low-Cost Aqueous Coal Desulfurization
NASA Technical Reports Server (NTRS)
Kalvinskas, J. J.; Vasilakos, N.; Corcoran, W. H.; Grohmann, K.; Rohatgi, N. K.
1982-01-01
Water-based process for desulfurizing coal not only eliminates need for costly organic solvent but removes sulfur more effectively than an earlier solvent-based process. New process could provide low-cost commercial method for converting high-sulfur coal into environmentally acceptable fuel.
Espinoza, Manuel Antonio; Manca, Andrea; Claxton, Karl; Sculpher, Mark
2018-02-01
Evidence about cost-effectiveness is increasingly being used to inform decisions about the funding of new technologies that are usually implemented as guidelines from centralized decision-making bodies. However, there is also an increasing recognition for the role of patients in determining their preferred treatment option. This paper presents a method to estimate the value of implementing a choice-based decision process using the cost-effectiveness analysis toolbox. This value is estimated for 3 alternative scenarios. First, it compares centralized decisions, based on population average cost-effectiveness, against a decision process based on patient choice. Second, it compares centralized decision based on patients' subgroups versus an individual choice-based decision process. Third, it compares a centralized process based on average cost-effectiveness against a choice-based process where patients choose according to a different measure of outcome to that used by the centralized decision maker. The methods are applied to a case study for the management of acute coronary syndrome. It is concluded that implementing a choice-based process of treatment allocation may be an option in collectively funded health systems. However, its value will depend on the specific health problem and the social values considered relevant to the health system. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Musil, Juergen; Schweda, Angelika; Winkler, Dietmar; Biffl, Stefan
Based on our observations of Austrian video game software development (VGSD) practices, we identified a lack of systematic process/method support and inefficient collaboration between the various involved disciplines, i.e. engineers and artists. VGSD includes heterogeneous disciplines, e.g. creative arts, game/content design, and software. Nevertheless, improving team collaboration and process support is an ongoing challenge to enable a comprehensive view on game development projects. Lessons learned from software engineering practices can help game developers to improve game development processes within a heterogeneous environment. Based on a state-of-the-practice survey in the Austrian games industry, this paper (a) presents first results with a focus on process/method support and (b) suggests a candidate flexible process approach based on Scrum to improve VGSD and team collaboration. Results showed (a) a trend towards highly flexible software processes involving various disciplines and (b) that the suggested flexible process approach is feasible and useful for project application.
Study on Stationarity of Random Load Spectrum Based on the Special Road
NASA Astrophysics Data System (ADS)
Yan, Huawen; Zhang, Weigong; Wang, Dong
2017-09-01
Among special road quality assessment methods, there is one that uses a wheel force sensor; the essence of this method is to collect the load spectrum of the car to reflect the quality of the road. According to the definition of a stochastic process, it is easy to see that the load spectrum is a stochastic process. However, the analysis methods and ranges of application of different random processes are very different, especially in engineering practice, which directly affects the design and development of the experiment. Therefore, determining the type of a random process has important practical significance. Based on the analysis of the digital characteristics of the road load spectrum, this paper determines that the road load spectrum in this experiment belongs to a stationary stochastic process, paving the way for the follow-up modeling and feature extraction of the special road.
NASA Astrophysics Data System (ADS)
Ambrose, Jesse L.
2017-12-01
Atmospheric Hg measurements are commonly carried out using Tekran® Instruments Corporation's model 2537 Hg vapor analyzers, which employ gold amalgamation preconcentration sampling and detection by thermal desorption (TD) and atomic fluorescence spectrometry (AFS). A generally overlooked and poorly characterized source of analytical uncertainty in those measurements is the method by which the raw Hg atomic fluorescence (AF) signal is processed. Here I describe new software-based methods for processing the raw signal from the Tekran® 2537 instruments, and I evaluate the performances of those methods together with the standard Tekran® internal signal processing method. For test datasets from two Tekran® instruments (one 2537A and one 2537B), I estimate that signal processing uncertainties in Hg loadings determined with the Tekran® method are within ±[1 % + 1.2 pg] and ±[6 % + 0.21 pg], respectively. I demonstrate that the Tekran® method can produce significant low biases (≥ 5 %) not only at low Hg sample loadings (< 5 pg) but also at tropospheric background concentrations of gaseous elemental mercury (GEM) and total mercury (THg) (~1 to 2 ng m⁻³) under typical operating conditions (sample loadings of 5-10 pg). Signal processing uncertainties associated with the Tekran® method can therefore represent a significant unaccounted-for addition to the overall ~10 to 15 % uncertainty previously estimated for Tekran®-based GEM and THg measurements. Signal processing bias can also add significantly to uncertainties in Tekran®-based gaseous oxidized mercury (GOM) and particle-bound mercury (PBM) measurements, which often derive from Hg sample loadings < 5 pg. In comparison, estimated signal processing uncertainties associated with the new methods described herein are low, ranging from within ±0.053 pg, when the Hg thermal desorption peaks are defined manually, to within ±[2 % + 0.080 pg] when peak definition is automated. Mercury limits of detection (LODs) decrease by 31 to 88 % when the new methods are used in place of the Tekran® method. I recommend that signal processing uncertainties be quantified in future applications of the Tekran® 2537 instruments.
Anomaly Detection in Nanofibrous Materials by CNN-Based Self-Similarity
Schettini, Raimondo
2018-01-01
Automatic detection and localization of anomalies in nanofibrous materials help to reduce the cost of the production process and the time of the post-production visual inspection process. Amongst all the monitoring methods, those exploiting Scanning Electron Microscope (SEM) imaging are the most effective. In this paper, we propose a region-based method for the detection and localization of anomalies in SEM images, based on Convolutional Neural Networks (CNNs) and self-similarity. The method evaluates the degree of abnormality of each subregion of an image under consideration by computing a CNN-based visual similarity with respect to a dictionary of anomaly-free subregions belonging to a training set. The proposed method outperforms the state of the art. PMID:29329268
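As a rough illustration of the self-similarity idea, the sketch below scores test-image subregions by their distance to a dictionary of anomaly-free subregions. A plain intensity histogram stands in for the CNN feature extractor, and the patch size, stride, and random images are placeholders, so this is only a schematic of the approach, not the authors' implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def patch_features(patch, bins=32):
    """Placeholder feature vector (intensity histogram); the paper uses a CNN
    embedding instead."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0), density=True)
    return hist

def extract_patches(image, size=32, stride=16):
    h, w = image.shape
    feats, coords = [], []
    for i in range(0, h - size + 1, stride):
        for j in range(0, w - size + 1, stride):
            feats.append(patch_features(image[i:i + size, j:j + size]))
            coords.append((i, j))
    return np.array(feats), coords

# Dictionary built from anomaly-free training images (random data as stand-in)
normal_feats, _ = extract_patches(np.random.rand(256, 256))
index = NearestNeighbors(n_neighbors=1).fit(normal_feats)

# Degree of abnormality of each subregion of a test image
test_feats, test_coords = extract_patches(np.random.rand(256, 256))
dist, _ = index.kneighbors(test_feats)
anomaly_score = dist.ravel()  # large distance -> dissimilar to the normal dictionary
```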
ERIC Educational Resources Information Center
Karacop, Ataman; Diken, Emine Hatun
2017-01-01
The purpose of this study is to investigate the effects of laboratory approach based on jigsaw method with cooperative learning and confirmatory laboratory approach on university students' cognitive process development in Science teaching laboratory applications, and to determine the opinions of the students on applied laboratory methods. The…
Optics-Only Calibration of a Neural-Net Based Optical NDE Method for Structural Health Monitoring
NASA Technical Reports Server (NTRS)
Decker, Arthur J.
2004-01-01
A calibration process is presented that uses optical measurements alone to calibrate a neural-net-based NDE method. The method itself detects small changes in the vibration mode shapes of structures. The optics-only calibration process confirms previous work showing that the sensitivity to vibration-amplitude changes can be as small as 10 nanometers. A more practical value in an NDE service laboratory is shown to be 50 nanometers. Both model-generated and experimental calibrations are demonstrated using two implementations of the calibration technique. The implementations are based on previously published demonstrations of the NDE method and an alternative calibration procedure that depends on comparing neural-net and point-sensor measurements. The optics-only calibration method, unlike the alternative method, does not require modifications of the structure being tested or the creation of calibration objects. The calibration process can be used to test improvements in the NDE process and to develop a vibration-mode independence of damage-detection sensitivity. The calibration effort was intended to support NASA's objective to promote safety in the operations of ground test facilities, and aviation safety in general, by allowing the detection of the gradual onset of structural changes and damage.
Performance analysis of different tuning rules for an isothermal CSTR using integrated EPC and SPC
NASA Astrophysics Data System (ADS)
Roslan, A. H.; Karim, S. F. Abd; Hamzah, N.
2018-03-01
This paper demonstrates the integration of Engineering Process Control (EPC) and Statistical Process Control (SPC) for controlling the product concentration of an isothermal CSTR. The objectives of this study are to evaluate the performance of the Ziegler-Nichols (Z-N), Direct Synthesis (DS) and Internal Model Control (IMC) tuning methods and to determine the most effective method for this process. The simulation model was obtained from past literature and re-constructed in SIMULINK MATLAB to evaluate the process response. Additionally, the process stability, capability and normality were analyzed using Process Capability Sixpack reports in Minitab. Based on the results, DS displays the best response, having the smallest rise time, settling time, overshoot, undershoot, Integral Time Absolute Error (ITAE) and Integral Square Error (ISE). Also, based on the statistical analysis, DS emerges as the best tuning method, as it exhibits the highest process stability and capability.
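The ITAE and ISE figures of merit used in the comparison can be reproduced on a toy closed loop. The sketch below simulates a first-order process under PI control with Euler integration; the process gain, time constant, and the two tunings are illustrative values, not the CSTR model or the Z-N/DS/IMC settings from the paper.

```python
import numpy as np

def simulate_pi(Kc, tau_i, Kp=2.0, tau_p=5.0, setpoint=1.0, dt=0.01, t_end=60.0):
    """Euler simulation of a first-order process y' = (Kp*u - y)/tau_p under PI control."""
    n = int(t_end / dt)
    t = np.arange(n) * dt
    y = np.zeros(n)
    integral = 0.0
    for k in range(1, n):
        e = setpoint - y[k - 1]
        integral += e * dt
        u = Kc * (e + integral / tau_i)
        y[k] = y[k - 1] + dt * (Kp * u - y[k - 1]) / tau_p
    return t, y

def itae_ise(t, y, setpoint=1.0):
    e = setpoint - y
    dt = t[1] - t[0]
    return np.sum(t * np.abs(e)) * dt, np.sum(e**2) * dt

# Compare two hypothetical tunings (values are illustrative, not from the paper)
for label, (Kc, tau_i) in {"tuning A": (1.2, 6.0), "tuning B": (0.6, 10.0)}.items():
    t, y = simulate_pi(Kc, tau_i)
    print(label, itae_ise(t, y))
```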
Web-based data collection: detailed methods of a questionnaire and data gathering tool
Cooper, Charles J; Cooper, Sharon P; del Junco, Deborah J; Shipp, Eva M; Whitworth, Ryan; Cooper, Sara R
2006-01-01
There have been dramatic advances in the development of web-based data collection instruments. This paper outlines a systematic web-based approach, implemented through locally developed code, to facilitate this process and describes the results of using this approach after two years of data collection. We provide a detailed example of a web-based method that we developed for a study in Starr County, Texas, assessing high school students' work and health status. This web-based application includes the data instrument design, data entry and management, and the data tables needed to store the results, and attempts to maximize the advantages of this data collection method. The software also efficiently produces a coding manual, web-based statistical summary and crosstab reports, as well as input templates for use by statistical packages. Overall, web-based data entry using a dynamic approach proved to be a very efficient and effective data collection system. This data collection method expedited data processing and analysis and eliminated the need for cumbersome and expensive transfer and tracking of forms, data entry, and verification. The code has been made available for non-profit use only to the public health research community as a free download [1]. PMID:16390556
Analysis of maizena drying system using temperature control based fuzzy logic method
NASA Astrophysics Data System (ADS)
Arief, Ulfah Mediaty; Nugroho, Fajar; Purbawanto, Sugeng; Setyaningsih, Dyah Nurani; Suryono
2018-03-01
Corn is one of the rice-substitute foods with good potential. Corn can be processed into maizena (cornstarch), which can be used to make various foods, such as brownies, egg rolls, and other cookies. Generally, maizena is obtained through a drying process carried out for 2-3 days under the sun. However, this drying process is not possible during the rainy season; drying can instead be done using an automatic drying tool. This study analyzed the design and manufacture of a maizena drying system with temperature control based on the fuzzy logic method. The results show that the drying system with a temperature set point of 40°C-60°C works in suitable condition. A water content of 15% (BSN) and a temperature of 50°C correspond to a good drying process. The time required to reach the temperature set point of 50°C is 7.05 minutes. The drying time for a 500 g sample at a temperature of 50°C and a power capacity of 127.6 W was 1 hour. Based on the results, the drying process using temperature control based on the fuzzy logic method can achieve better energy efficiency than the conventional method of drying under direct sunlight, where the temperature cannot be controlled and the quality of the dried flour is therefore erratic.
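A fuzzy temperature controller of the kind described can be sketched with triangular membership functions on the temperature error and weighted-average defuzzification. The membership limits, rule outputs, and 50°C set point below are assumptions for illustration, not the authors' rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def heater_duty(temp, setpoint=50.0):
    """Mamdani-style rules on the temperature error with centroid-like defuzzification."""
    e = setpoint - temp                       # positive -> too cold
    cold = tri(e, 0.0, 10.0, 20.0)
    ok   = tri(e, -5.0, 0.0, 5.0)
    hot  = tri(e, -20.0, -10.0, 0.0)
    weights = np.array([cold, ok, hot])       # rule firing strengths
    outputs = np.array([1.0, 0.3, 0.0])       # heater duty cycle consequents (0..1)
    return float(np.dot(weights, outputs) / (weights.sum() + 1e-12))

for t in (35.0, 48.0, 55.0):
    print(t, "->", round(heater_duty(t), 2))
```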
A method to evaluate process performance by integrating time and resources
NASA Astrophysics Data System (ADS)
Wang, Yu; Wei, Qingjie; Jin, Shuang
2017-06-01
The purpose of process mining is to improve the existing processes of an enterprise, so how to measure the performance of a process is particularly important. However, current research on performance evaluation methods is still insufficient. Existing evaluation approaches rely mainly on time or resources, and these basic statistics alone cannot evaluate process performance very well. In this paper, a method for evaluating process performance based on both the time dimension and the resource dimension is proposed. This method can be used to measure the utilization and redundancy of resources in the process. The paper introduces the design principle and formula of the evaluation algorithm, followed by the design and implementation of the evaluation method. Finally, we use the evaluation method to analyse an event log from a telephone maintenance process and propose an optimization plan.
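The utilization and redundancy measures mentioned above can be illustrated on a toy event log. The field layout, the resource_metrics helper, and the sample activities below are hypothetical; the paper's actual formula may combine time and resources differently.

```python
from collections import defaultdict

# Toy event log: (case id, activity, resource, start time, end time) in minutes.
log = [
    (1, "register", "alice", 0, 10),
    (1, "repair",   "bob",   12, 40),
    (2, "register", "alice", 15, 22),
    (2, "repair",   "carol", 25, 70),
]

def resource_metrics(events):
    """Per-resource utilisation (busy time / observed span) and redundancy
    (idle fraction) computed from an activity-level event log."""
    busy = defaultdict(float)
    for _, _, res, start, end in events:
        busy[res] += end - start
    span = max(e[4] for e in events) - min(e[3] for e in events)
    return {res: {"utilisation": b / span, "redundancy": 1.0 - b / span}
            for res, b in busy.items()}

print(resource_metrics(log))
```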
An Introduction to the BFS Method and Its Use to Model Binary NiAl Alloys
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Noebe, Ronald D.; Ferrante, J.; Amador, C.
1998-01-01
We introduce the Bozzolo-Ferrante-Smith (BFS) method for alloys as a computationally efficient tool for aiding in the process of alloy design. An intuitive description of the BFS method is provided, followed by a formal discussion of its implementation. The method is applied to the study of the defect structure of NiAl binary alloys. The groundwork is laid for a detailed progression to higher order NiAl-based alloys linking theoretical calculations and computer simulations based on the BFS method and experimental work validating each step of the alloy design process.
Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying
2013-12-01
Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM) based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented within the framework of the FEM-based reconstruction algorithm using a graphics-processing-unit (GPU) parallel framework, the compute unified device architecture (CUDA). A series of simulation experiments is carried out to test the accuracy and the accelerating effect of the improved method. The results obtained indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is significantly reduced by a factor of 38.9 with a GTX 580 graphics card using the improved method.
NASA Astrophysics Data System (ADS)
Feng, Shou; Fu, Ping; Zheng, Wenbin
2018-03-01
Predicting gene function based on biological instrumental data is a complicated and challenging hierarchical multi-label classification (HMC) problem. When using local approach methods to solve this problem, a preliminary results processing method is usually needed. This paper proposed a novel preliminary results processing method called the nodes interaction method. The nodes interaction method revises the preliminary results and guarantees that the predictions are consistent with the hierarchy constraint. This method exploits the label dependency and considers the hierarchical interaction between nodes when making decisions based on the Bayesian network in its first phase. In the second phase, this method further adjusts the results according to the hierarchy constraint. Implementing the nodes interaction method in the HMC framework also enhances the HMC performance for solving the gene function prediction problem based on the Gene Ontology (GO), the hierarchy of which is a directed acyclic graph that is more difficult to tackle. The experimental results validate the promising performance of the proposed method compared to state-of-the-art methods on eight benchmark yeast data sets annotated by the GO.
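The second-phase adjustment, which forces predictions to respect the hierarchy constraint, can be sketched as a simple capping pass over the DAG: a child's probability is never allowed to exceed that of its parents. The toy GO-like hierarchy and the enforce_hierarchy helper below are illustrative assumptions, not the authors' exact procedure (which also uses a Bayesian network in the first phase).

```python
def enforce_hierarchy(probs, parents):
    """Clip each node's predicted probability so it never exceeds the smallest
    probability among its parents in the DAG. `parents` maps node -> list of
    parent nodes (roots map to [])."""
    adjusted = dict(probs)
    changed = True
    while changed:                      # iterate until consistent on the DAG
        changed = False
        for node, pars in parents.items():
            if pars:
                cap = min(adjusted[p] for p in pars)
                if adjusted[node] > cap:
                    adjusted[node] = cap
                    changed = True
    return adjusted

# Toy GO-like hierarchy: 'binding' is a parent of 'dna_binding'
parents = {"root": [], "binding": ["root"], "dna_binding": ["binding"]}
probs = {"root": 1.0, "binding": 0.4, "dna_binding": 0.7}
print(enforce_hierarchy(probs, parents))  # dna_binding is capped at 0.4
```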
Soft sensor modeling based on variable partition ensemble method for nonlinear batch processes
NASA Astrophysics Data System (ADS)
Wang, Li; Chen, Xiangguang; Yang, Kai; Jin, Huaiping
2017-01-01
Batch processes are always characterized by nonlinearity and system uncertainty; therefore, a conventional single model may be ill-suited. A local-learning soft sensor based on a variable partition ensemble method is developed for quality prediction in nonlinear and non-Gaussian batch processes. A set of input variable subsets is obtained by bootstrapping and the PMI criterion. Then, multiple local GPR models are developed, one for each local input variable subset. When a new test sample arrives, the posterior probability of each best-performing local model is estimated based on Bayesian inference and used to combine these local GPR models to obtain the final prediction result. The proposed soft sensor is demonstrated by application to an industrial fed-batch chlortetracycline fermentation process.
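A minimal sketch of the local-ensemble idea is given below: several GPR models are trained on different input-variable subsets and their predictions are combined with probabilistic weights. Precision weighting is used here as a simple stand-in for the Bayesian posterior weighting described in the abstract, and the data, variable subsets, and kernels are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.normal(size=200)

# Each "local" model sees a different input-variable subset (a stand-in for the
# bootstrap/PMI variable partition described in the abstract).
variable_sets = [[0, 1], [0, 2], [1, 3]]
models = []
for cols in variable_sets:
    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gpr.fit(X[:, cols], y)
    models.append((cols, gpr))

def ensemble_predict(x_new):
    """Combine local GPR predictions; weights are the predictive precisions."""
    mus, ws = [], []
    for cols, gpr in models:
        mu, sd = gpr.predict(x_new[:, cols], return_std=True)
        mus.append(mu)
        ws.append(1.0 / (sd ** 2 + 1e-9))
    mus, ws = np.array(mus), np.array(ws)
    return (ws * mus).sum(axis=0) / ws.sum(axis=0)

print(ensemble_predict(X[:3]))
```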
Physics-based signal processing algorithms for micromachined cantilever arrays
Candy, James V; Clague, David S; Lee, Christopher L; Rudd, Robert E; Burnham, Alan K; Tringe, Joseph W
2013-11-19
A method of using physics-based signal processing algorithms for micromachined cantilever arrays. The methods utilize deflection of a micromachined cantilever that represents the chemical, biological, or physical element being detected. One embodiment of the method comprises the steps of modeling the deflection of the micromachined cantilever producing a deflection model, sensing the deflection of the micromachined cantilever and producing a signal representing the deflection, and comparing the signal representing the deflection with the deflection model.
NASA Astrophysics Data System (ADS)
Nerita, S.; Maizeli, A.; Afza, A.
2017-09-01
The Process Evaluation and Learning Outcomes course in Biology covers the evaluation process in learning and the application of designed and processed learning outcomes. Problems found in this course were that students had difficulty understanding the material and that no learning resources were available to guide them and support independent study. It is therefore necessary to develop a learning resource that encourages students to think actively and make decisions under the guidance of the lecturer. The purpose of this study is to produce a handout based on the guided discovery method that matches the needs of students. The research was conducted using the 4-D model and was limited to the define phase, i.e. the analysis of student requirements. Data were obtained from a questionnaire and analysed descriptively. The results showed that the average student requirement was 91.43%. It can be concluded that students need a handout based on the guided discovery method in the learning process.
NASA Astrophysics Data System (ADS)
Guo, X.; Li, Y.; Suo, T.; Liu, H.; Zhang, C.
2017-11-01
This paper proposes a method for de-blurring images captured during the dynamic deformation of materials. De-blurring is achieved with a dynamics-based approach, which is used to estimate the Point Spread Function (PSF) during the camera exposure window. The deconvolution process, which involves iterative matrix calculations over pixels, is then performed on the GPU to decrease the time cost. Compared with the Gauss method and the Lucy-Richardson method, it gives the best image restoration results. The proposed method has been evaluated using a Hopkinson bar loading system. In comparison to the blurry image, the proposed method successfully restores the image. Image processing applications also demonstrate that the de-blurring method can improve the accuracy and stability of digital image correlation measurements.
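For reference, the Lucy-Richardson baseline that the paper compares against can be sketched in a few lines once a PSF is available. The linear-motion PSF and random image below are placeholders; the paper's contribution is the dynamics-based PSF estimation and GPU deconvolution, neither of which is reproduced here.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    """Plain Richardson-Lucy deconvolution with a known (or pre-estimated) PSF."""
    estimate = np.full_like(blurred, blurred.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Synthetic horizontal motion-blur kernel, a crude stand-in for the PSF that a
# dynamics-based approach would estimate from the exposure window
psf = np.zeros((9, 9)); psf[4, :] = 1.0; psf /= psf.sum()
sharp = np.random.rand(128, 128)
blurred = fftconvolve(sharp, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```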
Assessing Commercial and Alternative Poultry Processing Methods using Microbiome Analyses
USDA-ARS?s Scientific Manuscript database
Assessing poultry processing methods/strategies has historically used culture-based methods to assess bacterial changes or reductions, both in terms of general microbial communities (e.g. total aerobic bacteria) or zoonotic pathogens of interest (e.g. Salmonella, Campylobacter). The advent of next ...
Molloy Elreda, Lauren; Coatsworth, J Douglas; Gest, Scott D; Ram, Nilam; Bamberger, Katharine
2016-11-01
Although the majority of evidence-based programs are designed for group delivery, group process and its role in participant outcomes have received little empirical attention. Data were collected from 20 groups of participants (94 early adolescents, 120 parents) enrolled in an efficacy trial of a mindfulness-based adaptation of the Strengthening Families Program (MSFP). Following each weekly session, participants reported on their relations to group members. Social network analysis and methods sensitive to intraindividual variability were integrated to examine weekly covariation between group process and participant progress, and to predict post-intervention outcomes from levels and changes in group process. Results demonstrate hypothesized links between network indices of group process and intervention outcomes and highlight the value of this unique analytic approach to studying intervention group process.
Mover Position Detection for PMTLM Based on Linear Hall Sensors through EKF Processing
Yan, Leyang; Zhang, Hui; Ye, Peiqing
2017-01-01
Accurate mover position is vital for a permanent magnet tubular linear motor (PMTLM) control system. In this paper, two linear Hall sensors are utilized to detect the mover position. However, Hall sensor signals contain third-order harmonics, creating errors in mover position detection. To filter out the third-order harmonics, a signal processing method based on the extended Kalman filter (EKF) is presented. The limitation of conventional processing method is first analyzed, and then EKF is adopted to detect the mover position. In the EKF model, the amplitude of the fundamental component and the percentage of the harmonic component are taken as state variables, and they can be estimated based solely on the measured sensor signals. Then, the harmonic component can be calculated and eliminated. The proposed method has the advantages of faster convergence, better stability and higher accuracy. Finally, experimental results validate the effectiveness and superiority of the proposed method. PMID:28383505
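A simplified version of the EKF idea, estimating the fundamental amplitude and third-harmonic fraction from a Hall-like signal, is sketched below. The random-walk state model, noise covariances, and the assumption that the electrical angle is known per sample are simplifications for illustration, not the authors' formulation.

```python
import numpy as np

def ekf_harmonic(z, theta, q=1e-6, r_meas=1e-3):
    """EKF tracking the fundamental amplitude A and third-harmonic fraction k of
    z[n] = A*sin(theta[n]) + k*A*sin(3*theta[n]), under a random-walk state model."""
    x = np.array([1.0, 0.0])          # initial [A, k]
    P = np.eye(2)
    Q = q * np.eye(2)
    hist = []
    for zk, th in zip(z, theta):
        P = P + Q                                            # predict (state unchanged)
        A, k = x
        h = A * np.sin(th) + k * A * np.sin(3 * th)          # predicted measurement
        H = np.array([[np.sin(th) + k * np.sin(3 * th), A * np.sin(3 * th)]])
        S = H @ P @ H.T + r_meas
        K = (P @ H.T) / S
        x = x + (K * (zk - h)).ravel()
        P = (np.eye(2) - K @ H) @ P
        hist.append(x.copy())
    return np.array(hist)

theta = np.linspace(0, 40 * np.pi, 4000)
signal = 2.0 * np.sin(theta) + 0.2 * 2.0 * np.sin(3 * theta)
signal += 0.01 * np.random.default_rng(0).normal(size=theta.size)
est = ekf_harmonic(signal, theta)
print(est[-1])   # should approach [2.0, 0.2]
```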
Differentiating location- and distance-based processes in memory for time: an ERP study.
Curran, Tim; Friedman, William J
2003-09-01
Memory for the time of events may benefit from reconstructive, location-based, and distance-based processes, but these processes are difficult to dissociate with behavioral methods. Neuropsychological research has emphasized the contribution of prefrontal brain mechanisms to memory for time but has not clearly differentiated location- from distance-based processing. The present experiment recorded event-related brain potentials (ERPs) while subjects completed two different temporal memory tests, designed to emphasize either location- or distance-based processing. The subjects' reports of location-based versus distance-based strategies and the reaction time pattern validated our experimental manipulation. Late (800-1,800 msec) frontal ERP effects were related to location-based processing. The results provide support for a two-process theory of memory for time and suggest that frontal memory mechanisms are specifically related to reconstructive, location-based processing.
Image Corruption Detection in Diffusion Tensor Imaging for Post-Processing and Real-Time Monitoring
Li, Yue; Shea, Steven M.; Lorenz, Christine H.; Jiang, Hangyi; Chou, Ming-Chung; Mori, Susumu
2013-01-01
Due to the high sensitivity of diffusion tensor imaging (DTI) to physiological motion, clinical DTI scans often suffer a significant amount of artifacts. Tensor-fitting-based, post-processing outlier rejection is often used to reduce the influence of motion artifacts. Although it is an effective approach, when there are multiple corrupted data, this method may no longer correctly identify and reject the corrupted data. In this paper, we introduce a new criterion called “corrected Inter-Slice Intensity Discontinuity” (cISID) to detect motion-induced artifacts. We compared the performance of algorithms using cISID and other existing methods with regard to artifact detection. The experimental results show that the integration of cISID into fitting-based methods significantly improves the retrospective detection performance at post-processing analysis. The performance of the cISID criterion, if used alone, was inferior to the fitting-based methods, but cISID could effectively identify severely corrupted images with a rapid calculation time. In the second part of this paper, an outlier rejection scheme was implemented on a scanner for real-time monitoring of image quality and reacquisition of the corrupted data. The real-time monitoring, based on cISID and followed by post-processing, fitting-based outlier rejection, could provide a robust environment for routine DTI studies. PMID:24204551
NASA Astrophysics Data System (ADS)
Timchenko, E. V.; Timchenko, P. E.; Pisareva, E. V.; Vlasov, M. Yu; Revin, V. V.; Klenova, N. A.; Asadova, A. A.
2017-01-01
In this article we present research results on the influence of the lyophilization process on the composition of hybrid materials based on bacterial cellulose (BC), obtained using the Raman spectroscopy method. The objects of research were BC and hybrids based on it, comprising various combinations of hydroxyapatite (HAP) and collagen. Our studies showed that the lyophilization process changes the ratio of the individual components. It was found that, for hybrid samples based on BC with added HAP, the intensity of the PO₄³⁻ peak in the region of 956 cm⁻¹ increases while its width decreases, which indicates a change in the degree of HAP crystallinity.
Wan, Xiaofang; Guo, Congbao; Feng, Jiarui; Yu, Teng; Chai, Xin-Sheng; Chen, Guangxue; Xie, Wei-Qi
2017-08-16
This study reports on a headspace-based gas chromatography (HS-GC) technique for determining the degree of substitution (DS) of cationic guar gum during the synthesis process. The method is based on the determination of 2,3-epoxypropyltrimethylammonium chloride in the process medium. After a modest pretreatment procedure, the sample was added to a headspace vial containing bicarbonate solution for measurement of evolved CO2 by HS-GC. The results showed that the method had a good precision (relative standard deviation of <3.60%) and accuracy for the 2,3-epoxypropyltrimethylammonium chloride measurement, with recoveries in the range of 96-102%, matching with the data obtained by a reference method, and were within 12% of the values obtained by the more arduous Kjeldahl method for the calculated DS of cationic guar gum. The HS-GC method requires only a small volume of sample and, thus, is suitable for determining the DS of cationic guar gum in laboratory-scale process-related applications.
A MUSIC-based method for SSVEP signal processing.
Chen, Kun; Liu, Quan; Ai, Qingsong; Zhou, Zude; Xie, Sheng Quan; Meng, Wei
2016-03-01
The research on brain-computer interfaces (BCIs) has become a hotspot in recent years because it offers disabled people a way to communicate with the outside world. Steady-state visual evoked potential (SSVEP)-based BCIs are more widely used because of their higher signal-to-noise ratio and greater information transfer rate compared with other BCI techniques. In this paper, a multiple signal classification (MUSIC) based method was proposed for multi-dimensional SSVEP feature extraction. Two-second data epochs from four electrodes achieved excellent accuracy rates, including idle state detection. In some asynchronous-mode experiments, the recognition accuracy reached up to 100%. The experimental results showed that the proposed method attains good frequency resolution. In most situations, the recognition accuracy was higher than that of canonical correlation analysis, which is a typical method for multi-channel SSVEP signal processing. Also, a virtual keyboard was successfully controlled by different subjects in an unshielded environment, which proved the feasibility of the proposed method for multi-dimensional SSVEP signal processing in practical applications.
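The MUSIC step can be illustrated on a single synthetic channel: delay-embed the signal, take the eigendecomposition of its covariance, and evaluate the pseudospectrum at the candidate stimulation frequencies. The embedding dimension, model order, and synthetic EEG below are assumptions; the paper's multi-electrode, multi-dimensional formulation is not reproduced.

```python
import numpy as np

def music_pseudospectrum(x, fs, freqs, model_order=6, m=40):
    """MUSIC pseudospectrum of a single-channel signal at candidate SSVEP frequencies.
    `m` is the embedding dimension and `model_order` the assumed signal-subspace size."""
    x = np.asarray(x, dtype=float)
    N = x.size - m + 1
    X = np.stack([x[i:i + N] for i in range(m)])   # delay-embedded data matrix
    R = X @ X.T / N                                # sample covariance
    w, V = np.linalg.eigh(R)                       # ascending eigenvalues
    En = V[:, : m - model_order]                   # noise subspace
    n = np.arange(m)
    p = []
    for f in freqs:
        a = np.exp(2j * np.pi * f / fs * n)        # steering vector
        p.append(1.0 / np.real(a.conj() @ En @ En.T @ a))
    return np.array(p)

fs = 250.0
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.8 * np.random.default_rng(2).normal(size=t.size)
cands = np.array([8.0, 10.0, 12.0, 15.0])          # stimulation frequencies
scores = music_pseudospectrum(eeg, fs, cands)
print(cands[np.argmax(scores)])                    # expected: 10.0
```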
Parallel computing method for simulating hydrological processesof large rivers under climate change
NASA Astrophysics Data System (ADS)
Wang, H.; Chen, Y.
2016-12-01
Climate change is one of the well-known global environmental problems in the world. It has altered the temporal and spatial distribution of watershed hydrological processes, especially in large rivers. Watershed hydrological process simulation based on physically based distributed hydrological models can give better results than lumped models. However, such simulation involves a large amount of computation, especially for large rivers, and thus requires huge computing resources that may not be steadily available to researchers, or only at high expense; this has seriously restricted research and application. To solve this problem, current parallel methods mostly parallelize the computation in the space and time dimensions: they calculate the natural features in order, based on the distributed hydrological model, by grid (unit, sub-basin) from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and parallel efficiency. It combines the temporal and spatial runoff characteristics of the distributed hydrological model with distributed data storage, an in-memory database, distributed computing, and parallel computing based on computing power units. The method has strong adaptability and extensibility, which means it can make full use of computing and storage resources under the condition of limited computing resources, and its computational efficiency improves linearly with the increase of computing resources. This method can satisfy the parallel computing requirements of hydrological process simulation for small, medium and large rivers.
UXDs-Driven Transferring Method from TRIZ Solution to Domain Solution
NASA Astrophysics Data System (ADS)
Ma, Lihui; Cao, Guozhong; Chang, Yunxia; Wei, Zihui; Ma, Kai
The translation process from TRIZ solutions to domain solutions is an analogy-based process. TRIZ solutions, such as the 40 inventive principles and the related cases, are intermediate solutions for domain problems. Unexpected discoveries (UXDs) are the key factors that trigger designers to generate new ideas for domain solutions. The algorithm of UXD resolving based on Means-Ends Analysis (MEA) is studied, and a UXDs-driven transferring method from TRIZ solutions to domain solutions is formed. A case study shows the application of the process.
Signal Processing Studies of a Simulated Laser Doppler Velocimetry-Based Acoustic Sensor
1990-10-17
investigated using spectral correlation methods. Results indicate that it may be possible to extend demonstrated LDV-based acoustic sensor sensitivities using higher order processing techniques. (Author)
Method for distributed agent-based non-expert simulation of manufacturing process behavior
Ivezic, Nenad; Potok, Thomas E.
2004-11-30
A method for distributed agent-based non-expert simulation of manufacturing process behavior on a single-processor computer comprises the steps of: object modeling a manufacturing technique having a plurality of processes; associating a distributed agent with each process; and programming each agent to respond to discrete events corresponding to the manufacturing technique, wherein each discrete event triggers a programmed response. The method can further comprise the step of transmitting the discrete events to each agent in a message loop. In addition, the programming step comprises the step of conditioning each agent to respond to a discrete event selected from the group consisting of a clock tick message, a resources received message, and a request for output production message.
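A minimal sketch of the claimed message loop might look like the following, with one agent per process reacting to the three event types named in the claim. The ProcessAgent class, its fields, and the event strings are hypothetical names introduced only for illustration.

```python
from dataclasses import dataclass

@dataclass
class ProcessAgent:
    """Agent associated with one manufacturing process; it reacts to the three
    discrete event types named in the claim."""
    name: str
    inventory: int = 0
    produced: int = 0

    def handle(self, event):
        if event == "clock_tick":
            pass                                   # advance internal timers
        elif event == "resources_received":
            self.inventory += 1
        elif event == "request_output" and self.inventory > 0:
            self.inventory -= 1
            self.produced += 1

agents = [ProcessAgent("cutting"), ProcessAgent("assembly")]
message_loop = ["resources_received", "clock_tick", "request_output"]

# Single-processor simulation: every discrete event is broadcast to each agent
for event in message_loop:
    for agent in agents:
        agent.handle(event)

print([(a.name, a.produced) for a in agents])
```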
Jordan, A; Chen, D; Yi, Q-L; Kanias, T; Gladwin, M T; Acker, J P
2016-07-01
Quality control (QC) data collected by blood services are used to monitor production and to ensure compliance with regulatory standards. We demonstrate how analysis of quality control data can be used to highlight the sources of variability within red cell concentrates (RCCs). We merged Canadian Blood Services QC data with manufacturing and donor records for 28 227 RCC between June 2011 and October 2014. Units were categorized based on processing method, bag manufacturer, donor age and donor sex, then assessed based on product characteristics: haemolysis and haemoglobin levels, unit volume, leucocyte count and haematocrit. Buffy-coat method (top/bottom)-processed units exhibited lower haemolysis than units processed using the whole-blood filtration method (top/top). Units from female donors exhibited lower haemolysis than male donations. Processing method influenced unit volume and the ratio of additive solution to residual plasma. Stored red blood cell characteristics are influenced by prestorage processing and donor factors. Understanding the relationship between processing, donors and RCC quality will help blood services to ensure the safety of transfused products. © 2016 International Society of Blood Transfusion.
An Improved Spectral Analysis Method for Fatigue Damage Assessment of Details in Liquid Cargo Tanks
NASA Astrophysics Data System (ADS)
Zhao, Peng-yuan; Huang, Xiao-ping
2018-03-01
Errors arise when the fatigue damage of details in liquid cargo tanks is calculated using the traditional spectral analysis method, which is based on a linear system, because of the nonlinear relationship between the dynamic stress and the ship acceleration. An improved spectral analysis method for assessing the fatigue damage of details in liquid cargo tanks is proposed in this paper. Based on the assumptions that the wave process can be simulated by summing sinusoidal waves at different frequencies and that the stress process can be simulated by summing the stress processes induced by these sinusoidal waves, the stress power spectral density (PSD) is calculated by expanding the stress processes induced by the sinusoidal waves into Fourier series and adding the amplitudes of the harmonic components with the same frequency. This analysis method can take the nonlinear relationship into consideration, and the fatigue damage is then calculated from the PSD of the stress. Taking an independent tank in an LNG carrier as an example, the accuracy of the improved spectral analysis method is shown to be much better than that of the traditional spectral analysis method by comparing the calculated damage results with those of the time-domain method. The proposed spectral analysis method is more accurate in calculating the fatigue damage of details in ship liquid cargo tanks.
Operational stability prediction in milling based on impact tests
NASA Astrophysics Data System (ADS)
Kiss, Adam K.; Hajdu, David; Bachrathy, Daniel; Stepan, Gabor
2018-03-01
Chatter detection is usually based on the analysis of signals measured during cutting processes. These techniques, however, often give ambiguous results close to the stability boundaries, which is a major limitation in industrial applications. In this paper, an experimental chatter detection method is proposed based on the system's response to perturbations during the machining process, and no system parameter identification is required. The proposed method identifies the dominant characteristic multiplier of the periodic dynamical system that models the milling process. The variation of the modulus of the largest characteristic multiplier can also be monitored, and the stability boundary can be precisely extrapolated while the manufacturing parameters are still kept in the chatter-free region. The method is derived in detail and also verified experimentally in a laboratory environment.
Netchacovitch, L; Thiry, J; De Bleye, C; Dumont, E; Cailletaud, J; Sacré, P-Y; Evrard, B; Hubert, Ph; Ziemons, E
2017-08-15
Since the Food and Drug Administration (FDA) published a guidance based on the Process Analytical Technology (PAT) approach, real-time analyses during manufacturing processes have been expanding rapidly. In this study, in-line Raman spectroscopic analyses were performed during a Hot-Melt Extrusion (HME) process to determine the Active Pharmaceutical Ingredient (API) content in real time. The method was validated based on both a univariate and a multivariate approach, and the analytical performances of the resulting models were compared. Moreover, on the one hand, in-line data were correlated with the true API concentration present in the sample, quantified by a previously validated off-line confocal Raman microspectroscopic method. On the other hand, in-line data were also treated as a function of the concentration based on the weighing of the components in the prepared mixture. The importance of developing quantitative methods based on the use of a reference method was thus highlighted. The method was validated according to the total error approach, fixing the acceptance limits at ±15% and the α risk at ±5%. This method meets the requirements of the European Pharmacopoeia for the uniformity of content of single-dose preparations. The validation proves that future results will be within the acceptance limits with a previously defined probability. Finally, the in-line validated method was compared with the off-line one to demonstrate its suitability for routine analyses. Copyright © 2017 Elsevier B.V. All rights reserved.
Adaptive Filtering Using Recurrent Neural Networks
NASA Technical Reports Server (NTRS)
Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.
2005-01-01
A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum-variance filters. In that they do not require statistical models of noise, the neural-network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.
Optimal filtering and Bayesian detection for friction-based diagnostics in machines.
Ray, L R; Townsend, J R; Ramasubramanian, A
2001-01-01
Non-model-based diagnostic methods typically rely on measured signals that must be empirically related to process behavior or incipient faults. The difficulty in interpreting a signal that is indirectly related to the fundamental process behavior is significant. This paper presents an integrated non-model and model-based approach to detecting when process behavior varies from a proposed model. The method, which is based on nonlinear filtering combined with maximum likelihood hypothesis testing, is applicable to dynamic systems whose constitutive model is well known, and whose process inputs are poorly known. Here, the method is applied to friction estimation and diagnosis during motion control in a rotating machine. A nonlinear observer estimates friction torque in a machine from shaft angular position measurements and the known input voltage to the motor. The resulting friction torque estimate can be analyzed directly for statistical abnormalities, or it can be directly compared to friction torque outputs of an applicable friction process model in order to diagnose faults or model variations. Nonlinear estimation of friction torque provides a variable on which to apply diagnostic methods that is directly related to model variations or faults. The method is evaluated experimentally by its ability to detect normal load variations in a closed-loop controlled motor driven inertia with bearing friction and an artificially-induced external line contact. Results show an ability to detect statistically significant changes in friction characteristics induced by normal load variations over a wide range of underlying friction behaviors.
Volumetric calibration of a plenoptic camera.
Hall, Elise Munz; Fahringer, Timothy W; Guildenbecher, Daniel R; Thurow, Brian S
2018-02-01
The volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
Optimal Control of Micro Grid Operation Mode Seamless Switching Based on Radau Allocation Method
NASA Astrophysics Data System (ADS)
Chen, Xiaomin; Wang, Gang
2017-05-01
The seamless switching of micro grid operation modes directly affects the safety and stability of its operation. For the switching process from island mode to grid-connected mode of a micro grid, we establish a dynamic optimization model based on two grid-connected inverters. We use the Radau allocation method to discretize the model and the Newton iteration method to obtain the optimal solution. Finally, we implement the optimization model in MATLAB and obtain the optimal control trajectory of the inverters.
Study on process evaluation model of students' learning in practical course
NASA Astrophysics Data System (ADS)
Huang, Jie; Liang, Pei; Shen, Wei-min; Ye, Youxiang
2017-08-01
In practical course teaching based on the project object method, the traditional evaluation methods, such as class attendance, assignments and exams, fail to give undergraduate students incentives to learn innovatively and autonomously. In this paper, elements such as creative innovation, teamwork, documentation and reporting were incorporated into the process evaluation method, and a process evaluation model was set up. Educational practice shows that the evaluation model makes the process evaluation of students' learning more comprehensive, accurate, and fair.
Utilizing Expert Knowledge in Estimating Future STS Costs
NASA Technical Reports Server (NTRS)
Fortner, David B.; Ruiz-Torres, Alex J.
2004-01-01
A method of estimating the costs of future space transportation systems (STSs) involves classical activity-based cost (ABC) modeling combined with systematic utilization of the knowledge and opinions of experts to extend the process-flow knowledge of existing systems to systems that involve new materials and/or new architectures. The expert knowledge is particularly helpful in filling gaps that arise in computational models of processes because of inconsistencies in historical cost data. Heretofore, the costs of planned STSs have been estimated following a "top-down" approach that tends to force the architectures of new systems to incorporate process flows like those of the space shuttles. In this ABC-based method, one makes assumptions about the processes but otherwise follows a "bottom-up" approach that does not force the new system architecture to incorporate a space-shuttle-like process flow. Prototype software has been developed to implement this method. Through further development of the software, it should be possible to extend the method beyond the space program to almost any setting in which there is a need to estimate the costs of a new system and to extend the applicable knowledge base in order to make the estimate.
A diagnostic prototype of the potable water subsystem of the Space Station Freedom ECLSS
NASA Technical Reports Server (NTRS)
Lukefahr, Brenda D.; Rochowiak, Daniel M.; Benson, Brian L.; Rogers, John S.; Mckee, James W.
1989-01-01
In analyzing the baseline Environmental Control and Life Support System (ECLSS) command and control architecture, various processes were found that would be enhanced by the use of knowledge-based system methods of implementation. The processes most suitable for prototyping using rule-based methods are documented, while domain knowledge resources and other practical considerations are examined. Requirements for a prototype rule-based software system are documented. These requirements reflect Space Station Freedom ECLSS software and hardware development efforts, and knowledge-based system requirements. A quick-prototype knowledge-based system environment is researched and developed.
Śliwińska, Anna; Burchart-Korol, Dorota; Smoliński, Adam
2017-01-01
This paper presents a life cycle assessment (LCA) of greenhouse gas emissions generated through methanol and electricity co-production system based on coal gasification technology. The analysis focuses on polygeneration technologies from which two products are produced, and thus, issues related to an allocation procedure for LCA are addressed in this paper. In the LCA, two methods were used: a 'system expansion' method based on two approaches, the 'avoided burdens approach' and 'direct system enlargement' methods and an 'allocation' method involving proportional partitioning based on physical relationships in a technological process. Cause-effect relationships in the analysed production process were identified, allowing for the identification of allocation factors. The 'system expansion' method involved expanding the analysis to include five additional variants of electricity production technologies in Poland (alternative technologies). This method revealed environmental consequences of implementation for the analysed technologies. It was found that the LCA of polygeneration technologies based on the 'system expansion' method generated a more complete source of information on environmental consequences than the 'allocation' method. The analysis shows that alternative technologies chosen for generating LCA results are crucial. Life cycle assessment was performed for the analysed, reference and variant alternative technologies. Comparative analysis was performed between the analysed technologies of methanol and electricity co-production from coal gasification as well as a reference technology of methanol production from the natural gas reforming process. Copyright © 2016 Elsevier B.V. All rights reserved.
Process safety improvement--quality and target zero.
Van Scyoc, Karl
2008-11-15
Process safety practitioners have adopted quality management principles in the design of process safety management systems with positive effect, yet achieving safety objectives sometimes remains a distant target. Companies regularly apply tools and methods that have roots in quality and productivity improvement. The "plan, do, check, act" improvement loop, statistical analysis of incidents (non-conformities), and performance trending popularized by Dr. Deming are now commonly used in the context of process safety. Significant advancements in HSE performance are reported after applying methods viewed as fundamental for quality management. In pursuit of continual process safety improvement, the paper examines various quality improvement methods and explores how methods intended for product quality can additionally be applied to the continual improvement of process safety. Methods such as Kaizen, Poka-yoke, and TRIZ, while long established for quality improvement, are quite unfamiliar in the process safety arena. These methods are discussed for application in improving both process safety leadership and field work team performance. Practical ways to advance process safety, based on these methods, are given.
NASA Astrophysics Data System (ADS)
Kovalenko, Iaroslav; Verron, Sylvain; Garan, Maryna; Šafka, Jiří; Moučka, Michal
2017-04-01
This article describes a method of in-situ process monitoring in a digital light processing (DLP) 3D printer. It is based on continuous measurement of the adhesion force between the printing surface and the bottom of the liquid resin bath. The method is suitable only for bottom-up DLP printers. The control system compares the force at the moment the printed layer separates from the bottom of the tank, when it reaches its largest value in the printing cycle, with a theoretical value. Implementing the suggested algorithm makes it possible to detect faults during the printing process.
Williamson, J; Ranyard, R; Cuthbert, L
2000-05-01
This study is an evaluation of a process tracing method developed for naturalistic decisions, in this case a consumer choice task. The method is based on Huber et al.'s (1997) Active Information Search (AIS) technique, but develops it by providing spoken rather than written answers to respondents' questions, and by including think aloud instructions. The technique is used within a conversation-based situation, rather than the respondent thinking aloud 'into an empty space', as is conventionally the case in think aloud techniques. The method results in a concurrent verbal protocol as respondents make their decisions, and a retrospective report in the form of a post-decision summary. The method was found to be virtually non-reactive in relation to think aloud, although the variable of Preliminary Attribute Elicitation showed some evidence of reactivity. This was a methodological evaluation, and as such the data reported are essentially descriptive. Nevertheless, the data obtained indicate that the method is capable of producing information about decision processes which could have theoretical importance in terms of evaluating models of decision-making.
NASA Technical Reports Server (NTRS)
Roth, Don J.; Hendricks, J. Lynne; Whalen, Mike F.; Bodis, James R.; Martin, Katherine
1996-01-01
This article describes the commercial implementation of ultrasonic velocity imaging methods developed and refined at NASA Lewis Research Center on the Sonix c-scan inspection system. Two velocity imaging methods were implemented: thickness-based and non-thickness-based reflector plate methods. The article demonstrates capabilities of the commercial implementation and gives the detailed operating procedures required for Sonix customers to achieve optimum velocity imaging results. This commercial implementation of velocity imaging provides a 100x speed increase in scanning and processing over the lab-based methods developed at LeRC. The significance of this cooperative effort is that the aerospace and other materials development-intensive industries which use extensive ultrasonic inspection for process control and failure analysis will now have an alternative, highly accurate imaging method commercially available.
NASA Astrophysics Data System (ADS)
Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar
2018-04-01
Model-based analysis methods are relatively new approaches for processing the output data of radiation detectors in nuclear medicine imaging and spectroscopy. A class of such methods requires fast algorithms for fitting pulse models to experimental data. In order to apply integral-equation-based methods to processing the preamplifier output pulses, this article proposes a fast and simple method for estimating the parameters of the well-known bi-exponential pulse model by solving an integral equation. The proposed method needs samples from only three points of the recorded pulse as well as its first- and second-order integrals. After optimizing the sampling points, the estimation results were calculated and compared with those of two traditional integration-based methods. Different noise levels (signal-to-noise ratios from 10 to 3000) were simulated for testing the functionality of the proposed method, which was then applied to a set of experimental pulses. Finally, the effect of quantization noise was assessed by studying different sampling rates. Promising results by the proposed method endorse it for future real-time applications.
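For comparison, a generic least-squares fit of the bi-exponential pulse model is sketched below using scipy's curve_fit; this is the kind of conventional fitting the integral-equation estimator is meant to avoid, not the proposed three-sample method itself. The pulse parameters and noise level are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def bi_exp(t, A, tau_rise, tau_decay, t0):
    """Bi-exponential pulse model; zero before the onset time t0."""
    s = np.clip(t - t0, 0.0, None)
    return A * (np.exp(-s / tau_decay) - np.exp(-s / tau_rise)) * (t >= t0)

# Synthetic preamplifier-like pulse with noise (illustrative parameters)
t = np.linspace(0, 10, 1000)
rng = np.random.default_rng(4)
y = bi_exp(t, 1.0, 0.2, 2.0, 1.0) + 0.01 * rng.normal(size=t.size)

p0 = (0.8, 0.3, 1.5, 0.8)                       # rough initial guess
popt, _ = curve_fit(bi_exp, t, y, p0=p0)
print(popt)                                      # close to [1.0, 0.2, 2.0, 1.0]
```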
Mode extraction on wind turbine blades via phase-based video motion estimation
NASA Astrophysics Data System (ADS)
Sarrafi, Aral; Poozesh, Peyman; Niezrecki, Christopher; Mao, Zhu
2017-04-01
In recent years, image processing techniques have been applied more often for structural dynamics identification, characterization, and structural health monitoring. Although it is a non-contact and full-field measurement method, image processing still has a long way to go to outperform other conventional sensing instruments (e.g., accelerometers, strain gauges, and laser vibrometers). However, the technologies associated with image processing are developing rapidly and gaining more attention in a variety of engineering applications, including structural dynamics identification and modal analysis. Among numerous motion estimation and image processing methods, phase-based video motion estimation is considered one of the most efficient regarding computational cost and noise robustness. In this paper, phase-based video motion estimation is adopted for structural dynamics characterization of a 2.3-meter-long Skystream wind turbine blade, and the modal parameters (natural frequencies, operating deflection shapes) are extracted. The phase-based video processing adopted in this paper provides reliable full-field 2-D motion information, which is beneficial for manufacturing certification and model updating at the design stage. The phase-based video motion estimation approach is demonstrated by processing data on a full-scale commercial structure (i.e. a wind turbine blade) with complex geometry and properties, and the results obtained have a good correlation with the modal parameters extracted from accelerometer measurements, especially for the first four bending modes, which have significant importance in blade characterization.
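The core of phase-based motion estimation, recovering a sub-pixel shift from the phase of a complex (Gabor) filter response, can be shown in one dimension. The filter frequency, bandwidth, and synthetic blade-edge profiles below are assumptions, and real usage would apply multi-scale 2-D filters to video frames.

```python
import numpy as np

def gabor_phase(signal, f0, sigma=8.0):
    """Local phase from a 1-D complex Gabor filter tuned to spatial frequency f0."""
    n = np.arange(-4 * int(sigma), 4 * int(sigma) + 1)
    kernel = np.exp(-n**2 / (2 * sigma**2)) * np.exp(2j * np.pi * f0 * n)
    return np.angle(np.convolve(signal, kernel, mode="same"))

def displacement_from_phase(frame_a, frame_b, f0=0.05):
    """Sub-pixel rightward shift of frame_b relative to frame_a, estimated from
    the wrapped phase difference: shift = -dphi / (2*pi*f0)."""
    dphi = np.angle(np.exp(1j * (gabor_phase(frame_b, f0) - gabor_phase(frame_a, f0))))
    return -np.median(dphi) / (2 * np.pi * f0)

# Two synthetic line profiles of a blade edge, the second shifted by 0.4 pixel
x = np.arange(512)
frame_a = np.sin(2 * np.pi * 0.05 * x)
frame_b = np.sin(2 * np.pi * 0.05 * (x - 0.4))
print(displacement_from_phase(frame_a, frame_b))   # close to 0.4
```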
NASA Astrophysics Data System (ADS)
Swastika, Windra
2017-03-01
A system for recognizing the nominal value of banknotes has been developed using an Artificial Neural Network (ANN). ANN with Back Propagation has one disadvantage: the learning process is very slow (or never reaches the target) when the numbers of iterations, weights and samples are large. One way to speed up the learning process is the Quickprop method. The Quickprop method is based on Newton's method and speeds up learning by assuming that the error E is a locally parabolic function of each weight; the goal is to drive the error gradient E' to zero. In our system, we use 5 nominal values, i.e. 1,000 IDR, 2,000 IDR, 5,000 IDR, 10,000 IDR and 50,000 IDR. One surface of each denomination was scanned and digitally processed. There are 40 patterns used as the training set in the ANN system. The effectiveness of the Quickprop method in the ANN system was validated by 2 factors: (1) the number of iterations required to reach an error below 0.1; and (2) the accuracy in predicting nominal values from the input. Our results show that the Quickprop method successfully shortens the learning process compared to the Back Propagation method. For 40 input patterns, the Quickprop method reached an error below 0.1 in only 20 iterations, while the Back Propagation method required 2000 iterations. The prediction accuracy of both methods is higher than 90%.
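Fahlman's Quickprop update itself is compact enough to sketch directly; the learning rate, growth factor, and the toy quadratic error below are illustrative choices, not the settings used for the banknote recognizer.

```python
import numpy as np

def quickprop_step(w, grad, prev_grad, prev_step, lr=0.1, mu=1.75):
    """One Quickprop update: jump toward the minimum of a parabola fitted from the
    current and previous gradients; the growth factor `mu` caps each step at mu times
    the previous one, and plain gradient descent is used when there is no previous step."""
    quick = grad / (prev_grad - grad + 1e-12) * prev_step
    quick = np.clip(quick, -mu * np.abs(prev_step), mu * np.abs(prev_step))
    step = np.where(prev_step != 0.0, quick, -lr * grad)
    return w + step, step

# Minimise a toy quadratic error E(w) = 0.5 * ||w||^2
w = np.array([2.0, -3.0])
prev_grad = np.zeros_like(w)
prev_step = np.zeros_like(w)
for _ in range(20):
    grad = w                       # dE/dw for the toy error
    w, prev_step = quickprop_step(w, grad, prev_grad, prev_step)
    prev_grad = grad
print(w)                           # approaches [0, 0]
```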
Conceptual Chemical Process Design for Sustainability. ...
This chapter examines the sustainable design of chemical processes, with a focus on conceptual design, hierarchical and short-cut methods, and analyses of process sustainability for alternatives. The chapter describes a methodology for incorporating process sustainability analyses throughout the conceptual design. Hierarchical and short-cut decision-making methods will be used to approach sustainability. An example showing a sustainability-based evaluation of chlor-alkali production processes is presented with economic analysis and five pollutants described as emissions. These emissions are analyzed according to their human toxicity potential by ingestion using the Waste Reduction Algorithm and a method based on US Environmental Protection Agency reference doses, with the addition of biodegradation for suitable components. Among the emissions, mercury as an element will not biodegrade, and results show the importance of this pollutant to the potential toxicity results and therefore the sustainability of the process design. The dominance of mercury in determining the long-term toxicity results when energy use is included suggests that all process system evaluations should (re)consider the role of mercury and other non-/slow-degrading pollutants in sustainability analyses. The cycling of nondegrading pollutants through the biosphere suggests the need for a complete analysis based on the economic, environmental, and social aspects of sustainability. Chapter reviews
Method for fabricating beryllium-based multilayer structures
Skulina, Kenneth M.; Bionta, Richard M.; Makowiecki, Daniel M.; Alford, Craig S.
2003-02-18
Beryllium-based multilayer structures and a process for fabricating beryllium-based multilayer mirrors, useful in the wavelength region greater than the beryllium K-edge (111 Å or 11.1 nm). The process includes alternating sputter deposition of beryllium and a metal, typically from the fifth row of the periodic table, such as niobium (Nb), molybdenum (Mo), ruthenium (Ru), and rhodium (Rh). The process includes not only the method of sputtering the materials, but also the industrial hygiene controls for safe handling of beryllium. The mirrors made in accordance with the process may be utilized in soft x-ray and extreme-ultraviolet projection lithography, which requires mirrors of high reflectivity (>60%) for x-rays in the range of 60-140 Å (6.0-14.0 nm).
Interferometric architectures based All-Optical logic design methods and their implementations
NASA Astrophysics Data System (ADS)
Singh, Karamdeep; Kaur, Gurmeet
2015-06-01
All-Optical Signal Processing is an emerging technology which can avoid the costly optical-electronic-optical (O-E-O) conversions that are usually compulsory in traditional electronic signal processing systems, thus greatly enhancing the operating bit rate, with added advantages such as electromagnetic interference immunity and low power consumption. In order to implement complex signal processing tasks, All-Optical logic gates are required as backbone elements. This review describes the advances in the field of All-Optical logic design methods based on interferometric architectures such as the Mach-Zehnder Interferometer (MZI), Sagnac Interferometers and the Ultrafast Non-Linear Interferometer (UNI). All-Optical logic implementations for the realization of arithmetic and signal processing applications based on each interferometric arrangement are also presented in a categorized manner.
Biomimetic design processes in architecture: morphogenetic and evolutionary computational design.
Menges, Achim
2012-03-01
Design computation has profound impact on architectural design methods. This paper explains how computational design enables the development of biomimetic design processes specific to architecture, and how they need to be significantly different from established biomimetic processes in engineering disciplines. The paper first explains the fundamental difference between computer-aided and computational design in architecture, as the understanding of this distinction is of critical importance for the research presented. Thereafter, the conceptual relation and possible transfer of principles from natural morphogenesis to design computation are introduced and the related developments of generative, feature-based, constraint-based, process-based and feedback-based computational design methods are presented. This morphogenetic design research is then related to exploratory evolutionary computation, followed by the presentation of two case studies focusing on the exemplary development of spatial envelope morphologies and urban block morphologies.
2012-01-01
Background While progress has been made to develop automatic segmentation techniques for mitochondria, there remains a need for more accurate and robust techniques to delineate mitochondria in serial blockface scanning electron microscopic data. Previously developed texture based methods are limited for solving this problem because texture alone is often not sufficient to identify mitochondria. This paper presents a new three-step method, the Cytoseg process, for automated segmentation of mitochondria contained in 3D electron microscopic volumes generated through serial block face scanning electron microscopic imaging. The method consists of three steps. The first is a random forest patch classification step operating directly on 2D image patches. The second step consists of contour-pair classification. At the final step, we introduce a method to automatically seed a level set operation with output from previous steps. Results We report accuracy of the Cytoseg process on three types of tissue and compare it to a previous method based on Radon-Like Features. At step 1, we show that the patch classifier identifies mitochondria texture but creates many false positive pixels. At step 2, our contour processing step produces contours and then filters them with a second classification step, helping to improve overall accuracy. We show that our final level set operation, which is automatically seeded with output from previous steps, helps to smooth the results. Overall, our results show that use of contour pair classification and level set operations improve segmentation accuracy beyond patch classification alone. We show that the Cytoseg process performs well compared to another modern technique based on Radon-Like Features. Conclusions We demonstrated that texture based methods for mitochondria segmentation can be enhanced with multiple steps that form an image processing pipeline. While we used a random-forest based patch classifier to recognize texture, it would be possible to replace this with other texture identifiers, and we plan to explore this in future work. PMID:22321695
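A minimal sketch of the kind of patch classification used in step 1 is given below: a random forest labels small 2D patches by whether their centre pixel belongs to a mitochondrion. The patch size, tree count, synthetic image and annotation mask are assumptions for illustration, not the published Cytoseg settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.image import extract_patches_2d

patch_size = (15, 15)
rng = np.random.default_rng(0)
image = rng.random((256, 256))           # placeholder EM slice
mask = rng.random((256, 256)) > 0.8      # placeholder binary annotation

# identical random_state gives patch pairs sampled at the same locations
patches = extract_patches_2d(image, patch_size, max_patches=2000, random_state=0)
labels = extract_patches_2d(mask.astype(float), patch_size, max_patches=2000,
                            random_state=0)[:, patch_size[0] // 2, patch_size[1] // 2]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(patches.reshape(len(patches), -1), labels > 0.5)

# probability that the centre pixel of a new patch belongs to a mitochondrion
probs = clf.predict_proba(patches[:10].reshape(10, -1))[:, 1]
```

In a pipeline such as the one described above, these per-pixel probabilities would then feed the contour-pair classification and the automatically seeded level set rather than being used directly as the final segmentation.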
ERIC Educational Resources Information Center
Jesness, Bradley
This paper examines concepts in information-processing theory which are likely to be relevant to development and characterizes the methods and data upon which the concepts are based. Among the concepts examined are those which have slight empirical grounds. Other concepts examined are those which seem to have empirical bases but which are…
NASA Astrophysics Data System (ADS)
Alfadhlani; Samadhi, T. M. A. Ari; Ma’ruf, Anas; Setiasyah Toha, Isa
2018-03-01
Assembly is a part of the manufacturing process that must be considered at the product design stage. Design for Assembly (DFA) is a method to evaluate a product design in order to make it simpler, easier and quicker to assemble, so that assembly cost is reduced. This article discusses a framework for developing a computer-based DFA method. The method is expected to aid the product designer in extracting data, evaluating the assembly process, and providing recommendations for product design improvement. Ideally, these three tasks are performed without an interactive process or user intervention, so that product design evaluation can be done automatically. Input for the proposed framework is a 3D solid engineering drawing. Product design evaluation is performed by: minimizing the number of components; generating assembly sequence alternatives; selecting the best assembly sequence based on the minimum number of assembly reorientations; and providing suggestions for design improvement.
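As a toy illustration of the sequence-selection criterion, the sketch below counts reorientations (changes of insertion direction) along candidate assembly sequences and picks the cheapest one. The part names and insertion directions are invented placeholders; a real implementation would derive feasible sequences from the 3D solid model and precedence constraints instead of enumerating all permutations.

```python
from itertools import permutations

# assumed insertion direction for each part
insertion_direction = {"base": "-z", "shaft": "-z", "cover": "+x", "bolt": "-z"}

def reorientations(sequence):
    # one reorientation each time the insertion direction changes
    dirs = [insertion_direction[p] for p in sequence]
    return sum(1 for a, b in zip(dirs, dirs[1:]) if a != b)

best = min(permutations(insertion_direction), key=reorientations)
print(best, reorientations(best))
```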
NASA Astrophysics Data System (ADS)
Budzan, Sebastian
2018-04-01
In this paper, an automatic method of grain detection and classification is presented. As input, it uses a single digital image obtained from the milling process of copper ore with a high-quality digital camera. Grinding is an extremely energy- and cost-consuming process, so granularity evaluation should be performed with high efficiency and low time consumption. The method proposed in this paper is based on three-stage image processing. First, all grains are detected using Seeded Region Growing (SRG) segmentation with a proposed adaptive thresholding based on the calculation of the Relative Standard Deviation (RSD). In the next step, the detection results are improved using information about the shape of the detected grains obtained from a distance map. Finally, each grain in the sample is classified into one of the predefined granularity classes. The quality of the proposed method was evaluated using samples of nominal granularity and compared with other methods.
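The sketch below illustrates the general idea of growing a region from a seed with a tolerance adapted to the local Relative Standard Deviation; the synthetic image, window size and scaling constant are assumptions for illustration and not the parameters used in the paper.

```python
import numpy as np
from skimage import segmentation

rng = np.random.default_rng(0)
gray = rng.normal(0.3, 0.02, (256, 256))                    # background
gray[80:160, 100:180] = rng.normal(0.7, 0.02, (80, 80))     # one bright "grain"

seed = (120, 140)                                # assumed seed inside the grain
r0, c0 = seed
win = gray[r0 - 15:r0 + 15, c0 - 15:c0 + 15]     # local window around the seed
rsd = win.std() / win.mean()                     # relative standard deviation
tolerance = 3 * rsd * win.mean()                 # adaptive growing tolerance

grain_mask = segmentation.flood(gray, seed, tolerance=tolerance)
print("grain pixels:", int(grain_mask.sum()))
```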
Multiple approaches to fine-grained indexing of the biomedical literature.
Neveol, Aurelie; Shooshan, Sonya E; Humphrey, Susanne M; Rindflesh, Thomas C; Aronson, Alan R
2007-01-01
The number of articles in the MEDLINE database is expected to increase tremendously in the coming years. To ensure that all these documents are indexed with continuing high quality, it is necessary to develop tools and methods that help the indexers in their daily task. We present three methods addressing a novel aspect of automatic indexing of the biomedical literature, namely producing MeSH main heading/subheading pair recommendations. The methods (dictionary-based, post-processing rules, and Natural Language Processing rules) are described and evaluated on a genetics-related corpus. The best overall performance is obtained for the subheading genetics (70% precision and 17% recall with post-processing rules, 48% precision and 37% recall with the dictionary-based method). Future work will address extending this work to all MeSH subheadings and a more thorough study of method combination.
ERIC Educational Resources Information Center
Voet, Michiel; De Wever, Bram
2017-01-01
The present study explores secondary school history teachers' knowledge of inquiry methods. To do so, a process model, outlining five core cognitive processes of inquiry in the history classroom, was developed based on a review of the literature. This process model was then used to analyze think-aloud protocols of 20 teachers' reasoning during an…
Apparatus for decoupled thermo-photocatalytic pollution control
Tabatabaie-Raissi, Ali; Muradov, Nazim Z.; Martin, Eric
2003-04-22
A new method for design and scale-up of photocatalytic and thermocatalytic processes is disclosed. The method is based on optimizing photoprocess energetics by decoupling of the process energy efficiency from the DRE for target contaminants. The technique is applicable to photo-thermocatalytic reactor design and scale-up. At low irradiance levels, the method is based on the implementation of low pressure drop biopolymeric and synthetic polymeric support for titanium dioxide and other band-gap media. At high irradiance levels, the method utilizes multifunctional metal oxide aerogels and other media within a novel rotating fluidized particle bed reactor.
Target Detection and Classification Using Seismic and PIR Sensors
2012-06-01
This paper presents a wavelet-based method for target detection and classification. The proposed method has been validated on data sets of seismic and PIR sensor measurements. The work makes use of a wavelet-based feature extraction method called Symbolic Dynamic Filtering (SDF) [12]-[14].
Ratcliffe, M B; Khan, J H; Magee, K M; McElhinney, D B; Hubner, C
2000-06-01
Using a Java-based intranet program (applet), we collected postoperative process data after coronary artery bypass grafting. A Java-based applet was developed and deployed on a hospital intranet. Briefly, the nurse entered patient process data using a point and click interface. The applet generated a nursing note, and process data were saved in a Microsoft Access database. In 10 patients, this method was validated by comparison with a retrospective chart review. In 45 consecutive patients, weekly control charts were generated from the data. When aberrations from the pathway occurred, feedback was initiated to restore the goals of the critical pathway. The intranet process data collection method was verified by a manual chart review with 98% sensitivity. The control charts for time to extubation, intensive care unit stay, and hospital stay showed a deviation from critical pathway goals after the first 20 patients. Feedback modulation was associated with a return to critical pathway goals. Java-based applets are inexpensive and can collect accurate postoperative process data, identify critical pathway deviations, and allow timely feedback of process data.
2012-01-01
Background Optimization of the clinical care process by integration of evidence-based knowledge is one of the active components in care pathways. When studying the impact of a care pathway by using a cluster-randomized design, standardization of the care pathway intervention is crucial. This methodology paper describes the development of the clinical content of an evidence-based care pathway for in-hospital management of chronic obstructive pulmonary disease (COPD) exacerbation in the context of a cluster-randomized controlled trial (cRCT) on care pathway effectiveness. Methods The clinical content of a care pathway for COPD exacerbation was developed based on recognized process design and guideline development methods. Subsequently, based on the COPD case study, a generalized eight-step method was designed to support the development of the clinical content of an evidence-based care pathway. Results A set of 38 evidence-based key interventions and a set of 24 process and 15 outcome indicators were developed in eight different steps. Nine Belgian multidisciplinary teams piloted both the set of key interventions and indicators. The key intervention set was judged by the teams as being valid and clinically applicable. In addition, the pilot study showed that the indicators were feasible for the involved clinicians and patients. Conclusions The set of 38 key interventions and the set of process and outcome indicators were found to be appropriate for the development and standardization of the clinical content of the COPD care pathway in the context of a cRCT on pathway effectiveness. The developed eight-step method may facilitate multidisciplinary teams caring for other patient populations in designing the clinical content of their future care pathways. PMID:23190552
Optimized Laplacian image sharpening algorithm based on graphic processing unit
NASA Astrophysics Data System (ADS)
Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah
2014-12-01
In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening performed on a CPU is considerably time-consuming, especially for large pictures. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphic Processing Units (GPUs), and analyze the impact of picture size on performance as well as the relationship between data transfer time and parallel computing time. Further, according to the different features of different memory types, an improved scheme of our method is developed, which exploits shared memory in the GPU instead of global memory and further increases the efficiency. Experimental results show that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
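For reference, a plain CPU sketch of the underlying operation is shown below: each output pixel is the input minus a weighted Laplacian of its neighbourhood, so every pixel can be computed independently, which is what makes the GPU mapping straightforward. The kernel weights, strength factor and synthetic image are illustrative choices, not the paper's exact implementation.

```python
import numpy as np
from scipy.ndimage import convolve

laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def sharpen(img, strength=1.0):
    # subtracting the Laplacian boosts edges relative to smooth regions
    lap = convolve(img.astype(float), laplacian, mode="nearest")
    return np.clip(img - strength * lap, 0, 255)

img = np.random.randint(0, 256, (512, 512)).astype(float)   # placeholder image
sharp = sharpen(img)
```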
Internal passivation of Al-based microchannel devices by electrochemical anodization
NASA Astrophysics Data System (ADS)
Hymel, Paul J.; Guan, D. S.; Mu, Yang; Meng, W. J.; Meng, Andrew C.
2015-02-01
Metal-based microchannel devices have wide-ranging applications. We report here a method to electrochemically anodize the internal surfaces of Al microchannels, with the purpose of forming a uniform and dense anodic aluminum oxide (AAO) layer on microchannel internal surfaces for chemical passivation and corrosion resistance. A pulsed electrolyte flow was utilized to emulate conventional anodization processes while replenishing depleted ionic species within Al microtubes and microchannels. After anodization, the AAO film was sealed in hot water to close the nanopores. Focused ion beam (FIB) sectioning, scanning electron microscopy (SEM), and energy dispersive spectroscopy (EDS) were utilized to characterize the AAO morphology and composition. Potentiodynamic polarization corrosion testing of anodized Al microtube half-sections in a NaCl solution showed an order of magnitude decrease in anodic corrosion current when compared to an unanodized tube. The surface passivation process was repeated for Al-based microchannel heat exchangers. A corrosion testing method based on the anodization process showed higher resistance to ion transport through the anodized specimens than unanodized specimens, thus verifying the internal anodization and sealing process as a viable method for surface passivation of Al microchannel devices.
Ellingson, Laura D; Hibbing, Paul R; Kim, Youngwon; Frey-Law, Laura A; Saint-Maurice, Pedro F; Welk, Gregory J
2017-06-01
The wrist is increasingly being used as the preferred site for objectively assessing physical activity but the relative accuracy of processing methods for wrist data has not been determined. This study evaluates the validity of four processing methods for wrist-worn ActiGraph (AG) data against energy expenditure (EE) measured using a portable metabolic analyzer (OM; Oxycon mobile) and the Compendium of physical activity. Fifty-one adults (ages 18-40) completed 15 activities ranging from sedentary to vigorous in a laboratory setting while wearing an AG and the OM. Estimates of EE and categorization of activity intensity were obtained from the AG using a linear method based on Hildebrand cutpoints (HLM), a non-linear modification of this method (HNLM), and two methods developed by Staudenmayer based on a Linear Model (SLM) and using random forest (SRF). Estimated EE and classification accuracy were compared to the OM and Compendium using Bland-Altman plots, equivalence testing, mean absolute percent error (MAPE), and Kappa statistics. Overall, classification agreement with the Compendium was similar across methods ranging from a Kappa of 0.46 (HLM) to 0.54 (HNLM). However, specificity and sensitivity varied by method and intensity, ranging from a sensitivity of 0% (HLM for sedentary) to a specificity of ~99% for all methods for vigorous. None of the methods was significantly equivalent to the OM (p > 0.05). Across activities, none of the methods evaluated had a high level of agreement with criterion measures. Additional research is needed to further refine the accuracy of processing wrist-worn accelerometer data.
NASA Astrophysics Data System (ADS)
Takemine, S.; Rikimaru, A.; Takahashi, K.
Rice is one of the staple foods in the world. High-quality rice production requires periodically collecting rice growth data to control the growth of the crop. The plant height, the number of stems and the leaf color are well-known parameters indicating rice growth. A rice growth diagnosis method based on these parameters is used operationally in Japan, although collecting these parameters by field survey takes a lot of labor and time. Recently, a laborsaving method for rice growth diagnosis has been proposed which is based on the vegetation cover rate of rice. The vegetation cover rate of rice is calculated by discriminating rice plant areas in a digital camera image photographed in the nadir direction. Discrimination of rice plant areas in the image was done by automatic binarization processing. However, with a vegetation cover rate calculation method that depends only on the automatic binarization process, there is a possibility that the vegetation cover rate decreases even as the rice grows. In this paper, a calculation method of the vegetation cover rate is proposed which is based on the automatic binarization process and refers to growth hysteresis information. For several images obtained by field survey during the rice growing season, the vegetation cover rate was calculated by the conventional automatic binarization processing and by the proposed method, and the vegetation cover rate of both methods was compared with a reference value obtained by visual interpretation. As a result of the comparison, the accuracy of discriminating rice plant areas was increased by the proposed method.
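The sketch below illustrates the conventional automatic-binarization step: plant pixels are separated from background in a nadir photograph and the vegetation cover rate is the fraction of plant pixels. The excess-green index and Otsu thresholding are common choices for this kind of binarization; the synthetic image stands in for a field photograph, and the paper's hysteresis-based refinement is not reproduced here.

```python
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)
rgb = rng.uniform(0.2, 0.4, (200, 200, 3))       # soil/water background
rgb[:, :80, 1] += 0.4                            # greener strip standing in for rice plants

r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
exg = 2 * g - r - b                              # excess-green index
plant = exg > threshold_otsu(exg)                # automatic binarization
cover_rate = plant.mean()
print(f"vegetation cover rate: {cover_rate:.1%}")
```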
Joint Estimation of Time-Frequency Signature and DOA Based on STFD for Multicomponent Chirp Signals
Zhao, Ziyue; Liu, Congfeng
2014-01-01
In the study of the joint estimation of time-frequency signature and direction of arrival (DOA) for multicomponent chirp signals, an estimation method based on spatial time-frequency distributions (STFDs) is proposed in this paper. Firstly, array signal model for multicomponent chirp signals is presented and then array processing is applied in time-frequency analysis to mitigate cross-terms. According to the results of the array processing, Hough transform is performed and the estimation of time-frequency signature is obtained. Subsequently, subspace method for DOA estimation based on STFD matrix is achieved. Simulation results demonstrate the validity of the proposed method. PMID:27382610
Liu, Shuai; Li, Fei; Li, Yan; Li, Weifei; Xu, Jinkai; Du, Hong
2017-07-31
Aconitum species are well-known for their medicinal value and high lethal toxicity in many Asian countries, notably China, India and Japan. The tubers are only used after processing in Traditional Chinese Medicine (TCM). They can be used safely and effectively with the methods of decoction, rational compatibility, and correct processing based on traditional experiences and new technologies. However, high toxicological risks still remain due to improper preparation and usage in China and other countries. Therefore, there is a need to clarify the methods of processing and compatibility to ensure their effectiveness and minimize the potential risks. The aim of this paper is to provide a review of traditional and current methods used to potentially reduce toxicity of Aconitum roots in TCM. The use of Aconitum has been investigated and the methods of processing and compatibility throughout history, including recent research, have been reviewed. Using the methods of rational preparation, reasonable compatibility, and proper processing based on traditional experiences and new technologies can enable Aconitum to be used safely and effectively.
Balancing Act: How to Capture Knowledge without Killing It.
ERIC Educational Resources Information Center
Brown, John Seely; Duguid, Paul
2000-01-01
Top-down processes for institutionalizing ideas can stifle creativity. Xerox researchers learned how to combine process-based and practice-based methods in order to disseminate best practices from a community of repair technicians. (JOW)
Baradez, Marc-Olivier; Marshall, Damian
2011-01-01
The transition from traditional culture methods towards bioreactor based bioprocessing to produce cells in commercially viable quantities for cell therapy applications requires the development of robust methods to ensure the quality of the cells produced. Standard methods for measuring cell quality parameters such as viability provide only limited information making process monitoring and optimisation difficult. Here we describe a 3D image-based approach to develop cell distribution maps which can be used to simultaneously measure the number, confluency and morphology of cells attached to microcarriers in a stirred tank bioreactor. The accuracy of the cell distribution measurements is validated using in silico modelling of synthetic image datasets and is shown to have an accuracy >90%. Using the cell distribution mapping process and principal component analysis we show how cell growth can be quantitatively monitored over a 13 day bioreactor culture period and how changes to manufacture processes such as initial cell seeding density can significantly influence cell morphology and the rate at which cells are produced. Taken together, these results demonstrate how image-based analysis can be incorporated in cell quality control processes facilitating the transition towards bioreactor based manufacture for clinical grade cells. PMID:22028809
Cleveland, Emily C; Albano, Nicholas J; Hazen, Alexes
2015-10-01
The use of autologous adipose tissue harvested through liposuction techniques for soft-tissue augmentation has become commonplace among cosmetic and reconstructive surgeons alike. Despite its longstanding use in the plastic surgery community, substantial controversy remains regarding the optimal method of processing harvested lipoaspirate before grafting. This evidence-based review builds on prior examinations of the literature to evaluate both established and novel methods for lipoaspirate processing. A comprehensive, systematic review of the literature was conducted using Ovid MEDLINE in January of 2015 to identify all relevant publications subsequent to the most recent review on this topic. Randomized controlled trials, clinical trials, and comparative studies comparing at least two of the following techniques were included: decanting, cotton gauze (Telfa) rolling, centrifugation, washing, filtration, and stromal vascular fraction isolation. Nine articles comparing various methods of processing human fat for autologous grafting were selected based on inclusion and exclusion criteria. Five compared established processing techniques (i.e., decanting, cotton gauze rolling, centrifugation, and washing) and four publications evaluated newer proprietary technologies, including washing, filtration, and/or methods to isolate stromal vascular fraction. The authors failed to find compelling evidence to advocate a single technique as the superior method for processing lipoaspirate in preparation for autologous fat grafting. A paucity of high-quality data continues to limit the clinician's ability to determine the optimal method for purifying harvested adipose tissue. Novel automated technologies hold promise, particularly for large-volume fat grafting; however, extensive additional research is required to understand their true utility and efficiency in clinical settings.
Zhang, Hanyuan; Tian, Xuemin; Deng, Xiaogang; Cao, Yuping
2018-05-16
As an attractive nonlinear dynamic data analysis tool, global preserving kernel slow feature analysis (GKSFA) has achieved great success in extracting the high nonlinearity and inherently time-varying dynamics of batch processes. However, GKSFA is an unsupervised feature extraction method and lacks the ability to utilize batch process class label information, which may not offer the most effective means for dealing with batch process monitoring. To overcome this problem, we propose a novel batch process monitoring method based on the modified GKSFA, referred to as discriminant global preserving kernel slow feature analysis (DGKSFA), by closely integrating discriminant analysis and GKSFA. The proposed DGKSFA method can extract discriminant features of the batch process as well as preserve global and local geometrical structure information of the observed data. For the purpose of fault detection, a monitoring statistic is constructed based on the distance between the optimal kernel feature vectors of test data and normal data. To tackle the challenging issue of nonlinear fault variable identification, a new nonlinear contribution plot method is also developed to help identify the fault variable after a fault is detected, which is derived from the idea of variable pseudo-sample trajectory projection in the DGKSFA nonlinear biplot. Simulation results conducted on a numerical nonlinear dynamic system and the benchmark fed-batch penicillin fermentation process demonstrate that the proposed process monitoring and fault diagnosis approach can effectively detect faults and distinguish fault variables from normal variables.
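A generic sketch of the fault-detection logic described above is shown below: the monitoring statistic is a distance between a test sample's feature vector and the centre of normal-operation features, flagged against a control limit estimated from normal data. The feature extractor itself (DGKSFA) is not reproduced; the arrays below are placeholders standing in for its outputs.

```python
import numpy as np

rng = np.random.default_rng(1)
normal_features = rng.normal(size=(500, 5))          # placeholder training features
test_features = rng.normal(loc=0.5, size=(50, 5))    # placeholder test features

center = normal_features.mean(axis=0)
d_normal = np.linalg.norm(normal_features - center, axis=1)
limit = np.quantile(d_normal, 0.99)                   # 99% control limit from normal data

d_test = np.linalg.norm(test_features - center, axis=1)
alarms = d_test > limit
print(f"{alarms.sum()} of {len(alarms)} test samples flagged as faulty")
```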
Edge Detection Method Based on Neural Networks for COMS MI Images
NASA Astrophysics Data System (ADS)
Lee, Jin-Ho; Park, Eun-Bin; Woo, Sun-Hee
2016-12-01
Communication, Ocean And Meteorological Satellite (COMS) Meteorological Imager (MI) images are processed for radiometric and geometric correction from raw image data. When intermediate image data are matched and compared with reference landmark images in the geometrical correction process, various techniques for edge detection can be applied. It is essential to have a precise and correct edged image in this process, since its matching with the reference is directly related to the accuracy of the ground station output images. An edge detection method based on neural networks is applied for the ground processing of MI images for obtaining sharp edges in the correct positions. The simulation results are analyzed and characterized by comparing them with the results of conventional methods, such as Sobel and Canny filters.
Image processing based detection of lung cancer on CT scan images
NASA Astrophysics Data System (ADS)
Abdillah, Bariqi; Bustamam, Alhadi; Sarwinda, Devvi
2017-10-01
In this paper, we implement and analyze an image processing method for the detection of lung cancer. Image processing techniques are widely used in several medical problems for picture enhancement in the detection phase to support early medical treatment. In this research we propose a detection method for lung cancer based on image segmentation. Image segmentation is one of the intermediate-level tasks in image processing. Marker-controlled watershed and a region growing approach are used to segment the CT scan image. The detection phase consists of image enhancement using a Gabor filter, image segmentation, and feature extraction. The experimental results demonstrate the effectiveness of our approach and show that the best approach for main feature detection is the watershed with masking method, which has high accuracy and is robust.
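A minimal sketch of marker-controlled watershed segmentation, one of the approaches named above, is given below. The synthetic slice and the marker thresholds are assumptions for illustration; the Gabor-based enhancement and the feature extraction from the detected regions are not shown.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel, threshold_otsu
from skimage.segmentation import watershed

ct_slice = np.random.random((256, 256))      # placeholder grayscale CT slice

elevation = sobel(ct_slice)                   # gradient image for the watershed
t = threshold_otsu(ct_slice)
markers = np.zeros_like(ct_slice, dtype=int)
markers[ct_slice < 0.5 * t] = 1               # confident background
markers[ct_slice > 1.5 * t] = 2               # confident foreground (candidate region)

labels = watershed(elevation, markers)        # regions grown from the markers
candidate_mask = labels == 2
regions, n = ndi.label(candidate_mask)
print("candidate regions:", n)
```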
Zeng, Xiancheng; Hu, Xiangqian; Yang, Weitao
2012-12-11
A fragment-based fractional number of electron (FNE) approach is developed to study entire electron transfer (ET) processes from the electron donor region to the acceptor region in the condensed phase. Both regions are described by the density-fragment interaction (DFI) method, while the FNE serves as an efficient ET order parameter for simulating the electron transfer process. In association with the QM/MM energy expression, the DFI-FNE method is demonstrated to describe ET processes robustly, with the Ru2+-Ru3+ self-exchange ET as a proof-of-concept example. This method allows for systematic calculations of redox free energies, reorganization energies, electronic couplings, and absolute ET rate constants within the Marcus regime.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia, Humberto E.; Simpson, Michael F.; Lin, Wen-Chiao
In this paper, we apply an advanced safeguards approach and associated methods for process monitoring to a hypothetical nuclear material processing system. The assessment regarding the state of the processing facility is conducted at a system-centric level formulated in a hybrid framework. This utilizes an architecture for integrating both time- and event-driven data and analysis for decision making. While the time-driven layers of the proposed architecture encompass more traditional process monitoring methods based on time series data and analysis, the event-driven layers encompass operation monitoring methods based on discrete event data and analysis. By integrating process- and operation-related information and methodologies within a unified framework, the task of anomaly detection is greatly improved. This is because decision-making can benefit not only from known time-series relationships among measured signals but also from known event sequence relationships among generated events. This available knowledge at both the time series and discrete event layers can then be effectively used to synthesize observation solutions that optimally balance sensor and data processing requirements. The application of the proposed approach is then implemented on an illustrative monitored system based on pyroprocessing and results are discussed.
NASA Astrophysics Data System (ADS)
Hsu, Kuo-Hsien
2012-11-01
Formosat-2 imagery is a kind of high-spatial-resolution (2 meters GSD) remote sensing satellite data, which includes one panchromatic band and four multispectral bands (blue, green, red, near-infrared). An essential step in the daily processing of received Formosat-2 images is to estimate the cloud statistic of each image using an Automatic Cloud Coverage Assessment (ACCA) algorithm. The cloud statistic is subsequently recorded as important metadata for the image product catalog. In this paper, we propose an ACCA method with two consecutive stages: pre-processing and post-processing analysis. For the pre-processing analysis, unsupervised K-means classification, Sobel's method, a thresholding method, non-cloudy pixel reexamination, and a cross-band filter method are implemented in sequence for cloud statistic determination. For the post-processing analysis, the box-counting fractal method is implemented. In other words, the cloud statistic is first determined via the pre-processing analysis, and the correctness of the cloud statistic for the different spectral bands is then cross-examined qualitatively and quantitatively via the post-processing analysis. The selection of an appropriate thresholding method is critical to the result of the ACCA method. Therefore, in this work, we first conduct a series of experiments on clustering-based and spatial thresholding methods, including the Otsu, Local Entropy (LE), Joint Entropy (JE), Global Entropy (GE), and Global Relative Entropy (GRE) methods, for performance comparison. The results show that the Otsu and GE methods both perform better than the others for Formosat-2 images. Additionally, our proposed ACCA method, with Otsu's method selected as the thresholding method, successfully extracted the cloudy pixels of Formosat-2 images for accurate cloud statistic estimation.
NASA Astrophysics Data System (ADS)
Zakharov, S. M.; Manykin, Eduard A.
1995-02-01
The principles of optical processing based on dynamic spatial-temporal properties of two-pulse photon echo signals are considered. The properties of a resonant medium as an on-line filter of temporal and spatial frequencies are discussed. These properties are due to the sensitivity of such a medium to the Fourier spectrum of the second exciting pulse. Degeneracy of quantum resonant systems, demonstrated by the coherent response dependence on the square of the amplitude of the second pulse, can be used for 'simultaneous' correlation processing of optical 'signals'. Various methods for the processing of the Fourier optical image are discussed.
Fukunishi, Yoshifumi
2010-01-01
For fragment-based drug development, both hit (active) compound prediction and docking-pose (protein-ligand complex structure) prediction of the hit compound are important, since chemical modification (fragment linking, fragment evolution) subsequent to the hit discovery must be performed based on the protein-ligand complex structure. However, the naïve protein-compound docking calculation shows poor accuracy in terms of docking-pose prediction. Thus, post-processing of the protein-compound docking is necessary. Recently, several methods for the post-processing of protein-compound docking have been proposed. In FBDD, the compounds are smaller than those for conventional drug screening. This makes it difficult to perform the protein-compound docking calculation. A method to avoid this problem has been reported. Protein-ligand binding free energy estimation is useful to reduce the procedures involved in the chemical modification of the hit fragment. Several prediction methods have been proposed for high-accuracy estimation of protein-ligand binding free energy. This paper summarizes the various computational methods proposed for docking-pose prediction and their usefulness in FBDD.
NASA Astrophysics Data System (ADS)
Shao, Rongjun; Qiu, Lirong; Yang, Jiamiao; Zhao, Weiqian; Zhang, Xin
2013-12-01
We have proposed a component parameter measuring method based on the differential confocal focusing theory. In order to improve the positioning precision of the laser differential confocal component parameters measurement system (LDDCPMS), this paper provides a data processing method based on tracking the light spot. To reduce the error caused by movement of the light spot while the axial intensity signal is collected, an image centroiding algorithm is used to find and track the center of the Airy disk in the images collected by the laser differential confocal system. To weaken the influence of higher-harmonic noise during the measurement, a Gaussian filter is used to process the axial intensity signal. Finally, the zero point corresponding to the focus of the objective in the differential confocal system is obtained by linear fitting of the differential confocal axial intensity data. Preliminary experiments indicate that the method based on tracking the light spot can accurately collect the axial intensity response signal of the virtual pinhole and improve the anti-interference ability of the system, thus improving the system positioning accuracy.
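The spot-tracking idea reduces, at its core, to an intensity-weighted centroid of each camera frame; the sketch below shows such a centroid estimate on a synthetic frame. The background-subtraction step and the synthetic spot are illustrative assumptions, not the exact algorithm of the LDDCPMS.

```python
import numpy as np

def spot_centroid(frame, background=None):
    # intensity-weighted centroid of the bright spot (row, column)
    img = frame.astype(float)
    if background is not None:
        img = np.clip(img - background, 0, None)   # keep dark pixels from biasing the estimate
    total = img.sum()
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / total, (cols * img).sum() / total

frame = np.zeros((64, 64))
frame[30:34, 40:44] = 1.0               # synthetic bright spot
print(spot_centroid(frame))             # approximately (31.5, 41.5)
```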
Modelling Coastal Cliff Recession Based on the GIM-DDD Method
NASA Astrophysics Data System (ADS)
Gong, Bin; Wang, Shanyong; Sloan, Scott William; Sheng, Daichao; Tang, Chun'an
2018-04-01
The unpredictable and instantaneous collapse behaviour of coastal rocky cliffs may cause damage that extends significantly beyond the area of failure. Gravitational movements that occur during coastal cliff recession involve two major stages: the small deformation stage and the large displacement stage. In this paper, a method of simulating the entire progressive failure process of coastal rocky cliffs is developed based on the gravity increase method (GIM), the rock failure process analysis method and the discontinuous deformation analysis method, and it is referred to as the GIM-DDD method. The small deformation stage, which includes crack initiation, propagation and coalescence processes, and the large displacement stage, which includes block translation and rotation processes during the rocky cliff collapse, are modelled using the GIM-DDD method. In addition, acoustic emissions, stress field variations, crack propagation and failure mode characteristics are further analysed to provide insights that can be used to predict, prevent and minimize potential economic losses and casualties. The calculation and analytical results are consistent with previous studies, which indicate that the developed method provides an effective and reliable approach for performing rocky cliff stability evaluations and coastal cliff recession analyses and has considerable potential for improving the safety and protection of seaside cliff areas.
NASA Astrophysics Data System (ADS)
Ratnadewi; Pramono Adhie, Roy; Hutama, Yonatan; Saleh Ahmar, A.; Setiawan, M. I.
2018-01-01
Cryptography is a method used to create secure communication by manipulating messages so that only the intended party can know their content. Some of the most commonly used cryptography methods to protect sent messages, especially in the form of text, are the DES and 3DES methods. This research explains the DES and 3DES cryptography methods and their use for securing data stored in smart cards that work in an NFC-based communication system. The topics covered in this research are the working principles of the DES and 3DES cryptography methods in protecting data, and the software engineering of an application, written in the C++ programming language, to realize and test the performance of the DES and 3DES methods in writing encrypted data to smart cards and reading and decrypting data from smart cards. The execution time of writing data to and reading data from the smart card is shorter with the DES cryptography method than with the 3DES method.
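As a hedged illustration of the encrypt-before-write and decrypt-after-read idea, the sketch below uses the PyCryptodome library rather than the authors' C++ implementation. The keys, the mode (ECB, chosen only for brevity) and the record are illustrative placeholders; a deployed system should use proper key management and an authenticated mode.

```python
from Crypto.Cipher import DES, DES3
from Crypto.Util.Padding import pad, unpad

record = b"balance=150000 IDR"                       # illustrative card record

des_key = b"8bytekey"                                # DES: 8-byte key
des = DES.new(des_key, DES.MODE_ECB)
ciphertext = des.encrypt(pad(record, DES.block_size))       # what gets written to the card
restored = unpad(des.decrypt(ciphertext), DES.block_size)   # what is recovered after reading

tdes_key = b"0123456789abcdef01234567"               # 3DES: 24-byte key
tdes = DES3.new(tdes_key, DES3.MODE_ECB)
ciphertext3 = tdes.encrypt(pad(record, DES3.block_size))
restored3 = unpad(tdes.decrypt(ciphertext3), DES3.block_size)

assert restored == record and restored3 == record
```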
Statistical Bayesian method for reliability evaluation based on ADT data
NASA Astrophysics Data System (ADS)
Lu, Dawei; Wang, Lizhi; Sun, Yusheng; Wang, Xiaohong
2018-05-01
Accelerated degradation testing (ADT) is frequently conducted in the laboratory to predict a product's reliability under normal operating conditions. Two kinds of methods, degradation path models and stochastic process models, are utilized to analyze degradation data, and the latter is the more popular. However, limitations such as an imprecise solution process and an imprecise estimate of the degradation ratio still exist, which may affect the accuracy of the acceleration model and the extrapolated value. Moreover, the existing solution to this problem, the Bayesian method, loses key information when unifying the degradation data. In this paper, a new data processing and parameter inference method based on the Bayesian method is proposed to handle degradation data and solve the problems above. First, a Wiener process and an acceleration model are chosen. Second, the initial values of the degradation model and the parameters of the prior and posterior distributions under each stress level are calculated, with updating and iteration of the estimated values. Third, the lifetime and reliability values are estimated on the basis of the estimated parameters. Finally, a case study is provided to demonstrate the validity of the proposed method. The results illustrate that the proposed method is effective and accurate in estimating the lifetime and reliability of a product.
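For orientation, the sketch below simulates the Wiener-process degradation model underlying such methods, X(t) = drift·t + sigma·B(t), and recovers the drift and diffusion from the increments at one stress level. The numbers are synthetic, and the prior-to-posterior Bayesian updating across stress levels described above is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)
drift, sigma, dt = 0.8, 0.3, 0.1
n_units, n_steps = 10, 200

increments = drift * dt + sigma * np.sqrt(dt) * rng.normal(size=(n_units, n_steps))
paths = np.cumsum(increments, axis=1)            # degradation paths X(t)

drift_hat = increments.mean() / dt               # method-of-moments estimates
sigma_hat = np.sqrt(increments.var(ddof=1) / dt)
print(f"estimated drift {drift_hat:.3f}, diffusion {sigma_hat:.3f}")

# first-passage time over an assumed failure threshold D (NaN if never crossed)
D = 10.0
crossed = paths >= D
lifetimes = np.where(crossed.any(axis=1), crossed.argmax(axis=1) * dt, np.nan)
```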
Liao, Wenjie; van der Werf, Hayo M G; Salmon-Monviola, Jordy
2015-09-15
One of the major challenges in environmental life cycle assessment (LCA) of crop production is the nonlinearity between nitrogen (N) fertilizer inputs and on-site N emissions resulting from complex biogeochemical processes. A few studies have addressed this nonlinearity by combining process-based N simulation models with LCA, but none accounted for nitrate (NO3(-)) flows across fields. In this study, we present a new method, TNT2-LCA, that couples the topography-based simulation of nitrogen transfer and transformation (TNT2) model with LCA, and compare the new method with a current LCA method based on a French life cycle inventory database. Application of the two methods to a case study of crop production in a catchment in France showed that, compared to the current method, TNT2-LCA allows delineation of more appropriate temporal limits when developing data for on-site N emissions associated with specific crops in this catchment. It also improves estimates of NO3(-) emissions by better consideration of agricultural practices, soil-climatic conditions, and spatial interactions of NO3(-) flows across fields, and by providing predicted crop yield. The new method presented in this study provides improved LCA of crop production at the catchment scale.
NASA Astrophysics Data System (ADS)
Tatar, Nurollah; Saadatseresht, Mohammad; Arefi, Hossein; Hadavand, Ahmad
2018-06-01
Unwanted contrast in high resolution satellite images, such as shadow areas, directly affects the result of further processing of urban remote sensing images. Detecting and finding the precise position of shadows is critical in different remote sensing processing chains such as change detection, image classification and digital elevation model generation from stereo images. The spectral similarity between shadow areas, water bodies, and some dark asphalt roads makes the development of robust shadow detection algorithms challenging. In addition, most of the existing methods work at the pixel level and neglect the contextual information contained in neighboring pixels. In this paper, a new object-based shadow detection framework is introduced. In the proposed method a pixel-level shadow mask is built by extending established thresholding methods with a new C4 index, which resolves the ambiguity between shadows and water bodies. Then the pixel-based results are further processed in an object-based majority analysis to detect the final shadow objects. Four different high resolution satellite images are used to validate this new approach. The results show the superiority of the proposed method over state-of-the-art shadow detection methods, with an average F-measure of 96%.
Full-degrees-of-freedom frequency based substructuring
NASA Astrophysics Data System (ADS)
Drozg, Armin; Čepon, Gregor; Boltežar, Miha
2018-01-01
Dividing the whole system into multiple subsystems and a separate dynamic analysis is common practice in the field of structural dynamics. The substructuring process improves the computational efficiency and enables an effective realization of the local optimization, modal updating and sensitivity analyses. This paper focuses on frequency-based substructuring methods using experimentally obtained data. An efficient substructuring process has already been demonstrated using numerically obtained frequency-response functions (FRFs). However, the experimental process suffers from several difficulties, among which, many of them are related to the rotational degrees of freedom. Thus, several attempts have been made to measure, expand or combine numerical correction methods in order to obtain a complete response model. The proposed methods have numerous limitations and are not yet generally applicable. Therefore, in this paper an alternative approach based on experimentally obtained data only, is proposed. The force-excited part of the FRF matrix is measured with piezoelectric translational and rotational direct accelerometers. The incomplete moment-excited part of the FRF matrix is expanded, based on the modal model. The proposed procedure is integrated in a Lagrange Multiplier Frequency Based Substructuring method and demonstrated on a simple beam structure, where the connection coordinates are mainly associated with the rotational degrees of freedom.
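A minimal numerical sketch of the Lagrange Multiplier Frequency Based Substructuring coupling step at a single frequency line is given below: Y_coupled = Y - Y B^T (B Y B^T)^(-1) B Y, with Y the block-diagonal matrix of subsystem FRFs and B the signed Boolean matrix enforcing interface compatibility. The 2x2 FRF blocks and the interface definition are arbitrary illustrative values, not data from the beam structure used in the paper.

```python
import numpy as np

Y_A = np.array([[1.0e-3, 2.0e-4],
                [2.0e-4, 5.0e-4]])           # subsystem A FRFs: DoFs [a1, a2]
Y_B = np.array([[8.0e-4, 1.0e-4],
                [1.0e-4, 3.0e-4]])           # subsystem B FRFs: DoFs [b1, b2]

Y = np.block([[Y_A, np.zeros((2, 2))],
              [np.zeros((2, 2)), Y_B]])      # uncoupled block-diagonal FRF matrix

B = np.array([[0.0, 1.0, -1.0, 0.0]])        # compatibility: u_a2 - u_b1 = 0

# dual (LM-FBS) assembly of the coupled FRF matrix at this frequency line
Y_coupled = Y - Y @ B.T @ np.linalg.inv(B @ Y @ B.T) @ B @ Y
```

In practice this operation is repeated for every frequency line, and the quality of the rotational FRFs entering Y is exactly what the measurement and expansion strategy described above is meant to secure.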
Monge, Paul
2006-01-01
Activity-based methods serve as a dynamic process that has allowed many other industries to reduce and control their costs, increase productivity, and streamline their processes while improving product quality and service. The method could serve the healthcare industry in an equally beneficial way. Activity-based methods encompass both activity-based costing (ABC) and activity-based management (ABM). ABC is a cost management approach that links resource consumption to activities that an enterprise performs, and then assigns those activities and their associated costs to customers, products, or product lines. ABM uses the resource assignments derived in ABC so that operation managers can improve their departmental processes and workflows. There are three fundamental problems with traditional cost systems. First, traditional systems fail to reflect the underlying diversity of work taking place within an enterprise. Second, they use allocations that are, for the most part, arbitrary. Single-step allocations fail to reflect the real work: the activities being performed and the associated resources actually consumed. Third, they only provide a cost number that, standing alone, does not provide any guidance on how to improve performance by lowering cost or enhancing throughput.
Continental-scale Validation of MODIS-based and LEDAPS Landsat ETM+ Atmospheric Correction Methods
NASA Technical Reports Server (NTRS)
Ju, Junchang; Roy, David P.; Vermote, Eric; Masek, Jeffrey; Kovalskyy, Valeriy
2012-01-01
The potential of Landsat data processing to provide systematic continental scale products has been demonstrated by several projects including the NASA Web-enabled Landsat Data (WELD) project. The recent free availability of Landsat data increases the need for robust and efficient atmospheric correction algorithms applicable to large volume Landsat data sets. This paper compares the accuracy of two Landsat atmospheric correction methods: a MODIS-based method and the Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) method. Both methods are based on the 6SV radiative transfer code but have different atmospheric characterization approaches. The MODIS-based method uses the MODIS Terra derived dynamic aerosol type, aerosol optical thickness, and water vapor to atmospherically correct ETM+ acquisitions in each coincident orbit. The LEDAPS method uses aerosol characterizations derived independently from each Landsat acquisition and assumes a fixed continental aerosol type and uses ancillary water vapor. Validation results are presented comparing ETM+ atmospherically corrected data generated using these two methods with AERONET corrected ETM+ data for 95 10 km×10 km 30 m subsets, a total of nearly 8 million 30 m pixels, located across the conterminous United States. The results indicate that the MODIS-based method has better accuracy than the LEDAPS method for the ETM+ red and longer wavelength bands.
Schmidt, Marvin; Ullrich, Johannes; Wieczorek, André; Frenzel, Jan; Eggeler, Gunther; Schütze, Andreas; Seelecke, Stefan
2016-01-01
Shape Memory Alloys (SMA) using elastocaloric cooling processes have the potential to be an environmentally friendly alternative to the conventional vapor compression based cooling process. Nickel-Titanium (Ni-Ti) based alloy systems, especially, show large elastocaloric effects. Furthermore, they exhibit large latent heats, which is a necessary material property for the development of an efficient solid-state based cooling process. A scientific test rig has been designed to investigate these processes and the elastocaloric effects in SMAs. The realized test rig enables independent control of an SMA's mechanical loading and unloading cycles, as well as conductive heat transfer between SMA cooling elements and a heat source/sink. The test rig is equipped with a comprehensive monitoring system capable of synchronized measurements of mechanical and thermal parameters. In addition to determining the process-dependent mechanical work, the system also enables measurement of thermal caloric aspects of the elastocaloric cooling effect through use of a high-performance infrared camera. This combination is of particular interest, because it allows illustrations of localization and rate effects — both important for efficient heat transfer from the medium to be cooled. The work presented describes an experimental method to identify elastocaloric material properties in different materials and sample geometries. Furthermore, the test rig is used to investigate different cooling process variations. The introduced analysis methods enable a differentiated consideration of material, process and related boundary condition influences on the process efficiency. The comparison of the experimental data with the simulation results (of a thermomechanically coupled finite element model) allows for better understanding of the underlying physics of the elastocaloric effect. In addition, the experimental results, as well as the findings based on the simulation results, are used to improve the material properties. PMID:27168093
Polarization-insensitive techniques for optical signal processing
NASA Astrophysics Data System (ADS)
Salem, Reza
2006-12-01
This thesis investigates polarization-insensitive methods for optical signal processing. Two signal processing techniques are studied: clock recovery based on two-photon absorption in silicon and demultiplexing based on cross-phase modulation in highly nonlinear fiber. The clock recovery system is tested at an 80 Gb/s data rate for both back-to-back and transmission experiments. The demultiplexer is tested at a 160 Gb/s data rate in a back-to-back experiment. We experimentally demonstrate methods for eliminating polarization dependence in both systems. Our experimental results are confirmed by theoretical and numerical analysis.
NASA Astrophysics Data System (ADS)
Zhu, Ming; Liu, Tingting; Zhang, Xiangqun; Li, Caiyun
2018-01-01
Recently, a decomposition method of acoustic relaxation absorption spectra was used to capture the entire molecular multimode relaxation process of gas. In this method, the acoustic attenuation and phase velocity were measured jointly based on the relaxation absorption spectra. However, fast and accurate measurements of the acoustic attenuation remain challenging. In this paper, we present a method of capturing the molecular relaxation process by only measuring acoustic velocity, without the necessity of obtaining acoustic absorption. The method is based on the fact that the frequency-dependent velocity dispersion of a multi-relaxation process in a gas is the serial connection of the dispersions of interior single-relaxation processes. Thus, one can capture the relaxation times and relaxation strengths of N decomposed single-relaxation dispersions to reconstruct the entire multi-relaxation dispersion using the measurements of acoustic velocity at 2N + 1 frequencies. The reconstructed dispersion spectra are in good agreement with experimental data for various gases and mixtures. The simulations also demonstrate the robustness of our reconstructive method.
NASA Astrophysics Data System (ADS)
Lu, Lei; Yan, Jihong; Chen, Wanqun; An, Shi
2018-03-01
This paper proposes a novel spatial frequency analysis method for the investigation of potassium dihydrogen phosphate (KDP) crystal surfaces based on an improved bidimensional empirical mode decomposition (BEMD) method. Aiming to eliminate the end effects of the BEMD method and improve the intrinsic mode functions (IMFs) for efficient identification of texture features, a denoising process was embedded in the sifting iteration of the BEMD method. By removing redundant information in the decomposed sub-components of the KDP crystal surface, the middle spatial frequencies of the cutting and feeding processes were identified. A comparative study with the power spectral density method, the two-dimensional wavelet transform (2D-WT), and the traditional BEMD method demonstrated that the method developed in this paper can efficiently extract texture features and reveal the gradient development of the KDP crystal surface. Furthermore, the proposed method is a self-adaptive, data-driven technique requiring no prior knowledge, which overcomes shortcomings of the 2D-WT model such as parameter selection. Additionally, the proposed method is a promising tool for online monitoring and optimal control of precision machining processes.
Disc resonator gyroscope fabrication process requiring no bonding alignment
NASA Technical Reports Server (NTRS)
Shcheglov, Kirill V. (Inventor)
2010-01-01
A method of fabricating a resonant vibratory sensor, such as a disc resonator gyro, is described. A silicon baseplate wafer for a disc resonator gyro is provided with one or more locating marks. The disc resonator gyro is fabricated by bonding a blank resonator wafer, such as an SOI wafer, to the fabricated baseplate, and fabricating the resonator structure according to a pattern based at least in part upon the location of the at least one locating mark of the fabricated baseplate. MEMS-based processing is used for the fabrication steps. In some embodiments, the locating mark is visualized using optical and/or infrared viewing methods. A disc resonator gyroscope manufactured according to these methods is described.
A Non-Intrusive GMA Welding Process Quality Monitoring System Using Acoustic Sensing.
Cayo, Eber Huanca; Alfaro, Sadek Crisostomo Absi
2009-01-01
Most of the inspection methods used for detection and localization of welding disturbances are based on the evaluation of some direct measurements of welding parameters. This direct measurement requires the insertion of sensors during the welding process, which could somehow alter the behavior of the metallic transference. An inspection method that evaluates the GMA welding process evolution using non-intrusive process sensing would allow not only the identification of disturbances during welding runs, thus reducing inspection time, but would also reduce the interference with the process caused by direct sensing. In this paper a non-intrusive method for weld disturbance detection and localization for weld quality evaluation is demonstrated. The system is based on the acoustic sensing of the welding electrical arc. During repetitive tests on welds without disturbances, acoustic stability parameters were calculated and used as comparison references for the detection and localization of disturbances during the weld runs.
NASA Astrophysics Data System (ADS)
Zan, Hao; Li, Haowei; Jiang, Yuguang; Wu, Meng; Zhou, Weixing; Bao, Wen
2018-06-01
As part of our efforts to further improve regenerative cooling technology in scramjets, experiments on the thermo-acoustic instability dynamics of hydrocarbon fuel flow have been conducted in horizontal circular tubes under different conditions. The experimental results indicate that there is a developing process from thermo-acoustic stability to instability. To gain a deeper understanding of this developing process, the Multi-scale Shannon Wavelet Entropy (MSWE) method, based on a Wavelet Transform Correlation Filter (WTCF) and Multi-Scale Shannon Entropy (MSE), is adopted in this paper. The results demonstrate that the development of thermo-acoustic instability from noise and weak signals is well detected by the MSWE method, and that the differences among stability, the developing process and instability can be identified. These properties render the method particularly powerful for providing early warning of thermo-acoustic instability of hydrocarbon fuel flowing in scramjet cooling channels. The mass flow rate and the inlet pressure influence the development of the thermo-acoustic instability. The investigation of thermo-acoustic instability dynamics at supercritical pressure based on the wavelet entropy method offers guidance on the control of the scramjet fuel supply, which can secure stable fuel flow in the regenerative cooling system.
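As a rough illustration of a scale-wise entropy indicator, the sketch below computes a Shannon entropy over the relative wavelet energies of a sliding window, using PyWavelets with an assumed 'db4' wavelet; the WTCF and MSE details of the paper's MSWE method are not reproduced.

```python
# Sketch: a multi-scale Shannon wavelet entropy as a simple instability indicator.
# Illustrative reimplementation (PyWavelets, 'db4', sliding windows); not the
# paper's exact MSWE construction.
import numpy as np
import pywt

def wavelet_shannon_entropy(x, wavelet="db4", level=5):
    """Shannon entropy of the relative wavelet energy across decomposition scales."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    energies = np.array([np.sum(c**2) for c in coeffs])
    p = energies / energies.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def sliding_entropy(signal, win=1024, step=256, **kwargs):
    """Track the wavelet entropy over time; a sustained change may flag the
    transition from thermo-acoustic stability to instability."""
    idx = range(0, len(signal) - win, step)
    return np.array([wavelet_shannon_entropy(signal[i:i + win], **kwargs) for i in idx])

# Example with a synthetic pressure record: noise that develops a growing tone.
fs = 10_000
t = np.arange(0, 10, 1 / fs)
x = np.random.normal(0, 1, t.size) + np.linspace(0, 3, t.size) * np.sin(2 * np.pi * 250 * t)
entropy_track = sliding_entropy(x)
```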
Particle Morphology Analysis of Biomass Material Based on Improved Image Processing Method
Lu, Zhaolin
2017-01-01
Particle morphology, including size and shape, is an important factor that significantly influences the physical and chemical properties of biomass material. Based on image processing technology, a method was developed to process sample images, measure particle dimensions, and analyse the particle size and shape distributions of knife-milled wheat straw, which had been preclassified into five nominal size groups using a mechanical sieving approach. Considering the great variation of particle size from micrometers to millimeters, the powders greater than 250 μm were photographed by a flatbed scanner without zoom function, and the others were photographed using scanning electron microscopy (SEM) with high image resolution. Actual imaging tests confirmed the excellent effect of the backscattered electron (BSE) imaging mode of SEM. Particle aggregation is an important factor that affects the recognition accuracy of the image processing method. In sample preparation, singulated arrangement and ultrasonic dispersion were used, respectively, to separate the powders larger and smaller than the nominal size of 250 μm into individual particles. In addition, an image segmentation algorithm based on particle geometrical information was proposed to recognise the finer clustered powders. Experimental results demonstrated that the improved image processing method was suitable to analyse the particle size and shape distributions of ground biomass materials and solve the size inconsistencies in sieving analysis. PMID:28298925
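A hedged sketch of the measurement step is given below using scikit-image (Otsu thresholding, labelling and region properties); the file name and pixel size are assumptions, and the paper's geometry-based splitting of clustered particles is not reproduced.

```python
# Sketch: measure particle size/shape distributions from a scanned powder image
# with scikit-image (thresholding + labelling + region properties).
import numpy as np
from skimage import io, filters, measure, morphology

image = io.imread("wheat_straw_scan.png", as_gray=True)   # hypothetical file name
binary = image < filters.threshold_otsu(image)            # particles darker than background
binary = morphology.remove_small_objects(binary, min_size=50)

labels = measure.label(binary)
props = measure.regionprops(labels)

pixel_size_um = 10.0                                       # assumed scanner resolution
lengths = np.array([p.major_axis_length * pixel_size_um for p in props])
widths  = np.array([p.minor_axis_length * pixel_size_um for p in props])
aspect  = lengths / np.maximum(widths, 1e-9)

print(f"{labels.max()} particles, median length {np.median(lengths):.1f} um, "
      f"median aspect ratio {np.median(aspect):.2f}")
```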
Enterprise Systems Value-Based R&D Portfolio Analytics: Methods, Processes, and Tools
2014-01-14
Final Technical Report SERC-2014-TR-041-1, January 14, 2014. Sponsored by the U.S. Department of Defense through the Systems Engineering Research Center (SERC) under Contract H98230-08-D-0171 (Task Order 0026, RT 51). SERC is a federally funded University Affiliated Research Center managed by Stevens Institute of Technology.
NASA Astrophysics Data System (ADS)
Yao, Yao
2012-05-01
Hydraulic fracturing technology is being widely used within the oil and gas industry for both waste injection and unconventional gas production wells. It is essential to predict the behavior of hydraulic fractures accurately based on understanding the fundamental mechanism(s). The prevailing approach for hydraulic fracture modeling continues to rely on computational methods based on Linear Elastic Fracture Mechanics (LEFM). Generally, these methods give reasonable predictions for hard rock hydraulic fracture processes, but still have inherent limitations, especially when fluid injection is performed in soft rock/sand or other non-conventional formations. These methods typically give very conservative predictions on fracture geometry and inaccurate estimation of required fracture pressure. One of the reasons the LEFM-based methods fail to give accurate predictions for these materials is that the fracture process zone ahead of the crack tip and softening effect should not be neglected in ductile rock fracture analysis. A 3D pore pressure cohesive zone model has been developed and applied to predict hydraulic fracturing under fluid injection. The cohesive zone method is a numerical tool developed to model crack initiation and growth in quasi-brittle materials considering the material softening effect. The pore pressure cohesive zone model has been applied to investigate the hydraulic fracture with different rock properties. The hydraulic fracture predictions of a three-layer water injection case have been compared using the pore pressure cohesive zone model with revised parameters, LEFM-based pseudo 3D model, a Perkins-Kern-Nordgren (PKN) model, and an analytical solution. Based on the size of the fracture process zone and its effect on crack extension in ductile rock, the fundamental mechanical difference of LEFM and cohesive fracture mechanics-based methods is discussed. An effective fracture toughness method has been proposed to consider the fracture process zone effect on the ductile rock fracture.
Petascale turbulence simulation using a highly parallel fast multipole method on GPUs
NASA Astrophysics Data System (ADS)
Yokota, Rio; Barba, L. A.; Narumi, Tetsu; Yasuoka, Kenji
2013-03-01
This paper reports large-scale direct numerical simulations of homogeneous-isotropic fluid turbulence, achieving sustained performance of 1.08 petaflop/s on GPU hardware using single precision. The simulations use a vortex particle method to solve the Navier-Stokes equations, with a highly parallel fast multipole method (FMM) as numerical engine, and match the current record in mesh size for this application, a cube of 4096³ computational points solved with a spectral method. The standard numerical approach used in this field is the pseudo-spectral method, relying on the FFT algorithm as the numerical engine. The particle-based simulations presented in this paper quantitatively match the kinetic energy spectrum obtained with a pseudo-spectral method, using a trusted code. In terms of parallel performance, weak scaling results show the FMM-based vortex method achieving 74% parallel efficiency on 4096 processes (one GPU per MPI process, 3 GPUs per node of the TSUBAME-2.0 system). The FFT-based spectral method is able to achieve just 14% parallel efficiency on the same number of MPI processes (using only CPU cores), due to the all-to-all communication pattern of the FFT algorithm. The calculation time for one time step was 108 s for the vortex method and 154 s for the spectral method, under these conditions. Computing with 69 billion particles, this work exceeds by an order of magnitude the largest vortex-method calculations to date.
Advanced image based methods for structural integrity monitoring: Review and prospects
NASA Astrophysics Data System (ADS)
Farahani, Behzad V.; Sousa, Pedro José; Barros, Francisco; Tavares, Paulo J.; Moreira, Pedro M. G. P.
2018-02-01
There is a growing trend in engineering to develop methods for structural integrity monitoring and characterization of in-service mechanical behaviour of components. The fast growth in recent years of image processing techniques and image-based sensing for experimental mechanics, brought about a paradigm change in phenomena sensing. Hence, several widely applicable optical approaches are playing a significant role in support of experiment. The current review manuscript describes advanced image based methods for structural integrity monitoring, and focuses on methods such as Digital Image Correlation (DIC), Thermoelastic Stress Analysis (TSA), Electronic Speckle Pattern Interferometry (ESPI) and Speckle Pattern Shearing Interferometry (Shearography). These non-contact full-field techniques rely on intensive image processing methods to measure mechanical behaviour, and evolve even as reviews such as this are being written, which justifies a special effort to keep abreast of this progress.
A PBOM configuration and management method based on templates
NASA Astrophysics Data System (ADS)
Guo, Kai; Qiao, Lihong; Qie, Yifan
2018-03-01
The design of the Process Bill of Materials (PBOM) plays a pivotal role in the product development process. The requirements of PBOM configuration design and management for complex products are analysed in this paper, including the reuse of configuration procedures and the urgent need to manage the huge quantity of product-family PBOM data. Based on the analysis, the function framework of PBOM configuration and management has been established. Configuration templates and modules are defined in the framework to support the customization and reuse of the configuration process. The configuration process of a detection-sensor PBOM is presented as an illustrative case at the end. Rapid and agile PBOM configuration and management can be achieved using the template-based method, which is of vital significance for improving development efficiency for complex products.
GREENSCOPE: A Method for Modeling Chemical Process Sustainability
Current work within the U.S. Environmental Protection Agency’s National Risk Management Research Laboratory is focused on the development of a method for modeling chemical process sustainability. The GREENSCOPE methodology, defined for the four bases of Environment, Economics, Ef...
Chemical Safety Alert: Identifying Chemical Reactivity Hazards Preliminary Screening Method
Introduces small-to-medium-sized facilities to a method developed by Center for Chemical Process Safety (CCPS), based on a series of twelve yes-or-no questions to help determine hazards in warehousing, repackaging, blending, mixing, and processing.
FEM-based strain analysis study for multilayer sheet forming process
NASA Astrophysics Data System (ADS)
Zhang, Rongjing; Lang, Lihui; Zafar, Rizwan
2015-12-01
Fiber metal laminates have many advantages over traditional laminates (e.g., any type of fiber and resin material can be placed anywhere between the metallic layers without risk of failure of the composite fabric sheets). Furthermore, the process requirements to strictly control the temperature and punch force in fiber metal laminates are also less stringent than those in traditional laminates. To further explore the novel method, this study conducts a finite element method-based (FEM-based) strain analysis on multilayer blanks by using the 3A method. Different forming modes such as wrinkling and fracture are discussed by using experimental and numerical studies. Hydroforming is used for multilayer forming. The Barlat 2000 yield criteria and DYNAFORM/LS-DYNA are used for the simulations. Optimal process parameters are determined on the basis of fixed die-binder gap and variable cavity pressure. The results of this study will enhance the knowledge on the mechanics of multilayer structures formed by using the 3A method and expand its commercial applications.
Calibration of stereo rigs based on the backward projection process
NASA Astrophysics Data System (ADS)
Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui; Zhao, Zixin
2016-08-01
High-accuracy 3D measurement based on binocular vision system is heavily dependent on the accurate calibration of two rigidly-fixed cameras. In most traditional calibration methods, stereo parameters are iteratively optimized through the forward imaging process (FIP). However, the results can only guarantee the minimal 2D pixel errors, but not the minimal 3D reconstruction errors. To address this problem, a simple method to calibrate a stereo rig based on the backward projection process (BPP) is proposed. The position of a spatial point can be determined separately from each camera by planar constraints provided by the planar pattern target. Then combined with pre-defined spatial points, intrinsic and extrinsic parameters of the stereo-rig can be optimized by minimizing the total 3D errors of both left and right cameras. An extensive performance study for the method in the presence of image noise and lens distortions is implemented. Experiments conducted on synthetic and real data demonstrate the accuracy and robustness of the proposed method.
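The following sketch illustrates the backward-projection idea under simplifying assumptions: stereo parameters are refined by minimizing the total 3D error between triangulated target points and their known positions, with the camera parameterization (build_projections) left as a placeholder rather than the paper's exact model.

```python
# Sketch: refine stereo parameters by minimizing 3D reconstruction error
# (backward projection) rather than 2D reprojection error. Camera model,
# parameterization and data loading are simplified/assumed for illustration.
import numpy as np
from scipy.optimize import least_squares

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two 3x4 projection matrices."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def residuals(params, pts1, pts2, X_true, build_projections):
    """Total 3D error between triangulated and known target points.
    build_projections(params) -> (P1, P2) encodes the chosen parameterization."""
    P1, P2 = build_projections(params)
    X_hat = np.array([triangulate(P1, P2, a, b) for a, b in zip(pts1, pts2)])
    return (X_hat - X_true).ravel()

# Usage (schematic): pts1/pts2 are matched 2D target corners in both images,
# X_true their known 3D positions on the planar pattern, params0 an initial
# guess from a conventional (forward-projection) calibration.
# sol = least_squares(residuals, params0, args=(pts1, pts2, X_true, build_projections))
```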
Performance-Based Assessment: An Alternative Assessment Process for Young Gifted Children.
ERIC Educational Resources Information Center
Hafenstein, Norma Lu; Tucker, Brooke
Performance-based assessment provides an alternative identification method for young gifted children. A performance-based identification process was developed and implemented to select three-, four-, and five-year-old children for inclusion in a school for gifted children. Literature regarding child development, characteristics of young gifted…
NASA Astrophysics Data System (ADS)
Takei, Satoshi; Maki, Hirotaka; Sugahara, Kigen; Ito, Kenta; Hanabata, Makoto
2015-07-01
An electron beam (EB) lithography method using inedible cellulose-based resist material derived from woody biomass has been successfully developed. This method allows the use of pure water in the development process instead of the conventionally used tetramethylammonium hydroxide and anisole. The inedible cellulose-based biomass resist material, as an alternative to alpha-linked disaccharides in sugar derivatives that compete with food supplies, was developed by replacing the hydroxyl groups in the beta-linked disaccharides with EB-sensitive 2-methacryloyloxyethyl groups. A 75 nm line and space pattern at an exposure dose of 19 μC/cm2, a resist thickness uniformity of less than 0.4 nm on a 200 mm wafer, and low film thickness shrinkage under EB irradiation were achieved with this inedible cellulose-based biomass resist material using a water-based development process.
K, Jalal Deen; R, Ganesan; A, Merline
2017-07-27
Objective: Accurate segmentation of abnormal and healthy lungs is crucial for reliable computer-aided disease diagnostics. Methods: For this purpose, a stack of chest CT scans is processed. In this paper, novel methods are proposed for segmentation of the multimodal grayscale lung CT scan. In the conventional methods, the required regions of interest (ROI) are identified using the Markov–Gibbs Random Field (MGRF) model. Result: The results of the proposed FCM- and CNN-based process are compared with the results obtained from the conventional method using the MGRF model. The results illustrate that the proposed method is able to segment various kinds of complex multimodal medical images precisely. Conclusion: In this paper, to obtain an exact boundary of the regions, every empirical dispersion of the image is computed by Fuzzy C-Means clustering segmentation. A classification process based on the Convolutional Neural Network (CNN) classifier is carried out to distinguish normal tissue from abnormal tissue. The experimental evaluation is done using the Interstitial Lung Disease (ILD) database.
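A plain-NumPy sketch of the Fuzzy C-Means step is shown below on placeholder intensity data; the CNN classification stage and the MGRF comparison are not reproduced.

```python
# Sketch: Fuzzy C-Means clustering of CT intensities, as used for the initial
# boundary delineation step (plain NumPy; toy data, not the ILD database).
import numpy as np

def fuzzy_c_means(x, c=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Cluster 1-D feature values x (e.g. flattened HU values) into c fuzzy classes."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                        # fuzzy memberships, columns sum to 1
    for _ in range(max_iter):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)   # weighted cluster centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u_new = 1.0 / (d ** (2 / (m - 1)))
        u_new /= u_new.sum(axis=0)            # normalize over clusters
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    return centers, u

# Example: split a slice into air / lung parenchyma / soft tissue by intensity.
ct_slice = np.random.normal(-500, 300, (64, 64))          # placeholder HU values
centers, memberships = fuzzy_c_means(ct_slice.ravel(), c=3)
labels = memberships.argmax(axis=0).reshape(ct_slice.shape)
```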
Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.
Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai
2017-11-01
For big data analysis, the high computational cost of Bayesian methods often limits their applications in practice. In recent years, there have been many attempts to improve computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely, Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
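The sketch below illustrates the general idea under strong simplifications: a random-basis (random Fourier-type) surrogate of the log-density is fitted by ridge regression and then used inside a standard leapfrog HMC step. The toy Gaussian target, the basis form, and the surrogate-only accept step (a production sampler would correct with the true density) are assumptions for brevity, not the authors' exact construction.

```python
# Sketch: a random-basis surrogate for an expensive log-density inside leapfrog HMC.
import numpy as np

rng = np.random.default_rng(0)
dim, n_feat = 2, 200

def log_target(x):                       # expensive model in practice; toy Gaussian here
    return -0.5 * np.sum(x**2)

# Random bases phi_j(x) = cos(w_j . x + b_j); fit weights by ridge regression
# on points where the true log-density has already been evaluated.
W = rng.normal(size=(n_feat, dim))
b = rng.uniform(0, 2 * np.pi, n_feat)
X_train = rng.normal(size=(500, dim))
y_train = np.array([log_target(x) for x in X_train])
Phi = np.cos(X_train @ W.T + b)
alpha = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(n_feat), Phi.T @ y_train)

def surr_logp(x):
    return np.cos(W @ x + b) @ alpha

def surr_grad(x):
    return -(np.sin(W @ x + b) * alpha) @ W   # d/dx cos(Wx+b) = -sin(Wx+b) * W

def hmc_step(x, eps=0.1, L=20):
    """One leapfrog HMC proposal using the surrogate gradient and log-density."""
    p = rng.normal(size=dim)
    x_new, p_new = x.copy(), p + 0.5 * eps * surr_grad(x)
    for _ in range(L):
        x_new = x_new + eps * p_new
        p_new = p_new + eps * surr_grad(x_new)
    p_new = p_new - 0.5 * eps * surr_grad(x_new)          # undo the extra half step
    log_accept = (surr_logp(x_new) - 0.5 * p_new @ p_new) - (surr_logp(x) - 0.5 * p @ p)
    return x_new if np.log(rng.random()) < log_accept else x

samples = [np.zeros(dim)]
for _ in range(1000):
    samples.append(hmc_step(samples[-1]))
```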
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pelt, Daniël M.; Gürsoy, Doğa; Palenstijn, Willem Jan
2016-04-28
The processing of tomographic synchrotron data requires advanced and efficient software to be able to produce accurate results in reasonable time. In this paper, the integration of two software toolboxes, TomoPy and the ASTRA toolbox, which, together, provide a powerful framework for processing tomographic data, is presented. The integration combines the advantages of both toolboxes, such as the user-friendliness and CPU-efficient methods of TomoPy and the flexibility and optimized GPU-based reconstruction methods of the ASTRA toolbox. It is shown that both toolboxes can be easily installed and used together, requiring only minor changes to existing TomoPy scripts. Furthermore, it is shown that the efficient GPU-based reconstruction methods of the ASTRA toolbox can significantly decrease the time needed to reconstruct large datasets, and that advanced reconstruction methods can improve reconstruction quality compared with TomoPy's standard reconstruction method.
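A short sketch of such a combined script is given below, following the option names used in the published TomoPy/ASTRA examples; the dataset path is a placeholder and option names may differ between versions.

```python
# Sketch: calling an ASTRA GPU reconstruction from a TomoPy script.
import tomopy
import dxchange

proj, flat, dark, theta = dxchange.read_aps_32id("sample.h5")   # hypothetical dataset
proj = tomopy.normalize(proj, flat, dark)
center = tomopy.find_center(proj, theta)

# Default TomoPy (CPU) reconstruction:
rec_cpu = tomopy.recon(proj, theta, center=center, algorithm="gridrec")

# GPU-based iterative reconstruction through the ASTRA toolbox:
options = {"proj_type": "cuda", "method": "SIRT_CUDA", "num_iter": 200}
rec_gpu = tomopy.recon(proj, theta, center=center,
                       algorithm=tomopy.astra, options=options)
```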
Research on Finite Element Model Generating Method of General Gear Based on Parametric Modelling
NASA Astrophysics Data System (ADS)
Lei, Yulong; Yan, Bo; Fu, Yao; Chen, Wei; Hou, Liguo
2017-06-01
To address the problems of low efficiency and poor gear-mesh quality in current mainstream finite element software, a universal three-dimensional gear model is established and the rules of element and node arrangement are explored. In this paper, a parameterization-based finite element model generation method for universal gears is proposed. A Visual Basic program is used to perform the finite element meshing, assign the material properties, set the boundary/load conditions and carry out other pre-processing work. A dynamic meshing analysis of the gears is carried out with the method proposed in this paper, and the results are compared with calculated values to verify the correctness of the method. The method greatly reduces the workload of gear finite element pre-processing, improves the quality of the gear mesh, and provides a new idea for FEM pre-processing.
Volumetric calibration of a plenoptic camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert
Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
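The polynomial-mapping idea can be sketched as follows, assuming a third-order 3D polynomial fitted by least squares from thin-lens reconstructed dot positions to their known positions; no specific plenoptic camera model is implied.

```python
# Sketch: volumetric dewarping via a 3rd-order 3D polynomial mapping fitted from
# thin-lens reconstructed dot positions to their known target positions.
import numpy as np
from itertools import combinations_with_replacement

def poly_terms(X, order=3):
    """Monomial design matrix [1, x, y, z, x^2, xy, ...] for points X of shape (N, 3)."""
    cols = [np.ones(len(X))]
    for d in range(1, order + 1):
        for idx in combinations_with_replacement(range(3), d):
            cols.append(np.prod(X[:, idx], axis=1))
    return np.column_stack(cols)

def fit_mapping(X_thinlens, X_true, order=3):
    """Least-squares coefficients mapping distorted -> true coordinates."""
    A = poly_terms(X_thinlens, order)
    coeffs, *_ = np.linalg.lstsq(A, X_true, rcond=None)
    return coeffs

def apply_mapping(coeffs, X, order=3):
    return poly_terms(X, order) @ coeffs

# Usage (schematic): X_thinlens are dot-card points reconstructed with the
# thin-lens assumption at several depths, X_true their known locations.
# coeffs = fit_mapping(X_thinlens, X_true)
# X_corrected = apply_mapping(coeffs, X_new)
```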
Designing Class Methods from Dataflow Diagrams
NASA Astrophysics Data System (ADS)
Shoval, Peretz; Kabeli-Shani, Judith
A method for designing the class methods of an information system is described. The method is part of FOOM - Functional and Object-Oriented Methodology. In the analysis phase of FOOM, two models defining the users' requirements are created: a conceptual data model - an initial class diagram; and a functional model - hierarchical OO-DFDs (object-oriented dataflow diagrams). Based on these models, a well-defined process of methods design is applied. First, the OO-DFDs are converted into transactions, i.e., system processes that support user tasks. The components and the process logic of each transaction are described in detail, using pseudocode. Then, each transaction is decomposed, according to well-defined rules, into class methods of various types: basic methods, application-specific methods and main transaction (control) methods. Each method is attached to a proper class; messages between methods express the process logic of each transaction. The methods are defined using pseudocode or message charts.
A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models
NASA Astrophysics Data System (ADS)
Brugnach, M.; Neilson, R.; Bolte, J.
2001-12-01
The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box", and it focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how the input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a sensitivity analysis method applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes. Once the processes that exert the major influence on the output are identified, the causes of its variability can be found. Some of the advantages of this approach are that it reduces the dimensionality of the search space, it facilitates the interpretation of the results and it provides information that allows exploration of uncertainty at the process level, and how it might affect model output. We present an example using the vegetation model BIOME-BGC.
Continental-Scale Validation of Modis-Based and LEDAPS Landsat ETM + Atmospheric Correction Methods
NASA Technical Reports Server (NTRS)
Ju, Junchang; Roy, David P.; Vermote, Eric; Masek, Jeffrey; Kovalskyy, Valeriy
2012-01-01
The potential of Landsat data processing to provide systematic continental-scale products has been demonstrated by several projects including the NASA Web-enabled Landsat Data (WELD) project. The recent free availability of Landsat data increases the need for robust and efficient atmospheric correction algorithms applicable to large-volume Landsat data sets. This paper compares the accuracy of two Landsat atmospheric correction methods: a MODIS-based method and the Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) method. Both methods are based on the 6SV radiative transfer code but have different atmospheric characterization approaches. The MODIS-based method uses the MODIS Terra derived dynamic aerosol type, aerosol optical thickness, and water vapor to atmospherically correct ETM+ acquisitions in each coincident orbit. The LEDAPS method uses aerosol characterizations derived independently from each Landsat acquisition and assumes a fixed continental aerosol type and uses ancillary water vapor. Validation results are presented comparing ETM+ atmospherically corrected data generated using these two methods with AERONET corrected ETM+ data for 95 10 km × 10 km 30 m subsets, a total of nearly 8 million 30 m pixels, located across the conterminous United States. The results indicate that the MODIS-based method has better accuracy than the LEDAPS method for the ETM+ red and longer wavelength bands.
Measurement of smaller colon polyp in CT colonography images using morphological image processing.
Manjunath, K N; Siddalingaswamy, P C; Prabhu, G K
2017-11-01
Automated measurement of the size and shape of colon polyps is one of the challenges in computed tomography colonography (CTC). The objective of this retrospective study was to improve the sensitivity and specificity of smaller polyp measurement in CTC using image processing techniques. A domain knowledge-based method has been implemented with a hybrid method of colon segmentation, morphological image processing operators for detecting the colonic structures, and a decision-making system for delineating the smaller polyps based on a priori knowledge. The method was applied on 45 CTC datasets. The key finding was that the smaller polyps were accurately measured. In addition to the 6-9 mm range, polyps of even <5 mm were also detected. The results were validated qualitatively and quantitatively using both 2D MPR and 3D view. Implementation was done on a high-performance computer with parallel processing. It takes [Formula: see text] min for measuring the smaller polyp in a dataset of 500 CTC images. With this method, [Formula: see text] and [Formula: see text] were achieved. The domain-based approach with morphological image processing has given good results. The smaller polyps were measured accurately, which helps in making the right clinical decisions. Qualitatively and quantitatively the results were acceptable when compared to the ground truth at [Formula: see text].
A simultaneous deep micromachining and surface passivation method suitable for silicon-based devices
NASA Astrophysics Data System (ADS)
Babaei, E.; Gharooni, M.; Mohajerzadeh, S.; Soleimani, E. A.
2018-07-01
Three novel methods for simultaneous micromachining and surface passivation of silicon are reported. A thin passivation layer is achieved using continuous and sequential plasma processes based on SF6, H2 and O2 gases. Reducing the recombination by surface passivation is crucial for the realization of high-performance nanosized optoelectronic devices. The passivation of the surface, as an important step, is feasible by plasma processing based on hydrogen pulses in proper time slots or using a mixture of H2 and O2, and SF6 gases. The passivation layer, which is formed in situ during the micromachining process, obviates a separate passivation step needed in conventional methods. By adjusting the plasma parameters such as power, duration, and flows of gases, the process can be controlled for the best results and acceptable under-etching at the same time. Moreover, the pseudo-oxide layer which is formed during the micromachining processes will also improve the electrical characteristics of the surface, which can be used as an add-on for micro and nanowire applications. To quantify the effect of surface passivation in our method, ellipsometry, lifetime measurements, x-ray photoelectron spectroscopy, current–voltage and capacitance–voltage measurements and solar cell testing have been employed.
NASA Astrophysics Data System (ADS)
Bergen, K.; Yoon, C. E.; OReilly, O. J.; Beroza, G. C.
2015-12-01
Recent improvements in computational efficiency for waveform correlation-based detections achieved by new methods such as Fingerprint and Similarity Thresholding (FAST) promise to allow large-scale blind search for similar waveforms in long-duration continuous seismic data. Waveform similarity search applied to datasets of months to years of continuous seismic data will identify significantly more events than traditional detection methods. With the anticipated increase in number of detections and associated increase in false positives, manual inspection of the detection results will become infeasible. This motivates the need for new approaches to process the output of similarity-based detection. We explore data mining techniques for improved detection post-processing. We approach this by considering similarity-detector output as a sparse similarity graph with candidate events as vertices and similarities as weighted edges. Image processing techniques are leveraged to define candidate events and combine results individually processed at multiple stations. Clustering and graph analysis methods are used to identify groups of similar waveforms and assign a confidence score to candidate detections. Anomaly detection and classification are applied to waveform data for additional false detection removal. A comparison of methods will be presented and their performance will be demonstrated on a suspected induced and non-induced earthquake sequence.
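A minimal sketch of the graph-based post-processing idea follows, with a toy pair list and an assumed similarity threshold; the paper's confidence scoring and multi-station combination are not shown.

```python
# Sketch: post-processing similarity-detector output as a sparse graph, grouping
# candidate detections into event clusters via connected components.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

# Detector output: (i, j, similarity) for pairs of candidate windows (toy values).
pairs = np.array([[0, 1, 0.92], [1, 2, 0.88], [5, 6, 0.75], [2, 0, 0.81]])
n = 7                                            # number of candidate windows

i, j, s = pairs[:, 0].astype(int), pairs[:, 1].astype(int), pairs[:, 2]
keep = s >= 0.8                                  # prune weak edges (assumed threshold)
adj = coo_matrix((s[keep], (i[keep], j[keep])), shape=(n, n))

n_groups, labels = connected_components(adj, directed=False)
for g in range(n_groups):
    members = np.flatnonzero(labels == g)
    if members.size > 1:                         # multi-member groups = likely repeats
        print("event cluster", g, "candidates:", members)
```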
Social Workers' Orientation toward the Evidence-Based Practice Process: A Dutch Survey
ERIC Educational Resources Information Center
van der Zwet, Renske J. M.; Kolmer, Deirdre M. Beneken genaamd; Schalk, René
2016-01-01
Objectives: This study assesses social workers' orientation toward the evidence-based practice (EBP) process and explores which specific variables (e.g. age) are associated. Methods: Data were collected from 341 Dutch social workers through an online survey which included a Dutch translation of the EBP Process Assessment Scale (EBPPAS), along with…
Ethanol precipitation for purification of recombinant antibodies.
Tscheliessnig, Anne; Satzer, Peter; Hammerschmidt, Nikolaus; Schulz, Henk; Helk, Bernhard; Jungbauer, Alois
2014-10-20
Currently, the gold standard for the purification of recombinant humanized antibodies (rhAbs) from CHO cell culture is protein A chromatography. However, due to increasing rhAbs titers, alternative methods have come into focus. A new strategy for purification of recombinant human antibodies from CHO cell culture supernatant based on cold ethanol precipitation (CEP) and CaCl2 precipitation has been developed. This method is based on cold ethanol precipitation, the process used for purification of antibodies and other components from blood plasma. We prove the applicability of the developed process for four different antibodies, resulting in similar yield and purity to a protein A chromatography-based process. This process can be further improved using anion-exchange chromatography in flowthrough mode, e.g. a monolith, as the last step, so that residual host cell protein is reduced to a minimum. Besides the ethanol-based process, our data also suggest that ethanol could be replaced with methanol or isopropanol. The process is suited for continuous operation. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2016-12-01
Sensitivity analysis has been an important tool in groundwater modeling to identify the influential parameters. Among various sensitivity analysis methods, the variance-based global sensitivity analysis has gained popularity for its model independence characteristic and capability of providing accurate sensitivity measurements. However, the conventional variance-based method only considers uncertainty contribution of single model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework to allow flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model and parametric. Furthermore, each layer of uncertainty source is capable of containing multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using Bayesian network. Different uncertainty components are represented as uncertain nodes in this network. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility of using different grouping strategies for uncertainty components. The variance-based sensitivity analysis thus is improved to be able to investigate the importance of an extended range of uncertainty sources: scenario, model, and other different combinations of uncertainty components which can represent certain key model system processes (e.g., groundwater recharge process, flow reactive transport process). For test and demonstration purposes, the developed methodology was implemented into a test case of real-world groundwater reactive transport modeling with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measurements for any uncertainty sources which were formed by different combinations of uncertainty components. The new methodology can provide useful information for environmental management and decision-makers to formulate policies and strategies.
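The sketch below illustrates how a first-order variance-based index can be computed for whole groups of inputs with a pick-and-freeze Monte Carlo estimator, so that each uncertainty component receives a single index; the toy model and group names are illustrative, not the study's reactive transport model or its Bayesian-network framework.

```python
# Sketch: first-order variance-based (Sobol-type) indices for *groups* of inputs.
import numpy as np

def group_first_order_indices(f, groups, dim, n=50_000, seed=0):
    """S_G = Cov(f(A), f(C_G)) / Var(f(A)), where C_G reuses A's columns for group G."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, dim))
    B = rng.random((n, dim))
    yA = f(A)
    var = yA.var()
    indices = {}
    for name, cols in groups.items():
        C = B.copy()
        C[:, cols] = A[:, cols]          # freeze this group's inputs, resample the rest
        indices[name] = np.cov(yA, f(C))[0, 1] / var
    return indices

# Toy model standing in for a groundwater response quantity.
def model(X):
    return 2.0 * X[:, 0] + X[:, 1] + 0.5 * X[:, 2] * X[:, 3]

groups = {"recharge": [0], "flow": [1], "reactive transport": [2, 3]}
print(group_first_order_indices(model, groups, dim=4))
```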
Application of higher order SVD to vibration-based system identification and damage detection
NASA Astrophysics Data System (ADS)
Chao, Shu-Hsien; Loh, Chin-Hsiung; Weng, Jian-Huang
2012-04-01
Singular value decomposition (SVD) is a powerful linear algebra tool. It is widely used in many different signal processing methods, such as principal component analysis (PCA), singular spectrum analysis (SSA), frequency domain decomposition (FDD), and subspace identification and stochastic subspace identification methods (SI and SSI). In each case, the data are arranged appropriately in matrix form and SVD is used to extract the features of the data set. In this study three different algorithms for signal processing and system identification are proposed: SSA, SSI-COV and SSI-DATA. Based on the subspace and null-space extracted from the SVD of the data matrix, damage detection algorithms can be developed. The proposed algorithm is used to process the shaking table test data of a 6-story steel frame. Features contained in the vibration data are extracted by the proposed method. Damage detection can then be investigated from the test data of the frame structure through subspace-based and null-space-based damage indices.
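A compact sketch of the underlying idea is given below: an SVD of a Hankel (trajectory) matrix yields a dominant response subspace, and the angle to a baseline subspace serves as a simple damage index. This mirrors the SSA/SSI machinery only schematically; the window length, model order and synthetic data are assumptions.

```python
# Sketch: an SVD/subspace-based damage indicator from a Hankel (trajectory) matrix.
import numpy as np
from scipy.linalg import subspace_angles

def hankel_matrix(x, rows=50):
    cols = len(x) - rows + 1
    return np.array([x[i:i + cols] for i in range(rows)])

def dominant_subspace(x, rows=50, order=2):
    U, _, _ = np.linalg.svd(hankel_matrix(x, rows), full_matrices=False)
    return U[:, :order]                                 # subspace of the main dynamics

def damage_index(x_ref, x_test, rows=50, order=2):
    angles = subspace_angles(dominant_subspace(x_ref, rows, order),
                             dominant_subspace(x_test, rows, order))
    return np.max(angles)                               # larger angle -> larger change

# Synthetic accelerations: a small natural-frequency shift emulates damage.
rng = np.random.default_rng(1)
t = np.arange(0, 20, 1 / 200)
baseline_a = np.sin(2 * np.pi * 3.0 * t) + 0.1 * rng.standard_normal(t.size)
baseline_b = np.sin(2 * np.pi * 3.0 * t) + 0.1 * rng.standard_normal(t.size)
shifted    = np.sin(2 * np.pi * 2.7 * t) + 0.1 * rng.standard_normal(t.size)
print("index, baseline vs baseline:", damage_index(baseline_a, baseline_b))
print("index, baseline vs shifted: ", damage_index(baseline_a, shifted))
```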
Watson, Douglas S; Kerchner, Kristi R; Gant, Sean S; Pedersen, Joseph W; Hamburger, James B; Ortigosa, Allison D; Potgieter, Thomas I
2016-01-01
Tangential flow microfiltration (MF) is a cost-effective and robust bioprocess separation technique, but successful full scale implementation is hindered by the empirical, trial-and-error nature of scale-up. We present an integrated approach leveraging at-line process analytical technology (PAT) and mass balance based modeling to de-risk MF scale-up. Chromatography-based PAT was employed to improve the consistency of an MF step that had been a bottleneck in the process used to manufacture a therapeutic protein. A 10-min reverse phase ultra high performance liquid chromatography (RP-UPLC) assay was developed to provide at-line monitoring of protein concentration. The method was successfully validated and method performance was comparable to previously validated methods. The PAT tool revealed areas of divergence from a mass balance-based model, highlighting specific opportunities for process improvement. Adjustment of appropriate process controls led to improved operability and significantly increased yield, providing a successful example of PAT deployment in the downstream purification of a therapeutic protein. The general approach presented here should be broadly applicable to reduce risk during scale-up of filtration processes and should be suitable for feed-forward and feed-back process control. © 2015 American Institute of Chemical Engineers.
NASA Astrophysics Data System (ADS)
Holtzman, B. K.; Paté, A.; Paisley, J.; Waldhauser, F.; Repetto, D.; Boschi, L.
2017-12-01
The earthquake process reflects complex interactions of stress, fracture and frictional properties. New machine learning methods reveal patterns in time-dependent spectral properties of seismic signals and enable identification of changes in faulting processes. Our methods are based closely on those developed for music information retrieval and voice recognition, using the spectrogram instead of the waveform directly. Unsupervised learning involves identification of patterns based on differences among signals without any additional information provided to the algorithm. Clustering of 46,000 earthquakes of $0.3
Image object recognition based on the Zernike moment and neural networks
NASA Astrophysics Data System (ADS)
Wan, Jianwei; Wang, Ling; Huang, Fukan; Zhou, Liangzhu
1998-03-01
This paper first gives a comprehensive discussion of the concept of the artificial neural network, its research methods, and its relations with information processing. On the basis of this discussion, we expound the mathematical similarity between artificial neural networks and information processing. Then, the paper presents a new method of image recognition based on invariant features and a neural network, using the image Zernike transform. The method not only has invariance to rotation, shift and scale of the image object, but also has good fault tolerance and robustness. Meanwhile, it is also compared with a statistical classifier and the invariant moments recognition method.
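A hedged sketch of such a pipeline is shown below using mahotas for the Zernike moments and a small scikit-learn multilayer perceptron as the classifier; the toy shapes and network size are assumptions, not the paper's data or architecture.

```python
# Sketch: rotation-invariant Zernike moment features fed to a small neural network.
import numpy as np
import mahotas
from sklearn.neural_network import MLPClassifier

def zernike_features(mask, radius=32, degree=8):
    """Magnitudes of Zernike moments: invariant to rotation of the object."""
    return mahotas.features.zernike_moments(mask.astype(np.uint8), radius, degree=degree)

def make_mask(kind, rng, size=64):
    """Toy binary objects: class 0 = disk, class 1 = randomly rotated bar."""
    yy, xx = np.mgrid[:size, :size] - size // 2
    if kind == 0:
        return xx**2 + yy**2 < (size // 4)**2
    ang = rng.uniform(0, np.pi)
    u = xx * np.cos(ang) + yy * np.sin(ang)
    v = -xx * np.sin(ang) + yy * np.cos(ang)
    return (np.abs(u) < size // 3) & (np.abs(v) < size // 10)

rng = np.random.default_rng(0)
labels = np.array([k % 2 for k in range(60)])
X = np.array([zernike_features(make_mask(k, rng)) for k in labels])

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```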
Li, Shasha; Nie, Hongchao; Lu, Xudong; Duan, Huilong
2015-02-01
Integration of heterogeneous systems is the key to hospital information construction due to the complexity of the healthcare environment. Currently, during the process of healthcare information system integration, the people participating in an integration project usually communicate by free-format documents, which impairs the efficiency and adaptability of integration. A method utilizing Business Process Model and Notation (BPMN) to model integration requirements and automatically transform them into executable integration configurations was proposed in this paper. Based on the method, a tool was developed to model integration requirements and transform them into integration configurations. In addition, an integration case in a radiology scenario was used to verify the method.
Zheng, Caixian; Zheng, Kun; Shen, Yunming; Wu, Yunyun
2016-01-01
The content related to quality over the in-operation life cycle of a medical device includes daily use, repair volume, preventive maintenance, quality control and adverse event monitoring. In view of this, the article discusses a quality evaluation method for medical devices over their in-operation life cycle based on the Analytic Hierarchy Process (AHP). The presented method is shown to be effective by evaluating patient monitors as an example. The method can promote and guide device quality control work, and it can provide valuable inputs to decisions about the purchase of new devices.
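A minimal AHP sketch follows: priority weights are obtained from the principal eigenvector of a pairwise comparison matrix and checked with a consistency ratio. Only the criteria names follow the abstract; the comparison values are illustrative placeholders.

```python
# Sketch: AHP priority weights and consistency ratio from a pairwise comparison matrix.
import numpy as np

criteria = ["daily use", "repair volume", "preventive maintenance",
            "quality control", "adverse events"]

# Saaty-scale pairwise comparisons (A[i, j] = importance of i relative to j; placeholder values).
A = np.array([[1,   3,   2,   1/2, 1  ],
              [1/3, 1,   1/2, 1/4, 1/3],
              [1/2, 2,   1,   1/3, 1/2],
              [2,   4,   3,   1,   2  ],
              [1,   3,   2,   1/2, 1  ]], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # priority vector

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)          # consistency index
ri = 1.12                                     # Saaty random index for n = 5
print(dict(zip(criteria, weights.round(3))), "CR =", round(ci / ri, 3))
```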
Damage localization by statistical evaluation of signal-processed mode shapes
NASA Astrophysics Data System (ADS)
Ulriksen, M. D.; Damkilde, L.
2015-07-01
Due to their inherent ability to provide structural information on a local level, mode shapes and their derivatives are utilized extensively for structural damage identification. Typically, more or less advanced mathematical methods are implemented to identify damage-induced discontinuities in the spatial mode shape signals, hereby potentially facilitating damage detection and/or localization. However, by being based on distinguishing damage-induced discontinuities from other signal irregularities, an intrinsic deficiency in these methods is the high sensitivity towards measurement noise. The present article introduces a damage localization method which, compared to the conventional mode shape-based methods, has greatly enhanced robustness towards measurement noise. The method is based on signal processing of spatial mode shapes by means of continuous wavelet transformation (CWT) and subsequent application of a generalized discrete Teager-Kaiser energy operator (GDTKEO) to identify damage-induced mode shape discontinuities. In order to evaluate whether the identified discontinuities are in fact damage-induced, outlier analysis of principal components of the signal-processed mode shapes is conducted on the basis of T2-statistics. The proposed method is demonstrated in the context of analytical work with a free-vibrating Euler-Bernoulli beam under noisy conditions.
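The discrete Teager-Kaiser energy operator at the core of such methods can be sketched as follows on a synthetic mode shape with a weak localized deviation; the CWT pre-processing, the generalized operator and the T2 outlier statistics of the paper are not reproduced.

```python
# Sketch: a discrete Teager-Kaiser energy operator highlighting a local mode-shape
# discontinuity (synthetic data; not the paper's GDTKEO or statistics).
import numpy as np

def tkeo(x):
    """Discrete Teager-Kaiser energy: psi[n] = x[n]**2 - x[n-1]*x[n+1]."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1]**2 - x[:-2] * x[2:]
    return psi

# Synthetic first mode shape of a beam with a weak localized deviation ("damage").
xs = np.linspace(0.0, 1.0, 200)
mode = np.sin(np.pi * xs)
mode[100] -= 0.01                                  # localized discontinuity near mid-span
mode += np.random.normal(0.0, 1e-4, xs.size)       # measurement noise

energy = np.abs(tkeo(mode))
peak = np.argmax(energy[1:-1]) + 1
print("suspected damage location x =", round(xs[peak], 3))
```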
Comparing Thermal Process Validation Methods for Salmonella Inactivation on Almond Kernels.
Jeong, Sanghyup; Marks, Bradley P; James, Michael K
2017-01-01
Ongoing regulatory changes are increasing the need for reliable process validation methods for pathogen reduction processes involving low-moisture products; however, the reliability of various validation methods has not been evaluated. Therefore, the objective was to quantify accuracy and repeatability of four validation methods (two biologically based and two based on time-temperature models) for thermal pasteurization of almonds. Almond kernels were inoculated with Salmonella Enteritidis phage type 30 or Enterococcus faecium (NRRL B-2354) at ~10⁸ CFU/g, equilibrated to 0.24, 0.45, 0.58, or 0.78 water activity (aw), and then heated in a pilot-scale, moist-air impingement oven (dry bulb 121, 149, or 177°C; dew point <33.0, 69.4, 81.6, or 90.6°C; vair = 2.7 m/s) to a target lethality of ~4 log. Almond surface temperatures were measured in two ways, and those temperatures were used to calculate Salmonella inactivation using a traditional (D, z) model and a modified model accounting for process humidity. Among the process validation methods, both methods based on time-temperature models had better repeatability, with replication errors approximately half those of the surrogate (E. faecium). Additionally, the modified model yielded the lowest root mean squared error in predicting Salmonella inactivation (1.1 to 1.5 log CFU/g); in contrast, E. faecium yielded a root mean squared error of 1.2 to 1.6 log CFU/g, and the traditional model yielded an unacceptably high error (3.4 to 4.4 log CFU/g). Importantly, the surrogate and modified model both yielded lethality predictions that were statistically equivalent (α = 0.05) to actual Salmonella lethality. The results demonstrate the importance of methodology, aw, and process humidity when validating thermal pasteurization processes for low-moisture foods, which should help processors select and interpret validation methods to ensure product safety.
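For reference, the traditional time-temperature lethality calculation mentioned above can be sketched as follows, with D(T) = D_ref * 10^((T_ref - T)/z) integrated over a surface temperature history; the parameter values and temperature profile are placeholders, and the humidity-modified model is not reproduced.

```python
# Sketch: traditional (D, z) log-reduction calculation over a temperature history.
import numpy as np

def log_reduction(time_s, temp_c, d_ref_s, t_ref_c, z_c):
    """Integrate the instantaneous lethal rate 1/D(T) over a temperature history."""
    d_t = d_ref_s * 10.0 ** ((t_ref_c - temp_c) / z_c)      # D-value at each instant
    rate = 1.0 / d_t                                         # log10 reductions per second
    return np.sum(0.5 * (rate[1:] + rate[:-1]) * np.diff(time_s))

# Placeholder surface-temperature history approaching a 121 degC process setpoint.
t = np.linspace(0.0, 300.0, 301)                             # s
temp = 121.0 - (121.0 - 25.0) * np.exp(-t / 60.0)            # degC

# Placeholder Salmonella parameters for illustration only (not validated values).
print("predicted log reduction:",
      round(log_reduction(t, temp, d_ref_s=60.0, t_ref_c=121.0, z_c=20.0), 2))
```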
Fault management for data systems
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Iverson, David L.; Patterson-Hine, F. Ann
1993-01-01
Issues related to automating the process of fault management (fault diagnosis and response) for data management systems are considered. Substantial benefits are to be gained by successful automation of this process, particularly for large, complex systems. The use of graph-based models to develop a computer assisted fault management system is advocated. The general problem is described and the motivation behind choosing graph-based models over other approaches for developing fault diagnosis computer programs is outlined. Some existing work in the area of graph-based fault diagnosis is reviewed, and a new fault management method which was developed from existing methods is offered. Our method is applied to an automatic telescope system intended as a prototype for future lunar telescope programs. Finally, an application of our method to general data management systems is described.
Lowering the environmental impact of high-kappa/ metal gate stack surface preparation processes
NASA Astrophysics Data System (ADS)
Zamani, Davoud
ABSTRACT Hafnium based oxides and silicates are promising high-κ dielectrics to replace SiO2 as gate material for state-of-the-art semiconductor devices. However, integrating these new high-κ materials into the existing complementary metal-oxide semiconductor (CMOS) process remains a challenge. One particular area of concern is the use of large amounts of HF during wet etching of hafnium based oxides and silicates. The patterning of thin films of these materials is accomplished by wet etching in HF solutions. The use of HF allows dissolution of hafnium as an anionic fluoride complex. Etch selectivity with respect to SiO2 is achieved by appropriately diluting the solutions and using slightly elevated temperatures. From an ESH point of view, it would be beneficial to develop methods which would lower the use of HF. The first objective of this study is to find new chemistries and developments of new wet etch methods to reduce fluoride consumption during wet etching of hafnium based high-κ materials. Another related issue with major environmental impact is the usage of large amounts of rinsing water for removal of HF in post-etch cleaning step. Both of these require a better understanding of the HF interaction with the high-κ surface during the etching, cleaning, and rinsing processes. During the rinse, the cleaning chemical is removed from the wafers. Ensuring optimal resource usage and cycle time during the rinse requires a sound understanding and quantitative description of the transport effects that dominate the removal rate of the cleaning chemicals from the surfaces. Multiple processes, such as desorption and re-adsorption, diffusion, migration and convection, all factor into the removal rate of the cleaning chemical during the rinse. Any of these processes can be the removal rate limiting process, the bottleneck of the rinse. In fact, the process limiting the removal rate generally changes as the rinse progresses, offering the opportunity to save resources. The second objective of this study is to develop new rinse methods to reduce water and energy usage during rinsing and cleaning of hafnium based high-κ materials in single wafer-cleaning tools. It is necessary to have a metrology method which can study the effect of all process parameters that affect the rinsing by knowing surface concentration of contaminants in patterned hafnium based oxides and silicate wafers. This has been achieved by the introduction of a metrology method at The University of Arizona which monitors the transport of contaminant concentrations inside micro- and nano- structures. This is the only metrology which will be able to provide surface concentration of contaminants inside hafnium based oxides and silicate micro-structures while the rinsing process is taking place. The goal of this research is to study the effect of various process parameters on rinsing of patterned hafnium based oxides and silicate wafers, and modify a metrology method for end point detection.
Simplified process model discovery based on role-oriented genetic mining.
Zhao, Weidong; Liu, Xi; Dai, Weihui
2014-01-01
Process mining is the automated acquisition of process models from event logs. Although many process mining techniques have been developed, most of them are based on control flow. Meanwhile, the existing role-oriented process mining methods focus on correctness and integrity of roles while ignoring the role complexity of the process model, which directly impacts the understandability and quality of the model. To address these problems, we propose a genetic programming approach to mine the simplified process model. Using a new metric of process complexity in terms of roles as the fitness function, we can find simpler process models. The new role complexity metric of process models is designed from role cohesion and coupling, and applied to discover roles in process models. Moreover, the higher fitness derived from the role complexity metric also provides a guideline for redesigning process models. Finally, we conduct a case study and experiments to show that the proposed method is more effective for streamlining the process, compared with related studies.
The Role of Attention in Somatosensory Processing: A Multi-trait, Multi-method Analysis
Puts, Nicolaas A. J.; Mahone, E. Mark; Edden, Richard A. E.; Tommerdahl, Mark; Mostofsky, Stewart H.
2016-01-01
Sensory processing abnormalities in autism have largely been described by parent report. This study used a multi-method (parent-report and measurement), multi-trait (tactile sensitivity and attention) design to evaluate somatosensory processing in ASD. Results showed multiple significant within-method (e.g., parent report of different traits)/cross-trait (e.g., attention and tactile sensitivity) correlations, suggesting that parent-reported tactile sensory dysfunction and performance-based tactile sensitivity describe different behavioral phenomena. Additionally, both parent-reported tactile functioning and performance-based tactile sensitivity measures were significantly associated with measures of attention. Findings suggest that sensory (tactile) processing abnormalities in ASD are multifaceted, and may partially reflect a more global deficit in behavioral regulation (including attention). Challenges of relying solely on parent-report to describe sensory difficulties faced by children/families with ASD are also highlighted. PMID:27448580
ERIC Educational Resources Information Center
Stuebing, Karla K.; Fletcher, Jack M.; Branum-Martin, Lee; Francis, David J.
2012-01-01
This study used simulation techniques to evaluate the technical adequacy of three methods for the identification of specific learning disabilities via patterns of strengths and weaknesses in cognitive processing. Latent and observed data were generated and the decision-making process of each method was applied to assess concordance in…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Townsend, D.W.; Linnhoff, B.
In Part I, criteria for heat engine and heat pump placement in chemical process networks were derived, based on the "temperature interval" (T.I.) analysis of the heat exchanger network problem. Using these criteria, this paper gives a method for identifying the best outline design for any combined system of chemical process, heat engines, and heat pumps. The method eliminates inferior alternatives early, and positively leads on to the most appropriate solution. A graphical procedure based on the T.I. analysis forms the heart of the approach, and the calculations involved are simple enough to be carried out on, say, a programmable calculator. Application to a case study is demonstrated. Optimization methods based on this procedure are currently under research.
Integrating relationship- and research-based approaches in Australian health promotion practice.
Klinner, Christiane; Carter, Stacy M; Rychetnik, Lucie; Li, Vincy; Daley, Michelle; Zask, Avigdor; Lloyd, Beverly
2015-12-01
We examine the perspectives of health promotion practitioners on their approaches to determining health promotion practice, in particular on the role of research and relationships in this process. Using Grounded Theory methods, we analysed 58 semi-structured interviews with 54 health promotion practitioners in New South Wales, Australia. Practitioners differentiated between relationship-based and research-based approaches as two sources of knowledge to guide health promotion practice. We identify several tensions in seeking to combine these approaches in practice and describe the strategies that participants adopted to manage these tensions. The strategies included working in an evidence-informed rather than evidence-based way, creating new evidence about relationship-based processes and outcomes, adopting 'relationship-based' research and evaluation methods, making research and evaluation useful for communities, building research and evaluation skills and improving collaboration between research and evaluation and programme implementation staff. We conclude by highlighting three systemic factors which could further support the integration of research-based and relationship-based health promotion practices: (i) expanding conceptions of health promotion evidence, (ii) developing 'relationship-based' research methods that enable practitioners to measure complex social processes and outcomes and to facilitate community participation and benefit, and (iii) developing organizational capacity. © The Author (2014). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Clarification of vaccines: An overview of filter based technology trends and best practices.
Besnard, Lise; Fabre, Virginie; Fettig, Michael; Gousseinov, Elina; Kawakami, Yasuhiro; Laroudie, Nicolas; Scanlan, Claire; Pattnaik, Priyabrata
2016-01-01
Vaccines are derived from a variety of sources including tissue extracts, bacterial cells, virus particles, recombinant mammalian, yeast and insect cell produced proteins and nucleic acids. The most common method of vaccine production is based on an initial fermentation process followed by purification. Production of vaccines is a complex process involving many different steps and processes. Selection of the appropriate purification method is critical to achieving desired purity of the final product. Clarification of vaccines is a critical step that strongly impacts product recovery and subsequent downstream purification. There are several technologies that can be applied for vaccine clarification. Selection of a harvesting method and equipment depends on the type of cells, product being harvested, and properties of the process fluids. These techniques include membrane filtration (microfiltration, tangential-flow filtration), centrifugation, and depth filtration (normal flow filtration). Historically vaccine harvest clarification was usually achieved by centrifugation followed by depth filtration. Recently membrane based technologies have gained prominence in vaccine clarification. The increasing use of single-use technologies in upstream processes necessitated a shift in harvest strategies. This review offers a comprehensive view on different membrane based technologies and their application in vaccine clarification, outlines the challenges involved and presents the current state of best practices in the clarification of vaccines. Copyright © 2015 Elsevier Inc. All rights reserved.
Fringe image processing based on structured light series
NASA Astrophysics Data System (ADS)
Gai, Shaoyan; Da, Feipeng; Li, Hongyan
2009-11-01
The code analysis of the fringe image plays a vital role in the data acquisition of structured light systems, affecting the precision, computational speed and reliability of the measurement processing. Based on the self-normalizing characteristic, a fringe image processing method based on structured light is proposed. In this method, a series of projective patterns is used when detecting the fringe order of the image pixels. The structured light system geometry is presented, which consists of a white light projector and a digital camera; the former projects sinusoidal fringe patterns upon the object, and the latter acquires the fringe patterns that are deformed by the object's shape. Binary images with distinct white and black strips can then be obtained, and the ability to resist image noise is improved greatly. The proposed method can be implemented easily and applied for profile measurement based on special binary code in a wide field.
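As a rough illustration of the decoding step described above, here is a generic sketch of recovering a per-pixel fringe order from a stack of binary-coded captured patterns; the per-pixel threshold and bit ordering are assumptions for illustration, not the paper's exact coding scheme.

```python
import numpy as np

# Generic sketch: decode a per-pixel fringe order from a stack of captured
# binary-coded patterns. `images` is a list of grayscale images (H, W) captured
# while projecting patterns from most- to least-significant bit.
def decode_fringe_order(images, threshold=None):
    stack = np.stack([img.astype(float) for img in images])        # (N, H, W)
    if threshold is None:
        threshold = 0.5 * (stack.max(axis=0) + stack.min(axis=0))  # per-pixel threshold
    bits = (stack > threshold).astype(np.int64)                    # binarize each pattern
    n = bits.shape[0]
    weights = 2 ** np.arange(n - 1, -1, -1)                        # MSB first
    return np.tensordot(weights, bits, axes=1)                     # (H, W) fringe orders
```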
AOD furnace splash soft-sensor in the smelting process based on improved BP neural network
NASA Astrophysics Data System (ADS)
Ma, Haitao; Wang, Shanshan; Wu, Libin; Yu, Ying
2017-11-01
For the production of low-carbon ferrochrome by argon oxygen refining, splash during the smelting process is taken as the research object. Based on an analysis of the splash mechanism in the smelting process, a soft-sensing method using multi-sensor information fusion and BP neural network modeling is proposed in this paper. The vibration signal, the audio signal and the flame image signal in the furnace are used as the characteristic signals of splash; these signals are fused and modeled, the splash signal is reconstructed, and soft measurement of splash in the smelting process is realized. The simulation results show that the method can accurately forecast the splash type in the smelting process, providing a new measurement method for splash forecasting and more accurate information for splash control.
Jiang, Wen; Cao, Ying; Yang, Lin; He, Zichang
2017-08-28
Specific emitter identification plays an important role in contemporary military affairs. However, most of the existing specific emitter identification methods have not taken into account the processing of uncertain information. Therefore, this paper proposes a time-space domain information fusion method based on Dempster-Shafer evidence theory, which has the ability to deal with uncertain information in the process of specific emitter identification. In this paper, radars each generate a group of evidence based on the information they obtain, and our main task is to fuse the multiple groups of evidence to get a reasonable result. Within the framework of a recursive centralized fusion model, the proposed method incorporates a correlation coefficient, which measures the relevance between bodies of evidence, and a quantum mechanical approach based on the parameters of the radar itself. The simulation results of an illustrative example demonstrate that the proposed method can effectively deal with uncertain information and reach a reasonable recognition result.
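To make the evidence-combination step concrete, below is a minimal sketch of Dempster's rule of combination for two mass functions over the same frame of discernment; the emitter hypotheses and mass values are hypothetical, and the paper's correlation-coefficient and quantum-mechanical weighting steps are not included.

```python
# Dempster's rule of combination for two basic probability assignments.
# Focal elements are frozensets of hypotheses (e.g., candidate emitters).
def combine_dempster(m1, m2):
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb      # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical reports from two radars about emitters E1, E2, E3.
m_radar1 = {frozenset({"E1"}): 0.6, frozenset({"E1", "E2"}): 0.3,
            frozenset({"E1", "E2", "E3"}): 0.1}
m_radar2 = {frozenset({"E1"}): 0.5, frozenset({"E2"}): 0.3,
            frozenset({"E1", "E2", "E3"}): 0.2}
print(combine_dempster(m_radar1, m_radar2))
```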
A trust-based recommendation method using network diffusion processes
NASA Astrophysics Data System (ADS)
Chen, Ling-Jiao; Gao, Jian
2018-09-01
A variety of rating-based recommendation methods have been extensively studied, including the well-known collaborative filtering approaches and some network diffusion-based methods; however, social trust relations are not sufficiently considered when making recommendations. In this paper, we contribute to the literature by proposing a trust-based recommendation method, named CosRA+T, obtained by integrating the information of trust relations into the resource-redistribution process. Specifically, a tunable parameter is used to scale the resources received by trusted users before the redistribution back to the objects. Interestingly, we find an optimal scaling parameter for the proposed CosRA+T method to achieve its best recommendation accuracy, and the optimal value seems to be universal under several evaluation metrics across different datasets. Moreover, results of extensive experiments on two real-world rating datasets with trust relations, Epinions and FriendFeed, suggest that CosRA+T yields a remarkable improvement in overall accuracy, diversity and novelty. Our work takes a step towards designing better recommendation algorithms by employing multiple resources of social network information.
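The following sketch illustrates the general idea of scaling the resources held by trusted users during a user-object diffusion, under the assumption of a simple two-step mass redistribution; the matrices A and T and the scaling parameter lam are illustrative placeholders rather than the authors' exact CosRA+T formulation.

```python
import numpy as np

# Illustrative trust-scaled resource redistribution on a user-object bipartite
# network: A[u, o] = 1 if user u rated object o, T[u, v] = 1 if user u trusts v.
def trust_scaled_diffusion(A, T, target_user, lam=1.5):
    k_obj = A.sum(axis=0)                       # object degrees
    k_user = A.sum(axis=1)                      # user degrees
    resource_obj = A[target_user].astype(float)
    with np.errstate(divide="ignore", invalid="ignore"):
        # Step 1: objects rated by the target user spread resource to users.
        resource_user = A @ np.nan_to_num(resource_obj / k_obj)
    # Step 2: amplify resource held by users the target user trusts.
    resource_user *= np.where(T[target_user] > 0, lam, 1.0)
    with np.errstate(divide="ignore", invalid="ignore"):
        # Step 3: redistribute back to objects through the users' ratings.
        scores = A.T @ np.nan_to_num(resource_user / k_user)
    scores[A[target_user] > 0] = -np.inf        # skip already-rated objects
    return scores

A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
T = np.array([[0, 1, 0],
              [0, 0, 0],
              [1, 0, 0]])
print(trust_scaled_diffusion(A, T, target_user=0))
```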
Quality Assurance of Multiport Image-Guided Minimally Invasive Surgery at the Lateral Skull Base
Nau-Hermes, Maria; Schmitt, Robert; Becker, Meike; El-Hakimi, Wissam; Hansen, Stefan; Klenzner, Thomas; Schipper, Jörg
2014-01-01
For multiport image-guided minimally invasive surgery at the lateral skull base a quality management is necessary to avoid the damage of closely spaced critical neurovascular structures. So far there is no standardized method applicable independently from the surgery. Therefore, we adapt a quality management method, the quality gates (QG), which is well established in, for example, the automotive industry and apply it to multiport image-guided minimally invasive surgery. QG divide a process into different sections. Passing between sections can only be achieved if previously defined requirements are fulfilled which secures the process chain. An interdisciplinary team of otosurgeons, computer scientists, and engineers has worked together to define the quality gates and the corresponding criteria that need to be fulfilled before passing each quality gate. In order to evaluate the defined QG and their criteria, the new surgery method was applied with a first prototype at a human skull cadaver model. We show that the QG method can ensure a safe multiport minimally invasive surgical process at the lateral skull base. Therewith, we present an approach towards the standardization of quality assurance of surgical processes. PMID:25105146
On Intelligent Design and Planning Method of Process Route Based on Gun Breech Machining Process
NASA Astrophysics Data System (ADS)
Hongzhi, Zhao; Jian, Zhang
2018-03-01
The paper presents an approach to the intelligent design and planning of process routes based on the gun breech machining process, addressing several problems of the traditional approach, such as the complexity of the gun breech machining process, tedious route design and the long development period of an unmanageable process route. Based on the gun breech machining process, an intelligent process route design and planning system is developed using DEST and VC++. The system includes two functional modules: intelligent process route design and process route planning. The intelligent process route design module, through analysis of the gun breech machining process, summarizes breech process knowledge to complete the design of the knowledge base and inference engine; the gun breech process route is then output intelligently. On the basis of the intelligent route design module, the final process route is made, edited and managed in the process route planning module.
Han, Lianghao; Dong, Hua; McClelland, Jamie R; Han, Liangxiu; Hawkes, David J; Barratt, Dean C
2017-07-01
This paper presents a new hybrid biomechanical model-based non-rigid image registration method for lung motion estimation. In the proposed method, a patient-specific biomechanical modelling process captures major physically realistic deformations with explicit physical modelling of sliding motion, whilst a subsequent non-rigid image registration process compensates for small residuals. The proposed algorithm was evaluated with 10 4D CT datasets of lung cancer patients. The target registration error (TRE), defined as the Euclidean distance of landmark pairs, was significantly lower with the proposed method (TRE = 1.37 mm) than with biomechanical modelling (TRE = 3.81 mm) and intensity-based image registration without specific considerations for sliding motion (TRE = 4.57 mm). The proposed method achieved a comparable accuracy as several recently developed intensity-based registration algorithms with sliding handling on the same datasets. A detailed comparison on the distributions of TREs with three non-rigid intensity-based algorithms showed that the proposed method performed especially well on estimating the displacement field of lung surface regions (mean TRE = 1.33 mm, maximum TRE = 5.3 mm). The effects of biomechanical model parameters (such as Poisson's ratio, friction and tissue heterogeneity) on displacement estimation were investigated. The potential of the algorithm in optimising biomechanical models of lungs through analysing the pattern of displacement compensation from the image registration process has also been demonstrated. Copyright © 2017 Elsevier B.V. All rights reserved.
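For reference, a minimal sketch of the target registration error (TRE) metric used above, computed as the Euclidean distance between corresponding landmark pairs (the array names are assumptions):

```python
import numpy as np

def target_registration_error(landmarks_fixed, landmarks_warped):
    """Mean and maximum Euclidean distance (e.g., in mm) between corresponding
    landmark pairs. Both arrays are (N, 3) in the same physical coordinates."""
    per_landmark = np.linalg.norm(landmarks_fixed - landmarks_warped, axis=1)
    return per_landmark.mean(), per_landmark.max()
```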
Palacios, Julia A; Minin, Vladimir N
2013-03-01
Changes in population size influence genetic diversity of the population and, as a result, leave a signature of these changes in individual genomes in the population. We are interested in the inverse problem of reconstructing past population dynamics from genomic data. We start with a standard framework based on the coalescent, a stochastic process that generates genealogies connecting randomly sampled individuals from the population of interest. These genealogies serve as a glue between the population demographic history and genomic sequences. It turns out that only the times of genealogical lineage coalescences contain information about population size dynamics. Viewing these coalescent times as a point process, estimating population size trajectories is equivalent to estimating a conditional intensity of this point process. Therefore, our inverse problem is similar to estimating an inhomogeneous Poisson process intensity function. We demonstrate how recent advances in Gaussian process-based nonparametric inference for Poisson processes can be extended to Bayesian nonparametric estimation of population size dynamics under the coalescent. We compare our Gaussian process (GP) approach to one of the state-of-the-art Gaussian Markov random field (GMRF) methods for estimating population trajectories. Using simulated data, we demonstrate that our method has better accuracy and precision. Next, we analyze two genealogies reconstructed from real sequences of hepatitis C and human Influenza A viruses. In both cases, we recover more believed aspects of the viral demographic histories than the GMRF approach. We also find that our GP method produces more reasonable uncertainty estimates than the GMRF method. Copyright © 2013, The International Biometric Society.
The research on construction and application of machining process knowledge base
NASA Astrophysics Data System (ADS)
Zhao, Tan; Qiao, Lihong; Qie, Yifan; Guo, Kai
2018-03-01
In order to realize the application of knowledge in machining process design, from the perspective of knowledge use in computer-aided process planning (CAPP), a hierarchical structure of knowledge classification is established according to the characteristics of the mechanical engineering field. The expression of machining process knowledge is structured by means of production rules and object-oriented methods. Three kinds of knowledge base models are constructed according to the representation of machining process knowledge. In this paper, the definition and classification of machining process knowledge, the knowledge model, and the application flow of process design based on the knowledge base are given, and the main steps of the machine tool design decision are carried out as an application using the knowledge base.
Wang, Yong-Chun; Lin, Cong-Bin; Su, Jian-Jia; Ru, Ying-Ming; Wu, Qiao; Chen, Zhao-Bin; Mao, Bing-Wei; Tian, Zhao-Wu
2011-06-15
In this paper, we present an electrochemically driven large amplitude pH alteration method based on a serial electrolytic cell involving a hydrogen permeable bifacial working electrode such as Pd thin foil. The method allows solution pH to be changed periodically up to ±4~5 units without additional alteration of concentration and/or composition of the system. Application to the acid-base driven cyclic denaturation and renaturation of 290 bp DNA fragments is successfully demonstrated with in situ real-time UV spectroscopic characterization. Electrophoretic analysis confirms that the denaturation and renaturation processes are reversible without degradation of the DNA. The serial electrolytic cell based electrochemical pH alteration method presented in this work would promote investigations of a wide variety of potential-dependent processes and techniques.
ERIC Educational Resources Information Center
Fenton, Ginger D.; LaBorde, Luke F.; Radhakrishna, Rama B.; Brown, J. Lynne; Cutter, Catherine N.
2006-01-01
Computer-based training is increasingly favored by food companies for training workers due to convenience, self-pacing ability, and ease of use. The objectives of this study were to determine if personal hygiene training, offered through a computer-based method, is as effective as a face-to-face method in knowledge acquisition and improved…
Xu, Weichen; Jimenez, Rod Brian; Mowery, Rachel; Luo, Haibin; Cao, Mingyan; Agarwal, Nitin; Ramos, Irina; Wang, Xiangyang; Wang, Jihong
2017-10-01
During manufacturing and storage, therapeutic proteins are subject to various post-translational modifications (PTMs), such as isomerization, deamidation, oxidation, disulfide bond modifications and glycosylation. Certain PTMs may affect bioactivity, stability or the pharmacokinetics and pharmacodynamics profile and are therefore classified as potential critical quality attributes (pCQAs). Identifying, monitoring and controlling these PTMs are usually key elements of the Quality by Design (QbD) approach. Traditionally, multiple analytical methods are utilized for these purposes, which is time consuming and costly. In recent years, multi-attribute monitoring methods have been developed in the biopharmaceutical industry. However, these methods combine high-end mass spectrometry with complicated data analysis software, which could pose difficulty when implemented in a quality control (QC) environment. Here we report a multi-attribute method (MAM) using a Quadrupole Dalton (QDa) mass detector to selectively monitor and quantitate PTMs in a therapeutic monoclonal antibody. The result output from the QDa-based MAM is straightforward and automatic. Evaluation results indicate this method provides results comparable to the traditional assays. To ensure future application in the QC environment, this method was qualified according to the International Conference on Harmonization (ICH) guideline and applied in the characterization of drug substance and stability samples. The QDa-based MAM is shown to be an extremely useful tool for product and process characterization studies that facilitates facile understanding of process impact on multiple quality attributes, while being QC friendly and cost-effective.
The Simulation of the Recharging Method Based on Solar Radiation for an Implantable Biosensor.
Li, Yun; Song, Yong; Kong, Xianyue; Li, Maoyuan; Zhao, Yufei; Hao, Qun; Gao, Tianxin
2016-09-10
A method of recharging implantable biosensors based on solar radiation is proposed. Firstly, the models of the proposed method are developed. Secondly, the recharging processes based on solar radiation are simulated using Monte Carlo (MC) method and the energy distributions of sunlight within the different layers of human skin have been achieved and discussed. Finally, the simulation results are verified experimentally, which indicates that the proposed method will contribute to achieve a low-cost, convenient and safe method for recharging implantable biosensors.
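The following toy sketch illustrates the Monte Carlo idea of tallying where photon energy is deposited in layered skin; the layer thicknesses and absorption coefficients are placeholders, and scattering and reflection are ignored, so this is not the paper's simulation.

```python
import math
import random

# Toy 1-D Monte Carlo: photons travel straight down through skin layers and may
# be absorbed; the fraction absorbed per layer is tallied. Thicknesses (cm) and
# absorption coefficients (1/cm) are placeholders, not values from the paper.
LAYERS = [("epidermis", 0.01, 4.0), ("dermis", 0.20, 2.0), ("subcutis", 0.30, 1.0)]

def simulate(n_photons=100_000):
    absorbed = {name: 0 for name, _, _ in LAYERS}
    for _ in range(n_photons):
        for name, thickness, mu_a in LAYERS:
            # Free path sampled from an exponential law; memorylessness makes
            # per-layer resampling exact for pure absorption on a straight path.
            step = -math.log(1.0 - random.random()) / mu_a
            if step < thickness:
                absorbed[name] += 1
                break
        # Photons that pass through every layer are counted as transmitted.
    return {name: count / n_photons for name, count in absorbed.items()}

print(simulate())
```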
Index cost estimate based BIM method - Computational example for sports fields
NASA Astrophysics Data System (ADS)
Zima, Krzysztof
2017-07-01
The paper presents an example of cost estimation in the early phase of a project. A fragment of a relational database containing the solution, descriptions, geometry of the construction object and unit costs of sports facilities is shown. The Index Cost Estimate Based BIM method calculations, using Case-Based Reasoning, are presented as well. The article presents local and global similarity measurement and an example of a BIM-based quantity takeoff process. The outcome of cost calculations based on the CBR method is presented as the final result of the calculations.
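To illustrate the Case-Based Reasoning similarity step mentioned above, here is a small sketch that combines local attribute similarities into a global case similarity; the attribute names, ranges and weights are invented for illustration and do not come from the paper's database.

```python
# Local/global similarity in Case-Based Reasoning: each attribute gets a local
# similarity in [0, 1]; the global similarity is their weighted average.
def local_numeric(a, b, value_range):
    return 1.0 - abs(a - b) / value_range

def local_categorical(a, b):
    return 1.0 if a == b else 0.0

def global_similarity(query, case, weights):
    sims = {
        "area_m2": local_numeric(query["area_m2"], case["area_m2"], value_range=10_000),
        "surface_type": local_categorical(query["surface_type"], case["surface_type"]),
        "drainage": local_categorical(query["drainage"], case["drainage"]),
    }
    return sum(weights[k] * sims[k] for k in sims) / sum(weights.values())

# Hypothetical query and stored case for a sports field.
query = {"area_m2": 1800, "surface_type": "artificial turf", "drainage": "yes"}
case = {"area_m2": 2200, "surface_type": "artificial turf", "drainage": "no"}
weights = {"area_m2": 0.5, "surface_type": 0.3, "drainage": 0.2}
print(global_similarity(query, case, weights))
```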
Automated and unsupervised detection of malarial parasites in microscopic images.
Purwar, Yashasvi; Shah, Sirish L; Clarke, Gwen; Almugairi, Areej; Muehlenbachs, Atis
2011-12-13
Malaria is a serious infectious disease. According to the World Health Organization, it is responsible for nearly one million deaths each year. There are various techniques to diagnose malaria of which manual microscopy is considered to be the gold standard. However due to the number of steps required in manual assessment, this diagnostic method is time consuming (leading to late diagnosis) and prone to human error (leading to erroneous diagnosis), even in experienced hands. The focus of this study is to develop a robust, unsupervised and sensitive malaria screening technique with low material cost and one that has an advantage over other techniques in that it minimizes human reliance and is, therefore, more consistent in applying diagnostic criteria. A method based on digital image processing of Giemsa-stained thin smear image is developed to facilitate the diagnostic process. The diagnosis procedure is divided into two parts; enumeration and identification. The image-based method presented here is designed to automate the process of enumeration and identification; with the main advantage being its ability to carry out the diagnosis in an unsupervised manner and yet have high sensitivity and thus reducing cases of false negatives. The image based method is tested over more than 500 images from two independent laboratories. The aim is to distinguish between positive and negative cases of malaria using thin smear blood slide images. Due to the unsupervised nature of method it requires minimal human intervention thus speeding up the whole process of diagnosis. Overall sensitivity to capture cases of malaria is 100% and specificity ranges from 50-88% for all species of malaria parasites. Image based screening method will speed up the whole process of diagnosis and is more advantageous over laboratory procedures that are prone to errors and where pathological expertise is minimal. Further this method provides a consistent and robust way of generating the parasite clearance curves.
NASA Astrophysics Data System (ADS)
Zhao, Libo; Xia, Yong; Hebibul, Rahman; Wang, Jiuhong; Zhou, Xiangyang; Hu, Yingjie; Li, Zhikang; Luo, Guoxi; Zhao, Yulong; Jiang, Zhuangde
2018-03-01
This paper presents an experimental study using image processing to investigate the width and width uniformity of sub-micrometer polyethylene oxide (PEO) lines fabricated by the near-field electrospinning (NFES) technique. An adaptive thresholding method was developed to determine the optimal gray values to accurately extract the profiles of printed lines from the original optical images, and its feasibility was demonstrated. The mechanism of the proposed thresholding method is believed to take advantage of statistical properties and get rid of halo-induced errors. The triangular method and relative standard deviation (RSD) were introduced to calculate line width and width uniformity, respectively. Based on these image processing methods, the effects of process parameters including substrate speed (v), applied voltage (U), nozzle-to-collector distance (H), and syringe pump flow rate (Q) on the width and width uniformity of printed lines were discussed. The research results are helpful for promoting the NFES technique for fabricating high-resolution micro and sub-micro lines and also for optical image processing at the sub-micrometer level.
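As a small worked example of the width-uniformity metric described above, the sketch below computes the relative standard deviation (RSD) of measured line widths; the sample values are placeholders.

```python
import numpy as np

def width_uniformity_rsd(widths_nm):
    """Relative standard deviation (%) of measured line widths."""
    widths = np.asarray(widths_nm, dtype=float)
    return 100.0 * widths.std(ddof=1) / widths.mean()

# Placeholder width measurements of one printed PEO line at several positions (nm).
print(width_uniformity_rsd([820, 805, 830, 812, 825]))
```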
Monitoring of waste disposal in deep geological formations
NASA Astrophysics Data System (ADS)
German, V.; Mansurov, V.
2003-04-01
This paper advances the application of a kinetic approach to the description of the rock failure process and to microseismic monitoring of waste disposal. On the basis of a two-stage model of the failure process, the capability of forecasting rock fracture is substantiated. The requirements for the monitoring system, such as a real-time mode of data registration and processing and its precision range, are formulated. A method for delineating failure nuclei in a rock mass is presented. This method is implemented in a software program for forecasting strong seismic events. It is based on direct use of the fracture concentration criterion. The method is applied to the database of microseismic events of the North Ural Bauxite Mine. The results of this application, such as efficiency, stability, and the possibility of forecasting rockbursts, are discussed.
NASA Astrophysics Data System (ADS)
Nemoto, Mitsutaka; Hayashi, Naoto; Hanaoka, Shouhei; Nomura, Yukihiro; Miki, Soichiro; Yoshikawa, Takeharu; Ohtomo, Kuni
2016-03-01
The purpose of this study is to evaluate the feasibility of a novel feature generation approach, based on multiple deep neural networks (DNNs) with boosting, for computer-assisted detection (CADe). It is hard and time-consuming to optimize the hyperparameters for DNNs such as the stacked denoising autoencoder (SdA). The proposed method allows using SdA-based features without the burden of hyperparameter setting. The proposed method was evaluated in an application for detecting cerebral aneurysms on magnetic resonance angiograms (MRA). A baseline CADe process included four components: scaling, candidate area limitation, candidate detection, and candidate classification. The proposed feature generation method was applied to extract the optimal features for candidate classification, and only required setting the ranges of the hyperparameters for SdA. The optimal feature set was selected from a large quantity of SdA-based features produced by multiple SdAs, each of which was trained using a different hyperparameter set. The feature selection was operated through the AdaBoost ensemble learning method. Training of the baseline CADe process and the proposed feature generation were performed with 200 MRA cases, and the evaluation was performed with 100 MRA cases. The proposed method successfully provided SdA-based features by just setting the ranges of some hyperparameters for SdA. The CADe process using both the previous voxel features and the SdA-based features had the best performance, with an area under the ROC curve of 0.838 and an ANODE score of 0.312. The results showed that the proposed method was effective in the application for detecting cerebral aneurysms on MRA.
Culturally Based Intervention Development: The Case of Latino Families Dealing with Schizophrenia
ERIC Educational Resources Information Center
Barrio, Concepcion; Yamada, Ann-Marie
2010-01-01
Objectives: This article describes the process of developing a culturally based family intervention for Spanish-speaking Latino families with a relative diagnosed with schizophrenia. Method: Our iterative intervention development process was guided by a cultural exchange framework and based on findings from an ethnographic study. We piloted this…
Guiding Students through the Jungle of Research-Based Literature
ERIC Educational Resources Information Center
Williams, Sherie
2005-01-01
Undergraduate students of today often lack the ability to effectively process research-based literature. In order to offer education students the most up-to-date methods, research-based literature must be considered. Hence a dilemma is born as to whether professors should discontinue requiring the processing of this type of information or teach…
A Novel Method for Block Size Forensics Based on Morphological Operations
NASA Astrophysics Data System (ADS)
Luo, Weiqi; Huang, Jiwu; Qiu, Guoping
Passive forensics analysis aims to find out how multimedia data is acquired and processed without relying on pre-embedded or pre-registered information. Since most existing compression schemes for digital images are based on block processing, one of the fundamental steps for subsequent forensics analysis is to detect the presence of block artifacts and estimate the block size for a given image. In this paper, we propose a novel method for blind block size estimation. A 2×2 cross-differential filter is first applied to detect all possible block artifact boundaries, morphological operations are then used to remove the boundary effects caused by the edges of the actual image contents, and finally maximum-likelihood estimation (MLE) is employed to estimate the block size. The experimental results evaluated on over 1300 nature images show the effectiveness of our proposed method. Compared with existing gradient-based detection method, our method achieves over 39% accuracy improvement on average.
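A minimal sketch of the 2×2 cross-differential filtering step described above is given below; the subsequent morphological cleanup and maximum-likelihood block-size estimation are omitted, and averaging the filtered map along rows and columns is shown only as one simple way to expose periodic block boundaries.

```python
import numpy as np

def cross_difference_map(img):
    """Absolute 2x2 cross-difference |I(x,y) + I(x+1,y+1) - I(x+1,y) - I(x,y+1)|.

    Peaks in the row/column averages of this map tend to align with block
    artifact boundaries in block-based compressed images.
    """
    img = img.astype(float)
    return np.abs(img[:-1, :-1] + img[1:, 1:] - img[1:, :-1] - img[:-1, 1:])

def boundary_profiles(img):
    d = cross_difference_map(img)
    return d.mean(axis=0), d.mean(axis=1)   # per-column and per-row averages
```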
Implementation and optimization of ultrasound signal processing algorithms on mobile GPU
NASA Astrophysics Data System (ADS)
Kong, Woo Kyu; Lee, Wooyoul; Kim, Kyu Cheol; Yoo, Yangmo; Song, Tai-Kyong
2014-03-01
A general-purpose graphics processing unit (GPGPU) has been used for improving computing power in medical ultrasound imaging systems. Recently, mobile GPUs have become powerful enough to deal with 3D games and videos at high frame rates on Full HD or HD resolution displays. This paper proposes a method to implement ultrasound signal processing on a mobile GPU available in a high-end smartphone (Galaxy S4, Samsung Electronics, Seoul, Korea) with programmable shaders on the OpenGL ES 2.0 platform. To maximize the performance of the mobile GPU, optimization of the shader design and load sharing between the vertex and fragment shaders was performed. The beamformed data were captured from a tissue-mimicking phantom (Model 539 Multipurpose Phantom, ATS Laboratories, Inc., Bridgeport, CT, USA) by using a commercial ultrasound imaging system equipped with a research package (Ultrasonix Touch, Ultrasonix, Richmond, BC, Canada). The real-time performance was evaluated by frame rates while varying the range of signal processing blocks. The implementation of ultrasound signal processing on OpenGL ES 2.0 was verified by analyzing the PSNR against a MATLAB gold standard with the same signal path, and the CNR was also analyzed to verify the method. From the evaluations, the proposed mobile GPU-based processing method showed no significant difference from the MATLAB processing (PSNR < 52.51 dB), and comparable CNR results were obtained from both processing methods (11.31). From the mobile GPU implementation, a frame rate of 57.6 Hz was achieved. The total execution time was 17.4 ms, which was faster than the acquisition time (34.4 ms). These results indicate that the mobile GPU-based processing method can support real-time ultrasound B-mode processing on the smartphone.
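For reference, a minimal sketch of the PSNR measure used above to compare the mobile GPU output against a reference implementation (the array names are assumptions):

```python
import numpy as np

def psnr_db(reference, test):
    """Peak signal-to-noise ratio in dB between two images of the same size."""
    reference = reference.astype(float)
    test = test.astype(float)
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(reference.max() ** 2 / mse)
```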
Wavelet-Based Processing for Fiber Optic Sensing Systems
NASA Technical Reports Server (NTRS)
Hamory, Philip J. (Inventor); Parker, Allen R., Jr. (Inventor)
2016-01-01
The present invention is an improved method of processing conglomerate data. The method employs a Triband Wavelet Transform that decomposes and decimates the conglomerate signal to obtain a final result. The invention may be employed to improve performance of Optical Frequency Domain Reflectometry systems.
Rogberg-Muñoz, Andrés; Posik, Diego M; Rípoli, María V; Falomir Lockhart, Agustín H; Peral-García, Pilar; Giovambattista, Guillermo
2013-04-01
The value of the traceability and labeling of food is attributable to two main aspects: health safety and/or product or process certification. The identification of the species related to meat production is still a major concern for economic, religious and health reasons. Many approaches and technologies have been used for species identification in animal feedstuff and food. The early methods for meat product identification included physical, anatomical, histological and chemical techniques. Since 1970, a variety of methods have been developed; these include electrophoresis (i.e. isoelectrofocusing), chromatography (i.e. HPLC), immunological techniques (i.e. ELISA), Nuclear Magnetic Resonance, Mass Spectrometry and PCR (DNA- and RNA-based methods). The recent patents on species detection in animal feedstuffs, raw meat and processed meat products, listed in this work, are mainly based on monoclonal antibodies and PCR, especially RT-PCR. The new developments under research are aimed at more sensitive, more specific, less time-consuming and quantitative detection methods, which can be used in highly processed or heat-treated meat food.
NASA Astrophysics Data System (ADS)
Huang, Shengzhou; Li, Mujun; Shen, Lianguan; Qiu, Jinfeng; Zhou, Youquan
2018-03-01
A novel fabrication method for high quality aspheric microlens array (MLA) was developed by combining the dose-modulated DMD-based lithography and surface thermal reflow process. In this method, the complex shape of aspheric microlens is pre-modeled via dose modulation in a digital micromirror device (DMD) based maskless projection lithography. And the dose modulation mainly depends on the distribution of exposure dose of photoresist. Then the pre-shaped aspheric microlens is polished by a following non-contact thermal reflow (NCTR) process. Different from the normal process, the reflow process here is investigated to improve the surface quality while keeping the pre-modeled shape unchanged, and thus will avoid the difficulties in generating the aspheric surface during reflow. Fabrication of a designed aspheric MLA with this method was demonstrated in experiments. Results showed that the obtained aspheric MLA was good in both shape accuracy and surface quality. The presented method may be a promising approach in rapidly fabricating high quality aspheric microlens with complex surface.
Transforming Collaborative Process Models into Interface Process Models by Applying an MDA Approach
NASA Astrophysics Data System (ADS)
Lazarte, Ivanna M.; Chiotti, Omar; Villarreal, Pablo D.
Collaborative business models among enterprises require defining collaborative business processes. Enterprises implement B2B collaborations to execute these processes. In B2B collaborations the integration and interoperability of processes and systems of the enterprises are required to support the execution of collaborative processes. From a collaborative process model, which describes the global view of the enterprise interactions, each enterprise must define the interface process that represents the role it performs in the collaborative process in order to implement the process in a Business Process Management System. Hence, in this work we propose a method for the automatic generation of the interface process model of each enterprise from a collaborative process model. This method is based on a Model-Driven Architecture to transform collaborative process models into interface process models. By applying this method, interface processes are guaranteed to be interoperable and defined according to a collaborative process.
A New Automated Design Method Based on Machine Learning for CMOS Analog Circuits
NASA Astrophysics Data System (ADS)
Moradi, Behzad; Mirzaei, Abdolreza
2016-11-01
A new simulation-based automated CMOS analog circuit design method, which applies a multi-objective non-Darwinian-type evolutionary algorithm based on the Learnable Evolution Model (LEM), is proposed in this article. The multi-objective property of this automated design of CMOS analog circuits is governed by a modified Strength Pareto Evolutionary Algorithm (SPEA) incorporated in the LEM algorithm presented here. LEM includes a machine learning method, such as decision trees, that makes a distinction between high- and low-fitness areas in the design space. The learning process can detect the right directions of the evolution and lead to large steps in the evolution of the individuals. The learning phase shortens the evolution process and brings a remarkable reduction in the number of individual evaluations. The expert designer's circuit knowledge is applied in the design process in order to reduce the design space as well as the design time. The circuit evaluation is made by the HSPICE simulator. In order to improve the design accuracy, the bsim3v3 CMOS transistor model is adopted in this proposed design method. The proposed design method is tested on three different operational amplifier circuits, and its performance is verified by comparing it with the evolutionary strategy algorithm and other similar methods.
Recycling of Exhaust Batteries in Lead-Foam Electrodes
NASA Astrophysics Data System (ADS)
Costanza, Girolamo; Tata, Maria Elisa
Lead and lead-alloy foams have been investigated in this research. In particular, low-cost techniques for the direct production of lead-based electrodes have been analyzed and discussed in this work. The relevance of the main process parameters (powder compacting pressure, granulometry, base metal composition, sintering temperature and time) is examined, and their effect on foam morphology is discussed as well. In particular, the "Sintering and Dissolution Process" (SDP) and the "Replication Process" (RP) have been employed and suitably modified. Both spherical urea and NaCl have been adopted in the SDP method. In the replication process it has been evidenced that the viscosity of the melt is fundamental. Furthermore, the research examines lead recovery and the recycling of exhaust batteries into foam-based electrodes. A novel method for the direct conversion of Pb scrap into lead foam is discussed as well.
Efficient SRAM yield optimization with mixture surrogate modeling
NASA Astrophysics Data System (ADS)
Zhongjian, Jiang; Zuochang, Ye; Yan, Wang
2016-12-01
Largely repeated cells such as SRAM cells usually require an extremely low failure rate to ensure a moderate chip yield. Though fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimations. Typically, the process of yield calculation requires a lot of SPICE simulation, and the circuit SPICE simulation accounts for the largest proportion of time in the yield calculation. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model based on the design variables and process variables. The model is constructed by running SPICE simulations to obtain a certain number of sample points, which are then used to train the mixture surrogate model with the lasso algorithm. Experimental results show that the proposed model is able to calculate the yield accurately and brings significant speed-ups to the calculation of the failure rate. Based on the model, a further accelerated algorithm is developed to further enhance the speed of the yield calculation. The method is suitable for high-dimensional process variables and multi-performance applications.
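The surrogate-modeling idea can be sketched as follows: fit a sparse (lasso) regression from design and process variables to a simulated response and reuse the cheap surrogate in place of repeated SPICE runs. The data below are synthetic placeholders, not SPICE results, and the single lasso model is a simplification of the paper's mixture surrogate.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic placeholder data standing in for SPICE samples: columns are design
# and process variables, y is a simulated performance metric (e.g., read margin).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = 1.2 * X[:, 0] - 0.8 * X[:, 3] + 0.1 * rng.normal(size=500)

# Train the sparse surrogate on part of the samples, check it on the rest.
surrogate = Lasso(alpha=0.01).fit(X[:400], y[:400])
pred = surrogate.predict(X[400:])
print("held-out RMSE:", np.sqrt(np.mean((pred - y[400:]) ** 2)))
```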
New horizons in selective laser sintering surface roughness characterization
NASA Astrophysics Data System (ADS)
Vetterli, M.; Schmid, M.; Knapp, W.; Wegener, K.
2017-12-01
Powder-based additive manufacturing of polymers and metals has evolved from a prototyping technology to an industrial process for the fabrication of small to medium series of complex-geometry parts. Unfortunately, due to the processing of powder as the basis material and the successive addition of layers to produce components, a significant surface roughness inherent to the process has been observed since the first use of such technologies. A novel characterization method based on an elastomeric pad coated with a reflective layer, the Gelsight, was found to be reliable and fast for characterizing surfaces processed by selective laser sintering (SLS) of polymers. With the help of this method, a qualitative and quantitative investigation of SLS surfaces is feasible. Repeatability and reproducibility investigations are performed for both 2D and 3D areal roughness parameters. Based on the good results, the Gelsight is used for the optimization of vertical SLS surfaces. A model built on laser scanning parameters is proposed and, after confirmation, could achieve a roughness reduction of 10% based on the Sq parameter. The Gelsight was successfully identified as a fast, reliable and versatile surface topography characterization method, as it applies to all kinds of surfaces.
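For reference, a minimal sketch of the areal roughness parameter Sq referred to above, i.e. the root-mean-square deviation of surface heights from their mean over the measured area:

```python
import numpy as np

def sq_roughness(height_map):
    """Areal RMS roughness Sq: sqrt(mean((z - mean(z))**2)) over the surface."""
    z = np.asarray(height_map, dtype=float)
    return np.sqrt(np.mean((z - z.mean()) ** 2))
```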
Use of focused ultrasonication in activity-based profiling of deubiquitinating enzymes in tissue.
Nanduri, Bindu; Shack, Leslie A; Rai, Aswathy N; Epperson, William B; Baumgartner, Wes; Schmidt, Ty B; Edelmann, Mariola J
2016-12-15
To develop a reproducible tissue lysis method that retains enzyme function for activity-based protein profiling, we compared four different methods to obtain protein extracts from bovine lung tissue: focused ultrasonication, standard sonication, mortar & pestle method, and homogenization combined with standard sonication. Focused ultrasonication and mortar & pestle methods were sufficiently effective for activity-based profiling of deubiquitinases in tissue, and focused ultrasonication also had the fastest processing time. We used focused-ultrasonicator for subsequent activity-based proteomic analysis of deubiquitinases to test the compatibility of this method in sample preparation for activity-based chemical proteomics. Copyright © 2016 Elsevier Inc. All rights reserved.
Constructing a Geology Ontology Using a Relational Database
NASA Astrophysics Data System (ADS)
Hou, W.; Yang, L.; Yin, S.; Ye, J.; Clarke, K.
2013-12-01
In the geology community, the creation of a common geology ontology has become a useful means to solve problems of data integration, knowledge transformation and the interoperation of multi-source, heterogeneous and multiple-scale geological data. Currently, human-computer interaction methods and relational database-based methods are the primary ontology construction methods. Some human-computer interaction methods, such as the Geo-rule based method, the ontology life cycle method and the module design method, have been proposed for applied geological ontologies. Essentially, the relational database-based method is a reverse engineering of abstracted semantic information from an existing database. The key is to construct rules for the transformation of database entities into the ontology. Relative to the human-computer interaction method, relational database-based methods can use existing resources and the stated semantic relationships among geological entities. However, two problems challenge their development and application. One is the transformation of multiple inheritances and nested relationships and their representation in an ontology. The other is that most of these methods do not measure the semantic retention of the transformation process. In this study, we focused on constructing a rule set to convert the semantics in a geological database into a geological ontology. According to the relational schema of a geological database, a conversion approach is presented to convert a geological spatial database to an OWL-based geological ontology, which is based on identifying semantics such as entities, relationships, inheritance relationships, nested relationships and cluster relationships. The semantic integrity of the transformation was verified using an inverse mapping process. In the geological ontology, inheritance and union operations between superclass and subclass were used to represent the nested relationships in a geochronology and the multiple inheritance relationships. Based on a Quaternary database of the downtown area of Foshan city, Guangdong Province, in Southern China, a geological ontology was constructed using the proposed method. To measure the retention of semantics in the conversion process and its results, an inverse mapping from the ontology to a relational database was tested based on a proposed conversion rule. The comparison of schemas and entities and the reduction of tables between the inverse database and the original database illustrated that the proposed method retains the semantic information well during the conversion process. An application for abstracting sandstone information showed that semantic relationships among concepts in the geological database were successfully reorganized in the constructed ontology. Key words: geological ontology; geological spatial database; multiple inheritance; OWL Acknowledgement: This research is jointly funded by the Specialized Research Fund for the Doctoral Program of Higher Education of China (RFDP) (20100171120001), NSFC (41102207) and the Fundamental Research Funds for the Central Universities (12lgpy19).
Matsumoto, Hirotaka; Kiryu, Hisanori
2016-06-08
Single-cell technologies make it possible to quantify the comprehensive states of individual cells, and have the power to shed light on cellular differentiation in particular. Although several methods have been developed to fully analyze single-cell expression data, there is still room for improvement in the analysis of differentiation. In this paper, we propose a novel method, SCOUP, to elucidate the differentiation process. Unlike previous dimension reduction-based approaches, SCOUP describes the dynamics of gene expression throughout differentiation directly, including the degree of differentiation of a cell (in pseudo-time) and cell fate. SCOUP is superior to previous methods with respect to pseudo-time estimation, especially for single-cell RNA-seq. SCOUP also estimates cell lineage more accurately than previous methods, especially for cells at an early stage of bifurcation. In addition, SCOUP can be applied to various downstream analyses. As an example, we propose a novel correlation calculation method for elucidating regulatory relationships among genes. We apply this method to single-cell RNA-seq data and detect a candidate key regulator of differentiation, as well as clusters in a correlation network, that are not detected with conventional correlation analysis. We have developed a stochastic process-based method, SCOUP, to analyze single-cell expression data throughout differentiation. SCOUP can estimate pseudo-time and cell lineage more accurately than previous methods. We also propose a novel correlation calculation method based on SCOUP. SCOUP is a promising approach for further single-cell analysis and is available at https://github.com/hmatsu1226/SCOUP.
Explosive boiling of liquid nitrogen
NASA Astrophysics Data System (ADS)
Nakoryakov, V. E.; Tsoy, A. N.; Mezentsev, I. V.; Meleshkin, A. V.
2014-12-01
The present paper deals with experimental investigation of processes that occur when injecting a cryogenic fluid into water. The optical recording of the process of injection of a jet of liquid nitrogen into water has revealed the structure and the stages of this process. The results obtained can be used when studying a new method for producing gas hydrates based on the shock-wave method.
Su-Huan, Kow; Fahmi, Muhammad Ridwan; Abidin, Che Zulzikrami Azner; Soon-An, Ong
2016-11-01
Advanced oxidation processes (AOPs) are of special interest in treating landfill leachate as they are the most promising procedures to degrade recalcitrant compounds and improve the biodegradability of wastewater. This paper aims to refresh the information base of AOPs and to discover the research gaps of AOPs in landfill leachate treatment. A brief overview of mechanisms involving in AOPs including ozone-based AOPs, hydrogen peroxide-based AOPs and persulfate-based AOPs are presented, and the parameters affecting AOPs are elaborated. Particularly, the advancement of AOPs in landfill leachate treatment is compared and discussed. Landfill leachate characterization prior to method selection and method optimization prior to treatment are necessary, as the performance and practicability of AOPs are influenced by leachate matrixes and treatment cost. More studies concerning the scavenging effects of leachate matrixes towards AOPs, as well as the persulfate-based AOPs in landfill leachate treatment, are necessary in the future.
Physics-based, Bayesian sequential detection method and system for radioactive contraband
Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E
2014-03-18
A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy) low-count, radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing based on the representation of a radionuclide as a monoenergetic decomposition of monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence interval condition-based discriminator for the energy amplitude and interarrival time and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not, and if not, then repeating the process for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
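A generic sketch of the sequential likelihood-ratio decision step described above is given below (Wald's SPRT with thresholds set from the two error rates); the per-event likelihood functions are placeholders, not the patent's physics-based monoenergetic channel models.

```python
import math

def sprt(events, lik_target, lik_background, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test over a stream of photon events.

    lik_target / lik_background map an event to its likelihood under each
    hypothesis (placeholder callables, not the patent's physics models).
    Returns "target", "background", or "undecided" if the stream runs out.
    """
    upper = math.log((1 - beta) / alpha)     # cross upward: accept "target"
    lower = math.log(beta / (1 - alpha))     # cross downward: accept "background"
    log_lr = 0.0
    for e in events:
        log_lr += math.log(lik_target(e)) - math.log(lik_background(e))
        if log_lr >= upper:
            return "target"
        if log_lr <= lower:
            return "background"
    return "undecided"
```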
The new analysis method of PWQ in the DRAM pattern
NASA Astrophysics Data System (ADS)
Han, Daehan; Chang, Jinman; Kim, Taeheon; Lee, Kyusun; Kim, Yonghyeon; Kang, Jinyoung; Hong, Aeran; Choi, Bumjin; Lee, Joosung; Kim, Hyoung Jun; Lee, Kweonjae; Hong, Hyoungsun; Jin, Gyoyoung
2016-03-01
In sub-2X nm node processes, feedback on pattern weak points is more and more significant. It is therefore very important to extract systematic defects in Double Patterning Technology (DPT); however, it is impossible to predict exact systematic defects with recent photo simulation tools.[1] Therefore, the Process Window Qualification (PWQ) method is critical and essential these days. Conventional PWQ methods rely on die-to-die image comparison using an e-beam or bright-field machine, and in some cases the results are evaluated by the person who reviews the images. However, the conventional die-to-die comparison method has a critical problem: if the reference die and the comparison die have the same problem, for example both dies have pattern problems, the issue patterns are not detected by the current defect-detection approach. Aside from the inspection accuracy, reviewing the wafer requires much effort and time to identify the genuine issue patterns. Therefore, our company adopted a die-to-data based matching PWQ method using an NGR machine. The main features of the NGR are as follows: first, die-to-data based matching; second, high speed; finally, massive data used for pattern inspection evaluation.[2] Even though our die-to-data based matching PWQ method measures massive data, our margin decision process is based on image shape, which leads to some significant problems. First, because of the long analysis time, the development period of a new device is increased. Moreover, because of resource limitations, it may not examine the full chip area; consequently, the resulting PWQ weak points cannot represent all possible defects. Finally, since the PWQ margin is not decided by a mathematical value, a solid definition of a killing defect is impossible. To overcome these problems, we introduce a statistical-value-based process window qualification method that increases the accuracy of the process margin and reduces the review time. It thereby becomes possible to see the genuine margin of critical pattern issues which cannot be seen with our conventional PWQ inspection, and hence the accuracy of the PWQ margin can be enhanced.
Usability Evaluation of a Web-Based Learning System
ERIC Educational Resources Information Center
Nguyen, Thao
2012-01-01
The paper proposes a contingent, learner-centred usability evaluation method and a prototype tool of such systems. This is a new usability evaluation method for web-based learning systems using a set of empirically-supported usability factors and can be done effectively with limited resources. During the evaluation process, the method allows for…
A new window of opportunity to reject process-based biotechnology regulation
Marchant, Gary E; Stevens, Yvonne A
2015-01-01
The question of whether biotechnology regulation should be based on the process or the product has long been debated, with different jurisdictions adopting different approaches. The European Union has adopted a process-based approach, Canada has adopted a product-based approach, and the United States has implemented a hybrid system. With the recent proliferation of new methods of genetic modification, such as gene editing, process-based regulatory systems, which are premised on a binary system of transgenic and conventional approaches, will become increasingly obsolete and unsustainable. To avoid unreasonable, unfair and arbitrary results, nations that have adopted process-based approaches will need to migrate to a product-based approach that considers the novelty and risks of the individual trait, rather than the process by which that trait was produced. This commentary suggests some approaches for the design of such a product-based approach. PMID:26930116
Method Engineering: A Service-Oriented Approach
NASA Astrophysics Data System (ADS)
Cauvet, Corine
In the past, a large variety of methods has been published, ranging from very generic frameworks to methods for specific information systems. Method Engineering has emerged as a research discipline for designing, constructing and adapting methods for Information Systems development. Several approaches have been proposed as paradigms in method engineering: the meta-modeling approach provides means for building methods by instantiation, while the component-based approach supports the development of methods through modularization constructs such as method fragments, method chunks and method components. This chapter presents an approach (SO2M) for method engineering based on the service paradigm. We consider services as autonomous computational entities that are self-describing, self-configuring and self-adapting. They can be described, published, discovered and dynamically composed to process a consumer's demand (a developer's requirement). The method service concept is proposed to capture a development process fragment for achieving a goal. Goal orientation in service specification and the principle of dynamic service composition support method construction and method adaptation to different development contexts.
Automated method for the systematic interpretation of resonance peaks in spectrum data
Damiano, B.; Wood, R.T.
1997-04-22
A method is described for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e. measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process, and, consequently, correspond to the physical condition of the process or system. 1 fig.
Automated method for the systematic interpretation of resonance peaks in spectrum data
Damiano, Brian; Wood, Richard T.
1997-01-01
A method for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e. measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process, and, consequently, correspond to the physical condition of the process or system.
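The training-set construction described in both records — run the physics model forward over sampled input parameters, then train a network to map measured spectral features back to those parameters — can be sketched as follows. The forward model, parameter ranges and network size below are hypothetical stand-ins, not the patented system.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def forward_model(params, freqs):
    """Hypothetical forward model: a spectrum with one resonance peak whose
    position and width are set by the model input parameters."""
    f0, width = params
    return 1.0 / ((freqs - f0) ** 2 + width ** 2)

freqs = np.linspace(0.0, 10.0, 128)

# Build the training set by running the model over sampled input parameters.
params = np.column_stack([rng.uniform(2, 8, 2000),      # resonance position
                          rng.uniform(0.1, 1.0, 2000)])  # resonance width
spectra = np.array([forward_model(p, freqs) for p in params])

# Train a network that maps measured spectral features back to model parameters.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(spectra, params)

# "Monitoring": infer the physical condition from a measured spectrum.
measured = forward_model([5.3, 0.4], freqs) + rng.normal(0, 1e-3, freqs.size)
print(net.predict(measured.reshape(1, -1)))   # expected ≈ [5.3, 0.4]
```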
Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J
2013-01-01
Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently, the spatial gradients caused by diffusion have become assessable in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties in the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile-likelihood-based method provides more rigorous uncertainty bounds than local approximation methods.
Stability Analysis of Radial Turning Process for Superalloys
NASA Astrophysics Data System (ADS)
Jiménez, Alberto; Boto, Fernando; Irigoien, Itziar; Sierra, Basilio; Suarez, Alfredo
2017-09-01
Stability detection in machining processes is an essential component of the design of efficient machining processes. Automatic methods are able to determine when instability is occurring and prevent possible machine failures. In this work a variety of methods are proposed for detecting stability anomalies based on the forces measured in the radial turning of superalloys. Two different methods are proposed to determine instabilities. Each one is tested on real data obtained in the machining of Waspaloy, Haynes 282 and Inconel 718. Experimental data, in both conventional and High Pressure Coolant (HPC) environments, are grouped into four different states depending on material grain size and hardness (LGA, LGS, SGA and SGS). Results reveal that the PCA method is useful for visualization of the process and detection of anomalies in online processes.
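A PCA-based anomaly check of the kind mentioned can be sketched as follows; the force-signal features, the synthetic data and the 3-sigma threshold are illustrative assumptions rather than the authors' exact pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

# Hypothetical feature matrix: one row per machining pass, columns are
# statistics of the measured cutting forces (mean, std, peak).
stable = rng.normal([200, 15, 260], [5, 2, 8], size=(100, 3))
test = np.vstack([stable[:5], [320, 60, 500]])          # last row: an unstable cut

pca = PCA(n_components=2).fit(stable)                    # model of "normal" cuts

def reconstruction_error(X):
    """Distance between a sample and its projection onto the stable subspace."""
    X_hat = pca.inverse_transform(pca.transform(X))
    return np.linalg.norm(X - X_hat, axis=1)

err = reconstruction_error(test)
threshold = reconstruction_error(stable).mean() + 3 * reconstruction_error(stable).std()
print(err > threshold)     # flags the anomalous pass
```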
Community Currency Trading Method through Partial Transaction Intermediary Process
NASA Astrophysics Data System (ADS)
Kido, Kunihiko; Hasegawa, Seiichi; Komoda, Norihisa
A community currency is local money issued by local governments or Non-Profit Organizations (NPOs) to support social services. The purpose of introducing community currencies is to regenerate communities by fostering mutual aid among community members. In this paper, we propose a community currency trading method with a partial intermediary process, for operational environments in which coordinators are not available all the time. In this method, coordinators mediate between service users and service providers during the first several months of transactions. After this coordination period, participants spontaneously make transactions based on their trust area and a trust evaluation method that uses the number of provided services and complaint information. This method is especially effective for communities with close social networks and low trustworthiness. The proposed method is evaluated through multi-agent simulation.
Digital Signal Processing Based on a Clustering Algorithm for Ir/Au TES Microcalorimeter
NASA Astrophysics Data System (ADS)
Zen, N.; Kunieda, Y.; Takahashi, H.; Hiramoto, K.; Nakazawa, M.; Fukuda, D.; Ukibe, M.; Ohkubo, M.
2006-02-01
In recent years, cryogenic microcalorimeters exploiting the superconducting transition edge have been under development for possible application to astronomical X-ray observations. To improve the energy resolution of superconducting transition edge sensors (TES), several correction methods have been developed. Among them, a clustering method based on digital signal processing has recently been proposed. In this paper, we applied the clustering method to an Ir/Au bilayer TES. The method resulted in almost a 10% improvement in energy resolution. In addition, from the point of view of imaging X-ray spectroscopy, we applied the clustering method to pixellated Ir/Au-TES devices. We thus show how a clustering method that sorts signals by their shapes is also useful for position identification.
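Sorting detector pulses by shape can be illustrated with an off-the-shelf clustering algorithm; the synthetic pulse model and the choice of k-means below are assumptions for illustration, not the specific clustering method used for the Ir/Au TES.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 200)

def pulse(rise, decay):
    p = np.exp(-t / decay) - np.exp(-t / rise)
    return p / p.max()

# Synthetic pulses from two "pixels" with slightly different shapes, plus noise.
pulses = np.array([pulse(0.01, 0.20) + rng.normal(0, 0.02, t.size) for _ in range(150)] +
                  [pulse(0.02, 0.35) + rng.normal(0, 0.02, t.size) for _ in range(150)])

# Normalize each pulse to unit amplitude so clustering keys on shape, not energy.
pulses /= pulses.max(axis=1, keepdims=True)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pulses)
print(np.bincount(labels))   # pulses sorted into two shape classes (position identification)
```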
Single Wall Carbon Nanotube Alignment Mechanisms for Non-Destructive Evaluation
NASA Technical Reports Server (NTRS)
Hong, Seunghun
2002-01-01
As proposed in our original proposal, we developed a new, innovative method to assemble millions of single-wall carbon nanotube (SWCNT)-based circuit components as fast as conventional microfabrication processes. This method is based on a surface-template assembly strategy. The new method solves one of the major bottlenecks in carbon nanotube-based electrical applications and, potentially, may allow us to mass-produce a large number of SWCNT-based integrated devices of critical interest to NASA.
Chauvenet, B; Bobin, C; Bouchard, J
2017-12-01
Dead-time correction formulae are established in the general case of superimposed non-homogeneous Poisson processes. Based on the same principles as conventional live-timed counting, this method exploits the additional information made available by digital signal processing systems, and especially the possibility to store the time stamps of live-time intervals. No approximation needs to be made to obtain those formulae. Estimates of the variances of corrected rates are also presented. This method is applied to the activity measurement of short-lived radionuclides.
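The bookkeeping behind live-time-based correction from stored time stamps can be sketched as follows, assuming the digitizer records the start and end stamps of each live-time interval; this only illustrates the principle, not the exact formulae or variance estimators of the paper.

```python
import numpy as np

def live_time_corrected_rate(event_times, live_starts, live_stops):
    """Estimate the true count rate from counts registered during live time only.
    event_times, live_starts, live_stops: arrays of time stamps in seconds."""
    live_time = np.sum(live_stops - live_starts)               # total live time
    counted = sum(np.any((t >= live_starts) & (t < live_stops)) for t in event_times)
    rate = counted / live_time                                 # corrected rate (s^-1)
    rate_var = counted / live_time ** 2                        # simple Poisson variance estimate
    return rate, np.sqrt(rate_var)

# Illustrative numbers: 10 s of real time, 8 s of it live, events at 100 s^-1 true rate.
live_starts = np.arange(0.0, 10.0, 1.0)
live_stops = live_starts + 0.8
events = np.sort(np.random.default_rng(4).uniform(0, 10, 1000))
print(live_time_corrected_rate(events, live_starts, live_stops))   # expected ≈ 100 s^-1
```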
Hively, Lee M [Philadelphia, TN
2011-07-12
The invention relates to a method and apparatus for simultaneously processing different sources of test data into informational data and then processing different categories of informational data into knowledge-based data. The knowledge-based data can then be communicated between nodes in a system of multiple computers according to rules for a type of complex, hierarchical computer system modeled on a human brain.
NASA Astrophysics Data System (ADS)
Bezmaternykh, P. V.; Nikolaev, D. P.; Arlazarov, V. L.
2018-04-01
Rectification of textual blocks, or slant correction, is an important stage of document image processing in OCR systems. This paper reviews existing methods and introduces an approach to constructing such algorithms based on Fast Hough Transform analysis. A quality measurement technique is proposed, and results are reported for the processing of both printed and handwritten textual blocks as part of an industrial system for identity document recognition on mobile devices.
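For intuition, slant estimation can be illustrated by brute-force scoring of candidate angles with a projection profile; this is a conceptual stand-in, not the Fast Hough Transform analysis used in the paper.

```python
import numpy as np
from scipy.ndimage import rotate

def estimate_skew(binary_block, angles=np.arange(-10, 10.25, 0.25)):
    """Score candidate skew angles of a binarized text block (text = 1) by the
    variance of the horizontal projection profile; the sharpest profile wins."""
    scores = []
    for a in angles:
        rotated = rotate(binary_block, a, reshape=False, order=0)
        profile = rotated.sum(axis=1)          # ink per text row
        scores.append(profile.var())
    return angles[int(np.argmax(scores))]

# Synthetic block: horizontal "text lines" tilted by 3 degrees.
img = np.zeros((200, 400))
img[40:45], img[90:95], img[140:145] = 1, 1, 1
tilted = rotate(img, 3.0, reshape=False, order=0)
print(estimate_skew(tilted))    # ≈ -3.0, the angle that deskews the block
```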
A source number estimation method for single optical fiber sensor
NASA Astrophysics Data System (ADS)
Hu, Junpeng; Huang, Zhiping; Su, Shaojing; Zhang, Yimeng; Liu, Chunwu
2015-10-01
The single-channel blind source separation (SCBSS) technique is of great significance in many fields, such as optical fiber communication, sensor detection, image processing and so on. Realizing blind source separation (BSS) from the data received by a single optical fiber sensor has a wide range of applications. The performance of many BSS algorithms and signal processing methods deteriorates when the source number is estimated inaccurately. Many excellent algorithms have been proposed to deal with source number estimation in array signal processing, which relies on multiple sensors, but they cannot be applied directly to the single-sensor condition. This paper presents a source number estimation method for data received by a single optical fiber sensor. By a delay process, the single-sensor data are converted into a multi-dimensional form and the data covariance matrix is constructed; the estimation algorithms used in array signal processing can then be utilized. The information theoretic criteria (ITC) based methods, represented by AIC and MDL, and Gerschgorin's disk estimation (GDE) are introduced to estimate the source number of the signal received by the single optical fiber sensor. To improve the performance of these estimation methods at low signal-to-noise ratio (SNR), a smoothing process is applied to the data covariance matrix, which reduces the fluctuation and uncertainty of its eigenvalues. Simulation results show that the ITC-based methods cannot estimate the source number effectively under colored noise. The GDE method, although its performance is poor at low SNR, is able to accurately estimate the number of sources under colored noise. The experiments also show that the proposed method can be applied to estimate the source number of single-sensor data.
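The delay-embedding plus eigenvalue-criterion idea can be sketched as follows; the embedding dimension, test signal and the plain (unsmoothed) MDL criterion are illustrative choices, and the smoothing step and GDE variant described in the abstract are omitted.

```python
import numpy as np

def mdl_source_number(x, m=10):
    """Estimate the source number from single-channel data via delay embedding + MDL."""
    N = len(x) - m + 1
    X = np.array([x[i:i + m] for i in range(N)]).T        # m x N delay-embedded data matrix
    R = X @ X.T / N                                        # sample covariance matrix
    lam = np.sort(np.linalg.eigvalsh(R))[::-1]             # eigenvalues, descending
    mdl = []
    for k in range(m):
        tail = lam[k:]
        geo = np.exp(np.mean(np.log(tail)))                # geometric mean of noise eigenvalues
        arith = np.mean(tail)                              # arithmetic mean
        mdl.append(-N * (m - k) * np.log(geo / arith) + 0.5 * k * (2 * m - k) * np.log(N))
    return int(np.argmin(mdl))

# Two sinusoidal sources in white noise, observed by a single sensor.
rng = np.random.default_rng(5)
t = np.arange(2000)
x = np.sin(0.2 * t) + 0.7 * np.sin(0.45 * t + 1.0) + 0.1 * rng.standard_normal(t.size)
print(mdl_source_number(x))   # expected ≈ 4: each real sinusoid spans a rank-2 signal subspace
```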
K-space data processing for magnetic resonance elastography (MRE).
Corbin, Nadège; Breton, Elodie; de Mathelin, Michel; Vappou, Jonathan
2017-04-01
Magnetic resonance elastography (MRE) requires substantial data processing based on phase image reconstruction, wave enhancement, and inverse problem solving. The objective of this study is to propose a new, fast MRE method based on MR raw data processing, particularly adapted to applications requiring fast MRE measurement or high elastogram update rate. The proposed method allows measuring tissue elasticity directly from raw data without prior phase image reconstruction and without phase unwrapping. Experimental feasibility is assessed both in a gelatin phantom and in the liver of a porcine model in vivo. Elastograms are reconstructed with the raw MRE method and compared to those obtained using conventional MRE. In a third experiment, changes in elasticity are monitored in real-time in a gelatin phantom during its solidification by using both conventional MRE and raw MRE. The raw MRE method shows promising results by providing similar elasticity values to the ones obtained with conventional MRE methods while decreasing the number of processing steps and circumventing the delicate step of phase unwrapping. Limitations of the proposed method are the influence of the magnitude on the elastogram and the requirement for a minimum number of phase offsets. This study demonstrates the feasibility of directly reconstructing elastograms from raw data.
Selection of Construction Methods: A Knowledge-Based Approach
Skibniewski, Miroslaw
2013-01-01
The appropriate selection of the construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and systematic approach it deserves, bringing negative consequences. This paper proposes a knowledge management approach that enables the intelligent use of corporate experience and information and helps to improve the selection of construction methods for a project. A knowledge-based system to support this decision-making process is then proposed and described. To define and design the system, semi-structured interviews were conducted within three construction companies with the purpose of studying the way the construction method selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. As a conclusion, the CMKS was perceived as a valuable tool for construction method selection, helping companies to generate a corporate memory on this issue and reducing both the reliance on individual knowledge and the subjectivity of the decision-making process. The benefits provided by the system favor better performance of construction projects. PMID:24453925
Collaborative simulation method with spatiotemporal synchronization process control
NASA Astrophysics Data System (ADS)
Zou, Yisheng; Ding, Guofu; Zhang, Weihua; Zhang, Jian; Qin, Shengfeng; Tan, John Kian
2016-10-01
When designing a complex mechatronic system, such as a high-speed train, it is relatively difficult to simulate the entire system's dynamic behavior effectively because it involves multi-disciplinary subsystems. Currently, the most practical approach to multi-disciplinary simulation is the interface-based coupling simulation method, but it faces a twofold challenge: spatial and temporal desynchronization among the multi-directional coupling simulations of the subsystems. A new collaborative simulation method with spatiotemporal synchronization process control is proposed for coupling simulation of a given complex mechatronic system across multiple subsystems on different platforms. The method consists of 1) a coupler-based coupling mechanism to define the interfacing and interaction mechanisms among subsystems, and 2) a simulation process control algorithm to realize the coupling simulation in a spatiotemporally synchronized manner. The test results from a case study show that the proposed method 1) can be used to simulate the subsystems' interactions under different simulation conditions in an engineering system, and 2) effectively supports multi-directional coupling simulation among multi-disciplinary subsystems. This method has been successfully applied in the design and development of China's high-speed trains, demonstrating that it can be applied to a wide range of engineering systems design and simulation with improved efficiency and effectiveness.
Methodology for processing pressure traces used as inputs for combustion analyses in diesel engines
NASA Astrophysics Data System (ADS)
Rašić, Davor; Vihar, Rok; Žvar Baškovič, Urban; Katrašnik, Tomaž
2017-05-01
This study proposes a novel methodology for designing an optimum equiripple finite impulse response (FIR) filter for processing the in-cylinder pressure traces of a diesel internal combustion engine, which serve as inputs for high-precision combustion analyses. The proposed automated workflow is based on an innovative approach to determining the transition-band frequencies and the optimum filter order. The methodology is based on discrete Fourier transform analysis, which is the first step in estimating the location of the pass-band and stop-band frequencies. The second step uses short-time Fourier transform analysis to refine these estimated frequencies. The pass-band and stop-band frequencies are then used to determine the most appropriate FIR filter order. The most widely used existing methods for estimating the FIR filter order are not effective in suppressing the oscillations in the rate-of-heat-release (ROHR) trace, thus hindering the accuracy of combustion analyses. To address this problem, an innovative method for determining the order of an FIR filter is proposed in this study, based on the minimization of the integral of normalized signal-to-noise differences between the stop-band frequency and the Nyquist frequency. The developed filters were validated using spectral analysis and calculation of the ROHR. The validation results showed that the filters designed using the proposed method were superior to those designed using existing methods for all analyzed cases. Highlights: • Pressure traces of a diesel engine were processed by finite impulse response (FIR) filters of different orders • Transition-band frequencies were determined with an innovative method based on the discrete Fourier transform and short-time Fourier transform • Spectral analyses showed the deficiencies of existing methods in determining the FIR filter order • A new method of determining the FIR filter order for processing pressure traces was proposed • The efficiency of the new method was demonstrated by spectral analyses and calculations of rate-of-heat-release traces
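The design step itself can be sketched with SciPy's equiripple (Parks-McClellan) routine; the sampling rate, band edges and filter order below are placeholders picked by hand, standing in for the paper's automated DFT/STFT analysis and order-minimization criterion.

```python
import numpy as np
from scipy.signal import remez, filtfilt

fs = 90000.0                          # sampling rate of the pressure trace, Hz (assumed)
t = np.arange(0, 0.05, 1 / fs)
pressure = np.sin(2 * np.pi * 300 * t) + 0.05 * np.sin(2 * np.pi * 12000 * t)  # signal + ringing

# Step 1 (stand-in for the DFT/STFT analysis): inspect the spectrum to pick the bands.
spectrum = np.abs(np.fft.rfft(pressure))
freqs = np.fft.rfftfreq(pressure.size, 1 / fs)
f_pass, f_stop = 2000.0, 4000.0       # pass-band and stop-band edges chosen from the spectrum

# Step 2: equiripple low-pass FIR design; the order (numtaps) would be chosen by the
# paper's minimization criterion, here it is simply fixed.
numtaps = 101
taps = remez(numtaps, [0, f_pass, f_stop, fs / 2], [1, 0], fs=fs)

# Step 3: zero-phase filtering so the heat-release timing is not shifted.
filtered = filtfilt(taps, [1.0], pressure)
print(filtered.shape)
```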
Keever-Taylor, Carolyn A; Slaper-Cortenbach, Ineke; Celluzzi, Christina; Loper, Kathy; Aljurf, Mahmoud; Schwartz, Joseph; Mcgrath, Eoin; Eldridge, Paul
2015-12-01
Methods for processing products used for hematopoietic progenitor cell (HPC) transplantation must ensure their safety and efficacy. Personnel training and ongoing competency assessment are critical to this goal. Here we present results from a global survey of the methods used by a diverse array of cell processing facilities for the initial training and ongoing competency assessment of key personnel. The Alliance for Harmonisation of Cellular Therapy Accreditation (AHCTA) created a survey to identify facility type, location, activity, personnel, and the methods used for training and competency. A survey link was disseminated through organizations represented in AHCTA to processing facilities worldwide. Responses were tabulated and analyzed as a percentage of total responses and as a percentage of responses by region group. Most facilities were based at academic medical centers or hospitals. Facilities with a broad range of activities, product sources and processing procedures were represented. Facilities reported using a combination of training and competency methods, although some methods predominated. Cellular sources used for training differed between training and competency assessment and also differed based on the frequency of procedures performed. Most facilities had responsibilities for procedures in addition to processing, for which training and competency methods differed. Although regional variation was observed, training and competency requirements were generally consistent. The survey data showed the use of a variety of training and competency methods, but some methods predominated, suggesting their utility. These results could help new and established facilities in making decisions for their own training and competency programs.
Fourier analysis and signal processing by use of the Moebius inversion formula
NASA Technical Reports Server (NTRS)
Reed, Irving S.; Yu, Xiaoli; Shih, Ming-Tang; Tufts, Donald W.; Truong, T. K.
1990-01-01
A novel Fourier technique for digital signal processing is developed. This approach to Fourier analysis is based on the number-theoretic method of the Moebius inversion of series. The Fourier transform method developed is shown also to yield the convolution of two signals. A computer simulation shows that this method for finding Fourier coefficients is quite suitable for digital signal processing. It competes with the classical FFT (fast Fourier transform) approach in terms of accuracy, complexity, and speed.
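For reference, the classical Moebius inversion identity the technique rests on — if g(n) = Σ_{d|n} f(d), then f(n) = Σ_{d|n} μ(n/d) g(d) — can be checked numerically with a short script; this demonstrates only the number-theoretic tool, not the paper's Fourier-coefficient algorithm.

```python
import numpy as np

def mobius_sieve(N):
    """Moebius function mu(1..N) via a simple sieve."""
    mu = np.ones(N + 1, dtype=int)
    is_prime = np.ones(N + 1, dtype=bool)
    for p in range(2, N + 1):
        if is_prime[p]:
            is_prime[2 * p::p] = False
            mu[p::p] *= -1                 # one distinct prime factor flips the sign
            mu[p * p::p * p] = 0           # a squared factor kills the term
    return mu

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

# Moebius inversion: if g(n) = sum_{d|n} f(d), then f(n) = sum_{d|n} mu(n/d) g(d).
N = 30
f = np.arange(N + 1) ** 2                                   # some arithmetic function
g = np.array([0] + [sum(f[d] for d in divisors(n)) for n in range(1, N + 1)])
mu = mobius_sieve(N)
f_rec = np.array([0] + [sum(mu[n // d] * g[d] for d in divisors(n)) for n in range(1, N + 1)])
print(np.array_equal(f[1:], f_rec[1:]))                     # True
```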
NASA Astrophysics Data System (ADS)
Müller, M. F.; Thompson, S. E.
2016-02-01
The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75 % of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by frequent wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are favored over statistical models.
Research on detection method of UAV obstruction based on binocular vision
NASA Astrophysics Data System (ADS)
Zhu, Xiongwei; Lei, Xusheng; Sui, Zhehao
2018-04-01
For autonomous obstacle positioning and ranging during UAV (unmanned aerial vehicle) flight, a system based on binocular vision is constructed. A three-stage image preprocessing method is proposed to solve the problems of noise and brightness differences in the actual captured images. The distance of the nearest obstacle is calculated using the disparity map generated by binocular vision. The contour of the obstacle is then extracted by post-processing of the disparity map, and a color-based adaptive parameter adjustment algorithm is designed to extract obstacle contours automatically. Finally, safety-distance measurement and obstacle positioning during UAV flight are achieved. Based on a series of tests, the error of distance measurement remains within 2.24% over the measuring range from 5 m to 20 m.
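Converting a disparity value into a metric obstacle distance uses the pinhole stereo relation Z = f·B/d; the camera parameters below are hypothetical and the sketch omits the preprocessing, matching and contour-extraction stages of the paper.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Pinhole stereo relation: Z = f * B / d. Disparity in pixels, depth in metres."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        return np.where(disparity_px > 0, focal_px * baseline_m / disparity_px, np.inf)

# Hypothetical rig: 800 px focal length, 12 cm baseline.
disparity_map = np.array([[48.0, 32.0], [9.6, 0.0]])
depth = disparity_to_depth(disparity_map, focal_px=800.0, baseline_m=0.12)
print(depth)                              # [[2.0 3.0] [10.0 inf]]
print(depth[np.isfinite(depth)].min())    # distance to the nearest obstacle: 2.0 m
```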
von Bargen, Christoph; Brockmeyer, Jens; Humpf, Hans-Ulrich
2014-10-01
Fraudulent blending of food products with meat from undeclared species is a problem on a global scale, as exemplified by the European horse meat scandal in 2013. Routinely used methods such as ELISA and PCR can suffer from limited sensitivity or specificity when processed food samples are analyzed. In this study, we have developed an optimized method for the detection of horse and pork in different processed food matrices using MRM and MRM(3) detection of species-specific tryptic marker peptides. Identified marker peptides were sufficiently stable to resist thermal processing of different meat products and thus allow the sensitive and specific detection of pork or horse in processed food down to 0.24% in a beef matrix system. In addition, we were able to establish a rapid 2-min extraction protocol for the efficient protein extraction from processed food using high molar urea and thiourea buffers. Together, we present here the specific and sensitive detection of horse and pork meat in different processed food matrices using MRM-based detection of marker peptides. Notably, prefractionation of proteins using 2D-PAGE or off-gel fractionation is not necessary. The presented method is therefore easily applicable in analytical routine laboratories without dedicated proteomics background.
Developing Emotion-Based Case Formulations: A Research-Informed Method.
Pascual-Leone, Antonio; Kramer, Ueli
2017-01-01
New research-informed methods for case conceptualization that cut across traditional therapy approaches are increasingly popular. This paper presents a trans-theoretical approach to case formulation based on research observations of emotion. The sequential model of emotional processing (Pascual-Leone & Greenberg, 2007) is a process research model that provides concrete markers for therapists to observe the emerging emotional development of their clients. We illustrate how this model can be used by clinicians to track change; it provides a 'clinical map' by which therapists may orient themselves in-session and plan treatment interventions. Emotional processing offers a trans-theoretical framework for therapists who wish to conduct emotion-based case formulations. First, we present criteria for why this research model translates well into practice. Second, two contrasting case studies are presented to demonstrate the method. The model bridges research with practice by using client emotion as an axis of integration. Key Practitioner Message: Process research on emotion can offer a template for therapists to make case formulations while using a range of treatment approaches. The sequential model of emotional processing provides a 'process map' of concrete markers for therapists to (1) observe the emerging emotional development of their clients, and (2) develop a treatment plan.
Optical enhancing durable anti-reflective coating
Maghsoodi, Sina; Varadarajan, Aravamuthan; Movassat, Meisam
2016-07-05
Disclosed herein are polysilsesquioxane-based anti-reflective coating (ARC) compositions, methods of preparation, and methods of deposition on a substrate. In embodiments, the polysilsesquioxane of this disclosure is prepared in a two-step process of acid catalyzed hydrolysis of organoalkoxysilane followed by addition of tetralkoxysilane that generates silicone polymers with >40 mol % silanol based on Si-NMR. These high silanol siloxane polymers are stable and have a long shelf-life in the polar organic solvents at room temperature. Also disclosed are low refractive index ARC made from these compositions with and without additives such as porogens, templates, Si-OH condensation catalyst and/or nanofillers. Also disclosed are methods and apparatus for applying coatings to flat substrates including substrate pre-treatment processes, coating processes including flow coating and roll coating, and coating curing processes including skin-curing using hot-air knives. Also disclosed are coating compositions and formulations for highly tunable, durable, highly abrasion-resistant functionalized anti-reflective coatings.
High gain durable anti-reflective coating
Maghsoodi, Sina; Brophy, Brenor L.; Colson, Thomas E.; Gonsalves, Peter R.; Abrams, Ze'ev R.
2016-07-26
Disclosed herein are polysilsesquioxane-based anti-reflective coating (ARC) compositions, methods of preparation, and methods of deposition on a substrate. In one embodiment, the polysilsesquioxane of this disclosure is prepared in a two-step process of acid catalyzed hydrolysis of organoalkoxysilane followed by addition of tetralkoxysilane that generates silicone polymers with >40 mol % silanol based on Si-NMR. These high silanol siloxane polymers are stable and have a long shelf-life in polar organic solvents at room temperature. Also disclosed are low refractive index ARC made from these compositions with and without additives such as porogens, templates, thermal radical initiator, photo radical initiators, crosslinkers, Si-OH condensation catalyst and nano-fillers. Also disclosed are methods and apparatus for applying coatings to flat substrates including substrate pre-treatment processes, coating processes and coating curing processes including skin-curing using hot-air knives. Also disclosed are coating compositions and formulations for highly tunable, durable, highly abrasion-resistant functionalized anti-reflective coatings.
Cherepy, Nerine Jane; Payne, Stephen Anthony; Drury, Owen B; Sturm, Benjamin W
2014-11-11
A scintillator radiation detector system according to one embodiment includes a scintillator; and a processing device for processing pulse traces corresponding to light pulses from the scintillator, wherein pulse digitization is used to improve energy resolution of the system. A scintillator radiation detector system according to another embodiment includes a processing device for fitting digitized scintillation waveforms to an algorithm based on identifying rise and decay times and performing a direct integration of fit parameters. A method according to yet another embodiment includes processing pulse traces corresponding to light pulses from a scintillator, wherein pulse digitization is used to improve energy resolution of the system. A method in a further embodiment includes fitting digitized scintillation waveforms to an algorithm based on identifying rise and decay times; and performing a direct integration of fit parameters. Additional systems and methods are also presented.
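The "fit rise and decay times and directly integrate the fit parameters" idea can be sketched as follows; the double-exponential pulse model and the parameter values are illustrative assumptions, not the patented algorithm.

```python
import numpy as np
from scipy.optimize import curve_fit

def pulse(t, amp, t0, tau_rise, tau_decay):
    """Simple scintillation pulse model with exponential rise and decay."""
    dt = np.clip(t - t0, 0.0, None)      # zero before the pulse start
    return amp * (np.exp(-dt / tau_decay) - np.exp(-dt / tau_rise))

# Synthetic digitized waveform (arbitrary units, 1 ns samples) with noise.
rng = np.random.default_rng(6)
t = np.arange(0, 2000.0, 1.0)
true = dict(amp=120.0, t0=200.0, tau_rise=15.0, tau_decay=300.0)
wave = pulse(t, **true) + rng.normal(0, 1.0, t.size)

popt, _ = curve_fit(pulse, t, wave, p0=[100.0, 180.0, 10.0, 250.0])
amp, t0, tau_rise, tau_decay = popt

# "Direct integration of fit parameters": the integral of the model is analytic,
# so the energy estimate needs no numerical summation of the noisy trace.
energy = amp * (tau_decay - tau_rise)
print(energy)     # expected ≈ 120 * (300 - 15) = 34200
```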
Real-time traffic sign recognition based on a general purpose GPU and deep-learning.
Lim, Kwangyong; Hong, Yongwon; Choi, Yeongwoo; Byun, Hyeran
2017-01-01
We present a General Purpose Graphics Processing Unit (GPGPU) based real-time traffic sign detection and recognition method that is robust against illumination changes. There have been many approaches to traffic sign recognition in various research fields; however, previous approaches faced several limitations under low illumination or wide variance of lighting conditions. To overcome these drawbacks and improve processing speed, we propose a method that 1) is robust against illumination changes, 2) uses GPGPU-based real-time traffic sign detection, and 3) performs region detection and recognition using a hierarchical model. This method produces stable results in low-illumination environments. Both detection and hierarchical recognition are performed in real time, and the proposed method achieves a 0.97 F1-score on our collected dataset, which uses the Vienna Convention traffic rules (Germany and South Korea).
A parallel implementation of a multisensor feature-based range-estimation method
NASA Technical Reports Server (NTRS)
Suorsa, Raymond E.; Sridhar, Banavar
1993-01-01
There are many proposed vision based methods to perform obstacle detection and avoidance for autonomous or semi-autonomous vehicles. All methods, however, will require very high processing rates to achieve real time performance. A system capable of supporting autonomous helicopter navigation will need to extract obstacle information from imagery at rates varying from ten frames per second to thirty or more frames per second depending on the vehicle speed. Such a system will need to sustain billions of operations per second. To reach such high processing rates using current technology, a parallel implementation of the obstacle detection/ranging method is required. This paper describes an efficient and flexible parallel implementation of a multisensor feature-based range-estimation algorithm, targeted for helicopter flight, realized on both a distributed-memory and shared-memory parallel computer.
Towards Plasma-Based Water Purification: Challenges and Prospects for the Future
NASA Astrophysics Data System (ADS)
Foster, John
2016-10-01
Freshwater scarcity driven by climate change, pollution, and over-development has led to serious consideration of water reuse. Advanced water treatment technologies will be required to process wastewater slated for reuse. One new and emerging technology that could potentially address the removal of micropollutants from both drinking water and wastewater slated for reuse is plasma-based water purification. Plasma in contact with liquid water generates reactive species that attack and ultimately mineralize organic contaminants in solution. This interaction takes place in a boundary layer centered at the plasma-liquid interface. The physical processes taking place at this interface are poorly understood, yet understanding them is key to the optimization of plasma water purifiers. High electric field conditions, large density gradients, plasma-driven chemistries, and fluid dynamic effects prevail in this multiphase region. The region is also the source function for the longer-lived reactive species that ultimately treat the water. Here, we review the need for advanced water treatment methods and, in the process, make the case for plasma-based methods. Additionally, we survey the basic methods of interacting plasma with liquid water (including a discussion of breakdown processes in water), the current state of understanding of the physical processes taking place at the plasma-liquid interface, and the role that these processes play in water purification. The development of diagnostics usable in this multiphase environment, along with modeling efforts aimed at elucidating the physical processes taking place at the interface, is also detailed. Key experiments that demonstrate the capability of plasma-based water treatment are also reviewed. The technical challenges to the implementation of plasma-based water reactors are discussed as well. NSF CBET 1336375 and DOE DE-SC0001939.
Nonparametric Inference of Doubly Stochastic Poisson Process Data via the Kernel Method
Zhang, Tingting; Kou, S. C.
2010-01-01
Doubly stochastic Poisson processes, also known as the Cox processes, frequently occur in various scientific fields. In this article, motivated primarily by analyzing Cox process data in biophysics, we propose a nonparametric kernel-based inference method. We conduct a detailed study, including an asymptotic analysis, of the proposed method, and provide guidelines for its practical use, introducing a fast and stable regression method for bandwidth selection. We apply our method to real photon arrival data from recent single-molecule biophysical experiments, investigating proteins' conformational dynamics. Our result shows that conformational fluctuation is widely present in protein systems, and that the fluctuation covers a broad range of time scales, highlighting the dynamic and complex nature of proteins' structure. PMID:21258615
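The kernel idea at the core of the method can be sketched as follows; the Gaussian kernel and the hand-picked bandwidth stand in for the paper's regression-based bandwidth selection, and the thinning-based simulator is only a toy Cox process.

```python
import numpy as np

def kernel_intensity(event_times, grid, bandwidth):
    """Kernel estimate of the intensity lambda(t) of a point process:
    lambda_hat(t) = sum_i K_h(t - t_i) with a Gaussian kernel."""
    diffs = (grid[:, None] - event_times[None, :]) / bandwidth
    kernel = np.exp(-0.5 * diffs ** 2) / (np.sqrt(2 * np.pi) * bandwidth)
    return kernel.sum(axis=1)

# Simulate a doubly stochastic (Cox-like) process by thinning a homogeneous Poisson process.
rng = np.random.default_rng(7)
T, lam_max = 100.0, 12.0
candidates = np.sort(rng.uniform(0, T, rng.poisson(lam_max * T)))
lam = lambda t: 6.0 + 5.0 * np.sin(2 * np.pi * t / 25.0)
events = candidates[rng.uniform(0, lam_max, candidates.size) < lam(candidates)]

grid = np.linspace(0, T, 500)
estimate = kernel_intensity(events, grid, bandwidth=2.0)
print(estimate.mean())        # should be close to the mean intensity, ~6 events per unit time
```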
Jong, Stephanie T; Brown, Helen Elizabeth; Croxson, Caroline H D; Wilkinson, Paul; Corder, Kirsten L; van Sluijs, Esther M F
2018-05-21
Process evaluations are critical for interpreting and understanding outcome trial results. By understanding how interventions function across different settings, process evaluations have the capacity to inform future dissemination of interventions. The complexity of Get others Active (GoActive), a 12-week, school-based physical activity intervention implemented in eight schools, highlights the need to investigate how implementation is achieved across a variety of school settings. This paper describes the mixed methods GoActive process evaluation protocol that is embedded within the outcome evaluation. In this detailed process evaluation protocol, we describe the flexible and pragmatic methods that will be used for capturing the process evaluation data. A mixed methods design will be used for the process evaluation, including quantitative data collected in both the control and intervention arms of the GoActive trial, and qualitative data collected in the intervention arm. Data collection methods will include purposively sampled, semi-structured interviews and focus group interviews, direct observation, and participant questionnaires (completed by students, teachers, older adolescent mentors, and local authority-funded facilitators). Data will be analysed thematically within and across datasets. Overall synthesis of findings will address the process of GoActive implementation, and through which this process affects outcomes, with careful attention to the context of the school environment. This process evaluation will explore the experience of participating in GoActive from the perspectives of key groups, providing a greater understanding of the acceptability and process of implementation of the intervention across the eight intervention schools. This will allow for appraisal of the intervention's conceptual base, inform potential dissemination, and help optimise post-trial sustainability. The process evaluation will also assist in contextualising the trial effectiveness results with respect to how the intervention may or may not have worked and, if it was found to be effective, what might be required for it to be sustained in the 'real world'. Furthermore, it will offer suggestions for the development and implementation of future initiatives to promote physical activity within schools. ISRCTN, ISRCTN31583496 . Registered on 18 February 2014.
Hybrid Weighted Minimum Norm Method: a new method based on LORETA to solve the EEG inverse problem.
Song, C; Zhuang, T; Wu, Q
2005-01-01
This paper brings forward a new method to solve the EEG inverse problem. It is based on the following physiological characteristics of neural electrical activity sources: first, neighboring neurons are prone to activate synchronously; second, the distribution of the source space is sparse; third, the activity of the sources is highly concentrated. We take this prior knowledge as the only prerequisite for developing the EEG inverse solution, without assuming other characteristics of the solution, in order to realize the most general 3D EEG reconstruction map. The proposed algorithm takes advantage of LORETA, a low-resolution method that emphasizes 'localization', and of FOCUSS, a high-resolution method that emphasizes 'separability'. The method remains within the framework of the weighted minimum norm method. The keystone is to construct a weighting matrix that draws on the existing smoothness operator, a competition mechanism and a learning algorithm. The basic procedure is to obtain an initial estimate of the solution, then construct a new estimate using the information in the initial solution, and repeat this process until the solutions of the last two estimation steps remain unchanged.
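A minimal sketch of the weighted minimum norm estimate with a FOCUSS-like reweighting loop is given below; the random lead field, the sqrt(|x|) weighting and the regularization are toy assumptions and do not reproduce the authors' weight construction.

```python
import numpy as np

def weighted_minimum_norm(L, y, W, lam=1e-3):
    """Weighted minimum norm estimate: x = W W^T L^T (L W W^T L^T + lam I)^{-1} y."""
    G = W @ W.T
    return G @ L.T @ np.linalg.solve(L @ G @ L.T + lam * np.eye(L.shape[0]), y)

rng = np.random.default_rng(8)
n_sensors, n_sources = 32, 200
L = rng.standard_normal((n_sensors, n_sources))           # toy lead-field matrix

x_true = np.zeros(n_sources)
x_true[[40, 41, 120]] = [1.0, 0.8, -1.2]                   # sparse, locally clustered sources
y = L @ x_true + 0.01 * rng.standard_normal(n_sensors)

# Start from a minimum-norm (identity-weighted) solution, then re-weight iteratively
# so that strong sources are reinforced and weak ones shrink (FOCUSS-like step).
W = np.eye(n_sources)
x = weighted_minimum_norm(L, y, W)
for _ in range(10):
    W = np.diag(np.sqrt(np.abs(x)) + 1e-12)
    x = weighted_minimum_norm(L, y, W)

print(np.argsort(np.abs(x))[-3:])       # indices of the recovered dominant sources
```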
Platform for Post-Processing Waveform-Based NDE
NASA Technical Reports Server (NTRS)
Roth, Don J.
2010-01-01
Signal- and image-processing methods are commonly needed to extract information from the waves, improve resolution of, and highlight defects in an image. Since some similarity exists for all waveform-based nondestructive evaluation (NDE) methods, it would seem that a common software platform containing multiple signal- and image-processing techniques to process the waveforms and images makes sense where multiple techniques, scientists, engineers, and organizations are involved. NDE Wave & Image Processor Version 2.0 software provides a single, integrated signal- and image-processing and analysis environment for total NDE data processing and analysis. It brings some of the most useful algorithms developed for NDE over the past 20 years into a commercial-grade product. The software can import signal/spectroscopic data, image data, and image series data. This software offers the user hundreds of basic and advanced signal- and image-processing capabilities including esoteric 1D and 2D wavelet-based de-noising, de-trending, and filtering. Batch processing is included for signal- and image-processing capability so that an optimized sequence of processing operations can be applied to entire folders of signals, spectra, and images. Additionally, an extensive interactive model-based curve-fitting facility has been included to allow fitting of spectroscopy data such as from Raman spectroscopy. An extensive joint-time frequency module is included for analysis of non-stationary or transient data such as that from acoustic emission, vibration, or earthquake data.
Seamless contiguity method for parallel segmentation of remote sensing image
NASA Astrophysics Data System (ADS)
Wang, Geng; Wang, Guanghui; Yu, Mei; Cui, Chengling
2015-12-01
Seamless contiguity is the key technology for parallel segmentation of large volumes of remote sensing data. It can effectively integrate the fragments produced by parallel processing into reasonable results for subsequent processes. Numerous methods for seamless contiguity have been reported in the literature, such as establishing buffers, merging area boundaries and data sewing. We propose a new method that is also based on building buffers. The seamless contiguity process we adopt is based on two principles: ensuring the accuracy of the boundaries and ensuring the correctness of the topology. First, the number of blocks is computed based on the data processing capacity; unlike approaches that establish buffers on both sides of the block line, the buffer is established only on the right side and underside of the line. Each block of data is segmented separately, producing segmentation objects and their label values. Second, one block (called the master block) is chosen and stitched to its adjacent blocks (called slave blocks); the remaining blocks are processed in sequence. Through this processing, the topological relationships and boundaries of the master block are guaranteed. Third, if the polygon boundaries of the master block intersect the buffer boundary and the polygon boundaries of the slave blocks intersect the block line, certain rules are adopted to merge or discard them. Fourth, the topology and boundaries in the buffer area are checked. Finally, a set of experiments was conducted and proves the feasibility of this method. This novel seamless contiguity algorithm provides an applicable and practical solution for efficient segmentation of massive remote sensing images.
Technical Note: Approximate Bayesian parameterization of a complex tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2013-08-01
Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
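The parametric likelihood approximation placed in a conventional MCMC can be sketched as follows; the one-parameter stochastic simulator, the summary statistics and the Metropolis settings are toy stand-ins for FORMIND and its outputs.

```python
import numpy as np

rng = np.random.default_rng(9)

def simulate(theta, n=200):
    """Stand-in stochastic simulator (not FORMIND): returns summary statistics
    (mean and standard deviation) of a simulated observable."""
    sample = rng.gamma(shape=theta, scale=1.0, size=n)
    return np.array([sample.mean(), sample.std()])

def synthetic_loglik(theta, observed_summary, n_rep=100):
    """Parametric likelihood approximation: fit a Gaussian to the simulated
    summary statistics and evaluate the observed summaries under it."""
    sims = np.array([simulate(theta) for _ in range(n_rep)])
    mu, cov = sims.mean(axis=0), np.cov(sims.T) + 1e-9 * np.eye(2)
    diff = observed_summary - mu
    return -0.5 * (diff @ np.linalg.solve(cov, diff) + np.log(np.linalg.det(cov)))

# "Virtual field data" generated at a known parameter value, then recovered by Metropolis MCMC.
observed = simulate(theta=4.0)
theta, ll = 2.0, synthetic_loglik(2.0, observed)
chain = []
for _ in range(300):
    prop = theta + 0.3 * rng.standard_normal()
    if prop > 0:
        ll_prop = synthetic_loglik(prop, observed)
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
    chain.append(theta)

print(np.mean(chain[100:]))    # posterior mean, expected near the true value of 4
```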
Processing methods for photoacoustic Doppler flowmetry with a clinical ultrasound scanner
NASA Astrophysics Data System (ADS)
Bücking, Thore M.; van den Berg, Pim J.; Balabani, Stavroula; Steenbergen, Wiendelt; Beard, Paul C.; Brunker, Joanna
2018-02-01
Photoacoustic flowmetry (PAF) based on time-domain cross-correlation of photoacoustic signals is a promising technique for deep tissue measurement of blood flow velocity. Signal processing has previously been developed for single element transducers. Here, the processing methods for acoustic resolution PAF using a clinical ultrasound transducer array are developed and validated using a 64-element transducer array with a -6 dB detection band of 11 to 17 MHz. Measurements were performed on a flow phantom consisting of a tube (580 μm inner diameter) perfused with human blood flowing at physiological speeds ranging from 3 to 25 mm/s. The processing pipeline comprised: image reconstruction, filtering, displacement detection, and masking. High-pass filtering and background subtraction were found to be key preprocessing steps to enable accurate flow velocity estimates, which were calculated using a cross-correlation based method. In addition, the regions of interest in the calculated velocity maps were defined using a masking approach based on the amplitude of the cross-correlation functions. These developments enabled blood flow measurements using a transducer array, bringing PAF one step closer to clinical applicability.
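The displacement-detection step can be sketched as a time-domain cross-correlation with parabolic sub-sample refinement; the sampling rate, pulse-repetition frequency and speed of sound below are assumed values, and the full reconstruction/filtering/masking pipeline is omitted.

```python
import numpy as np
from scipy.signal import correlate

def xcorr_shift(sig_a, sig_b):
    """Sub-sample shift (in samples) of sig_b relative to sig_a via cross-correlation
    with parabolic interpolation around the correlation peak."""
    c = correlate(sig_b, sig_a, mode="full")
    k = np.argmax(c)
    if 0 < k < len(c) - 1:                      # parabolic peak refinement
        denom = c[k - 1] - 2 * c[k] + c[k + 1]
        k = k + 0.5 * (c[k - 1] - c[k + 1]) / denom
    return k - (len(sig_a) - 1)

# Two successive acquisitions: the second is the first delayed by 2.3 samples.
fs, prf = 40e6, 100.0                           # sample rate (Hz) and pulse repetition frequency (assumed)
t = np.arange(1024) / fs
a = np.exp(-((t - 6e-6) / 0.4e-6) ** 2) * np.sin(2 * np.pi * 14e6 * t)
b = np.interp(t - 2.3 / fs, t, a)

shift = xcorr_shift(a, b)
axial_speed = shift / fs * 1540.0 * prf         # samples -> seconds -> metres, times PRF
print(shift, axial_speed)                       # ≈ 2.3 samples, ≈ 8.9 mm/s
```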
NASA Astrophysics Data System (ADS)
Avérous, Luc; Pollet, Eric
2016-03-01
In recent years, biopolymers have attracted great attention. This is, for instance, the case for chitosan, a linear polysaccharide. It is a deacetylated derivative of chitin, which is the second most abundant polysaccharide found in nature after cellulose. Chitosan has been found to be nontoxic, biodegradable, biofunctional and biocompatible, in addition to having antimicrobial and antifungal properties, and thus has great potential for environmental (packaging, …) or biomedical applications. For preparing chitosan-based materials, only solution casting or similar methods have been used in past studies. Solution casting has the disadvantages of low efficiency and difficulty in scaling up towards industrial applications. Besides, a great amount of environmentally unfriendly chemical solvents is used and released to the environment in this method. The reason for not using a melt processing method such as extrusion or kneading in past studies is that chitosan, like many other polysaccharides such as starch, has very low thermal stability and degrades prior to melting. Therefore, even though melt processing is more convenient and highly preferred for industrial production, its adaptation to polysaccharide-based materials remains very difficult. However, our recently published studies have demonstrated the successful use of an innovative melt processing method (internal mixer, extrusion, …) as an alternative route to solution casting for preparing materials based on thermoplastic chitosan. These promising thermoplastic materials, obtained by melt processing, have been the main topic of recent international projects with partners from different countries. Multiphase systems based on various renewable plasticizers have been elaborated and studied. Besides, different blends and nano-biocomposites based on nanoclays have been elaborated and fully analyzed. This vast project was initially based on an international consortium (Canada, Australia, France). The project is currently ongoing and open, with new international academic partners (Mexico, Brazil and Spain).
Visualization of DNA in highly processed botanical materials.
Lu, Zhengfei; Rubinsky, Maria; Babajanian, Silva; Zhang, Yanjun; Chang, Peter; Swanson, Gary
2018-04-15
DNA-based methods have been gaining recognition as a tool for botanical authentication in herbal medicine; however, their application to processed botanical materials is challenging due to the low quality and quantity of DNA left after extensive manufacturing processes. The low amount of DNA recovered from processed materials, especially extracts, is "invisible" to current technology, which has cast doubt on the presence of amplifiable botanical DNA. A method using adapter ligation and PCR amplification was successfully applied to visualize the "invisible" DNA in botanical extracts. The size of the "invisible" DNA fragments in botanical extracts was around 20-220 bp, compared to fragments of around 600 bp for the more easily visualized DNA in botanical powders. This technique is the first to allow characterization and visualization of small fragments of DNA in processed botanical materials and will provide key information to guide the development of appropriate DNA-based botanical authentication methods in the future.
Parametric synthesis of a robust controller on a base of mathematical programming method
NASA Astrophysics Data System (ADS)
Khozhaev, I. V.; Gayvoronskiy, S. A.; Ezangina, T. A.
2018-05-01
This paper is dedicated to deriving, on the basis of a mathematical programming method, sufficient conditions linking the root indices of robust control quality with the coefficients of an interval characteristic polynomial. On the basis of these conditions, a method was developed for synthesizing PI and PID controllers that provide an aperiodic transient process with an acceptable stability degree and, consequently, an acceptable settling time. The method was applied to the problem of synthesizing a controller for the depth control system of an unmanned underwater vehicle.
UAV path planning using artificial potential field method updated by optimal control theory
NASA Astrophysics Data System (ADS)
Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long
2016-04-01
The unmanned aerial vehicle (UAV) path planning problem is an important assignment in UAV mission planning. Based on the artificial potential field (APF) UAV path planning method, the problem is reconstructed as a constrained optimisation problem by introducing an additional control force. The constrained optimisation problem is translated into an unconstrained optimisation problem with the help of slack variables in this paper. The functional optimisation method is applied to reform this problem into an optimal control problem. The whole transformation process is deduced in detail, based on a discrete UAV dynamic model. Then, the path planning problem is solved with the help of the optimal control method. A path following process based on a six degrees of freedom simulation model of a quadrotor helicopter is introduced to verify the practicability of this method. Finally, the simulation results show that the improved method is more effective in path planning. In the planning space, the calculated path is shorter and smoother than that obtained using the traditional APF method. In addition, the improved method can solve the dead point problem effectively.
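The classical APF formulation underlying this kind of planner is simple to sketch. The minimal example below (an illustrative assumption, not the authors' constrained-optimisation reformulation) moves a 2D point vehicle down the gradient of an attractive potential toward the goal plus repulsive potentials around obstacles; the gains `k_att`, `k_rep` and the influence radius `rho0` are hypothetical parameters.

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, rho0=5.0, step=0.1):
    """One gradient-descent step of a basic artificial potential field planner."""
    # Attractive force pulls the vehicle straight toward the goal.
    f = k_att * (goal - pos)
    for obs in obstacles:
        diff = pos - obs
        rho = np.linalg.norm(diff)
        if 0.0 < rho < rho0:
            # Repulsive force pushes away from obstacles inside the influence radius.
            f += k_rep * (1.0 / rho - 1.0 / rho0) / rho**2 * (diff / rho)
    return pos + step * f / (np.linalg.norm(f) + 1e-9)

pos = np.array([0.0, 0.0])
goal = np.array([20.0, 10.0])
obstacles = [np.array([10.0, 5.0])]
path = [pos]
for _ in range(500):
    pos = apf_step(pos, goal, obstacles)
    path.append(pos)
    if np.linalg.norm(pos - goal) < 0.5:
        break
```

The dead-point (local-minimum) problem mentioned in the abstract occurs exactly when the attractive and repulsive terms above cancel; the additional control force and the optimal-control reformulation are what the paper introduces to escape it.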
Sun, Yangbo; Chen, Long; Huang, Bisheng; Chen, Keli
2017-07-01
As a mineral, the traditional Chinese medicine calamine has a similar shape to many other minerals. Investigations of commercially available calamine samples have shown that many fake and inferior calamine goods are sold on the market. The conventional identification method for calamine is complicated; therefore, given the large number of calamine samples, a rapid identification method is needed. To establish a qualitative model using near-infrared (NIR) spectroscopy for rapid identification of various calamine samples, large quantities of calamine samples including crude products, counterfeits and processed products were collected and correctly identified using physicochemical and powder X-ray diffraction methods. The NIR spectroscopy method was used to analyze these samples by combining the multi-reference correlation coefficient (MRCC) method and the error back propagation artificial neural network algorithm (BP-ANN), so as to realize the qualitative identification of calamine samples. The accuracy rate of the model based on the NIR and MRCC methods was 85%; in addition, the model, which took multiple factors into consideration, can be used to identify crude calamine products, counterfeits and processed products. Furthermore, by inputting the correlation coefficients of multiple references as the spectral feature data of the samples into the BP-ANN, a qualitative identification BP-ANN model was established, whose accuracy rate increased to 95%. The MRCC method can thus be used as an NIR-based method in the BP-ANN modeling process.
A new web-based framework development for fuzzy multi-criteria group decision-making.
Hanine, Mohamed; Boutkhoum, Omar; Tikniouine, Abdessadek; Agouti, Tarik
2016-01-01
The fuzzy multi-criteria group decision making (FMCGDM) process is usually used when a group of decision-makers faces imprecise data or linguistic variables to solve a problem. However, this process involves many methods that require time-consuming calculations, depending on the number of criteria, alternatives and decision-makers, in order to reach the optimal solution. In this study, a web-based FMCGDM framework that offers decision-makers a fast and reliable response service is proposed. The proposed framework includes commonly used tools for multi-criteria decision-making problems such as the fuzzy Delphi, fuzzy AHP and fuzzy TOPSIS methods. The integration of these methods makes it possible to exploit the strengths of each method and to compensate for its weaknesses. Finally, a case study of landfill site selection in Morocco is performed to demonstrate how this framework can facilitate the decision-making process. The results demonstrate that the proposed framework can successfully accomplish the goal of this study.
González Bardeci, Nicolás; Angiolini, Juan Francisco; De Rossi, María Cecilia; Bruno, Luciana; Levi, Valeria
2017-01-01
Fluorescence fluctuation-based methods are non-invasive microscopy tools especially suited for the study of dynamical aspects of biological processes. These methods examine spontaneous intensity fluctuations produced by fluorescent molecules moving through the small, femtoliter-sized observation volume defined in confocal and multiphoton microscopes. The quantitative analysis of the intensity trace provides information on the processes producing the fluctuations that include diffusion, binding interactions, chemical reactions and photophysical phenomena. In this review, we present the basic principles of the most widespread fluctuation-based methods, discuss their implementation in standard confocal microscopes and briefly revise some examples of their applications to address relevant questions in living cells. The ultimate goal of these methods in the Cell Biology field is to observe biomolecules as they move, interact with targets and perform their biological action in the natural context. © 2016 IUBMB Life, 69(1):8-15, 2017. © 2016 International Union of Biochemistry and Molecular Biology.
Model of Values-Based Management Process in Schools: A Mixed Design Study
ERIC Educational Resources Information Center
Dogan, Soner
2016-01-01
The aim of this paper is to evaluate the school administrators' values-based management behaviours according to the teachers' perceptions and opinions and, accordingly, to build a model of values-based management process in schools. The study was conducted using explanatory design which is inclusive of both quantitative and qualitative methods.…
2012-12-01
IR remote sensing offers a measurement method to detect gaseous species in the outdoor environment. Two major obstacles limit the application of this... method in quantitative analysis: (1) the effect of both temperature and concentration on the measured spectral intensities and (2) the difficulty and...crucial. In this research, particle swarm optimization, a population-based optimization method, was applied. Digital filtering and wavelet processing methods
Prediction of Time Response of Electrowetting
NASA Astrophysics Data System (ADS)
Lee, Seung Jun; Hong, Jiwoo; Kang, Kwan Hyoung
2009-11-01
It is very important to predict the time response of electrowetting-based devices, such as liquid lenses, reflective displays, and optical switches. We investigated the time response of electrowetting, based on an analytical and a numerical method, to find characteristic scales and a scaling law for the switching time. For this, the spreading process of a sessile droplet was analyzed based on the domain perturbation method. First, we considered the case of weakly viscous fluids. The analytical result for the spreading process was compared with experimental results, which showed very good agreement in the overall time response. It was shown that the overall dynamics is governed by the P2 shape mode. We derived characteristic scales combining the droplet volume, density, and surface tension. The overall dynamic process was scaled quite well by these scales. A scaling law was derived from the analytical solution and was verified experimentally. We also suggest a scaling law for highly viscous liquids, based on results of numerical analysis for the electrowetting-actuated spreading process.
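As a hedged illustration of what such a characteristic scale can look like, one dimensionally consistent inertial-capillary time built from the droplet volume V, density ρ and surface tension σ named in the abstract (an assumption for illustration; the paper's exact combination may differ) is

```latex
\tau_c \sim \sqrt{\frac{\rho V}{\sigma}}, \qquad t^{*} = \frac{t}{\tau_c},
```

so that switching-time curves measured for droplets of different sizes would collapse when plotted against the dimensionless time t*.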
Arithmetic of five-part of leukocytes based on image process
NASA Astrophysics Data System (ADS)
Li, Yian; Wang, Guoyou; Liu, Jianguo
2007-12-01
This paper applies computer image processing and pattern recognition methods to the problem of automatic classification and counting of leukocytes (white blood cells) in peripheral blood. A new five-part leukocyte differential algorithm based on image processing and pattern recognition is presented, which realizes automatic classification of leukocytes. The first aim is to detect the leukocytes; a major requirement of the whole system is then to classify them into five classes. The algorithm, based on a visual saliency mechanism, processes the image sequentially, segments the leukocytes and extracts their features. Using prior knowledge of cell and image shape information, the algorithm first segments the probable leukocyte shape by a new Chamfer-based method and then extracts detailed features, which greatly reduces both the misclassification rate and the computational load. It also has a learning function. A new measurement of nucleus shape that provides more accurate information is also presented. The algorithm has great application value in clinical blood testing.
Insar Unwrapping Error Correction Based on Quasi-Accurate Detection of Gross Errors (quad)
NASA Astrophysics Data System (ADS)
Kang, Y.; Zhao, C. Y.; Zhang, Q.; Yang, C. S.
2018-04-01
Unwrapping error is a common error in InSAR processing that seriously degrades the accuracy of the monitoring results. Based on a gross error correction method, quasi-accurate detection (QUAD), a method for the automatic correction of unwrapping errors is established in this paper. The method identifies and corrects the unwrapping errors by establishing a functional model between the true errors and the interferograms. The basic principle and processing steps are presented. The method is then compared with the L1-norm method using simulated data. Results show that both methods can effectively suppress the unwrapping error when the ratio of unwrapping errors is low, and that the two methods can complement each other when the ratio of unwrapping errors is relatively high. Finally, real SAR data are used to test the phase unwrapping error correction. Results show that the new method can correct phase unwrapping errors successfully in practical applications.
Scalable approximate policies for Markov decision process models of hospital elective admissions.
Zhu, George; Lizotte, Dan; Hoey, Jesse
2014-05-01
To demonstrate the feasibility of using stochastic simulation methods for the solution of a large-scale Markov decision process model of on-line patient admissions scheduling. The problem of admissions scheduling is modeled as a Markov decision process in which the states represent numbers of patients using each of a number of resources. We investigate current state-of-the-art real time planning methods to compute solutions to this Markov decision process. Due to the complexity of the model, traditional model-based planners are limited in scalability since they require an explicit enumeration of the model dynamics. To overcome this challenge, we apply sample-based planners along with efficient simulation techniques that given an initial start state, generate an action on-demand while avoiding portions of the model that are irrelevant to the start state. We also propose a novel variant of a popular sample-based planner that is particularly well suited to the elective admissions problem. Results show that the stochastic simulation methods allow for the problem size to be scaled by a factor of almost 10 in the action space, and exponentially in the state space. We have demonstrated our approach on a problem with 81 actions, four specialities and four treatment patterns, and shown that we can generate solutions that are near-optimal in about 100s. Sample-based planners are a viable alternative to state-based planners for large Markov decision process models of elective admissions scheduling. Copyright © 2014 Elsevier B.V. All rights reserved.
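A minimal sketch of the kind of sample-based, on-demand planning described above is a Monte Carlo rollout planner: given only a generative simulator of the admissions MDP, it estimates each action's value from sampled trajectories starting at the current state and never enumerates the full model. The toy simulator below (bed occupancy, random discharges, overflow penalty) and all of its parameters are hypothetical placeholders, not the paper's model.

```python
import random

CAPACITY = 10                 # toy resource: number of beds (hypothetical)
ACTIONS = [0, 1, 2]           # admit 0, 1 or 2 elective patients this step

def step(state, action):
    """Toy generative model: occupancy evolves with admissions and random discharges."""
    arrivals = action + (1 if random.random() < 0.3 else 0)   # electives + emergency
    discharges = sum(1 for _ in range(state) if random.random() < 0.2)
    next_state = max(0, state + arrivals - discharges)
    overflow = max(0, next_state - CAPACITY)
    reward = action - 5.0 * overflow        # reward admissions, penalize overflow
    return min(next_state, CAPACITY), reward

def rollout(state, depth=30, gamma=0.95):
    """Discounted return of one random-policy rollout from `state`."""
    total, discount = 0.0, 1.0
    for _ in range(depth):
        state, reward = step(state, random.choice(ACTIONS))
        total += discount * reward
        discount *= gamma
    return total

def plan(state, n_rollouts=200, gamma=0.95):
    """Sample-based one-step planner: pick the action with the best average sampled return."""
    best_action, best_value = None, float("-inf")
    for action in ACTIONS:
        value = 0.0
        for _ in range(n_rollouts):
            next_state, reward = step(state, action)
            value += reward + gamma * rollout(next_state)
        value /= n_rollouts
        if value > best_value:
            best_action, best_value = action, value
    return best_action

print("recommended admissions:", plan(state=7))
```

The design choice is the one emphasized in the abstract: only states reachable from the current start state are ever simulated, so the planner scales with rollout budget rather than with the size of the state space.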
ON DEVELOPING TOOLS AND METHODS FOR ENVIRONMENTALLY BENIGN PROCESSES
Two types of tools are generally needed for designing processes and products that are cleaner from environmental impact perspectives. The first kind is called process tools. Process tools are based on information obtained from experimental investigations in chemistry, materials...
A new approach towards image based virtual 3D city modeling by using close range photogrammetry
NASA Astrophysics Data System (ADS)
Singh, S. P.; Jain, K.; Mandla, V. R.
2014-05-01
A 3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and other man-made features belonging to an urban area. The demand for 3D city modeling is increasing daily for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation: sketch-based modeling, procedural grammar based modeling, and close range photogrammetry based modeling. The literature shows that, to date, there is no complete solution available to create a complete 3D city model using images, and these image-based methods also have limitations. This paper gives a new approach towards image-based virtual 3D city modeling using close range photogrammetry. The approach is divided into three sections: data acquisition, 3D data processing, and data combination. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area; image frames were created from the video data, and the minimum required and suitable video image frames were selected for 3D processing. In the second section, a 3D model of the area was created based on close range photogrammetric principles and computer vision techniques. In the third section, this 3D model was exported for adding and merging with other pieces of the larger area; scaling and alignment of the 3D model were performed, and after applying texturing and rendering, a final photo-realistic textured 3D model was created. This 3D model was then transferred into a walk-through model or movie form. Most of the processing steps are automatic, so the method is cost effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city. Aerial photography is restricted in many countries and high resolution satellite images are costly; the proposed method, in contrast, is based only on simple video recording of the area and is therefore suitable for 3D city modeling. A photo-realistic, scalable, geo-referenced virtual 3D city model is useful for various kinds of applications such as planning, navigation, tourism, disaster management, transportation, municipal, urban and environmental management, and the real-estate industry. This study will thus provide a good roadmap for the geomatics community to create photo-realistic virtual 3D city models using close range photogrammetry.
Hou, Xiang-Mei; Zhang, Lei; Yue, Hong-Shui; Ju, Ai-Chun; Ye, Zheng-Liang
2016-07-01
To study and establish a monitoring method for the macroporous resin column chromatography process of salvianolic acids using near infrared spectroscopy (NIR) as a process analytical technology (PAT), a multivariate statistical process control (MSPC) model was developed based on 7 normal operation batches, and 2 test batches (one normal operation batch and one abnormal operation batch) were used to verify the monitoring performance of this model. The results showed that the MSPC model had a good monitoring ability for the column chromatography process. Meanwhile, an NIR quantitative calibration model was established for three key quality indexes (rosmarinic acid, lithospermic acid and salvianolic acid B) using the partial least squares (PLS) algorithm. The verification results demonstrated that this model had satisfactory prediction performance. The combined application of the two models can effectively achieve real-time monitoring of the macroporous resin column chromatography process of salvianolic acids and can be used to conduct on-line analysis of the key quality indexes. The established process monitoring method provides a reference for the development of process analytical technology for traditional Chinese medicine manufacturing. Copyright© by the Chinese Pharmaceutical Association.
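As an illustration of the kind of NIR quantitative calibration described above, the sketch below fits a PLS regression from NIR spectra to a quality index with scikit-learn; the file names, number of latent variables and cross-validation folds are assumptions for the example, not details from the paper.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

# X: (n_samples, n_wavelengths) NIR spectra; y: reference salvianolic acid B content
X = np.load("nir_spectra.npy")            # hypothetical calibration data
y = np.load("salvianolic_acid_B.npy")

pls = PLSRegression(n_components=6)        # latent variables would be chosen by cross-validation
y_cv = cross_val_predict(pls, X, y, cv=7)
print("cross-validated R^2:", r2_score(y, y_cv))

pls.fit(X, y)                              # final calibration model for on-line prediction
y_new = pls.predict(X[:1])
```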
NASA Astrophysics Data System (ADS)
Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.
2003-04-01
Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be ubiquitously used in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed, within an integrated GIS modeling environment, a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and the spatial extent of saturated areas and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated in an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provides a physically based method that gives realistic results for watersheds with VSA hydrology.
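For reference, the traditional SCS-CN runoff equation that the distributed CN-VSA method redistributes over the landscape can be written in a few lines; units here are inches, and the 0.2·S initial-abstraction ratio is the conventional default (an assumption, since the paper may use a different ratio).

```python
def scs_cn_runoff(P, CN, ia_ratio=0.2):
    """SCS Curve Number runoff depth Q (inches) for storm rainfall P (inches)."""
    S = 1000.0 / CN - 10.0       # potential maximum retention (inches)
    Ia = ia_ratio * S            # initial abstraction before runoff begins
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

# Example: a 2.5-inch storm on a watershed area with CN = 75
print(scs_cn_runoff(2.5, 75))
```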
Li, Jun; Lin, Qiu-Hua; Kang, Chun-Yu; Wang, Kai; Yang, Xiu-Ting
2018-03-18
Direction of arrival (DOA) estimation is the basis for underwater target localization and tracking using towed line array sonar devices. A method of DOA estimation for underwater wideband weak targets based on coherent signal subspace (CSS) processing and compressed sensing (CS) theory is proposed. Under the CSS processing framework, wideband frequency focusing is accompanied by a two-sided correlation transformation, allowing the DOA of underwater wideband targets to be estimated based on the spatial sparsity of the targets and the compressed sensing reconstruction algorithm. Through analysis and processing of simulation data and marine trial data, it is shown that this method can accomplish the DOA estimation of underwater wideband weak targets. Results also show that this method can considerably improve the spatial spectrum of weak target signals, enhancing the ability to detect them. It can solve the problems of low directional resolution and unreliable weak-target detection in traditional beamforming technology. Compared with the conventional minimum variance distortionless response beamformers (MVDR), this method has many advantages, such as higher directional resolution, wider detection range, fewer required snapshots and more accurate detection for weak targets.
Digital Video Cameras for Brainstorming and Outlining: The Process and Potential
ERIC Educational Resources Information Center
Unger, John A.; Scullion, Vicki A.
2013-01-01
This "Voices from the Field" paper presents methods and participant-exemplar data for integrating digital video cameras into the writing process across postsecondary literacy contexts. The methods and participant data are part of an ongoing action-based research project systematically designed to bring research and theory into practice…
Davalos, Rafael V [Oakland, CA; Ellis, Christopher R. B. [Oakland, CA
2010-08-17
Disclosed is an apparatus and method for inserting one or several chemical or biological species into phospholipid containers that are controlled within a microfluidic network, wherein individual containers are tracked and manipulated by electric fields and wherein the contained species may be chemically processed.
Davalos, Rafael V [Oakland, CA; Ellis, Christopher R. B. [Oakland, CA
2008-03-04
Disclosed is an apparatus and method for inserting one or several chemical or biological species into phospholipid containers that are controlled within a microfluidic network, wherein individual containers are tracked and manipulated by electric fields and wherein the contained species may be chemically processed.
Processing bronchial sonograms to detect respiratory cycle fragments
NASA Astrophysics Data System (ADS)
Bureev, A. Sh; Zhdanov, D. S.; Zemlyakov, I. Yu; Svetlik, M. V.
2014-10-01
This article describes the authors' results of work on the development of a method for the automated assessment of the state of the human bronchopulmonary system based on acoustic data. In particular, the article covers the method of detecting breath sounds on bronchial sonograms obtained during the auscultation process.
Using Mixed Methods to Assess Initiatives with Broad-Based Goals
ERIC Educational Resources Information Center
Inkelas, Karen Kurotsuchi
2017-01-01
This chapter describes a process for assessing programmatic initiatives with broad-ranging goals with the use of a mixed-methods design. Using an example of a day-long teaching development conference, this chapter provides practitioners step-by-step guidance on how to implement this assessment process.
GIS-based measurements that combine native raster and native vector data are commonly used to assess environmental quality. Most of these measurements can be calculated using either raster or vector data formats and processing methods. Raster processes are more commonly used beca...
EVALUATION OF A TEST METHOD FOR MEASURING INDOOR AIR EMISSIONS FROM DRY-PROCESS PHOTOCOPIERS
A large chamber test method for measuring indoor air emissions from office equipment was developed, evaluated, and revised based on the initial testing of four dry-process photocopiers. Because all chambers may not necessarily produce similar results (e.g., due to differences in ...
ERIC Educational Resources Information Center
Hahn, William G.; Bart, Barbara D.
2003-01-01
Business students were taught a total quality management-based outlining process for course readings and a tally method to measure learning efficiency. Comparison of 233 who used the process and 99 who did not showed that the group means of users' test scores were 12.4 points higher than those of nonusers. (Contains 25 references.) (SK)
Simulated electronic heterodyne recording and processing of pulsed-laser holograms
NASA Technical Reports Server (NTRS)
Decker, A. J.
1979-01-01
The electronic recording of pulsed-laser holograms is proposed. The polarization sensitivity of each resolution element of the detector is controlled independently to add an arbitrary phase to the image waves. This method which can be used to simulate heterodyne recording and to process three-dimensional optical images, is based on a similar method for heterodyne recording and processing of continuous-wave holograms.
Liu, Qiang; Chai, Tianyou; Wang, Hong; Qin, Si-Zhao Joe
2011-12-01
The continuous annealing process line (CAPL) of cold rolling is an important unit to improve the mechanical properties of steel strips in steel making. In continuous annealing processes, strip tension is an important factor, which indicates whether the line operates steadily. Abnormal tension profile distribution along the production line can lead to strip break and roll slippage. Therefore, it is essential to estimate the whole tension profile in order to prevent the occurrence of faults. However, in real annealing processes, only a limited number of strip tension sensors are installed along the machine direction. Since the effects of strip temperature, gas flow, bearing friction, strip inertia, and roll eccentricity can lead to nonlinear tension dynamics, it is difficult to apply the first-principles induced model to estimate the tension profile distribution. In this paper, a novel data-based hybrid tension estimation and fault diagnosis method is proposed to estimate the unmeasured tension between two neighboring rolls. The main model is established by an observer-based method using a limited number of measured tensions, speeds, and currents of each roll, where the tension error compensation model is designed by applying neural networks principal component regression. The corresponding tension fault diagnosis method is designed using the estimated tensions. Finally, the proposed tension estimation and fault diagnosis method was applied to a real CAPL in a steel-making company, demonstrating the effectiveness of the proposed method.
Feng, Guo; Chen, Yun-Long; Li, Wei; Li, Lai-Lai; Wu, Zeng-Guang; Wu, Zi-Jun; Hai, Yue; Zhang, Si-Chao; Zheng, Chuan-Qi; Liu, Chang-Xiao; He, Xin
2018-06-01
Radix Wikstroemia indica (RWI), named "Liao Ge Wang" in Chinese, is a toxic Chinese herbal medicine (CHM) commonly used by the Miao nationality of South China. "Sweat soaking method" processing of RWI can effectively decrease its toxicity and preserve its therapeutic effect. However, the underlying mechanism of processing is still not clear, and a Q-markers database for processed RWI has not been established. This study aims to investigate and establish a quality evaluation system and potential Q-markers based on the "effect-toxicity-chemicals" relationship of RWI for quality/safety assessment of "sweat soaking method" processing. The variation of RWI in efficacy and toxicity before and after processing was investigated by pharmacological and toxicological studies. A cytotoxicity test was used to screen the cytotoxicity of components in RWI. The material basis in ethanol extracts of raw and processed RWI was studied by UPLC-Q-TOF/MS, and the potential Q-markers were analyzed and predicted according to the "effect-toxicity-chemicals" relationship. RWI was processed by the "sweat soaking method", which preserved efficacy and reduced toxicity. Raw RWI and processed RWI did not show a significant difference in antinociceptive and anti-inflammatory effect; however, the injury to liver and kidney caused by processed RWI was much weaker than that caused by raw RWI. Twenty compounds were identified from the ethanol extracts of the raw and processed products of RWI using UPLC-Q-TOF/MS, including daphnoretin, emodin, triumbelletin, dibutyl phthalate, methyl paraben, YH-10 + OH, matairesinol, arctigenin, kaempferol and physcion. Furthermore, 3 diterpenoids (YH-10, YH-12 and YH-15) were shown to possess high toxicity and decreased by 48%, 44% and 65%, respectively, after processing; they can therefore be regarded as potential Q-markers for quality/safety assessment of "sweat soaking method" processed RWI. A Q-marker database of RWI processed by the "sweat soaking method" was established according to these results and the "effect-toxicity-chemicals" relationship, which provides scientific evidence for the processing method, its mechanism and the clinical application of RWI, and also provides experimental results to explore the application of Q-markers in CHM. Copyright © 2018 Elsevier GmbH. All rights reserved.
Jeong, Kyeong-Min; Kim, Hee-Seung; Hong, Sung-In; Lee, Sung-Keun; Jo, Na-Young; Kim, Yong-Soo; Lim, Hong-Gi; Park, Jae-Hyeung
2012-10-08
Speed enhancement of integral imaging based incoherent Fourier hologram capture using a graphic processing unit is reported. Integral imaging based method enables exact hologram capture of real-existing three-dimensional objects under regular incoherent illumination. In our implementation, we apply parallel computation scheme using the graphic processing unit, accelerating the processing speed. Using enhanced speed of hologram capture, we also implement a pseudo real-time hologram capture and optical reconstruction system. The overall operation speed is measured to be 1 frame per second.
Analysis of a Knowledge-Management-Based Process of Transferring Project Management Skills
ERIC Educational Resources Information Center
Ioi, Toshihiro; Ono, Masakazu; Ishii, Kota; Kato, Kazuhiko
2012-01-01
Purpose: The purpose of this paper is to propose a method for the transfer of knowledge and skills in project management (PM) based on techniques in knowledge management (KM). Design/methodology/approach: The literature contains studies on methods to extract experiential knowledge in PM, but few studies exist that focus on methods to convert…
ERIC Educational Resources Information Center
Lee, Young-Jin
2012-01-01
This paper presents a computational method that can efficiently estimate the ability of students from the log files of a Web-based learning environment capturing their problem solving processes. The computational method developed in this study approximates the posterior distribution of the student's ability obtained from the conventional Bayes…
Experimental Study for Automatic Colony Counting System Based Onimage Processing
NASA Astrophysics Data System (ADS)
Fang, Junlong; Li, Wenzhe; Wang, Guoxin
Colony counting in many experiments is currently performed manually, which makes it difficult to count quickly and accurately. A new automatic colony counting system was therefore developed. Making use of image-processing technology, a study was made of the feasibility of objectively distinguishing white bacterial colonies from clear plates according to RGB color theory. An optimal chromatic value was obtained based upon a large number of experiments on the distribution of chromatic values. It has been proved that the method greatly improves the accuracy and efficiency of colony counting and that the counting result is not affected by the inoculation method or by the shape or size of the colonies. The results reveal that automatic detection of colony quantity using image-processing technology can be an effective approach.
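A minimal sketch of an RGB-threshold colony counter of the kind described above is given below; the brightness and chroma thresholds, the minimum blob size and the input array are hypothetical placeholders (the paper's optimal chromatic value was determined experimentally).

```python
import numpy as np
from scipy import ndimage

# Hypothetical pre-loaded H x W x 3 RGB image of the plate as a float array.
img = np.load("plate_rgb.npy").astype(float)
r, g, b = img[..., 0], img[..., 1], img[..., 2]

# White colonies: bright and nearly colorless pixels (assumed chromatic criterion).
brightness = (r + g + b) / 3.0
chroma = np.max(img, axis=-1) - np.min(img, axis=-1)
mask = (brightness > 180) & (chroma < 25)

# Clean up small specks, then count connected components as colonies.
mask = ndimage.binary_opening(mask, iterations=2)
labels, n_blobs = ndimage.label(mask)
sizes = np.bincount(labels.ravel())[1:]          # pixel count of each labeled blob
n_colonies = int(np.sum(sizes > 20))             # ignore blobs smaller than 20 px
print("colonies:", n_colonies)
```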
Non-isothermal crystallization kinetics of eucalyptus lignosulfonate/polyvinyl alcohol composite.
Ye, De-Zhan; Zhang, Xi; Gu, Shaojin; Zhou, Yingshan; Xu, Weilin
2017-04-01
Non-isothermal crystallization kinetics were studied for polyvinyl alcohol (PVA) mixed with eucalyptus lignosulfonate calcium (HLS) as a bio-based thermal stabilizer, and systematically analyzed based on the Jeziorny model, the Ozawa equation and the Mo method. The results indicated that the entire crystallization process took place in two main stages, the primary and secondary crystallization processes. The Mo method described the non-isothermal crystallization behavior well. Based on the crystallization half-time, the k c value in the Jeziorny model, the F(T) value in the Mo method and the crystallization activation energy, it was concluded that low loadings of HLS accelerated the PVA crystallization process, whereas the growth rate of PVA crystallization was impeded at high HLS contents. Copyright © 2017 Elsevier B.V. All rights reserved.
SIGKit: a New Data-based Software for Learning Introductory Geophysics
NASA Astrophysics Data System (ADS)
Zhang, Y.; Kruse, S.; George, O.; Esmaeili, S.; Papadimitrios, K. S.; Bank, C. G.; Cadmus, A.; Kenneally, N.; Patton, K.; Brusher, J.
2016-12-01
Students of diverse academic backgrounds take introductory geophysics courses to learn the theory of a variety of measurement and analysis methods with the expectation to be able to apply their basic knowledge to real data. Ideally, such data is collected in field courses and also used in lecture-based courses because they provide a critical context for better learning and understanding of geophysical methods. Each method requires a separate software package for the data processing steps, and the complexity and variety of professional software makes the path through data processing to data interpretation a strenuous learning process for students and a challenging teaching task for instructors. SIGKit (Student Investigation of Geophysics Toolkit) being developed as a collaboration between the University of South Florida, the University of Toronto, and MathWorks intends to address these shortcomings by showing the most essential processing steps and allowing students to visualize the underlying physics of the various methods. It is based on MATLAB software and offered as an easy-to-use graphical user interface and packaged so it can run as an executable in the classroom and the field even on computers without MATLAB licenses. An evaluation of the software based on student feedback from focus-group interviews and think-aloud observations helps drive its development and refinement. The toolkit provides a logical gateway into the more sophisticated and costly software students will encounter later in their training and careers by combining essential visualization, modeling, processing, and analysis steps for seismic, GPR, magnetics, gravity, resistivity, and electromagnetic data.
Research accomplished at the Knowledge Based Systems Lab: IDEF3, version 1.0
NASA Technical Reports Server (NTRS)
Mayer, Richard J.; Menzel, Christopher P.; Mayer, Paula S. D.
1991-01-01
An overview is presented of the foundations and content of the evolving IDEF3 process flow and object state description capture method. This method is currently in beta test. Ongoing efforts in the formulation of formal semantics models for descriptions captured in the outlined form and in the actual application of this method can be expected to cause an evolution in the method language. A language is described for the representation of process and object state centered system description. IDEF3 is a scenario driven process flow modeling methodology created specifically for these types of descriptive activities.
A Model-based B2B (Batch to Batch) Control for An Industrial Batch Polymerization Process
NASA Astrophysics Data System (ADS)
Ogawa, Morimasa
This paper describes an overview of a model-based B2B (batch-to-batch) control for an industrial batch polymerization process. In order to control the reaction temperature precisely, several methods based on a rigorous process dynamics model are employed at all design stages of the B2B control, such as modeling and parameter estimation of the reaction kinetics, which is one of the important parts of the process dynamics model. The designed B2B control consists of the gain-scheduled I-PD/II2-PD control (I-PD with double integral control), feed-forward compensation at the batch start time, and model adaptation utilizing the results of the last batch operation. Throughout the actual batch operations, the B2B control provided superior control performance compared with that of conventional control methods.
ERIC Educational Resources Information Center
Osterby, Bruce
1989-01-01
Described is an activity which demonstrates an organic-based reprographic method that is used extensively for the duplication of microfilm and engineering drawings. Discussed are the chemistry of the process and how to demonstrate the process for students. (CW)
Hybrid Method for Mobile learning Cooperative: Study of Timor Leste
NASA Astrophysics Data System (ADS)
da Costa Tavares, Ofelia Cizela; Suyoto; Pranowo
2018-02-01
In the modern world, decision support systems are very useful for helping to solve problems, and this study discusses the learning process of savings and loan cooperatives in Timor Leste. The motivation for the study is that the people of Timor Leste are still learning how to use a DSS to run savings and loan cooperatives well. Based on existing research on credit cooperatives in the Timor Leste community, a mobile application will be built to support the cooperative learning process in East Timorese society. The methods used for decision making are the AHP (Analytic Hierarchy Process) and SAW (Simple Additive Weighting) methods, which evaluate the result of each criterion and the weight of its value. The result of this research is a mobile learning application for cooperatives whose decision support system uses the SAW and AHP methods. Originality/value: the two methods, AHP and SAW, are combined in the mobile application development to help the decision support process of a savings and credit cooperative in Timor Leste.
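The SAW step mentioned above reduces to normalizing the decision matrix and taking a weighted sum. The sketch below illustrates that calculation with made-up criteria and weights; in the study the weights would come from AHP pairwise comparisons, so all values here are hypothetical.

```python
import numpy as np

# rows = alternatives (e.g., loan applicants), columns = criteria
scores = np.array([[3500., 12., 4.],
                   [2800., 24., 5.],
                   [4200.,  6., 3.]])
weights = np.array([0.5, 0.3, 0.2])          # hypothetical AHP-derived weights
benefit = np.array([True, False, True])      # False marks cost criteria (lower is better)

norm = np.where(benefit,
                scores / scores.max(axis=0),     # benefit criteria: x / max
                scores.min(axis=0) / scores)     # cost criteria: min / x
saw_score = norm @ weights                       # simple additive weighting
ranking = np.argsort(-saw_score)                 # best alternative first
print(saw_score, ranking)
```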
Distributed processing method for arbitrary view generation in camera sensor network
NASA Astrophysics Data System (ADS)
Tehrani, Mehrdad P.; Fujii, Toshiaki; Tanimoto, Masayuki
2003-05-01
A camera sensor network, a recent advance in technology, is a network in which each sensor node can capture video signals, process them and communicate with other nodes. The processing task in this network is to generate an arbitrary view, which can be requested by a central node or a user. To avoid unnecessary communication between nodes in the camera sensor network and to speed up processing, we have distributed the processing tasks between the nodes. In this method, each sensor node processes part of the interpolation algorithm to generate the interpolated image with only local communication between nodes. The processing task in the camera sensor network is ray-space interpolation, which is an object-independent method based on MSE minimization using adaptive filtering. Two methods are proposed for distributing the processing tasks, Fully Image Shared Decentralized Processing (FIS-DP) and Partially Image Shared Decentralized Processing (PIS-DP), which share image data locally. Comparison of the proposed methods with the Centralized Processing (CP) method shows that PIS-DP has the highest processing speed after FIS-DP, and CP has the lowest processing speed. The communication rates of CP and PIS-DP are almost the same and better than that of FIS-DP. PIS-DP is therefore recommended because of its better overall performance than CP and FIS-DP.
An adaptive signal-processing approach to online adaptive tutoring.
Bergeron, Bryan; Cline, Andrew
2011-01-01
Conventional intelligent or adaptive tutoring online systems rely on domain-specific models of learner behavior based on rules, deep domain knowledge, and other resource-intensive methods. We have developed and studied a domain-independent methodology of adaptive tutoring based on domain-independent signal-processing approaches that obviate the need for the construction of explicit expert and student models. A key advantage of our method over conventional approaches is a lower barrier to entry for educators who want to develop adaptive online learning materials.
Dimensional Precision Research of Wax Molding Rapid Prototyping based on Droplet Injection
NASA Astrophysics Data System (ADS)
Mingji, Huang; Geng, Wu; yan, Shan
2017-11-01
The traditional casting process is complex; the mold is an essential part of the product, and mold quality directly affects product quality. Rapid prototyping by droplet-based 3D printing is used to produce the mold prototype. The wax model has the advantages of high speed, low cost and the ability to form complex structures. Using orthogonal experiments as the main method, the factors affecting dimensional precision were analyzed. The purpose is to obtain the optimal process parameters and to improve the dimensional accuracy of production based on droplet injection molding.
Panzer, Fabian; Hanft, Dominik; Gujar, Tanaji P; Kahle, Frank-Julian; Thelakkat, Mukundan; Köhler, Anna; Moos, Ralf
2016-04-08
We present the successful fabrication of CH₃NH₃PbI₃ perovskite layers by the aerosol deposition method (ADM). The layers show high structural purity and compactness, thus making them suitable for application in perovskite-based optoelectronic devices. By using the aerosol deposition method we are able to decouple material synthesis from layer processing. Our results therefore allow for enhanced and easy control over the fabrication of perovskite-based devices, further paving the way for their commercialization.
Hardware based redundant multi-threading inside a GPU for improved reliability
Sridharan, Vilas; Gurumurthi, Sudhanva
2015-05-05
A system and method for verifying computation output using computer hardware are provided. Instances of computation are generated and processed on hardware-based processors. As instances of computation are processed, each instance of computation receives a load accessible to other instances of computation. Instances of output are generated by processing the instances of computation. The instances of output are verified against each other in a hardware based processor to ensure accuracy of the output.
Pressure filtration of ceramic pastes. 4: Treatment of experimental data
NASA Technical Reports Server (NTRS)
Torrecillas, A. S.; Polo, J. F.; Perez, A. A.
1984-01-01
The use of a data processing method based on the algorithm proposed by Kalman and its application to the filtration process at constant pressure are described, as well as the advantages of this method. The technique is compared to the least squares method. The procedure allows precise adjustment of the parameters of the equation directly related to the specific resistance of the cake.
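A hedged sketch of what a Kalman-style recursive treatment of constant-pressure filtration data can look like is given below. It assumes the classical linearization t/V = a·V + b (whose slope is related to the specific cake resistance) and tracks the two coefficients as a constant state with a standard Kalman update; the noise values and synthetic data are placeholders, not the paper's experiments.

```python
import numpy as np

# Synthetic constant-pressure filtration data: t/V assumed linear in V.
V = np.linspace(0.5, 5.0, 20)                       # filtrate volume
t_over_V = 2.0 * V + 1.5 + 0.05 * np.random.randn(V.size)

x = np.zeros(2)                                     # state: [slope a, intercept b]
P = np.eye(2) * 1e3                                 # large initial uncertainty
R = 0.05 ** 2                                       # measurement noise variance (assumed)

for Vk, yk in zip(V, t_over_V):
    H = np.array([Vk, 1.0])                         # observation model: y = a*V + b
    S = H @ P @ H + R
    K = P @ H / S                                   # Kalman gain
    x = x + K * (yk - H @ x)                        # state update with the innovation
    P = P - np.outer(K, H) @ P                      # covariance update

a_ls, b_ls = np.polyfit(V, t_over_V, 1)             # least-squares fit for comparison
print("Kalman estimate:", x, " least squares:", a_ls, b_ls)
```

The practical advantage suggested by the abstract is that the recursive estimate is updated measurement by measurement, whereas the least-squares fit requires the complete data set.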
Frazier, Zachary
2012-01-01
Particle-based Brownian dynamics simulations offer the opportunity not only to simulate diffusion of particles but also the reactions between them. They therefore provide an opportunity to integrate varied biological data into spatially explicit models of biological processes, such as signal transduction or mitosis. However, particle-based reaction-diffusion methods are often hampered by the relatively small time step needed for an accurate description of the reaction-diffusion framework. Such small time steps often prevent simulation times that are relevant for biological processes. It is therefore of great importance to develop reaction-diffusion methods that tolerate larger time steps while maintaining relatively high accuracy. Here, we provide an algorithm which detects potential particle collisions prior to a BD-based particle displacement and at the same time rigorously obeys the detailed balance rule of equilibrium reactions. We show that for reaction-diffusion processes of particles mimicking proteins, the method can increase the typical BD time step by an order of magnitude while maintaining similar accuracy in the reaction-diffusion modelling. PMID:22697237
A real-time spike sorting method based on the embedded GPU.
Zelan Yang; Kedi Xu; Xiang Tian; Shaomin Zhang; Xiaoxiang Zheng
2017-07-01
Microelectrode arrays with hundreds of channels have been widely used to acquire neuron population signals in neuroscience studies. Online spike sorting is becoming one of the most important challenges for high-throughput neural signal acquisition systems. Graphic processing units (GPUs) with high parallel computing capability might provide an alternative solution to meet the increasing real-time computational demands of spike sorting. This study reports a method of real-time spike sorting through the compute unified device architecture (CUDA), implemented on an embedded GPU (NVIDIA JETSON Tegra K1, TK1). The sorting approach is based on principal component analysis (PCA) and K-means. By analyzing the parallelism of each process, the method was further optimized in the thread memory model of the GPU. Our results showed that the GPU-based classifier on the TK1 is 37.92 times faster than the MATLAB-based classifier on a PC, while their accuracies were the same. The high-performance computing features of the embedded GPU demonstrated in our study suggest that embedded GPUs provide a promising platform for real-time neural signal processing.
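The PCA + K-means pipeline that the study parallelizes on the GPU can be sketched on the CPU in a few lines with scikit-learn, applied to already-detected and aligned spike waveforms; the input file, array shapes, number of components and cluster count below are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# waveforms: (n_spikes, n_samples_per_spike) detected and aligned spike snippets
waveforms = np.load("spike_waveforms.npy")        # hypothetical input

features = PCA(n_components=3).fit_transform(waveforms)           # dimensionality reduction
labels = KMeans(n_clusters=3, n_init=10).fit_predict(features)    # assign each spike to a unit

for unit in np.unique(labels):
    print(f"unit {unit}: {np.sum(labels == unit)} spikes")
```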
NASA Astrophysics Data System (ADS)
Forquin, P.; Lukić, B.
2017-11-01
The spalling technique based on the use of a single Hopkinson bar put in contact with the tested sample has been widely adopted as a reliable method for obtaining the tensile response of concrete and rock-like materials at strain rates up to 200 s⁻¹. However, the traditional processing method, based on the Novikov acoustic approach and the rear-face velocity measurement, remains quite questionable due to the strong approximations of this data processing method. Recently, a new technique was proposed for deriving cross-sectional stress fields of a spalling sample filmed with an ultra-high-speed camera, based on full-field measurements and the virtual fields method (VFM). In the present work, this topic is pursued by performing several spalling tests on ordinary concrete at a high acquisition speed of 1 Mfps to accurately measure the tensile strength, Young's modulus, strain rate at failure and stress-strain response of concrete at high strain rates. The stress-strain curves contain more measurement points, allowing a more reliable identification. The observed tensile stiffness is up to 50% lower than the initial compressive stiffness, and the obtained peak stress was about 20% lower than that obtained by applying the Novikov method. To support this claim, numerical simulations were performed to show that the change of stiffness between compression and tension highly affects the rear-face velocity profile. This further suggests that processing based only on the velocity "pullback" is quite sensitive and can overestimate the tensile strength of concrete and rock-like materials.
White, David J.; Congedo, Marco; Ciorciari, Joseph
2014-01-01
A developing literature explores the use of neurofeedback in the treatment of a range of clinical conditions, particularly ADHD and epilepsy, whilst neurofeedback also provides an experimental tool for studying the functional significance of endogenous brain activity. A critical component of any neurofeedback method is the underlying physiological signal which forms the basis for the feedback. While the past decade has seen the emergence of fMRI-based protocols training spatially confined BOLD activity, traditional neurofeedback has utilized a small number of electrode sites on the scalp. As scalp EEG at a given electrode site reflects a linear mixture of activity from multiple brain sources and artifacts, efforts to successfully acquire some level of control over the signal may be confounded by these extraneous sources. Further, in the event of successful training, these traditional neurofeedback methods are likely influencing multiple brain regions and processes. The present work describes the use of source-based signal processing methods in EEG neurofeedback. The feasibility and potential utility of such methods were explored in an experiment training increased theta oscillatory activity in a source derived from Blind Source Separation (BSS) of EEG data obtained during completion of a complex cognitive task (spatial navigation). Learned increases in theta activity were observed in two of the four participants to complete 20 sessions of neurofeedback targeting this individually defined functional brain source. Source-based EEG neurofeedback methods using BSS may offer important advantages over traditional neurofeedback, by targeting the desired physiological signal in a more functionally and spatially specific manner. Having provided preliminary evidence of the feasibility of these methods, future work may study a range of clinically and experimentally relevant brain processes where individual brain sources may be targeted by source-based EEG neurofeedback. PMID:25374520
Sarkar, Sumona; Lund, Steven P; Vyzasatya, Ravi; Vanguri, Padmavathy; Elliott, John T; Plant, Anne L; Lin-Gibson, Sheng
2017-12-01
Cell counting measurements are critical in the research, development and manufacturing of cell-based products, yet determining cell quantity with accuracy and precision remains a challenge. Validating and evaluating a cell counting measurement process can be difficult because of the lack of appropriate reference material. Here we describe an experimental design and statistical analysis approach to evaluate the quality of a cell counting measurement process in the absence of appropriate reference materials or reference methods. The experimental design is based on a dilution series study with replicate samples and observations as well as measurement process controls. The statistical analysis evaluates the precision and proportionality of the cell counting measurement process and can be used to compare the quality of two or more counting methods. As an illustration of this approach, cell counting measurement processes (automated and manual methods) were compared for a human mesenchymal stromal cell (hMSC) preparation. For the hMSC preparation investigated, results indicated that the automated method performed better than the manual counting methods in terms of precision and proportionality. By conducting well controlled dilution series experimental designs coupled with appropriate statistical analysis, quantitative indicators of repeatability and proportionality can be calculated to provide an assessment of cell counting measurement quality. This approach does not rely on the use of a reference material or comparison to "gold standard" methods known to have limited assurance of accuracy and precision. The approach presented here may help the selection, optimization, and/or validation of a cell counting measurement process. Published by Elsevier Inc.
Reconfigurable environmentally adaptive computing
NASA Technical Reports Server (NTRS)
Coxe, Robin L. (Inventor); Galica, Gary E. (Inventor)
2008-01-01
Described are methods and apparatus, including computer program products, for reconfigurable environmentally adaptive computing technology. An environmental signal representative of an external environmental condition is received. A processing configuration is automatically selected, based on the environmental signal, from a plurality of processing configurations. A reconfigurable processing element is reconfigured to operate according to the selected processing configuration. In some examples, the environmental condition is detected and the environmental signal is generated based on the detected condition.
Developing an expert panel process to refine health outcome definitions in observational data.
Fox, Brent I; Hollingsworth, Joshua C; Gray, Michael D; Hollingsworth, Michael L; Gao, Juan; Hansen, Richard A
2013-10-01
Drug safety surveillance using observational data requires valid adverse event, or health outcome of interest (HOI) measurement. The objectives of this study were to develop a method to review HOI definitions in claims databases using (1) web-based digital tools to present de-identified patient data, (2) a systematic expert panel review process, and (3) a data collection process enabling analysis of concepts-of-interest that influence panelists' determination of HOI. De-identified patient data were presented via an interactive web-based dashboard to enable case review and determine if specific HOIs were present or absent. Criteria for determining HOIs and their severity were provided to each panelist. Using a modified Delphi method, six panelist pairs independently reviewed approximately 200 cases across each of three HOIs (acute liver injury, acute kidney injury, and acute myocardial infarction) such that panelist pairs independently reviewed the same cases. Panelists completed an assessment within the dashboard for each case that included their assessment of the presence or absence of the HOI, HOI severity (if present), and data contributing to their decision. Discrepancies within panelist pairs were resolved during a consensus process. Dashboard development was iterative, focusing on data presentation and recording panelists' assessments. Panelists reported quickly learning how to use the dashboard. The assessment module was used consistently. The dashboard was reliable, enabling an efficient review process for panelists. Modifications were made to the dashboard and review process when necessary to facilitate case review. Our methods should be applied to other health outcomes of interest to further refine the dashboard and case review process. The expert review process was effective and was supported by the web-based dashboard. Our methods for case review and classification can be applied to future methods for case identification in observational data sources. Copyright © 2013 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, J; Washington University in St Louis, St Louis, MO; Li, H. Harlod
Purpose: In RT patient setup 2D images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters, and are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance the 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE algorithm block size and clip limit parameters. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by the basic window-level adjustment process, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and is able to significantly outperform the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.
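Steps 2 and 3 of the procedure above can be sketched with OpenCV as shown below; step 1 (background and noise removal) is omitted, and the Gaussian sigma, CLAHE clip limit and tile size are exactly the parameters the authors select by entropy maximization, so the fixed values here are placeholders only.

```python
import cv2
import numpy as np

img = cv2.imread("setup_2d.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# High-pass filtering by subtracting a Gaussian-smoothed copy of the image.
background = cv2.GaussianBlur(img, (0, 0), sigmaX=25.0)
highpass = img - background
highpass = cv2.normalize(highpass, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Contrast-limited adaptive histogram equalization (CLAHE).
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(highpass)

cv2.imwrite("setup_2d_enhanced.png", enhanced)
```

In the paper these parameters are not fixed but chosen per image by an interior-point constrained optimization that maximizes the entropy of the result.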
Valentijn, Pim P; Ruwaard, Dirk; Vrijhoef, Hubertus J M; de Bont, Antoinette; Arends, Rosa Y; Bruijnzeels, Marc A
2015-10-09
Collaborative partnerships are considered an essential strategy for integrating local disjointed health and social services. Currently, little evidence is available on how integrated care arrangements between professionals and organisations are achieved through the evolution of collaboration processes over time. The first aim was to develop a typology of integrated care projects (ICPs) based on the final degree of integration as perceived by multiple stakeholders. The second aim was to study how types of integration differ in changes of collaboration processes over time and final perceived effectiveness. A longitudinal mixed-methods study design based on two data sources (surveys and interviews) was used to identify the perceived degree of integration and patterns in collaboration among 42 ICPs in primary care in The Netherlands. We used cluster analysis to identify distinct subgroups of ICPs based on the final perceived degree of integration from a professional, organisational and system perspective. With the use of ANOVAs, the subgroups were contrasted based on: 1) changes in collaboration processes over time (shared ambition, interests and mutual gains, relationship dynamics, organisational dynamics and process management) and 2) final perceived effectiveness (i.e. rated success) at the professional, organisational and system levels. The ICPs were classified into three subgroups with: 'United Integration Perspectives (UIP)', 'Disunited Integration Perspectives (DIP)' and 'Professional-oriented Integration Perspectives (PIP)'. ICPs within the UIP subgroup made the strongest increase in trust-based (mutual gains and relationship dynamics) as well as control-based (organisational dynamics and process management) collaboration processes and had the highest overall effectiveness rates. On the other hand, ICPs with the DIP subgroup decreased on collaboration processes and had the lowest overall effectiveness rates. ICPs within the PIP subgroup increased in control-based collaboration processes (organisational dynamics and process management) and had the highest effectiveness rates at the professional level. The differences across the three subgroups in terms of the development of collaboration processes and the final perceived effectiveness provide evidence that united stakeholders' perspectives are achieved through a constructive collaboration process over time. Disunited perspectives at the professional, organisation and system levels can be aligned by both trust-based and control-based collaboration processes.
NASA Astrophysics Data System (ADS)
Verrelst, Jochem; Malenovský, Zbyněk; Van der Tol, Christiaan; Camps-Valls, Gustau; Gastellu-Etchegorry, Jean-Philippe; Lewis, Philip; North, Peter; Moreno, Jose
2018-06-01
An unprecedented spectroscopic data stream will soon become available with forthcoming Earth-observing satellite missions equipped with imaging spectroradiometers. This data stream will open up a vast array of opportunities to quantify a diversity of biochemical and structural vegetation properties. The processing requirements for such large data streams require reliable retrieval techniques enabling the spatiotemporally explicit quantification of biophysical variables. With the aim of preparing for this new era of Earth observation, this review summarizes the state-of-the-art retrieval methods that have been applied in experimental imaging spectroscopy studies inferring all kinds of vegetation biophysical variables. Identified retrieval methods are categorized into: (1) parametric regression, including vegetation indices, shape indices and spectral transformations; (2) nonparametric regression, including linear and nonlinear machine learning regression algorithms; (3) physically based, including inversion of radiative transfer models (RTMs) using numerical optimization and look-up table approaches; and (4) hybrid regression methods, which combine RTM simulations with machine learning regression methods. For each of these categories, an overview of widely applied methods with application to mapping vegetation properties is given. In view of processing imaging spectroscopy data, a critical aspect involves the challenge of dealing with spectral multicollinearity. The ability to provide robust estimates, retrieval uncertainties and acceptable retrieval processing speed are other important aspects in view of operational processing. Recommendations towards new-generation spectroscopy-based processing chains for operational production of biophysical variables are given.
Filtering and left ventricle segmentation of the fetal heart in ultrasound images
NASA Astrophysics Data System (ADS)
Vargas-Quintero, Lorena; Escalante-Ramírez, Boris
2013-11-01
In this paper, we propose to use filtering methods and a segmentation algorithm for the analysis of the fetal heart in ultrasound images. Since speckle noise complicates the analysis of ultrasound images, the filtering process becomes a useful task in these types of applications. The filtering techniques considered in this work assume that the speckle noise is a random variable with a Rayleigh distribution. We use two multiresolution methods: one based on wavelet decomposition and the other based on the Hermite transform. The filtering process is used as a way to strengthen the performance of the segmentation tasks. For the wavelet-based approach, a Bayesian estimator at the subband level is employed for pixel classification. The Hermite method computes a mask to find those pixels that are corrupted by speckle. Finally, we chose a method based on a deformable model, or "snake", to evaluate the influence of the filtering techniques on the segmentation of the left ventricle in fetal echocardiographic images.
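For readers less familiar with wavelet-domain despeckling, the sketch below illustrates the general idea: log-transform the multiplicative speckle so it becomes approximately additive, soft-threshold the detail subbands with PyWavelets, and invert both transforms. It is only an assumption-laden stand-in, not the paper's Bayesian subband estimator; the wavelet name, level and threshold rule are placeholders.

```python
import numpy as np
import pywt

def wavelet_despeckle(img, wavelet="db4", level=3):
    """Rough speckle reduction: log-transform the multiplicative noise,
    soft-threshold the detail subbands, then invert both transforms."""
    logged = np.log1p(img.astype(float))
    coeffs = pywt.wavedec2(logged, wavelet, level=level)
    # Noise scale estimated from the finest diagonal subband (universal threshold).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(logged.size))
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    return np.expm1(pywt.waverec2(new_coeffs, wavelet))
```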
Chatter detection in milling process based on VMD and energy entropy
NASA Astrophysics Data System (ADS)
Liu, Changfu; Zhu, Lida; Ni, Chenbing
2018-05-01
This paper presents a novel approach to detect milling chatter based on Variational Mode Decomposition (VMD) and energy entropy. VMD has already been employed for feature extraction from non-stationary signals. Parameters such as the number of modes (K) and the quadratic penalty (α) need to be selected empirically when the raw signal is decomposed by VMD. To solve the problem of how to select K and α, an automatic, kurtosis-based selection method for the VMD parameters is proposed in this paper. When chatter occurs in the milling process, energy concentrates in the chatter frequency bands. To detect the chatter frequency bands automatically, a chatter detection method based on energy entropy is presented. A vibration signal containing chatter frequencies is simulated, and three groups of experiments representing three cutting conditions are conducted. To verify the effectiveness of the method presented in this paper, chatter feature extraction has been successfully applied to the simulated and experimental signals. The simulation and experimental results show that the proposed method can effectively detect chatter.
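As a rough illustration of the energy-entropy criterion (not the authors' exact detector), the sketch below computes the Shannon entropy of the energy distribution across a set of decomposed modes such as VMD modes; the decomposition itself and any chatter threshold are assumed to come from elsewhere.

```python
import numpy as np

def energy_entropy(modes):
    """Shannon entropy of the energy distribution across decomposed modes
    (e.g. VMD modes); lower values mean energy concentrated in few bands."""
    energies = np.array([np.sum(np.asarray(m, dtype=float) ** 2) for m in modes])
    p = energies / energies.sum()
    return float(-np.sum(p * np.log(p + 1e-12)))

# Hypothetical usage, assuming a VMD decomposition helper is available:
# modes = vmd_decompose(vibration_signal, K=4, alpha=2000)  # placeholder helper
# print(energy_entropy(modes))
```

When energy concentrates in a few chatter-related bands, the distribution becomes less uniform and the entropy drops, so a falling entropy over consecutive signal frames can serve as a chatter indicator.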
Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance
2017-01-01
This paper puts forward a novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image with brightness and details well preserved, compared with some other methods based on histogram equalization (HE). Firstly, the histogram of the input image is divided into four segments based on the mean and variance of the luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via the concatenation of the processed subhistograms. Lastly, the normalization method is applied to the intensity levels, and the integration of the processed image with the input image is performed. 100 benchmark images from a public image database named CVG-UGR-Database are used for comparison with other state-of-the-art methods. The experimental results show that the algorithm can not only enhance image information effectively but also preserve the brightness and details of the original image well. PMID:29403529
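A simplified sketch of the segment-and-equalize idea is given below: the intensity range is split at mean - std, mean and mean + std, and each segment is equalized within its own range. This is an assumption-based approximation for illustration only; the published MVSIHE method additionally modifies the histogram bins and blends the result with the input image.

```python
import numpy as np

def segmented_equalize(img):
    """Split the intensity range at mean - std, mean and mean + std, then
    apply rank-based (empirical-CDF) equalization within each segment."""
    img = np.asarray(img, dtype=float)
    m, s = img.mean(), img.std()
    edges = [img.min(), max(img.min(), m - s), m, min(img.max(), m + s), img.max()]
    out = np.zeros_like(img)
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        last = i == len(edges) - 2
        mask = (img >= lo) & (img <= hi) if last else (img >= lo) & (img < hi)
        vals = img[mask]
        if vals.size == 0 or hi <= lo:
            continue
        ranks = np.argsort(np.argsort(vals))          # 0 .. n-1
        out[mask] = lo + (hi - lo) * ranks / max(vals.size - 1, 1)
    return out
```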
Aarons, Gregory A; Fettes, Danielle L; Sommerfeld, David H; Palinkas, Lawrence A
2012-02-01
Many public sector service systems and provider organizations are in some phase of learning about or implementing evidence-based interventions. Child welfare service systems represent a context where implementation spans system, management, and organizational concerns. Research utilizing mixed methods that combine qualitative and quantitative design, data collection, and analytic approaches is particularly well suited to understanding both the process and outcomes of dissemination and implementation efforts in child welfare systems. This article describes the process of using mixed methods in implementation research and provides an applied example of an examination of factors impacting staff retention during an evidence-based intervention implementation in a statewide child welfare system. The authors integrate qualitative data with previously published quantitative analyses of job autonomy and staff turnover during this statewide implementation project in order to illustrate the utility of mixed-method approaches in providing a more comprehensive understanding of opportunities and challenges in implementation research.
Aarons, Gregory A.; Fettes, Danielle L.; Sommerfeld, David H.; Palinkas, Lawrence
2013-01-01
Many public sector service systems and provider organizations are in some phase of learning about or implementing evidence-based interventions. Child welfare service systems represent a context where implementation spans system, management, and organizational concerns. Research utilizing mixed methods that combine qualitative and quantitative design, data collection, and analytic approaches is particularly well-suited to understanding both the process and outcomes of dissemination and implementation efforts in child welfare systems. This paper describes the process of using mixed methods in implementation research and provides an applied example of an examination of factors impacting staff retention during an evidence-based intervention implementation in a statewide child welfare system. We integrate qualitative data with previously published quantitative analyses of job autonomy and staff turnover during this statewide implementation project in order to illustrate the utility of mixed-method approaches in providing a more comprehensive understanding of opportunities and challenges in implementation research. PMID:22146861
Weernink, Marieke G M; Groothuis-Oudshoorn, Catharina G M; IJzerman, Maarten J; van Til, Janine A
2016-01-01
The objective of this study was to compare treatment profiles including both health outcomes and process characteristics in Parkinson disease using best-worst scaling (BWS), time trade-off (TTO), and visual analogue scales (VAS). From the model comprising seven attributes with three levels, six unique profiles were selected representing process-related factors and health outcomes in Parkinson disease. A Web-based survey (N = 613) was conducted in a general population to estimate process-related utilities using profile-based BWS (case 2), multiprofile-based BWS (case 3), TTO, and VAS. The rank order of the six profiles was compared, convergent validity among methods was assessed, and individual analysis focused on the differentiation between pairs of profiles with the methods used. The aggregated health-state utilities for the six treatment profiles were highly comparable for all methods and no rank reversals were identified. On the individual level, the convergent validity between all methods was strong; however, respondents differentiated less in the utility of closely related treatment profiles with a VAS or TTO than with BWS. For TTO and VAS, this resulted in nonsignificant differences in mean utilities for closely related treatment profiles. This study suggests that all methods are equally able to measure process-related utility when the aim is to estimate the overall value of treatments. On an individual level, such as in shared decision making, BWS allows for better prioritization of treatment alternatives, especially if they are closely related. The decision-making problem and the need for explicit trade-offs between attributes should determine the choice of method. Copyright © 2016. Published by Elsevier Inc.
ERIC Educational Resources Information Center
Said, Asnah; Syarif, Edy
2016-01-01
This research aimed to evaluate the design of an online tutorial program applying problem-based learning in the Research Methods course currently implemented in the Open Distance Learning (ODL) system. The students must take a Research Methods course to prepare themselves for academic writing projects. Problem-based learning basically emphasizes the process of…
ERIC Educational Resources Information Center
Schettino, Carmel
2016-01-01
One recommendation for encouraging young women and other underrepresented students in their mathematical studies is to find instructional methods, such as problem-based learning (PBL), that allow them to feel included in the learning process. Using a more relationally centered pedagogy along with more inclusive instructional methods may be a way…
A BHR Composite Network-Based Visualization Method for Deformation Risk Level of Underground Space
Zheng, Wei; Zhang, Xiaoya; Lu, Qi
2015-01-01
This study proposes a visualization processing method for the deformation risk level of underground space. The proposed method is based on a BP-Hopfield-RGB (BHR) composite network. Complex environmental factors are integrated in the BP neural network. Dynamic monitoring data are then automatically classified in the Hopfield network. The deformation risk level is combined with the RGB color space model and is displayed visually in real time, after which experiments are conducted with the use of an ultrasonic omnidirectional sensor device for structural deformation monitoring. The proposed method is also compared with some typical methods using a benchmark dataset. Results show that the BHR composite network visualizes the deformation monitoring process in real time and can dynamically indicate dangerous zones. PMID:26011618
Applying an analytical method to study neutron behavior for dosimetry
NASA Astrophysics Data System (ADS)
Shirazi, S. A. Mousavi
2016-12-01
In this investigation, a new dosimetry process is studied by applying an analytical method. This novel process is associated with human liver tissue. Human liver tissue is composed of water, glycogen and other compounds. In this study, the organic compound materials of the liver are decomposed into their constituent elements based upon the mass percentage and density of every element. The absorbed doses are computed by the analytical method for all constituent elements of the liver tissue. The analytical method is formulated through mathematical equations based on neutron behavior and neutron collision rules. The results show that the absorbed doses converge for neutron energies below 15 MeV. This method can be applied to study the interaction of neutrons in other tissues and to estimate the absorbed dose for a wide range of neutron energies.
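The paper's own equations are not reproduced in the abstract; as a generic illustration of the collision-based reasoning, a standard estimate relates absorbed dose to neutron fluence and the macroscopic cross-sections of the constituent elements (the symbols and the form below are assumptions, not the authors' formulation):

```latex
R = \Phi\,\Sigma, \qquad \Sigma_i = N_i\,\sigma_i, \qquad
D \;\approx\; \frac{1}{\rho}\sum_{i \in \text{elements}} \Phi\,\Sigma_i\,\bar{E}_i
```

Here \Phi is the neutron fluence, N_i and \sigma_i are the atom density and microscopic cross-section of element i, \bar{E}_i is the mean energy deposited per collision, and \rho is the tissue density; the elemental sum mirrors the decomposition of the liver tissue into its constituents by mass percentage.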
A comparison of moving object detection methods for real-time moving object detection
NASA Astrophysics Data System (ADS)
Roshan, Aditya; Zhang, Yun
2014-06-01
Moving object detection has a wide variety of applications, from traffic monitoring, site monitoring, automatic theft identification and face detection to military surveillance. Many methods have been developed across the globe for moving object detection, but it is very difficult to find one which can work globally in all situations and with different types of videos. The purpose of this paper is to evaluate existing moving object detection methods which can be implemented in software on a desktop or laptop for real-time object detection. There are several moving object detection methods noted in the literature, but few of them are suitable for real-time moving object detection. Most of the methods which provide for real-time movement are further limited by the number of objects and the scene complexity. This paper evaluates the four most commonly used moving object detection methods: the background subtraction technique, the Gaussian mixture model, and wavelet-based and optical-flow-based methods. The work is based on an evaluation of these four moving object detection methods using two different sets of cameras and two different scenes. The moving object detection methods have been implemented using MATLAB, and the results are compared based on completeness of detected objects, noise, light change sensitivity, processing time, etc. After comparison, it is observed that the optical-flow-based method took the least processing time and successfully detected the boundaries of moving objects, which implies that it can be implemented for real-time moving object detection.
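For orientation, a minimal example of one of the four compared approaches (Gaussian-mixture background subtraction) is sketched below using OpenCV rather than the MATLAB implementation the authors used; the video file name and the blob-area threshold are placeholders.

```python
import cv2

cap = cv2.VideoCapture("traffic.avi")  # hypothetical input video
mog2 = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = mog2.apply(frame)                       # foreground (motion) mask
    fg = cv2.medianBlur(fg, 5)                   # suppress isolated noise pixels
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:             # ignore tiny blobs (placeholder)
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```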
Stuit, Marco; Wortmann, Hans; Szirbik, Nick; Roodenburg, Jan
2011-12-01
In the healthcare domain, human collaboration processes (HCPs), which consist of interactions between healthcare workers from different (para)medical disciplines and departments, are of growing importance as healthcare delivery becomes increasingly integrated. Existing workflow-based process modelling tools for healthcare process management, which are the most commonly applied, are not suited for healthcare HCPs mainly due to their focus on the definition of task sequences instead of the graphical description of human interactions. This paper uses a case study of a healthcare HCP at a Dutch academic hospital to evaluate a novel interaction-centric process modelling method. The HCP under study is the care pathway performed by the head and neck oncology team. The evaluation results show that the method brings innovative, effective, and useful features. First, it collects and formalizes the tacit domain knowledge of the interviewed healthcare workers in individual interaction diagrams. Second, the method automatically integrates these local diagrams into a single global interaction diagram that reflects the consolidated domain knowledge. Third, the case study illustrates how the method utilizes a graphical modelling language for effective tree-based description of interactions, their composition and routing relations, and their roles. A process analysis of the global interaction diagram is shown to identify HCP improvement opportunities. The proposed interaction-centric method has wider applicability since interactions are the core of most multidisciplinary patient-care processes. A discussion argues that, although (multidisciplinary) collaboration is in many cases not optimal in the healthcare domain, it is increasingly considered a necessity to improve integration, continuity, and quality of care. The proposed method is helpful to describe, analyze, and improve the functioning of healthcare collaboration. Copyright © 2011 Elsevier Inc. All rights reserved.
Automated, on-board terrain analysis for precision landings
NASA Technical Reports Server (NTRS)
Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Hines, Glenn D.
2006-01-01
Advances in space robotics technology hinge to a large extent upon the development and deployment of sophisticated new vision-based methods for automated in-space mission operations and scientific survey. To this end, we have developed a new concept for automated terrain analysis that is based upon a generic image enhancement platform: multi-scale retinex (MSR) and visual servo (VS) processing. This pre-conditioning with the MSR and the VS produces a "canonical" visual representation that is largely independent of lighting variations and exposure errors. Enhanced imagery is then processed with a biologically inspired two-channel edge detection process, followed by a smoothness-based criterion for image segmentation. Landing sites can be automatically determined by examining the results of the smoothness-based segmentation, which shows those areas in the image that surpass a minimum degree of smoothness. Though the MSR has proven to be a very strong enhancement engine, the other elements of the approach (the VS, terrain map generation, and smoothness-based segmentation) are in early stages of development. Experimental results on data from the Mars Global Surveyor show that the imagery can be processed to automatically obtain smooth landing sites. In this paper, we describe the method used to obtain these landing sites, and also examine the smoothness criteria in terms of the imager and scene characteristics.
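As a hedged, simplified stand-in for the smoothness criterion (the MSR/VS pre-conditioning and the two-channel edge detector are not reproduced here), the sketch below flags pixels whose local standard deviation falls below a threshold; the window size and threshold value are placeholder assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_regions(img, win=15, max_std=4.0):
    """Mark pixels whose local standard deviation is below a threshold,
    as a crude smoothness criterion for candidate landing sites."""
    img = np.asarray(img, dtype=float)
    mean = uniform_filter(img, size=win)
    mean_sq = uniform_filter(img ** 2, size=win)
    local_std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
    return local_std < max_std
```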
Fragment-based design of kinase inhibitors: a practical guide.
Erickson, Jon A
2015-01-01
Fragment-based drug design has become an important strategy for drug design and development over the last decade. It has been used with particular success in the development of kinase inhibitors, which are one of the most widely explored classes of drug targets today. The application of fragment-based methods to discovering and optimizing kinase inhibitors can be a complicated and daunting task; however, a general process has emerged that has been highly fruitful. Here a practical outline of the fragment process used in kinase inhibitor design and development is laid out with specific examples. A guide to the overall process from initial discovery through fragment screening, including the difficulties in detection, to the computational methods available for use in optimization of the discovered fragments is reported.
Xiao, Liangpin; Liu, Xianming; Zhong, Runtao; Zhang, Kaiqing; Zhang, Xiaodi; Zhou, Xiaomian; Lin, Bingcheng; Du, Yuguang
2013-11-01
Three-dimensional (3D) paper-based microfluidic devices, which feature high performance and rapid determination, promise to carry out multistep sample pretreatment and ordered chemical reactions, and have been used for medical diagnosis, cell culture, environmental determination, and so on, with broad market prospects. However, there are some drawbacks in the existing fabrication methods for 3D paper-based microfluidics, such as cumbersome and time-consuming device assembly, expensive and difficult manufacturing processes, and contamination caused by organic reagents used during fabrication. Here, we present a simple printing-bookbinding method for the mass fabrication of 3D paper-based microfluidics. This approach involves two main steps: (i) wax printing and (ii) bookbinding. We tested the delivery capability, diffusion rate and homogeneity, and demonstrated the applicability of the device to chemical analysis by nitrite colorimetric assays. The described method is rapid (<30 s), cheap, easy to manipulate, and compatible with the flat-stitching method that is common in a print house, making it an ideal scheme for large-scale production of 3D paper-based microfluidics. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Yao, Lei; Wang, Zhenpo; Ma, Jun
2015-10-01
This paper proposes an entropy-based method for detecting connection faults of lithium-ion batteries in electric vehicles. During electric vehicle operation, factors such as road conditions, driving habits and vehicle performance subject the batteries to vibration, which can easily cause loose or virtual connections between batteries. Voltage fluctuation data were obtained by simulating battery charging and discharging experiments in a vibration environment. Meanwhile, an optimal filtering method based on discrete cosine filtering is adopted to analyze the characteristics of the system noise, using the voltage sets recorded while the batteries operate at different vibration frequencies. The filtered experimental data are analyzed based on local Shannon entropy, ensemble Shannon entropy and sample entropy, and the most suitable entropy-based approach for detecting battery connection faults in electric vehicles is identified. The experimental data show that ensemble Shannon entropy can identify the time and location of battery connection failure accurately and in real time. Besides the electric-vehicle industry, this method can also be used in other areas with complex vibration environments.
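As an illustration of the local (windowed) Shannon-entropy idea, not the authors' exact pipeline, the sketch below computes entropy over consecutive windows of a filtered voltage trace; the window length, bin count and any alarm threshold are placeholder assumptions.

```python
import numpy as np

def local_shannon_entropy(voltage, win=256, bins=32):
    """Shannon entropy of the voltage distribution in consecutive windows;
    a sudden jump in the sequence can flag an abnormal connection event."""
    entropies = []
    for start in range(0, len(voltage) - win + 1, win):
        seg = voltage[start:start + win]
        hist, _ = np.histogram(seg, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        entropies.append(float(-np.sum(p * np.log2(p))))
    return np.array(entropies)
```

A connection fault that produces abnormal voltage fluctuations changes the shape of the windowed voltage distribution, which shows up as a jump in the entropy sequence.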
Implementing asynchronous collective operations in a multi-node processing system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Dong; Eisley, Noel A.; Heidelberger, Philip
A method, system, and computer program product are disclosed for implementing an asynchronous collective operation in a multi-node data processing system. In one embodiment, the method comprises sending data to a plurality of nodes in the data processing system, broadcasting a remote get to the plurality of nodes, and using this remote get to implement asynchronous collective operations on the data by the plurality of nodes. In one embodiment, each of the nodes performs only one task in the asynchronous operations, and each node sets up a base address table with an entry for a base address of a memory buffer associated with said each node. In another embodiment, each of the nodes performs a plurality of tasks in said collective operations, and each task of each node sets up a base address table with an entry for a base address of a memory buffer associated with the task.
NASA Astrophysics Data System (ADS)
Retheesh, R.; Ansari, Md. Zaheer; Radhakrishnan, P.; Mujeeb, A.
2018-03-01
This study demonstrates the feasibility of a view-based method, the motion history image (MHI), to map biospeckle activity around the scar region in a green orange fruit. The comparison of the MHI with routine intensity-based methods validated the effectiveness of the proposed method. The results show that the MHI can be implemented as an alternative online image processing tool in biospeckle analysis.
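For reference, the classic MHI update rule is sketched below in a minimal NumPy form (with assumed parameter values, not the study's biospeckle-specific settings): pixels where frame-to-frame change exceeds a threshold are stamped with the current timestamp, and stamps older than a fixed duration are cleared.

```python
import numpy as np

def update_mhi(mhi, prev_frame, frame, timestamp, duration=1.0, diff_thresh=15):
    """Classic motion history image update: stamp moving pixels with the
    current timestamp and forget stamps older than `duration`."""
    motion = np.abs(frame.astype(int) - prev_frame.astype(int)) > diff_thresh
    mhi = np.where(motion, float(timestamp), mhi)
    mhi[mhi < timestamp - duration] = 0.0
    return mhi
```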
Techno-economical evaluation of protein extraction for microalgae biorefinery
NASA Astrophysics Data System (ADS)
Sari, Y. W.; Sanders, J. P. M.; Bruins, M. E.
2016-01-01
Due to the scarcity of fossil feedstocks, there is an increasing demand for biobased fuels. Microalgae are considered promising biobased feedstocks. However, microalgae-based fuels are not yet produced at large scale. Applying biorefinery, not only for oil but also for other components such as carbohydrates and protein, may lead to sustainable and economical microalgae-based fuels. This paper discusses two relatively mild conditions for microalgal protein extraction, based on alkali and enzymes. Green microalgae (Chlorella fusca) with and without prior lipid removal were used as feedstocks. Under mild conditions, more protein could be extracted using proteases, with the highest yields for microalgae meal (without lipids). The data on protein extraction yields were used to calculate the costs of producing 1 ton of microalgal protein. The processing cost for the alkaline method was €2448 per ton of protein. The enzymatic method performed better from an economic point of view, with processing costs of €1367 per ton of protein. However, this is still far from industrially feasible. For both extraction methods, the biomass cost per ton of produced product was high. A higher protein extraction yield can partially solve this problem, lowering processing costs to €620 and €1180 per ton of protein product using alkali and enzymes, respectively. Although the alkaline method has a lower processing cost, optimization appears more achievable using enzymes. If the enzymatic method can be optimized by lowering the amount of alkali added, the processing cost drops to €633 per ton of protein product. Higher revenue can be generated when the residue after protein extraction can be sold as fuel, or better, as a highly digestible feed for cattle.
The role of strategies in motor learning
Taylor, Jordan A.; Ivry, Richard B.
2015-01-01
There has been renewed interest in the role of strategies in sensorimotor learning. The combination of new behavioral methods and computational methods has begun to unravel the interaction between processes related to strategic control and processes related to motor adaptation. These processes may operate on very different error signals. Strategy learning is sensitive to goal-based performance error. In contrast, adaptation is sensitive to prediction errors between the desired and actual consequences of a planned movement. The former guides what the desired movement should be, whereas the latter guides how to implement the desired movement. Whereas traditional approaches have favored serial models in which an initial strategy-based phase gives way to more automatized forms of control, it now seems that strategic and adaptive processes operate with considerable independence throughout learning, although the relative weight given the two processes will shift with changes in performance. As such, skill acquisition involves the synergistic engagement of strategic and adaptive processes. PMID:22329960
A Science and Risk-Based Pragmatic Methodology for Blend and Content Uniformity Assessment.
Sayeed-Desta, Naheed; Pazhayattil, Ajay Babu; Collins, Jordan; Doshi, Chetan
2018-04-01
This paper describes a pragmatic approach that can be applied in assessing powder blend and unit dosage uniformity of solid dose products at Process Design, Process Performance Qualification, and Continued/Ongoing Process Verification stages of the Process Validation lifecycle. The statistically based sampling, testing, and assessment plan was developed due to the withdrawal of the FDA draft guidance for industry "Powder Blends and Finished Dosage Units-Stratified In-Process Dosage Unit Sampling and Assessment." This paper compares the proposed Grouped Area Variance Estimate (GAVE) method with an alternate approach outlining the practicality and statistical rationalization using traditional sampling and analytical methods. The approach is designed to fit solid dose processes assuring high statistical confidence in both powder blend uniformity and dosage unit uniformity during all three stages of the lifecycle complying with ASTM standards as recommended by the US FDA.
Study on algorithm of process neural network for soft sensing in sewage disposal system
NASA Astrophysics Data System (ADS)
Liu, Zaiwen; Xue, Hong; Wang, Xiaoyi; Yang, Bin; Lu, Siying
2006-11-01
A new soft-sensing method based on a process neural network (PNN) for a sewage disposal system is presented in this paper. A PNN is an extension of the traditional neural network in which the inputs and outputs are time-varying. An aggregation operator is introduced into the process neuron, giving the network the ability to handle information in both space and time simultaneously, so the data-processing machinery of the biological neuron is imitated better than by the traditional neuron. A three-layer process neural network for soft sensing, in which the hidden layer consists of process neurons and the input and output layers consist of common neurons, is discussed. The intelligent PNN-based soft sensor can be used to measure the effluent BOD (biochemical oxygen demand) of the sewage disposal system, and good training results were obtained with the method.
CropEx Web-Based Agricultural Monitoring and Decision Support
NASA Technical Reports Server (NTRS)
Harvey. Craig; Lawhead, Joel
2011-01-01
CropEx is a Web-based agricultural Decision Support System (DSS) that monitors changes in crop health over time. It is designed to be used by a wide range of both public and private organizations, including individual producers and regional government offices with a vested interest in tracking vegetation health. The database and data management system automatically retrieve and ingest data for the area of interest. Another component stores the results of processing and supports the DSS. The processing engine allows server-side analysis of imagery with support for image sub-setting and a set of core raster operations for image classification, creation of vegetation indices, and change detection. The system includes the Web-based (CropEx) interface, data ingestion system, server-side processing engine, and a database processing engine. It contains a Web-based interface that has multi-tiered security profiles for multiple users. The interface provides the ability to identify areas of interest to specific users, user profiles, and methods of processing and data types for selected or created areas of interest. A compilation of programs is used to ingest available data into the system, classify that data, profile that data for quality, and make data available to the processing engine immediately upon the data's availability to the system (near real time). The processing engine consists of methods and algorithms used to process the data in a real-time fashion without copying, storing, or moving the raw data. The engine makes results available to the database processing engine for storage and further manipulation. The database processing engine ingests data from the image processing engine, distills those results into numerical indices, and stores each index for an area of interest. This process happens each time new data are ingested and processed for the area of interest, and upon subsequent database entries the database processing engine qualifies each value for each area of interest and conducts a logical processing of results, indicating when and where thresholds are exceeded. Reports are provided at regular, operator-determined intervals that include variances from thresholds and links to view raw data for verification, if necessary. The technology and method of development allow the code base to be easily modified for varied use in real-time and near-real-time processing environments. In addition, the final product will be demonstrated as a means for rapid draft assessment of imagery.
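As one concrete example of the "creation of vegetation indices" step, the sketch below computes a standard NDVI layer and a placeholder threshold check; the band arrays, threshold value and alert rule are hypothetical and are not taken from the CropEx system.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance bands."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / np.maximum(nir + red, 1e-6)

# Hypothetical threshold check for one area of interest:
# index = ndvi(nir_band, red_band)
# alert = index.mean() < 0.35   # placeholder threshold for declining crop health
```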
Yang, Chan; Xu, Bing; Zhang, Zhi-Qiang; Wang, Xin; Shi, Xin-Yuan; Fu, Jing; Qiao, Yan-Jiang
2016-10-01
Blending uniformity is essential to ensure the homogeneity of Chinese medicine formula particles within each batch. This study was based on the blending process of ebony spray-dried powder and dextrin (the proportion of dextrin was 10%), in which near-infrared (NIR) diffuse reflectance spectra collected from six different sampling points were analyzed in combination with a moving window F test method in order to assess the uniformity of the blending process. The method was validated against the changes in citric acid content determined by HPLC. The results of the moving window F test method showed that the mixture of ebony spray-dried powder and dextrin was homogeneous during 200-300 r and segregated during 300-400 r. An advantage of this method is that the threshold value is defined statistically rather than empirically, and thus it does not suffer from the threshold ambiguities common to the moving block standard deviation (MBSD) approach. This method could also be employed to monitor other blending processes of Chinese medicine powders on line. Copyright© by the Chinese Pharmaceutical Association.
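The general shape of a moving-window F test is sketched below (an assumed, generic formulation, not the authors' implementation): the variance of one window of a monitored quantity is compared with that of the next window against F-distribution critical values, and a non-significant ratio is taken as evidence of homogeneity. In practice the input series might be a per-spectrum summary statistic (for example, absorbance at a selected wavelength); that choice is an assumption here.

```python
import numpy as np
from scipy import stats

def moving_window_f_test(series, win=10, alpha=0.05):
    """Compare the variance of adjacent windows with a two-sided F test.
    True means the variances are statistically equal (blend looks homogeneous)."""
    flags = []
    lo_crit = stats.f.ppf(alpha / 2, win - 1, win - 1)
    hi_crit = stats.f.ppf(1 - alpha / 2, win - 1, win - 1)
    for i in range(0, len(series) - 2 * win + 1, win):
        a = series[i:i + win]
        b = series[i + win:i + 2 * win]
        f = np.var(a, ddof=1) / np.var(b, ddof=1)
        flags.append(bool(lo_crit < f < hi_crit))
    return flags
```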
A dynamic integrated fault diagnosis method for power transformers.
Gao, Wensheng; Bai, Cuifen; Liu, Tong
2015-01-01
In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationships among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most probable failure mode. Then, considering that the evidence input into the diagnosis model is acquired gradually and that the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in the next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of the method is verified.
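The full Bayesian network is not given in the abstract; as a stripped-down illustration of the underlying inference step, the sketch below ranks failure modes by a naive-Bayes posterior given observed symptoms. All priors, likelihoods and symptom names are invented for illustration and are not the paper's values.

```python
import numpy as np

# Illustrative priors over failure modes and symptom likelihoods (made-up numbers).
priors = {"winding_fault": 0.2, "insulation_aging": 0.5, "overheating": 0.3}
likelihood = {  # P(symptom observed | failure mode)
    "winding_fault":    {"high_C2H2": 0.7, "partial_discharge": 0.8},
    "insulation_aging": {"high_C2H2": 0.2, "partial_discharge": 0.6},
    "overheating":      {"high_C2H2": 0.4, "partial_discharge": 0.1},
}

def posterior(observed_symptoms):
    """Posterior over failure modes assuming conditionally independent symptoms."""
    post = {}
    for mode, prior in priors.items():
        p = prior
        for s in observed_symptoms:
            p *= likelihood[mode][s]
        post[mode] = p
    z = sum(post.values())
    return {m: p / z for m, p in post.items()}

print(posterior(["high_C2H2", "partial_discharge"]))
```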
A Dynamic Integrated Fault Diagnosis Method for Power Transformers
Gao, Wensheng; Liu, Tong
2015-01-01
In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationships among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most probable failure mode. Then, considering that the evidence input into the diagnosis model is acquired gradually and that the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in the next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of the method is verified. PMID:25685841
NASA Astrophysics Data System (ADS)
Zhang, Lixin; Lin, Min; Wan, Baikun; Zhou, Yu; Wang, Yizhong
2005-01-01
In this paper, a new method for measuring body fat and its distribution is proposed based on CT image processing. As it is more sensitive to slight differences in attenuation than standard radiography, CT depicts the soft tissues with better clarity, and body fat has a distinct gray-level range compared with its neighboring tissues in a CT image. An effective multi-threshold image segmentation method based on potential function clustering is used to deal with the multiple peaks in the gray-level histogram of a CT image. Abdominal CT images of 14 volunteers with different degrees of fatness were processed with the proposed method. The method yields not only the total fat area but also the differentiation of subcutaneous fat from intra-abdominal fat. The results show the adaptability and stability of the proposed method, which will be a useful tool for diagnosing obesity.
Real-time traffic sign recognition based on a general purpose GPU and deep-learning
Hong, Yongwon; Choi, Yeongwoo; Byun, Hyeran
2017-01-01
We present a General Purpose Graphics Processing Unit (GPGPU) based real-time traffic sign detection and recognition method that is robust against illumination changes. There have been many approaches to traffic sign recognition in various research fields; however, previous approaches faced several limitations under low illumination or widely varying light conditions. To overcome these drawbacks and improve processing speeds, we propose a method that 1) is robust against illumination changes, 2) uses GPGPU-based real-time traffic sign detection, and 3) performs region detection and recognition using a hierarchical model. This method produces stable results in low-illumination environments. Both detection and hierarchical recognition are performed in real time, and the proposed method achieves a 0.97 F1-score on our collective dataset, which uses the Vienna Convention traffic rules (Germany and South Korea). PMID:28264011
Raina-Fulton, Renata
2015-01-01
Pesticide residue methods have been developed for a wide variety of food products including cereal-based foods, nutraceuticals and related plant products, and baby foods. These cereal, fruit, vegetable, and plant-based products provide the basis for many processed consumer products. For cereal and nutraceuticals, which are dry sample products, a modified QuEChERS (quick, easy, cheap, effective, rugged, and safe) method has been used with additional steps to allow wetting of the dry sample matrix and subsequent cleanup using dispersive or cartridge format SPE to reduce matrix effects. More processed foods may have lower pesticide concentrations but higher co-extracts that can lead to signal suppression or enhancement with MS detection. For complex matrixes, GC/MS/MS or LC/electrospray ionization (positive or negative ion)-MS/MS is more frequently used. The extraction and cleanup methods vary with different sample types particularly for cereal-based products, and these different approaches are discussed in this review. General instrument considerations are also discussed.
NASA Astrophysics Data System (ADS)
Tjong, Tiffany; Yihaa’ Roodhiyah, Lisa; Nurhasan; Sutarno, Doddy
2018-04-01
In this work, an inversion scheme was performed using a vector finite element (VFE) based 2-D magnetotelluric (MT) forward modelling. We use an inversion scheme with the singular value decomposition (SVD) method to improve the accuracy of the MT inversion. The inversion scheme was applied to the transverse electric (TE) mode of MT. The SVD method was used in this inversion to decompose the Jacobian matrices. The singular values obtained from the decomposition were analyzed. This enabled us to determine the importance of the data and therefore to define a threshold for the truncation process. The truncation of singular values in the inversion process can improve the resulting model.
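A generic truncated-SVD model-update step is sketched below to illustrate the idea of discarding small singular values of the Jacobian; the relative cutoff and the surrounding Gauss-Newton-style context are assumptions, not the authors' scheme.

```python
import numpy as np

def tsvd_update(J, residual, rel_cutoff=1e-3):
    """One truncated-SVD model update for a linearized inverse step:
    discard singular values below rel_cutoff * s_max before inverting."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    keep = s > rel_cutoff * s[0]
    s_inv = np.where(keep, 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ residual))
```

Singular values below the cutoff contribute mostly noise amplification to the update, so truncating them stabilizes the inversion at the cost of some resolution.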
Review on the progress in synthesis and application of magnetic carbon nanocomposites.
Zhu, Maiyong; Diao, Guowang
2011-07-01
This review focuses on the synthesis and application of nanostructured composites containing magnetic nanostructures and carbon-based materials. Great progress in the fabrication of magnetic carbon nanocomposites has been made by developing methods including filling processes, template-based synthesis, chemical vapor deposition, hydrothermal/solvothermal methods, pyrolysis procedures, sol-gel processes, detonation-induced reactions, self-assembly methods, etc. The applications of magnetic carbon nanocomposites in a wide range of fields, such as environmental treatment, microwave absorption, magnetic recording media, electrochemical sensors, catalysis, separation/recognition of biomolecules and drug delivery, are discussed. Finally, some future trends and perspectives in this research area are outlined.
Review on the progress in synthesis and application of magnetic carbon nanocomposites
NASA Astrophysics Data System (ADS)
Zhu, Maiyong; Diao, Guowang
2011-07-01
This review focuses on the synthesis and application of nanostructured composites containing magnetic nanostructures and carbon-based materials. Great progress in the fabrication of magnetic carbon nanocomposites has been made by developing methods including filling processes, template-based synthesis, chemical vapor deposition, hydrothermal/solvothermal methods, pyrolysis procedures, sol-gel processes, detonation-induced reactions, self-assembly methods, etc. The applications of magnetic carbon nanocomposites in a wide range of fields, such as environmental treatment, microwave absorption, magnetic recording media, electrochemical sensors, catalysis, separation/recognition of biomolecules and drug delivery, are discussed. Finally, some future trends and perspectives in this research area are outlined.
Functional relationship-based alarm processing system
Corsberg, D.R.
1988-04-22
A functional relationship-based alarm processing system and method analyzes each alarm as it is activated and determines its relative importance with other currently activated alarms and signals in accordance with the functional relationships that the newly activated alarm has with other currently activated alarms. Once the initial level of importance of the alarm has been determined, that alarm is again evaluated if another related alarm is activated or deactivated. Thus, each alarm's importance is continuously updated as the state of the process changes during a scenario. Four hierarchical relationships are defined by this alarm filtering methodology: (1) level precursor (usually occurs when there are two alarm settings on the same parameter); (2) direct precursor (based on causal factors between two alarms); (3) required action (system response or action expected within a specified time following activation of an alarm or combination of alarms and process signals); and (4) blocking condition (alarms that are normally expected and are not considered important). The alarm processing system and method is sensitive to the dynamic nature of the process being monitored and is capable of changing the relative importance of each alarm as necessary. 12 figs.
Functional relationship-based alarm processing
Corsberg, Daniel R.
1988-01-01
A functional relationship-based alarm processing system and method analyzes each alarm as it is activated and determines its relative importance with other currently activated alarms and signals in accordance with the relationships that the newly activated alarm has with other currently activated alarms. Once the initial level of importance of the alarm has been determined, that alarm is again evaluated if another related alarm is activated. Thus, each alarm's importance is continuously updated as the state of the process changes during a scenario. Four hierarchical relationships are defined by this alarm filtering methodology: (1) level precursor (usually occurs when there are two alarm settings on the same parameter); (2) direct precursor (based on causal factors between two alarms); (3) required action (system response or action expected within a specified time following activation of an alarm or combination of alarms and process signals); and (4) blocking condition (alarms that are normally expected and are not considered important). The alarm processing system and method is sensitive to the dynamic nature of the process being monitored and is capable of changing the relative importance of each alarm as necessary.
Functional relationship-based alarm processing system
Corsberg, Daniel R.
1989-01-01
A functional relationship-based alarm processing system and method analyzes each alarm as it is activated and determines its relative importance with other currently activated alarms and signals in accordance with the functional relationships that the newly activated alarm has with other currently activated alarms. Once the initial level of importance of the alarm has been determined, that alarm is again evaluated if another related alarm is activated or deactivated. Thus, each alarm's importance is continuously updated as the state of the process changes during a scenario. Four hierarchical relationships are defined by this alarm filtering methodology: (1) level precursor (usually occurs when there are two alarm settings on the same parameter); (2) direct precursor (based on causal factors between two alarms); (3) required action (system response or action expected within a specified time following activation of an alarm or combination of alarms and process signals); and (4) blocking condition (alarms that are normally expected and are not considered important). The alarm processing system and method is sensitive to the dynamic nature of the process being monitored and is capable of changing the relative importance of each alarm as necessary.
COBRApy: COnstraints-Based Reconstruction and Analysis for Python.
Ebrahim, Ali; Lerman, Joshua A; Palsson, Bernhard O; Hyduke, Daniel R
2013-08-08
COnstraint-Based Reconstruction and Analysis (COBRA) methods are widely used for genome-scale modeling of metabolic networks in both prokaryotes and eukaryotes. Due to the successes with metabolism, there is an increasing effort to apply COBRA methods to reconstruct and analyze integrated models of cellular processes. The COBRA Toolbox for MATLAB is a leading software package for genome-scale analysis of metabolism; however, it was not designed to elegantly capture the complexity inherent in integrated biological networks and lacks an integration framework for the multiomics data used in systems biology. The openCOBRA Project is a community effort to promote constraints-based research through the distribution of freely available software. Here, we describe COBRA for Python (COBRApy), a Python package that provides support for basic COBRA methods. COBRApy is designed in an object-oriented fashion that facilitates the representation of the complex biological processes of metabolism and gene expression. COBRApy does not require MATLAB to function; however, it includes an interface to the COBRA Toolbox for MATLAB to facilitate use of legacy codes. For improved performance, COBRApy includes parallel processing support for computationally intensive processes. COBRApy is an object-oriented framework designed to meet the computational challenges associated with the next generation of stoichiometric constraint-based models and high-density omics data sets. http://opencobra.sourceforge.net/
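A minimal COBRApy session is sketched below to show the basic flux-balance-analysis workflow the package supports; the SBML file name is a placeholder, and this does not illustrate the parallel-processing or multiomics features mentioned above.

```python
import cobra

# Load a genome-scale model from SBML; the file name is a placeholder.
model = cobra.io.read_sbml_model("e_coli_core.xml")

# Flux balance analysis: optimize the model's default objective (e.g. biomass).
solution = model.optimize()
print("Objective value:", solution.objective_value)

# A few reaction fluxes from the optimal solution.
print(solution.fluxes.head())
```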
Automated extraction of pleural effusion in three-dimensional thoracic CT images
NASA Astrophysics Data System (ADS)
Kido, Shoji; Tsunomori, Akinori
2009-02-01
It is important for the diagnosis of pulmonary diseases to quantitatively measure the volume of accumulated pleural effusion in three-dimensional thoracic CT images. However, correct automated extraction of pleural effusion is difficult. A conventional extraction algorithm using a gray-level-based threshold cannot correctly separate pleural effusion from the thoracic wall or mediastinum, because the density of pleural effusion in CT images is similar to that of the thoracic wall and mediastinum. We have therefore developed an automated extraction method that extracts the lung area together with the pleural effusion. Our method used a template of the lung, obtained from a normal lung, for segmentation of lungs with pleural effusions. The registration process consisted of two steps. The first step was a global matching between normal and abnormal lungs based on organs such as the bronchi, bones (ribs, sternum and vertebrae) and the upper surface of the liver, which were extracted using a region-growing algorithm. The second step was a local matching between the normal and abnormal lungs, which were deformed by the parameters obtained from the global matching. Finally, we segmented the lung with pleural effusion using the template deformed by the two sets of parameters obtained from the global and local matching. We compared our method with a conventional extraction method using a gray-level-based threshold and with two published methods. The extraction rates of pleural effusions obtained from our method were much higher than those obtained from the other methods. The automated extraction of pleural effusion by extracting the lung area together with the effusion is promising for the diagnosis of pulmonary diseases, as it provides a quantitative volume of the accumulated pleural effusion.
Cher, Chen-Yong; Coteus, Paul W; Gara, Alan; Kursun, Eren; Paulsen, David P; Schuelke, Brian A; Sheets, II, John E; Tian, Shurong
2013-10-01
A processor-implemented method for determining aging of a processing unit in a processor, the method comprising: calculating an effective aging profile for the processing unit, wherein the effective aging profile quantifies the effects of aging on the processing unit; combining the effective aging profile with process variation data, actual workload data and operating conditions data for the processing unit; and determining aging through an aging sensor of the processing unit using the effective aging profile, the process variation data, the actual workload data, architectural characteristics and redundancy data, and the operating conditions data for the processing unit.
Optimizing Web-Based Instruction: A Case Study Using Poultry Processing Unit Operations
ERIC Educational Resources Information Center
O' Bryan, Corliss A.; Crandall, Philip G.; Shores-Ellis, Katrina; Johnson, Donald M.; Ricke, Steven C.; Marcy, John
2009-01-01
Food companies and supporting industries need inexpensive, revisable training methods for large numbers of hourly employees due to continuing improvements in Hazard Analysis Critical Control Point (HACCP) programs, new processing equipment, and high employee turnover. HACCP-based food safety programs have demonstrated their value by reducing the…
ERIC Educational Resources Information Center
Herbert, Birgit; Strauss, Angelika; Mayer, Andrea; Duvinage, Kristin; Mitschek, Christine; Koletzko, Berthold
2013-01-01
Objective: Evaluation of the implementation process of a kindergarten-based intervention ("TigerKids") to promote a healthy lifestyle. Design: Questionnaire survey among kindergarten teachers about programme implementation and acceptance. Setting: Kindergartens in Bavaria, Germany. Methods: Two hundred and fifteen kindergartens were…
Validation of the Evidence-Based Practice Process Assessment Scale
ERIC Educational Resources Information Center
Rubin, Allen; Parrish, Danielle E.
2011-01-01
Objective: This report describes the reliability, validity, and sensitivity of a scale that assesses practitioners' perceived familiarity with, attitudes toward, and implementation of the evidence-based practice (EBP) process. Method: Social work practitioners and second-year master of social work (MSW) students (N = 511) were surveyed in four sites…
Conceptual Change through Changing the Process of Comparison
ERIC Educational Resources Information Center
Wasmann-Frahm, Astrid
2009-01-01
Classification can serve as a tool for conceptualising ideas about vertebrates. Training enhances classification skills as well as sharpening concepts. The method described in this paper is based on the "hybrid-model" of comparison that proposes two independently working processes: associative and theory-based. The two interact during a…
Wagner, Robert M [Knoxville, TN; Daw, Charles S [Knoxville, TN; Green, Johney B [Knoxville, TN; Edwards, Kevin D [Knoxville, TN
2008-10-07
This invention is a method of achieving stable, optimal mixtures of HCCI and SI in practical gasoline internal combustion engines comprising the steps of: characterizing the combustion process based on combustion process measurements, determining the ratio of conventional and HCCI combustion, determining the trajectory (sequence) of states for consecutive combustion processes, and determining subsequent combustion process modifications using said information to steer the engine combustion toward desired behavior.
ERIC Educational Resources Information Center
Jiang, Yong
2017-01-01
Traditional mathematical methods built around exactitude have limitations when applied to the processing of educational information, due to their uncertainty and imperfection. Alternative mathematical methods, such as grey system theory, have been widely applied in processing incomplete information systems and have proven effective in a number of…
Introduction to the Management Process (NS 222): Competency-Based Course Syllabus.
ERIC Educational Resources Information Center
Brady, Marilyn H.
"Introduction to the Management Process" (NS 222) is an associate degree nursing course offered at Chattanooga State Technical Community College to introduce students to basic management concepts, methods of nursing care delivery, patient classification systems, and methods of enacting change and working as a change agent. Upon completion of the…
Online Denoising Based on the Second-Order Adaptive Statistics Model.
Yi, Sheng-Lun; Jin, Xue-Bo; Su, Ting-Li; Tang, Zhen-Yun; Wang, Fa-Fa; Xiang, Na; Kong, Jian-Lei
2017-07-20
Online denoising is motivated by real-time applications in industrial processes, where the data must be usable soon after they are collected. Since the noise in practical processes is usually colored, it poses quite a challenge for denoising techniques. In this paper, a novel online denoising method is proposed to process practical measurement data with colored noise, where the characteristics of the colored noise are captured in the dynamic model via an adaptive parameter. The proposed method consists of two parts within a closed loop: the first estimates the system state based on the second-order adaptive statistics model, and the other updates the adaptive parameter in the model using the Yule-Walker algorithm. Specifically, the state estimation is implemented via the Kalman filter in a recursive way, so the online requirement is attained. Experimental data from a reinforced concrete structure test were used to verify the effectiveness of the proposed method. Results show that the proposed method not only handles signals with colored noise but also achieves a tradeoff between efficiency and accuracy.
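To make the parameter-update half of the loop concrete, the sketch below estimates a second-order autoregressive (AR(2)) noise model with the Yule-Walker equations from a data window; it is a generic illustration under the assumption that the adaptive parameter is derived from such AR coefficients, not a reproduction of the authors' filter.

```python
import numpy as np

def yule_walker_ar2(x):
    """Estimate AR(2) coefficients and innovation variance from a data window
    using the Yule-Walker equations on the sample autocovariances r0, r1, r2."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    r = [np.dot(x[:n - k], x[k:]) / n for k in range(3)]
    R = np.array([[r[0], r[1]],
                  [r[1], r[0]]])
    rhs = np.array([r[1], r[2]])
    phi = np.linalg.solve(R, rhs)          # AR coefficients [phi1, phi2]
    noise_var = r[0] - phi @ rhs           # innovation variance
    return phi, noise_var
```

The estimated coefficients and innovation variance can then feed the noise description used by the recursive state estimator for the next window.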
A novel approach of ensuring layout regularity correct by construction in advanced technologies
NASA Astrophysics Data System (ADS)
Ahmed, Shafquat Jahan; Vaderiya, Yagnesh; Gupta, Radhika; Parthasarathy, Chittoor; Marin, Jean-Claude; Robert, Frederic
2017-03-01
In advanced technology nodes, layout regularity has become a mandatory prerequisite to create robust designs that are less sensitive to variations in the manufacturing process, in order to improve yield and minimize electrical variability. In this paper we describe a method for designing regular full-custom layouts based on design and process co-optimization. The method includes various design rule checks that can be used on the fly during leaf-cell layout development. We extract a Layout Regularity Index (LRI) from the layouts based on the jogs, alignments and pitches used in the design for any given metal layer. The Regularity Index of a layout is a direct indicator of manufacturing yield and is used to compare the relative health of different layout blocks in terms of process friendliness. The method has been deployed for the 28nm and 40nm technology nodes for Memory IP and is being extended to other IPs (IO, standard cell). We have quantified the gain of layout regularity with the deployed method on printability and electrical characteristics by process-variation (PV) band simulation analysis and have achieved up to a 5nm reduction in PV band.
Peripleural lung disease detection based on multi-slice CT images
NASA Astrophysics Data System (ADS)
Matsuhiro, M.; Suzuki, H.; Kawata, Y.; Niki, N.; Nakano, Y.; Ohmatsu, H.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.
2015-03-01
With the development of multi-slice CT technology, obtaining accurate 3D images of the lung field in a short time has become possible. To support this, many image processing methods need to be developed. Detecting peripleural lung disease is difficult because it lies outside the lung region and lung extraction is often performed by threshold processing. The proposed method uses the thoracic inner region, extracted from the inner cavity of the bones as well as the air region, and covers peripleural lung disease cases such as lung nodules, calcification, pleural effusion and pleural plaques. We applied this method to 50 cases, including 39 cases with peripleural lung disease. The method detected the 39 peripleural lung diseases with 2.9 false positives per case.
Analysis of Electrowetting Dynamics with Level Set Method
NASA Astrophysics Data System (ADS)
Park, Jun Kwon; Hong, Jiwoo; Kang, Kwan Hyoung
2009-11-01
Electrowetting is a versatile tool for handling tiny droplets and forms a backbone of digital microfluidics. Numerical analysis is necessary to fully understand the dynamics of electrowetting, especially in designing electrowetting-based liquid lenses and reflective displays. We developed a numerical method to analyze general contact-line problems, incorporating dynamic contact angle models. The method was applied to the analysis of the spreading process of a sessile droplet under step input voltages in electrowetting. The result was compared with experimental data and with an analytical result based on the spectral method. It is shown that contact-line friction significantly affects the contact-line motion and the oscillation amplitude. The pinning of the contact line was well represented by including the hysteresis effect in the contact angle models.
NASA Astrophysics Data System (ADS)
Liao, S.; Chen, L.; Li, J.; Xiong, W.; Wu, Q.
2015-07-01
Existing spatiotemporal databases support spatiotemporal aggregation queries over massive moving-object datasets. Due to the large amounts of data and the single-threaded processing method, query speed cannot meet application requirements. On the other hand, query efficiency is more sensitive to spatial variation than to temporal variation. In this paper, we propose a spatiotemporal aggregation query method using a multi-thread parallel technique based on regional division and implement it on the server. Concretely, we divide the spatiotemporal domain into several spatiotemporal cubes, compute the spatiotemporal aggregation on all cubes using multi-thread parallel processing, and then integrate the query results. Tests and analysis on real datasets show that this method improves query speed significantly.
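The divide-and-aggregate idea can be sketched as follows, assuming the moving-object records have already been partitioned into spatiotemporal cubes and that each record carries precomputed cell indices; the data layout and worker count are illustrative assumptions, and a Python process pool stands in for the server-side parallelism described above.

```python
from concurrent.futures import ProcessPoolExecutor
from collections import Counter

def aggregate_cube(points):
    """Count moving-object records per (x_cell, y_cell, t_cell) in one cube."""
    return Counter((p["x_cell"], p["y_cell"], p["t_cell"]) for p in points)

def parallel_aggregate(cubes, workers=4):
    """Aggregate each spatiotemporal cube in its own process, then merge."""
    total = Counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for partial in pool.map(aggregate_cube, cubes):
            total.update(partial)
    return total
```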
Computer Mediated Communication: Online Instruction and Interactivity.
ERIC Educational Resources Information Center
Lavooy, Maria J.; Newlin, Michael H.
2003-01-01
Explores the different forms and potential applications of computer mediated communication (CMC) for Web-based and Web-enhanced courses. Based on their experiences with three different Web courses (Research Methods in Psychology, Statistical Methods in Psychology, and Basic Learning Processes) taught repeatedly over the last five years, the…
Results from the VALUE perfect predictor experiment: process-based evaluation
NASA Astrophysics Data System (ADS)
Maraun, Douglas; Soares, Pedro; Hertig, Elke; Brands, Swen; Huth, Radan; Cardoso, Rita; Kotlarski, Sven; Casado, Maria; Pongracz, Rita; Bartholy, Judit
2016-04-01
Until recently, the evaluation of downscaled climate model simulations has typically been limited to surface climatologies, including long-term means, spatial variability and extremes. But these aspects are often, at least partly, tuned in regional climate models to match observed climate. The tuning issue is of course particularly relevant for bias-corrected regional climate models. In general, a good performance of a model for these aspects in present climate therefore does not imply a good performance in simulating climate change. It is now widely accepted that, to increase our confidence in climate change simulations, it is necessary to evaluate how climate models simulate relevant underlying processes. In other words, it is important to assess whether downscaling does the right thing for the right reasons. Therefore, VALUE has carried out a broad process-based evaluation study based on its perfect predictor experiment simulations: the downscaling methods are driven by ERA-Interim data over the period 1979-2008, and reference observations are given by a network of 85 meteorological stations covering all European climates. More than 30 methods participated in the evaluation. In order to compare statistical and dynamical methods, only variables provided by both types of approaches could be considered. This limited the analysis to conditioning local surface variables on variables from driving processes that are simulated by ERA-Interim. We considered the following types of processes: at the continental scale, we evaluated the performance of downscaling methods for positive and negative North Atlantic Oscillation, Atlantic ridge and blocking situations. At synoptic scales, we considered Lamb weather types for selected European regions such as Scandinavia, the United Kingdom, the Iberian Peninsula or the Alps. At regional scales we considered phenomena such as the Mistral, the Bora or the Iberian coastal jet. Such process-based evaluation helps to attribute biases in surface variables to underlying processes and ultimately to improve climate models.
Randomized evaluation of a web based interview process for urology resident selection.
Shah, Satyan K; Arora, Sanjeev; Skipper, Betty; Kalishman, Summers; Timm, T Craig; Smith, Anthony Y
2012-04-01
We determined whether a web based interview process for resident selection could effectively replace the traditional on-site interview. For the 2010 to 2011 match cycle, applicants to the University of New Mexico urology residency program were randomized to participate in a web based interview process via Skype or a traditional on-site interview process. Both methods included interviews with the faculty, a tour of facilities and the opportunity to ask current residents any questions. To maintain fairness the applicants were then reinterviewed via the opposite process several weeks later. We assessed comparative effectiveness, cost, convenience and satisfaction using anonymous surveys largely scored on a 5-point Likert scale. Of 39 total participants (33 applicants and 6 faculty) 95% completed the surveys. The web based interview was less costly to applicants (mean $171 vs $364, p=0.05) and required less time away from school (10% missing 1 or more days vs 30%, p=0.04) compared to traditional on-site interview. However, applicants perceived the web based interview process as less effective than traditional on-site interview, with a mean 6-item summative effectiveness score of 21.3 vs 25.6 (p=0.003). Applicants and faculty favored continuing the web based interview process in the future as an adjunct to on-site interviews. Residency interviews can be successfully conducted via the Internet. The web based interview process reduced costs and improved convenience. The findings of this study support the use of videoconferencing as an adjunct to traditional interview methods rather than as a replacement. Copyright © 2012 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Zimmerling, Clemens; Dörr, Dominik; Henning, Frank; Kärger, Luise
2018-05-01
Due to their high mechanical performance, continuous fibre reinforced plastics (CoFRP) are becoming increasingly important for load-bearing structures. In many cases, manufacturing CoFRPs comprises a forming process of textiles. To predict and optimise the forming behaviour of a component, numerical simulations are applied. However, for maximum part quality, both the geometry and the process parameters must match in mutual regard, which in turn requires numerous numerically expensive optimisation iterations. In both textile and metal forming, a lot of research has focused on determining optimum process parameters whilst regarding the geometry as invariable. In this work, a meta-model based approach on component level is proposed that provides a rapid estimation of the formability of variable geometries based on pre-sampled, physics-based draping data. Initially, a geometry recognition algorithm scans the geometry and extracts a set of doubly-curved regions with relevant geometry parameters. If the relevant parameter space is not part of an underlying database, additional samples are drawn via finite-element draping simulations according to a suitable design table for computer experiments. Time-saving parallel runs of the physical simulations accelerate the data acquisition. Ultimately, a Gaussian regression meta-model is built from the database. The method is demonstrated on a box-shaped generic structure. The predicted results are in good agreement with physics-based draping simulations. Since evaluations of the established meta-model are numerically inexpensive, any further design exploration (e.g. robustness analysis or design optimisation) can be performed in a short time. It is expected that the proposed method also offers great potential for future applications along virtual process chains: for each process step along the chain, a meta-model can be set up to predict the impact of design variations on manufacturability and part performance. Thus, the method is considered to facilitate a lean and economic part and process design under consideration of manufacturing effects.
Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem
NASA Astrophysics Data System (ADS)
Luo, Yabo; Waden, Yongo P.
2017-06-01
The Job Shop Scheduling Problem (JSSP) is known to be NP-hard, with uncertainty and complexity that cannot be handled by linear methods. Current studies on the JSSP therefore concentrate mainly on improving heuristics for its optimization. However, many problems remain for efficient optimization of the JSSP, namely low efficiency and poor reliability, which can easily trap the optimization process in local optima. To address this, a study on an Ant Colony Optimization (ACO) algorithm combined with constraint-handling tactics is carried out in this paper. The problem is subdivided into three parts: (1) analysis of processing time tolerance-based constraint features in the JSSP, performed with a constraint-satisfaction model; (2) satisfaction of the constraints using consistency technology and a constraint-spreading algorithm to improve the performance of the ACO algorithm, from which the JSSP model based on the improved ACO algorithm is constructed; (3) demonstration of the effectiveness of the proposed method, in terms of reliability and efficiency, through comparative experiments on benchmark problems. The results obtained by the proposed method are better, and the applied technique can be used in optimizing the JSSP.
[Detecting fire smoke based on the multispectral image].
Wei, Ying-Zhuo; Zhang, Shao-Wu; Liu, Yan-Wei
2010-04-01
Smoke detection is very important for preventing forest fires in their early stage. Because traditional technologies based on video and image processing are easily affected by dynamic background information, they suffer from three limitations: low anti-interference ability, high false detection rates, and difficulty distinguishing fire smoke from water fog. A novel method for detecting smoke based on multispectral images is proposed in the present paper. Using a multispectral digital imaging technique, multispectral image series of fire smoke and water fog were obtained in the band range of 400 to 720 nm, and the images were divided into bins. The Euclidean distance among the bins was taken as a measure of the difference between spectrograms. After obtaining the spectral feature vectors of the dynamic regions, the regions of fire smoke and water fog were extracted according to the spectrogram feature difference between target and background. Indoor and outdoor experiments show that the smoke detection method based on multispectral images can be applied to smoke detection and can effectively distinguish fire smoke from water fog. Combined with video image processing methods, the multispectral image detection method can also be applied to forest fire surveillance, reducing the false alarm rate in forest fire detection.
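The spectral-feature idea can be sketched as follows: each dynamic region is represented by its mean spectrum over the 400-720 nm bands, and regions are compared by Euclidean distance. The band count, reflectance levels and decision rule below are invented for the demonstration and are not the authors' parameters.

```python
# Illustrative sketch of the spectral-feature idea: represent each dynamic image
# region by its mean spectrum over the 400-720 nm bands and compare regions with
# Euclidean distance. Band count, reflectance levels and data are assumptions.
import numpy as np

def spectral_feature(region_cube):
    """region_cube: (rows, cols, bands) multispectral pixels of one region."""
    return region_cube.reshape(-1, region_cube.shape[-1]).mean(axis=0)

def spectral_distance(f1, f2):
    return float(np.linalg.norm(f1 - f2))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    bands = 33                                  # e.g. 400-720 nm in 10 nm steps (assumed)
    smoke = rng.normal(0.40, 0.02, (16, 16, bands))      # reference smoke region
    fog   = rng.normal(0.55, 0.02, (16, 16, bands))      # reference water fog region
    candidate = rng.normal(0.41, 0.02, (16, 16, bands))  # new dynamic region to classify
    f_smoke, f_fog, f_cand = map(spectral_feature, (smoke, fog, candidate))
    label = ("smoke" if spectral_distance(f_cand, f_smoke) < spectral_distance(f_cand, f_fog)
             else "water fog")
    print(label)
```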
Comparison of ring artifact removal methods using flat panel detector based CT images
2011-01-01
Background Ring artifacts are concentric rings superimposed on tomographic images, often caused by defective and insufficiently calibrated detector elements as well as by damaged scintillator crystals of the flat panel detector. They may also be generated by objects that attenuate X-rays very differently in different projection directions. Ring artifact reduction techniques reported in the literature so far can be broadly classified into two groups. One category of approaches is based on sinogram processing, also known as pre-processing techniques, and the other category performs processing on the 2-D reconstructed images, recognized as post-processing techniques in the literature. The strengths and weaknesses of these categories of approaches are yet to be explored from a common platform. Method In this paper, a comparative study of the two categories of ring artifact reduction techniques, designed primarily for multi-slice CT instruments, is presented from a common platform. For comparison, two representative algorithms from each of the two categories are selected from the published literature. A very recently reported state-of-the-art sinogram-domain ring artifact correction method that classifies the ring artifacts according to their strength and then corrects the artifacts using class-adaptive correction schemes is also included in this comparative study. The first sinogram-domain correction method uses a wavelet-based technique to detect the corrupted pixels and then estimates the responses of the bad pixels using a simple linear interpolation technique. The second sinogram-based correction method performs all the filtering operations in the transform domain, i.e., in the wavelet and Fourier domains. On the other hand, the two post-processing based correction techniques operate on the polar transform domain of the reconstructed CT images. The first method extracts the ring artifact template vector using a homogeneity test and then corrects the CT images by subtracting the artifact template vector from the uncorrected images. The second post-processing based correction technique performs median and mean filtering on the reconstructed images to produce the corrected images. Results The performances of the compared algorithms have been tested using both quantitative and perceptual measures. For quantitative analysis, two different numerical performance indices are chosen. In addition, different types of artifact patterns, e.g., single/band rings, artifacts from defective and mis-calibrated detector elements, rings in highly structured and hard objects, and rings from different flat-panel detectors, are analyzed to perceptually investigate the strengths and weaknesses of the five methods. An investigation has also been carried out to compare the efficacy of these algorithms in correcting the volume images from a cone beam CT with the parameters determined from one particular slice. Finally, the capability of each correction technique in accurately retaining the image information (e.g., a small object at the iso-center) in the corrected CT image has also been tested. Conclusions The results show that the performances of the algorithms are limited and none is fully suitable for correcting different types of ring artifacts without introducing processing distortion to the image structure. To achieve diagnostic quality of the corrected slices, a combination of the two approaches (sinogram- and post-processing) can be used.
The compared methods are also not suitable for correcting the volume images from a cone beam flat-panel detector based CT. PMID:21846411
Spatiotemporal video deinterlacing using control grid interpolation
NASA Astrophysics Data System (ADS)
Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin
2015-03-01
With the advent of progressive format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher quality but also more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between and weight control grid interpolation based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state-of-the-art based on peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.
Linear segmentation algorithm for detecting layer boundary with lidar.
Mao, Feiyue; Gong, Wei; Logan, Timothy
2013-11-04
The automatic detection of aerosol- and cloud-layer boundaries (base and top) is important in atmospheric lidar data processing, because the boundary information is not only useful for environment and climate studies, but can also be used as input for further data processing. Previous methods have limitations in defining the base and top and in setting the window size, and have neglected in-layer attenuation. To overcome these limitations, we present a new layer detection scheme for up-looking lidars based on linear segmentation with a reasonable threshold setting, boundary selecting, and false-positive removing strategies. Preliminary results from both real and simulated data show that this algorithm can not only detect the layer base as accurately as the simple multi-scale method, but can also detect the layer top more accurately than the simple multi-scale method. Our algorithm can be directly applied to uncalibrated data without requiring any additional measurements or window size selections.
Grey Comprehensive Evaluation of Biomass Power Generation Project Based on Group Judgement
NASA Astrophysics Data System (ADS)
Xia, Huicong; Niu, Dongxiao
2017-06-01
The comprehensive evaluation of benefit is an important task that needs to be carried out at all stages of a biomass power generation project. This paper proposes an improved grey comprehensive evaluation method based on the triangular whitenization function. To improve the objectivity of the weights produced by the only-reference comparison judgment method, group judgment is introduced into the weighting process. In the grey comprehensive evaluation process, a number of experts are invited to estimate the benefit level of the project, and the basic estimations are optimized based on the minimum variance principle to improve the accuracy of the evaluation result. Taking a biomass power generation project as an example, the grey comprehensive evaluation result showed that the benefit level of this project was good. This example demonstrates the feasibility of the grey comprehensive evaluation method based on group judgment for benefit evaluation of biomass power generation projects.
NASA Astrophysics Data System (ADS)
Cleves, Ann E.; Jain, Ajay N.
2008-03-01
Inductive bias is the set of assumptions that a person or procedure makes in making a prediction based on data. Different methods for ligand-based predictive modeling have different inductive biases, with a particularly sharp contrast between 2D and 3D similarity methods. A unique aspect of ligand design is that the data that exist to test methodology have been largely man-made, and that this process of design involves prediction. By analyzing the molecular similarities of known drugs, we show that the inductive bias of the historic drug discovery process has a very strong 2D bias. In studying the performance of ligand-based modeling methods, it is critical to account for this issue in dataset preparation, use of computational controls, and in the interpretation of results. We propose specific strategies to explicitly address the problems posed by inductive bias considerations.
Du, Baoqiang; Dong, Shaofeng; Wang, Yanfeng; Guo, Shuting; Cao, Lingzhi; Zhou, Wei; Zuo, Yandi; Liu, Dan
2013-11-01
A wide-frequency, high-resolution frequency measurement method based on the quantized phase-step law is presented in this paper. Utilizing the variation law of the phase differences, direct different-frequency phase processing, and the phase group synchronization phenomenon, and combining an A/D converter with the adaptive phase-shifting principle, a counter gate is established at the phase coincidences occurring at one-group intervals, which eliminates the ±1 counting error of traditional frequency measurement methods. More importantly, direct phase comparison, measurement, and control between arbitrary periodic signals are realized without frequency normalization in this method. Experimental results show that sub-picosecond resolution can easily be obtained in frequency measurement, frequency standard comparison, and phase-locked control based on the phase quantization processing technique. The method may be widely used in navigation and positioning, space techniques, communication, radar, astronomy, atomic frequency standards, and other high-tech fields.
Research on distributed optical fiber sensing data processing method based on LabVIEW
NASA Astrophysics Data System (ADS)
Li, Zhonghu; Yang, Meifang; Wang, Luling; Wang, Jinming; Yan, Junhong; Zuo, Jing
2018-01-01
Pipeline leak detection and leak location have received extensive attention in industry. In this paper, a distributed optical fiber sensing system is designed for a heat supply pipeline, and the data processing method for distributed optical fiber sensing based on LabVIEW is studied in detail. The hardware system includes the laser, sensing optical fiber, wavelength division multiplexer, photoelectric detector, data acquisition card and computer. The software system is developed in LabVIEW and adopts a wavelet denoising method to process the temperature information, which improves the SNR. By extracting characteristic values from the fiber temperature information, the system realizes temperature measurement, leak location, and measurement signal storage and query. Compared with the traditional negative pressure wave method or acoustic signal method, the distributed optical fiber temperature measurement system can measure several temperatures in one measurement and locate the leak point accurately. It has broad application prospects.
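A minimal wavelet-threshold denoising sketch for a sampled temperature trace is given below, using the PyWavelets package rather than LabVIEW. The wavelet, decomposition level and universal threshold are common textbook choices and are assumptions here, not the parameters of the system described above.

```python
# Wavelet-threshold denoising sketch for a temperature trace using PyWavelets.
# Wavelet, level and the universal soft threshold are assumed, illustrative choices.
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # noise level estimated from the finest detail coefficients (universal threshold)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

if __name__ == "__main__":
    t = np.linspace(0, 1, 2048)
    clean = 20 + 5 * np.exp(-((t - 0.6) ** 2) / 0.001)   # local hot spot (leak-like feature)
    noisy = clean + np.random.default_rng(2).normal(0, 0.5, t.size)
    denoised = wavelet_denoise(noisy)
    print(round(float(np.abs(denoised - clean).mean()), 3))   # mean absolute error after denoising
```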
Using Formal Methods to Assist in the Requirements Analysis of the Space Shuttle GPS Change Request
NASA Technical Reports Server (NTRS)
DiVito, Ben L.; Roberts, Larry W.
1996-01-01
We describe a recent NASA-sponsored pilot project intended to gauge the effectiveness of using formal methods in Space Shuttle software requirements analysis. Several Change Requests (CR's) were selected as promising targets to demonstrate the utility of formal methods in this application domain. A CR to add new navigation capabilities to the Shuttle, based on Global Positioning System (GPS) technology, is the focus of this report. Carried out in parallel with the Shuttle program's conventional requirements analysis process was a limited form of analysis based on formalized requirements. Portions of the GPS CR were modeled using the language of SRI's Prototype Verification System (PVS). During the formal methods-based analysis, numerous requirements issues were discovered and submitted as official issues through the normal requirements inspection process. Shuttle analysts felt that many of these issues were uncovered earlier than would have occurred with conventional methods. We present a summary of these encouraging results and conclusions we have drawn from the pilot project.
Application of Ozone MBBR Process in Refinery Wastewater Treatment
NASA Astrophysics Data System (ADS)
Lin, Wang
2018-01-01
A Moving Bed Biofilm Reactor (MBBR) is a sewage treatment technology based on a fluidized bed; it can also be regarded as an efficient new reactor lying between the activated sludge method and the biofilm method. This paper studies the application of an ozone-MBBR process in refinery wastewater treatment, with emphasis on the design of a combined ozone + MBBR process based on the MBBR process. The ozone + MBBR process is used to treat the COD of the concentrated water discharged from a refinery wastewater treatment plant. The experimental results show that the average COD removal rate is 46.0%–67.3% when treating reverse-osmosis concentrated water with the ozone-MBBR process, and the effluent can meet the relevant standard requirements. Compared with the traditional process, the ozone-MBBR process is more flexible. The main investment of this process is the ozone generator, blower and similar equipment; these items are relatively inexpensive, and their cost can be offset against the higher investment required by traditional activated sludge processes. At the same time, the ozone-MBBR process has obvious advantages in water quality, stability and other aspects.
Walsh, Jane C; Groarke, AnnMarie; Moss-Morris, Rona; Morrissey, Eimear; McGuire, Brian E
2017-01-01
Background Cancer-related fatigue (CrF) is the most common and disruptive symptom experienced by cancer survivors. We aimed to develop a theory-based, interactive Web-based intervention designed to facilitate self-management and enhance coping with CrF following cancer treatment. Objective The aim of our study was to outline the rationale, decision-making processes, methods, and findings which led to the development of a Web-based intervention to be tested in a feasibility trial. This paper outlines the process and method of development of the intervention. Methods An extensive review of the literature and qualitative research was conducted to establish a therapeutic approach for this intervention, based on theory. The psychological principles used in the development process are outlined, and we also clarify hypothesized causal mechanisms. We describe decision-making processes involved in the development of the content of the intervention, input from the target patient group and stakeholders, the design of the website features, and the initial user testing of the website. Results The cocreation of the intervention with the experts and service users allowed the design team to ensure that an acceptable intervention was developed. This evidence-based Web-based program is the first intervention of its kind based on self-regulation model theory, with the primary aim of targeting the representations of fatigue and enhancing self-management of CrF, specifically. Conclusions This research sought to integrate psychological theory, existing evidence of effective interventions, empirically derived principles of Web design, and the views of potential users into the systematic planning and design of the intervention of an easy-to-use website for cancer survivors. PMID:28676465
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orimoto, Yuuichi; Xie, Peng; Liu, Kai
2015-03-14
An Elongation-counterpoise (ELG-CP) method was developed for performing accurate and efficient interaction energy analysis and correcting the basis set superposition error (BSSE) in biosystems. The method was achieved by combining our developed ab initio O(N) elongation method with the conventional counterpoise method proposed for solving the BSSE problem. As a test, the ELG-CP method was applied to the analysis of the DNAs’ inter-strands interaction energies with respect to the alkylation-induced base pair mismatch phenomenon that causes a transition from G⋯C to A⋯T. It was found that the ELG-CP method showed high efficiency (nearly linear-scaling) and high accuracy with a negligibly small energy error in the total energy calculations (in the order of 10^-7–10^-8 hartree/atom) as compared with the conventional method during the counterpoise treatment. Furthermore, the magnitude of the BSSE was found to be ca. −290 kcal/mol for the calculation of a DNA model with 21 base pairs. This emphasizes the importance of BSSE correction when a limited size basis set is used to study the DNA models and compare small energy differences between them. In this work, we quantitatively estimated the inter-strands interaction energy for each possible step in the transition process from G⋯C to A⋯T by the ELG-CP method. It was found that the base pair replacement in the process only affects the interaction energy for a limited area around the mismatch position with a few adjacent base pairs. From the interaction energy point of view, our results showed that a base pair sliding mechanism possibly occurs after the alkylation of guanine to gain the maximum possible number of hydrogen bonds between the bases. In addition, the steps leading to the A⋯T replacement accompanied with replications were found to be unfavorable processes corresponding to ca. 10 kcal/mol loss in stabilization energy. The present study indicated that the ELG-CP method is promising for performing effective interaction energy analyses in biosystems.
NASA Astrophysics Data System (ADS)
Oh, Jung Hun; Kerns, Sarah; Ostrer, Harry; Powell, Simon N.; Rosenstein, Barry; Deasy, Joseph O.
2017-02-01
The biological cause of clinically observed variability of normal tissue damage following radiotherapy is poorly understood. We hypothesized that machine/statistical learning methods using single nucleotide polymorphism (SNP)-based genome-wide association studies (GWAS) would identify groups of patients of differing complication risk, and furthermore could be used to identify key biological sources of variability. We developed a novel learning algorithm, called pre-conditioned random forest regression (PRFR), to construct polygenic risk models using hundreds of SNPs, thereby capturing genomic features that confer small differential risk. Predictive models were trained and validated on a cohort of 368 prostate cancer patients for two post-radiotherapy clinical endpoints: late rectal bleeding and erectile dysfunction. The proposed method results in better predictive performance compared with existing computational methods. Gene ontology enrichment analysis and protein-protein interaction network analysis are used to identify key biological processes and proteins that were plausible based on other published studies. In conclusion, we confirm that novel machine learning methods can produce large predictive models (hundreds of SNPs), yielding clinically useful risk stratification models, as well as identifying important underlying biological processes in the radiation damage and tissue repair process. The methods are generally applicable to GWAS data and are not specific to radiotherapy endpoints.
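For readers unfamiliar with the data layout, the toy example below fits a plain scikit-learn random forest to a synthetic SNP matrix (0/1/2 minor-allele counts) with a binary toxicity endpoint. It is not the authors' pre-conditioned random forest regression (PRFR); the SNP count, causal-SNP model and endpoint are invented solely to show a cross-validated, GWAS-style risk prediction workflow.

```python
# Generic illustration only: a plain random forest on a synthetic SNP matrix.
# This is NOT the authors' PRFR algorithm; it only shows the data layout and a
# cross-validated risk prediction on an invented binary endpoint.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_patients, n_snps = 368, 500                  # cohort size from the abstract; SNP count assumed
X = rng.integers(0, 3, size=(n_patients, n_snps)).astype(float)   # 0/1/2 minor-allele counts

# toy endpoint: risk driven by a small set of causal SNPs plus noise
causal = rng.choice(n_snps, size=10, replace=False)
logit = X[:, causal].sum(axis=1) - 10 + rng.normal(0, 1, n_patients)
y = (logit > 0).astype(int)

clf = RandomForestClassifier(n_estimators=500, min_samples_leaf=5, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(auc.mean().round(3))                     # cross-validated discrimination of the toy model
```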
Extending rule-based methods to model molecular geometry and 3D model resolution.
Hoard, Brittany; Jacobson, Bruna; Manavi, Kasra; Tapia, Lydia
2016-08-01
Computational modeling is an important tool for the study of complex biochemical processes associated with cell signaling networks. However, it is challenging to simulate processes that involve hundreds of large molecules due to the high computational cost of such simulations. Rule-based modeling is a method that can be used to simulate these processes with reasonably low computational cost, but traditional rule-based modeling approaches do not include details of molecular geometry. The incorporation of geometry into biochemical models can more accurately capture details of these processes, and may lead to insights into how geometry affects the products that form. Furthermore, geometric rule-based modeling can be used to complement other computational methods that explicitly represent molecular geometry in order to quantify binding site accessibility and steric effects. We propose a novel implementation of rule-based modeling that encodes details of molecular geometry into the rules and binding rates. We demonstrate how rules are constructed according to the molecular curvature. We then perform a study of antigen-antibody aggregation using our proposed method. We simulate the binding of antibody complexes to binding regions of the shrimp allergen Pen a 1 using a previously developed 3D rigid-body Monte Carlo simulation, and we analyze the aggregate sizes. Then, using our novel approach, we optimize a rule-based model according to the geometry of the Pen a 1 molecule and the data from the Monte Carlo simulation. We use the distances between the binding regions of Pen a 1 to optimize the rules and binding rates. We perform this procedure for multiple conformations of Pen a 1 and analyze the impact of conformation and resolution on the optimal rule-based model. We find that the optimized rule-based models provide information about the average steric hindrance between binding regions and the probability that antibodies will bind to these regions. These optimized models quantify the variation in aggregate size that results from differences in molecular geometry and from model resolution.
Evaluation of Methods for Decladding LWR Fuel for a Pyroprocessing-Based Reprocessing Plant
1992-10-01
AD-A275 326, Oak Ridge National Laboratory, Dist. Category UC-526. Evaluation of Methods for Decladding LWR Fuel for a Pyroprocessing-Based Reprocessing Plant. W. D. Bond, J. C. Mailen, G. E. … An evaluation of decladding technologies has been performed to identify candidate decladding processes suitable for LWR fuel and compatible with downstream pyroprocesses.
System and method for air temperature control in an oxygen transport membrane based reactor
Kelly, Sean M
2016-09-27
A system and method for air temperature control in an oxygen transport membrane based reactor is provided. The system and method involves introducing a specific quantity of cooling air or trim air in between stages in a multistage oxygen transport membrane based reactor or furnace to maintain generally consistent surface temperatures of the oxygen transport membrane elements and associated reactors. The associated reactors may include reforming reactors, boilers or process gas heaters.
System and method for temperature control in an oxygen transport membrane based reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly, Sean M.
A system and method for temperature control in an oxygen transport membrane based reactor is provided. The system and method involves introducing a specific quantity of cooling air or trim air in between stages in a multistage oxygen transport membrane based reactor or furnace to maintain generally consistent surface temperatures of the oxygen transport membrane elements and associated reactors. The associated reactors may include reforming reactors, boilers or process gas heaters.
Robust Design of Sheet Metal Forming Process Based on Kriging Metamodel
NASA Astrophysics Data System (ADS)
Xie, Yanmin
2011-08-01
Nowadays, sheet metal forming process design is not a trivial task due to the complex issues to be taken into account (conflicting design goals, forming of complex shapes, and so on), and optimization methods have been widely applied in sheet metal forming. Proper design methods that reduce time and costs therefore have to be developed, mostly based on computer-aided procedures. At the same time, variations during manufacturing processes may significantly influence final product quality, rendering optimal solutions non-robust. In this paper, a small design of experiments is conducted to investigate how the stochastic behavior of noise factors affects drawing quality. The finite element software LS-DYNA is used to simulate the complex sheet metal stamping process. A Kriging metamodel is adopted to map the relation between input process parameters and part quality. The robust design model for the sheet metal forming process integrates adaptive importance sampling with the Kriging model in order to minimize the impact of the variations and achieve reliable process parameters. In the adaptive sampling, an improved criterion is used to indicate the direction in which additional training samples should be added to improve the Kriging model. Nonlinear test functions and a square stamping example (NUMISHEET'93) are employed to verify the proposed method. The final results indicate the feasibility of applying the proposed method to multi-response robust design.
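A hedged sketch of the Kriging (Gaussian process) metamodel step is shown below with scikit-learn: two forming parameters are mapped to a scalar quality measure, and the predictive standard deviation is the quantity an adaptive sampling criterion could exploit. The response function, parameter ranges and design size are synthetic stand-ins for finite element results.

```python
# Kriging (Gaussian process) metamodel sketch: map two forming parameters to a
# quality measure. The response function and ranges are synthetic stand-ins for
# LS-DYNA results; they are assumptions for the demonstration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def quality(x):                                # pretend FE output (e.g. thinning measure)
    bhf, mu = x[:, 0], x[:, 1]
    return np.sin(3 * bhf) + 0.5 * mu ** 2 + 0.1 * bhf * mu

rng = np.random.default_rng(4)
X_train = rng.uniform([0, 0], [1, 1], size=(20, 2))      # small design of experiments
y_train = quality(X_train)

kernel = ConstantKernel(1.0) * RBF(length_scale=[0.2, 0.2])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

X_new = rng.uniform([0, 0], [1, 1], size=(5, 2))
mean, std = gp.predict(X_new, return_std=True)           # std can guide adaptive sampling
print(np.round(mean, 3), np.round(std, 3))
```

The predictive standard deviation is what an adaptive criterion would use to decide where additional FE samples improve the metamodel most.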
Eubanks-Carter, Catherine; Gorman, Bernard S; Muran, J Christopher
2012-01-01
Analysis of change points in psychotherapy process could increase our understanding of mechanisms of change. In particular, naturalistic change point detection methods that identify turning points or breakpoints in time series data could enhance our ability to identify and study alliance ruptures and resolutions. This paper presents four categories of statistical methods for detecting change points in psychotherapy process: criterion-based methods, control chart methods, partitioning methods, and regression methods. Each method's utility for identifying shifts in the alliance is illustrated using a case example from the Beth Israel Psychotherapy Research program. Advantages and disadvantages of the various methods are discussed.
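As a toy illustration of the partitioning-style methods mentioned above, the sketch below searches a session-by-session alliance score series for the single split that minimizes the pooled within-segment variance. The data, segment lengths and score scale are invented; real analyses would use a formal change point procedure.

```python
# Toy partitioning-style change point search on an invented alliance score series:
# choose the split that minimizes the pooled within-segment sum of squares.
import numpy as np

def best_split(series):
    series = np.asarray(series, dtype=float)
    best_k, best_cost = None, np.inf
    for k in range(2, len(series) - 2):                    # leave a few points on each side
        cost = series[:k].var() * k + series[k:].var() * (len(series) - k)
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k, best_cost

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    alliance = np.concatenate([rng.normal(5.0, 0.4, 12),   # pre-shift sessions
                               rng.normal(3.6, 0.4, 8)])   # post-shift sessions (rupture-like)
    k, _ = best_split(alliance)
    print("candidate change point after session", k)
```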
Tran, Ngoc Han; Ngo, Huu Hao; Urase, Taro; Gin, Karina Yew-Hoong
2015-10-01
The presence of organic matter (OM) in raw wastewater, treated wastewater effluents, and natural water samples has been known to cause many problems in wastewater treatment and water reclamation processes, such as treatability, membrane fouling, and the formation of potentially toxic by-products during wastewater treatment. This paper summarizes the current knowledge on the methods for characterization and quantification of OM in water samples in relation to wastewater and water treatment processes including: (i) characterization based on the biodegradability; (ii) characterization based on particle size distribution; (iii) fractionation based on the hydrophilic/hydrophobic properties; (iv) characterization based on the molecular weight (MW) size distribution; and (v) characterization based on fluorescence excitation emission matrix. In addition, the advantages, disadvantages and applications of these methods are discussed in detail. The establishment of correlations among biodegradability, hydrophobic/hydrophilic fractions, MW size distribution of OM, membrane fouling and formation of toxic by-products potential is highly recommended for further studies. Copyright © 2015 Elsevier Ltd. All rights reserved.
A new iterative triclass thresholding technique in image segmentation.
Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin
2014-03-01
We present a new method in image segmentation that is based on Otsu's method but iteratively searches for subregions of the image for segmentation, instead of treating the full image as a whole region for processing. The iterative method starts with Otsu's threshold and computes the mean values of the two classes separated by the threshold. Based on the Otsu threshold and the two mean values, the method separates the image into three classes instead of two as the standard Otsu's method does. The first two classes are determined as the foreground and background and they will not be processed further. The third class is denoted as a to-be-determined (TBD) region that is processed at the next iteration. At the succeeding iteration, Otsu's method is applied on the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes, namely, foreground, background, and a new TBD region, which by definition is smaller than the previous TBD regions. Then, the new TBD region is processed in a similar manner. The process stops when the difference between the Otsu thresholds calculated in two successive iterations is less than a preset value. Then, all the intermediate foreground and background regions are, respectively, combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method can achieve better performance than the standard Otsu's method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
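A simplified sketch of the iterative triclass scheme described above is given below, using scikit-image's Otsu threshold. The convergence tolerance, iteration cap and synthetic test image are assumptions for the demonstration.

```python
# Simplified sketch of the iterative triclass scheme: each pass applies Otsu's
# threshold to the remaining to-be-determined (TBD) region, fixes clear foreground
# and background pixels via the two class means, and re-examines the middle band.
import numpy as np
from skimage.filters import threshold_otsu

def iterative_triclass(image, tol=1e-3, max_iter=20):
    foreground = np.zeros(image.shape, dtype=bool)
    background = np.zeros(image.shape, dtype=bool)
    tbd = np.ones(image.shape, dtype=bool)            # to-be-determined region
    prev_t = None
    for _ in range(max_iter):
        values = image[tbd]
        if values.size < 2:
            break
        t = threshold_otsu(values)
        if prev_t is not None and abs(t - prev_t) < tol:
            break
        prev_t = t
        mu_low = values[values <= t].mean()
        mu_high = values[values > t].mean()
        foreground |= tbd & (image >= mu_high)        # clearly above the upper class mean
        background |= tbd & (image <= mu_low)         # clearly below the lower class mean
        tbd = ~(foreground | background)              # middle band is re-examined next pass
    if prev_t is not None:
        foreground |= tbd & (image > prev_t)          # assign what remains with the last threshold
    return foreground

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    img = rng.normal(0.3, 0.05, (128, 128))
    img[40:60, 40:60] += 0.25                         # weak object on a noisy background
    mask = iterative_triclass(img)
    print(mask.sum())
```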
Smith predictor-based multiple periodic disturbance compensation for long dead-time processes
NASA Astrophysics Data System (ADS)
Tan, Fang; Li, Han-Xiong; Shen, Ping
2018-05-01
Many disturbance rejection methods have been proposed for processes with dead-time, but these existing methods may not work well under multiple periodic disturbances. In this paper, a multiple periodic disturbance rejection method is proposed under the Smith predictor configuration for processes with long dead-time. One feedback loop is added to compensate for periodic disturbances while retaining the advantage of the Smith predictor. With information on the disturbance spectrum, the added feedback loop can remove multiple periodic disturbances effectively. Robust stability can easily be maintained, as shown through rigorous analysis. Finally, simulation examples demonstrate the effectiveness and robustness of the proposed method for processes with long dead-time.
An automated model-based aim point distribution system for solar towers
NASA Astrophysics Data System (ADS)
Schwarzbözl, Peter; Rong, Amadeus; Macke, Ansgar; Säck, Jan-Peter; Ulmer, Steffen
2016-05-01
Distribution of heliostat aim points is a major task during central receiver operation, as the flux distribution produced by the heliostats varies continuously with time. Known methods for aim point distribution are mostly based on simple aim point patterns and focus on control strategies to meet local temperature and flux limits of the receiver. Lowering the peak flux on the receiver to avoid hot spots and maximizing thermal output are obviously competing targets that call for a comprehensive optimization process. This paper presents a model-based method for online aim point optimization that includes the current heliostat field mirror quality derived through an automated deflectometric measurement process.
Research on the processing technology of elongated holes based on rotary ultrasonic drilling
NASA Astrophysics Data System (ADS)
Tong, Yi; Chen, Jianhua; Sun, Lipeng; Yu, Xin; Wang, Xin
2014-08-01
Optical glass is hard, brittle and difficult to process. Based on the rotary ultrasonic drilling method, a single-factor study of drilling elongated holes in optical glass was carried out. The processing equipment was a DAMA ultrasonic machine, and the machining tools were electroplated diamond tools. Through detection and analysis of the processing quality and surface roughness, the process parameters of rotary ultrasonic drilling (spindle speed, amplitude, feed rate) were investigated, and the influence of the processing parameters on surface roughness was obtained, which provides a reference and basis for actual processing.
FPGA Implementation of the Coupled Filtering Method and the Affine Warping Method.
Zhang, Chen; Liang, Tianzhu; Mok, Philip K T; Yu, Weichuan
2017-07-01
In ultrasound image analysis, the speckle tracking methods are widely applied to study the elasticity of body tissue. However, "feature-motion decorrelation" still remains as a challenge for the speckle tracking methods. Recently, a coupled filtering method and an affine warping method were proposed to accurately estimate strain values, when the tissue deformation is large. The major drawback of these methods is the high computational complexity. Even the graphics processing unit (GPU)-based program requires a long time to finish the analysis. In this paper, we propose field-programmable gate array (FPGA)-based implementations of both methods for further acceleration. The capability of FPGAs on handling different image processing components in these methods is discussed. A fast and memory-saving image warping approach is proposed. The algorithms are reformulated to build a highly efficient pipeline on FPGA. The final implementations on a Xilinx Virtex-7 FPGA are at least 13 times faster than the GPU implementation on the NVIDIA graphic card (GeForce GTX 580).
Cell disruption and lipid extraction for microalgal biorefineries: A review.
Lee, Soo Youn; Cho, Jun Muk; Chang, Yong Keun; Oh, You-Kwan
2017-11-01
The microalgae-based biorefinement process has attracted much attention from academic and industrial researchers attracted to its biofuel, food and nutraceutical applications. In this paper, recent developments in cell-disruption and lipid-extraction methods, focusing on four biotechnologically important microalgal species (namely, Chlamydomonas, Haematococcus, Chlorella, and Nannochloropsis spp.), are reviewed. The structural diversity and rigidity of microalgal cell walls complicate the development of efficient downstream processing methods for cell-disruption and subsequent recovery of intracellular lipid and pigment components. Various mechanical, chemical and biological cell-disruption methods are discussed in detail and compared based on microalgal species and status (wet/dried), scale, energy consumption, efficiency, solvent extraction, and synergistic combinations. The challenges and prospects of the downstream processes for the future development of eco-friendly and economical microalgal biorefineries also are outlined herein. Copyright © 2017 Elsevier Ltd. All rights reserved.
Wavelet Filter Banks for Super-Resolution SAR Imaging
NASA Technical Reports Server (NTRS)
Sheybani, Ehsan O.; Deshpande, Manohar; Memarsadeghi, Nargess
2011-01-01
This paper discusses innovative wavelet-based filter banks designed to enhance the analysis of super-resolution Synthetic Aperture Radar (SAR) images using parametric spectral methods and signal classification algorithms. SAR finds applications in many of NASA's earth science fields such as deformation, ecosystem structure, dynamics of ice, snow and cold land processes, and surface water and ocean topography. Traditionally, standard methods such as the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT) have been used to extract images from SAR radar data. Because of the non-parametric nature of these methods, their resolution limitations, and their dependence on observation time, the use of spectral estimation and wavelet-based signal pre- and post-processing techniques to process SAR radar data has been proposed. Multi-resolution wavelet transforms and advanced spectral estimation techniques have proven to offer efficient solutions to this problem.
Musical rhythm and reading development: does beat processing matter?
Ozernov-Palchik, Ola; Patel, Aniruddh D
2018-05-20
There is mounting evidence for links between musical rhythm processing and reading-related cognitive skills, such as phonological awareness. This may be because music and speech are rhythmic: both involve processing complex sound sequences with systematic patterns of timing, accent, and grouping. Yet there is a salient difference between musical and speech rhythm: musical rhythm is often beat-based (built on an underlying grid of equal time intervals), while speech rhythm is not. Thus, the role of beat-based processing in the reading-rhythm relationship is not clear. Is there a distinct relation between beat-based processing mechanisms and reading-related language skills, or is the rhythm-reading link entirely due to shared mechanisms for processing non-beat-based aspects of temporal structure? We discuss recent evidence for a distinct link between beat-based processing and early reading abilities in young children, and suggest experimental designs that would allow one to investigate this relationship further and methodically. We propose that beat-based processing taps into a listener's ability to use rich contextual regularities to form predictions, a skill important for reading development. © 2018 New York Academy of Sciences.
Hurst Estimation of Scale Invariant Processes with Stationary Increments and Piecewise Linear Drift
NASA Astrophysics Data System (ADS)
Modarresi, N.; Rezakhah, S.
The characteristic feature of discrete scale invariant (DSI) processes is the invariance of their finite-dimensional distributions under dilation by a certain scaling factor. A DSI process with piecewise linear drift and stationary increments inside prescribed scale intervals is introduced and studied. To identify the structure of the process, we first determine the scale intervals and their linear drifts and eliminate them. Then, a new method for estimating the Hurst parameter of such DSI processes is presented and applied to a period of the Dow Jones indices. This method is based on a fixed number of equally spaced samples inside successive scale intervals. We also present an efficient method for estimating the Hurst parameter of self-similar processes with stationary increments, and we compare the performance of this method with the celebrated FA, DFA and DMA methods on simulated fractional Brownian motion (fBm) data.
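For reference, a compact detrended fluctuation analysis (DFA) sketch, one of the comparison methods (FA/DFA/DMA) named above, is shown below. It is run on increments of ordinary Brownian motion (white noise), for which the expected Hurst exponent is 0.5; simulating genuine fBm with other Hurst values would require a dedicated generator.

```python
# Compact DFA sketch for Hurst estimation, applied here to white-noise increments
# (ordinary Brownian motion), where the expected exponent is 0.5. Scale grid and
# window detrending order are common, assumed choices.
import numpy as np

def dfa_hurst(increments, scales=None):
    x = np.asarray(increments, dtype=float)
    profile = np.cumsum(x - x.mean())
    if scales is None:
        scales = np.unique(np.logspace(np.log10(8), np.log10(len(x) // 4), 12).astype(int))
    flucts = []
    for n in scales:
        n_seg = len(profile) // n
        segs = profile[: n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)               # remove a linear trend per window
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        flucts.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope                                       # DFA exponent ~ Hurst parameter of the increments

if __name__ == "__main__":
    noise = np.random.default_rng(7).normal(size=20000)
    print(round(dfa_hurst(noise), 3))                  # expected near 0.5
```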
Welding abilities of UFG metals
NASA Astrophysics Data System (ADS)
Morawiński, Łukasz; Chmielewski, Tomasz; Olejnik, Lech; Buffa, Gianluca; Campanella, Davide; Fratini, Livan
2018-05-01
Ultrafine Grained (UFG) metals are characterized by an average grain size below 1 µm and mostly high-angle grain boundaries. These materials exhibit exceptional improvements in strength, superplastic behaviour and, in some cases, enhanced biocompatibility. UFG metal barstock can be fabricated effectively by means of Severe Plastic Deformation (SPD) methods. However, welded joints with properties similar to the UFG base material are crucial for producing finished engineering components. Conventional welding methods based on local melting of the joined edges cannot be used, because the heat generated in the heat-affected zone degrades the UFG microstructure. Therefore, the possibility of obtaining joints of UFG materials with different shearing plane (SP) positions by means of friction welding processes, which do not exceed the melting temperature during the process, should be investigated. The article focuses on the Linear Friction Welding (LFW) method, which belongs to the innovative welding processes based on mixing of friction-heated material in the solid state. LFW is a welding process used to join bulk components. In the process, friction forces do work due to the high-frequency oscillation, and the pressure between the specimens is converted into thermal energy. The character and range of recrystallization can be controlled by changing the LFW parameters. An experimental study of UFG 1070 aluminum alloy welded by the LFW method indicates the possibility of reducing UFG structure degradation in the obtained joint. A laboratory-designed LFW machine has been used to weld the specimens with different contact pressures and oscillation frequencies.
Inverse problems and optimal experiment design in unsteady heat transfer processes identification
NASA Technical Reports Server (NTRS)
Artyukhin, Eugene A.
1991-01-01
Experimental-computational methods for estimating characteristics of unsteady heat transfer processes are analyzed. The methods are based on the principles of distributed parameter system identification. The theoretical basis of such methods is the numerical solution of nonlinear ill-posed inverse heat transfer problems and optimal experiment design problems. Numerical techniques for solving problems are briefly reviewed. The results of the practical application of identification methods are demonstrated when estimating effective thermophysical characteristics of composite materials and thermal contact resistance in two-layer systems.
The effect of individually-induced processes on image-based overlay and diffraction-based overlay
NASA Astrophysics Data System (ADS)
Oh, SeungHwa; Lee, Jeongjin; Lee, Seungyoon; Hwang, Chan; Choi, Gilheyun; Kang, Ho-Kyu; Jung, EunSeung
2014-04-01
In this paper, a set of wafers with separated processes was prepared and the overlay measurement results of two methods, IBO and DBO, were compared. Based on the experimental results, a theoretical approach to the relationship between overlay mark deformation and overlay variation is presented. Moreover, overlay reading simulation was used to verify and predict the overlay variation due to deformation of the overlay mark caused by the induced processes. This study provides an understanding of the effects of individual processes on overlay measurement error. Additionally, a guideline for selecting the proper overlay measurement scheme for a specific layer is presented.
Estimating Missing Unit Process Data in Life Cycle Assessment Using a Similarity-Based Approach.
Hou, Ping; Cai, Jiarui; Qu, Shen; Xu, Ming
2018-05-01
In life cycle assessment (LCA), collecting unit process data from the empirical sources (i.e., meter readings, operation logs/journals) is often costly and time-consuming. We propose a new computational approach to estimate missing unit process data solely relying on limited known data based on a similarity-based link prediction method. The intuition is that similar processes in a unit process network tend to have similar material/energy inputs and waste/emission outputs. We use the ecoinvent 3.1 unit process data sets to test our method in four steps: (1) dividing the data sets into a training set and a test set; (2) randomly removing certain numbers of data in the test set indicated as missing; (3) using similarity-weighted means of various numbers of most similar processes in the training set to estimate the missing data in the test set; and (4) comparing estimated data with the original values to determine the performance of the estimation. The results show that missing data can be accurately estimated when less than 5% data are missing in one process. The estimation performance decreases as the percentage of missing data increases. This study provides a new approach to compile unit process data and demonstrates a promising potential of using computational approaches for LCA data compilation.
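The similarity-weighted estimation idea can be sketched as follows: a missing exchange value of a unit process is filled with the similarity-weighted mean of the same exchange in the k most similar processes, with similarity computed over the commonly observed exchanges. The cosine similarity measure and the tiny process-by-exchange matrix below are assumptions for the demonstration, not the ecoinvent data or the exact similarity index used by the authors.

```python
# Similarity-weighted imputation sketch: fill a missing exchange of one unit
# process with the similarity-weighted mean of that exchange in the k most
# similar processes. Similarity measure and data are assumed for the demo.
import numpy as np

def estimate_missing(unit_matrix, row, col, k=3):
    """unit_matrix: processes x exchanges; np.nan marks the missing entry."""
    X = unit_matrix.copy()
    target = X[row]
    usable = ~np.isnan(target) & ~np.isnan(X).any(axis=0)   # columns observed everywhere
    sims = []
    for i, other in enumerate(X):
        if i == row or np.isnan(other[col]):
            continue
        a, b = target[usable], other[usable]
        sims.append((i, float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))))
    sims.sort(key=lambda s: -s[1])
    top = sims[:k]
    w = np.array([s for _, s in top])
    vals = np.array([X[i, col] for i, _ in top])
    return float((w * vals).sum() / w.sum())

if __name__ == "__main__":
    M = np.array([[1.0, 0.20, 5.0, 0.80],
                  [1.1, 0.25, 5.2, 0.90],
                  [0.9, 0.18, 4.8, 0.70],
                  [5.0, 2.00, 1.0, 3.00],
                  [1.05, 0.22, np.nan, 0.85]])            # last process misses exchange 2
    print(round(estimate_missing(M, row=4, col=2), 3))    # should land near 5.0
```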
A Software Platform for Post-Processing Waveform-Based NDE
NASA Technical Reports Server (NTRS)
Roth, Donald J.; Martin, Richard E.; Seebo, Jeff P.; Trinh, Long B.; Walker, James L.; Winfree, William P.
2007-01-01
Ultrasonic, microwave, and terahertz nondestructive evaluation imaging systems generally require the acquisition of waveforms at each scan point to form an image. For such systems, signal and image processing methods are commonly needed to extract information from the waves and improve resolution of, and highlight, defects in the image. Since some similarity exists for all waveform-based NDE methods, it would seem a common software platform containing multiple signal and image processing techniques to process the waveforms and images makes sense where multiple techniques, scientists, engineers, and organizations are involved. This presentation describes NASA Glenn Research Center's approach in developing a common software platform for processing waveform-based NDE signals and images. This platform is currently in use at NASA Glenn and at Lockheed Martin Michoud Assembly Facility for processing of pulsed terahertz and ultrasonic data. Highlights of the software operation will be given. A case study will be shown for use with terahertz data. The authors also request scientists and engineers who are interested in sharing customized signal and image processing algorithms to contribute to this effort by letting the authors code up and include these algorithms in future releases.
Estimating the Entropy of Binary Time Series: Methodology, Some Theory and a Simulation Study
NASA Astrophysics Data System (ADS)
Gao, Yun; Kontoyiannis, Ioannis; Bienenstock, Elie
2008-06-01
Partly motivated by entropy-estimation problems in neuroscience, we present a detailed and extensive comparison between some of the most popular and effective entropy estimation methods used in practice: The plug-in method, four different estimators based on the Lempel-Ziv (LZ) family of data compression algorithms, an estimator based on the Context-Tree Weighting (CTW) method, and the renewal entropy estimator. METHODOLOGY: Three new entropy estimators are introduced; two new LZ-based estimators, and the “renewal entropy estimator,” which is tailored to data generated by a binary renewal process. For two of the four LZ-based estimators, a bootstrap procedure is described for evaluating their standard error, and a practical rule of thumb is heuristically derived for selecting the values of their parameters in practice. THEORY: We prove that, unlike their earlier versions, the two new LZ-based estimators are universally consistent, that is, they converge to the entropy rate for every finite-valued, stationary and ergodic process. An effective method is derived for the accurate approximation of the entropy rate of a finite-state hidden Markov model (HMM) with known distribution. Heuristic calculations are presented and approximate formulas are derived for evaluating the bias and the standard error of each estimator. SIMULATION: All estimators are applied to a wide range of data generated by numerous different processes with varying degrees of dependence and memory. The main conclusions drawn from these experiments include: (i) For all estimators considered, the main source of error is the bias. (ii) The CTW method is repeatedly and consistently seen to provide the most accurate results. (iii) The performance of the LZ-based estimators is often comparable to that of the plug-in method. (iv) The main drawback of the plug-in method is its computational inefficiency; with small word-lengths it fails to detect longer-range structure in the data, and with longer word-lengths the empirical distribution is severely undersampled, leading to large biases.
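Of the estimators compared above, the plug-in method is the simplest to state; the sketch below estimates the entropy rate of a binary series from empirical word frequencies of length k. The word length and test sequences are illustrative choices, and the known undersampling bias for long words is exactly the drawback noted in the abstract.

```python
# Plug-in (maximum-likelihood) entropy rate estimate for a binary series:
# empirical block entropy of k-length words divided by k. Word length and the
# two test sequences are illustrative assumptions.
import numpy as np
from collections import Counter

def plugin_entropy_rate(bits, k=8):
    words = ["".join(map(str, bits[i:i + k])) for i in range(len(bits) - k + 1)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    block_entropy = -(p * np.log2(p)).sum()
    return block_entropy / k                          # bits per symbol

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    fair = rng.integers(0, 2, 100000)                 # i.i.d. fair bits, true rate = 1 bit/symbol
    biased = (rng.random(100000) < 0.1).astype(int)   # true rate about 0.469 bits/symbol
    print(round(plugin_entropy_rate(fair), 3), round(plugin_entropy_rate(biased), 3))
```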
Howson, Moira; Ritchie, Linda; Carter, Philip D; Parry, David Tudor; Koziol-McLain, Jane
2016-01-01
Background The use of Web-based interventions to deliver mental health and behavior change programs is increasingly popular. They are cost-effective, accessible, and generally effective. Often these interventions concern psychologically sensitive and challenging issues, such as depression or anxiety. The process by which a person receives and experiences therapy is important to understanding therapeutic process and outcomes. While the experience of the patient or client in traditional face-to-face therapy has been evaluated in a number of ways, there appeared to be a gap in the evaluation of patient experiences of therapeutic interventions delivered online. Evaluation of Web-based artifacts has focused either on evaluation of experience from a computer Web-design perspective through usability testing or on evaluation of treatment effectiveness. Neither of these methods focuses on the psychological experience of the person while engaged in the therapeutic process. Objective This study aimed to investigate what methods, if any, have been used to evaluate the in situ psychological experience of users of Web-based self-help psychosocial interventions. Methods A systematic literature review was undertaken of interdisciplinary databases with a focus on health and computer sciences. Studies that met a predetermined search protocol were included. Results Among 21 studies identified that examined psychological experience of the user, only 1 study collected user experience in situ. The most common method of understanding users’ experience was through semistructured interviews conducted posttreatment or questionnaires administrated at the end of an intervention session. The questionnaires were usually based on standardized tools used to assess user experience with traditional face-to-face treatment. Conclusions There is a lack of methods specified in the literature to evaluate the interface between Web-based mental health or behavior change artifacts and users. Main limitations in the research were the nascency of the topic and cross-disciplinary nature of the field. There is a need to develop and deliver methods of understanding users’ psychological experiences while using an intervention. PMID:27363519
Fabrication of starch-based microparticles by an emulsification-crosslinking method
USDA-ARS?s Scientific Manuscript database
Starch-based microparticles (MPs) fabricated by a water-in-water (w/w) emulsification-crosslinking method could be used as a controlled-release delivery vehicle for food bioactives. Due to the processing route without the use of toxic organic solvents, it is expected that these microparticles can be...
Problem-Based Learning and Structural Redesign in a Choral Methods Course
ERIC Educational Resources Information Center
Freer, Patrick
2017-01-01
This article describes the process of structural redesign of an undergraduate music education choral methods course. A framework incorporating Problem-based Learning was developed to promote individualized student learning. Ten students participated in the accompanying research study, contributing an array of written and spoken comments as well as…
USDA-ARS?s Scientific Manuscript database
Due to the availability of numerous spectral, spatial, and contextual features, the determination of optimal features and class separabilities can be a time consuming process in object-based image analysis (OBIA). While several feature selection methods have been developed to assist OBIA, a robust c...
Random element method for numerical modeling of diffusional processes
NASA Technical Reports Server (NTRS)
Ghoniem, A. F.; Oppenheim, A. K.
1982-01-01
The random element method is a generalization of the random vortex method that was developed for the numerical modeling of momentum transport processes as expressed in terms of the Navier-Stokes equations. The method is based on the concept that random walk, as exemplified by Brownian motion, is the stochastic manifestation of diffusional processes. The algorithm based on this method is grid-free and does not require the diffusion equation to be discretized over a mesh; it is thus devoid of the numerical diffusion associated with finite difference methods. Moreover, the algorithm is self-adaptive in space and explicit in time, resulting in an improved numerical resolution of gradients as well as a simple and efficient computational procedure. The method is applied here to an assortment of problems of diffusion of momentum and energy in one dimension, as well as heat conduction in two dimensions, in order to assess its validity and accuracy. The numerical solutions obtained are found to be in good agreement with the exact solutions, except for a statistical error introduced by using a finite number of elements; this error can be reduced by increasing the number of elements or by ensemble averaging over a number of solutions.
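The following minimal sketch illustrates the random-walk idea underlying the method for one-dimensional diffusion from a point source; the parameter values and the comparison against the exact Gaussian variance are illustrative assumptions, and the sketch omits the self-adaptive, grid-free machinery of the actual algorithm.

```python
import numpy as np

# Minimal 1-D random-walk illustration of diffusion (assumed parameters).
D, dt, n_steps, n_elements = 1.0, 1e-3, 1000, 50_000
rng = np.random.default_rng(1)

x = np.zeros(n_elements)              # all elements start at the origin
for _ in range(n_steps):
    # Each step is a Gaussian displacement with variance 2*D*dt (Brownian motion).
    x += rng.normal(0.0, np.sqrt(2.0 * D * dt), size=n_elements)

t = n_steps * dt
# Exact solution: a Gaussian profile with variance 2*D*t; compare the empirical variance.
print("empirical variance:", x.var(), "exact:", 2.0 * D * t)
```

The gap between the two printed numbers shrinks as the number of elements grows, mirroring the statistical error discussed in the abstract.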
Three-dimensional motor schema based navigation
NASA Technical Reports Server (NTRS)
Arkin, Ronald C.
1989-01-01
Reactive schema-based navigation is possible in space domains by extending the methods developed for ground-based navigation found within the Autonomous Robot Architecture (AuRA). Reformulation of two dimensional motor schemas for three dimensional applications is a straightforward process. The manifold advantages of schema-based control persist, including modular development, amenability to distributed processing, and responsiveness to environmental sensing. Simulation results show the feasibility of this methodology for space docking operations in a cluttered work area.
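As a hedged sketch of the general idea (not AuRA's actual implementation), each active motor schema can be viewed as producing a 3-D velocity vector, and the commanded motion is their sum; the schema names, gains, and safety radius below are hypothetical.

```python
import numpy as np

def move_to_goal(pos, goal, gain=1.0):
    """Attractive schema: unit vector toward the goal, scaled by a gain."""
    v = goal - pos
    n = np.linalg.norm(v)
    return gain * v / n if n > 0 else np.zeros(3)

def avoid_obstacle(pos, obstacle, safety_radius=2.0, gain=1.0):
    """Repulsive schema: grows as the obstacle gets closer than safety_radius."""
    v = pos - obstacle
    d = np.linalg.norm(v)
    if d >= safety_radius or d == 0:
        return np.zeros(3)
    return gain * (safety_radius - d) / safety_radius * v / d

# Each active schema outputs a 3-D vector; the commanded velocity is their sum.
pos, goal, obstacle = np.array([0., 0., 0.]), np.array([10., 5., 3.]), np.array([4., 2., 1.])
velocity = move_to_goal(pos, goal) + avoid_obstacle(pos, obstacle)
print(velocity)
```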
[Optimization of end-tool parameters based on robot hand-eye calibration].
Zhang, Lilong; Cao, Tong; Liu, Da
2017-04-01
A new one-time registration method was developed in this research for hand-eye calibration of a surgical robot, to simplify the operation process and reduce the preparation time. A new, practical method is also introduced to optimize the end-tool parameters of the surgical robot, based on an analysis of the error sources in this registration method. In the one-time registration method, a marker on the end-tool of the robot is first recognized by a fixed binocular camera, and the orientation and position of the marker are calculated from the joint parameters of the robot. The relationship between the camera coordinate system and the robot base coordinate system can then be established to complete the hand-eye calibration. Because of manufacturing and assembly errors of the robot end-tool, an error equation was established with the transformation matrix between the robot end coordinate system and the robot end-tool coordinate system as the variable, and numerical optimization was employed to optimize the end-tool parameters of the robot. The experimental results showed that the one-time registration method significantly improves the efficiency of robot hand-eye calibration compared with existing methods, and that the parameter optimization method significantly improves its absolute positioning accuracy, which meets the requirements of clinical surgery.
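A minimal sketch of the transform chain behind such a one-time registration is shown below, assuming the marker pose is available both in the camera frame (from the binocular camera) and in the robot base frame (from the joint parameters via forward kinematics); the helper name make_T and the numeric poses are illustrative, and the error-equation optimization described in the abstract is not reproduced here.

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Assumed inputs (illustrative values):
# T_cam_marker: marker pose measured in the camera frame by the binocular camera.
# T_base_marker: marker pose in the robot base frame from the joint parameters.
T_cam_marker = make_T(np.eye(3), np.array([0.1, 0.2, 0.8]))
T_base_marker = make_T(np.eye(3), np.array([0.5, -0.1, 0.3]))

# Camera-to-base transform: T_base_cam = T_base_marker @ inv(T_cam_marker).
T_base_cam = T_base_marker @ np.linalg.inv(T_cam_marker)
print(T_base_cam)
```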
A catalog of automated analysis methods for enterprise models.
Florez, Hector; Sánchez, Mario; Villalobos, Jorge
2016-01-01
Enterprise models are created for documenting and communicating the structure and state of the Business and Information Technology elements of an enterprise. After models are completed, they are mainly used to support analysis. Model analysis is an activity typically based on human skills, and because of the size and complexity of the models, this process can be complicated and omissions or miscalculations are very likely. This situation has fostered research on automated analysis methods for supporting analysts in enterprise analysis processes. By reviewing the literature, we found several analysis methods; nevertheless, they are based on specific situations and different metamodels, so some analysis methods might not be applicable to all enterprise models. This paper presents the compilation (literature review), classification, structuring, and characterization of automated analysis methods for enterprise models, expressing them in a standardized modeling language. In addition, we have implemented the analysis methods in our modeling tool.
Integration of mask and silicon metrology in DFM
NASA Astrophysics Data System (ADS)
Matsuoka, Ryoichi; Mito, Hiroaki; Sugiyama, Akiyuki; Toyoda, Yasutaka
2009-03-01
We have developed a highly integrated method of mask and silicon metrology. The method adopts a metrology management system based on DBM (Design Based Metrology), that is, highly accurate contouring created by the edge detection algorithms used in mask CD-SEM and silicon CD-SEM. We have verified the high accuracy, stability, and reproducibility of the integration in experiments; the accuracy is comparable with that of mask and silicon CD-SEM metrology. In this report, we introduce the experimental results and the application. As the shrinkage of design rules for semiconductor devices advances, OPC (Optical Proximity Correction) becomes aggressively dense in RET (Resolution Enhancement Technology). However, from the viewpoint of DFM (Design for Manufacturability), the cost of data processing for advanced MDP (Mask Data Preparation) and mask production is a problem. Such a trade-off between RET and mask production is a big issue in the semiconductor market, especially in the mask business. In the silicon device production process, information sharing between the design section and the production section is not completely organized: design data created with OPC and MDP should be linked to process control in production, but design data and process control data are optimized independently. Thus, we provide a DFM solution: advanced integration of mask metrology and silicon metrology. The system we propose here is composed of the following. 1) Design based recipe creation: patterns on the design data are specified for metrology; this step is fully automated since it is interfaced with hot-spot coordinate information detected by various verification methods. 2) Design based image acquisition: images of the mask and silicon are acquired automatically by a recipe based on the design pattern of the CD-SEM; this is a robust automated step because a wide range of design data is used for the image acquisition. 3) Contour profiling and GDS data generation: an image profiling process is applied to the acquired image based on the profiling method of the field-proven CD metrology algorithm; the detected edges are then converted to GDSII format, a standard format for design data, and utilized by various DFM systems such as simulation. Namely, by integrating the pattern shapes of the mask and silicon formed during the manufacturing process into GDSII format, it becomes possible to bridge highly accurate pattern profile information over to the design field of various EDA systems. These steps are fully integrated with the design data and automated, and bi-directional cross-probing between mask data and process control data is enabled by linking them. This method is a solution for total optimization that covers design, MDP, mask production, and silicon device production, and is therefore regarded as a strategic DFM approach in semiconductor metrology.
[A novel method based on Y-shaped cotton-polyester thread microfluidic channel].
Wang, Lu; Shi, Yan-ru; Yan, Hong-tao
2014-08-01
A novel method based on a Y-shaped microfluidic channel is proposed for the first time in this study. The microfluidic channel was made of two cotton-polyester threads, exploiting their capillary effect to carry the determination solutions. We developed a special device to hold the Y-shaped microfluidic channel, through which the length and the tilt angle of the channel can be adjusted as required. Spectrophotometry was compared with a Scan-Adobe Photoshop software processing method: the former had a lower detection limit, while the latter showed advantages in convenience, speed of operation, and lower sample consumption. The proposed method was applied to the determination of nitrite. The linear ranges and detection limits are 1.0-70 micromol x L(-1) and 0.66 micromol x L(-1) (spectrophotometry) and 50-450 micromol x L(-1) and 45.10 micromol x L(-1) (Scan-Adobe Photoshop software processing method), respectively. The method has been successfully used for the determination of nitrite in soil samples and moat water, with recoveries between 96.7% and 104%. The proposed method proved to be a low-cost, rapid, and convenient analytical method with broad application prospects.
Inference of the sparse kinetic Ising model using the decimation method
NASA Astrophysics Data System (ADS)
Decelle, Aurélien; Zhang, Pan
2015-05-01
In this paper we study the inference of the kinetic Ising model on sparse graphs by the decimation method. The decimation method, which was first proposed in Decelle and Ricci-Tersenghi [Phys. Rev. Lett. 112, 070603 (2014), 10.1103/PhysRevLett.112.070603] for the static inverse Ising problem, tries to recover the topology of the inferred system by setting the weakest couplings to zero iteratively. During the decimation process the likelihood function is maximized over the remaining couplings. Unlike the ℓ1-optimization-based methods, the decimation method does not use the Laplace distribution as a heuristic choice of prior to select a sparse solution. In our case, the whole process can be done automatically without fixing any parameters by hand. We show that in the dynamical inference problem, where the task is to reconstruct the couplings of an Ising model given the data, the decimation process can be applied naturally within a maximum-likelihood optimization algorithm, as opposed to the static case where a pseudolikelihood method needs to be adopted. We also use extensive numerical studies to validate the accuracy of our methods in dynamical inference problems. Our results illustrate that, on various topologies and with different distributions of couplings, the decimation method outperforms the widely used ℓ1-optimization-based methods.
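The sketch below illustrates the decimation idea for one row of kinetic-Ising couplings under a simplified logistic transition model: fit the couplings by maximum likelihood, prune the weakest surviving ones, and refit. The learning rate, pruning fraction, and fixed number of rounds are illustrative assumptions rather than the paper's exact procedure (which decimates without hand-tuned parameters).

```python
import numpy as np

def fit_row(S_prev, s_next, active, n_iter=200, lr=0.1):
    """Maximum-likelihood fit of one coupling row J_i for a kinetic Ising model
    with P(s_i(t+1)=+1 | s(t)) = sigmoid(2 * J_i . s(t)), restricted to 'active' couplings."""
    J = np.zeros(S_prev.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-2.0 * S_prev @ J))            # P(s_next = +1)
        grad = 2.0 * S_prev.T @ ((s_next + 1) / 2 - p) / len(s_next)
        J += lr * grad * active                                 # gradient ascent on log-likelihood
    return J * active

def decimate_row(S_prev, s_next, n_rounds=5, frac=0.2):
    """Decimation: repeatedly refit, then set the weakest surviving couplings to zero."""
    active = np.ones(S_prev.shape[1])
    for _ in range(n_rounds):
        J = fit_row(S_prev, s_next, active)
        alive = np.flatnonzero(active)
        k = max(1, int(frac * len(alive)))
        weakest = alive[np.argsort(np.abs(J[alive]))[:k]]
        active[weakest] = 0.0                                   # prune the weakest couplings
    return fit_row(S_prev, s_next, active)

# Toy usage: data generated from a sparse coupling row (assumed for illustration).
rng = np.random.default_rng(0)
S_prev = rng.choice([-1.0, 1.0], size=(5000, 10))
J_true = np.zeros(10); J_true[[1, 4]] = [0.8, -0.6]
p_true = 1.0 / (1.0 + np.exp(-2.0 * S_prev @ J_true))
s_next = np.where(rng.random(5000) < p_true, 1.0, -1.0)
print(decimate_row(S_prev, s_next).round(2))
```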
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Ye, Ming; Walker, Anthony P.
Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty but ignore the model uncertainty for process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is also simulated by two models with different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
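A minimal Monte Carlo sketch of such a process sensitivity index is given below for a toy output that combines two hypothetical recharge models and two hypothetical geology models with assumed model weights and parameter distributions; it estimates the variance of the conditional mean over one process relative to the total output variance, and is not the authors' groundwater reactive transport application.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical process models: two recharge models and two geology models, each
# with its own random parameter; the model weights are assumed prior probabilities.
recharge_models = [lambda p: 0.8 * p, lambda p: 0.5 * p + 1.0]
geology_models = [lambda k: 2.0 * k, lambda k: k ** 1.5]
w_recharge, w_geology = [0.5, 0.5], [0.6, 0.4]

def sample_given_recharge(r_idx, p, n):
    """Sample the toy output over the geology process (model choice and parameter),
    holding the recharge process model and its parameter fixed."""
    g = rng.choice(len(geology_models), size=n, p=w_geology)
    k = rng.lognormal(0.0, 0.3, size=n)
    geo = np.array([geology_models[gi](ki) for gi, ki in zip(g, k)])
    return recharge_models[r_idx](p) + geo

# Process sensitivity index for recharge: variance (over recharge models and their
# parameters) of the conditional mean, divided by the total output variance.
n_outer, n_inner = 1000, 200
cond_means, all_outputs = [], []
for _ in range(n_outer):
    r = rng.choice(len(recharge_models), p=w_recharge)
    p = rng.normal(1.0, 0.2)
    y = sample_given_recharge(r, p, n_inner)
    cond_means.append(y.mean())
    all_outputs.append(y)
ps_recharge = np.var(cond_means) / np.var(np.concatenate(all_outputs))
print("process sensitivity index (recharge):", round(ps_recharge, 3))
```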
UOE Pipe Manufacturing Process Simulation: Equipment Designing and Construction
NASA Astrophysics Data System (ADS)
Delistoian, Dmitri; Chirchor, Mihael
2017-12-01
The UOE pipe manufacturing process directly influences pipeline resilience and operating capacity. At present, the most widespread pipe manufacturing method is UOE, which is based on cold forming. After each technological step, a certain level of stress and strain appears. To study the pipe stress and strain, special equipment was designed and constructed that simulates the entire technological process. The UOE pipe equipment is dedicated to manufacturing longitudinally submerged arc welded DN 400 (16 inch) steel pipe.
The modeling of MMI structures for signal processing applications
NASA Astrophysics Data System (ADS)
Le, Thanh Trung; Cahill, Laurence W.
2008-02-01
Microring resonators are promising candidates for photonic signal processing applications. However, almost all resonators that have been reported so far use directional couplers or 2×2 multimode interference (MMI) couplers as the coupling element between the ring and the bus waveguides. In this paper, instead of using 2×2 couplers, novel structures for microring resonators based on 3×3 MMI couplers are proposed. The characteristics of the device are derived using the modal propagation method, and the device parameters are optimized using numerical methods. Optical switches and filters on Silicon on Insulator (SOI) have then been designed and analyzed. This device can become a new basic component for further applications in optical signal processing. The paper concludes with some further examples of photonic signal processing circuits based on MMI couplers.
On recursive least-squares filtering algorithms and implementations. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Hsieh, Shih-Fu
1990-01-01
In many real-time signal processing applications, fast and numerically stable algorithms for solving least-squares problems are necessary and important. In particular, under non-stationary conditions, these algorithms must be able to adapt themselves to reflect changes in the system and make appropriate adjustments to achieve optimum performance. Among existing algorithms, the QR-decomposition (QRD)-based recursive least-squares (RLS) methods have been shown to be useful and effective for adaptive signal processing. In order to increase the speed of processing and achieve a high throughput rate, many algorithms are being vectorized and/or pipelined to facilitate high degrees of parallelism. A time-recursive formulation of RLS filtering employing block QRD is considered first. Several methods, including a new non-continuous windowing scheme based on selectively rejecting contaminated data, were investigated for adaptive processing. Based on systolic triarrays, many other forms of systolic arrays are shown to be capable of implementing different algorithms. Various updating and downdating systolic algorithms and architectures for RLS filtering are examined and compared in detail, including the Householder reflector, the Gram-Schmidt procedure, and Givens rotation. A unified approach encompassing existing square-root-free algorithms is also proposed. For the sinusoidal spectrum estimation problem, a judicious method of separating the noise from the signal is of great interest. Various truncated QR methods are proposed for this purpose and compared to the truncated SVD method. Computer simulations provided for detailed comparisons show the effectiveness of these methods. This thesis deals with fundamental issues of numerical stability, computational efficiency, adaptivity, and VLSI implementation for RLS filtering problems. In all, various new and modified algorithms and architectures are proposed and analyzed; the significance of any new method depends crucially on the specific application.
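For orientation, the sketch below shows a standard exponentially weighted RLS update in its covariance form; this is an assumption-level illustration of the underlying recursion, not the numerically superior QRD-based or systolic-array formulations examined in the thesis.

```python
import numpy as np

class RLSFilter:
    """Standard exponentially weighted recursive least-squares (covariance form).
    Illustrative only; the QRD-based forms avoid explicitly propagating the
    inverse correlation matrix and have better numerical stability."""
    def __init__(self, order, lam=0.99, delta=100.0):
        self.w = np.zeros(order)          # filter weights
        self.P = delta * np.eye(order)    # inverse correlation matrix estimate
        self.lam = lam                    # forgetting factor

    def update(self, x, d):
        """One update with regressor vector x and desired response d."""
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)      # gain vector
        e = d - self.w @ x                # a priori error
        self.w += k * e
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return e

# Example: identify a 3-tap FIR system from noisy data.
rng = np.random.default_rng(3)
h = np.array([0.5, -0.3, 0.1])
f = RLSFilter(order=3)
x_hist = np.zeros(3)
for _ in range(2000):
    x_hist = np.roll(x_hist, 1); x_hist[0] = rng.normal()
    d = h @ x_hist + 0.01 * rng.normal()
    f.update(x_hist, d)
print(f.w)   # should be close to h
```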
Hierarchical semi-numeric method for pairwise fuzzy group decision making.
Marimin, M; Umano, M; Hatono, I; Tamura, H
2002-01-01
Gradual improvements to a single-level semi-numeric method, i.e., representing linguistic label preferences by fuzzy-set computation for pairwise fuzzy group decision making, are summarized. The method is extended to solve pairwise fuzzy group decision-making problems with a multiple-criteria hierarchical structure. The problems are hierarchically structured into focus, criteria, and alternatives. Decision makers express their evaluations of criteria, and of alternatives with respect to each criterion, using linguistic labels. The labels are converted into, and processed as, triangular fuzzy numbers (TFNs). Evaluations of criteria yield relative criteria weights. Evaluations of the alternatives with respect to each criterion yield a degree of preference for each alternative or a degree of satisfaction for each preference value. Using a neat ordered weighted average (OWA) or a fuzzy weighted average operator, the solutions obtained for each criterion are aggregated into final solutions. The hierarchical semi-numeric method is suitable for solving larger and more complex pairwise fuzzy group decision-making problems. The proposed method has been verified and applied to some real cases and is compared with Saaty's (1996) analytic hierarchy process (AHP) method.
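A minimal sketch of the label-to-TFN conversion and fuzzy weighted aggregation is given below; the linguistic scale, the component-wise weighted average, and the centroid defuzzification are common simplifications assumed for illustration, not necessarily the paper's exact operators.

```python
import numpy as np

# Triangular fuzzy numbers (TFNs) as (low, mode, high); an assumed linguistic scale.
LABELS = {"low": (0.0, 0.0, 0.3), "medium": (0.2, 0.5, 0.8), "high": (0.7, 1.0, 1.0)}

def tfn_weighted_average(labels, weights):
    """Fuzzy weighted average of TFN-encoded linguistic evaluations,
    computed component-wise (a common simplification)."""
    tfns = np.array([LABELS[l] for l in labels], dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return tuple(w @ tfns)

def defuzzify(tfn):
    """Centroid defuzzification of a TFN."""
    return sum(tfn) / 3.0

# Three decision makers rate one alternative on a criterion; equal weights assumed.
agg = tfn_weighted_average(["medium", "high", "medium"], [1, 1, 1])
print(agg, defuzzify(agg))
```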
Mutual information based feature selection for medical image retrieval
NASA Astrophysics Data System (ADS)
Zhi, Lijia; Zhang, Shaomin; Li, Yan
2018-04-01
In this paper, the authors propose a mutual information based method for lung CT image retrieval. The method is designed to adapt to different datasets and different retrieval tasks. For practical applicability, the method avoids using a large amount of training data. Instead, with a well-designed training process and robust fundamental features and measurements, it achieves promising performance while keeping the training computation economical. Experimental results show that the method has potential practical value for routine clinical application.
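As a hedged illustration of mutual information based feature ranking (the general technique, not the authors' specific retrieval pipeline), the sketch below estimates mutual information with simple histograms and keeps the top-scoring features; the bin count and toy data are assumptions.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram-based estimate of mutual information I(X;Y) in bits."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])))

def select_features(X, y, k):
    """Rank features by mutual information with the label and keep the top k."""
    scores = np.array([mutual_information(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k], scores

# Toy example: only the first feature is informative about the label.
rng = np.random.default_rng(4)
y = rng.integers(0, 2, 1000)
X = np.column_stack([y + 0.3 * rng.normal(size=1000), rng.normal(size=1000)])
print(select_features(X, y, k=1))
```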
Crop Row Detection in Maize Fields Inspired on the Human Visual Perception
Romeo, J.; Pajares, G.; Montalvo, M.; Guerrero, J. M.; Guijarro, M.; Ribeiro, A.
2012-01-01
This paper proposes a new method, oriented to real-time image processing, for identifying crop rows in images of maize fields. The vision system is designed to be installed onboard a mobile agricultural vehicle, and is therefore subjected to gyrations, vibrations, and undesired movements. The images are captured under perspective and are affected by these undesired effects. The image processing consists of two main processes: image segmentation and crop row detection. The first applies a threshold to separate green plants or pixels (crops and weeds) from the rest (soil, stones, and others). It is based on a fuzzy clustering process, which provides the threshold to be applied during normal operation. The crop row detection applies a method based on the image perspective projection that searches for the maximum accumulation of segmented green pixels along straight alignments; these determine the expected crop lines in the images. The method is robust enough to work under the above-mentioned undesired effects and compares favorably against the well-tested Hough transformation for line detection. PMID:22623899
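A minimal sketch of the segmentation-plus-accumulation idea is shown below; it uses the common excess-green index with a fixed threshold instead of the paper's fuzzy-clustering-derived threshold, and sums segmented pixels along image columns rather than along perspective-projected alignments, so it is an illustration of the concept only.

```python
import numpy as np

def segment_green(rgb, threshold):
    """Segment vegetation pixels with the excess-green index ExG = 2g - r - b,
    where r, g, b are channel values normalized per pixel."""
    rgb = rgb.astype(float)
    s = rgb.sum(axis=2) + 1e-9
    r, g, b = rgb[..., 0] / s, rgb[..., 1] / s, rgb[..., 2] / s
    exg = 2.0 * g - r - b
    return exg > threshold   # boolean vegetation mask

def column_accumulation(mask):
    """Accumulate segmented pixels along image columns; peaks suggest crop rows
    (a simplification of searching alignments under perspective projection)."""
    return mask.sum(axis=0)

# Usage with a synthetic image (a real pipeline would load a camera frame).
rng = np.random.default_rng(5)
img = rng.integers(0, 255, size=(120, 160, 3), dtype=np.uint8)
mask = segment_green(img, threshold=0.05)
print(column_accumulation(mask).argmax())
```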
An Accurate and Stable FFT-based Method for Pricing Options under Exp-Lévy Processes
NASA Astrophysics Data System (ADS)
Ding, Deng; Chong U, Sio
2010-05-01
An accurate and stable method for pricing European options in exp-Lévy models is presented. The main idea of this new method is to combine the quadrature technique with the Carr-Madan Fast Fourier Transform methods. The theoretical analysis shows that the overall complexity of the new method is still O(N log N) with N grid points, as for the fast Fourier transform methods. Numerical experiments for different exp-Lévy processes also show that the proposed numerical algorithm is accurate and stable for small strike prices K. This develops and improves upon the Carr-Madan method.
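For reference, the following sketch implements the standard Carr-Madan FFT pricing of European calls, using the Black-Scholes characteristic function as a simple exp-Lévy example; the damping factor, grid sizes, and parameter values are illustrative assumptions, and the quadrature refinement proposed in the paper is not included.

```python
import numpy as np

def bs_char_fn(u, S0, r, sigma, T):
    """Characteristic function of log S_T under Black-Scholes (a simple exp-Levy case)."""
    mu = np.log(S0) + (r - 0.5 * sigma ** 2) * T
    return np.exp(1j * u * mu - 0.5 * sigma ** 2 * u ** 2 * T)

def carr_madan_fft(S0=100.0, r=0.05, sigma=0.2, T=1.0, alpha=1.5, N=4096, eta=0.25):
    """Carr-Madan FFT pricing of European calls on a log-strike grid."""
    v = eta * np.arange(N)                                  # integration grid
    lam = 2.0 * np.pi / (N * eta)                           # log-strike spacing
    b = 0.5 * N * lam                                       # log-strike range [-b, b)
    k = -b + lam * np.arange(N)                             # log strikes
    denom = alpha ** 2 + alpha - v ** 2 + 1j * (2 * alpha + 1) * v
    psi = np.exp(-r * T) * bs_char_fn(v - 1j * (alpha + 1), S0, r, sigma, T) / denom
    # Simpson's rule weights improve the accuracy of the discretized integral.
    w = (3.0 + (-1.0) ** np.arange(1, N + 1)) / 3.0
    w[0] = 1.0 / 3.0
    x = np.exp(1j * v * b) * psi * eta * w
    calls = np.exp(-alpha * k) / np.pi * np.real(np.fft.fft(x))
    return np.exp(k), calls                                 # strikes and call prices

strikes, calls = carr_madan_fft()
i = np.argmin(np.abs(strikes - 100.0))
print(strikes[i], calls[i])    # close to the Black-Scholes at-the-money value (about 10.45)
```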
High gain durable anti-reflective coating
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maghsoodi, Sina; Brophy, Brenor L.; Colson, Thomas E.
Disclosed herein are polysilsesquioxane-based anti-reflective coating (ARC) compositions, methods of preparation, and methods of deposition on a substrate. In one embodiment, the polysilsesquioxane of this disclosure is prepared in a two-step process of acid catalyzed hydrolysis of organoalkoxysilane followed by addition of tetralkoxysilane that generates silicone polymers with >40 mol % silanol based on Si-NMR. These high silanol siloxane polymers are stable and have a long shelf-life in polar organic solvents at room temperature. Also disclosed are low refractive index ARC made from these compositions with and without additives such as porogens, templates, thermal radical initiator, photo radical initiators, crosslinkers, Si--OH condensation catalyst and nano-fillers. Also disclosed are methods and apparatus for applying coatings to flat substrates including substrate pre-treatment processes, coating processes and coating curing processes including skin-curing using hot-air knives. Also disclosed are coating compositions and formulations for highly tunable, durable, highly abrasion-resistant functionalized anti-reflective coatings.
Using fuzzy fractal features of digital images for material surface analysis
NASA Astrophysics Data System (ADS)
Privezentsev, D. G.; Zhiznyakov, A. L.; Astafiev, A. V.; Pugin, E. V.
2018-01-01
Edge detection is an important task in image processing. There are many approaches in this area: the Sobel and Canny operators, among others. One promising technique in image processing is the use of fuzzy logic and fuzzy set theory, which allows processing quality to be increased by representing information in fuzzy form. Most existing fuzzy image processing methods switch to fuzzy sets at very late stages, which leads to the loss of some useful information. In this paper, a novel method of edge detection based on a fuzzy image representation and fuzzy pixels is proposed. With this approach, the image is converted to fuzzy form in the first step. Different approaches to this conversion are described, and several membership functions for fuzzy pixel description, together with requirements for their form, are given. A novel approach to edge detection based on the Sobel operator and the fuzzy image representation is proposed. Experimental testing of the developed method was performed on remote sensing images.
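A minimal sketch of the idea, converting the image to a fuzzy membership representation first and then applying the Sobel operator, is given below; the linear membership function, the threshold, and the synthetic test image are illustrative assumptions (the paper discusses several membership functions).

```python
import numpy as np
from scipy import ndimage

def to_fuzzy(img):
    """Convert a grayscale image to a fuzzy representation: each pixel's membership
    in the 'bright' fuzzy set via a simple linear membership function."""
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-9)

def fuzzy_sobel_edges(img, edge_threshold=0.2):
    """Edge detection on the fuzzy (membership) image with the Sobel operator."""
    mu = to_fuzzy(img)
    gx = ndimage.sobel(mu, axis=1)
    gy = ndimage.sobel(mu, axis=0)
    magnitude = np.hypot(gx, gy)
    return magnitude > edge_threshold

# Usage with a synthetic step image; a real application would load a sensed image.
img = np.zeros((64, 64)); img[:, 32:] = 200
print(fuzzy_sobel_edges(img).sum())
```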
Bridging the gap between finance and clinical operations with activity-based cost management.
Storfjell, J L; Jessup, S
1996-12-01
Activity-based cost management (ABCM) is an exciting management tool that links financial information with operations. By determining the costs of specific activities and processes, nurse managers can determine the true costs of services more accurately than with traditional cost accounting methods, and can then target processes for improvement and monitor them for change. The authors describe the ABCM process applied to nursing management situations.
Extraction and purification methods in downstream processing of plant-based recombinant proteins.
Łojewska, Ewelina; Kowalczyk, Tomasz; Olejniczak, Szymon; Sakowicz, Tomasz
2016-04-01
During the last two decades, the production of recombinant proteins in plant systems has been receiving increased attention. Currently, proteins are considered as the most important biopharmaceuticals. However, high costs and problems with scaling up the purification and isolation processes make the production of plant-based recombinant proteins a challenging task. This paper presents a summary of the information regarding the downstream processing in plant systems and provides a comprehensible overview of its key steps, such as extraction and purification. To highlight the recent progress, mainly new developments in the downstream technology have been chosen. Furthermore, besides most popular techniques, alternative methods have been described. Copyright © 2015 Elsevier Inc. All rights reserved.
Vanegas, Fernando; Bratanov, Dmitry; Powell, Kevin; Weiss, John; Gonzalez, Felipe
2018-01-17
Recent advances in remotely sensed imagery and geospatial image processing using unmanned aerial vehicles (UAVs) have enabled the rapid and ongoing development of monitoring tools for crop management and the detection/surveillance of insect pests. This paper describes a UAV remote sensing-based methodology to increase the efficiency of existing surveillance practices (human inspectors and insect traps) for detecting pest infestations (e.g., grape phylloxera in vineyards). The methodology uses a UAV integrated with advanced digital hyperspectral, multispectral, and RGB sensors. We implemented the methodology for the development of a predictive model for phylloxera detection. In this method, we explore the combination of airborne RGB, multispectral, and hyperspectral imagery with ground-based data at two separate time periods and under different levels of phylloxera infestation. We describe the technology used (the sensors, the UAV, and the flight operations), the processing workflow for the datasets from each imagery type, and the methods for combining multiple airborne datasets with ground-based datasets. Finally, we present relevant results on the correlation between the different processed datasets. The objective of this research is to develop a novel methodology for collecting, processing, analysing, and integrating multispectral, hyperspectral, ground, and spatial data to remotely sense different variables in different applications, such as, in this case, plant pest surveillance. The development of such a methodology would provide researchers, agronomists, and UAV practitioners with reliable data collection protocols and methods to achieve faster processing techniques and to integrate multiple sources of data in diverse remote sensing applications.
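As a small, assumption-laden illustration of the kind of per-pixel variable that can be derived from the multispectral imagery in such a workflow (not the authors' predictive model), the sketch below computes NDVI from co-registered near-infrared and red bands.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance bands;
    one common index derived from UAV multispectral imagery."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + 1e-9)

# Hypothetical example: two co-registered band rasters of the same shape.
rng = np.random.default_rng(6)
nir_band = rng.uniform(0.2, 0.6, size=(100, 100))
red_band = rng.uniform(0.05, 0.2, size=(100, 100))
vi = ndvi(nir_band, red_band)
print(vi.mean())   # per-pixel values could then be aggregated per vine or plot
```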
NASA Astrophysics Data System (ADS)
Yang, Shuo; Fu, Yun; Wang, Xiuteng; Xu, Bingsheng; Li, Zheng
2017-11-01
Eco-design is an advanced design approach that plays an important part in the national innovation project and serves as a key point for the successful transformation of the supply structure. However, the practical implementation of pro-environmental designs and technologies always faces a dilemma, where some processes effectively control their emissions to protect the environment at relatively high costs, while others pursue individual profit while ignoring the possible adverse environmental impacts. Thus, the assessment of the eco-design process must be based on comprehensive consideration of both economic and environmental aspects. Presently, the assessment systems in China are unable to fully reflect new environmental technologies with regard to their innovative features or performance. Most of the assessment systems adopt scoring methods based on the judgments of experts, which are easy to use but somewhat subjective. The assessment method presented in this paper includes an environmental impact (EI) assessment based on LCA principles and willingness-to-pay theory, and an economic profit (EP) assessment based mainly on market prices. The results of the assessment are in the form of EI/EP, which evaluates the targeted process from a combined perspective of environmental and economic performance. A case study was carried out on the utilization process of coal fly ash, which indicates that the proposed method can compare different technical processes in an effective and objective manner, and provide explicit and insightful suggestions for decision making.
Process-Based Governance in Public Administrations Using Activity-Based Costing
NASA Astrophysics Data System (ADS)
Becker, Jörg; Bergener, Philipp; Räckers, Michael
Decision- and policy-makers in public administrations currently lack the relevant information needed for sound governance. In Germany, the introduction of New Public Management and double-entry accounting gives public administrations the opportunity to use cost-centered accounting mechanisms to establish new governance mechanisms. Process modelling can in this case be a useful instrument to help decision- and policy-makers in public administrations structure their activities and capture relevant information. In combination with approaches like Activity-Based Costing, higher management levels can be supported with a sound data basis for fruitful and reasonable governance approaches. Therefore, the aim of this article is to combine the public-sector domain-specific process modelling method PICTURE with the concept of activity-based costing to support public administrations in process-based governance.
NASA Astrophysics Data System (ADS)
Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen
2018-01-01
Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by fusing valued information selectively from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we use several techniques to construct mechanical vibration and acoustic frequency spectra of a data-driven industrial process parameter model based on selective fusion multi-condition samples and multi-source features. Multi-layer SEN (MLSEN) strategy is used to simulate the domain expert cognitive process. Genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-model based on each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine outputs of the inside-layer SEN sub-models. Then, the outside-layer SEN is constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, thereby realizing the selective information fusion process based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.
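The sketch below gives a deliberately simplified stand-in for the selection and adaptive weighted fusion steps: sub-models are kept greedily by validation error and combined with error-inverse weights, whereas the paper uses branch-and-bound selection and GA/kernel-PLS sub-models; all names and data are illustrative.

```python
import numpy as np

def adaptive_weighted_fusion(preds, targets):
    """Combine sub-model predictions with weights inversely proportional to their
    validation error (a simple stand-in for adaptive weighted fusion)."""
    preds = np.asarray(preds, dtype=float)            # shape: (n_models, n_samples)
    errors = np.array([np.mean((p - targets) ** 2) for p in preds])
    w = 1.0 / (errors + 1e-12)
    w = w / w.sum()
    return w, w @ preds

def select_and_fuse(preds, targets, n_keep):
    """Selective ensemble: keep the n_keep sub-models with the lowest error, then fuse.
    (The paper uses branch-and-bound search instead of this greedy selection.)"""
    preds = np.asarray(preds, dtype=float)
    errors = np.array([np.mean((p - targets) ** 2) for p in preds])
    keep = np.argsort(errors)[:n_keep]
    return keep, adaptive_weighted_fusion(preds[keep], targets)

# Toy usage with three hypothetical sub-model outputs on validation data.
rng = np.random.default_rng(7)
y = rng.normal(size=200)
subs = [y + 0.1 * rng.normal(size=200), y + 0.5 * rng.normal(size=200), rng.normal(size=200)]
print(select_and_fuse(subs, y, n_keep=2)[0])
```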
Global Detection of Live Virtual Machine Migration Based on Cellular Neural Networks
Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian
2014-01-01
In order to meet the demands of monitoring the operation of large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. Through analysis of the detection process, the parameter relationship of the CNN is mapped to an optimization problem, which is solved with an improved particle swarm optimization algorithm based on bubble sort. Experimental results demonstrate that the proposed method can display the VM migration processing intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and emerging evidence indicates that the new approach is amenable to parallelism and analog very large scale integration (VLSI) implementation, allowing the VM migration detection to be performed better. PMID:24959631
Replication of a Continuing Education Workshop in the Evidence-Based Practice Process
ERIC Educational Resources Information Center
Gromoske, Andrea N.; Berger, Lisa K.
2017-01-01
Objective: To replicate the results of Parrish and Rubin's continuing education workshop in the evidence-based practice (EBP) process utilizing different workshop facilitators with participants in a different geographic location. Methods: We used a replicated, one-group pretest-posttest design with 3-month follow-up to evaluate the effectiveness…
The Psychometric Properties of the Swedish Version of the EB Process Assessment Scale
ERIC Educational Resources Information Center
Nyström, Siv; Åhsberg, Elizabeth
2016-01-01
Objective: This study examines whether the psychometric properties of the short version of the Evidence-Based Practice Process Assessment Scale (EBPPAS) remain satisfactory when translated and transferred to the context of Swedish welfare services. Method: The Swedish version of EBPPAS was tested on a sample of community-based professionals in…
Joining precipitation-hardened nickel-base alloys by friction welding
NASA Technical Reports Server (NTRS)
Moore, T. J.
1972-01-01
A solid state deformation welding process, friction welding, has been developed for joining precipitation-hardened nickel-base alloys and other gamma prime-strengthened materials which heretofore have been virtually unweldable. The method requires rotation of one of the parts to be welded, but where applicable it is an ideal process for high-volume production jobs.
Original Research and Peer Review Using Web-Based Collaborative Tools by College Students
ERIC Educational Resources Information Center
Cakir, Mustafa; Carlsen, William S.
2007-01-01
The Environmental Inquiry program supports inquiry-based, student-centered science teaching on selected topics in the environmental sciences. Many teachers are unfamiliar with both the underlying science of toxicology and the process and importance of peer review in the scientific method. The protocol and peer review process was tested with college…
Grading Homework to Emphasize Problem-Solving Process Skills
ERIC Educational Resources Information Center
Harper, Kathleen A.
2012-01-01
This article describes a grading approach that encourages students to employ particular problem-solving skills. Some strengths of this method, called "process-based grading," are that it is easy to implement, requires minimal time to grade, and can be used in conjunction with either an online homework delivery system or paper-based homework.
ERIC Educational Resources Information Center
Rubin, Allen; Parrish, Danielle E.
2010-01-01
Objective: This report describes the development and preliminary findings regarding the reliability, validity, and sensitivity of a scale that has been developed to assess practitioners' perceived familiarity with, attitudes about, and implementation of the phases of the evidence-based practice (EBP) process. Method: After a panel of national…
2018-01-01
Direction of arrival (DOA) estimation is the basis for underwater target localization and tracking using towed line array sonar devices. A method of DOA estimation for underwater wideband weak targets based on coherent signal subspace (CSS) processing and compressed sensing (CS) theory is proposed. Under the CSS processing framework, wideband frequency focusing is accomplished by a two-sided correlation transformation, allowing the DOA of underwater wideband targets to be estimated based on the spatial sparsity of the targets and a compressed sensing reconstruction algorithm. Through analysis and processing of simulation data and marine trial data, it is shown that this method can accomplish DOA estimation of underwater wideband weak targets. Results also show that this method can considerably improve the spatial spectrum of weak target signals, enhancing the ability to detect them. It can solve the problems of low directional resolution and unreliable weak-target detection in traditional beamforming technology. Compared with the conventional minimum variance distortionless response (MVDR) beamformer, this method has many advantages, such as higher directional resolution, a wider detection range, fewer required snapshots, and more accurate detection of weak targets. PMID:29562642
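For context, the sketch below implements the conventional MVDR spatial spectrum for a uniform line array, i.e., the baseline the proposed CSS/CS method is compared against; the array geometry, noise level, and simulated source are assumptions, and the wideband focusing and sparse reconstruction steps are not reproduced.

```python
import numpy as np

def mvdr_spectrum(X, angles_deg, d_over_lambda=0.5):
    """MVDR spatial spectrum for a uniform line array.
    X: complex snapshots with shape (n_sensors, n_snapshots)."""
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                    # sample covariance matrix
    R_inv = np.linalg.inv(R + 1e-6 * np.eye(n_sensors))
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * d_over_lambda * np.arange(n_sensors) * np.sin(theta))
        spectrum.append(1.0 / np.real(a.conj() @ R_inv @ a))
    return np.array(spectrum)

# Simulated narrowband source at 20 degrees on a 16-element array.
rng = np.random.default_rng(8)
n, snaps, theta0 = 16, 200, np.deg2rad(20.0)
a0 = np.exp(-2j * np.pi * 0.5 * np.arange(n) * np.sin(theta0))
s = rng.normal(size=snaps) + 1j * rng.normal(size=snaps)
X = np.outer(a0, s) + 0.1 * (rng.normal(size=(n, snaps)) + 1j * rng.normal(size=(n, snaps)))
angles = np.arange(-90, 91)
print(angles[np.argmax(mvdr_spectrum(X, angles))])    # should be near 20
```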
A data-driven multiplicative fault diagnosis approach for automation processes.
Hao, Haiyang; Zhang, Kai; Ding, Steven X; Chen, Zhiwen; Lei, Yaguo
2014-09-01
This paper presents a new data-driven method for diagnosing multiplicative key performance degradation in automation processes. Different from the well-established additive fault diagnosis approaches, the proposed method aims at identifying those low-level components which increase the variability of process variables and cause performance degradation. Based on process data, features of the multiplicative fault are extracted. To identify the root cause, the impact of the fault on each process variable is evaluated in terms of its contribution to the performance degradation. Then, a numerical example is used to illustrate the functionalities of the method, and a Monte-Carlo simulation is performed to demonstrate its effectiveness from the statistical viewpoint. Finally, to show the practical applicability, a case study on the Tennessee Eastman process is presented. Copyright © 2013. Published by Elsevier Ltd.
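In the spirit of the contribution idea for multiplicative (variance-increasing) faults, a deliberately simplified sketch is shown below: it ranks variables by the increase of their variance from normal to faulty data. This is an illustrative stand-in under assumed toy data, not the paper's actual diagnosis algorithm.

```python
import numpy as np

def variance_ratio_contributions(X_normal, X_faulty):
    """Rank process variables by the increase in variance from normal to faulty
    operation, a simple indicator in the spirit of contribution-based root-cause
    analysis for multiplicative (variance-increasing) faults."""
    var_n = X_normal.var(axis=0) + 1e-12
    var_f = X_faulty.var(axis=0)
    ratios = var_f / var_n
    order = np.argsort(ratios)[::-1]
    return order, ratios

# Toy example: variable 2 has its noise amplified (a multiplicative fault).
rng = np.random.default_rng(9)
X_normal = rng.normal(size=(500, 4))
X_faulty = rng.normal(size=(500, 4)); X_faulty[:, 2] *= 3.0
order, ratios = variance_ratio_contributions(X_normal, X_faulty)
print(order[0], ratios.round(2))   # variable 2 should rank first
```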