Sample records for "proposed method involves"

  1. Proposal and Validation of an Entrepreneur Competency Profile: Implications for Education

    ERIC Educational Resources Information Center

    Alda-Varas, Rodrigo; Villardon-Gallego, Lourdes; Elexpuru-Albizuri, Itziar

    2012-01-01

    Introduction: This research presents the validated proposal of an entrepreneur competency profile. We analyzed the phases of the entrepreneurial process, and the functions involved in each of them, in order to identify the tasks involved in each function/role and consequently the specific competencies of entrepreneurs. Method: The proposal was…

  2. A Fuzzy-Based Control Method for Smoothing Power Fluctuations in Substations along High-Speed Railways

    NASA Astrophysics Data System (ADS)

    Sugio, Tetsuya; Yamamoto, Masayoshi; Funabiki, Shigeyuki

    The use of an SMES (Superconducting Magnetic Energy Storage) for smoothing power fluctuations in a railway substation has been discussed. This paper proposes a smoothing control method based on fuzzy reasoning for reducing the SMES capacity at substations along high-speed railways. The proposed smoothing control method comprises three countermeasures for reduction of the SMES capacity. The first countermeasure involves modification of rule 1 for smoothing out the fluctuating electric power to its average value. The other countermeasures involve the modification of the central value of the stored energy control in the SMES and revision of the membership function in rule 2 for reduction of the SMES capacity. The SMES capacity in the proposed smoothing control method is reduced by 49.5% when compared to that in the nonrevised control method. It is confirmed by computer simulations that the proposed control method is suitable for smoothing out power fluctuations in substations along high-speed railways and for reducing the SMES capacity.

  3. 76 FR 62862 - Notice; Applications and Amendments to Facility Operating Licenses Involving Proposed No...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-11

    ... Operating Licenses Involving Proposed No Significant Hazards Considerations and Containing Sensitive Unclassified Non-Safeguards Information and Order Imposing Procedures for Access to Sensitive Unclassified Non... document. You may submit comments by any one of the following methods: Federal Rulemaking Web Site: Go to...

  4. Effective public involvement in the HoST-D Programme for dementia home care support: From proposal and design to methods of data collection (innovative practice).

    PubMed

    Giebel, Clarissa; Roe, Brenda; Hodgson, Anthony; Britt, David; Clarkson, Paul

    2017-01-01

    Public involvement is an important element in health and social care research. However, it has rarely been evaluated within research itself. This paper discusses the utility and impact of public involvement of carers and people with dementia in a five-year programme on effective home support in dementia, from proposal and design to methods of data collection, and provides a useful guide on how to involve the public effectively in future research. The Home SupporT in Dementia (HoST-D) Programme comprises two elements of public involvement, a small reference group and a virtual lay advisory group. Involving carers and people with dementia is based on the six key values of involvement - respect, support, transparency, responsiveness, fairness of opportunity, and accountability. Carers and people with dementia gave opinions on study information, methods of data collection, an economic model, case vignettes, and a memory aid booklet, all of which were taken into account. Public involvement has provided benefits to the programme whilst being considerate of the time constraints and geographical locations of members.

  5. Optimal PMU placement using topology transformation method in power systems.

    PubMed

    Rahman, Nadia H A; Zobaa, Ahmed F

    2016-09-01

    Optimal phasor measurement unit (PMU) placement involves minimizing the number of PMUs needed while ensuring that the entire power system remains completely observable. A power system is identified as observable when the voltages of all buses in the power system are known. This paper proposes selection rules for a topology transformation method that involves merging a zero-injection bus with one of its neighbors. The result of the merging process is influenced by which neighboring bus is selected to merge with the zero-injection bus. The proposed method determines the best candidate bus to merge with the zero-injection bus according to three rules created to determine the minimum number of PMUs required for full observability of the power system. In addition, this paper also considers the case of power flow measurements. The problem is formulated as an integer linear program (ILP). The proposed method is simulated in MATLAB for different IEEE bus systems, and its operation is demonstrated on the IEEE 14-bus system. The results prove the effectiveness of the proposed method, since the number of PMUs obtained is comparable with that of other available techniques.
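
    A minimal sketch of the underlying ILP formulation on an assumed 5-bus toy topology (not the paper's IEEE 14-bus case), using SciPy's mixed-integer solver; the paper's merging rules and power-flow-measurement cases are not reproduced.

    ```python
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    # Adjacency with self-loops: a PMU at bus j observes bus i if A[i, j] = 1.
    # Topology here is an illustrative assumption, not the paper's test system.
    A = np.array([
        [1, 1, 0, 0, 0],
        [1, 1, 1, 0, 1],
        [0, 1, 1, 1, 0],
        [0, 0, 1, 1, 1],
        [0, 1, 0, 1, 1],
    ])
    n = A.shape[0]
    c = np.ones(n)                     # minimize the total number of PMUs
    obs = LinearConstraint(A, lb=1)    # every bus observed at least once
    res = milp(c, integrality=np.ones(n), bounds=Bounds(0, 1), constraints=obs)
    print("PMU buses:", np.flatnonzero(res.x > 0.5))
    ```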

  6. High-throughput ocular artifact reduction in multichannel electroencephalography (EEG) using component subspace projection.

    PubMed

    Ma, Junshui; Bayram, Sevinç; Tao, Peining; Svetnik, Vladimir

    2011-03-15

    After a review of the ocular artifact reduction literature, a high-throughput method designed to reduce the ocular artifacts in multichannel continuous EEG recordings acquired at clinical EEG laboratories worldwide is proposed. The proposed method belongs to the category of component-based methods, and does not rely on any electrooculography (EOG) signals. Based on a concept that all ocular artifact components exist in a signal component subspace, the method can uniformly handle all types of ocular artifacts, including eye-blinks, saccades, and other eye movements, by automatically identifying ocular components from decomposed signal components. This study also proposes an improved strategy to objectively and quantitatively evaluate artifact reduction methods. The evaluation strategy uses real EEG signals to synthesize realistic simulated datasets with different amounts of ocular artifacts. The simulated datasets enable us to objectively demonstrate that the proposed method outperforms some existing methods when no high-quality EOG signals are available. Moreover, the results of the simulated datasets improve our understanding of the involved signal decomposition algorithms, and provide us with insights into the inconsistency regarding the performance of different methods in the literature. The proposed method was also applied to two independent clinical EEG datasets involving 28 volunteers and over 1000 EEG recordings. This effort further confirms that the proposed method can effectively reduce ocular artifacts in large clinical EEG datasets in a high-throughput fashion.
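
    For illustration, a generic component-subspace projection built on FastICA from scikit-learn; the paper's own decomposition and automatic ocular-component detection rule differ in detail, and the frontal-loading heuristic below is an assumption.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def remove_ocular(eeg, frontal_idx, thresh=0.6):
        """eeg: (n_samples, n_channels) array; frontal_idx: channels near the eyes."""
        ica = FastICA(random_state=0)
        sources = ica.fit_transform(eeg)        # (n_samples, n_components)
        mixing = ica.mixing_                    # (n_channels, n_components)
        # Assumed heuristic (not the paper's rule): flag components whose
        # spatial pattern concentrates on frontal channels.
        load = np.abs(mixing) / np.abs(mixing).sum(axis=0)
        ocular = load[frontal_idx].sum(axis=0) > thresh
        sources[:, ocular] = 0.0                # project out the ocular subspace
        return sources @ mixing.T + ica.mean_   # reconstruct in channel space
    ```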

  7. Image re-sampling detection through a novel interpolation kernel.

    PubMed

    Hilal, Alaa

    2018-06-01

    Image re-sampling, involved in re-size and rotation transformations, is an essential building block of typical digital image alterations. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present two original contributions. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. We then demonstrate its capacity to imitate the behavior of the interpolation kernels most frequently used in digital image re-sampling applications. Second, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The process includes minimization of an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. The results obtained demonstrate better performance and reduced processing time when compared to a reference method, validating the suitability of the proposed approaches.

  8. Chromatic dispersion estimation based on heterodyne detection for coherent optical communication systems

    NASA Astrophysics Data System (ADS)

    Li, Yong; Yang, Aiying; Guo, Peng; Qiao, Yaojun; Lu, Yueming

    2018-01-01

    We propose an accurate, non-data-aided chromatic dispersion (CD) estimation method involving the use of the cross-correlation function of two heterodyne detection signals for coherent optical communication systems. Simulations are implemented to verify the feasibility of the proposed method for 28-GBaud coherent systems with different modulation formats. The results show that the proposed method measures CD with high accuracy and is robust against laser phase noise, amplified spontaneous emission noise, and nonlinear impairments.
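
    The cross-correlation building block can be sketched with NumPy as below; the mapping from the correlation peak to a CD estimate is specific to the paper and omitted here.

    ```python
    import numpy as np

    def peak_lag(x, y, fs):
        """Return the delay (seconds) of y relative to x, sampled at rate fs."""
        r = np.correlate(x - x.mean(), y - y.mean(), mode="full")
        lag = np.argmax(np.abs(r)) - (len(y) - 1)   # index of zero lag is len(y)-1
        return lag / fs
    ```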

  9. A new method for tracking organ motion on diagnostic ultrasound images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kubota, Yoshiki, E-mail: y-kubota@gunma-u.ac.jp; Matsumura, Akihiko, E-mail: matchan.akihiko@gunma-u.ac.jp; Fukahori, Mai, E-mail: fukahori@nirs.go.jp

    2014-09-15

    Purpose: Respiratory-gated irradiation is effective in reducing the margins of a target in the case of abdominal organs, such as the liver, that change their position as a result of respiratory motion. However, existing technologies are incapable of directly measuring organ motion in real-time during radiation beam delivery. Hence, the authors proposed a novel quantitative organ motion tracking method involving the use of diagnostic ultrasound images; it is noninvasive and does not entail radiation exposure. In the present study, the authors have prospectively evaluated this proposed method. Methods: The method involved real-time processing of clinical ultrasound imaging data rather than organ monitoring; it comprised a three-dimensional ultrasound device, a respiratory sensing system, and two PCs for data storage and analysis. The study was designed to evaluate the effectiveness of the proposed method by tracking the gallbladder in one subject and a liver vein in another subject. To track a moving target organ, the method involved the control of a region of interest (ROI) that delineated the target. A tracking algorithm was used to control the ROI, and a large number of feature points and an error correction algorithm were used to achieve long-term tracking of the target. Tracking accuracy was assessed in terms of how well the ROI matched the center of the target. Results: The effectiveness of using a large number of feature points and the error correction algorithm in the proposed method was verified by comparing it with two simple tracking methods. The ROI could capture the center of the target for about 5 min in a cross-sectional image with changing position. Indeed, using the proposed method, it was possible to accurately track a target with a center deviation of 1.54 ± 0.9 mm. The computing time for one image frame using the proposed method was 8 ms. It is expected that it would be possible to track any soft-tissue organ or tumor with large deformations and changing cross-sectional position using this method. Conclusions: The proposed method achieved real-time processing and continuous tracking of the target organ for about 5 min. It is expected that this method will enable more accurate radiation treatment than indirect observational methods, such as the respiratory sensor method, because it directly visualizes the tumor. The results show that this tracking system facilitates safe treatment in clinical practice.
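
    The feature-point tracking primitive behind such ROI control can be sketched with OpenCV's pyramidal Lucas-Kanade routine; the paper's large-feature-set management and error-correction algorithm are not reproduced, and `track_roi` is an illustrative helper.

    ```python
    import cv2
    import numpy as np

    def track_roi(prev_gray, next_gray, roi_pts):
        """prev_gray/next_gray: uint8 grayscale frames; roi_pts: (N, 1, 2) float32
        feature points inside the current ROI."""
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                      roi_pts, None)
        good = status.ravel() == 1
        shift = (new_pts[good] - roi_pts[good]).reshape(-1, 2).mean(axis=0)
        return new_pts[good], shift   # surviving points, mean ROI displacement

    # Feature points could be seeded with cv2.goodFeaturesToTrack(prev_gray, ...)
    ```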

  10. Evaluating Attitudes, Skill, and Performance in a Learning-Enhanced Quantitative Methods Course: A Structural Modeling Approach.

    ERIC Educational Resources Information Center

    Harlow, Lisa L.; Burkholder, Gary J.; Morrow, Jennifer A.

    2002-01-01

    Used a structural modeling approach to evaluate relations among attitudes, initial skills, and performance in a Quantitative Methods course that involved students in active learning. Results largely confirmed hypotheses, offering support for educational reform efforts that propose actively involving students in the learning process, especially in…

  11. A two-stage linear discriminant analysis via QR-decomposition.

    PubMed

    Ye, Jieping; Li, Qi

    2005-06-01

    Linear Discriminant Analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problem; that is, it fails when all scatter matrices are singular. Many LDA extensions have been proposed in the past to overcome the singularity problem. Among these extensions, PCA+LDA, a two-stage method, has received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using Principal Component Analysis (PCA). Most previous LDA extensions are computationally expensive and not scalable, due to the use of Singular Value Decomposition or Generalized Singular Value Decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problem of classical LDA while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship between LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.
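
    A compact NumPy sketch of the two stages as described: QR decomposition of the class-centroid matrix, followed by classical LDA in the reduced space (the small ridge added for numerical stability is an assumption, not part of the paper).

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def lda_qr(X, y):
        """X: (n_samples, d) data; y: class labels. Returns a d x (k-1) transform."""
        classes = np.unique(y)
        C = np.stack([X[y == c].mean(axis=0) for c in classes], axis=1)  # d x k
        Q, _ = np.linalg.qr(C)                  # stage 1: orthonormal basis, d x k
        Z = X @ Q                               # project to k dimensions
        mu = Z.mean(axis=0)
        Sw = sum(np.cov(Z[y == c], rowvar=False) * (np.sum(y == c) - 1)
                 for c in classes)              # within-class scatter
        Sb = sum(np.sum(y == c) * np.outer(Z[y == c].mean(0) - mu,
                                           Z[y == c].mean(0) - mu)
                 for c in classes)              # between-class scatter
        # Stage 2: generalized eigenproblem Sb v = lambda Sw v
        _, vecs = eigh(Sb, Sw + 1e-8 * np.eye(Sw.shape[0]))
        return Q @ vecs[:, ::-1][:, :len(classes) - 1]
    ```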

  12. 78 FR 25473 - Information Collection: Northern Alaska Native Community Surveys; Proposed Collection for OMB...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-01

    ... survey research methods that involve residents of four communities most proximate to proposed exploration... communities. Survey Instruments: The research will be collected from two voluntary surveys. The Resilience... Collection: Northern Alaska Native Community Surveys; Proposed Collection for OMB Review; Comment Request...

  13. Global optimization algorithm for heat exchanger networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quesada, I.; Grossmann, I.E.

    This paper deals with the global optimization of heat exchanger networks with fixed topology. It is shown that if linear area cost functions are assumed, as well as arithmetic mean driving force temperature differences in networks with isothermal mixing, the corresponding nonlinear programming (NLP) optimization problem involves linear constraints and a sum of linear fractional functions in the objective, which are nonconvex. A rigorous algorithm is proposed that is based on a convex NLP underestimator involving linear and nonlinear estimators for fractional and bilinear terms, which provide a tight lower bound to the global optimum. This NLP problem is used within a spatial branch and bound method for which branching rules are given. Basic properties of the proposed method are presented, and its application is illustrated with several example problems. The results show that the proposed method requires only a few nodes in the branch and bound search.
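
    The convex underestimators for bilinear terms mentioned here are commonly the McCormick envelopes; in the notation below, w replaces the product xy over the box [x^L, x^U] × [y^L, y^U] (the paper's estimators for linear fractional terms are not reproduced):

    ```latex
    \begin{align*}
    w &\ge x^L y + y^L x - x^L y^L, \\
    w &\ge x^U y + y^U x - x^U y^U.
    \end{align*}
    ```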

  14. 75 FR 53968 - Reverb Communications, Inc.; Analysis of Proposed Consent Order To Aid Public Comment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-02

    ... final the agreement's proposed order. This matter involves the public relations, marketing, and sales... Consent Order To Aid Public Comment AGENCY: Federal Trade Commission. ACTION: Proposed Consent Agreement... or deceptive acts or practices or unfair methods of competition. The attached Analysis to Aid Public...

  15. Feasibility study of complex wavefield retrieval in off-axis acoustic holography employing an acousto-optic sensor

    PubMed Central

    Rodríguez, Guillermo López; Weber, Joshua; Sandhu, Jaswinder Singh; Anastasio, Mark A.

    2011-01-01

    We propose and experimentally demonstrate a new method for complex-valued wavefield retrieval in off-axis acoustic holography. The method involves use of an intensity-sensitive acousto-optic (AO) sensor, optimized for use at 3.3 MHz, to record the acoustic hologram and a computational method for reconstruction of the object wavefield. The proposed method may circumvent limitations of conventional implementations of acoustic holography and may facilitate the development of acoustic-holography-based biomedical imaging methods. PMID:21669451

  16. A Sociotechnical Systems Approach To Coastal Marine Spatial Planning

    DTIC Science & Technology

    2016-12-01

    the authors followed the MEAD step of identifying variances and creating a matrix of these variances. Then the authors were able to propose methods … potential politics involved, and the risks involved in proposing and attempting to start up a new marine aquaculture operation.

  17. Prediction of nocturnal hypoglycemia by an aggregation of previously known prediction approaches: proof of concept for clinical application.

    PubMed

    Tkachenko, Pavlo; Kriukova, Galyna; Aleksandrova, Marharyta; Chertov, Oleg; Renard, Eric; Pereverzyev, Sergei V

    2016-10-01

    Nocturnal hypoglycemia (NH) is common in patients with insulin-treated diabetes. Despite the risk associated with NH, there are only a few methods aiming at the prediction of such events based on intermittent blood glucose monitoring data, and none has been validated for clinical use. Here we propose a method of combining several predictors into a new one that will perform at the level of the best involved one, or even outperform all individual candidates. The idea of the method is to use a recently developed strategy for aggregating ranking algorithms. The method has been calibrated and tested on data extracted from clinical trials performed in the European FP7-funded project DIAdvisor. We then tested the proposed approach on other datasets to show the portability of the method. This feature of the method allows its simple implementation in the form of a diabetic smartphone app. On the considered datasets the proposed approach exhibits good performance in terms of sensitivity, specificity and predictive values. Moreover, the resulting predictor automatically performs at the level of the best involved method or even outperforms it. We propose a strategy for a combination of NH predictors that leads to a method exhibiting a reliable performance and the potential for everyday use by any patient who performs self-monitoring of blood glucose.

  18. Synthesizing Regression Results: A Factored Likelihood Method

    ERIC Educational Resources Information Center

    Wu, Meng-Jia; Becker, Betsy Jane

    2013-01-01

    Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported…

  19. In My Opinion. A New Sign Convention for Geometrical Optics.

    ERIC Educational Resources Information Center

    Ditteon, Richard

    1993-01-01

    Introduces a new sign convention for the object and image distances involving mirrors and lenses. Proposes that the method is easier for students to understand and remember and that it helps clarify the physics concepts involved. (MDH)

  20. Grain growth prediction based on data assimilation by implementing 4DVar on multi-phase-field model

    NASA Astrophysics Data System (ADS)

    Ito, Shin-ichi; Nagao, Hiromichi; Kasuya, Tadashi; Inoue, Junya

    2017-12-01

    We propose a method to predict grain growth based on data assimilation by using a four-dimensional variational method (4DVar). When implemented on a multi-phase-field model, the proposed method allows us to calculate the predicted grain structures and uncertainties in them that depend on the quality and quantity of the observational data. We confirm through numerical tests involving synthetic data that the proposed method correctly reproduces the true phase-field assumed in advance. Furthermore, it successfully quantifies uncertainties in the predicted grain structures, where such uncertainty quantifications provide valuable information to optimize the experimental design.
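
    A toy 4DVar sketch, with the multi-phase-field model replaced by assumed scalar linear dynamics, illustrating the cost function (background term plus observation misfit) minimized over the initial state:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    m, T = 0.95, 20                  # assumed model coefficient and window length
    rng = np.random.default_rng(0)
    x_true = 1.3
    obs = x_true * m ** np.arange(T) + 0.05 * rng.normal(size=T)  # synthetic data
    xb, B, R = 1.0, 0.5 ** 2, 0.05 ** 2   # background state and (co)variances

    def cost(v):
        x0 = v[0]
        traj = x0 * m ** np.arange(T)     # forward model x_{t+1} = m * x_t
        return (x0 - xb) ** 2 / B + np.sum((traj - obs) ** 2) / R

    res = minimize(cost, x0=[xb])
    print("analysis of initial state:", res.x[0])
    ```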

  1. Microfabrication Method using a Combination of Local Ion Implantation and Magnetorheological Finishing

    NASA Astrophysics Data System (ADS)

    Han, Jin; Kim, Jong-Wook; Lee, Hiwon; Min, Byung-Kwon; Lee, Sang Jo

    2009-02-01

    A new microfabrication method that combines localized ion implantation and magnetorheological finishing is proposed. The proposed technique involves two steps. First, selected regions of a silicon wafer are irradiated with gallium ions by using a focused ion beam system. The mechanical properties of the irradiated regions are altered as a result of the ion implantation. Second, the wafer is processed by using a magnetorheological finishing method. During the finishing process, the regions not implanted with ions are preferentially removed. This difference in material removal rate is utilized for microfabrication. The mechanisms of the proposed method are discussed, and applications are presented.

  2. On solving wave equations on fixed bounded intervals involving Robin boundary conditions with time-dependent coefficients

    NASA Astrophysics Data System (ADS)

    van Horssen, Wim T.; Wang, Yandong; Cao, Guohua

    2018-06-01

    In this paper, it is shown how characteristic coordinates, or equivalently the well-known formula of d'Alembert, can be used to solve initial-boundary value problems for wave equations on fixed, bounded intervals involving Robin-type boundary conditions with time-dependent coefficients. A Robin boundary condition is a condition that specifies a linear combination of the dependent variable and its first-order space derivative on a boundary of the interval. Analytical methods, such as the method of separation of variables (SOV) or the Laplace transform method, are not applicable to these types of problems. The analytical results obtained by applying the proposed method are in complete agreement with those obtained by using the numerical finite difference method. For problems with time-independent coefficients in the Robin boundary condition(s), the results of the proposed method also completely agree with those obtained, for instance, by the method of separation of variables or by the finite difference method.
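
    For reference, the d'Alembert representation and a Robin condition with time-dependent coefficients at a boundary x = L (notation assumed):

    ```latex
    \begin{align*}
    u(x,t) &= F(x - ct) + G(x + ct), \\
    \alpha(t)\, u(L,t) + \beta(t)\, u_x(L,t) &= 0 .
    \end{align*}
    ```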

  3. Pythagorean fuzzy analytic hierarchy process to multi-criteria decision making

    NASA Astrophysics Data System (ADS)

    Mohd, Wan Rosanisah Wan; Abdullah, Lazim

    2017-11-01

    Numerous approaches have been proposed in the literature to determine criteria weights, which are very significant in the process of decision making. One of the outstanding approaches used to determine the weights of criteria is the analytic hierarchy process (AHP). This method requires decision makers (DMs) to evaluate the decision by forming pairwise comparisons between criteria and alternatives. In classical AHP, the linguistic variables of the pairwise comparison are presented as crisp values. However, crisp values are not appropriate for representing real decision problems, because linguistic judgments involve uncertainty. For this reason, AHP has been extended by incorporating Pythagorean fuzzy sets. In addition, nothing in the literature proposes how to determine the weights of criteria using AHP under Pythagorean fuzzy sets. In order to solve the MCDM problem, the Pythagorean fuzzy analytic hierarchy process is proposed to determine the weights of the evaluation criteria. Using linguistic variables, pairwise comparisons of the evaluation criteria are expressed as Pythagorean fuzzy numbers (PFNs) and converted into criteria weights. The proposed method is implemented in an evaluation problem in order to demonstrate its applicability. This study shows that the proposed method provides a useful way and a new direction in solving MCDM problems in a Pythagorean fuzzy context.

  4. A hybrid technique for speech segregation and classification using a sophisticated deep neural network

    PubMed Central

    Nawaz, Tabassam; Mehmood, Zahid; Rashid, Muhammad; Habib, Hafiz Adnan

    2018-01-01

    Recent research on speech segregation and music fingerprinting has led to improvements in speech segregation and music identification algorithms. Speech and music segregation generally involves the identification of music followed by speech segregation. However, music segregation becomes a challenging task in the presence of noise. This paper proposes a novel method of speech segregation for unlabelled stationary noisy audio signals using the deep belief network (DBN) model. The proposed method successfully segregates a music signal from noisy audio streams. A recurrent neural network (RNN)-based hidden layer segregation model is applied to remove stationary noise. Dictionary-based Fisher algorithms are employed for speech classification. The proposed method is tested on three datasets (TIMIT, MIR-1K, and MusicBrainz), and the results indicate the robustness of the proposed method for speech segregation. The qualitative and quantitative analyses carried out on the three datasets demonstrate the efficiency of the proposed method compared to state-of-the-art speech segregation and classification-based methods. PMID:29558485

  5. Spatial Mutual Information Based Hyperspectral Band Selection for Classification

    PubMed Central

    2015-01-01

    The amount of information involved in hyperspectral imaging is large. Hyperspectral band selection is a popular method for reducing dimensionality. Several information based measures such as mutual information have been proposed to reduce information redundancy among spectral bands. Unfortunately, mutual information does not take into account the spatial dependency between adjacent pixels in images thus reducing its robustness as a similarity measure. In this paper, we propose a new band selection method based on spatial mutual information. As validation criteria, a supervised classification method using support vector machine (SVM) is used. Experimental results of the classification of hyperspectral datasets show that the proposed method can achieve more accurate results. PMID:25918742
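
    A plain (non-spatial) mutual-information computation between two bands via a joint histogram; the paper's spatial mutual information additionally models neighboring-pixel dependency, which this sketch omits.

    ```python
    import numpy as np

    def mutual_information(band_a, band_b, bins=64):
        """Estimate MI (nats) between two image bands from a joint histogram."""
        h, _, _ = np.histogram2d(band_a.ravel(), band_b.ravel(), bins=bins)
        p = h / h.sum()                                    # joint distribution
        px = p.sum(axis=1, keepdims=True)                  # marginal of band_a
        py = p.sum(axis=0, keepdims=True)                  # marginal of band_b
        nz = p > 0
        return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))
    ```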

  6. Self-Calibration and Optimal Response in Intelligent Sensors Design Based on Artificial Neural Networks

    PubMed Central

    Rivera, José; Carrillo, Mariano; Chacón, Mario; Herrera, Gilberto; Bojorquez, Gilberto

    2007-01-01

    The development of smart sensors involves the design of reconfigurable systems capable of working with different input sensors. Reconfigurable systems should ideally spend the least possible amount of time on their calibration. An autocalibration algorithm for intelligent sensors should be able to fix major problems such as offset, variation of gain and lack of linearity as accurately as possible. This paper describes a new autocalibration methodology for nonlinear intelligent sensors based on artificial neural networks (ANN). The methodology involves analysis of several network topologies and training algorithms. The proposed method was compared against the piecewise and polynomial linearization methods. Method comparison was carried out using different numbers of calibration points and several nonlinearity levels of the input signal. This paper also shows that the proposed method achieves better overall accuracy than the other two methods. Besides the experimental results and analysis of the complete study, the paper describes the implementation of the ANN in a microcontroller unit (MCU). In order to illustrate the method's capability to build autocalibration and reconfigurable systems, a temperature measurement system was designed and tested. The proposed method is an improvement over classic autocalibration methodologies, because it impacts the design process of intelligent sensors, autocalibration methodologies and their associated factors, like time and cost.

  7. A novel iterative scheme and its application to differential equations.

    PubMed

    Khan, Yasir; Naeem, F; Šmarda, Zdeněk

    2014-01-01

    The purpose of this paper is to employ an alternative approach to reconstruct the standard variational iteration algorithm II proposed by He, including the Lagrange multiplier, and to give a simpler formulation of the Adomian decomposition and modified Adomian decomposition methods in terms of the newly proposed variational iteration method-II (VIM). Through careful investigation of the earlier variational iteration algorithm and the Adomian decomposition method, we identify unnecessary calculations of the Lagrange multiplier in the former and repeated calculations in each iteration of the latter. Several examples are given to verify the reliability and efficiency of the method.

  8. 75 FR 1650 - Notice of Intent To Prepare an Environmental Impact Statement for the Proposed HB Potash, LLC-“In...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-12

    ...] Notice of Intent To Prepare an Environmental Impact Statement for the Proposed HB Potash, LLC--``In-Situ... HB Potash, LLC--``In- Situ'' Solution Mine Project by any of the following methods: E-mail: Rebecca..., (Intrepid) is proposing to construct and operate an ``in-situ'' solution mining project that would involve...

  9. 32 CFR 219.110 - Expedited review procedures for certain kinds of research involving no more than minimal risk...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... adopt a method for keeping all members advised of research proposals which have been approved under the... research involving no more than minimal risk, and for minor changes in approved research. 219.110 Section... kinds of research involving no more than minimal risk, and for minor changes in approved research. (a...

  10. 32 CFR 219.110 - Expedited review procedures for certain kinds of research involving no more than minimal risk...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... adopt a method for keeping all members advised of research proposals which have been approved under the... research involving no more than minimal risk, and for minor changes in approved research. 219.110 Section... kinds of research involving no more than minimal risk, and for minor changes in approved research. (a...

  11. Aging Students: Implications for Intergenerational Development in the Classroom.

    ERIC Educational Resources Information Center

    Youngman, Deborah

    1995-01-01

    Outlines current knowledge about integrating older students into the postsecondary educational system, discusses a case study of an established natural learning environment, and presents models of education and development involving older students. A system involving the transposition of dialectical and discourse methods is proposed for moving…

  12. The Ecosystem of Information Retrieval

    ERIC Educational Resources Information Center

    Rodriguez-Munoz, Jose-Vicente; Martinez-Mendez, Francisco-Javier; Pastor-Sanchez, Juan-Antonio

    2012-01-01

    Introduction: This paper presents an initial proposal for a formal framework that, by studying the metric variables involved in information retrieval, can establish the sequence of events involved and how to perform it. Method: A systematic approach from the equations of Shannon and Weaver to establish the decidability of information retrieval…

  13. Measurement of rolling friction by a damped oscillator

    NASA Technical Reports Server (NTRS)

    Dayan, M.; Buckley, D. H.

    1983-01-01

    An experimental method for measuring rolling friction is proposed. The method is mechanically simple. It is based on an oscillator in a uniform magnetic field and does not involve any mechanical forces except for the measured friction. The measured pickup voltage is Fourier analyzed and yields the friction spectral response. The proposed experiment is not tailored for a particular case. Instead, various modes of operation, suitable to different experimental conditions, are discussed.
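
    The Fourier-analysis step can be sketched with NumPy on an assumed damped-oscillation model of the pickup voltage (signal parameters are illustrative assumptions):

    ```python
    import numpy as np

    fs, f0, tau = 1000.0, 5.0, 4.0                      # sampling rate, frequency, decay time
    t = np.arange(0.0, 30.0, 1.0 / fs)
    v = np.exp(-t / tau) * np.cos(2 * np.pi * f0 * t)   # modeled pickup voltage
    spec = np.abs(np.fft.rfft(v)) / len(v)              # spectral response
    freqs = np.fft.rfftfreq(len(v), 1.0 / fs)
    print("spectral peak at %.2f Hz" % freqs[np.argmax(spec)])
    ```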

  14. Personalised Information Services Using a Hybrid Recommendation Method Based on Usage Frequency

    ERIC Educational Resources Information Center

    Kim, Yong; Chung, Min Gyo

    2008-01-01

    Purpose: This paper seeks to describe a personal recommendation service (PRS) involving an innovative hybrid recommendation method suitable for deployment in a large-scale multimedia user environment. Design/methodology/approach: The proposed hybrid method partitions content and user into segments and executes association rule mining,…

  15. CERES: A new cerebellum lobule segmentation method.

    PubMed

    Romero, Jose E; Coupé, Pierrick; Giraud, Rémi; Ta, Vinh-Thong; Fonov, Vladimir; Park, Min Tae M; Chakravarty, M Mallar; Voineskos, Aristotle N; Manjón, Jose V

    2017-02-15

    The human cerebellum is involved in language, motor tasks and cognitive processes such as attention or emotional processing. Therefore, an automatic and accurate segmentation method is highly desirable to measure and understand the cerebellum's role in normal and pathological brain development. In this work, we propose a patch-based multi-atlas segmentation tool called CERES (CEREbellum Segmentation) that is able to automatically parcellate the cerebellum lobules. The proposed method works with standard resolution magnetic resonance T1-weighted images and uses the Optimized PatchMatch algorithm to speed up the patch matching process. The proposed method was compared with related recent state-of-the-art methods, showing competitive results in both accuracy (average DICE of 0.7729) and execution time (around 5 minutes).

  16. Eigenvalue assignment by minimal state-feedback gain in LTI multivariable systems

    NASA Astrophysics Data System (ADS)

    Ataei, Mohammad; Enshaee, Ali

    2011-12-01

    In this article, an improved method for eigenvalue assignment via state feedback in linear time-invariant multivariable systems is proposed. This method is based on elementary similarity operations and mainly involves the use of vector companion forms; it is thus very simple and easy to implement on a digital computer. In addition to controllable systems, the proposed method can be applied to stabilisable systems and to systems with linearly dependent inputs. Moreover, two types of state-feedback gain matrices can be achieved by this method: (1) a numerical one, which is unique, and (2) a parametric one, whose parameters are determined in order to achieve a gain matrix with minimum Frobenius norm. Numerical examples are presented to demonstrate the advantages of the proposed method.
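
    As a baseline, eigenvalue assignment via state feedback can be done with SciPy's pole-placement routine; the article's elementary-similarity construction and minimum-Frobenius-norm parametrization are not implemented here, and the system matrices below are assumptions.

    ```python
    import numpy as np
    from scipy.signal import place_poles

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])    # assumed example system
    B = np.array([[0.0], [1.0]])
    desired = [-4.0, -5.0]                      # eigenvalues to assign

    fsf = place_poles(A, B, desired)
    K = fsf.gain_matrix                         # state feedback u = -K x
    print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
    ```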

  17. Dynamic deformation image de-blurring and image processing for digital imaging correlation measurement

    NASA Astrophysics Data System (ADS)

    Guo, X.; Li, Y.; Suo, T.; Liu, H.; Zhang, C.

    2017-11-01

    This paper proposes a method for de-blurring images captured during the dynamic deformation of materials. De-blurring is achieved with a dynamics-based approach, which is used to estimate the Point Spread Function (PSF) during the camera exposure window. The deconvolution process, which involves iterative matrix calculations over pixels, is then performed on the GPU to reduce the time cost. Compared to the Gauss method and the Lucy-Richardson method, it gives the best image restoration results. The proposed method has been evaluated using a Hopkinson bar loading system, and in comparison to the blurry input it successfully restores the image. Image processing applications also demonstrate that the de-blurring method can improve the accuracy and stability of digital image correlation measurement.
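
    The Lucy-Richardson baseline that the paper compares against is available in scikit-image; the Gaussian PSF below is an assumption, whereas the paper estimates the PSF from the motion during the exposure window.

    ```python
    import numpy as np
    from scipy.ndimage import convolve
    from skimage import data, restoration

    g = np.exp(-np.linspace(-2.0, 2.0, 9) ** 2)   # assumed 9x9 Gaussian PSF
    psf = np.outer(g, g)
    psf /= psf.sum()

    blurred = convolve(data.camera() / 255.0, psf)
    # Keyword is `num_iter` in recent scikit-image (`iterations` before 0.19).
    deblurred = restoration.richardson_lucy(blurred, psf, num_iter=30)
    ```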

  18. Network Modeling and Energy-Efficiency Optimization for Advanced Machine-to-Machine Sensor Networks

    PubMed Central

    Jung, Sungmo; Kim, Jong Hyun; Kim, Seoksoo

    2012-01-01

    Wireless machine-to-machine sensor networks with multiple radio interfaces are expected to have several advantages, including high spatial scalability, low event detection latency, and low energy consumption. Here, we propose a network model design method involving network approximation and an optimized multi-tiered clustering algorithm that maximizes node lifespan by minimizing energy consumption in a non-uniformly distributed network. Simulation results show that the cluster scales and network parameters determined with the proposed method facilitate a more efficient performance compared to existing methods. PMID:23202190

  19. Measuring Gravitation Using Polarization Spectroscopy

    NASA Technical Reports Server (NTRS)

    Matsko, Andrey; Yu, Nan; Maleki, Lute

    2004-01-01

    A proposed method of measuring gravitational acceleration would involve the application of polarization spectroscopy to an ultracold, vertically moving cloud of atoms (an atomic fountain). A related proposed method involving measurements of absorption of light pulses like those used in conventional atomic interferometry would yield an estimate of the number of atoms participating in the interferometric interaction. The basis of the first-mentioned proposed method is that the rotation of polarization of light is affected by the acceleration of atoms along the path of propagation of the light. The rotation of polarization is associated with a phase shift: When an atom moving in a laboratory reference frame interacts with an electromagnetic wave, the energy levels of the atom are Doppler-shifted, relative to where they would be if the atom were stationary. The Doppler shift gives rise to changes in the detuning of the light from the corresponding atomic transitions. This detuning, in turn, causes the electromagnetic wave to undergo a phase shift that can be measured by conventional means. One would infer the gravitational acceleration and/or the gradient of the gravitational acceleration from the phase measurements.

  20. A simple method for assessing occupational exposure via the one-way random effects model.

    PubMed

    Krishnamoorthy, K; Mathew, Thomas; Peng, Jie

    2016-11-01

    A one-way random effects model is postulated for the log-transformed shift-long personal exposure measurements, where the random effect in the model represents an effect due to the worker. Simple closed-form confidence intervals are proposed for the relevant parameters of interest using the method of variance estimates recovery (MOVER). The performance of the confidence bounds is evaluated and compared with those based on the generalized confidence interval approach. Comparison studies indicate that the proposed MOVER confidence bounds are better than the generalized confidence bounds for the overall mean exposure and an upper percentile of the exposure distribution. The proposed methods are illustrated using a few examples involving industrial hygiene data.
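
    The generic MOVER combination rule for a sum of two parameters, which the paper's closed-form limits build on; the exact limits for the overall mean exposure and exposure percentiles involve the one-way random effects structure and are not reproduced here.

    ```python
    import math

    def mover_sum(est1, l1, u1, est2, l2, u2):
        """MOVER interval for theta1 + theta2 from point estimates (est1, est2)
        and individual confidence limits (l1, u1), (l2, u2)."""
        point = est1 + est2
        lower = point - math.sqrt((est1 - l1) ** 2 + (est2 - l2) ** 2)
        upper = point + math.sqrt((u1 - est1) ** 2 + (u2 - est2) ** 2)
        return lower, upper
    ```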

  1. Knowledge Discovery from Posts in Online Health Communities Using Unified Medical Language System.

    PubMed

    Chen, Donghua; Zhang, Runtong; Liu, Kecheng; Hou, Lei

    2018-06-19

    Patient-reported posts in Online Health Communities (OHCs) contain much valuable information that can help establish knowledge-based online support for patients. However, utilizing these reports to improve online patient services in the absence of appropriate medical and healthcare expert knowledge is difficult. Thus, we propose a comprehensive knowledge discovery method based on the Unified Medical Language System for the analysis of narrative posts in OHCs. First, we propose a domain-knowledge support framework for OHCs to provide a basis for post analysis. Second, we develop a Knowledge-Involved Topic Modeling (KI-TM) method to extract and expand explicit knowledge within the text. We propose four metrics, namely, explicit knowledge rate, latent knowledge rate, knowledge correlation rate, and perplexity, for the evaluation of the KI-TM method. Our experimental results indicate that the proposed method outperforms existing methods in terms of knowledge support. The method enhances knowledge support for online patients and can help develop intelligent OHCs in the future.

  2. Prediction-Correction Algorithms for Time-Varying Constrained Optimization

    DOE PAGES

    Simonetto, Andrea; Dall'Anese, Emiliano

    2017-07-26

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
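
    A toy prediction-correction tracker for an unconstrained, time-varying scalar quadratic; the paper's constrained, Hessian-inverse-free prediction steps are more elaborate than this sketch, and the drifting optimum a(t) is an assumption.

    ```python
    import numpy as np

    a = lambda t: np.sin(t)          # assumed drifting optimum of f(x; t) = (x - a(t))^2 / 2
    h, alpha = 0.1, 0.5              # sampling period and correction step size
    x, a_prev = 0.0, a(0.0)
    for k in range(1, 100):
        t = k * h
        x += a(t) - a_prev           # prediction: follow the estimated drift
        a_prev = a(t)
        x -= alpha * (x - a(t))      # correction: gradient step on f(.; t)
    print("final tracking error:", abs(x - a(t)))
    ```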

  3. Shooting method for solution of boundary-layer flows with massive blowing

    NASA Technical Reports Server (NTRS)

    Liu, T.-M.; Nachtsheim, P. R.

    1973-01-01

    A modified, bidirectional shooting method is presented for solving boundary-layer equations under conditions of massive blowing. Unlike the conventional shooting method, which is unstable when the blowing rate increases, the proposed method avoids the unstable direction and is capable of solving complex boundary-layer problems involving mass and energy balance on the surface.
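
    For contrast, a standard one-directional shooting sketch on a toy linear BVP; the paper's bidirectional scheme, which integrates from both ends to avoid the unstable direction, is not reproduced here.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    # Toy BVP: u'' = -4 u, u(0) = 0, u(1) = 1 (exact slope u'(0) = 2 / sin 2).
    def rhs(x, y):                       # y = [u, u']
        return [y[1], -4.0 * y[0]]

    def miss(s):                         # boundary mismatch at x = 1 for slope s
        sol = solve_ivp(rhs, (0.0, 1.0), [0.0, s], rtol=1e-8)
        return sol.y[0, -1] - 1.0

    s_star = brentq(miss, 0.0, 5.0)      # bracket chosen for this toy problem
    print("u'(0) =", s_star, "vs exact", 2.0 / np.sin(2.0))
    ```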

  4. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    PubMed Central

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate the analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling method. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, the extrema points of the metamodels and the minimum points of a density function. More accurate metamodels are then constructed by the above procedure. The validity and effectiveness of the proposed sampling method are examined through typical numerical examples. PMID:25133206

  5. Automatic selection of landmarks in T1-weighted head MRI with regression forests for image registration initialization.

    PubMed

    Wang, Jianing; Liu, Yuan; Noble, Jack H; Dawant, Benoit M

    2017-10-01

    Medical image registration establishes a correspondence between images of biological structures, and it is at the core of many applications. Commonly used deformable image registration methods depend on a good preregistration initialization. We develop a learning-based method to automatically find a set of robust landmarks in three-dimensional MR image volumes of the head. These landmarks are then used to compute a thin plate spline-based initialization transformation. The process involves two steps: (1) identifying a set of landmarks that can be reliably localized in the images and (2) selecting among them the subset that leads to a good initial transformation. To validate our method, we use it to initialize five well-established deformable registration algorithms that are subsequently used to register an atlas to MR images of the head. We compare our proposed initialization method with a standard approach that estimates an affine transformation in an intensity-based manner. We show that for all five registration algorithms the final registration results are statistically better when they are initialized with the method that we propose than when the standard approach is used. The technique that we propose is generic and could be used to initialize nonrigid registration algorithms for other applications.
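
    Given corresponding landmark pairs, the thin-plate-spline initialization step can be sketched with SciPy; the regression-forest landmark detection and subset selection of the paper are not shown, and the coordinates below are illustrative.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    atlas_pts = np.array([[10., 20., 30.], [40., 42., 38.], [25., 60., 50.],
                          [55., 15., 45.], [30., 35., 70.]])
    subject_pts = atlas_pts + np.random.default_rng(1).normal(
        scale=2.0, size=atlas_pts.shape)

    # One vector-valued TPS map from atlas space to subject space.
    tps = RBFInterpolator(atlas_pts, subject_pts, kernel='thin_plate_spline')
    warped = tps(np.array([[20., 30., 40.]]))   # map any atlas-space point
    ```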

  6. Object Transportation by Two Mobile Robots with Hand Carts.

    PubMed

    Sakuyama, Takuya; Figueroa Heredia, Jorge David; Ogata, Taiki; Hara, Tatsunori; Ota, Jun

    2014-01-01

    This paper proposes a methodology by which two small mobile robots can grasp, lift, and transport large objects using hand carts. The specific problems involve generating robot actions and determining the hand cart positions to achieve the stable loading of objects onto the carts. These problems are solved using nonlinear optimization, and we propose an algorithm for generating robot actions. The proposed method was verified through simulations and experiments using actual devices in a real environment. The proposed method could reduce the number of robots required to transport large objects by 50-60%. In addition, we demonstrated the efficacy of the method in real environments where errors occur in robot sensing and movement.

  7. A Novel Multilayered RFID Tagged Cargo Integrity Assurance Scheme

    PubMed Central

    Yang, Ming Hour; Luo, Jia Ning; Lu, Shao Yong

    2015-01-01

    To minimize cargo theft during transport, mobile radio frequency identification (RFID) grouping proof methods are generally employed to ensure the integrity of entire cargo loads. However, conventional grouping proofs cannot simultaneously generate grouping proofs for a specific group of RFID tags. The most serious problem of these methods is that nonexistent tags are included in the grouping proofs because of the considerable amount of time it takes to scan a high number of tags. Thus, applying grouping proof methods in the current logistics industry is difficult. To solve this problem, this paper proposes a method for generating multilayered offline grouping proofs. The proposed method provides tag anonymity; moreover, resolving disputes between recipients and transporters over the integrity of cargo deliveries can be expedited by generating grouping proofs and automatically authenticating the consistency between the receipt proof and pick proof. The proposed method can also protect against replay attacks, multi-session attacks, and concurrency attacks. Finally, experimental results verify that, compared with other methods for generating grouping proofs, the proposed method can efficiently generate offline grouping proofs involving several parties in a supply chain using mobile RFID. PMID:26512673

  8. On the use of sibling recurrence risks to select environmental factors liable to interact with genetic risk factors.

    PubMed

    Kazma, Rémi; Bonaïti-Pellié, Catherine; Norris, Jill M; Génin, Emmanuelle

    2010-01-01

    Gene-environment interactions are likely to be involved in the susceptibility to multifactorial diseases but are difficult to detect. Available methods usually concentrate on particular genetic and environmental factors. In this paper, we propose a new method to determine whether a given exposure is likely to interact with unknown genetic factors. Rather than focusing on a specific genetic factor, the degree of familial aggregation is used as a surrogate for genetic factors. A test comparing the recurrence risks in sibs according to the exposure of indexes is proposed, and its power is studied for varying values of the model parameters. The Exposed versus Unexposed Recurrence Analysis (EURECA) is valuable for common diseases with moderate familial aggregation, but only when the role of exposure has been clearly outlined. Interestingly, accounting for a sibling correlation in exposure increases the power of EURECA. An application to a sample ascertained through one index affected with type 2 diabetes is presented, in which gene-environment interactions involving obesity and physical inactivity are investigated. Association of obesity with type 2 diabetes is clearly evidenced, and a potential interaction involving this factor is suggested in Hispanics (P=0.045), whereas a clear gene-environment interaction involving physical inactivity is evidenced only in non-Hispanic whites (P=0.028). The proposed method might be of particular interest before genetic studies, to help determine the environmental risk factors that will need to be accounted for to increase the power to detect genetic risk factors and to select the most appropriate samples to genotype.

  9. Dual-mode nested search method for categorical uncertain multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Tang, Long; Wang, Hu

    2016-10-01

    Categorical multi-objective optimization is an important issue involved in many matching design problems. Non-numerical variables and their uncertainty are the major challenges of such optimizations. Therefore, this article proposes a dual-mode nested search (DMNS) method. In the outer layer, kriging metamodels are established using standard regular simplex mapping (SRSM) from categorical candidates to numerical values. Assisted by the metamodels, a k-cluster-based intelligent sampling strategy is developed to search Pareto frontier points. The inner layer uses an interval number method to model the uncertainty of categorical candidates. To improve the efficiency, a multi-feature convergent optimization via most-promising-area stochastic search (MFCOMPASS) is proposed to determine the bounds of objectives. Finally, typical numerical examples are employed to demonstrate the effectiveness of the proposed DMNS method.

  10. An Ensemble Framework Coping with Instability in the Gene Selection Process.

    PubMed

    Castellanos-Garzón, José A; Ramos, Juan; López-Sánchez, Daniel; de Paz, Juan F; Corchado, Juan M

    2018-03-01

    This paper proposes an ensemble framework for gene selection, which is aimed at addressing instability problems presented in the gene filtering task. The complex process of gene selection from gene expression data faces instability problems because different filter methods find different informative gene subsets. This makes the identification of significant genes by experts difficult. The instability of results can come from filter methods, gene classifier methods, different datasets of the same disease and multiple valid groups of biomarkers. Even though there are many proposals, the complexity imposed by this problem remains a challenge today. This work proposes a framework involving five stages of gene filtering to discover biomarkers for diagnosis and classification tasks. This framework performs a process of stable feature selection, facing the problems above and thus providing a more suitable and reliable solution for clinical and research purposes. Our proposal involves a process of multistage gene filtering, in which several ensemble strategies for gene selection are combined in such a way that different classifiers simultaneously assess gene subsets to face instability. First, we apply an ensemble of recent gene selection methods to obtain diversity in the genes found (stability with respect to filter methods). Next, we apply an ensemble of known classifiers to filter genes relevant to all classifiers at a time (stability with respect to classification methods). The achieved results were evaluated on two different datasets of the same disease (pancreatic ductal adenocarcinoma), in search of stability with respect to the disease, for which promising results were achieved.

  11. Rapid analysis of effluents generated by the dairy industry for fat determination by preconcentration in nylon membranes and attenuated total reflectance infrared spectroscopy measurement.

    PubMed

    Moliner Martínez, Y; Muñoz-Ortuño, M; Herráez-Hernández, R; Campíns-Falcó, P

    2014-02-01

    This paper describes a new approach for the determination of fat in the effluents generated by the dairy industry, based on the retention of fat in nylon membranes and measurement of the absorbances on the membrane surface by ATR-IR spectroscopy. Different options have been evaluated for retaining fat in the membranes using milk samples of different origin and fat content. Based on the results obtained, a method is proposed for the determination of fat in effluents which involves the filtration of 1 mL of the samples through 0.45 µm nylon membranes of 13 mm diameter. The fat content is then determined by measuring the absorbance of the band at 1745 cm⁻¹. The proposed method can be used for the direct estimation of fat at concentrations in the 2-12 mg/L interval with adequate reproducibility. The intraday precision, expressed as coefficients of variation (CVs), was ≤ 11%, whereas the interday CVs were ≤ 20%. The method shows a good tolerance towards conditions typically found in the effluents generated by the dairy industry. The most relevant features of the proposed method are simplicity and speed, as the samples can be characterized in a few minutes. Sample preparation does not involve additional instrumentation (such as pumps or vacuum equipment), organic solvents, or other chemicals. Therefore, the proposed method can be considered a rapid, simple and cost-effective alternative to gravimetric methods for controlling fat content in these effluents during production or cleaning processes.

  14. An improved wavelet-Galerkin method for dynamic response reconstruction and parameter identification of shear-type frames

    NASA Astrophysics Data System (ADS)

    Bu, Haifeng; Wang, Dansheng; Zhou, Pin; Zhu, Hongping

    2018-04-01

    An improved wavelet-Galerkin (IWG) method based on the Daubechies wavelet is proposed for reconstructing the dynamic responses of shear structures. The proposed method flexibly adjusts the wavelet resolution level according to the excitation, thereby avoiding the weakness of the wavelet-Galerkin multiresolution analysis (WGMA) method in terms of resolution and its requirement on the external excitation. In this work, IWG is implemented in several case studies involving single- and n-degree-of-freedom frame structures subjected to a given discrete excitation. Results demonstrate that IWG performs better than WGMA in terms of accuracy and computational efficiency. Furthermore, a new method for parameter identification based on IWG and an optimization algorithm is also developed for shear frame structures, and a simultaneous identification of structural parameters and excitation is implemented. Numerical results demonstrate that the proposed identification method is effective for shear frame structures.

  15. Landsat D Thematic Mapper image dimensionality reduction and geometric correction accuracy

    NASA Technical Reports Server (NTRS)

    Ford, G. E.

    1986-01-01

    To characterize and quantify the performance of the Landsat thematic mapper (TM), techniques for dimensionality reduction by linear transformation have been studied and evaluated, and the accuracy of the correction of geometric errors in TM images has been analyzed. Theoretical evaluations and comparisons of existing methods for the design of linear transformations for dimensionality reduction are presented. These methods include the discrete Karhunen-Loève (KL) expansion, Multiple Discriminant Analysis (MDA), the Thematic Mapper (TM)-Tasseled Cap Linear Transformation, and Singular Value Decomposition (SVD). A unified approach to these design problems is presented in which each method involves optimizing an objective function with respect to the linear transformation matrix. From these studies, four modified methods are proposed: the Space Variant Linear Transformation, the KL Transform-MDA hybrid method, and the First and Second Versions of the Weighted MDA method. The modifications involve the assignment of weights to classes to achieve improvements in the class conditional probability of error for classes with high weights. Experimental evaluations of the existing and proposed methods have been performed using the six reflective bands of the TM data. It is shown that, in terms of probability of classification error and the percentage of the cumulative eigenvalues, the six reflective bands of the TM data require only a three-dimensional feature space. It is also shown experimentally that, for the proposed methods, the classes with high weights show the expected improvements in class conditional probability of error estimates.
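
    As a companion to the KL-based reduction discussed above, a minimal sketch of a discrete Karhunen-Loève (principal component) transform of multiband pixels; the band layout and the choice of three components are illustrative:

        import numpy as np

        def kl_transform(bands, k=3):
            """Discrete KL (principal component) transform of multiband pixels;
            bands: (n_pixels, n_bands) array, k: retained dimensions."""
            Xc = bands - bands.mean(axis=0)
            C = np.cov(Xc, rowvar=False)             # band covariance matrix
            w, V = np.linalg.eigh(C)                 # eigenvalues, ascending
            order = np.argsort(w)[::-1]
            W = V[:, order[:k]]                      # top-k eigenvectors
            explained = w[order[:k]].sum() / w.sum() # cumulative eigenvalue fraction
            return Xc @ W, explained                 # k-D features per pixel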

  16. Collaborative study of an enzymatic digestion method for the isolation of light filth from ground beef or hamburger.

    PubMed

    Alioto, P; Andreas, M

    1976-01-01

    Collaborative results are presented for a proposed method for light filth extraction from ground beef or hamburger. The method involves enzymatic digestion, wet sieving, and extraction with light mineral oil from 40% isopropanol. Recoveries are good and filter papers are clean. This method has been adopted as official first action.

  17. Simplified power control method for cellular mobile communication

    NASA Astrophysics Data System (ADS)

    Leung, Y. W.

    1994-04-01

    The centralized power control (CPC) method measures the gain of the communication links between every mobile and every base station in the cochannel cells and determines the optimal transmitter powers that maximize the minimum carrier-to-interference ratio (CIR). The authors propose a simplified power control method which has nearly the same performance as the CPC method but involves much lower measurement overhead.
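
    For context, a classical eigenvalue-based sketch of max-min CIR balancing (the simplified measurement scheme that distinguishes the authors' method is not reproduced here; the gain matrix is hypothetical):

        import numpy as np

        def balanced_power(G):
            """Max-min CIR power balancing for a link gain matrix G,
            where G[i, j] is the gain from base station j to mobile i."""
            A = G / np.diag(G)[:, None]           # normalize rows by own-link gain
            np.fill_diagonal(A, 0.0)              # keep only interference terms
            w, V = np.linalg.eig(A)
            k = np.argmax(w.real)                 # Perron root of the nonnegative matrix
            p = np.abs(V[:, k].real)              # Perron eigenvector -> power vector
            return p / p.max(), 1.0 / w.real[k]   # powers and achievable minimum CIR

        G = np.array([[1.0, 0.10, 0.05],
                      [0.2, 1.00, 0.10],
                      [0.1, 0.15, 1.00]])         # hypothetical gains
        p, cir = balanced_power(G)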

  18. Supervised multiblock sparse multivariable analysis with application to multimodal brain imaging genetics.

    PubMed

    Kawaguchi, Atsushi; Yamashita, Fumio

    2017-10-01

    This article proposes a procedure for describing the relationship between high-dimensional data sets, such as multimodal brain images and genetic data. We propose a supervised technique that incorporates the clinical outcome to determine a score, which is a linear combination of variables with a hierarchical structure across modalities. This approach is expected to yield interpretable and predictive scores. The proposed method was applied to a study of Alzheimer's disease (AD). We propose a diagnostic method for AD that uses whole-brain magnetic resonance imaging (MRI) and positron emission tomography (PET); we select effective brain regions for the diagnostic probability and investigate the genome-wide association with those regions using single nucleotide polymorphisms (SNPs). The two-step dimension reduction method, which we previously introduced, was considered applicable to such a study and allows us to partially incorporate the proposed method. We show that the proposed method offers classification functions with feasibility and reasonable prediction accuracy based on receiver operating characteristic (ROC) analysis, and yields reasonable regions of the brain and genomes. Our simulation study based on a synthetic structured data set showed that the proposed method outperformed the original method and exhibited the expected characteristics of the supervised feature.

  19. Biosphere Reserve for All: Potentials for Involving Underrepresented Age Groups in the Development of a Biosphere Reserve through Intergenerational Practice.

    PubMed

    Mitrofanenko, Tamara; Snajdr, Julia; Muhar, Andreas; Penker, Marianne; Schauppenlehner-Kloyber, Elisabeth

    2018-05-22

    Stakeholder participation is of high importance in UNESCO biosphere reserves as model regions for sustainable development; however, certain groups remain underrepresented. The paper proposes Intergenerational Practice (IP) as a means of involving youth and elderly women and explores its options and barriers, using the example of the Salzburger Lungau and Kärntner Nockberge Biosphere Reserve in Austria. A mixed-methods case study analysis is used. The results reveal obstacles and motivations for the youth and the elderly women to participate in biosphere reserve implementation and intergenerational activities, and imply that much potential for IP exists in the biosphere reserve region. The authors propose suitable solutions from the intergenerational field to overcome the identified participation obstacles and suggest benefits of incorporating IP as a management tool into biosphere reserve activities. Suggestions for future research include evaluating applications of IP in the context of protected areas, testing methods used in other contexts, and contributing to theory development.

  20. An efficient method for the computation of Legendre moments.

    PubMed

    Yap, Pew-Thian; Paramesran, Raveendran

    2005-12-01

    Legendre moments are continuous moments; hence, when they are applied to discrete-space images, numerical approximation is involved and error occurs. This paper proposes a method to compute the exact values of the moments by mathematically integrating the Legendre polynomials over the corresponding intervals of the image pixels. Experimental results show that the values obtained match those calculated theoretically, and images reconstructed from these moments have lower error than those reconstructed with conventional methods for the same order. Although the same set of exact Legendre moments can be obtained indirectly from the set of geometric moments, the computation time taken is much longer than with the proposed method.
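
    A minimal sketch of the exact-integration idea, using the identity (2n+1) P_n(x) = d/dx [P_{n+1}(x) - P_{n-1}(x)] to integrate the Legendre polynomials over pixel cells (the normalization follows the standard Legendre-moment definition, and the image is assumed mapped onto [-1, 1]²):

        import numpy as np
        from numpy.polynomial.legendre import legval

        def legendre_integral(n, a, b):
            """Exact integral of P_n over [a, b]."""
            if n == 0:
                return b - a
            cp = np.zeros(n + 2); cp[n + 1] = 1.0   # coefficients of P_{n+1}
            cm = np.zeros(n);     cm[n - 1] = 1.0   # coefficients of P_{n-1}
            F = lambda x: (legval(x, cp) - legval(x, cm)) / (2 * n + 1)
            return F(b) - F(a)

        def exact_legendre_moment(img, p, q):
            """Moment lambda_pq of a piecewise-constant image on [-1, 1]^2,
            integrating P_p and P_q exactly over each pixel cell."""
            M, N = img.shape
            xe = np.linspace(-1.0, 1.0, N + 1)      # pixel edges along x
            ye = np.linspace(-1.0, 1.0, M + 1)      # pixel edges along y
            Ix = np.array([legendre_integral(p, xe[j], xe[j + 1]) for j in range(N)])
            Iy = np.array([legendre_integral(q, ye[i], ye[i + 1]) for i in range(M)])
            norm = (2 * p + 1) * (2 * q + 1) / 4.0  # standard normalization
            return norm * Iy @ img @ Ix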

  1. Validation of catchment models for predicting land-use and climate change impacts. 1. Method

    NASA Astrophysics Data System (ADS)

    Ewen, J.; Parkin, G.

    1996-02-01

    Computer simulation models are increasingly being proposed as tools capable of giving water resource managers accurate predictions of the impact of changes in land-use and climate. Previous validation testing of catchment models is reviewed, and it is concluded that the methods used do not clearly test a model's fitness for such a purpose. A new generally applicable method is proposed. This involves the direct testing of fitness for purpose, uses established scientific techniques, and may be implemented within a quality assured programme of work. The new method is applied in Part 2 of this study (Parkin et al., J. Hydrol., 175:595-613, 1996).

  2. Generalized Factorial Moments

    NASA Astrophysics Data System (ADS)

    Bialas, A.

    2004-02-01

    It is shown that the method of eliminating the statistical fluctuations from event-by-event analysis proposed recently by Fu and Liu can be rewritten in a compact form involving the generalized factorial moments.

  3. An adaptive cubature formula for efficient reliability assessment of nonlinear structural dynamic systems

    NASA Astrophysics Data System (ADS)

    Xu, Jun; Kong, Fan

    2018-05-01

    Extreme value distribution (EVD) evaluation is a critical topic in the reliability analysis of nonlinear structural dynamic systems. In this paper, a new method is proposed to obtain the EVD. The maximum entropy method (MEM) with fractional moments as constraints is employed to derive the entire range of the EVD. Then, an adaptive cubature formula is proposed for the assessment of the fractional moments involved in MEM, which is closely related to the efficiency and accuracy of the reliability analysis. Three point sets, comprising a total of 2d² + 1 integration points in dimension d, are generated by the proposed formula, which ensures its efficiency. Besides, a "free" parameter is introduced, which makes the proposed formula adaptive to the dimension. The "free" parameter is determined by arranging one point set adjacent to the boundary of the hyper-sphere which contains the bulk of the total probability. In this way, the tail distribution may be better reproduced and the fractional moments can be evaluated accurately. Finally, the proposed method is applied to a ten-storey shear frame structure under seismic excitations, which exhibits strong nonlinearity. The numerical results demonstrate the efficacy of the proposed method.
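
    For orientation, a small sketch of the fractional-moment constraints used by MEM (plain Monte Carlo stands in for the paper's adaptive cubature formula, and the response function is hypothetical):

        import numpy as np

        rng = np.random.default_rng(0)
        d = 10                                  # dimension of the random input
        X = rng.standard_normal((100_000, d))   # standard normal basic variables
        Z = np.abs(1.0 + X.sum(axis=1))         # hypothetical extreme response
        for alpha in (0.3, 0.7, 1.2):           # fractional orders
            print(alpha, (Z ** alpha).mean())   # fractional moment E[Z^alpha]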

  4. Write Another Poem about Marigold: Meaningful Writing as a Process of Change.

    ERIC Educational Resources Information Center

    Teichmann, Sandra Gail

    1995-01-01

    Considers a process approach toward the goal of meaningful writing which may aid in positive personal change. Outlines recent criticism of contemporary poetry; argues against tradition and practice of craft in writing poetry. Proposes a means of writing centered on a method of inquiry involving elements of self-involvement, curiosity, and risk to…

  5. Thin layer chromatographic method for the detection of uric acid: collaborative study.

    PubMed

    Thrasher, J J; Abadie, A

    1978-07-01

    A collaborative study has been completed on an improved method for the detection and confirmation of uric acid from bird and insect excreta. The proposed method involves the lithium carbonate solubilization of the suspect excreta material, followed by butanol-methanol-water-acetic acid thin layer chromatography, and trisodium phosphate-phosphotungstic acid color development. The collaborative tests resulted in 100% detection of uric acid standard at the 50 ng level and 75% detection at the 20-25 ng level. No false positives were reported during tests of compounds similar to uric acid. The proposed method has been adopted official first action; the present official final action method, 44.161, will be retained for screening purposes.

  6. Newmark-Beta-FDTD method for super-resolution analysis of time reversal waves

    NASA Astrophysics Data System (ADS)

    Shi, Sheng-Bing; Shao, Wei; Ma, Jing; Jin, Congjun; Wang, Xiao-Hua

    2017-09-01

    In this work, a new unconditionally stable finite-difference time-domain (FDTD) method with the split-field perfectly matched layer (PML) is proposed for the analysis of time reversal (TR) waves. The proposed method is well suited to multiscale problems involving microstructures. The spatial and temporal derivatives in this method are discretized by the central difference technique and the Newmark-Beta algorithm, respectively, and the derivation results in a banded-sparse matrix equation. Since the coefficient matrix remains unchanged during the whole simulation, the lower-upper (LU) decomposition of the matrix needs to be performed only once, at the beginning of the calculation. Moreover, the reverse Cuthill-McKee (RCM) technique, an effective preprocessing technique for bandwidth compression of sparse matrices, is used to improve computational efficiency. The super-resolution focusing of TR wave propagation in two- and three-dimensional spaces is included to validate the accuracy and efficiency of the proposed method.
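
    As background, a minimal sketch of Newmark-Beta time stepping for a generic linear system M u'' + C u' + K u = f(t); as in the scheme above, the effective matrix is factorized once and reused at every step (the matrices and loads are assumed given):

        import numpy as np
        from scipy.linalg import lu_factor, lu_solve

        def newmark_beta(M, C, K, f, u0, v0, dt, beta=0.25, gamma=0.5):
            """Newmark-Beta integration of M u'' + C u' + K u = f(t);
            f is an array of load vectors, one row per time step."""
            n_steps, n = f.shape
            u = np.zeros((n_steps, n)); v = np.zeros_like(u); a = np.zeros_like(u)
            u[0], v[0] = u0, v0
            a[0] = np.linalg.solve(M, f[0] - C @ v0 - K @ u0)
            Keff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
            lu = lu_factor(Keff)                 # factorize once, reuse every step
            for i in range(n_steps - 1):
                rhs = (f[i + 1]
                       + M @ (u[i] / (beta * dt**2) + v[i] / (beta * dt)
                              + (0.5 / beta - 1.0) * a[i])
                       + C @ (gamma / (beta * dt) * u[i]
                              + (gamma / beta - 1.0) * v[i]
                              + dt * (0.5 * gamma / beta - 1.0) * a[i]))
                u[i + 1] = lu_solve(lu, rhs)
                a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt**2)
                            - v[i] / (beta * dt) - (0.5 / beta - 1.0) * a[i])
                v[i + 1] = v[i] + dt * ((1.0 - gamma) * a[i] + gamma * a[i + 1])
            return u, v, a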

  7. Game Methods of Collective Decision Making in Management Consulting.

    ERIC Educational Resources Information Center

    Prigozhin, Arkadii Il'ich

    1991-01-01

    Explores former Soviet management consultants' increased use of social psychological game methods. Identifies such games as means of involving segments of client organizations in accomplishing shared tasks. Proposes a "practical" business game, designed to shape the process of formulating new management decisions at a radical level.…

  8. Creating IRT-Based Parallel Test Forms Using the Genetic Algorithm Method

    ERIC Educational Resources Information Center

    Sun, Koun-Tem; Chen, Yu-Jen; Tsai, Shu-Yen; Cheng, Chien-Fen

    2008-01-01

    In educational measurement, the construction of parallel test forms is often a combinatorial optimization problem that involves the time-consuming selection of items to construct tests having approximately the same test information functions (TIFs) and constraints. This article proposes a novel method, genetic algorithm (GA), to construct parallel…

  9. A new IRT-based standard setting method: application to eCat-listening.

    PubMed

    García, Pablo Eduardo; Abad, Francisco José; Olea, Julio; Aguado, David

    2013-01-01

    Criterion-referenced interpretations of tests are highly necessary, which usually involves the difficult task of establishing cut scores. In contrast with other Item Response Theory (IRT)-based standard setting methods, a non-judgmental approach is proposed in this study, in which Item Characteristic Curve (ICC) transformations lead to the final cut scores. eCat-Listening, a computerized adaptive test for the evaluation of English listening, was administered to 1,576 participants, and the proposed standard setting method was applied to classify them into the performance standards of the Common European Framework of Reference for Languages (CEFR). The results showed a classification closely related to relevant external measures of the English language domain, according to the CEFR. It is concluded that the proposed method is a practical and valid standard setting alternative for the interpretation of IRT-based tests.

  10. Reassembling the Information Technology Innovation Process: An Actor Network Theory Method for Managing the Initiation, Production, and Diffusion of Innovations

    NASA Astrophysics Data System (ADS)

    Zendejas, Gerardo; Chiasson, Mike

    This paper will propose and explore a method to enhance focal actors' abilities to enroll and control the many social and technical components interacting during the initiation, production, and diffusion of innovations. The reassembling and stabilizing of such components is the challenging goal of the focal actors involved in these processes. To address this possibility, a healthcare project involving the initiation, production, and diffusion of an IT-based innovation will be influenced by the researcher, using concepts from actor network theory (ANT), within an action research methodology (ARM). The experiences using this method, and the nature of enrolment and translation during its use, will highlight if and how ANT can provide a problem-solving method to help assemble the social and technical actants involved in the diffusion of an innovation. Finally, the paper will discuss the challenges and benefits of implementing such methods to attain widespread diffusion.

  11. Pseudo-orthogonalization of memory patterns for associative memory.

    PubMed

    Oku, Makito; Makino, Takaki; Aihara, Kazuyuki

    2013-11-01

    A new method for improving the storage capacity of associative memory models on a neural network is proposed. The storage capacity of the network increases in proportion to the network size in the case of random patterns, but, in general, the capacity suffers from correlation among memory patterns. Numerous solutions to this problem have been proposed so far, but their high computational cost limits their scalability. In this paper, we propose a novel and simple solution that is locally computable without any iteration. Our method involves XNOR masking of the original memory patterns with random patterns, and the masked patterns and masks are concatenated. The resulting decorrelated patterns allow higher storage capacity at the cost of the pattern length. Furthermore, the increase in the pattern length can be reduced through blockwise masking, which results in a small amount of capacity loss. Movie replay and image recognition are presented as examples to demonstrate the scalability of the proposed method.
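
    A minimal sketch of the masking step (in the ±1 representation, XNOR is elementwise multiplication; the sizes and patterns are illustrative):

        import numpy as np

        rng = np.random.default_rng(0)
        N, P = 256, 20
        patterns = rng.choice([-1, 1], size=(P, N))   # memory patterns (correlated in practice)
        masks = rng.choice([-1, 1], size=(P, N))      # one random mask per pattern
        masked = patterns * masks                     # XNOR masking in the +/-1 representation
        stored = np.hstack([masked, masks])           # concatenate masked patterns and masks
        # The masked parts are near-orthogonal even if the originals are not:
        G = (masked @ masked.T) / N                   # off-diagonal overlaps are ~ 0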

  12. A DFFD simulation method combined with the spectral element method for solid-fluid-interaction problems

    NASA Astrophysics Data System (ADS)

    Chen, Li-Chieh; Huang, Mei-Jiau

    2017-02-01

    A 2D simulation method for a rigid body moving in an incompressible viscous fluid is proposed. It combines one of the immersed-boundary methods, the DFFD (direct forcing fictitious domain) method with the spectral element method; the former is employed for efficiently capturing the two-way FSI (fluid-structure interaction) and the geometric flexibility of the latter is utilized for any possibly co-existing stationary and complicated solid or flow boundary. A pseudo body force is imposed within the solid domain to enforce the rigid body motion and a Lagrangian mesh composed of triangular elements is employed for tracing the rigid body. In particular, a so called sub-cell scheme is proposed to smooth the discontinuity at the fluid-solid interface and to execute integrations involving Eulerian variables over the moving-solid domain. The accuracy of the proposed method is verified through an observed agreement of the simulation results of some typical flows with analytical solutions or existing literatures.

  13. State-vector formalism and the Legendre polynomial solution for modelling guided waves in anisotropic plates

    NASA Astrophysics Data System (ADS)

    Zheng, Mingfang; He, Cunfu; Lu, Yan; Wu, Bin

    2018-01-01

    We presented a numerical method to solve phase dispersion curve in general anisotropic plates. This approach involves an exact solution to the problem in the form of the Legendre polynomial of multiple integrals, which we substituted into the state-vector formalism. In order to improve the efficiency of the proposed method, we made a special effort to demonstrate the analytical methodology. Furthermore, we analyzed the algebraic symmetries of the matrices in the state-vector formalism for anisotropic plates. The basic feature of the proposed method was the expansion of field quantities by Legendre polynomials. The Legendre polynomial method avoid to solve the transcendental dispersion equation, which can only be solved numerically. This state-vector formalism combined with Legendre polynomial expansion distinguished the adjacent dispersion mode clearly, even when the modes were very close. We then illustrated the theoretical solutions of the dispersion curves by this method for isotropic and anisotropic plates. Finally, we compared the proposed method with the global matrix method (GMM), which shows excellent agreement.

  14. A Generic Deep-Learning-Based Approach for Automated Surface Inspection.

    PubMed

    Ren, Ruoxu; Hung, Terence; Tan, Kay Chen

    2018-03-01

    Automated surface inspection (ASI) is a challenging task in industry, as collecting training data is usually costly and related methods are highly dataset-dependent. In this paper, a generic approach that requires little training data for ASI is proposed. First, this approach builds a classifier on the features of image patches, where the features are transferred from a pretrained deep learning network. Next, pixel-wise prediction is obtained by convolving the trained classifier over the input image. Experiments on three public data sets and one industrial data set are carried out, involving two tasks: 1) image classification and 2) defect segmentation. The results of the proposed algorithm are compared against several of the best benchmarks in the literature. In the classification tasks, the proposed method improves accuracy by 0.66%-25.50%. In the segmentation tasks, the proposed method reduces error escape rates by 6.00%-19.00% in three defect types and improves accuracies by 2.29%-9.86% in all seven defect types. In addition, the proposed method achieves a 0.0% error escape rate in the segmentation task on the industrial data.
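
    For orientation, a sketch of the transferred-feature idea under stated assumptions (a torchvision ResNet-18 backbone stands in for the pretrained network, and X_train/y_train are assumed labeled patches):

        import torch, torchvision
        from sklearn.linear_model import LogisticRegression

        # Pretrained backbone used as a fixed feature extractor (assumed setup).
        net = torchvision.models.resnet18(weights="IMAGENET1K_V1")
        net.fc = torch.nn.Identity()            # drop the classification head
        net.eval()

        def patch_features(patches):
            """patches: float tensor (N, 3, 224, 224) -> (N, 512) features."""
            with torch.no_grad():
                return net(patches).numpy()

        # Train a light classifier on labeled defect / non-defect patches
        # (X_train, y_train assumed given), then score a dense grid of patches
        # slid over the input image to obtain the pixel-wise prediction:
        # clf = LogisticRegression(max_iter=1000).fit(patch_features(X_train), y_train)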

  15. Simple and practical approach for computing the ray Hessian matrix in geometrical optics.

    PubMed

    Lin, Psang Dain

    2018-02-01

    A method is proposed for simplifying the computation of the ray Hessian matrix in geometrical optics by replacing the angular variables in the system variable vector with their equivalent cosine and sine functions. The variable vector of a boundary surface is similarly defined in such a way as to exclude any angular variables. It is shown that the proposed formulations reduce the computation time of the Hessian matrix by around 10 times compared to the previous method reported by the current group in Advanced Geometrical Optics (2016). Notably, the method proposed in this study involves only polynomial differentiation, i.e., trigonometric function calls are not required. As a consequence, the computational complexity is significantly reduced. Five illustrative examples are given. The first three examples show that the proposed method is applicable to the determination of the Hessian matrix for any pose matrix, irrespective of the order in which the rotation and translation motions are specified. The last two examples demonstrate the use of the proposed Hessian matrix in determining the axial and lateral chromatic aberrations of a typical optical system.
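
    A small illustrative check of the idea with sympy: writing a toy ray quantity in (c, s) = (cos θ, sin θ) makes every Hessian entry a polynomial, so no trigonometric calls appear in the derivatives (the quantity differentiated here is hypothetical):

        import sympy as sp

        c, s, d = sp.symbols('c s d')      # c = cos(theta), s = sin(theta)
        r = sp.Matrix([d * c, d * s])      # hypothetical ray point
        q = (r.T * r)[0]                   # squared distance: d**2*(c**2 + s**2)
        H = sp.hessian(q, (c, s, d))       # every entry is a polynomial in c, s, d
        print(H)                           # no trigonometric function calls appear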

  16. Writing Abstracts for MLIS Research Proposals Using Worked Examples: An Innovative Approach to Teaching the Elements of Research Design

    ERIC Educational Resources Information Center

    Ondrusek, Anita L.; Thiele, Harold E.; Yang, Changwoo

    2014-01-01

    The authors examined abstracts written by graduate students for their research proposals as a requirement for a course in research methods in a distance learning MLIS program. The students learned under three instructional conditions that involved varying levels of access to worked examples created from abstracts representing research in the LIS…

  17. Formalized Conflicts Detection Based on the Analysis of Multiple Emails: An Approach Combining Statistics and Ontologies

    NASA Astrophysics Data System (ADS)

    Zakaria, Chahnez; Curé, Olivier; Salzano, Gabriella; Smaïli, Kamel

    In Computer Supported Cooperative Work (CSCW), it is crucial for project leaders to detect conflicting situations as early as possible. Generally, this task is performed manually by studying a set of documents exchanged between team members. In this paper, we propose a full-fledged automatic solution that identifies documents, subjects and actors involved in relational conflicts. Our approach detects conflicts in emails, probably the most popular type of document in CSCW, but the methods used can handle other text-based documents. These methods rely on the combination of statistical and ontological operations. The proposed solution is decomposed into several steps: (i) we enrich a simple negative emotion ontology with terms occurring in the corpus of emails, (ii) we categorize each conflicting email according to the concepts of this ontology, and (iii) we identify emails, subjects and team members involved in conflicting emails using possibilistic description logic and a set of proposed measures. Each of these steps is evaluated and validated on concrete examples. Moreover, the framework of this approach is generic and can be easily adapted to domains other than conflicts, e.g. security issues, and extended with operations making use of our proposed set of measures.

  18. Robust path planning for flexible needle insertion using Markov decision processes.

    PubMed

    Tan, Xiaoyu; Yu, Pengqian; Lim, Kah-Bin; Chui, Chee-Kong

    2018-05-11

    Flexible needles have the potential to accurately navigate to a treatment region in the least invasive manner. We propose a new planning method using Markov decision processes (MDPs) for flexible needle navigation that can perform robust path planning and steering under complex tissue-needle interactions. This method enhances the robustness of flexible needle steering from three different perspectives. First, the method considers the problem caused by soft tissue deformation. It then resolves the common needle penetration failure caused by patterns of targets, while the last solution addresses the uncertainty in flexible needle motion due to complex and unpredictable tissue-needle interaction. Computer simulation and phantom experimental results show that the proposed method can perform robust planning and generate a secure control policy for flexible needle steering. Compared with a traditional method using MDPs, the proposed method achieves higher accuracy and a higher probability of success in avoiding obstacles under complicated and uncertain tissue-needle interactions. Future work will involve experiments with biological tissue in vivo. The proposed robust path planning method can securely steer a flexible needle within soft phantom tissues and achieves high adaptability in computer simulation.
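
    For orientation, a toy grid sketch of the MDP idea: the needle advances one column per step, steering is corrupted by stochastic drift (standing in for tissue-needle uncertainty), and the obstacles and target are hypothetical; because the column always advances, one backward dynamic-programming sweep solves the MDP exactly:

        import numpy as np

        H, W = 20, 30                                  # grid: rows x columns
        obstacles = {(8, c) for c in range(10, 20)}    # hypothetical obstacle band
        target_row, actions = 12, (-1, 0, 1)           # steer up / straight / down
        drift = ((-1, 0.15), (0, 0.70), (1, 0.15))     # stochastic motion model
        gamma = 0.95                                   # discount factor

        V = np.full((H, W), -50.0)                     # missing the target is penalized
        V[target_row, W - 1] = 0.0
        policy = np.zeros((H, W), dtype=int)
        for c in range(W - 2, -1, -1):                 # backward induction over columns
            for r in range(H):
                if (r, c) in obstacles:
                    V[r, c] = -100.0                   # hitting an obstacle is worst
                    continue
                vals = [sum(p * (-1.0 + gamma * V[np.clip(r + a + d, 0, H - 1), c + 1])
                            for d, p in drift) for a in actions]
                V[r, c] = max(vals)
                policy[r, c] = actions[int(np.argmax(vals))]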

  19. Results of community deliberation about social impacts of ecological restoration: comparing public input of self-selected versus actively engaged community members.

    PubMed

    Harris, Charles C; Nielsen, Erik A; Becker, Dennis R; Blahna, Dale J; McLaughlin, William J

    2012-08-01

    Participatory processes for obtaining residents' input about community impacts of proposed environmental management actions have long raised concerns about who participates in public involvement efforts and whose interests they represent. This study explored methods of broad-based involvement and the role of deliberation in social impact assessment. Interactive community forums were conducted in 27 communities to solicit public input on proposed alternatives for recovering wild salmon in the Pacific Northwest US. Individuals identified by fellow residents as most active and involved in community affairs ("AE residents") were invited to participate in deliberations about likely social impacts of proposed engineering and ecological actions such as dam removal. Judgments of these AE participants about community impacts were compared with the judgments of residents motivated to attend a forum out of personal interest, who were designated as self-selected ("SS") participants. While the magnitude of impacts rated by SS participants across all communities differed significantly from AE participants' ratings, in-depth analysis of results from two community case studies found that both AE and SS participants identified a large and diverse set of unique impacts, as well as many of the same kinds of impacts. Thus, inclusion of both kinds of residents resulted in a greater range of impacts for consideration in the environmental impact study. The case study results also found that the extent to which similar kinds of impacts are specified by AE and SS group members can differ by type of community. Study results caution against simplistic conclusions drawn from this approach to community-wide public participation. Nonetheless, the results affirm that deliberative methods for community-based impact assessment involving both AE and SS residents can provide a more complete picture of perceived impacts of proposed restoration activities.

  20. Results of Community Deliberation About Social Impacts of Ecological Restoration: Comparing Public Input of Self-Selected Versus Actively Engaged Community Members

    NASA Astrophysics Data System (ADS)

    Harris, Charles C.; Nielsen, Erik A.; Becker, Dennis R.; Blahna, Dale J.; McLaughlin, William J.

    2012-08-01

    Participatory processes for obtaining residents' input about community impacts of proposed environmental management actions have long raised concerns about who participates in public involvement efforts and whose interests they represent. This study explored methods of broad-based involvement and the role of deliberation in social impact assessment. Interactive community forums were conducted in 27 communities to solicit public input on proposed alternatives for recovering wild salmon in the Pacific Northwest US. Individuals identified by fellow residents as most active and involved in community affairs ("AE residents") were invited to participate in deliberations about likely social impacts of proposed engineering and ecological actions such as dam removal. Judgments of these AE participants about community impacts were compared with the judgments of residents motivated to attend a forum out of personal interest, who were designated as self-selected ("SS") participants. While the magnitude of impacts rated by SS participants across all communities differed significantly from AE participants' ratings, in-depth analysis of results from two community case studies found that both AE and SS participants identified a large and diverse set of unique impacts, as well as many of the same kinds of impacts. Thus, inclusion of both kinds of residents resulted in a greater range of impacts for consideration in the environmental impact study. The case study results also found that the extent to which similar kinds of impacts are specified by AE and SS group members can differ by type of community. Study results caution against simplistic conclusions drawn from this approach to community-wide public participation. Nonetheless, the results affirm that deliberative methods for community-based impact assessment involving both AE and SS residents can provide a more complete picture of perceived impacts of proposed restoration activities.

  1. 45 CFR 46.110 - Expedited review procedures for certain kinds of research involving no more than minimal risk...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... review procedure shall adopt a method for keeping all members advised of research proposals which have... research involving no more than minimal risk, and for minor changes in approved research. 46.110 Section 46... SUBJECTS Basic HHS Policy for Protection of Human Research Subjects § 46.110 Expedited review procedures...

  2. 10 CFR 745.110 - Expedited review procedures for certain kinds of research involving no more than minimal risk...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... adopt a method for keeping all members advised of research proposals which have been approved under the... 10 Energy 4 2011-01-01 2011-01-01 false Expedited review procedures for certain kinds of research involving no more than minimal risk, and for minor changes in approved research. 745.110 Section 745.110...

  3. 10 CFR 745.110 - Expedited review procedures for certain kinds of research involving no more than minimal risk...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... adopt a method for keeping all members advised of research proposals which have been approved under the... 10 Energy 4 2010-01-01 2010-01-01 false Expedited review procedures for certain kinds of research involving no more than minimal risk, and for minor changes in approved research. 745.110 Section 745.110...

  4. Results of community deliberation about social impacts of ecological restoration: comparing public input of self-selected versus actively engaged community members

    Treesearch

    Charles C. Harris; Erik A. Nielsen; Dennis R. Becker; Dale J. Blahna; William J. McLaughlin

    2012-01-01

    Participatory processes for obtaining residents' input about community impacts of proposed environmental management actions have long raised concerns about who participates in public involvement efforts and whose interests they represent. This study explored methods of broad-based involvement and the role of deliberation in social impact assessment. Interactive...

  5. Multiple-3D-object secure information system based on phase shifting method and single interference.

    PubMed

    Li, Wei-Na; Shi, Chen-Xiao; Piao, Mei-Lan; Kim, Nam

    2016-05-20

    We propose a multiple-3D-object secure information system for encrypting multiple three-dimensional (3D) objects based on the three-step phase shifting method. During the decryption procedure, the five phase functions (PFs) of our previous method are reduced to three PFs, which implies that one cross beam splitter is utilized to implement the single decryption interference. The advantages of the proposed scheme also include the following: each 3D object can be decrypted independently, without first decrypting a series of other objects; the quality of the decrypted slice image of each object is high, with correlation coefficients no lower than 0.95; and no iterative algorithm is involved. The feasibility of the proposed scheme is demonstrated by computer simulation results.
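
    As background, the standard three-step phase-shifting recovery that such schemes build on (phase shifts of 0, 2π/3 and 4π/3 are assumed; the encryption/decryption machinery itself is not reproduced):

        import numpy as np

        def three_step_phase(I1, I2, I3):
            """Wrapped phase from three interferograms with shifts 0, 2pi/3, 4pi/3."""
            return np.arctan2(np.sqrt(3.0) * (I3 - I2), 2.0 * I1 - I2 - I3)

        # Quick self-check with a synthetic fringe value:
        phi, A, B = 1.234, 2.0, 0.7
        I = [A + B * np.cos(phi + d) for d in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]
        assert np.isclose(three_step_phase(*I), phi)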

  6. Information verification and encryption based on phase retrieval with sparsity constraints and optical inference

    NASA Astrophysics Data System (ADS)

    Zhong, Shenlu; Li, Mengjiao; Tang, Xiajie; He, Weiqing; Wang, Xiaogang

    2017-01-01

    A novel optical information verification and encryption method is proposed based on the inference principle and phase retrieval with sparsity constraints. In this method, a target image is encrypted into two phase-only masks (POMs), which comprise sparse phase data used for verification. Both POMs need to be authenticated before being used for decryption. The target image can be optically reconstructed when the two authenticated POMs are Fourier transformed and convolved with the correct decryption key, which is also generated in the encryption process. No holographic scheme is involved in the proposed optical verification and encryption system, and there is no problem of information disclosure in the two authenticable POMs. Numerical simulation results demonstrate the validity and good performance of the proposed method.

  7. An implicit boundary integral method for computing electric potential of macromolecules in solvent

    NASA Astrophysics Data System (ADS)

    Zhong, Yimin; Ren, Kui; Tsai, Richard

    2018-04-01

    A numerical method using implicit surface representations is proposed to solve the linearized Poisson-Boltzmann equation that arises in mathematical models of the electrostatics of molecules in solvent. The proposed method uses an implicit boundary integral formulation to derive a linear system defined on Cartesian nodes in a narrow band surrounding the closed surface that separates the molecule and the solvent. The needed implicit surface is constructed from the given atomic description of the molecules by a sequence of standard level set algorithms. A fast multipole method is applied to accelerate the solution of the linear system. A few numerical studies involving standard test cases are presented and compared to existing results.

  8. Designing a composite correlation filter based on iterative optimization of training images for distortion invariant face recognition

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Elbouz, M.; Alfalou, A.; Brosseau, C.

    2017-06-01

    We present a novel method to optimize the discrimination ability and noise robustness of composite filters. This method is based on the iterative preprocessing of training images, which extracts boundary and detailed feature information from authentic training faces, thereby improving the peak-to-correlation energy (PCE) ratio for authentic faces and conferring immunity to intra-class variance and noise interference. By adding the training images directly, one can obtain a composite template with high discrimination ability and robustness for the face recognition task. The proposed composite correlation filter does not involve any of the complicated mathematical analysis and computation often required in the design of correlation algorithms. Simulation tests have been conducted to check the effectiveness and feasibility of our proposal. Moreover, to assess the robustness of composite filters using receiver operating characteristic (ROC) curves, we devise a new method for counting true positive and false positive rates based on the difference between the PCE and a threshold.
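
    For reference, one common definition of the PCE metric used above, computed on an FFT-based correlation plane (a sketch; windowing and normalization conventions vary):

        import numpy as np

        def pce(scene, template):
            """Peak-to-correlation-energy of the correlation plane between
            a scene and a filter template of the same size."""
            C = np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(template)))
            plane = np.abs(C) ** 2                # correlation plane energy
            return plane.max() / plane.sum()      # peak energy over total energy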

  9. Aveiro method in reproducing kernel Hilbert spaces under complete dictionary

    NASA Astrophysics Data System (ADS)

    Mai, Weixiong; Qian, Tao

    2017-12-01

    The Aveiro Method is a sparse representation method in reproducing kernel Hilbert spaces (RKHS) that gives orthogonal projections in linear combinations of reproducing kernels over uniqueness sets. It suffers, however, from the determination of uniqueness sets in the underlying RKHS. In fact, in general spaces, uniqueness sets are not easy to identify, let alone the convergence speed of the Aveiro Method. To avoid those difficulties we propose a new Aveiro Method based on a dictionary and the matching pursuit idea. In fact, we do more: the new Aveiro Method is related to the recently proposed Pre-Orthogonal Greedy Algorithm (P-OGA), which involves completion of a given dictionary. The new method is called the Aveiro Method Under Complete Dictionary (AMUCD). The complete dictionary consists of all directional derivatives of the underlying reproducing kernels. We show that, under the boundary vanishing condition, which holds for the classical Hardy and Paley-Wiener spaces, the complete dictionary enables an efficient expansion of any given element in the Hilbert space. The proposed method reveals new and advanced aspects of both the Aveiro Method and the greedy algorithm.

  10. Possibilities of Particle Finite Element Methods in Industrial Forming Processes

    NASA Astrophysics Data System (ADS)

    Oliver, J.; Cante, J. C.; Weyler, R.; Hernandez, J.

    2007-04-01

    The work investigates the possibilities offered by the particle finite element method (PFEM) in the simulation of forming problems involving large deformations, multiple contacts, and the generation of new boundaries. The description of the most distinguishing aspects of the PFEM, and its application to the simulation of representative forming processes, illustrate the proposed methodology.

  11. Research Knowledge Transfer through Business-Driven Student Assignment

    ERIC Educational Resources Information Center

    Sas, Corina

    2009-01-01

    Purpose: The purpose of this paper is to present a knowledge transfer method that capitalizes on both research and teaching dimensions of academic work. It also aims to propose a framework for evaluating the impact of such a method on the involved stakeholders. Design/methodology/approach: The case study outlines and evaluates the six-stage…

  12. A nonparametric smoothing method for assessing GEE models with longitudinal binary data.

    PubMed

    Lin, Kuo-Chin; Chen, Yi-Ju; Shyr, Yu

    2008-09-30

    Studies involving longitudinal binary responses are widely applied in health and biomedical sciences research and are frequently analyzed by the generalized estimating equations (GEE) method. This article proposes an alternative goodness-of-fit test based on a nonparametric smoothing approach for assessing the adequacy of GEE-fitted models, which can be regarded as an extension of the goodness-of-fit test of le Cessie and van Houwelingen (Biometrics 1991; 47:1267-1282). The expectation and approximate variance of the proposed test statistic are derived. The asymptotic distribution of the proposed test statistic, in terms of a scaled chi-squared distribution, and the power performance of the proposed test are examined in simulation studies. The testing procedure is demonstrated on two real data sets.

  13. Soft tissue deformation estimation by spatio-temporal Kalman filter finite element method.

    PubMed

    Yarahmadian, Mehran; Zhong, Yongmin; Gu, Chengfan; Shin, Jaehyun

    2018-01-01

    Soft tissue modeling plays an important role in the development of surgical training simulators as well as in robot-assisted minimally invasive surgeries. It is known that while the traditional Finite Element Method (FEM) promises accurate modeling of soft tissue deformation, it suffers from a slow computational process. This paper presents a Kalman filter finite element method (KF-FEM) to model soft tissue deformation in real time without sacrificing the traditional FEM accuracy. The proposed method employs the FEM equilibrium equation and formulates it as a filtering process to estimate soft tissue behavior using real-time measurement data. The model is temporally discretized using the Newmark method and further formulated as the system state equation. Simulation results demonstrate that the computation time of KF-FEM is approximately 10 times shorter than that of the traditional FEM while remaining just as accurate. The normalized root-mean-square error of the proposed KF-FEM with reference to the traditional FEM is 0.0116. It is concluded that the proposed method significantly improves the computational performance of the traditional FEM without sacrificing its accuracy. The proposed method also filters the noise involved in the system state and measurement data.
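
    For orientation, a generic linear Kalman filter predict/update step (a sketch, not the paper's exact KF-FEM formulation, in which the state model comes from the Newmark-discretized FEM equilibrium equation and z from real-time measurements):

        import numpy as np

        def kalman_step(x, P, z, A, H, Q, R):
            """One cycle of a linear Kalman filter for
            x_{k+1} = A x_k + w, w ~ N(0, Q);  z_k = H x_k + v, v ~ N(0, R)."""
            x_pred = A @ x                        # state prediction
            P_pred = A @ P @ A.T + Q              # covariance prediction
            S = H @ P_pred @ H.T + R              # innovation covariance
            K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
            x_new = x_pred + K @ (z - H @ x_pred) # measurement update
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new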

  14. Simulation of multivariate stationary stochastic processes using dimension-reduction representation methods

    NASA Astrophysics Data System (ADS)

    Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo

    2018-03-01

    In view of the Fourier-Stieltjes integral formula for multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing a multivariate stationary stochastic process with a few elementary random variables, bypassing the challenge of the high-dimensional random variables inherent in conventional Monte Carlo methods. In order to accelerate the numerical simulation, the technique of the Fast Fourier Transform (FFT) is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is carried out using the proposed methods with 2 and 3 elementary random variables. The numerical simulation reveals the usefulness of the dimension-reduction representation methods.
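
    A minimal sketch of the classical spectral representation that the dimension-reduction schemes start from: a stationary sample is a sum of cosines with random phases whose amplitudes follow the target one-sided spectrum (the spectrum here is hypothetical):

        import numpy as np

        rng = np.random.default_rng(1)
        w = np.linspace(0.01, 4 * np.pi, 400)         # frequency grid
        dw = w[1] - w[0]
        S = 1.0 / (1.0 + w**4)                        # hypothetical one-sided PSD
        t = np.linspace(0.0, 60.0, 3000)              # time grid
        phi = rng.uniform(0.0, 2 * np.pi, w.size)     # independent random phases
        amp = np.sqrt(2.0 * S * dw)                   # SRM component amplitudes
        X = amp @ np.cos(np.outer(w, t) + phi[:, None])
        # Check: the sample variance of X approaches the spectral area sum(S * dw).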

  15. Run-Curve Design for Energy Saving Operation in a Modern DC-Electrification

    NASA Astrophysics Data System (ADS)

    Koseki, Takafumi; Noda, Takashi

    Mechanical brakes are often used by electric trains. These brakes have several problems, such as response speed, variation in the coefficient of friction, and maintenance cost. As a result, methods for actively using regenerative brakes are required. In this paper, we propose a practical pure electric braking method, which performs ordinary braking at high speed using only regenerative brakes, without any mechanical brakes. Benefits of our proposal include a DC-electrification system with regenerative substations that can return power to the commercial power system, and a train that can use the full regenerative braking force. We furthermore evaluate the effects of the proposed method on running time and on the energy saved by the regenerative substations.

  16. Laser-induced plasma characterization through self-absorption quantification

    NASA Astrophysics Data System (ADS)

    Hou, JiaJia; Zhang, Lei; Zhao, Yang; Yan, Xingyu; Ma, Weiguang; Dong, Lei; Yin, Wangbao; Xiao, Liantuan; Jia, Suotang

    2018-07-01

    A self-absorption quantification method is proposed to quantify the degree of self-absorption of spectral lines, from which plasma characteristics including the electron temperature, elemental concentration ratios, and absolute species number densities can be deduced directly. Since no spectral intensity is involved in the calculation, the analysis results are independent of self-absorption effects and no additional spectral efficiency calibration is required. In order to evaluate its practicality, the limitations of application and the precision of this method are also discussed. Experimental results for an aluminum-lithium alloy prove that the proposed method is qualified for semi-quantitative measurements and fast diagnostics of plasma characteristics.

  17. A fully coupled method for massively parallel simulation of hydraulically driven fractures in 3-dimensions: FULLY COUPLED PARALLEL SIMULATION OF HYDRAULIC FRACTURES IN 3-D

    DOE PAGES

    Settgast, Randolph R.; Fu, Pengcheng; Walsh, Stuart D. C.; ...

    2016-09-18

    This study describes a fully coupled finite element/finite volume approach for simulating field-scale hydraulically driven fractures in three dimensions, using massively parallel computing platforms. The proposed method is capable of capturing realistic representations of local heterogeneities, layering and natural fracture networks in a reservoir. A detailed description of the numerical implementation is provided, along with numerical studies comparing the model with both analytical solutions and experimental results. The results demonstrate the effectiveness of the proposed method for modeling large-scale problems involving hydraulically driven fractures in three dimensions.

  18. A fully coupled method for massively parallel simulation of hydraulically driven fractures in 3-dimensions: FULLY COUPLED PARALLEL SIMULATION OF HYDRAULIC FRACTURES IN 3-D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Settgast, Randolph R.; Fu, Pengcheng; Walsh, Stuart D. C.

    This study describes a fully coupled finite element/finite volume approach for simulating field-scale hydraulically driven fractures in three dimensions, using massively parallel computing platforms. The proposed method is capable of capturing realistic representations of local heterogeneities, layering and natural fracture networks in a reservoir. A detailed description of the numerical implementation is provided, along with numerical studies comparing the model with both analytical solutions and experimental results. The results demonstrate the effectiveness of the proposed method for modeling large-scale problems involving hydraulically driven fractures in three dimensions.

  19. Partial F-tests with multiply imputed data in the linear regression framework via coefficient of determination.

    PubMed

    Chaurasia, Ashok; Harel, Ofer

    2015-02-10

    Tests for regression coefficients such as global, local, and partial F-tests are common in applied research. Several papers address tests for regression coefficients in the framework of multiple imputation. However, for simultaneous hypothesis testing, the existing methods are computationally intensive because they involve calculations with vectors and (inversions of) matrices. In this paper, we propose a simple method based on a scalar entity, the coefficient of determination, to perform (global, local, and partial) F-tests with multiply imputed data. The proposed method is evaluated using simulated data and applied to suicide prevention data.
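
    For reference, the complete-data identity the proposal builds on: a partial F-statistic written purely in terms of coefficients of determination (the multiple-imputation combination rules are the paper's contribution and are not reproduced here):

        import numpy as np
        from scipy import stats

        def partial_f_test(r2_full, r2_reduced, n, p_full, q):
            """Partial F-test from R^2 values: q = number of coefficients tested,
            p_full = number of predictors in the full model, n = sample size."""
            df2 = n - p_full - 1
            F = ((r2_full - r2_reduced) / q) / ((1.0 - r2_full) / df2)
            return F, stats.f.sf(F, q, df2)      # statistic and p-value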

  20. A Spacecraft Electrical Characteristics Multi-Label Classification Method Based on Off-Line FCM Clustering and On-Line WPSVM

    PubMed Central

    Li, Ke; Liu, Yi; Wang, Quanxin; Wu, Yalei; Song, Shimin; Sun, Yi; Liu, Tengchong; Wang, Jun; Li, Yang; Du, Shaoyi

    2015-01-01

    This paper proposes a novel multi-label classification method for spacecraft electrical characteristics problems, which involve large amounts of unlabeled test data, high-dimensional features, long computing times and slow identification rates. Firstly, both the fuzzy c-means (FCM) offline clustering and principal component feature extraction algorithms are applied in the feature selection process. Secondly, the approximate weighted proximal support vector machine (WPSVM) online classification algorithm is used to reduce the feature dimension and further improve the recognition rate for spacecraft electrical characteristics. Finally, a data capture contribution method using thresholds is proposed to guarantee the validity and consistency of the data selection. The experimental results indicate that the proposed method can obtain better data features of the spacecraft electrical characteristics, improve the accuracy of identification and effectively shorten the computing time. PMID:26544549
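
    For orientation, a plain fuzzy c-means sketch for the offline clustering stage (a generic implementation, not the paper's tuned pipeline; the WPSVM stage is omitted):

        import numpy as np

        def fcm(X, k, m=2.0, iters=100, seed=0):
            """Plain fuzzy c-means: returns cluster centers and memberships U,
            where U[i, j] is the degree of sample i belonging to cluster j."""
            rng = np.random.default_rng(seed)
            U = rng.random((len(X), k))
            U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships
            for _ in range(iters):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
                U = 1.0 / (d ** (2.0 / (m - 1.0)))       # inverse-distance update
                U /= U.sum(axis=1, keepdims=True)
            return centers, U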

  1. Source separation of municipal solid waste: The effects of different separation methods and citizens' inclination-case study of Changsha, China.

    PubMed

    Chen, Haibin; Yang, Yan; Jiang, Wei; Song, Mengjie; Wang, Ying; Xiang, Tiantian

    2017-02-01

    A case study on the source separation of municipal solid waste (MSW) was performed in Changsha, the capital city of Hunan Province, China. The objective of this study is to analyze the effects of different separation methods and compare those effects with citizens' attitudes and inclination. An effect evaluation method based on accuracy rate and miscellany rate was proposed to study the performance of different separation methods. A large-scale questionnaire survey was conducted to determine citizens' attitudes and inclination toward source separation. The survey result shows that the vast majority of respondents hold consciously positive attitudes toward participation in source separation. Moreover, the respondents ignore the operability of separation methods and would rather choose the complex separation method involving four or more subcategories. Regarding the effects of the separation methods, the site experiment demonstrates that the relatively simple separation method involving two categories (food waste and other waste) achieves the best effect, with the highest accuracy rate (83.1%) and the lowest miscellany rate (16.9%) among the proposed experimental alternatives. The outcome reflects the inconsistency between people's environmental awareness and their behavior, which may be attributed to a lack of environmental knowledge. Environmental education is assumed to be a fundamental solution for improving the effect of source separation of MSW in Changsha. Important management tips on source separation, including the reformation of the current pay-as-you-throw (PAYT) system, are presented in this work. The proposed evaluation method can be expanded to other cities to determine the most effective separation method during planning stages or to evaluate the performance of running source separation systems.

  2. Sensitivity analysis and approximation methods for general eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Murthy, D. V.; Haftka, R. T.

    1986-01-01

    Optimization of dynamic systems involving complex non-Hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and the reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-Hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed, based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on the trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of an appropriate approximation technique as a function of the matrix size, the number of design variables, the number of eigenvalues of interest and the number of design points at which approximations are sought.

  3. Jacobian-free approximate solvers for hyperbolic systems: Application to relativistic magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Castro, Manuel J.; Gallardo, José M.; Marquina, Antonio

    2017-10-01

    We present recent advances in PVM (Polynomial Viscosity Matrix) methods based on internal approximations to the absolute value function, and compare them with Chebyshev-based PVM solvers. These solvers only require a bound on the maximum wave speed, so no spectral decomposition is needed. Another important feature of the proposed methods is that they can be written in Jacobian-free form, in which only evaluations of the physical flux are used. This is particularly interesting when considering systems for which the Jacobians involve complex expressions, e.g., the relativistic magnetohydrodynamics (RMHD) equations. The proposed Jacobian-free solvers have also been extended to the case of approximate DOT (Dumbser-Osher-Toro) methods, which can be regarded as simple and efficient approximations to the classical Osher-Solomon method, sharing most of its interesting features and being applicable to general hyperbolic systems. To test the properties of our schemes, a number of numerical experiments involving the RMHD equations are presented, both in one and two dimensions. The obtained results are in good agreement with those found in the literature and show that our schemes are robust and accurate, running stably under a satisfactory time step restriction. It is worth emphasizing that, although this work focuses on RMHD, the proposed schemes are suitable for application to general hyperbolic systems.

  4. Contemporary Topics in Science

    ERIC Educational Resources Information Center

    Aronstein, Laurence W.; Beam, Kathryn J.

    1974-01-01

    Discusses the offering of a Science for Poets course at the General Science Department of State University College at Buffalo, involving objectives, methods, and grouping techniques. Included are lists of problems proposed by teachers and students in the course. (CC)

  5. Correction of the significance level when attempting multiple transformations of an explanatory variable in generalized linear models

    PubMed Central

    2013-01-01

    Background In statistical modeling, finding the most favorable coding for an exploratory quantitative variable involves many tests. This process raises a multiple testing problem and requires correction of the significance level. Methods For each coding, a test on the nullity of the coefficient associated with the newly coded variable is computed. The selected coding is the one associated with the largest test statistic (or, equivalently, the smallest p-value). In the context of the Generalized Linear Model, Liquet and Commenges (Stat Probab Lett, 71:33–38, 2005) proposed an asymptotic correction of the significance level. This procedure, based on the score test, was developed for dichotomous and Box-Cox transformations. In this paper, we suggest the use of resampling methods to estimate the significance level for categorical transformations with more than two levels and, by definition, those that involve more than one parameter in the model. The categorical transformation is a more flexible way to explore the unknown shape of the effect between an explanatory and a dependent variable. Results The simulations we ran in this study showed good performance of the proposed methods. These methods were illustrated using data from a study of the relationship between cholesterol and dementia. Conclusion The algorithms were implemented using R, and the associated CPMCGLM R package is available on CRAN. PMID:23758852
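
    A hedged sketch of the resampling idea: estimate the corrected significance level by permuting the response and recording the maximum test statistic over all candidate codings. Here a simple correlation test stands in for the score test used in the paper; all names and data are hypothetical.

```python
import numpy as np
from scipy import stats

def maxtest_resampling_pvalue(y, x, codings, n_perm=2000, seed=0):
    """Permutation estimate of the corrected p-value when the best of
    several codings of x is selected (max-test correction).

    `codings` is a list of functions mapping the raw covariate to a
    coded column; the observed statistic is the largest absolute test
    statistic over codings, and its null distribution is obtained by
    permuting y.
    """
    rng = np.random.default_rng(seed)

    def max_stat(yy):
        return max(abs(stats.pearsonr(c(x), yy)[0]) for c in codings)

    obs = max_stat(y)
    null = np.array([max_stat(rng.permutation(y)) for _ in range(n_perm)])
    return (1 + np.sum(null >= obs)) / (1 + n_perm)

# Hypothetical example: dichotomizations of x at several thresholds.
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.3 * (x > 0.5) + rng.normal(size=200)
codings = [lambda v, t=t: (v > t).astype(float) for t in (-0.5, 0.0, 0.5)]
print(maxtest_resampling_pvalue(y, x, codings))
```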

  6. Equivalent Circuit Parameter Calculation of Interior Permanent Magnet Motor Involving Iron Loss Resistance Using Finite Element Method

    NASA Astrophysics Data System (ADS)

    Yamazaki, Katsumi

    In this paper, we propose a method to calculate the equivalent circuit parameters of interior permanent magnet motors, including the iron loss resistance, using the finite element method. First, a finite element analysis considering harmonics and magnetic saturation is carried out to obtain the time variations of the magnetic fields in the stator and the rotor core. Second, the iron losses of the stator and the rotor are calculated from the results of the finite element analysis, taking into account harmonic eddy-current losses and minor hysteresis losses of the core. As a result, we obtain the equivalent circuit parameters, i.e., the d-q axis inductances and the iron loss resistance, as functions of the operating condition of the motor. The proposed method is applied to an interior permanent magnet motor to calculate its characteristics based on the equivalent circuit obtained by the proposed method. The calculated results are compared with experimental results to verify the accuracy.

  7. Improved Line Tracing Methods for Removal of Bad Streaks Noise in CCD Line Array Image—A Case Study with GF-1 Images

    PubMed Central

    Wang, Bo; Bao, Jianwei; Wang, Shikui; Wang, Houjun; Sheng, Qinghong

    2017-01-01

    Remote sensing images provide tremendous quantities of large-scale information. Noise artifacts (stripes), however, make the images unsuitable for visualization and batch processing, so an effective restoration method is needed to make images ready for further analysis. In this paper, a new method is proposed to correct the stripes and bad abnormal pixels in charge-coupled device (CCD) linear array images. The method involves line tracing to limit the location of noise to a rectangular region, and corrects abnormal pixels with the Lagrange polynomial algorithm. The proposed detection and restoration method was applied to Gaofen-1 satellite (GF-1) images, and its performance was evaluated by the omission ratio and false detection ratio, which reached 0.6% and 0%, respectively. The method saved 55.9% of the time compared with the traditional method. PMID:28441754
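
    A minimal sketch of the correction step, assuming the bad columns have already been located by line tracing; scipy's Lagrange interpolation through neighboring good pixels stands in for the paper's exact procedure.

```python
import numpy as np
from scipy.interpolate import lagrange

def correct_bad_pixels(row, bad_cols, n_neighbors=2):
    """Replace flagged pixels in one image row by Lagrange-polynomial
    interpolation through nearby good pixels (simplified stand-in for
    the restoration step described above)."""
    row = row.astype(float).copy()
    good = [c for c in range(len(row)) if c not in set(bad_cols)]
    for c in bad_cols:
        # Take the closest good pixels around c as interpolation nodes.
        nodes = sorted(good, key=lambda g: abs(g - c))[: 2 * n_neighbors]
        poly = lagrange(nodes, row[nodes])
        row[c] = poly(c)
    return row

row = np.array([10, 11, 255, 13, 14, 0, 16], dtype=float)  # 255 and 0 are bad
print(correct_bad_pixels(row, bad_cols=[2, 5]))
```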

  8. A Test Method for Monitoring Modulus Changes during Durability Tests on Building Joint Sealants

    Treesearch

    Christopher C. White; Donald L. Hunston; Kar Tean Tan; Gregory T. Schueneman

    2012-01-01

    The durability of building joint sealants is generally assessed using a descriptive methodology involving visual inspection of exposed specimens for defects. It is widely known that this methodology has inherent limitations, including that the results are qualitative. A new test method is proposed that provides more fundamental and quantitative information about...

  9. The Long Term Effectiveness of Intensive Stuttering Therapy: A Mixed Methods Study

    ERIC Educational Resources Information Center

    Irani, Farzan; Gabel, Rodney; Daniels, Derek; Hughes, Stephanie

    2012-01-01

    Purpose: The purpose of this study was to gain a deeper understanding of client perceptions of an intensive stuttering therapy program that utilizes a multi-faceted approach to therapy. The study also proposed to gain a deeper understanding about the process involved in long-term maintenance of meaningful changes made in therapy. Methods: The…

  10. Telecommunications Policy Research Conference. Alternatives to Rate of Return Regulation Section. Papers.

    ERIC Educational Resources Information Center

    Telecommunications Policy Research Conference, Inc., Washington, DC.

    The first of two papers presented in this section, "Price-Caps: Theory and Implementation" (Peter B. Linhart and Roy Radner) describes a proposed method of regulation involving price caps on core services and no price regulation of other services. This method is designed to replace rate-of-return regulation during a transition period to…

  11. Pentadiagonal alternating-direction-implicit finite-difference time-domain method for two-dimensional Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Tay, Wei Choon; Tan, Eng Leong

    2014-07-01

    In this paper, we have proposed a pentadiagonal alternating-direction-implicit (Penta-ADI) finite-difference time-domain (FDTD) method for the two-dimensional Schrödinger equation. Through the separation of the complex wave function into real and imaginary parts, a pentadiagonal system of equations for the ADI method is obtained, which results in our Penta-ADI method. The Penta-ADI method is further simplified into the pentadiagonal fundamental ADI (Penta-FADI) method, which has matrix-operator-free right-hand sides (RHS), leading to the simplest and most concise update equations. As the Penta-FADI method involves five-point stencils on the left-hand sides (LHS) of the pentadiagonal update equations, the special treatments required for the implementation of Dirichlet boundary conditions are discussed. Using the Penta-FADI method, a significantly higher efficiency gain can be achieved over the conventional Tri-ADI method, which involves a tridiagonal system of equations.
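
    The payoff of an ADI splitting is that each implicit step reduces to a banded solve. The sketch below solves one pentadiagonal system with scipy's O(n) banded solver, with hypothetical stencil coefficients; the real values depend on the discretized Schrödinger operator and the time step.

```python
import numpy as np
from scipy.linalg import solve_banded

# Illustrative solve of one pentadiagonal system A u = b of the kind that
# arises at each implicit half-step of a Penta-ADI update (bandwidth 2 on
# each side of the diagonal).
n = 8
ab = np.zeros((5, n))       # banded storage: rows = diagonals +2 .. -2
ab[0, 2:] = -0.05           # second superdiagonal
ab[1, 1:] = -0.25           # first superdiagonal
ab[2, :] = 1.6              # main diagonal
ab[3, :-1] = -0.25          # first subdiagonal
ab[4, :-2] = -0.05          # second subdiagonal
b = np.ones(n)

u = solve_banded((2, 2), ab, b)  # O(n) pentadiagonal solve
print(u)
```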

  12. Evaluation of long carbon fiber reinforced concrete to mitigate earthquake damage of infrastructure components.

    DOT National Transportation Integrated Search

    2013-06-01

    The proposed study involves investigating long carbon fiber reinforced concrete as a method of mitigating earthquake damage to : bridges and other infrastructure components. Long carbon fiber reinforced concrete has demonstrated significant resistanc...

  13. A Big Empty Space

    ERIC Educational Resources Information Center

    Blake, Anthony; Francis, David

    1973-01-01

    Approaches to developing management ability include systematic techniques, mental enlargement, self-analysis, and job-related counseling. A method is proposed to integrate them into a responsive program involving depth understanding, vision of the future, specialization, commitment to change, and self-monitoring control. (MS)

  14. Feature weight estimation for gene selection: a local hyperlinear learning approach

    PubMed Central

    2014-01-01

    Background Modeling high-dimensional data involving thousands of variables is particularly important for gene expression profiling experiments; nevertheless, it remains a challenging task. One of the challenges is to implement an effective method for selecting a small set of relevant genes buried in high-dimensional irrelevant noise. RELIEF is a popular and widely used approach for feature selection owing to its low computational cost and high accuracy. However, RELIEF-based methods suffer from instability, especially in the presence of noisy and/or high-dimensional outliers. Results We propose an innovative feature weighting algorithm, called LHR, to select informative genes from highly noisy data. LHR is based on RELIEF for feature weighting using classical margin maximization. The key idea of LHR is to estimate the feature weights through local approximation rather than the global measurement typically used in existing methods. The weights obtained by our method are very robust to degradation from noisy features, even in vast dimensions. To demonstrate the performance of our method, extensive experiments involving classification tests were carried out on both synthetic and real microarray benchmark datasets by combining the proposed technique with standard classifiers, including the support vector machine (SVM), k-nearest neighbor (KNN), hyperplane k-nearest neighbor (HKNN), linear discriminant analysis (LDA) and naive Bayes (NB). Conclusion Experiments on both synthetic and real-world datasets demonstrate the superior performance of the proposed feature selection method combined with supervised learning in three aspects: 1) high classification accuracy, 2) excellent robustness to noise, and 3) good stability across various classification algorithms. PMID:24625071
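
    For orientation, here is a minimal sketch of the classical RELIEF weighting that LHR builds on. Nearest hits and misses are computed globally here, which is precisely the step LHR replaces with local (hyperplane) approximation; data and parameters are toy stand-ins.

```python
import numpy as np

def relief_weights(X, y, n_iter=100, seed=0):
    """Classical RELIEF feature weighting (binary classes).

    For a randomly drawn sample, each feature's weight is increased by
    its distance to the 'nearest miss' (closest sample of the other
    class) and decreased by its distance to the 'nearest hit' (closest
    sample of the same class).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dists = np.abs(X - X[i]).sum(axis=1)
        dists[i] = np.inf                    # exclude the sample itself
        same = y == y[i]
        hit = np.argmin(np.where(same, dists, np.inf))
        miss = np.argmin(np.where(~same, dists, np.inf))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iter

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
print(relief_weights(X, y).round(2))  # informative features 0, 1 score higher
```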

  15. A simple method for determining stress intensity factors for a crack in bi-material interface

    NASA Astrophysics Data System (ADS)

    Morioka, Yuta

    Because of the violently oscillating nature of the stress and displacement fields near the crack tip, it is difficult to obtain stress intensity factors for a crack between two dissimilar media. For a crack in a homogeneous medium, it is common practice to find stress intensity factors through strain energy release rates. However, individual strain energy release rates do not exist for a bi-material interface crack, so alternative methods are needed to evaluate stress intensity factors. Several methods have been proposed in the past, but they involve mathematical complexity and sometimes require additional finite element analysis. The purpose of this research is to develop a simple method for finding stress intensity factors for bi-material interface cracks. A finite element based projection method is proposed. It is shown that the projection method yields very accurate stress intensity factors for a crack in isotropic and anisotropic bi-material interfaces. The projection method is also compared to the displacement ratio method and the energy method proposed by other authors. Through this comparison it is found that the projection method is much simpler to apply, with accuracy comparable to that of the displacement ratio method.

  16. Oxygen radicals as key mediators in neurological disease: fact or fiction?

    PubMed

    Halliwell, B

    1992-01-01

    A free radical is any species capable of independent existence that contains one or more unpaired electrons. Free radicals and other reactive oxygen species are frequently proposed to be involved in the pathology of several neurological disorders. Criteria for establishing such involvement are presented. Development of new methods for measuring oxidative damage should enable elucidation of the precise role of reactive oxygen species in neurological disorders.

  17. A fast pulse design for parallel excitation with gridding conjugate gradient.

    PubMed

    Feng, Shuo; Ji, Jim

    2013-01-01

    Parallel excitation (pTx) is recognized as a crucial technique in high-field MRI for addressing the transmit field inhomogeneity problem. However, designing pTx pulses can be undesirably time consuming. In this work, we propose a pulse design with gridding conjugate gradient (CG) based on the small-tip-angle approximation. The two major time-consuming matrix-vector multiplications are replaced by two operators that involve only FFT and gridding. Simulation results show that the proposed method is 3 times faster than the conventional method and reduces the memory cost by a factor of 1000.
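
    A minimal sketch of the general pattern: run conjugate gradient against an implicitly defined operator whose matvec is an FFT-based convolution, so no system matrix is ever formed. The kernel and regularization are hypothetical; the paper's operators also include gridding.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 64
kernel = np.exp(-np.linspace(-3, 3, n) ** 2)
K = np.fft.fft(kernel)

def matvec(v):
    # A = C^H C + 0.1 I, with C a circular convolution diagonalized by
    # the FFT; the explicit matrix is never built.
    return np.real(np.fft.ifft((np.abs(K) ** 2 + 0.1) * np.fft.fft(v)))

A = LinearOperator((n, n), matvec=matvec, dtype=float)
b = matvec(np.sin(np.linspace(0, 2 * np.pi, n)))  # consistent right-hand side
x, info = cg(A, b)
print(info, np.linalg.norm(matvec(x) - b))        # info == 0 means converged
```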

  18. Operations concepts for Mars missions with multiple mobile spacecraft

    NASA Technical Reports Server (NTRS)

    Dias, William C.

    1993-01-01

    Missions are being proposed that involve landing anywhere from one to 24 small mobile spacecraft on Mars. Mission proposals include sample returns, in situ geochemistry and geology, and instrument deployment functions. This paper discusses changes needed in traditional space operations methods to support rover operations. Relevant differences include more frequent commanding, higher risk acceptance, streamlined procedures, and reliance on additional spacecraft autonomy, advanced fault protection, and prenegotiated decisions. New methods are especially important for missions with several Mars rovers operating concurrently against time limits. This paper also discusses likely mission design limits imposed by operations constraints.

  19. X-ray based extensometry

    NASA Technical Reports Server (NTRS)

    Jordan, E. H.; Pease, D. M.

    1988-01-01

    A totally new method of extensometry using an X-ray beam is proposed. The intent of the method is to provide a non-contacting technique that is immune to the problems associated with density variations in gaseous environments that plague optical methods; X-rays are virtually unrefractable, even by solids. The new method utilizes X-ray induced X-ray fluorescence or X-ray induced optical fluorescence of targets that have melting temperatures above 3000 F. Many variations of the basic approaches are possible. In the year just completed, preliminary experiments strongly suggested that the method is feasible. The X-ray induced optical fluorescence method appears to be limited to temperatures below roughly 1600 F because of the overwhelming thermal optical radiation. The X-ray induced X-ray fluorescence scheme appears feasible up to very high temperatures. In such a system there will be a tradeoff between frequency response, cost, and accuracy, which at present can only be estimated. It appears that for thermomechanical tests with cycle times on the order of minutes, a very reasonable system may be feasible. The intended applications involve very high temperatures in both materials testing and the monitoring of component testing. Gas turbine engines, rocket engines, and hypersonic vehicles (NASP) all involve measurement needs that could partially be met by the proposed technology.

  20. A new method for mapping multidimensional data to lower dimensions

    NASA Technical Reports Server (NTRS)

    Gowda, K. C.

    1983-01-01

    A multispectral mapping method is proposed based on the new concept of BEND (Bidimensional Effective Normalised Difference). The method, which involves taking one sample point at a time and finding the interrelationships between its features, is found to be very economical in terms of storage and processing time. It has good dimensionality reduction and clustering properties and is highly suitable for computer analysis of large amounts of data. The transformed values obtained by this procedure are suitable either for a planar 2-space mapping of geological sample points or for making grayscale and color images of geo-terrains. A few examples are given to demonstrate the efficacy of the proposed procedure.

  1. Initialization Method for Grammar-Guided Genetic Programming

    NASA Astrophysics Data System (ADS)

    García-Arnau, M.; Manrique, D.; Ríos, J.; Rodríguez-Patón, A.

    This paper proposes a new tree-generation algorithm for grammar-guided genetic programming that includes a parameter to control the maximum size of the trees to be generated. An important feature of this algorithm is that the initial populations generated are adequately distributed in terms of tree size and distribution within the search space. Consequently, genetic programming systems starting from the initial populations generated by the proposed method have a higher convergence speed. Two different problems were chosen for the experiments: a laboratory test involving searching for arithmetical equalities and the real-world task of breast cancer prognosis. In both problems, comparisons were made to five other important initialization methods.

  2. Proposed Lymph Node Staging System Using the International Consensus Guidelines for Lymph Node Levels Is Predictive for Nasopharyngeal Carcinoma Patients From Endemic Areas Treated With Intensity Modulated Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Wen-Fei; Sun, Ying; Mao, Yan-Ping

    2013-06-01

    Purpose: To propose a lymph node (N) staging system for nasopharyngeal carcinoma (NPC) based on the International Consensus Guidelines for lymph node (LN) levels and MRI-determined nodal variables. Methods and Materials: The MRI scans and medical records of 749 NPC patients receiving intensity modulated radiation therapy with or without chemotherapy were retrospectively reviewed. The prognostic significance of nodal level, laterality, maximal axial diameter, extracapsular spread, necrosis, and Union for International Cancer Control/American Joint Committee on Cancer (UICC/AJCC) size criteria was analyzed. Results: Nodal level and laterality were the only independent prognostic factors for distant failure and disease failure in multivariate analysis. Compared with unilateral levels Ib, II, III, and/or Va involvement (hazard ratio [HR] 1), retropharyngeal lymph node involvement alone had a similar prognostic value (HR 0.71; 95% confidence interval [CI] 0.43-1.17; P=.17), whereas bilateral levels Ib, II, III, and/or Va involvement (HR 1.65; 95% CI 1.06-2.58; P=.03) and levels IV, Vb, and/or supraclavicular fossa involvement (HR 3.47; 95% CI 1.92-6.29; P<.01) both significantly increased the HR for distant failure. Thus we propose that the N category criteria could be revised as follows: N0, no regional LN metastasis; N1, retropharyngeal lymph node involvement, and/or unilateral levels Ib, II, III, and/or Va involvement; N2, bilateral levels Ib, II, III, and/or Va involvement; N3, levels IV, Vb, and/or supraclavicular fossa involvement. Compared with the 7th edition of the UICC/AJCC criteria, the proposed N staging system provides a more satisfactory distinction between the HRs for regional failure, distant failure, and disease failure in each N category. Conclusions: The proposed N staging system defined by the International Consensus Guidelines and laterality is predictive and practical. However, because the maximal nodal diameter was not measured on MRI slices, the prognostic significance of LN size needs further evaluation.

  3. Transient modeling/analysis of hyperbolic heat conduction problems employing mixed implicit-explicit alpha method

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; D'Costa, Joseph F.

    1991-01-01

    This paper describes the evaluation of mixed implicit-explicit finite element formulations for hyperbolic heat conduction problems involving non-Fourier effects. In particular, mixed implicit-explicit formulations employing the alpha method proposed by Hughes et al. (1987, 1990) are described for the numerical simulation of hyperbolic heat conduction models, which involves time-dependent relaxation effects. Existing analytical approaches for modeling/analysis of such models involve complex mathematical formulations for obtaining closed-form solutions, while in certain numerical formulations the difficulties include severe oscillatory solution behavior (which often disguises the true response) in the vicinity of the thermal disturbances, which propagate with finite velocities. In view of these factors, the alpha method is evaluated to assess the control of the amount of numerical dissipation for predicting the transient propagating thermal disturbances. Numerical test models are presented, and pertinent conclusions are drawn for the mixed-time integration simulation of hyperbolic heat conduction models involving non-Fourier effects.

  4. Locating Structural Centers: A Density-Based Clustering Method for Community Detection

    PubMed Central

    Liu, Gongshen; Li, Jianhua; Nees, Jan P.

    2017-01-01

    Uncovering underlying community structures in complex networks has received considerable attention because of its importance in understanding structural attributes and group characteristics of networks. The algorithmic identification of such structures is a significant challenge. Local expanding methods have proven to be efficient and effective in community detection, but most are sensitive to initial seeds and built-in parameters. In this paper, we present a local expansion method based on density-based clustering, which aims to uncover intrinsic network communities by locating the structural centers of communities using a proposed structural centrality. The structural centrality takes into account the local density of nodes and the relative distance between nodes. The proposed algorithm expands a community from the structural center to the border with a single local search procedure. The local expanding procedure follows a heuristic strategy that allows it to find complete community structures. Moreover, it can identify different node roles (cores and outliers) in communities by defining a border region. The experiments involve both real-world and artificial networks and provide a comparative evaluation of the proposed method. The results show that the proposed method performs more efficiently than current state-of-the-art methods while achieving comparable clustering performance. PMID:28046030
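
    A hedged sketch of locating structural centers in the density-peaks style the abstract describes: score each node by local density times the distance to the nearest denser node. The distance matrix and cutoff are hypothetical stand-ins for the paper's network-based definitions.

```python
import numpy as np

def structural_centers(D, k=2, cutoff=1.0):
    """Pick k community centers: nodes combining high local density
    with a large distance to any denser node. `D` is a pairwise
    distance matrix (e.g., derived from shortest paths or similarity).
    """
    rho = (D < cutoff).sum(axis=1) - 1              # local density
    delta = np.empty(len(D))
    for i in range(len(D)):
        denser = np.where(rho > rho[i])[0]
        delta[i] = D[i, denser].min() if len(denser) else D[i].max()
    gamma = rho * delta                             # centrality score
    return np.argsort(gamma)[::-1][:k]

# Two hypothetical clusters of points; D is the Euclidean distance matrix.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(structural_centers(D, k=2))  # ideally one index from each cluster
```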

  5. Involving seldom-heard groups in a PPI process to inform the design of a proposed trial on the use of probiotics to prevent preterm birth: a case study.

    PubMed

    Rayment, Juliet; Lanlehin, Rosemary; McCourt, Christine; Husain, Shahid M

    2017-01-01

    When designing clinical trials it is important to involve members of the public, who can provide a view on what may encourage or prevent people participating and on what matters to them. This is known as Public and Patient Involvement (PPI). People from minority ethnic groups are often less likely to take part in clinical trials, but it is important to ensure they are able to participate fully so that health research and its findings are relevant to a wide population. We are preparing to conduct a randomised controlled trial (RCT) to test whether taking probiotic capsules can play a role in preventing preterm birth. Women from some minority ethnic groups, for example women from West Africa, and those who are from low-income groups are more likely to suffer preterm births. Preterm birth can lead to extra costs to health services and psychosocial costs for families. In this article we describe how we engaged women in discussion about the design of the planned trial, and how we aim to use our findings to ensure the trial is workable and beneficial to women, as well as to further engage service users in the future development of the trial. Four socially and ethnically diverse groups of women in East London took part in discussions about the trial and contributed their ideas and concerns. These discussions have helped to inform and improve the design of a small practice or 'pilot' trial to test the recruitment in a 'real life' setting, as well as encourage further PPI involvement for the future full-scale trial. Background Patient and public involvement (PPI) is an important tool in approaching research challenges. However, involvement of socially and ethnically diverse populations remains limited and practitioners need effective methods of involving a broad section of the population in planning and designing research. Methods In preparation for the development of a pilot randomised controlled trial (RCT) on the use of probiotics to prevent preterm birth, we conducted a public consultation exercise in a socially disadvantaged and ethnically diverse community. The consultation aimed to meet and engage local service users in considering the acceptability of the proposed protocol, and to encourage their participation in future and ongoing patient and public involvement activities. Four discussion groups were held in the community with mothers of young children within the proposed trial region, using an inclusive approach that incorporated a modified version of the Nominal Group Technique (NGT). Bringing the consultation to the community supported the involvement of often seldom-heard participants, such as those from minority ethnic groups. Results The women involved expressed a number of concerns about the proposed protocol, including adherence to the probiotic supplement regimen and randomisation. The proposal for the RCT in itself was perceived as confirmation that probiotic supplements had potentially beneficial effects, but also that they had potentially harmful side-effects. The complexity of the women's responses provided greater insights into the challenges of even quite simple trial designs and enabled the research team to take these concerns into account while planning the pilot trial. Conclusions The use of the NGT method allowed for a consultation of a population traditionally less likely to participate in medical research. A carefully facilitated PPI exercise can allow members to express unanticipated concerns that may not have been elicited by a survey method. 
Findings from such exercises can be utilised to improve clinical trial design, provide insight into the feasibility of trials, and enable engagement of often excluded population groups.

  6. An algorithm to track laboratory zebrafish shoals.

    PubMed

    Feijó, Gregory de Oliveira; Sangalli, Vicenzo Abichequer; da Silva, Isaac Newton Lima; Pinho, Márcio Sarroglia

    2018-05-01

    In this paper, a semi-automatic multi-object tracking method to track a group of unmarked zebrafish is proposed. This method can handle partial occlusion cases while maintaining the correct identity of each individual. For every object, we extract a set of geometric features to be used in the two main stages of the algorithm. The first stage selects the best candidate, based both on the blobs identified in the image and on the estimate generated by a Kalman filter instance. In the second stage, if the same candidate blob is selected by two or more instances, a blob-partitioning algorithm splits the blob and reestablishes the instances' identities. If the algorithm cannot determine the identity of a blob, manual intervention is required. This procedure was compared against a manually labeled ground truth on four video sequences with different numbers of fish and spatial resolutions. The performance of the proposed method is then compared against two well-known zebrafish tracking methods from the literature: one that treats occlusion scenarios and one that only tracks fish that are not in occlusion. On the data set used, the proposed method outperforms the first in correctly separating fish in occlusion, handling at least 8.15% more of these cases. The proposed method also outperforms the second in some of the tested videos, especially those with lower image quality, because the second method requires high-spatial-resolution images, which the proposed method does not. Overall, the proposed method was able to separate fish involved in occlusion and correctly assign their identities in up to 87.85% of cases, without accounting for user intervention. Copyright © 2018 Elsevier Ltd. All rights reserved.
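
    A minimal constant-velocity Kalman filter of the kind the first stage uses to produce a per-object position estimate between frames; the noise parameters and measurements below are illustrative, not the paper's tuning.

```python
import numpy as np

# State = [x, y, vx, vy]; we observe position only.
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)   # constant-velocity model
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)                                # process noise
R = 0.5 * np.eye(2)                                 # measurement noise

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    x = x + K @ (z - H @ x)
    return x, (np.eye(4) - K @ H) @ P

x, P = np.array([0, 0, 1, 0.5]), np.eye(4)
x, P = predict(x, P)               # estimate used to pick the best blob
x, P = update(x, P, z=np.array([1.1, 0.4]))
print(x.round(3))
```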

  7. Recognition and defect detection of dot-matrix text via variation-model based learning

    NASA Astrophysics Data System (ADS)

    Ohyama, Wataru; Suzuki, Koushi; Wakabayashi, Tetsushi

    2017-03-01

    An algorithm for recognition and defect detection of dot-matrix text printed on products is proposed. Extraction and recognition of dot-matrix text pose several difficulties not present in standard camera-based OCR: the appearance of dot-matrix characters is corrupted and broken by illumination, complex background textures, and other standard characters printed on product packages. We propose a dot-matrix text extraction and recognition method that does not require any user interaction. The method employs the detected locations of corner points and classification scores. Evaluation on 250 images shows that the recall and precision of extraction are 78.60% and 76.03%, respectively, and the recognition accuracy for correctly extracted characters is 94.43%. Detecting printing defects in dot-matrix text is also important in production settings to avoid releasing defective products. We therefore also propose a detection method for printing defects in dot-matrix characters. The method constructs a feature vector whose elements are the classification scores of each character class and employs a support vector machine to classify four types of printing defect. The detection accuracy of the proposed method is 96.68%.

  8. Using SVD on Clusters to Improve Precision of Interdocument Similarity Measure.

    PubMed

    Zhang, Wen; Xiao, Fan; Li, Bin; Zhang, Siguang

    2016-01-01

    Recently, LSI (Latent Semantic Indexing) based on SVD (Singular Value Decomposition) has been proposed to overcome the problems of polysemy and homonymy in traditional lexical matching. However, it is usually criticized as having low discriminative power for representing documents, although it has been validated as having good representative quality. In this paper, SVD on clusters is proposed to improve the discriminative power of LSI. The contribution of this paper is threefold. Firstly, we survey existing linear algebra methods for LSI, including both SVD-based and non-SVD-based methods. Secondly, we propose SVD on clusters for LSI and theoretically explain that dimension expansion of document vectors and dimension projection using SVD are the two manipulations involved in SVD on clusters. Moreover, we develop updating processes to fold new documents and terms into a decomposed matrix by SVD on clusters. Thirdly, two corpora, a Chinese corpus and an English corpus, are used to evaluate the performance of the proposed methods. Experiments demonstrate that, to some extent, SVD on clusters can improve the precision of the interdocument similarity measure in comparison with other SVD-based LSI methods.
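
    For reference, a minimal sketch of plain LSI via truncated SVD, the building block that SVD on clusters applies cluster by cluster; the term-document matrix below is a toy stand-in.

```python
import numpy as np

def lsi_similarity(td, k=2):
    """Project a term-document matrix onto its k leading singular
    directions and return cosine similarities between documents in the
    reduced space."""
    U, s, Vt = np.linalg.svd(td, full_matrices=False)
    docs = (np.diag(s[:k]) @ Vt[:k]).T           # documents in LSI space
    norms = np.linalg.norm(docs, axis=1, keepdims=True)
    docs = docs / np.clip(norms, 1e-12, None)
    return docs @ docs.T

# Hypothetical 5-term x 4-document count matrix.
td = np.array([[2, 0, 1, 0],
               [1, 0, 2, 0],
               [0, 3, 0, 1],
               [0, 1, 0, 2],
               [1, 0, 1, 0]], float)
print(lsi_similarity(td, k=2).round(2))
```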

  9. Using SVD on Clusters to Improve Precision of Interdocument Similarity Measure

    PubMed Central

    Xiao, Fan; Li, Bin; Zhang, Siguang

    2016-01-01

    Recently, LSI (Latent Semantic Indexing) based on SVD (Singular Value Decomposition) has been proposed to overcome the problems of polysemy and homonymy in traditional lexical matching. However, it is usually criticized as having low discriminative power for representing documents, although it has been validated as having good representative quality. In this paper, SVD on clusters is proposed to improve the discriminative power of LSI. The contribution of this paper is threefold. Firstly, we survey existing linear algebra methods for LSI, including both SVD-based and non-SVD-based methods. Secondly, we propose SVD on clusters for LSI and theoretically explain that dimension expansion of document vectors and dimension projection using SVD are the two manipulations involved in SVD on clusters. Moreover, we develop updating processes to fold new documents and terms into a decomposed matrix by SVD on clusters. Thirdly, two corpora, a Chinese corpus and an English corpus, are used to evaluate the performance of the proposed methods. Experiments demonstrate that, to some extent, SVD on clusters can improve the precision of the interdocument similarity measure in comparison with other SVD-based LSI methods. PMID:27579031

  10. Applications of an automated stem measurer for precision forestry

    Treesearch

    N. Clark

    2001-01-01

    Accurate stem measurements are required for the determination of many silvicultural prescriptions, i.e., what are we going to do with a stand of trees. This would only be amplified in a precision forestry context. Many methods have been proposed for optimal ways to evaluate stems for a variety of characteristics. These methods usually involve the acquisition of total...

  11. Dynamic SPECT reconstruction from few projections: a sparsity enforced matrix factorization approach

    NASA Astrophysics Data System (ADS)

    Ding, Qiaoqiao; Zan, Yunlong; Huang, Qiu; Zhang, Xiaoqun

    2015-02-01

    The reconstruction of dynamic images from few projection data is a challenging problem, especially when noise is present and when the dynamic images vary rapidly. In this paper, we propose a variational model, sparsity enforced matrix factorization (SEMF), based on the low-rank matrix factorization of unknown images and enforced sparsity constraints for representing both coefficients and bases. The proposed model is solved via an alternating iterative scheme in which each subproblem is convex and is handled with the efficient alternating direction method of multipliers (ADMM). The convergence of the overall alternating scheme for the nonconvex problem relies upon the Kurdyka-Łojasiewicz property, recently studied by Attouch et al (2010 Math. Oper. Res. 35 438) and Attouch et al (2013 Math. Program. 137 91). Finally, our proof-of-concept simulation on 2D dynamic images shows the advantage of the proposed method compared to conventional methods.

  12. Modeling Complex Dynamic Interactions of Nonlinear, Aeroelastic, Multistage, and Localization Phenomena in Turbine Engines

    DTIC Science & Technology

    2011-02-25

    fast method of predicting the number of iterations needed for converged results. A new hybrid technique is proposed to predict the convergence history...interchanging between the modes, whereas a smaller veering (or crossing) region shows fast mode switching. Then, the nonlinear vibration response of the...problems of interest involve dynamic (fast) crack propagation, then the nodes selected by the proposed approach at some time instant might not

  13. Analysis of the dynamic behavior of structures using the high-rate GNSS-PPP method combined with a wavelet-neural model: Numerical simulation and experimental tests

    NASA Astrophysics Data System (ADS)

    Kaloop, Mosbeh R.; Yigit, Cemal O.; Hu, Jong W.

    2018-03-01

    Recently, the high-rate global navigation satellite system precise point positioning (GNSS-PPP) technique has been used to detect the dynamic behavior of structures. This study aimed to increase the accuracy of extracting the oscillation properties of structural movements based on the high-rate (10 Hz) GNSS-PPP monitoring technique. A model based on the combination of wavelet packet transform (WPT) de-noising and neural network (NN) prediction was proposed to improve the detection of the dynamic behavior of structures with the GNSS-PPP method. A complicated numerical simulation involving highly noisy data and 13 experimental cases with different loads were utilized to confirm the efficiency of the proposed model design and the monitoring technique in detecting the dynamic behavior of structures. The results revealed that, when combined with the proposed model, the GNSS-PPP method can accurately detect the dynamic behavior of engineering structures as an alternative to the relative GNSS method.

  14. Mathematical model and metaheuristics for simultaneous balancing and sequencing of a robotic mixed-model assembly line

    NASA Astrophysics Data System (ADS)

    Li, Zixiang; Janardhanan, Mukund Nilakantan; Tang, Qiuhua; Nielsen, Peter

    2018-05-01

    This article presents the first method to simultaneously balance and sequence robotic mixed-model assembly lines (RMALB/S), which involves three sub-problems: task assignment, model sequencing and robot allocation. A new mixed-integer programming model is developed to minimize makespan and, using the CPLEX solver, small-size problems are solved for optimality. Two metaheuristics, a restarted simulated annealing algorithm and a co-evolutionary algorithm, are developed and improved to address this NP-hard problem. The restarted simulated annealing method replaces the current temperature with a new temperature to restart the search process. The co-evolutionary method uses a restart mechanism to generate a new population by modifying several vectors simultaneously. The proposed algorithms are tested on a set of benchmark problems and compared with five other high-performing metaheuristics. They outperform their original versions and the benchmark methods, and are able to solve the balancing and sequencing problem of a robotic mixed-model assembly line effectively and efficiently.
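
    A generic sketch of the restart idea: reset the temperature after a stretch of non-improving moves so the search can escape local optima. The schedule and parameters below are illustrative, not the paper's exact algorithm.

```python
import numpy as np

def restarted_sa(cost, neighbor, x0, t0=1.0, cooling=0.95,
                 restart_after=50, n_iter=2000, seed=0):
    """Simulated annealing with a restart mechanism: when no improvement
    is seen for `restart_after` moves, the current temperature is
    replaced with t0 and the search continues."""
    rng = np.random.default_rng(seed)
    x, best, t, stall = x0, x0, t0, 0
    for _ in range(n_iter):
        y = neighbor(x, rng)
        d = cost(y) - cost(x)
        if d < 0 or rng.random() < np.exp(-d / max(t, 1e-12)):
            x = y                              # accept the move
        if cost(x) < cost(best):
            best, stall = x, 0
        else:
            stall += 1
        t *= cooling                           # geometric cooling
        if stall >= restart_after:
            t, stall = t0, 0                   # restart the temperature
    return best

# Toy usage: minimize a 1-D multimodal function.
cost = lambda x: np.sin(5 * x) + 0.1 * x**2
neighbor = lambda x, rng: x + rng.normal(scale=0.3)
print(round(restarted_sa(cost, neighbor, x0=3.0), 3))
```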

  15. Spectral iterative method and convergence analysis for solving nonlinear fractional differential equation

    NASA Astrophysics Data System (ADS)

    Yarmohammadi, M.; Javadi, S.; Babolian, E.

    2018-04-01

    In this study a new spectral iterative method (SIM) based on fractional interpolation is presented for solving nonlinear fractional differential equations (FDEs) involving the Caputo derivative. The method is equipped with a pre-algorithm to find the singularity index of the solution of the problem. This pre-algorithm yields a real parameter, used as the index of the fractional interpolation basis, for which the SIM achieves its highest order of convergence. In comparison with some recent results on error estimates for fractional approximations, a more accurate convergence rate is attained. We also derive the order of convergence of the fractional interpolation error under the L2-norm. Finally, a general error analysis of the SIM is given. The numerical results clearly demonstrate the capability of the proposed method.

  16. Genetic shifting: a novel approach for controlling vector-borne diseases.

    PubMed

    Powell, Jeffrey R; Tabachnick, Walter J

    2014-06-01

    Rendering populations of vectors of diseases incapable of transmitting pathogens through genetic methods has long been a goal of vector geneticists. We outline a method to achieve this goal that does not involve the introduction of any new genetic variants to the target population. Rather we propose that shifting the frequencies of naturally occurring alleles that confer refractoriness to transmission can reduce transmission below a sustainable level. The program employs methods successfully used in plant and animal breeding. Because no artificially constructed genetically modified organisms (GMOs) are introduced into the environment, the method is minimally controversial. We use Aedes aegypti and dengue virus (DENV) for illustrative purposes but point out that the proposed program is generally applicable to vector-borne disease control. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Novel harmonic regularization approach for variable selection in Cox's proportional hazards model.

    PubMed

    Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan

    2014-01-01

    Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods.

  18. A hybrid clustering approach for multivariate time series - A case study applied to failure analysis in a gas turbine.

    PubMed

    Fontes, Cristiano Hora; Budman, Hector

    2017-11-01

    A clustering problem involving multivariate time series (MTS) requires the selection of similarity metrics. This paper shows the limitations of the PCA similarity factor (SPCA) as a single metric in nonlinear problems where there are differences in magnitude of the same process variables due to expected changes in operation conditions. A novel method for clustering MTS based on a combination between SPCA and the average-based Euclidean distance (AED) within a fuzzy clustering approach is proposed. Case studies involving either simulated or real industrial data collected from a large scale gas turbine are used to illustrate that the hybrid approach enhances the ability to recognize normal and fault operating patterns. This paper also proposes an oversampling procedure to create synthetic multivariate time series that can be useful in commonly occurring situations involving unbalanced data sets. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Proposing integrated Shannon's entropy-inverse data envelopment analysis methods for resource allocation problem under a fuzzy environment

    NASA Astrophysics Data System (ADS)

    Çakır, Süleyman

    2017-10-01

    In this study, a two-phase methodology for resource allocation problems under a fuzzy environment is proposed. In the first phase, the imprecise Shannon's entropy method and the acceptability index are suggested, for the first time in the literature, to select the input and output variables to be used in the data envelopment analysis (DEA) application. In the second phase, an interval inverse DEA model is executed for resource allocation in the short run. To exemplify the practicality of the proposed fuzzy model, a real case application was conducted involving 16 cement firms listed on Borsa Istanbul. The results of the case application indicated that the proposed hybrid model is a viable procedure for handling input-output selection and resource allocation problems under fuzzy conditions. The presented methodology can also lend itself to different applications such as multi-criteria decision-making problems.

  20. Direct chiral determination of free amino acid enantiomers by two-dimensional liquid chromatography: application to control transformations in E-beam irradiated foodstuffs.

    PubMed

    Guillén-Casla, Vanesa; León-González, María Eugenia; Pérez-Arribas, Luis Vicente; Polo-Díez, Luis María

    2010-05-01

    Changes in free amino acid content and its potential racemization in ready-to-eat foods treated with E-beam irradiation between 1 and 8 kGy for sanitation purposes were studied. A simple heart-cut two-dimensional high performance liquid chromatographic method (LC-LC) for the simultaneous enantiomeric determination of three pairs of amino acids used as markers (tyrosine, phenylalanine, and tryptophan) is presented. The proposed method involves the use of two chromatographs in an LC-LC achiral-chiral coupling. Amino acids and their decomposition products were first separated on a primary column (C(18)) using a mixture of ammonium acetate buffer (20 mM, pH 6) (94%) and methanol (6%) as the mobile phase. Then, a portion of each peak was transferred by heart cutting through a switching valve to a teicoplanin chiral column, with methanol (90%)/water (10%) as the mobile phase. Ultraviolet detection was performed at 260 nm. Detection limits were between 0.16 and 3 mg L(-1) for each enantiomer. Recoveries were in the range 79-98%. The LC-LC method combined with the proposed sample extraction procedure is suitable for complex samples; it involves an online cleanup, and it prevents degradation of protein, racemization of L-enantiomers, and degradation of tryptophan. Under these conditions, D-amino acids were not found in any of the analyzed samples at the detection levels of the proposed method.

  1. Conversion of a Rhotrix to a "Coupled Matrix"

    ERIC Educational Resources Information Center

    Sani, B.

    2008-01-01

    In this note, a method of converting a rhotrix to a special form of matrix termed a "coupled matrix" is proposed. The special matrix can be used to solve various problems involving n x n and (n - 1) x (n - 1) matrices simultaneously.

  2. Object extraction method for image synthesis

    NASA Astrophysics Data System (ADS)

    Inoue, Seiki

    1991-11-01

    The extraction of component objects from images is fundamentally important for image synthesis. In TV program production, one useful method is the video-matte technique for specifying the necessary boundary of an object. This, however, involves some intricate and tedious manual processes. The new method proposed in this paper reduces the level of operator skill needed and simplifies object extraction. The object is automatically extracted from just a simple drawing of a thick boundary line. The basic principle involves thinning the thick-boundary-line binary image using the edge intensity of the original image. This method has many practical advantages, including the simplicity of specifying an object, the high accuracy of the thinned boundary line, its ease of application to moving images, and the lack of any need for adjustment.

  3. Efficient least angle regression for identification of linear-in-the-parameters models

    PubMed Central

    Beach, Thomas H.; Rezgui, Yacine

    2017-01-01

    Least angle regression, as a promising model selection method, differentiates itself from conventional stepwise and stagewise methods in that it is neither too greedy nor too slow. It is closely related to L1-norm optimization, which has the advantage of low prediction variance, sacrificing part of the model bias property in order to enhance model generalization capability. In this paper, we propose an efficient least angle regression algorithm for model selection for a large class of linear-in-the-parameters models, with the purpose of accelerating the model selection process. The entire algorithm works completely in a recursive manner, where the correlations between model terms and residuals, the evolving directions and other pertinent variables are derived explicitly and updated successively at every subset selection step. The model coefficients are only computed when the algorithm finishes, so the direct involvement of matrix inversions is avoided. A detailed computational complexity analysis indicates that the proposed algorithm possesses significant computational efficiency compared with the original approach, in which the well-known efficient Cholesky decomposition is used to solve least angle regression. Three artificial and real-world examples are employed to demonstrate the effectiveness, efficiency and numerical stability of the proposed algorithm. PMID:28293140
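
    A brief usage sketch with scikit-learn's off-the-shelf LARS estimator, showing model-term selection on toy data; the paper's contribution is a recursive implementation of this procedure, which is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import Lars

# Least angle regression used as a model-term selector: fit the path and
# keep the first few terms it activates (illustrative data).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = 3 * X[:, 2] - 2 * X[:, 7] + 0.1 * rng.normal(size=100)

model = Lars(n_nonzero_coefs=2).fit(X, y)
print(np.flatnonzero(model.coef_))               # indices of selected terms
print(model.coef_[model.coef_ != 0].round(2))    # their fitted coefficients
```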

  4. Alzheimer Classification Using a Minimum Spanning Tree of High-Order Functional Network on fMRI Dataset

    PubMed Central

    Guo, Hao; Liu, Lei; Chen, Junjie; Xu, Yong; Jie, Xiang

    2017-01-01

    Functional magnetic resonance imaging (fMRI) is one of the most useful methods to generate functional connectivity networks of the brain. However, conventional network generation methods ignore dynamic changes of functional connectivity between brain regions. Previous studies proposed constructing high-order functional connectivity networks that consider the time-varying characteristics of functional connectivity, and a clustering method was performed to decrease computational cost. However, random selection of the initial clustering centers and the number of clusters negatively affected classification accuracy, and the network lost neurological interpretability. Here we propose a novel method that introduces the minimum spanning tree method to high-order functional connectivity networks. As an unbiased method, the minimum spanning tree simplifies high-order network structure while preserving its core framework. The dynamic characteristics of time series are not lost with this approach, and the neurological interpretation of the network is guaranteed. Simultaneously, we propose a multi-parameter optimization framework that involves extracting discriminative features from the minimum spanning tree high-order functional connectivity networks. Compared with the conventional methods, our resting-state fMRI classification method based on minimum spanning tree high-order functional connectivity networks greatly improved the diagnostic accuracy for Alzheimer's disease. PMID:29249926
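
    A minimal sketch of the MST simplification step using scipy, assuming connectivity has first been converted to a distance so that strong connections become short edges; the time series below are random stand-in data.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
ts = rng.normal(size=(90, 130))      # hypothetical: 90 regions x 130 frames
conn = np.abs(np.corrcoef(ts))       # functional connectivity matrix
dist = 1.0 - conn                    # strong connection -> small edge weight
np.fill_diagonal(dist, 0.0)

mst = minimum_spanning_tree(dist)    # sparse matrix with N-1 tree edges
backbone = (mst + mst.T) > 0         # symmetric adjacency of the tree
print(backbone.sum() // 2, "edges kept out of", 90 * 89 // 2)
```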

  5. In Search of Easy-to-Use Methods for Calibrating ADCP's for Velocity and Discharge Measurements

    USGS Publications Warehouse

    Oberg, K.; ,

    2002-01-01

    A cost-effective procedure for calibrating acoustic Doppler current profilers (ADCPs) in the field is presented, along with the advantages and disadvantages of various methods used for calibrating ADCPs. The proposed method requires the use of a differential global positioning system (DGPS) with sub-meter accuracy and standard software for collecting ADCP data. The method involves traversing a long (400-800 meter) course at a constant compass heading and speed while collecting simultaneous DGPS and ADCP data.

  6. Extinction-ratio-independent electrical method for measuring chirp parameters of Mach-Zehnder modulators using frequency-shifted heterodyne.

    PubMed

    Zhang, Shangjian; Wang, Heng; Zou, Xinhai; Zhang, Yali; Lu, Rongguo; Liu, Yong

    2015-06-15

    An extinction-ratio-independent electrical method is proposed for measuring the chirp parameters of Mach-Zehnder electro-optic intensity modulators based on frequency-shifted optical heterodyning. The method utilizes electrical spectrum analysis of the heterodyne products between the intensity-modulated optical signal and the frequency-shifted optical carrier, and achieves intrinsic chirp parameter measurement in the microwave region with high frequency resolution and a wide frequency range for Mach-Zehnder modulators with a finite extinction ratio. Moreover, the proposed method avoids calibrating the responsivity fluctuation of the photodiode despite the photodetection involved. Chirp parameters as a function of modulation frequency are experimentally measured and compared with those obtained by the conventional optical spectrum analysis method. Our method enables an extinction-ratio-independent and calibration-free electrical measurement of Mach-Zehnder intensity modulators by using the high-resolution frequency-shifted heterodyne technique.

  7. A novel sampling method for multiple multiscale targets from scattering amplitudes at a fixed frequency

    NASA Astrophysics Data System (ADS)

    Liu, Xiaodong

    2017-08-01

    A sampling method using the scattering amplitude is proposed for shape and location reconstruction in inverse acoustic scattering problems. Only matrix multiplication is involved in the computation, so the novel sampling method is very easy and simple to implement. With the help of the factorization of the far-field operator, we establish an inf-criterion for the characterization of the underlying scatterers. This result is then used to give a lower bound of the proposed indicator functional for sampling points inside the scatterers, while for sampling points outside the scatterers, we show that the indicator functional decays like the Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functional depends continuously on the scattering amplitude, which further implies that the novel sampling method is extremely stable with respect to errors in the data. Different from classical sampling methods such as the linear sampling method or the factorization method, from the numerical point of view the novel indicator takes its maximum near the boundary of the underlying target and decays like the Bessel functions as the sampling points move away from the boundary. The numerical simulations also show that the proposed sampling method can deal with the multiple multiscale case, even when the different components are close to each other.

  8. AFFINE-CORRECTED PARADISE: FREE-BREATHING PATIENT-ADAPTIVE CARDIAC MRI WITH SENSITIVITY ENCODING

    PubMed Central

    Sharif, Behzad; Bresler, Yoram

    2013-01-01

    We propose a real-time cardiac imaging method with parallel MRI that allows for free breathing during imaging and does not require cardiac or respiratory gating. The method is based on the recently proposed PARADISE (Patient-Adaptive Reconstruction and Acquisition Dynamic Imaging with Sensitivity Encoding) scheme. The new acquisition method adapts the PARADISE k-t space sampling pattern according to an affine model of the respiratory motion. The reconstruction scheme involves multi-channel time-sequential imaging with time-varying channels. All model parameters are adapted to the imaged patient as part of the experiment and drive both data acquisition and cine reconstruction. Simulated cardiac MRI experiments using the realistic NCAT phantom show high quality cine reconstructions and robustness to modeling inaccuracies. PMID:24390159

  9. Food parenting measurement issues: working group consensus report.

    PubMed

    Hughes, Sheryl O; Frankel, Leslie A; Beltran, Alicia; Hodges, Eric; Hoerr, Sharon; Lumeng, Julie; Tovar, Alison; Kremers, Stef

    2013-08-01

    Childhood obesity is a growing problem. As more researchers become involved in the study of parenting influences on childhood obesity, there appears to be a lack of agreement regarding the most important parenting constructs of interest, definitions of those constructs, and measurement of those constructs in a consistent manner across studies. This article aims to summarize findings from a working group that convened specifically to discuss measurement issues related to parental influences on childhood obesity. Six subgroups were formed to address key measurement issues. The conceptualization subgroup proposed to define and distinguish constructs of general parenting styles, feeding styles, and food parenting practices with the goal of understanding interrelating levels of parental influence on child eating behaviors. The observational subgroup identified the need to map constructs for use in coding direct observations and create observational measures that can capture the bidirectional effects of parent-child interactions. The self-regulation subgroup proposed an operational definition of child self-regulation of energy intake and suggested future measures of self-regulation across different stages of development. The translational/community involvement subgroup proposed the involvement of community in the development of surveys so that measures adequately reflect cultural understanding and practices of the community. The qualitative methods subgroup proposed qualitative methods as a way to better understand the breadth of food parenting practices and motivations for the use of such practices. The longitudinal subgroup stressed the importance of food parenting measures sensitive to change for use in longitudinal studies. In the creation of new measures, it is important to consider cultural sensitivity and context-specific food parenting domains. Moderating variables such as child temperament and child food preferences should be considered in models.

  10. Food Parenting Measurement Issues: Working Group Consensus Report

    PubMed Central

    Frankel, Leslie A.; Beltran, Alicia; Hodges, Eric; Hoerr, Sharon; Lumeng, Julie; Tovar, Alison; Kremers, Stef

    2013-01-01

    Abstract Childhood obesity is a growing problem. As more researchers become involved in the study of parenting influences on childhood obesity, there appears to be a lack of agreement regarding the most important parenting constructs of interest, definitions of those constructs, and measurement of those constructs in a consistent manner across studies. This article aims to summarize findings from a working group that convened specifically to discuss measurement issues related to parental influences on childhood obesity. Six subgroups were formed to address key measurement issues. The conceptualization subgroup proposed to define and distinguish constructs of general parenting styles, feeding styles, and food parenting practices with the goal of understanding interrelating levels of parental influence on child eating behaviors. The observational subgroup identified the need to map constructs for use in coding direct observations and create observational measures that can capture the bidirectional effects of parent–child interactions. The self-regulation subgroup proposed an operational definition of child self-regulation of energy intake and suggested future measures of self-regulation across different stages of development. The translational/community involvement subgroup proposed the involvement of community in the development of surveys so that measures adequately reflect cultural understanding and practices of the community. The qualitative methods subgroup proposed qualitative methods as a way to better understand the breadth of food parenting practices and motivations for the use of such practices. The longitudinal subgroup stressed the importance of food parenting measures sensitive to change for use in longitudinal studies. In the creation of new measures, it is important to consider cultural sensitivity and context-specific food parenting domains. Moderating variables such as child temperament and child food preferences should be considered in models. PMID:23944928

  11. Detrended fluctuation analysis for major depressive disorder.

    PubMed

    Mumtaz, Wajid; Malik, Aamir Saeed; Ali, Syed Saad Azhar; Yasin, Mohd Azhar Mohd; Amin, Hafeezullah

    2015-01-01

    The clinical utility of electroencephalography (EEG)-based diagnostic studies is less clear for major depressive disorder (MDD). In this paper, a novel machine learning (ML) scheme is presented to discriminate MDD patients from healthy controls. The proposed method involves feature extraction, feature selection, classification and validation. The EEG data acquisition involved eyes-closed (EC) and eyes-open (EO) conditions. At the feature extraction stage, detrended fluctuation analysis (DFA) was performed on the EEG data to obtain scaling exponents. The DFA analyzes the presence or absence of long-range temporal correlations (LRTC) in the recorded EEG data, and the resulting scaling exponents were used as input features to the proposed system. At the feature selection stage, 3 different techniques were used for comparison purposes, and a logistic regression (LR) classifier was employed. The method was validated by 10-fold cross-validation. We also observed the effect of 3 different reference montages on the computed features. The results show that the DFA performed better on LE data compared with the IR and AR data, whereas in the Wilcoxon ranking the AR montage performed better than LE and IR. Based on the results, it was concluded that the DFA provides useful information to discriminate MDD patients and, with further validation, could be employed in clinics for the diagnosis of MDD.
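
    As a concrete illustration of the feature-extraction step, the following is a minimal Python sketch of DFA on a single EEG channel; the window sizes and first-order detrending are illustrative assumptions, not the authors' exact settings.

        import numpy as np

        def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
            """Estimate the DFA scaling exponent of a 1-D signal x."""
            y = np.cumsum(x - np.mean(x))              # integrated profile
            flucts = []
            for s in scales:
                rms = []
                for i in range(len(y) // s):
                    seg = y[i * s:(i + 1) * s]
                    t = np.arange(s)
                    coef = np.polyfit(t, seg, 1)       # local linear detrend
                    rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
                flucts.append(np.mean(rms))
            # slope of log F(s) versus log s is the scaling exponent alpha
            alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
            return alpha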

  12. An efficient numerical algorithm for transverse impact problems

    NASA Technical Reports Server (NTRS)

    Sankar, B. V.; Sun, C. T.

    1985-01-01

    Transverse impact problems in which elastic and plastic indentation effects are considered involve a nonlinear integral equation for the contact force, which, in practice, is usually solved by an iterative scheme with small increments in time. In this paper, a numerical method is proposed wherein the iterations of the nonlinear problem are separated from the structural response computations. This makes the numerical procedure much simpler and more efficient. The proposed method is applied to some impact problems for which solutions are available, and the results are found to be in good agreement. The effect of the magnitude of the time increment on the results is also discussed.
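
    The separation of the contact iterations from the structural response computation can be pictured with a simple fixed-point loop at one time step; the Hertzian contact law and the damped update below are illustrative assumptions, not the paper's scheme.

        def contact_force(w_projectile, structural_deflection, k=1e8, tol=1e-6):
            """Fixed-point iteration for the contact force at one time step.

            structural_deflection is a callable F -> local deflection, standing
            in for the (separate) structural response computation."""
            F = 0.0
            for _ in range(100):
                alpha = max(w_projectile - structural_deflection(F), 0.0)  # indentation
                F_new = k * alpha ** 1.5              # Hertzian contact law
                if abs(F_new - F) < tol * max(F_new, 1.0):
                    return F_new
                F = 0.5 * (F + F_new)                 # damped update for stability
            return F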

  13. Simple Backdoors on RSA Modulus by Using RSA Vulnerability

    NASA Astrophysics Data System (ADS)

    Sun, Hung-Min; Wu, Mu-En; Yang, Cheng-Ta

    This investigation proposes two methods for embedding backdoors in the RSA modulus N=pq rather than in the public exponent e. This strategy not only permits manufacturers to embed backdoors in an RSA system, but also allows users to choose any desired public exponent, such as e=2^16+1, to ensure efficient encryption. This work utilizes a lattice attack and an exhaustive attack to embed backdoors in the two proposed methods, called RSASBLT and RSASBES, respectively. Both approaches involve straightforward steps, making their running time roughly the same as normal RSA key-generation time, implying that no one can detect the backdoor by observing timing differences.

  14. Robust vehicle detection under various environmental conditions using an infrared thermal camera and its application to road traffic flow monitoring.

    PubMed

    Iwasaki, Yoichiro; Misumi, Masato; Nakamiya, Toshiyuki

    2013-06-17

    We have previously proposed a method for detecting vehicle positions and their movements (henceforth referred to as "our previous method") using thermal images taken with an infrared thermal camera. Our experiments have shown that our previous method detects vehicles robustly under four different environmental conditions, including poor visibility in snow and thick fog. Our previous method uses the windshield and its surroundings as the target of the Viola-Jones detector. However, some experiments in winter show that the vehicle detection accuracy decreases because the temperatures of many windshields approximate those of the windshield exteriors. In this paper, we propose a new vehicle detection method (henceforth referred to as "our new method") that detects vehicles based on the thermal energy reflection of tires. We conducted experiments using three series of thermal images for which the vehicle detection accuracy of our previous method is low. Our new method detects 1,417 vehicles (92.8%) out of 1,527 vehicles, with 52 false detections in total. Therefore, by combining our two methods, high vehicle detection accuracy is maintained under various environmental conditions. Finally, we apply the traffic information obtained by our two methods to automatic traffic flow monitoring, and show the effectiveness of our proposal.

  15. A Discussion and Comparison of Selected Methods for Determining Cutoff Scores for Proficiency and Placement Tests. Placement and Proficiency Testing Report No. 6.

    ERIC Educational Resources Information Center

    Klein, Anna C.; Whitney, Douglas R.

    Procedures and related issues involved in the application of trait-treatment interaction (TTI) to institutional research, in general, and to placement and proficiency testing, in particular, are discussed and illustrated. Traditional methods for choosing cut-off scores are compared and proposals for evaluating the results in the TTI framework are…

  16. Standardless quantification by parameter optimization in electron probe microanalysis

    NASA Astrophysics Data System (ADS)

    Limandri, Silvina P.; Bonetto, Rita D.; Josa, Víctor Galván; Carreras, Alejo C.; Trincavelli, Jorge C.

    2012-11-01

    A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists of minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested on a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the proposed method are better in 74% of the cases studied. In addition, the performance of the proposed method is compared with the first-principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 for 66% of the cases for POEMA, GENESIS and DTSA, respectively.
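
    For illustration, a least-squares fit of an analytical spectrum model is sketched below in Python; the Gaussian-peaks-plus-linear-background model and all parameter names are assumptions for the sketch, not the POEMA model itself.

        import numpy as np
        from scipy.optimize import least_squares

        def model(params, energy, peak_centers):
            a, b = params[0], params[1]                  # linear background
            spectrum = a + b * energy
            for i, c in enumerate(peak_centers):
                amp, width = params[2 + 2 * i], params[3 + 2 * i]
                spectrum += amp * np.exp(-0.5 * ((energy - c) / width) ** 2)
            return spectrum

        def fit_spectrum(energy, counts, peak_centers):
            # crude starting guesses for background and each peak
            p0 = [counts.min(), 0.0] + [counts.max(), 0.05] * len(peak_centers)
            res = least_squares(
                lambda p: model(p, energy, peak_centers) - counts, p0)
            return res.x   # optimized parameters, e.g. peak areas -> concentrations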

  17. 77 FR 25438 - Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-30

    ... consideration. ADDRESSES: You may submit comments by any of the following methods: Electronic: CFPB_Public_PRA... agencies and the general public. Nearly all information collection will involve the use of electronic communication or other forms of information technology and telephonic means. Current Actions: Request for new...

  18. Designing Needs Statements in a Systematic Iterative Way

    ERIC Educational Resources Information Center

    Verstegen, D. M. L.; Barnard, Y. F.; Pilot, A.

    2009-01-01

    Designing specifications for technically advanced instructional products, such as e-learning, simulations or simulators requires different kinds of expertise. The SLIM method proposes to involve all stakeholders from the beginning in a series of workshops under the guidance of experienced instructional designers. These instructional designers…

  19. Epileptic seizure detection in EEG signal using machine learning techniques.

    PubMed

    Jaiswal, Abeg Kumar; Banka, Haider

    2018-03-01

    Epilepsy is a well-known nervous system disorder characterized by seizures. Electroencephalograms (EEGs), which capture brain neural activity, can detect epilepsy. Traditional methods for analyzing an EEG signal for epileptic seizure detection are time-consuming. Recently, several automated seizure detection frameworks using machine learning techniques have been proposed to replace these traditional methods. The two basic steps involved in machine learning are feature extraction and classification. Feature extraction reduces the input pattern space by keeping informative features, and the classifier assigns the appropriate class label. In this paper, we propose two effective approaches involving subpattern-based PCA (SpPCA) and cross-subpattern correlation-based PCA (SubXPCA) with a Support Vector Machine (SVM) for automated seizure detection in EEG signals. Feature extraction was performed using SpPCA and SubXPCA. Both techniques explore the subpattern correlations of EEG signals, which helps in the decision-making process. The SVM, trained with a radial basis function kernel, is used for classification of seizure and non-seizure EEG signals. All experiments were carried out on the benchmark epilepsy EEG dataset, which consists of 500 EEG signals recorded under different scenarios. Seven different experimental cases of classification were conducted. Classification accuracy was evaluated using tenfold cross-validation. The classification results of the proposed approaches are compared with those of existing techniques from the literature to establish the claim.
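
    A minimal sketch of the SpPCA-plus-SVM pipeline is given below, assuming epochs stored as rows of a matrix X; the number of subpatterns and PCA components are illustrative, the SubXPCA variant is omitted, and for a rigorous evaluation the PCA would be fit on training folds only.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        def sppca_features(X, n_sub=4, n_comp=10):
            # split each epoch into subpatterns and apply PCA per subpattern
            subs = np.array_split(X, n_sub, axis=1)
            parts = [PCA(n_components=n_comp).fit_transform(s) for s in subs]
            return np.hstack(parts)          # concatenated subpattern features

        # X: (n_epochs, n_samples) EEG matrix; y: seizure / non-seizure labels
        # feats = sppca_features(X)
        # clf = SVC(kernel="rbf")
        # print(cross_val_score(clf, feats, y, cv=10).mean())  # tenfold CV accuracy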

  20. Environmental impact assessment for alternative-energy power plants in México.

    PubMed

    González-Avila, María E; Beltrán-Morales, Luis Felipe; Braker, Elizabeth; Ortega-Rubio, Alfredo

    2006-07-01

    Ten Environmental Impact Assessment Reports (EIAR) for projects involving alternative power plants in Mexico, developed during the last twelve years, were reviewed. Our analysis focused on the methods used to assess the impacts produced by hydroelectric and geothermal power projects. The methods used to assess impacts in the EIARs ranged from simple descriptive criteria to quantitative models, and they are not concordant with the level of the EIAR required by the environmental authority, or even with the kind of project developed. It is concluded that there is no correlation between the tools used to assess impacts and the assigned type of the EIAR. Because the methods used to assess the impacts produced by these power projects have not changed in 20 years, we propose a quantitative method, based on ecological criteria and tools, to assess the impacts produced by hydroelectric and geothermal plants according to the specific characteristics of the project. The proposed method is supported by environmental norms, and can assist environmental authorities in assigning the correct level and tools to be applied to hydroelectric and geothermal projects. It can also be adapted to other production activities in Mexico and to other countries.

  1. Multiratio fusion change detection with adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Hytla, Patrick C.; Balster, Eric J.; Vasquez, Juan R.; Neuroth, Robert M.

    2017-04-01

    A ratio-based change detection method known as multiratio fusion (MRF) is proposed and tested. The MRF framework builds on other change detection components proposed in this work: dual ratio (DR) and multiratio (MR). The DR method involves two ratios coupled with adaptive thresholds to maximize detected changes and minimize false alarms. The use of two ratios is shown to outperform the single ratio case when the means of the image pairs are not equal. MR change detection builds on the DR method by including negative imagery to produce four total ratios with adaptive thresholds. Inclusion of negative imagery is shown to improve detection sensitivity and to boost detection performance in certain target and background cases. MRF further expands this concept by fusing together the ratio outputs using a routine in which detections must be verified by two or more ratios to be classified as a true changed pixel. The proposed method is tested with synthetically generated test imagery and real datasets with results compared to other methods found in the literature. DR is shown to significantly outperform the standard single ratio method. MRF produces excellent change detection results that exhibit up to a 22% performance improvement over other methods from the literature at low false-alarm rates.
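
    The dual-ratio idea can be sketched in a few lines; the mean-plus-k-standard-deviations threshold rule below is an illustrative stand-in for the paper's adaptive thresholds.

        import numpy as np

        def dual_ratio_change(img1, img2, k=3.0, eps=1e-6):
            """Detect changed pixels with two reciprocal image ratios."""
            r1 = (img1 + eps) / (img2 + eps)
            r2 = (img2 + eps) / (img1 + eps)
            det1 = r1 > r1.mean() + k * r1.std()   # adaptive threshold, ratio 1
            det2 = r2 > r2.mean() + k * r2.std()   # adaptive threshold, ratio 2
            return det1 | det2                     # union of the two detections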

  2. A new asymptotic method for jump phenomena

    NASA Technical Reports Server (NTRS)

    Reiss, E. L.

    1980-01-01

    Physical phenomena involving rapid and sudden transitions, such as snap buckling of elastic shells, explosions, and earthquakes, are characterized mathematically as a small disturbance causing a large-amplitude response. Because of this, standard asymptotic and perturbation methods are ill-suited to these problems. In the present paper, a new method of analyzing jump phenomena is proposed. The principal feature of the method is the representation of the response in terms of rational functions. For illustration, the method is applied to the snap buckling of an elastic arch and to a simple combustion problem.

  3. A novel hybrid decomposition-and-ensemble model based on CEEMD and GWO for short-term PM2.5 concentration forecasting

    NASA Astrophysics Data System (ADS)

    Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu

    2016-06-01

    To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, this proposed model has improved the prediction accuracy and hit rates of directional prediction. The proposed model involves three main steps, i.e., decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) for simplifying the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; integrating all predicted IMFs for the ensemble result as the final prediction by another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models for its higher prediction accuracy and hit rates of directional prediction.
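
    A rough sketch of the decomposition-and-ensemble pipeline follows; the EEMD routine from the PyEMD package stands in for CEEMD, a default SVR stands in for the GWO-tuned ones, and the lag-window feature construction is an assumption.

        import numpy as np
        from PyEMD import EEMD
        from sklearn.svm import SVR

        def lag_matrix(series, lags=7):
            X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
            y = series[lags:]
            return X, y

        def decompose_predict(series):
            imfs = EEMD().eemd(series)               # intrinsic mode functions
            preds = []
            for imf in imfs:
                X, y = lag_matrix(imf)
                model = SVR().fit(X, y)              # GWO-tuned C/gamma in the paper
                preds.append(model.predict(imf[-7:].reshape(1, -1))[0])
            # the paper combines the per-IMF forecasts with another optimized
            # SVR; a simple sum is used here as the ensemble stand-in
            return sum(preds)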

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Chase

    A number of Department of Energy (DOE) science applications, involving exascale computing systems and large experimental facilities, are expected to generate large volumes of data, in the range of petabytes to exabytes, which will be transported over wide-area networks for the purpose of storage, visualization, and analysis. The objectives of this proposal are to (1) develop and test the component technologies and their synthesis methods to achieve source-to-sink high-performance flows, and (2) develop tools that provide these capabilities through simple interfaces to users and applications. In terms of the former, we propose to develop (1) optimization methods that align and transition multiple storage flows to multiple network flows on multicore, multibus hosts; and (2) edge and long-haul network path realization and maintenance using advanced provisioning methods including OSCARS and OpenFlow. We also propose synthesis methods that combine these individual technologies to compose high-performance flows using a collection of constituent storage-network flows, and realize them across the storage and local network connections as well as long-haul connections. We propose to develop automated user tools that profile the hosts, storage systems, and network connections; compose the source-to-sink complex flows; and set up and maintain the needed network connections.

  5. Adaptive Granulation-Based Prediction for Energy System of Steel Industry.

    PubMed

    Wang, Tianyu; Han, Zhongyang; Zhao, Jun; Wang, Wei

    2018-01-01

    The flow variation tendency of byproduct gas plays a crucial role in energy scheduling in the steel industry, and an accurate prediction of its future trends is significantly beneficial for the economic profits of a steel enterprise. In this paper, a long-term prediction model for the energy system is proposed based on an adaptive granulation method that considers the production semantics involved in the fluctuation tendency of the energy data and partitions the data into a series of information granules. To fully reflect the data characteristics of the resulting unequal-length temporal granules, a 3-D feature space consisting of the timespan, the amplitude and the linetype is designed as linguistic descriptors. In particular, a collaborative-conditional fuzzy clustering method is proposed to granularize the tendency-based feature descriptors and specifically measure the amplitude variation of the industrial data, which plays a dominant role in the feature space. To quantify the performance of the proposed method, a series of real-world industrial data from the energy data center of a steel plant is employed in comparative experiments. The experimental results demonstrate that the proposed method successfully satisfies the requirements of practically viable prediction.

  6. Measurement of vibration using phase only correlation technique

    NASA Astrophysics Data System (ADS)

    Balachandar, S.; Vipin, K.

    2017-08-01

    A novel method for the measurement of vibration is proposed and demonstrated. The proposed experiment is based on laser triangulation and consists of a line laser, the object under test, and a high-speed camera remotely controlled by software. The experiment involves launching a line-laser probe beam perpendicular to the axis of the vibrating object; the reflected probe beam is recorded by the high-speed camera. The dynamic position of the line laser in the camera plane is governed by the magnitude and frequency of the vibrating test object. Using the phase correlation technique, the maximum distance travelled by the probe beam in the CCD plane is measured in pixels using MATLAB, and the actual displacement of the object in mm is obtained by calibration. From the displacement data over time, other vibration-associated quantities such as acceleration, velocity and frequency are evaluated. Preliminary results of the proposed method are reported for accelerations from 1 g to 3 g and frequencies from 6 Hz to 26 Hz, and closely match theoretical values. The advantages of the proposed method are that it is non-destructive and that, using the phase correlation algorithm, subpixel displacements in the CCD plane can be measured with high accuracy.
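
    The core displacement estimate can be sketched with a phase-only correlation between two frames; subpixel refinement and the pixel-to-millimetre calibration described above are omitted, and the NumPy implementation below stands in for the MATLAB one.

        import numpy as np

        def phase_correlation_shift(frame1, frame2):
            """Estimate the integer-pixel shift between two frames."""
            F1 = np.fft.fft2(frame1)
            F2 = np.fft.fft2(frame2)
            cross = F1 * np.conj(F2)
            r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))  # phase-only spectrum
            peak = np.unravel_index(np.argmax(np.abs(r)), r.shape)
            shape = np.array(r.shape)
            shifts = np.array(peak, dtype=float)
            # wrap shifts beyond half the frame size to negative displacements
            mask = shifts > shape / 2
            shifts[mask] -= shape[mask]
            return shifts    # (row, col) displacement in pixels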

  7. Altitude Effects on Thermal Ice Protection System Performance; a Study of an Alternative Approach

    NASA Technical Reports Server (NTRS)

    Addy, Harold E., Jr.; Orchard, David; Wright, William B.; Oleskiw, Myron

    2016-01-01

    Research has been conducted to better understand the phenomena involved during operation of an aircraft's thermal ice protection system under running wet icing conditions. In such situations, supercooled water striking a thermally ice-protected surface does not fully evaporate but runs aft to a location where it freezes. The effects of altitude, in terms of air pressure and density, on the processes involved were of particular interest. Initial study results showed that the altitude effects on heat energy transfer were accurately modeled using existing methods, but water mass transport was not. Based upon those results, a new method to account for altitude effects on thermal ice protection system operation was proposed. The method employs a two-step process where heat energy and mass transport are sequentially matched, linked by matched surface temperatures. While not providing exact matching of heat and mass transport to reference conditions, the method produces a better simulation than other methods. Moreover, it does not rely on the application of empirical correction factors, but instead relies on the straightforward application of the primary physics involved. This report describes the method, shows results of testing the method, and discusses its limitations.

  8. Optimization of cell seeding in a 2D bio-scaffold system using computational models.

    PubMed

    Ho, Nicholas; Chua, Matthew; Chui, Chee-Kong

    2017-05-01

    The cell expansion process is a crucial part of generating cells on a large scale in a bioreactor system. Hence, it is important to set operating conditions (e.g. initial cell seeding distribution, culture medium flow rate) to an optimal level. Often, the initial cell seeding distribution is neglected in the design of a bioreactor using conventional seeding distribution methods. This paper proposes a novel seeding distribution method that aims to maximize cell growth and minimize production time/cost. The proposed method utilizes two computational models: the first model represents cell growth patterns, whereas the second model determines optimal initial cell seeding positions for adherent cell expansion. Cell growth simulation from the first model demonstrates that the model can represent various cell types with known probabilities. The second model involves a combination of combinatorial optimization, Monte Carlo methods and concepts of the first model, and is used to design a multi-layer 2D bio-scaffold system that increases cell production efficiency in bioreactor applications. Simulation results show that the recommended input configurations obtained from the proposed optimization method are indeed optimal, illustrating the effectiveness of the method. The potential of the proposed seeding distribution method as a useful tool to optimize the cell expansion process in modern bioreactor system applications is highlighted. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. A Coding Method for Efficient Subgraph Querying on Vertex- and Edge-Labeled Graphs

    PubMed Central

    Zhu, Lei; Song, Qinbao; Guo, Yuchen; Du, Lei; Zhu, Xiaoyan; Wang, Guangtao

    2014-01-01

    Labeled graphs are widely used to model complex data in many domains, so subgraph querying has been attracting more and more attention from researchers around the world. Unfortunately, subgraph querying is very time consuming since it involves subgraph isomorphism testing that is known to be an NP-complete problem. In this paper, we propose a novel coding method for subgraph querying that is based on Laplacian spectrum and the number of walks. Our method follows the filtering-and-verification framework and works well on graph databases with frequent updates. We also propose novel two-step filtering conditions that can filter out most false positives and prove that the two-step filtering conditions satisfy the no-false-negative requirement (no dismissal in answers). Extensive experiments on both real and synthetic graphs show that, compared with six existing counterpart methods, our method can effectively improve the efficiency of subgraph querying. PMID:24853266
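
    As an illustration of the filtering step, the sketch below prunes candidate graphs with a Laplacian-spectrum condition based on eigenvalue interlacing; this is a generic necessary-condition heuristic for illustration, not the paper's exact coding method.

        import numpy as np
        import networkx as nx

        def laplacian_spectrum(g):
            return np.sort(nx.laplacian_spectrum(g))   # ascending eigenvalues

        def may_contain(query_spec, data_spec):
            """False means the data graph can be safely filtered out."""
            if len(data_spec) < len(query_spec):
                return False
            k = len(query_spec)
            # i-th largest query eigenvalue should not exceed the i-th
            # largest data eigenvalue (interlacing-style bound)
            return bool(np.all(query_spec[-k:] <= data_spec[-k:] + 1e-9))

        # g_query, g_data: networkx graphs; verify by isomorphism testing only
        # the candidates for which may_contain(...) returns True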

  10. Novel Harmonic Regularization Approach for Variable Selection in Cox's Proportional Hazards Model

    PubMed Central

    Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan

    2014-01-01

    Variable selection is an important issue in regression, and a number of variable selection methods involving nonconvex penalty functions have been proposed. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso-type methods. PMID:25506389

  11. Saving and Reproduction of Human Motion Data by Using Haptic Devices with Different Configurations

    NASA Astrophysics Data System (ADS)

    Tsunashima, Noboru; Yokokura, Yuki; Katsura, Seiichiro

    Recently, there has been increased focus on “haptic recording”, and the development of a motion-copying system is an efficient method for its realization. Haptic recording involves the saving and reproduction of human motion data on the basis of haptic information. To increase the number of applications of the motion-copying system in various fields, it is necessary to reproduce human motion data by using haptic devices with different configurations. In this study, a method for such haptic recording is developed, in which human motion data are saved and reproduced on the basis of work-space information obtained by coordinate transformation of motor-space information. The validity of the proposed method is demonstrated by experiments. With the proposed method, the saving and reproduction of human motion data by various devices is achieved. Furthermore, it is also possible to use haptic recording in various fields.

  12. An efficient genome-wide association test for multivariate phenotypes based on the Fisher combination function.

    PubMed

    Yang, James J; Li, Jia; Williams, L Keoki; Buu, Anne

    2016-01-05

    In genome-wide association studies (GWAS) for complex diseases, the association between a SNP and each phenotype is usually weak. Combining multiple related phenotypic traits can increase the power of gene search and thus is a practically important area that requires methodological work. This study provides a comprehensive review of existing methods for conducting GWAS on complex diseases with multiple phenotypes, including the multivariate analysis of variance (MANOVA), the principal component analysis (PCA), the generalized estimating equations (GEE), the trait-based association test involving the extended Simes procedure (TATES), and the classical Fisher combination test. We propose a new method that relaxes the unrealistic independence assumption of the classical Fisher combination test and is computationally efficient. To demonstrate applications of the proposed method, we also present the results of statistical analysis on the Study of Addiction: Genetics and Environment (SAGE) data. Our simulation study shows that the proposed method has higher power than existing methods while controlling the type I error rate. The GEE and the classical Fisher combination test, on the other hand, do not control the type I error rate and thus are not recommended. In general, the power of the competing methods decreases as the correlation between phenotypes increases. All the methods tend to have lower power when the multivariate phenotypes come from long-tailed distributions. The real data analysis also demonstrates that the proposed method allows us to compare the marginal results with the multivariate results and to specify which SNPs are specific to a particular phenotype or contribute to the common construct. The proposed method outperforms existing methods in most settings and also has great applications in GWAS on complex diseases with multiple phenotypes, such as substance abuse disorders.
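
    For reference, the classical Fisher combination statistic is sketched below; under independence it follows a chi-square distribution with 2k degrees of freedom, and the paper's contribution of relaxing that independence assumption is not reproduced here.

        import numpy as np
        from scipy.stats import chi2

        def fisher_combination(pvalues):
            """Combine k p-values into one statistic and combined p-value."""
            pvalues = np.asarray(pvalues, dtype=float)
            stat = -2.0 * np.sum(np.log(pvalues))
            combined_p = chi2.sf(stat, df=2 * len(pvalues))
            return stat, combined_p

        # e.g. p-values of one SNP against three phenotypes:
        # print(fisher_combination([0.03, 0.20, 0.008]))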

  13. Bayesian Normalization Model for Label-Free Quantitative Analysis by LC-MS

    PubMed Central

    Nezami Ranjbar, Mohammad R.; Tadesse, Mahlet G.; Wang, Yue; Ressom, Habtom W.

    2016-01-01

    We introduce a new method for normalization of data acquired by liquid chromatography coupled with mass spectrometry (LC-MS) in label-free differential expression analysis. Normalization of LC-MS data is desired prior to subsequent statistical analysis to adjust variabilities in ion intensities that are caused not by biological differences but by experimental bias. There are different sources of bias, including variability during sample collection and sample storage, poor experimental design, and noise. In addition, instrument variability in experiments involving a large number of LC-MS runs leads to a significant drift in intensity measurements. Although various methods have been proposed for normalization of LC-MS data, there is no universally applicable approach. In this paper, we propose a Bayesian normalization model (BNM) that utilizes scan-level information from LC-MS data. Specifically, the proposed method uses peak shapes to model the scan-level data acquired from extracted ion chromatograms (EIC), with parameters considered as a linear mixed effects model. We extended the model into BNM with drift (BNMD) to compensate for the variability in intensity measurements due to long LC-MS runs. We evaluated the performance of our method using synthetic and experimental data. In comparison with several existing methods, the proposed BNM and BNMD yielded significant improvement. PMID:26357332

  14. Mean Comparison: Manifest Variable versus Latent Variable

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Bentler, Peter M.

    2006-01-01

  15. Accounting for the Benefits of Database Normalization

    ERIC Educational Resources Information Center

    Wang, Ting J.; Du, Hui; Lehmann, Constance M.

    2010-01-01

    This paper proposes a teaching approach to reinforce accounting students' understanding of the concept of database normalization. Unlike the conceptual approach shown in most AIS textbooks, this approach involves calculations and reconciliations with which accounting students are familiar because the methods are frequently used in…

  16. Essential metrics for assessing sex & gender integration in health research proposals involving human participants.

    PubMed

    Day, Suzanne; Mason, Robin; Tannenbaum, Cara; Rochon, Paula A

    2017-01-01

    Integrating sex and gender in health research is essential to produce the best possible evidence to inform health care. Comprehensive integration of sex and gender requires considering these variables from the very beginning of the research process, starting at the proposal stage. To promote excellence in sex and gender integration, we have developed a set of metrics to assess the quality of sex and gender integration in research proposals. These metrics are designed to assist both researchers in developing proposals and reviewers in making funding decisions. We developed this tool through an iterative three-stage method involving 1) review of existing sex and gender integration resources and initial metrics design, 2) expert review and feedback via anonymous online survey (Likert scale and open-ended questions), and 3) analysis of feedback data and collective revision of the metrics. We received feedback on the initial metrics draft from 20 reviewers with expertise in conducting sex- and/or gender-based health research. The majority of reviewers responded positively to questions regarding the utility, clarity and completeness of the metrics, and all reviewers provided responses to open-ended questions about suggestions for improvements. Coding and analysis of responses identified three domains for improvement: clarifying terminology, refining content, and broadening applicability. Based on this analysis we revised the metrics into the Essential Metrics for Assessing Sex and Gender Integration in Health Research Proposals Involving Human Participants, which outlines criteria for excellence within each proposal component and provides illustrative examples to support implementation. By enhancing the quality of sex and gender integration in proposals, the metrics will help to foster comprehensive, meaningful integration of sex and gender throughout each stage of the research process, resulting in better quality evidence to inform health care for all.

  17. Essential metrics for assessing sex & gender integration in health research proposals involving human participants

    PubMed Central

    Mason, Robin; Tannenbaum, Cara; Rochon, Paula A.

    2017-01-01

    Integrating sex and gender in health research is essential to produce the best possible evidence to inform health care. Comprehensive integration of sex and gender requires considering these variables from the very beginning of the research process, starting at the proposal stage. To promote excellence in sex and gender integration, we have developed a set of metrics to assess the quality of sex and gender integration in research proposals. These metrics are designed to assist both researchers in developing proposals and reviewers in making funding decisions. We developed this tool through an iterative three-stage method involving 1) review of existing sex and gender integration resources and initial metrics design, 2) expert review and feedback via anonymous online survey (Likert scale and open-ended questions), and 3) analysis of feedback data and collective revision of the metrics. We received feedback on the initial metrics draft from 20 reviewers with expertise in conducting sex- and/or gender-based health research. The majority of reviewers responded positively to questions regarding the utility, clarity and completeness of the metrics, and all reviewers provided responses to open-ended questions about suggestions for improvements. Coding and analysis of responses identified three domains for improvement: clarifying terminology, refining content, and broadening applicability. Based on this analysis we revised the metrics into the Essential Metrics for Assessing Sex and Gender Integration in Health Research Proposals Involving Human Participants, which outlines criteria for excellence within each proposal component and provides illustrative examples to support implementation. By enhancing the quality of sex and gender integration in proposals, the metrics will help to foster comprehensive, meaningful integration of sex and gender throughout each stage of the research process, resulting in better quality evidence to inform health care for all. PMID:28854192

  18. Quadratic adaptive algorithm for solving cardiac action potential models.

    PubMed

    Chen, Min-Hung; Chen, Po-Yuan; Luo, Ching-Hsing

    2016-10-01

    An adaptive integration method is proposed for computing cardiac action potential models accurately and efficiently. Time steps are adaptively chosen by solving a quadratic formula involving the first and second derivatives of the membrane action potential. To improve the numerical accuracy, we devise an extremum-locator (el) function to predict the local extremum when approaching the peak amplitude of the action potential. In addition, the time step restriction (tsr) technique is designed to limit the increase in time steps, and thus prevent the membrane potential from changing abruptly. The performance of the proposed method is tested using the Luo-Rudy phase 1 (LR1), dynamic (LR2), and human O'Hara-Rudy dynamic (ORd) ventricular action potential models, and the Courtemanche atrial model incorporating a Markov sodium channel model. Numerical experiments demonstrate that the action potential generated using the proposed method is more accurate than that using the traditional Hybrid method, especially near the peak region. The traditional Hybrid method may choose large time steps near the peak region, and sometimes causes the action potential to become distorted. In contrast, the proposed method chooses very fine time steps in the peak region but large time steps in the smooth region, and the profiles are smoother and closer to the reference solution. In the test on the stiff Markov ionic channel model, the Hybrid method blows up if the allowable time step is set to be greater than 0.1 ms. In contrast, our method can adjust the time step size automatically, and is stable. Overall, the proposed method is more accurate than and as efficient as the traditional Hybrid method, especially for the human ORd model. The proposed method shows improvement for action potentials with a non-smooth morphology, and further investigation is needed to determine whether the method is helpful during propagation of the action potential. Copyright © 2016 Elsevier Ltd. All rights reserved.
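
    The time-step selection can be pictured as solving a quadratic in dt so that the second-order Taylor prediction of the membrane potential changes by at most a tolerance; the tolerance and step caps below are illustrative assumptions, not the paper's tuned values.

        import math

        def adaptive_dt(dVdt, d2Vdt2, dV_max=0.2, dt_min=1e-3, dt_max=1.0):
            """Largest dt with |V'|*dt + 0.5*|V''|*dt^2 <= dV_max (mV)."""
            a, b, c = 0.5 * abs(d2Vdt2), abs(dVdt), -dV_max
            if a < 1e-12:
                dt = dV_max / max(b, 1e-12)        # effectively linear case
            else:
                # positive root of the quadratic; discriminant is always > 0
                dt = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
            return min(max(dt, dt_min), dt_max)    # time-step restriction (tsr)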

  19. Gait-Event-Based Synchronization Method for Gait Rehabilitation Robots via a Bioinspired Adaptive Oscillator.

    PubMed

    Chen, Gong; Qi, Peng; Guo, Zhao; Yu, Haoyong

    2017-06-01

    In the field of gait rehabilitation robotics, achieving human-robot synchronization is very important. In this paper, a novel human-robot synchronization method using gait event information is proposed. The method includes two steps. First, seven gait events in one gait cycle are detected in real time with a hidden Markov model; second, an adaptive oscillator is utilized to estimate the stride percentage of human gait using any one of the gait events. Synchronous reference trajectories for the robot are then generated with the estimated stride percentage. The method is based on a bioinspired adaptive oscillator, a mathematical tool first proposed to explain the phenomenon of synchronous flashing among fireflies. The proposed synchronization method is implemented in a portable knee-ankle-foot robot and tested in 15 healthy subjects. The method has the advantages of simple structure, flexible selection of gait events, and fast adaptation. Gait events are the only information needed, and hence the performance of synchronization holds even when an abnormal gait pattern is involved. The results of the experiments reveal that our approach is efficient in achieving human-robot synchronization and feasible for rehabilitation robotics applications.
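
    A sketch of an event-driven adaptive oscillator in this spirit is given below; the gains and the single-event simplification are assumptions for illustration, not the implementation tested on the robot.

        import math

        class AdaptiveOscillator:
            def __init__(self, freq=1.0, k_phase=2.0, k_freq=0.5):
                self.phase, self.freq = 0.0, freq
                self.k_phase, self.k_freq = k_phase, k_freq

            def step(self, dt, event_phase=None):
                """Advance one step; event_phase is the nominal phase of a
                detected gait event (None when no event occurred)."""
                err = 0.0
                if event_phase is not None:
                    # wrapped phase error drives the adaptation
                    err = math.atan2(math.sin(event_phase - self.phase),
                                     math.cos(event_phase - self.phase))
                self.freq += self.k_freq * err * dt
                self.phase += (2 * math.pi * self.freq + self.k_phase * err) * dt
                self.phase %= 2 * math.pi
                return self.phase / (2 * math.pi)   # estimated stride percentage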

  20. Distributed collaborative probabilistic design of multi-failure structure with fluid-structure interaction using fuzzy neural network of regression

    NASA Astrophysics Data System (ADS)

    Song, Lu-Kai; Wen, Jie; Fei, Cheng-Wei; Bai, Guang-Chen

    2018-05-01

    To improve the computing efficiency and precision of probabilistic design for multi-failure structures, a distributed collaborative probabilistic design method based on fuzzy neural network regression (called DCFRM) is proposed, integrating the distributed collaborative response surface method with a fuzzy neural network regression model. The mathematical model of DCFRM is established and the probabilistic design idea behind DCFRM is introduced. The probabilistic analysis of a turbine blisk involving multiple failure modes (deformation failure, stress failure and strain failure) was investigated by considering fluid-structure interaction with the proposed method. The distribution characteristics, reliability degree, and sensitivity degree of each failure mode and the overall failure mode of the turbine blisk are obtained, which provides a useful reference for improving the performance and reliability of aeroengines. The comparison of methods shows that DCFRM improves the computing efficiency of probabilistic analysis for multi-failure structures while keeping acceptable computational precision. Moreover, the proposed method offers useful insight for reliability-based design optimization of multi-failure structures and thereby also enriches the theory and methods of mechanical reliability design.

  1. A simultaneous multi-slice selective J-resolved experiment for fully resolved scalar coupling information

    NASA Astrophysics Data System (ADS)

    Zeng, Qing; Lin, Liangjie; Chen, Jinyong; Lin, Yanqin; Barker, Peter B.; Chen, Zhong

    2017-09-01

    Proton-proton scalar coupling plays an important role in molecular structure elucidation. Many methods have been proposed for revealing scalar coupling networks involving chosen protons; however, determining all JHH values within a fully coupled network remains a tedious process. Here, we propose a method termed simultaneous multi-slice selective J-resolved spectroscopy (SMS-SEJRES) for measuring the JHH values of all coupling networks in a sample within a single experiment. In this work, gradient-encoded selective refocusing, PSYCHE decoupling and an echo planar spectroscopic imaging (EPSI) detection module are adopted, and different selective J-edited spectra are extracted from different spatial positions. The proposed pulse sequence can facilitate the analysis of molecular structures; it will therefore interest scientists who wish to address the structural analysis of molecules efficiently.

  2. Supervised nonlinear spectral unmixing using a postnonlinear mixing model for hyperspectral imagery.

    PubMed

    Altmann, Yoann; Halimi, Abderrahim; Dobigeon, Nicolas; Tourneret, Jean-Yves

    2012-06-01

    This paper presents a nonlinear mixing model for hyperspectral image unmixing. The proposed model assumes that the pixel reflectances are nonlinear functions of pure spectral components contaminated by an additive white Gaussian noise. These nonlinear functions are approximated using polynomial functions leading to a polynomial postnonlinear mixing model. A Bayesian algorithm and optimization methods are proposed to estimate the parameters involved in the model. The performance of the unmixing strategies is evaluated by simulations conducted on synthetic and real data.
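
    A generative sketch of a second-order polynomial postnonlinear mixing model is shown below; the nonlinearity coefficient, noise level and variable names are illustrative assumptions, not the paper's estimated parameters.

        import numpy as np

        def ppnm_pixel(endmembers, abundances, b=0.3, sigma=0.01, rng=None):
            """Simulate one pixel: linear mixture -> polynomial nonlinearity -> noise.

            endmembers: (n_endmembers, n_bands); abundances: nonnegative, sum to one.
            """
            rng = rng or np.random.default_rng(0)
            linear = endmembers.T @ abundances       # (n_bands,) linear mixture
            nonlinear = linear + b * linear ** 2     # polynomial postnonlinearity
            return nonlinear + rng.normal(0.0, sigma, size=linear.shape)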

  3. A Novel Identification Methodology for the Coordinate Relationship between a 3D Vision System and a Legged Robot.

    PubMed

    Chai, Xun; Gao, Feng; Pan, Yang; Qi, Chenkun; Xu, Yilin

    2015-04-22

    Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map, and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations, such as difficult feature matching, the requirement of external tools, and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots, which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified simply by moving the robot on relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called "Octopus", which can traverse rough terrain with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate that our novel method is accurate and robust.

  4. Interactive CT-Video Registration for the Continuous Guidance of Bronchoscopy

    PubMed Central

    Merritt, Scott A.; Khare, Rahul; Bascom, Rebecca

    2014-01-01

    Bronchoscopy is a major step in lung cancer staging. To perform bronchoscopy, the physician uses a procedure plan, derived from a patient’s 3D computed-tomography (CT) chest scan, to navigate the bronchoscope through the lung airways. Unfortunately, physicians vary greatly in their ability to perform bronchoscopy. As a result, image-guided bronchoscopy systems, drawing upon the concept of CT-based virtual bronchoscopy (VB), have been proposed. These systems attempt to register the bronchoscope’s live position within the chest to a CT-based virtual chest space. Recent methods, which register the bronchoscopic video to CT-based endoluminal airway renderings, show promise but do not enable continuous real-time guidance. We present a CT-video registration method inspired by computer-vision innovations in the fields of image alignment and image-based rendering. In particular, motivated by the Lucas–Kanade algorithm, we propose an inverse-compositional framework built around a gradient-based optimization procedure. We next propose an implementation of the framework suitable for image-guided bronchoscopy. Laboratory tests, involving both single frames and continuous video sequences, demonstrate the robustness and accuracy of the method. Benchmark timing tests indicate that the method can run continuously at 300 frames/s, well beyond the real-time bronchoscopic video rate of 30 frames/s. This compares extremely favorably to the ≥1 s/frame speeds of other methods and indicates the method’s potential for real-time continuous registration. A human phantom study confirms the method’s efficacy for real-time guidance in a controlled setting, and, hence, points the way toward the first interactive CT-video registration approach for image-guided bronchoscopy. Along this line, we demonstrate the method’s efficacy in a complete guidance system by presenting a clinical study involving lung cancer patients. PMID:23508260
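
    The speed of the inverse-compositional scheme comes from precomputing the template gradients and Hessian once, so each frame costs only a warped difference and two small matrix products. The translation-only sketch below illustrates this structure under simplified assumptions; it is not the full CT-video registration method, and sign conventions depend on the warp definition.

        import numpy as np
        from scipy.ndimage import shift as warp

        def ic_align(template, frame, n_iters=20):
            """Estimate the (dx, dy) translation aligning frame to template."""
            gy, gx = np.gradient(template.astype(float))
            J = np.stack([gx.ravel(), gy.ravel()], axis=1)  # steepest-descent images
            H_inv = np.linalg.inv(J.T @ J)                  # precomputed Hessian
            p = np.zeros(2)                                 # (dx, dy)
            for _ in range(n_iters):
                warped = warp(frame.astype(float), (-p[1], -p[0]), order=1)
                err = (warped - template).ravel()
                dp = H_inv @ (J.T @ err)
                p -= dp          # inverse-compositional update (subtraction
                                 # because translations compose additively)
                if np.linalg.norm(dp) < 1e-4:
                    break
            return p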

  5. Concept for an off-line gain stabilisation method.

    PubMed

    Pommé, S; Sibbens, G

    2004-01-01

    Conceptual ideas are presented for an off-line gain stabilisation method for spectrometry, in particular for alpha-particle spectrometry at low count rate. The method involves list mode storage of individual energy and time stamp data pairs. The 'Stieltjes integral' of measured spectra with respect to a reference spectrum is proposed as an indicator for gain instability. 'Exponentially moving averages' of the latter show the gain shift as a function of time. With this information, the data are relocated stochastically on a point-by-point basis.
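
    The drift-tracking step can be pictured with an exponentially moving average over a per-event gain indicator; how that indicator is derived from the Stieltjes-integral comparison with the reference spectrum is summarized away in this small sketch.

        def ema_gain(indicators, alpha=0.01):
            """Exponentially moving average of per-event gain indicators,
            exposing slow gain drift as a function of event time."""
            ema, track = None, []
            for g in indicators:              # one gain indicator per list-mode event
                ema = g if ema is None else alpha * g + (1 - alpha) * ema
                track.append(ema)
            return track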

  6. Reducing maintenance costs in agreement with CNC machine tools reliability

    NASA Astrophysics Data System (ADS)

    Ungureanu, A. L.; Stan, G.; Butunoi, P. A.

    2016-08-01

    Aligning maintenance strategy with reliability is a challenge due to the need to find an optimal balance between them. Because the various methods described in the relevant literature involve laborious calculations or use of software that can be costly, this paper proposes a method that is easier to implement on CNC machine tools. The new method, called the Consequence of Failure Analysis (CFA) is based on technical and economic optimization, aimed at obtaining a level of required performance with minimum investment and maintenance costs.

  7. A latent discriminative model-based approach for classification of imaginary motor tasks from EEG data.

    PubMed

    Saa, Jaime F Delgado; Çetin, Müjdat

    2012-04-01

    We consider the problem of classification of imaginary motor tasks from electroencephalography (EEG) data for brain-computer interfaces (BCIs) and propose a new approach based on hidden conditional random fields (HCRFs). HCRFs are discriminative graphical models that are attractive for this problem because they (1) exploit the temporal structure of EEG; (2) include latent variables that can be used to model different brain states in the signal; and (3) involve learned statistical models matched to the classification task, avoiding some of the limitations of generative models. Our approach involves spatial filtering of the EEG signals and estimation of power spectra based on autoregressive modeling of temporal segments of the EEG signals. Given this time-frequency representation, we select certain frequency bands that are known to be associated with execution of motor tasks. These selected features constitute the data that are fed to the HCRF, parameters of which are learned from training data. Inference algorithms on the HCRFs are used for the classification of motor tasks. We experimentally compare this approach to the best performing methods in BCI competition IV as well as a number of more recent methods and observe that our proposed method yields better classification accuracy.

  8. Implementation of an effective hybrid GA for large-scale traveling salesman problems.

    PubMed

    Nguyen, Hung Dinh; Yoshihara, Ikuo; Yamamori, Kunihito; Yasunaga, Moritoshi

    2007-02-01

    This correspondence describes a hybrid genetic algorithm (GA) to find high-quality solutions for the traveling salesman problem (TSP). The proposed method is based on a parallel implementation of a multipopulation steady-state GA involving local search heuristics. It uses a variant of the maximal preservative crossover and the double-bridge move mutation. An effective implementation of the Lin-Kernighan heuristic (LK) is incorporated into the method to compensate for the GA's lack of local search ability. The method is validated by comparing it with the LK-Helsgaun method (LKH), which is one of the most effective methods for the TSP. Experimental results with benchmarks having up to 316228 cities show that the proposed method works more effectively and efficiently than LKH when solving large-scale problems. Finally, the method is used together with the implementation of the iterated LK to find a new best tour (as of June 2, 2003) for a 1904711-city TSP challenge.
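
    The double-bridge move mentioned above is the standard 4-opt perturbation sketched below; the population handling and the Lin-Kernighan local search are not shown.

        import random

        def double_bridge(tour):
            """Cut the tour at three random interior points and reconnect
            the four segments in a fixed new order (a 4-opt move)."""
            n = len(tour)
            i, j, k = sorted(random.sample(range(1, n), 3))
            return tour[:i] + tour[j:k] + tour[i:j] + tour[k:]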

  9. Subband directional vector quantization in radiological image compression

    NASA Astrophysics Data System (ADS)

    Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel

    1992-05-01

    The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images that have directional edges, such as the tree-like structure of the coronary vessels in digital angiograms. The method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform (DCT) domain. Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.

  10. Enhanced low-temperature lithium storage performance of multilayer graphene made through an improved ionic liquid-assisted synthesis

    NASA Astrophysics Data System (ADS)

    Raccichini, Rinaldo; Varzi, Alberto; Chakravadhanula, Venkata Sai Kiran; Kübel, Christian; Balducci, Andrea; Passerini, Stefano

    2015-05-01

    The electrochemical properties of graphene depend strongly on its synthesis. Among the different methods proposed so far, liquid-phase exfoliation is a promising route for the production of graphene. Unfortunately, the low yield of this technique, in terms of solid material obtained, still limits its use to small-scale applications. In this article we propose a low-cost and environmentally friendly method for producing multilayer crystalline graphene in high yield. This innovative approach, involving improved ionic liquid-assisted microwave exfoliation of expanded graphite, enables the production of graphene with advanced lithium-ion storage performance, for the first time, at temperatures below 0 °C, down to -30 °C.

  11. Improving the efficiency of molecular replacement by utilizing a new iterative transform phasing algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Hongxing; Fang, Hengrui; Miller, Mitchell D.

    2016-07-15

    An iterative transform algorithm is proposed to improve the conventional molecular-replacement method for solving the phase problem in X-ray crystallography. Several examples of successful trial calculations carried out with real diffraction data are presented. An iterative transform method proposed previously for direct phasing of high-solvent-content protein crystals is employed for enhancing the molecular-replacement (MR) algorithm in protein crystallography. Target structures that are resistant to conventional MR due to insufficient similarity between the template and target structures might be tractable with this modified phasing method. Trial calculations involving three different structures are described to test and illustrate the methodology. The relationship of the approach to PHENIX Phaser-MR and MR-Rosetta is discussed.

  12. Offspring Generation Method for interactive Genetic Algorithm considering Multimodal Preference

    NASA Astrophysics Data System (ADS)

    Ito, Fuyuko; Hiroyasu, Tomoyuki; Miki, Mitsunori; Yokouchi, Hisatake

    In interactive genetic algorithms (iGAs), computer simulations prepare design candidates that are then evaluated by the user; an iGA can thus learn a user's preferences. Conventional iGA problems involve a search for a single optimum solution, and iGAs were developed to find this single optimum. Our target problems, in contrast, have several peaks in the objective function with small differences among the peaks, and for such problems it is better to show all the peaks to the user. Product recommendation on web shopping sites is one example of such a problem: several types of preference trend should be presented to users. Exploitation and exploration are important mechanisms in a GA search, and for effective exploitation the offspring generation method (crossover) is very important. Here, we introduce a new offspring generation method for iGAs on multimodal problems, in which individuals are clustered into subgroups and offspring are generated within each group. The proposed method was applied to an experimental iGA system, in which users decide on preferable t-shirts to buy, to examine its effectiveness. The results of the subjective experiment confirmed that the proposed method enables offspring generation with consideration of multimodal preferences, without adversely affecting the performance of preference prediction.

  13. Toward optimal feature and time segment selection by divergence method for EEG signals classification.

    PubMed

    Wang, Jie; Feng, Zuren; Lu, Na; Luo, Jing

    2018-06-01

    Feature selection plays an important role in the field of EEG signal-based motor imagery pattern classification. It is a process that aims to select an optimal feature subset from the original set. Two significant advantages are involved: lowering the computational burden so as to speed up the learning procedure, and removing redundant and irrelevant features so as to improve classification performance. Therefore, feature selection is widely employed in the classification of EEG signals in practical brain-computer interface systems. In this paper, we present a novel statistical model to select the optimal feature subset based on the Kullback-Leibler divergence measure, and to automatically select the optimal subject-specific time segment. The proposed method comprises four successive stages: broad frequency band filtering and common spatial pattern enhancement as preprocessing; feature extraction by autoregressive modeling and log-variance; Kullback-Leibler divergence based optimal feature and time segment selection; and linear discriminant analysis classification. More importantly, this paper provides a potential framework for combining other feature extraction models and classification algorithms with the proposed method for EEG signal classification. Experiments on single-trial EEG signals from two public competition datasets not only demonstrate that the proposed method is effective in selecting discriminative features and time segments, but also show that it yields relatively better classification results than other competitive methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
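
    A sketch of divergence-based feature ranking is given below, scoring each feature by a symmetric Kullback-Leibler divergence between the two classes under a univariate Gaussian assumption; the Gaussian model and the top-k rule are illustrative simplifications of the paper's selection procedure.

        import numpy as np

        def kl_gauss(m1, v1, m2, v2):
            """KL divergence between two univariate Gaussians."""
            return 0.5 * (v1 / v2 + (m2 - m1) ** 2 / v2 - 1.0 + np.log(v2 / v1))

        def rank_features(X, y, top_k=10):
            """Return indices of the top_k most class-divergent features."""
            X1, X2 = X[y == 0], X[y == 1]
            m1, v1 = X1.mean(0), X1.var(0) + 1e-12
            m2, v2 = X2.mean(0), X2.var(0) + 1e-12
            score = kl_gauss(m1, v1, m2, v2) + kl_gauss(m2, v2, m1, v1)
            return np.argsort(score)[::-1][:top_k]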

  14. Purification of photon subtraction from continuous squeezed light by filtering

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Jun-ichi; Asavanant, Warit; Furusawa, Akira

    2017-11-01

    Photon subtraction from squeezed states is a powerful scheme to create good approximations of so-called Schrödinger cat states. However, conventional continuous-wave-based methods actually involve some impurity in the squeezing of localized wave packets, even in the ideal case of no optical losses. Here, we theoretically discuss this impurity by introducing the mode match of squeezing. Furthermore, we propose a method to remove this impurity by filtering the photon-subtraction field. Our method in principle enables creation of pure photon-subtracted squeezed states, which was not possible with conventional methods.

  15. Investigation of Proprioceptor Stimulation.

    ERIC Educational Resources Information Center

    Caukins, Sivan E.; And Others

    A research proposal to study the effect of multisensory teaching methods in first-grade reading is presented. The focus is on sex differences in learning and in multisensory approaches to teaching. The project will involve 10 experimental and 10 control first-grade classes in several Southern California schools. Both groups will be given IQ,…

  16. 78 FR 23743 - Proposed Information Collection; Comment Request; Generic Clearance for Questionnaire Pretesting...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-22

    ...; Generic Clearance for Questionnaire Pretesting Research AGENCY: Census Bureau, Commerce. ACTION: Notice.... This research program will be used by the Census Bureau and survey sponsors to improve questionnaires... involve one of the following methods of identifying measurement problems with the questionnaire or survey...

  17. Doing Developmental Research: A Practical Guide

    ERIC Educational Resources Information Center

    Striano, Tricia

    2016-01-01

    Addressing practical issues rarely covered in methods texts, this user-friendly, jargon-free book helps students and beginning researchers plan infant and child development studies and get them done. The author provides step-by-step guidance for getting involved in a developmental laboratory and crafting effective research questions and proposals.…

  18. Thin film processing of photorefractive BaTiO3

    NASA Technical Reports Server (NTRS)

    Schuster, Paul R.; Potember, Richard S.

    1991-01-01

    The principal objectives of this ongoing research involve the preparation and characterization of polycrystalline single-domain thin films of BaTiO3 for photorefractive applications. These films must be continuous, free of cracks, and of high optical quality. The two methods proposed are sputtering and sol-gel related processing.

  19. An Extension of Multiple Correspondence Analysis for Identifying Heterogeneous Subgroups of Respondents

    ERIC Educational Resources Information Center

    Hwang, Heungsun; Dillon, William R.; Takane, Yoshio

    2006-01-01

    An extension of multiple correspondence analysis is proposed that takes into account cluster-level heterogeneity in respondents' preferences/choices. The method involves combining multiple correspondence analysis and k-means in a unified framework. The former is used for uncovering a low-dimensional space of multivariate categorical variables…
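
    A minimal two-step sketch of the idea follows; the paper optimizes both steps within a single unified criterion, whereas this illustration simply chains them, and using truncated SVD as a stand-in for the MCA quantification (with a pandas DataFrame of categorical answers assumed) is an assumption.

        import numpy as np
        import pandas as pd
        from sklearn.decomposition import TruncatedSVD
        from sklearn.cluster import KMeans

        def mca_kmeans(responses, n_dims=2, n_clusters=3, seed=0):
            # Indicator (one-hot) matrix of the categorical survey responses
            G = pd.get_dummies(responses.astype(str)).to_numpy(dtype=float)
            # Low-dimensional quantification; truncated SVD stands in for MCA here
            Z = TruncatedSVD(n_components=n_dims, random_state=seed).fit_transform(G)
            # k-means uncovers cluster-level heterogeneity in the reduced space
            labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(Z)
            return Z, labels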

  20. New Pedagogical Approaches to Improve Production of Materials in Distance Education.

    ERIC Educational Resources Information Center

    Mena, Marta

    1992-01-01

    Analyzes problems involved in the production of instructional materials for distance education and offers new pedagogical approaches to improve production of materials for distance education. Discusses past, present, and future methods used to design instructional materials, proposes models to aid in the production of instructional materials, and…

  1. Confidence Wagering during Mathematics and Science Testing

    ERIC Educational Resources Information Center

    Jack, Brady Michael; Liu, Chia-Ju; Chiu, Hoan-Lin; Shymansky, James A.

    2009-01-01

    This proposal presents the results of a case study involving five 8th grade Taiwanese classes, two mathematics and three science classes. These classes used a new method of testing called confidence wagering. This paper advocates the position that confidence wagering can predict the accuracy of a student's test answer selection during…

  2. Change in Classroom Relations: An Attempt that Signals Some Difficulties.

    ERIC Educational Resources Information Center

    Gutierrez, Roberto

    2002-01-01

    The instructor of a human resource class proposed a different division of labor between teacher and students. Analysis of four critical class incidents (essay sharing, class discussion, prejudices involved in a student presentation, student objections to course methods) showed that students preferred to preserve their identity as consumers and…

  3. Misconduct in the Prosecution of Severe Crimes: Theory and Experimental Test

    ERIC Educational Resources Information Center

    Lucas, Jeffrey W.; Graif, Corina; Lovaglia, Michael J.

    2006-01-01

    Prosecutorial misconduct involves the intentional use of illegal or improper methods for attaining convictions against defendants in criminal trials. Previous research documented extensive errors in the prosecution of severe crimes. A theory formulated to explain this phenomenon proposes that in serious cases, increased pressure to convict…

  4. Composite Indices of Development and Poverty: An Application to MDGs

    ERIC Educational Resources Information Center

    De Muro, Pasquale; Mazziotta, Matteo; Pareto, Adriano

    2011-01-01

    The measurement of development or poverty as multidimensional phenomena is very difficult because there are several theoretical, methodological and empirical problems involved. The literature of composite indicators offers a wide variety of aggregation methods, all with their pros and cons. In this paper, we propose a new, alternative composite…

  5. Modal smoothing for analysis of room reflections measured with spherical microphone and loudspeaker arrays.

    PubMed

    Morgenstern, Hai; Rafaely, Boaz

    2018-02-01

    Spatial analysis of room acoustics is an ongoing research topic. Microphone arrays have been employed for spatial analyses with an important objective being the estimation of the direction-of-arrival (DOA) of direct sound and early room reflections using room impulse responses (RIRs). An optimal method for DOA estimation is the multiple signal classification algorithm. When RIRs are considered, this method typically fails due to the correlation of room reflections, which leads to rank deficiency of the cross-spectrum matrix. Preprocessing methods for rank restoration, which may involve averaging over frequency, for example, have been proposed exclusively for spherical arrays. However, these methods fail in the case of reflections with equal time delays, which may arise in practice and could be of interest. In this paper, a method is proposed for systems that combine a spherical microphone array and a spherical loudspeaker array, referred to as multiple-input multiple-output systems. This method, referred to as modal smoothing, exploits the additional spatial diversity for rank restoration and succeeds where previous methods fail, as demonstrated in a simulation study. Finally, combining modal smoothing with a preprocessing method is proposed in order to increase the number of DOAs that can be estimated using low-order spherical loudspeaker arrays.
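
    For orientation, the sketch below shows plain narrowband MUSIC with a sample covariance averaged over snapshots. It is only a simplified stand-in: a generic far-field array with positions in a (n_mics, 3) array rather than the spherical MIMO arrays of the paper, whose modal smoothing operates on spherical-harmonic coefficients.

        import numpy as np

        def music_doa(X, freq, mic_xyz, n_src, c=343.0):
            # X: (n_mics, n_snapshots) complex STFT snapshots at one frequency bin.
            # Averaging the sample covariance (over snapshots, frequencies, or modes)
            # restores the rank that coherent reflections would otherwise destroy.
            R = (X @ X.conj().T) / X.shape[1]
            w, V = np.linalg.eigh(R)                  # eigenvalues in ascending order
            En = V[:, : X.shape[0] - n_src]           # noise subspace
            k = 2 * np.pi * freq / c
            spectrum = []
            for theta in np.deg2rad(np.arange(0.0, 180.0)):
                d = np.array([np.cos(theta), np.sin(theta), 0.0])
                a = np.exp(-1j * k * mic_xyz @ d)     # far-field steering vector
                a /= np.linalg.norm(a)
                spectrum.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
            return np.array(spectrum)                 # peaks indicate candidate DOAs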

  6. Vis-NIR spectrometric determination of Brix and sucrose in sugar production samples using kernel partial least squares with interval selection based on the successive projections algorithm.

    PubMed

    de Almeida, Valber Elias; de Araújo Gomes, Adriano; de Sousa Fernandes, David Douglas; Goicoechea, Héctor Casimiro; Galvão, Roberto Kawakami Harrop; Araújo, Mario Cesar Ugulino

    2018-05-01

    This paper proposes a new variable selection method for nonlinear multivariate calibration, combining the Successive Projections Algorithm for interval selection (iSPA) with the Kernel Partial Least Squares (Kernel-PLS) modelling technique. The proposed iSPA-Kernel-PLS algorithm is employed in a case study involving a Vis-NIR spectrometric dataset with complex nonlinear features. The analytical problem consists of determining Brix and sucrose content in samples from a sugar production system, on the basis of transflectance spectra. As compared to full-spectrum Kernel-PLS, the iSPA-Kernel-PLS models involve a smaller number of variables and display statistically significant superiority in terms of accuracy and/or bias in the predictions. Published by Elsevier B.V.

  7. Post processing of protein-compound docking for fragment-based drug discovery (FBDD): in-silico structure-based drug screening and ligand-binding pose prediction.

    PubMed

    Fukunishi, Yoshifumi

    2010-01-01

    For fragment-based drug development, both hit (active) compound prediction and docking-pose (protein-ligand complex structure) prediction of the hit compound are important, since chemical modification (fragment linking, fragment evolution) subsequent to the hit discovery must be performed based on the protein-ligand complex structure. However, the naïve protein-compound docking calculation shows poor accuracy in terms of docking-pose prediction. Thus, post-processing of the protein-compound docking is necessary. Recently, several methods for the post-processing of protein-compound docking have been proposed. In FBDD, the compounds are smaller than those for conventional drug screening. This makes it difficult to perform the protein-compound docking calculation. A method to avoid this problem has been reported. Protein-ligand binding free energy estimation is useful to reduce the procedures involved in the chemical modification of the hit fragment. Several prediction methods have been proposed for high-accuracy estimation of protein-ligand binding free energy. This paper summarizes the various computational methods proposed for docking-pose prediction and their usefulness in FBDD.

  8. Can electronic medical images replace hard-copy film? Defining and testing the equivalence of diagnostic tests.

    PubMed

    Obuchowski, N A

    2001-10-15

    Electronic medical images are an efficient and convenient format in which to display, store and transmit radiographic information. Before electronic images can be used routinely to screen and diagnose patients, however, it must be shown that readers have the same diagnostic performance with this new format as traditional hard-copy film. Currently, there exist no suitable definitions of diagnostic equivalence. In this paper we propose two criteria for diagnostic equivalence. The first criterion ('population equivalence') considers the variability between and within readers, as well as the mean reader performance. This criterion is useful for most applications. The second criterion ('individual equivalence') involves a comparison of the test results for individual patients and is necessary when patients are followed radiographically over time. We present methods for testing both individual and population equivalence. The properties of the proposed methods are assessed in a Monte Carlo simulation study. Data from a mammography screening study is used to illustrate the proposed methods and compare them with results from more conventional methods of assessing equivalence and inter-procedure agreement. Copyright 2001 John Wiley & Sons, Ltd.

  9. A Prediction Model for Functional Outcomes in Spinal Cord Disorder Patients Using Gaussian Process Regression.

    PubMed

    Lee, Sunghoon Ivan; Mortazavi, Bobak; Hoffman, Haydn A; Lu, Derek S; Li, Charles; Paak, Brian H; Garst, Jordan H; Razaghy, Mehrdad; Espinal, Marie; Park, Eunjeong; Lu, Daniel C; Sarrafzadeh, Majid

    2016-01-01

    Predicting the functional outcomes of spinal cord disorder patients after medical treatments, such as a surgical operation, has always been of great interest. Accurate posttreatment prediction is especially beneficial for clinicians, patients, caregivers, and therapists. This paper introduces a prediction method for postoperative functional outcomes based on a novel use of Gaussian process regression. The proposed method specifically considers the restricted value range of the target variables by modeling the Gaussian process with a truncated normal distribution, which significantly improves the prediction results. The prediction is assisted by target-tracking examinations using a highly portable and inexpensive handgrip device, which contributes greatly to the prediction performance. The proposed method has been validated on a dataset collected from a clinical pilot cohort of 15 patients with cervical spinal cord disorder. The results show that the proposed method can accurately predict postoperative functional outcomes, the Oswestry disability index and target-tracking scores, from the patient's preoperative information, with mean absolute errors of 0.079 and 0.014 (out of 1.0), respectively.
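
    A rough sketch of the truncation idea using an off-the-shelf Gaussian process: the predictive normal is truncated to the admissible score range [0, 1] after fitting. This is only a post-hoc approximation of the paper's approach, which builds the truncated-normal assumption into the model itself, and the kernel choice below is an assumption.

        import numpy as np
        from scipy.stats import norm
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        def truncated_gpr_predict(X_train, y_train, X_test, lo=0.0, hi=1.0):
            gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
            gpr.fit(X_train, y_train)
            mu, sd = gpr.predict(X_test, return_std=True)
            sd = np.maximum(sd, 1e-9)
            a, b = (lo - mu) / sd, (hi - mu) / sd     # standardized truncation bounds
            z = norm.cdf(b) - norm.cdf(a)
            # Mean of the normal truncated to [lo, hi]; always inside the valid range
            return mu + sd * (norm.pdf(a) - norm.pdf(b)) / z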

  10. Fault diagnosis of sensor networked structures with multiple faults using a virtual beam based approach

    NASA Astrophysics Data System (ADS)

    Wang, H.; Jing, X. J.

    2017-07-01

    This paper presents a virtual beam based approach suitable for conducting diagnosis of multiple faults in complex structures with limited prior knowledge of the faults involved. The "virtual beam", a recently-proposed concept for fault detection in complex structures, is applied, which consists of a chain of sensors representing a vibration energy transmission path embedded in the complex structure. Statistical tests and adaptive threshold are particularly adopted for fault detection due to limited prior knowledge of normal operational conditions and fault conditions. To isolate the multiple faults within a specific structure or substructure of a more complex one, a 'biased running' strategy is developed and embedded within the bacterial-based optimization method to construct effective virtual beams and thus to improve the accuracy of localization. The proposed method is easy and efficient to implement for multiple fault localization with limited prior knowledge of normal conditions and faults. With extensive experimental results, it is validated that the proposed method can localize both single fault and multiple faults more effectively than the classical trust index subtract on negative add on positive (TI-SNAP) method.

  11. Rheological properties, shape oscillations, and coalescence of liquid drops with surfactants

    NASA Technical Reports Server (NTRS)

    Apfel, R. E.; Holt, R. G.

    1990-01-01

    A method was developed to deduce dynamic interfacial properties of liquid drops. The method involves measuring the frequency and damping of free quadrupole oscillations of an acoustically levitated drop. Experimental results from pure liquid-liquid systems agree well with theoretical predictions. Additionally, the effects of surfactants are considered. Extension of these results to a proposed microgravity experiment on the drop physics module (DPM) in USML-1 is discussed. Efforts are also underway to model the time history of the thickness of the fluid layer between two pre-coalescence drops, and to measure the film thickness experimentally. Preliminary results will be reported, along with plans for coalescence experiments proposed for USML-1.

  12. A Grammatical Approach to RNA-RNA Interaction Prediction

    NASA Astrophysics Data System (ADS)

    Kato, Yuki; Akutsu, Tatsuya; Seki, Hiroyuki

    2007-11-01

    Much attention has been paid to two interacting RNA molecules involved in post-transcriptional control of gene expression. Although there have been a few studies on RNA-RNA interaction prediction based on dynamic programming algorithm, no grammar-based approach has been proposed. The purpose of this paper is to provide a new modeling for RNA-RNA interaction based on multiple context-free grammar (MCFG). We present a polynomial time parsing algorithm for finding the most likely derivation tree for the stochastic version of MCFG, which is applicable to RNA joint secondary structure prediction including kissing hairpin loops. Also, elementary tests on RNA-RNA interaction prediction have shown that the proposed method is comparable to Alkan et al.'s method.

  13. Network Reconstruction From High-Dimensional Ordinary Differential Equations.

    PubMed

    Chen, Shizhe; Shojaie, Ali; Witten, Daniela M

    2017-01-01

    We consider the task of learning a dynamical system from high-dimensional time-course data. For instance, we might wish to estimate a gene regulatory network from gene expression data measured at discrete time points. We model the dynamical system nonparametrically as a system of additive ordinary differential equations. Most existing methods for parameter estimation in ordinary differential equations estimate the derivatives from noisy observations. This is known to be challenging and inefficient. We propose a novel approach that does not involve derivative estimation. We show that the proposed method can consistently recover the true network structure even in high dimensions, and we demonstrate empirical improvement over competing approaches. Supplementary materials for this article are available online.

  14. Least-squares deconvolution of evoked potentials and sequence optimization for multiple stimuli under low-jitter conditions.

    PubMed

    Bardy, Fabrice; Dillon, Harvey; Van Dun, Bram

    2014-04-01

    Rapid presentation of stimuli in an evoked response paradigm can lead to overlap of multiple responses and consequently difficulties interpreting waveform morphology. This paper presents a deconvolution method allowing multiple overlapping responses to be disentangled. The deconvolution technique uses a least-squares error approach. A methodology is proposed to optimize the stimulus sequence associated with the deconvolution technique under low-jitter conditions. It controls the condition number of the matrices involved in recovering the responses. Simulations were performed using the proposed deconvolution technique. Multiple overlapping responses can be recovered perfectly in noiseless conditions. In the presence of noise, the amount of error introduced by the technique can be controlled a priori by the condition number of the matrix associated with the used stimulus sequence. The simulation results indicate the need for a minimum amount of jitter, as well as a sufficient number of overlap combinations, to obtain optimum results. An aperiodic model is recommended to improve reconstruction. We propose a deconvolution technique allowing multiple overlapping responses to be extracted and a method of choosing the stimulus sequence optimal for response recovery. This technique may allow audiologists, psychologists, and electrophysiologists to optimize their experimental designs involving rapidly presented stimuli, and to recover evoked overlapping responses. Copyright © 2013 International Federation of Clinical Neurophysiology. All rights reserved.
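
    The core of such a least-squares deconvolution can be sketched in a few lines: the recording is modeled as a known onset matrix times the unknown response, and the condition number of that matrix quantifies, before any recording is made, how much noise the recovery will amplify. A single common response and unit-amplitude stimuli are simplifying assumptions here; multiple response types would add further column blocks.

        import numpy as np

        def deconvolve_overlapping(eeg, onsets, resp_len):
            # Model: eeg = M @ r, where each stimulus stamps a shifted copy of r
            n = len(eeg)
            M = np.zeros((n, resp_len))
            for t0 in onsets:
                rows = np.arange(t0, min(t0 + resp_len, n))
                M[rows, rows - t0] = 1.0
            # Condition number predicts noise amplification a priori
            print("cond(M) =", np.linalg.cond(M))
            r, *_ = np.linalg.lstsq(M, eeg, rcond=None)
            return r  # least-squares estimate of the evoked response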

  15. Detecting representative data and generating synthetic samples to improve learning accuracy with imbalanced data sets.

    PubMed

    Li, Der-Chiang; Hu, Susan C; Lin, Liang-Sian; Yeh, Chun-Wu

    2017-01-01

    It is difficult for learning models to achieve high classification performance with imbalanced data sets, because when one of the classes is much larger than the others, most machine learning and data mining classifiers are overly influenced by the larger classes and ignore the smaller ones. As a result, classification algorithms often have poor learning performance due to slow convergence in the smaller classes. To balance such data sets, this paper presents a strategy that involves reducing the size of the majority data and generating synthetic samples for the minority data. In the reducing operation, we use the box-and-whisker plot approach to exclude outliers and the Mega-Trend-Diffusion method to find representative data from the majority data. To generate the synthetic samples, we propose a counterintuitive hypothesis to find the distributed shape of the minority data, and then produce samples according to this distribution. Four real datasets were used to examine the performance of the proposed approach. We used paired t-tests to compare the Accuracy, G-mean, and F-measure scores of the proposed data pre-processing (PPDP) method merged into the D3C method (PPDP+D3C) with those of one-sided selection (OSS), the well-known SMOTEBoost (SB) method, the normal distribution-based oversampling (NDO) approach, and the PPDP method alone. The results indicate that the classification performance of the proposed approach is better than that of the above-mentioned methods.
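
    A compact sketch of the two operations, under stated simplifications: outliers are excluded with the usual 1.5-IQR box-and-whisker fences, and the synthetic generator interpolates between random minority pairs. The generator is a SMOTE-like stand-in; the paper instead fits a distribution to the minority data and samples from it, and uses Mega-Trend-Diffusion to pick representative majority points.

        import numpy as np

        def iqr_filter(X):
            # Keep rows whose every feature lies inside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]
            q1, q3 = np.percentile(X, 25, axis=0), np.percentile(X, 75, axis=0)
            iqr = q3 - q1
            ok = np.all((X >= q1 - 1.5 * iqr) & (X <= q3 + 1.5 * iqr), axis=1)
            return X[ok]

        def synth_minority(X_min, n_new, seed=0):
            # Synthetic minority samples by convex interpolation of random pairs
            rng = np.random.default_rng(seed)
            i = rng.integers(0, len(X_min), size=n_new)
            j = rng.integers(0, len(X_min), size=n_new)
            lam = rng.random((n_new, 1))
            return X_min[i] + lam * (X_min[j] - X_min[i])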

  16. Group decision-making approach for flood vulnerability identification using the fuzzy VIKOR method

    NASA Astrophysics Data System (ADS)

    Lee, G.; Jun, K. S.; Chung, E.-S.

    2015-04-01

    This study proposes an improved group decision making (GDM) framework that combines the VIKOR method with data fuzzification to quantify spatial flood vulnerability across multiple criteria. In general, the GDM method is an effective tool for formulating a compromise solution that involves various decision makers, since stakeholders may have different perspectives on their flood risk/vulnerability management responses. The GDM approach is designed to achieve consensus building that reflects the viewpoints of each participant. The fuzzy VIKOR method was developed to solve multi-criteria decision making (MCDM) problems with conflicting and noncommensurable criteria. This compromise-ranking method can be used to obtain a nearly ideal solution according to all established criteria, so combining the GDM method with the fuzzy VIKOR method yields well-grounded compromise decisions. The spatial flood vulnerability of the southern Han River obtained using the GDM approach combined with the fuzzy VIKOR method was compared with the spatial flood vulnerability obtained using general MCDM methods, such as fuzzy TOPSIS, and classical GDM methods (i.e., Borda, Condorcet, and Copeland). As a result, the proposed fuzzy GDM approach can reduce the uncertainty in the data confidence and weight derivation techniques. Thus, the combination of the GDM approach with the fuzzy VIKOR method can provide robust prioritization because it actively reflects the opinions of various groups and considers uncertainty in the input data.
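
    For reference, the crisp VIKOR computation at the heart of such a framework looks roughly as follows; the fuzzification of inputs and the group-consensus stage of the paper are omitted, and treating all criteria as benefit-type is an assumption.

        import numpy as np

        def vikor_rank(F, w, v=0.5):
            # F: (alternatives x criteria) benefit scores; w: criteria weights summing to 1
            f_best, f_worst = F.max(axis=0), F.min(axis=0)
            span = np.where(f_best > f_worst, f_best - f_worst, 1.0)
            D = w * (f_best - F) / span          # weighted normalized distances to the ideal
            S, R = D.sum(axis=1), D.max(axis=1)  # group utility and individual regret
            Q = v * (S - S.min()) / (S.max() - S.min() + 1e-12) \
                + (1 - v) * (R - R.min()) / (R.max() - R.min() + 1e-12)
            return np.argsort(Q)                 # smallest Q = best compromise alternative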

  17. a New Model for Fuzzy Personalized Route Planning Using Fuzzy Linguistic Preference Relation

    NASA Astrophysics Data System (ADS)

    Nadi, S.; Houshyaripour, A. H.

    2017-09-01

    This paper proposes a new model for personalized route planning under uncertain conditions. Personalized routing involves different sources of uncertainty, which can arise from users' ambiguity about their preferences, imprecise criteria values, and the modelling process itself. The proposed model uses Fuzzy Linguistic Preference Relation Analytical Hierarchical Process (FLPRAHP) to analyse users' preferences under uncertainty. Routing is a multi-criteria task, especially in transportation networks, where users wish to optimize their routes based on different criteria. However, given the lack of knowledge about the preferences of different users and the uncertainties in the criteria values, we propose a new personalized fuzzy routing method based on fuzzy ranking using the center of gravity. The model employs the FLPRAHP method to aggregate uncertain criteria values with respect to uncertain user preferences while improving consistency with the fewest possible comparisons. An illustrative example demonstrates the effectiveness and capability of the proposed model to calculate the best personalized route under fuzziness and uncertainty.

  18. Discontinuous Finite Element Quasidiffusion Methods

    DOE PAGES

    Anistratov, Dmitriy Yurievich; Warsa, James S.

    2018-05-21

    Here in this paper, two-level methods for solving transport problems in one-dimensional slab geometry based on the quasi-diffusion (QD) method are developed. A linear discontinuous finite element method (LDFEM) is derived for the spatial discretization of the low-order QD (LOQD) equations. It involves special interface conditions at the cell edges based on the idea of QD boundary conditions (BCs). We consider different kinds of QD BCs to formulate the necessary cell-interface conditions. We develop two-level methods with independent discretization of the high-order transport equation and LOQD equations, where the transport equation is discretized using the method of characteristics and the LDFEM is applied to the LOQD equations. We also formulate closures that lead to the discretization consistent with a LDFEM discretization of the transport equation. The proposed methods are studied by means of test problems formulated with the method of manufactured solutions. Numerical experiments are presented demonstrating the performance of the proposed methods. Lastly, we also show that the method with independent discretization has the asymptotic diffusion limit.

  20. A Multicriteria Decision Making Approach for Estimating the Number of Clusters in a Data Set

    PubMed Central

    Peng, Yi; Zhang, Yong; Kou, Gang; Shi, Yong

    2012-01-01

    Determining the number of clusters in a data set is an essential yet difficult step in cluster analysis. Since this task involves more than one criterion, it can be modeled as a multiple criteria decision making (MCDM) problem. This paper proposes an MCDM-based approach to estimate the number of clusters for a given data set. In this approach, MCDM methods consider different numbers of clusters as alternatives and the outputs of any clustering algorithm on validity measures as criteria. The proposed method is examined by an experimental study using three MCDM methods, the well-known k-means clustering algorithm, ten relative measures, and fifteen public-domain UCI machine learning data sets. The results show that MCDM methods work fairly well in estimating the number of clusters in the data and outperform the ten relative measures considered in the study. PMID:22870181
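
    The gist can be sketched with candidate k values as alternatives and standard validity indices as criteria, ranked here by a simple weighted sum. The paper uses formal MCDM methods and ten relative measures; the three indices and equal weights below are assumptions.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import (silhouette_score, calinski_harabasz_score,
                                     davies_bouldin_score)

        def pick_k_by_mcdm(X, k_values=range(2, 11), weights=(1/3, 1/3, 1/3)):
            rows = []
            for k in k_values:
                labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
                rows.append([silhouette_score(X, labels),
                             calinski_harabasz_score(X, labels),
                             -davies_bouldin_score(X, labels)])  # lower DB is better
            C = np.asarray(rows)
            C = (C - C.min(axis=0)) / (C.max(axis=0) - C.min(axis=0) + 1e-12)
            scores = C @ np.asarray(weights)      # weighted-sum aggregation of criteria
            return list(k_values)[int(np.argmax(scores))]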

  1. Solid-perforated panel layout optimization by topology optimization based on unified transfer matrix.

    PubMed

    Kim, Yoon Jae; Kim, Yoon Young

    2010-10-01

    This paper presents a numerical method for the optimization of the sequencing of solid panels, perforated panels and air gaps and their respective thickness for maximizing sound transmission loss and/or absorption. For the optimization, a method based on the topology optimization formulation is proposed. It is difficult to employ only the commonly-used material interpolation technique because the involved layers exhibit fundamentally different acoustic behavior. Thus, an optimization method formulation using a so-called unified transfer matrix is newly proposed. The key idea is to form elements of the transfer matrix such that interpolated elements by the layer design variables can be those of air, perforated and solid panel layers. The problem related to the interpolation is addressed and bench mark-type problems such as sound transmission or absorption maximization problems are solved to check the efficiency of the developed method.
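
    The transfer-matrix backbone of such a formulation can be sketched for normal incidence as below: each layer contributes a 2x2 matrix, the stack is a matrix product, and transmission loss follows from the total matrix. The limp-panel model and the example layup are illustrative assumptions; the paper's unified matrix additionally interpolates perforated-panel elements via the layer design variables.

        import numpy as np

        def air_layer(omega, d, rho=1.21, c=343.0):
            # 2x2 transfer matrix of an air gap of thickness d (normal incidence)
            k, Z = omega / c, rho * c
            return np.array([[np.cos(k * d), 1j * Z * np.sin(k * d)],
                             [1j * np.sin(k * d) / Z, np.cos(k * d)]])

        def limp_panel(omega, m_surf):
            # Thin solid panel reduced to its surface mass (bending stiffness ignored)
            return np.array([[1.0, 1j * omega * m_surf], [0.0, 1.0]])

        def transmission_loss(omega, layers, rho=1.21, c=343.0):
            T = np.eye(2, dtype=complex)
            for L in layers:
                T = T @ L                          # chain the layer matrices in order
            Z0 = rho * c
            return 20 * np.log10(0.5 * abs(T[0, 0] + T[0, 1] / Z0 + Z0 * T[1, 0] + T[1, 1]))

        # Example: two 1 mm steel panels (7.8 kg/m^2 each) around a 20 mm air gap, at 1 kHz
        w = 2 * np.pi * 1000.0
        print(transmission_loss(w, [limp_panel(w, 7.8), air_layer(w, 0.02), limp_panel(w, 7.8)]))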

  2. Comparison of fusion methods from the abstract level and the rank level in a dispersed decision-making system

    NASA Astrophysics Data System (ADS)

    Przybyła-Kasperek, M.; Wakulicz-Deja, A.

    2017-05-01

    Issues related to decision making based on dispersed knowledge are discussed in the paper. A dispersed decision-making system, which was proposed by the authors in previous articles, is used in this paper. In the system, a process of combining classifiers into coalitions with a negotiation stage is realized. The novelty proposed in this article involves the use of six different methods of conflict analysis that are known from the literature. The main purpose of the tests that were performed was to compare the methods from the two groups, the abstract level and the rank level. An additional aim was to investigate the efficiency of the fusion methods used in a dispersed system with a dynamic structure against the efficiency obtained when no structure is used. Conclusions were drawn that, in most cases, the use of a dispersed system improves the efficiency of inference.

  3. Laser-plasma interactions with a Fourier-Bessel particle-in-cell method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andriyash, Igor A.; Lehe, Remi

    A new spectral particle-in-cell (PIC) method for plasma modeling is presented and discussed. In the proposed scheme, the Fourier-Bessel transform is used to translate the Maxwell equations to the quasi-cylindrical spectral domain. In this domain, the equations are solved analytically in time, and the spatial derivatives are approximated with high accuracy. In contrast to the finite-difference time domain (FDTD) methods that are commonly used in PIC, the developed method does not produce numerical dispersion and does not involve grid staggering for the electric and magnetic fields. These features are especially valuable in modeling the wakefield acceleration of particles in plasmas. The proposed algorithm is implemented in the code PLARES-PIC, and the test simulations of laser plasma interactions are compared to the ones done with the quasi-cylindrical FDTD PIC code CALDER-CIRC.

  4. Reproducing Quantum Probability Distributions at the Speed of Classical Dynamics: A New Approach for Developing Force-Field Functors.

    PubMed

    Sundar, Vikram; Gelbwaser-Klimovsky, David; Aspuru-Guzik, Alán

    2018-04-05

    Modeling nuclear quantum effects is required for accurate molecular dynamics (MD) simulations of molecules. The community has paid special attention to water and other biomolecules that show hydrogen bonding. Standard methods of modeling nuclear quantum effects like Ring Polymer Molecular Dynamics (RPMD) are computationally costlier than running classical trajectories. A force-field functor (FFF) is an alternative method that computes an effective force field that replicates quantum properties of the original force field. In this work, we propose an efficient method of computing FFF using the Wigner-Kirkwood expansion. As a test case, we calculate a range of thermodynamic properties of Neon, obtaining the same level of accuracy as RPMD, but with the shorter runtime of classical simulations. By modifying existing MD programs, the proposed method could be used in the future to increase the efficiency and accuracy of MD simulations involving water and proteins.

  5. The EM Method in a Probabilistic Wavelet-Based MRI Denoising

    PubMed Central

    Martin-Fernandez, Marcos; Villullas, Sergio

    2015-01-01

    Human body heat emission and other external causes can interfere with magnetic resonance image acquisition and produce noise. In this kind of image, the noise, when no signal is present, is Rayleigh distributed, and its wavelet coefficients can be approximately modeled by a Gaussian distribution. Noiseless magnetic resonance images can be modeled by a Laplacian distribution in the wavelet domain. This paper proposes a new magnetic resonance image denoising method that exploits these facts. The method performs shrinkage of wavelet coefficients based on the conditional probability of being noise or detail. The parameters involved in this filtering approach are calculated by means of the expectation maximization (EM) method, which avoids the need to use an estimator of noise variance. The efficiency of the proposed filter is studied and compared with other important filtering techniques, such as Nowak's, Donoho-Johnstone's, Awate-Whitaker's, and nonlocal means filters, in different 2D and 3D images. PMID:26089959

  7. A New Method to Cross Calibrate and Validate TOMS, SBUV/2, and SCIAMACHY Measurements

    NASA Technical Reports Server (NTRS)

    Ahmad, Ziauddin; Hilsenrath, Ernest; Einaudi, Franco (Technical Monitor)

    2001-01-01

    A unique method to validate backscattered ultraviolet (BUV) type satellite data that complements the measurements from existing ground networks is proposed. The method involves comparing zenith sky radiance measurements from the ground to nadir radiance measurements taken from space. Since the measurements are compared directly, the proposed method is superior to any method that involves comparing derived products (for example, ozone), because comparison of derived products involves inversion algorithms, which are susceptible to several types of errors. Forward radiative transfer (RT) calculations show that for an aerosol-free atmosphere, the ground-based zenith sky radiance measurement and the satellite nadir radiance measurement can be predicted with an accuracy of better than 1 percent. The RT computations also show that for certain values of the solar zenith angle, the radiance comparisons could be better than half a percent. This accuracy is practically independent of ozone amount and aerosols in the atmosphere. Experience with the Shuttle Solar Backscatter Ultraviolet (SSBUV) program shows that the accuracy of the ground-based zenith sky radiance measuring instrument can be maintained at a level of a few tenths of a percent. This implies that the zenith sky radiance measurements can be used to validate Total Ozone Mapping Spectrometer (TOMS), Solar Backscatter Ultraviolet (SBUV/2), and SCanning Imaging Absorption SpectroMeter for Atmospheric CHartographY (SCIAMACHY) radiance data. Also, this method will help improve the long-term precision of the measurements for better trend detection and the accuracy of other BUV products, such as tropospheric ozone and aerosols. Finally, in the long term, this method is a good candidate to inter-calibrate and validate observations from upcoming operational instruments such as the Global Ozone Monitoring Experiment (GOME-2), Ozone Monitoring Instrument (OMI), Ozone Dynamics Ultraviolet Spectrometer (ODUS), and Ozone Mapping and Profiler Suite (OMPS).

  8. 45 CFR 2102.10 - Timing, scope and content of submissions for proposed projects involving land, buildings, or...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... proposed projects involving land, buildings, or other structures. 2102.10 Section 2102.10 Public Welfare... for proposed projects involving land, buildings, or other structures. (a) A party proposing a project... historical information about the building or other structure to be altered or razed; (ii) The identity of the...

  9. Efficient dual approach to distance metric learning.

    PubMed

    Shen, Chunhua; Kim, Junae; Liu, Fayao; Wang, Lei; van den Hengel, Anton

    2014-02-01

    Distance metric learning is of fundamental interest in machine learning because the employed distance metric can significantly affect the performance of many learning methods. Quadratic Mahalanobis metric learning is a popular approach to the problem, but typically requires solving a semidefinite programming (SDP) problem, which is computationally expensive. The worst-case complexity of solving an SDP problem involving a matrix variable of size D×D with O(D) linear constraints is about O(D^6.5) using interior-point methods, where D is the dimension of the input data. Thus, interior-point methods can only practically solve problems with less than a few thousand variables. Because the number of variables is D(D+1)/2, this implies a limit of around a few hundred dimensions on the size of problem that can practically be solved. The complexity of the popular quadratic Mahalanobis metric learning approach thus limits the size of problem to which metric learning can be applied. Here, we propose a significantly more efficient and scalable approach to the metric learning problem based on the Lagrange dual formulation of the problem. The proposed formulation is much simpler to implement, and therefore allows much larger Mahalanobis metric learning problems to be solved. The time complexity of the proposed method is roughly O(D^3), which is significantly lower than that of the SDP approach. Experiments on a variety of data sets demonstrate that the proposed method achieves an accuracy comparable with the state of the art, but is applicable to significantly larger problems. We also show that the proposed method can be applied to solve more general Frobenius norm regularized SDP problems approximately.
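
    The cheap core operation that such a dual route exploits is the projection onto the positive semidefinite cone, a single O(D^3) eigendecomposition rather than a full interior-point solve; a minimal sketch:

        import numpy as np

        def psd_projection(A):
            # Nearest PSD matrix in Frobenius norm: clip negative eigenvalues to zero
            A = 0.5 * (A + A.T)                    # symmetrize first
            w, V = np.linalg.eigh(A)
            return (V * np.maximum(w, 0.0)) @ V.T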

  10. Concrete Condition Assessment Using Impact-Echo Method and Extreme Learning Machines

    PubMed Central

    Zhang, Jing-Kui; Yan, Weizhong; Cui, De-Mi

    2016-01-01

    The impact-echo (IE) method is a popular non-destructive testing (NDT) technique widely used for measuring the thickness of plate-like structures and for detecting certain defects inside concrete elements or structures. However, the IE method is not effective for full condition assessment (i.e., defect detection, defect diagnosis, defect sizing and location), because the simple frequency spectrum analysis involved in the existing IE method is not sufficient to capture the IE signal patterns associated with different conditions. In this paper, we attempt to enhance the IE technique and enable it for full condition assessment of concrete elements by introducing advanced machine learning techniques for performing comprehensive analysis and pattern recognition of IE signals. Specifically, we use wavelet decomposition for extracting signatures or features out of the raw IE signals and apply extreme learning machine, one of the recently developed machine learning techniques, as classification models for full condition assessment. To validate the capabilities of the proposed method, we build a number of specimens with various types, sizes, and locations of defects and perform IE testing on these specimens in a lab environment. Based on analysis of the collected IE signals using the proposed machine learning based IE method, we demonstrate that the proposed method is effective in performing full condition assessment of concrete elements or structures. PMID:27023563
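
    Extreme learning machines themselves are small enough to sketch: a fixed random hidden layer followed by a ridge-regularized least-squares solve for the output weights. In the paper's pipeline the input rows would be wavelet-derived features of the IE signals and Y a one-hot condition label; the layer size and tanh activation below are assumptions.

        import numpy as np

        class ELM:
            def __init__(self, n_hidden=200, seed=0):
                self.n_hidden = n_hidden
                self.rng = np.random.default_rng(seed)

            def fit(self, X, Y, reg=1e-3):
                # Random, untrained hidden layer; only the output weights are solved for
                self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
                self.b = self.rng.standard_normal(self.n_hidden)
                H = np.tanh(X @ self.W + self.b)
                self.beta = np.linalg.solve(H.T @ H + reg * np.eye(self.n_hidden), H.T @ Y)
                return self

            def predict(self, X):
                return np.tanh(X @ self.W + self.b) @ self.beta  # argmax rows for class labels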

  11. Prompt identification of tsunamigenic earthquakes from 3-component seismic data

    NASA Astrophysics Data System (ADS)

    Kundu, Ajit; Bhadauria, Y. S.; Basu, S.; Mukhopadhyay, S.

    2016-10-01

    An Artificial Neural Network (ANN) based algorithm for prompt identification of shallow-focus (depth < 70 km) tsunamigenic earthquakes at regional distances is proposed in this paper. Promptness here refers to decision making as fast as 5 min after the arrival of the LR phase in the seismogram. The root mean square amplitudes of seismic phases recorded by a single 3-component station have been considered as inputs, besides location and magnitude. The trained ANN has been found to categorize 100% of the new earthquakes successfully as tsunamigenic or non-tsunamigenic. The proposed method has been corroborated by an alternate mapping technique of earthquake category estimation. The second method involves computation of focal parameters, estimation of the water volume displaced at the source, and eventually deciding the category of the earthquake. This method has been found to identify 95% of the new earthquakes successfully. Both methods have been tested using three-component broadband seismic data recorded at the PALK (Pallekele, Sri Lanka) station provided by IRIS for earthquakes of magnitude 6 and above originating from the Sumatra region. The fair agreement between the methods suggests that a prompt alert system could be developed based on the proposed method, which would prove extremely useful for regions that are not adequately instrumented for azimuthal coverage.

  12. Determination of trace nickel in hydrogenated cottonseed oil by electrothermal atomic absorption spectrometry after microwave-assisted digestion.

    PubMed

    Zhang, Gai

    2012-01-01

    Microwave digestion of hydrogenated cottonseed oil prior to trace nickel determination by electrothermal atomic absorption spectrometry (ETAAS) is proposed here for the first time. Currently, the methods outlined in U.S. Pharmacopeia 28 (USP28) or British Pharmacopeia (BP2003) are recommended as the official methods for analyzing nickel in hydrogenated cottonseed oil. With these methods the samples may be pre-treated in a silica or a platinum crucible. However, samples are easily tarnished during pretreatment in a silica crucible, whereas with a platinum crucible the hydrogenated cottonseed oil, acting as a reducing material, may react with the platinum and destroy the crucible. The proposed microwave-assisted digestion avoids tarnishing of the sample during pretreatment and also shortens the analysis cycle. The microwave digestion program and the ETAAS parameters were optimized. The accuracy of the proposed method was investigated by analyzing real samples, and the results were compared with those obtained by pressurized-PTFE-bomb acid digestion and by the USP28 method. The new method involves a relatively rapid matrix destruction technique compared with other present methods for the quantification of metals in oil. © 2011 Institute of Food Technologists®

  13. Advancing Detached-Eddy Simulation

    DTIC Science & Technology

    2007-01-01

    fluxes leads to an improvement in the stability of the solution. This matrix is solved iteratively using a symmetric Gauss-Seidel procedure. Newton's sub...model (TLM) is a zonal approach, proposed by Balaras and Benocci (5) and Balaras et al. (4). The method involved the solution of filtered Navier...LES mesh. The method was subsequently used by Cabot (6) and Diurno et al. (7) to obtain the solution of the flow over a backward-facing step and by

  14. Determination of dipyrone in pharmaceutical preparations based on the chemiluminescent reaction of the quinolinic hydrazide-H2O2-vanadium(IV) system and flow-injection analysis.

    PubMed

    Pradana Pérez, Juan A; Durand Alegría, Jesús S; Hernando, Pilar Fernández; Sierra, Adolfo Narros

    2012-01-01

    A rapid, economic and sensitive chemiluminescent method involving flow-injection analysis was developed for the determination of dipyrone in pharmaceutical preparations. The method is based on the chemiluminescent reaction between quinolinic hydrazide and hydrogen peroxide in a strongly alkaline medium, in which vanadium(IV) acts as a catalyst. Principal chemical and physical variables involved in the flow-injection system were optimized using a modified simplex method. The variations in the quantum yield observed when dipyrone was present in the reaction medium were used to determine the concentration of this compound. The proposed method requires no preconcentration steps and reliably quantifies dipyrone over the linear range 1-50 µg/mL. In addition, a sample throughput of 85 samples/h is possible. Copyright © 2011 John Wiley & Sons, Ltd.
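
    The quantitation step reduces to an ordinary linear calibration over the stated 1-50 µg/mL range; a sketch with entirely hypothetical intensity readings (a real curve must come from measured standards):

        import numpy as np

        # Hypothetical calibration points: dipyrone standards (ug/mL) vs. CL intensity
        conc = np.array([1.0, 5.0, 10.0, 20.0, 30.0, 40.0, 50.0])
        signal = np.array([12.1, 58.7, 119.4, 236.0, 355.2, 470.8, 592.3])

        slope, intercept = np.polyfit(conc, signal, 1)   # linear fit over the working range
        sample_reading = 480.0                           # hypothetical sample intensity
        print(f"estimated dipyrone: {(sample_reading - intercept) / slope:.1f} ug/mL")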

  15. A single-loop optimization method for reliability analysis with second order uncertainty

    NASA Astrophysics Data System (ADS)

    Xie, Shaojun; Pan, Baisong; Du, Xiaoping

    2015-08-01

    Reliability analysis may involve random variables and interval variables. In addition, some of the random variables may have interval distribution parameters owing to limited information. This kind of uncertainty is called second order uncertainty. This article develops an efficient reliability method for problems involving the three aforementioned types of uncertain input variables. The analysis produces the maximum and minimum reliability and is computationally demanding because two loops are needed: a reliability analysis loop with respect to random variables and an interval analysis loop for extreme responses with respect to interval variables. The first order reliability method and nonlinear optimization are used for the two loops, respectively. For computational efficiency, the two loops are combined into a single loop by treating the Karush-Kuhn-Tucker (KKT) optimal conditions of the interval analysis as constraints. Three examples are presented to demonstrate the proposed method.

  16. A unified tensor level set for image segmentation.

    PubMed

    Wang, Bin; Gao, Xinbo; Tao, Dacheng; Li, Xuelong

    2010-06-01

    This paper presents a new region-based unified tensor level set model for image segmentation. The model introduces a third-order tensor to comprehensively depict the features of pixels, e.g., gray value and local geometrical features such as orientation and gradient, and then, by defining a weighted distance, generalizes the representative region-based level set method from scalar to tensor. The proposed model has four main advantages compared with the traditional representative method. First, involving the Gaussian filter bank, the model is robust against noise, particularly salt-and-pepper noise. Second, considering local geometrical features such as orientation and gradient, the model pays more attention to boundaries and makes the evolving curve stop more easily at the boundary location. Third, due to the unified tensor representation of the pixels, the model segments images more accurately and naturally. Fourth, based on a weighted distance definition, the model possesses the capacity to cope with data varying from scalar to vector, and then to high-order tensor. We apply the proposed method to synthetic, medical, and natural images, and the results suggest that the proposed method is superior to the available representative region-based level set methods.

  17. Multiple imputation by chained equations for systematically and sporadically missing multilevel data.

    PubMed

    Resche-Rigon, Matthieu; White, Ian R

    2018-06-01

    In multilevel settings such as individual participant data meta-analysis, a variable is 'systematically missing' if it is wholly missing in some clusters and 'sporadically missing' if it is partly missing in some clusters. Previously proposed methods to impute incomplete multilevel data handle either systematically or sporadically missing data, but frequently both patterns are observed. We describe a new multiple imputation by chained equations (MICE) algorithm for multilevel data with arbitrary patterns of systematically and sporadically missing variables. The algorithm is described for multilevel normal data but can easily be extended for other variable types. We first propose two methods for imputing a single incomplete variable: an extension of an existing method and a new two-stage method which conveniently allows for heteroscedastic data. We then discuss the difficulties of imputing missing values in several variables in multilevel data using MICE, and show that even the simplest joint multilevel model implies conditional models which involve cluster means and heteroscedasticity. However, a simulation study finds that the proposed methods can be successfully combined in a multilevel MICE procedure, even when cluster means are not included in the imputation models.
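
    The chained-equations loop itself is compact; the single-level Gaussian sketch below conveys the mechanics (cycle through incomplete variables, regress each on the others, draw imputations with residual noise). It handles only sporadically missing values; the paper's contribution is replacing the per-variable model with multilevel imputation models that also cover systematically missing variables.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        def mice_impute(X, n_iter=10, seed=0):
            rng = np.random.default_rng(seed)
            X = X.copy()
            miss = np.isnan(X)
            X[miss] = np.take(np.nanmean(X, axis=0), np.nonzero(miss)[1])  # initial fill
            for _ in range(n_iter):
                for j in np.nonzero(miss.any(axis=0))[0]:
                    obs, mis = ~miss[:, j], miss[:, j]
                    other = np.delete(np.arange(X.shape[1]), j)
                    lm = LinearRegression().fit(X[obs][:, other], X[obs, j])
                    sd = np.std(X[obs, j] - lm.predict(X[obs][:, other]))
                    # Stochastic draw rather than the bare prediction, as in proper MICE
                    X[mis, j] = lm.predict(X[mis][:, other]) + rng.normal(0, sd, mis.sum())
            return X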

  18. Localization of synchronous cortical neural sources.

    PubMed

    Zerouali, Younes; Herry, Christophe L; Jemel, Boutheina; Lina, Jean-Marc

    2013-03-01

    Neural synchronization is a key mechanism in a wide variety of brain functions, such as cognition, perception, and memory. The high temporal resolution achieved by EEG recordings allows the study of the dynamical properties of synchronous patterns of activity at a very fine temporal scale, but with very low spatial resolution. Spatial resolution can be improved by retrieving the neural sources of the EEG signal, thus solving the so-called inverse problem. Although many methods have been proposed to solve the inverse problem and localize brain activity, few of them target synchronous brain regions. In this paper, we propose a novel algorithm aimed at localizing specifically the synchronous brain regions and reconstructing the time course of their activity. Using multivariate wavelet ridge analysis, we extract signals capturing the synchronous events buried in the EEG and then solve the inverse problem on these signals. Using simulated data, we compare the source reconstruction accuracy achieved by our method to a standard source reconstruction approach and show that the proposed method performs better across a wide range of noise levels and source configurations. In addition, we applied our method to a real dataset and successfully identified cortical areas involved in the functional network underlying visual face perception. We conclude that the proposed approach allows an accurate localization of synchronous brain regions and a robust estimation of their activity.

  19. A visual model for object detection based on active contours and level-set method.

    PubMed

    Satoh, Shunji

    2006-09-01

    A visual model for object detection is proposed. In order to make the detection ability comparable with existing technical methods for object detection, an evolution equation of neurons in the model is derived from the computational principle of active contours. The hierarchical structure of the model emerges naturally from the evolution equation. One drawback involved with initial values of active contours is alleviated by introducing and formulating convexity, which is a visual property. Numerical experiments show that the proposed model detects objects with complex topologies and that it is tolerant of noise. A visual attention model is introduced into the proposed model. Other simulations show that the visual properties of the model are consistent with the results of psychological experiments that disclose the relation between figure-ground reversal and visual attention. We also demonstrate that the model tends to perceive smaller regions as figures, which is a characteristic observed in human visual perception.

  20. Enhanced ICP for the Registration of Large-Scale 3D Environment Models: An Experimental Study

    PubMed Central

    Han, Jianda; Yin, Peng; He, Yuqing; Gu, Feng

    2016-01-01

    One of the main applications of mobile robots is the large-scale perception of the outdoor environment. One of the main challenges of this application is fusing environmental data obtained by multiple robots, especially heterogeneous robots. This paper proposes an enhanced iterative closest point (ICP) method for the fast and accurate registration of 3D environmental models. First, a hierarchical searching scheme is combined with the octree-based ICP algorithm. Second, an early-warning mechanism is used to perceive the local minimum problem. Third, a heuristic escape scheme based on sampled potential transformation vectors is used to avoid local minima and achieve optimal registration. Experiments involving one unmanned aerial vehicle and one unmanned surface vehicle were conducted to verify the proposed technique. The experimental results were compared with those of normal ICP registration algorithms to demonstrate the superior performance of the proposed method. PMID:26891298

  2. Validated spectrophotometric methods for determination of Alendronate sodium in tablets through nucleophilic aromatic substitution reactions.

    PubMed

    Walash, Mohamed I; Metwally, Mohamed E-S; Eid, Manal; El-Shaheny, Rania N

    2012-04-02

    Alendronate (ALD) is a member of the bisphosphonate family which is used for the treatment of osteoporosis, bone metastasis, Paget's disease, hypocalcaemia associated with malignancy, and other conditions that feature bone fragility. ALD is a non-chromophoric compound, so its determination by conventional spectrophotometric methods is not possible. Therefore, two derivatization reactions were proposed for determination of ALD through the reaction with 4-chloro-7-nitrobenzo-2-oxa-1,3-diazole (NBD-Cl) and 2,4-dinitrofluorobenzene (DNFB) as chromogenic derivatizing reagents. Three simple and sensitive spectrophotometric methods are described for the determination of ALD. Method I is based on the reaction of ALD with NBD-Cl. Method II involves heat-catalyzed derivatization of ALD with DNFB, while Method III is based on a micellar-catalyzed reaction of the studied drug with DNFB at room temperature. The reaction products were measured at 472, 378, and 374 nm for Methods I, II, and III, respectively. Beer's law was obeyed over the concentration ranges of 1.0-20.0, 4.0-40.0, and 1.5-30.0 μg/mL, with lower limits of detection of 0.09, 1.06, and 0.06 μg/mL for Methods I, II, and III, respectively. The proposed methods were applied for quantitation of the studied drug in its pure form with mean percentage recoveries of 100.47 ± 1.12, 100.17 ± 1.21, and 99.23 ± 1.26 for Methods I, II, and III, respectively. Moreover, the proposed methods were successfully applied for the determination of ALD in different tablets, and pathways for the reactions have been proposed. The proposed spectrophotometric methods provide sensitive, specific, and inexpensive analytical procedures for the determination of the non-chromophoric drug alendronate, either per se or in its tablet dosage forms, without interference from common excipients.

  3. Analytical difficulties facing today's regulatory laboratories: issues in method validation.

    PubMed

    MacNeil, James D

    2012-08-01

    The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and updating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation of the method, or on the design of a validation scheme for a complex multi-residue method, require a well-considered strategy based on a current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.

  4. Prostate multimodality image registration based on B-splines and quadrature local energy.

    PubMed

    Mitra, Jhimli; Martí, Robert; Oliver, Arnau; Lladó, Xavier; Ghose, Soumya; Vilanova, Joan C; Meriaudeau, Fabrice

    2012-05-01

    Needle biopsy of the prostate is guided by Transrectal Ultrasound (TRUS) imaging. The TRUS images do not provide proper spatial localization of malignant tissues due to the poor sensitivity of TRUS to visualize early malignancy. Magnetic Resonance Imaging (MRI) has been shown to be sensitive for the detection of early stage malignancy, and therefore, a novel 2D deformable registration method that overlays pre-biopsy MRI onto TRUS images has been proposed. The registration method involves B-spline deformations with Normalized Mutual Information (NMI) as the similarity measure, computed from the texture images obtained from the amplitude responses of the directional quadrature filter pairs. Registration accuracy of the proposed method is evaluated by computing the Dice Similarity Coefficient (DSC) and 95% Hausdorff Distance (HD) values for the prostate mid-gland slices of 20 patients, and the Target Registration Error (TRE) for the 18 patients in whom homologous structures are visible in both the TRUS and transformed MR images. The proposed method and B-splines using NMI computed from intensities provide average TRE values of 2.64 ± 1.37 and 4.43 ± 2.77 mm respectively. Our method shows statistically significant improvement in TRE when compared with B-splines using NMI computed from intensities (Student's t test, p = 0.02). The proposed method shows a 1.18-times improvement over thin-plate splines registration, which has an average TRE of 3.11 ± 2.18 mm. The mean DSC and the mean 95% HD values obtained with the proposed method of B-splines with NMI computed from texture are 0.943 ± 0.039 and 4.75 ± 2.40 mm respectively. The texture energy computed from the quadrature filter pairs provides better registration accuracy for multimodal images than raw intensities. Low TRE values of the proposed registration method add to the feasibility of it being used during TRUS-guided biopsy.
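
    The similarity measure driving the registration is normalized mutual information. A minimal sketch of NMI computed from a joint histogram, with toy images standing in for the TRUS/MR texture images:

    ```python
    import numpy as np

    def normalized_mutual_information(img_a, img_b, bins=64):
        """NMI = (H(A) + H(B)) / H(A, B), computed from a joint histogram."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1)
        py = pxy.sum(axis=0)

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

    # Toy check: an image is maximally similar to itself.
    rng = np.random.default_rng(0)
    a = rng.random((128, 128))
    print(normalized_mutual_information(a, a))                     # = 2 (perfect match)
    print(normalized_mutual_information(a, rng.random((128, 128))))  # close to 1
    ```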

  5. An Alternative Approach for Nonlinear Latent Variable Models

    ERIC Educational Resources Information Center

    Mooijaart, Ab; Bentler, Peter M.

    2010-01-01

    In the last decades there has been an increasing interest in nonlinear latent variable models. Since the seminal paper of Kenny and Judd, several methods have been proposed for dealing with these kinds of models. This article introduces an alternative approach. The methodology involves fitting some third-order moments in addition to the means and…

  6. Student Motivation for Involvement in Supervised Agricultural Experiences: An Historical Perspective

    ERIC Educational Resources Information Center

    Bird, William A.; Martin, Michael J.; Simonsen, Jon C.

    2013-01-01

    The purpose of this study was to examine student motivation for SAEs through the lens of the Self-Determination Theory. Self-Determination Theory proposed that human beings are more genuinely motivated when driven by internal factors as opposed to external factors. We used historical research and general qualitative interpretative methods to…

  7. A Proposal for Facilitating More Cooperation in Competitive Sports

    ERIC Educational Resources Information Center

    Jacobs, George M.; Teh, Jiexin; Spencer, Leonora

    2017-01-01

    This article utilises theories, methods and tools from the fields of Social Psychology and Education to suggest new metrics for the analysis of competitive sport. The hope is that these metrics will encourage cooperation to exist alongside of the dominant feelings of competition. The main theory from Social Psychology involved here is Social…

  8. Contributions to the Underlying Bivariate Normal Method for Factor Analyzing Ordinal Data

    ERIC Educational Resources Information Center

    Xi, Nuo; Browne, Michael W.

    2014-01-01

    A promising "underlying bivariate normal" approach was proposed by Jöreskog and Moustaki for use in the factor analysis of ordinal data. This was a limited information approach that involved the maximization of a composite likelihood function. Its advantage over full-information maximum likelihood was that very much less computation was…

  9. Gaming as a Method for Learning to Resolve Ethical Dilemmas in Long Term Care.

    ERIC Educational Resources Information Center

    Wilson, Cindy C.; And Others

    1988-01-01

    The Simulation Game is proposed as a means of sensitizing professionals to problems and dilemmas of key team members (social workers, nurses, health educators, physicians, and clinical psychologists) in geriatric health care. The game involves role playing from cards which present difficult issues and cases in such care. (CB)

  10. Reduced Amygdalar Gray Matter Volume in Familial Pediatric Bipolar Disorder

    ERIC Educational Resources Information Center

    Chang, Kiki; Karchemskiy, Asya; Barnea-Goraly, Naama; Garrett, Amy; Simeonova, Diana Iorgova; Reiss, Allan

    2005-01-01

    Objective: Subcortical limbic structures have been proposed to be involved in the pathophysiology of adult and pediatric bipolar disorder (BD). We sought to study morphometric characteristics of these structures in pediatric subjects with familial BD compared with healthy controls. Method: Twenty children and adolescents with BD I (mean age = 14.6…

  11. 77 FR 47361 - Proposed Information Collection; Comment Request; 2013 Alternative Contact Strategy Test

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-08

    ... research will be conducted through a series of projects and tests throughout the decade. Contact involving... 2020 Research and Testing Project tests and design options for the 2020 Census. II. Method of... Alternative Contact Strategy Test is the first test to support this research. The Census Bureau will test...

  12. 76 FR 52034 - Self-Regulatory Organizations; NYSE Arca, Inc.; Order Granting Approval of Proposed Rule Change...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-19

    ... a period of time greater than one day because mathematical compounding prevents the Funds from... not actively managed by traditional methods, which typically involve effecting changes in the... mathematical approach to determine the type, quantity, and mix of investment positions that it believes should...

  13. Robust Vehicle Detection under Various Environmental Conditions Using an Infrared Thermal Camera and Its Application to Road Traffic Flow Monitoring

    PubMed Central

    Iwasaki, Yoichiro; Misumi, Masato; Nakamiya, Toshiyuki

    2013-01-01

    We have already proposed a method for detecting vehicle positions and their movements (henceforth referred to as “our previous method”) using thermal images taken with an infrared thermal camera. Our experiments have shown that our previous method detects vehicles robustly under four different environmental conditions, including poor visibility in snow and thick fog. Our previous method uses the windshield and its surroundings as the target of the Viola-Jones detector. Some experiments in winter show that the vehicle detection accuracy decreases because the temperatures of many windshields approximate those of the windshield exteriors. In this paper, we propose a new vehicle detection method (henceforth referred to as “our new method”). Our new method detects vehicles based on the thermal energy reflection of tires. We have done experiments using three series of thermal images for which the vehicle detection accuracies of our previous method are low. Our new method detects 1,417 vehicles (92.8%) out of 1,527 vehicles, and the number of false detections is 52 in total. Therefore, by combining our two methods, high vehicle detection accuracies are maintained under various environmental conditions. Finally, we apply the traffic information obtained by our two methods to automatic traffic flow monitoring, and show the effectiveness of our proposal. PMID:23774988
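
    The Viola-Jones detection step can be sketched with OpenCV; the cascade file below is hypothetical (OpenCV does not ship a cascade trained on windshield or tire regions of thermal images), and the parameters are generic defaults rather than the paper's settings:

    ```python
    import cv2

    # Hypothetical cascade trained on windshield regions of thermal images;
    # the file name is an assumption of this sketch.
    cascade = cv2.CascadeClassifier("windshield_thermal_cascade.xml")

    frame = cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE)

    # Viola-Jones multi-scale detection with typical default parameters.
    detections = cascade.detectMultiScale(frame, scaleFactor=1.1, minNeighbors=3)

    for (x, y, w, h) in detections:
        cv2.rectangle(frame, (x, y), (x + w, y + h), 255, 2)
    cv2.imwrite("detections.png", frame)
    ```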

  14. On processed splitting methods and high-order actions in path-integral Monte Carlo simulations.

    PubMed

    Casas, Fernando

    2010-10-21

    Processed splitting methods are particularly well adapted to carry out path-integral Monte Carlo (PIMC) simulations: since one is mainly interested in estimating traces of operators, only the kernel of the method is necessary to approximate the thermal density matrix. Unfortunately, they suffer the same drawback as standard, nonprocessed integrators: kernels of effective order greater than two necessarily involve some negative coefficients. This problem can be circumvented, however, by incorporating modified potentials into the composition, thus rendering schemes of higher effective order. In this work we analyze a family of fourth-order schemes recently proposed in the PIMC setting, paying special attention to their linear stability properties, and justify their observed behavior in practice. We also propose a new fourth-order scheme requiring the same computational cost but with an enlarged stability interval.
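
    For context, the standard second-order kernel and one well-known modified-potential remedy (the Takahashi-Imada form) can be written as follows; this is textbook background, not the paper's new fourth-order scheme:

    ```latex
    % Strang (second-order) splitting of the Boltzmann operator -- the basic
    % PIMC kernel:
    e^{-\varepsilon \hat{H}}
      = e^{-\varepsilon \hat{V}/2}\, e^{-\varepsilon \hat{T}}\,
        e^{-\varepsilon \hat{V}/2} + \mathcal{O}(\varepsilon^{3})
    % Kernels of order > 2 built from T and V alone necessarily contain
    % negative coefficients (the Sheng--Suzuki barrier), which is fatal in
    % imaginary time. Replacing V by a modified potential such as
    \tilde{V} = \hat{V} + \frac{\varepsilon^{2}}{24}\,
                \bigl[\hat{V},[\hat{T},\hat{V}]\bigr]
    % restores effective fourth order for traces with positive coefficients.
    ```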

  15. Porosity estimation of aged mortar using a micromechanical model.

    PubMed

    Hernández, M G; Anaya, J J; Sanchez, T; Segura, I

    2006-12-22

    Degradation of concrete structures located in high-humidity atmospheres or under flowing water is a very important problem. In this study, a method for ultrasonic non-destructive characterization of aged mortar is presented. The proposed method predicts the behaviour of aged mortar by means of a three-phase micromechanical model fed with ultrasonic measurements. Mortar aging was accelerated by immersing the specimens in ammonium nitrate solution. Both destructive and non-destructive characterization of the mortar was performed: destructive tests of porosity used a vacuum saturation method, and non-destructive characterization was carried out using ultrasonic velocities. The aging experiments show that mortar degradation involves not only a porosity increase but also microstructural changes in the cement matrix. Experimental results show that the porosity estimated with the proposed non-destructive methodology performed comparably to classical destructive techniques.

  16. Validation of a method for assessing resident physicians' quality improvement proposals.

    PubMed

    Leenstra, James L; Beckman, Thomas J; Reed, Darcy A; Mundell, William C; Thomas, Kris G; Krajicek, Bryan J; Cha, Stephen S; Kolars, Joseph C; McDonald, Furman S

    2007-09-01

    Residency programs involve trainees in quality improvement (QI) projects to evaluate competency in systems-based practice and practice-based learning and improvement. Valid approaches to assess QI proposals are lacking. We developed an instrument for assessing resident QI proposals--the Quality Improvement Proposal Assessment Tool (QIPAT-7)-and determined its validity and reliability. QIPAT-7 content was initially obtained from a national panel of QI experts. Through an iterative process, the instrument was refined, pilot-tested, and revised. Seven raters used the instrument to assess 45 resident QI proposals. Principal factor analysis was used to explore the dimensionality of instrument scores. Cronbach's alpha and intraclass correlations were calculated to determine internal consistency and interrater reliability, respectively. QIPAT-7 items comprised a single factor (eigenvalue = 3.4) suggesting a single assessment dimension. Interrater reliability for each item (range 0.79 to 0.93) and internal consistency reliability among the items (Cronbach's alpha = 0.87) were high. This method for assessing resident physician QI proposals is supported by content and internal structure validity evidence. QIPAT-7 is a useful tool for assessing resident QI proposals. Future research should determine the reliability of QIPAT-7 scores in other residency and fellowship training programs. Correlations should also be made between assessment scores and criteria for QI proposal success such as implementation of QI proposals, resident scholarly productivity, and improved patient outcomes.
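
    The internal-consistency statistic reported is straightforward to compute. A minimal sketch of Cronbach's alpha on synthetic ratings (45 proposals, 7 items, made-up values):

    ```python
    import numpy as np

    def cronbach_alpha(scores):
        """scores: (n_proposals, n_items) matrix of item ratings."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1)
        total_var = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Toy data: a latent proposal quality plus per-item noise, so the seven
    # items are correlated, as a single-factor instrument implies.
    rng = np.random.default_rng(1)
    quality = rng.normal(5, 1.5, size=(45, 1))
    ratings = quality + rng.normal(0, 0.8, size=(45, 7))
    print(f"alpha = {cronbach_alpha(ratings):.2f}")
    ```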

  17. Negotiating behavioural change: therapists' proposal turns in Cognitive Behavioural Therapy.

    PubMed

    Ekberg, Katie; Lecouteur, Amanda

    2012-01-01

    Cognitive behavioural therapy (CBT) is an internationally recognised method for treating depression. However, many of the techniques involved in CBT are accomplished within the therapy interaction in diverse ways, and with varying consequences for the trajectory of therapy session. This paper uses conversation analysis to examine some standard ways in which therapists propose suggestions for behavioural change to clients attending CBT sessions for depression in Australia. Therapists' proposal turns displayed their subordinate epistemic authority over the matter at hand, and emphasised a high degree of optionality on behalf of the client in accepting their suggestions. This practice was routinely accomplished via three standard proposal turns: (1) hedged recommendations; (2) interrogatives; and (3) information-giving. These proposal turns will be examined in relation to the negotiation of behavioural change, and the implications for CBT interactions between therapist and client will be discussed.

  18. Modified Mixed Lagrangian-Eulerian Method Based on Numerical Framework of MT3DMS on Cauchy Boundary.

    PubMed

    Suk, Heejun

    2016-07-01

    MT3DMS, a modular three-dimensional multispecies transport model, has long been a popular model in the groundwater field for simulating solute transport in the saturated zone. However, the method of characteristics (MOC), modified MOC (MMOC), and hybrid MOC (HMOC) included in MT3DMS did not treat Cauchy boundary conditions in a straightforward or rigorous manner, from a mathematical point of view. The MOC, MMOC, and HMOC regard the Cauchy boundary as a source condition. For the source, MOC, MMOC, and HMOC calculate the Lagrangian concentration by setting it equal to the cell concentration at an old time level. However, the above calculation is an approximate method because it does not involve backward tracking in MMOC and HMOC or allow performing forward tracking at the source cell in MOC. To circumvent this problem, a new scheme is proposed that avoids direct calculation of the Lagrangian concentration on the Cauchy boundary. The proposed method combines the numerical formulations of two different schemes, the finite element method (FEM) and the Eulerian-Lagrangian method (ELM), into one global matrix equation. This study demonstrates the limitation of all MT3DMS schemes, including MOC, MMOC, HMOC, and a third-order total-variation-diminishing (TVD) scheme under Cauchy boundary conditions. By contrast, the proposed method always shows good agreement with the exact solution, regardless of the flow conditions. Finally, the successful application of the proposed method sheds light on the possible flexibility and capability of the MT3DMS to deal with the mass transport problems of all flow regimes. © 2016, National Ground Water Association.

  19. MO-FG-CAMPUS-IeP2-04: Multiple Penalties with Different Orders for Structure Adaptive CBCT Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Q; Cheng, P; Tan, S

    2016-06-15

    Purpose: To combine total variation (TV) and Hessian penalties in a structure-adaptive way for cone-beam CT (CBCT) reconstruction. Methods: TV is a widely used first-order penalty with good ability to suppress noise and preserve edges, but it leads to the staircase effect in regions with smooth intensity transitions. The second-order Hessian penalty can effectively suppress the staircase effect at the extra cost of blurring object edges. To take the best of both penalties, we proposed a novel method to combine them for CBCT reconstruction in a structure-adaptive way. The proposed method adaptively determined the weight of each penalty according to the geometry of local regions. A specially designed exponent term involving the image gradient was used to characterize the local geometry, such that the weights for Hessian and TV were 1 and 0, respectively, in uniform local regions, and 0 and 1 at edge regions. For other local regions the weights varied from 0 to 1. The objective functional was minimized using the majorization-minimization approach. We evaluated the proposed method on a modified 3D Shepp-Logan phantom and a CatPhan 600 phantom. The full-width-at-half-maximum (FWHM) and contrast-to-noise ratio (CNR) were calculated. Results: For the 3D Shepp-Logan phantom, the images reconstructed using TV had an obvious staircase effect, while those using the proposed method and the Hessian penalty preserved the smooth-transition regions well. FWHMs of the proposed method, TV and the Hessian penalty were 1.75, 1.61 and 3.16 respectively, indicating that both TV and the proposed method are able to preserve edges. For the CatPhan 600 phantom, CNR values of the proposed method were similar to those of TV and Hessian. Conclusion: The proposed method retains favorable properties of TV, such as edge preservation, and also better preserves gradual-transition structures, as Hessian does. All methods perform similarly in suppressing noise. This work was supported in part by the National Natural Science Foundation of China (NNSFC) under Grant Nos. 60971112 and 61375018, grants from the Cancer Prevention and Research Institute of Texas (RP130109 and RP110562-P2), the National Institute of Biomedical Imaging and Bioengineering (R01 EB020366), and a grant from the American Cancer Society (RSG-13-326-01-CCE).
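
    The structure-adaptive weighting can be illustrated with a simple gradient-based weight map; the exponential form below is an illustrative assumption, since the abstract describes the exponent term only qualitatively:

    ```python
    import numpy as np

    def penalty_weights(img, sigma=0.05):
        """Weight map in [0, 1]: ~1 (Hessian) in smooth regions, ~0 at edges.

        The exponential form is an illustrative choice, not the exact
        expression from the abstract.
        """
        gy, gx = np.gradient(img)
        grad_mag = np.hypot(gx, gy)
        w_hessian = np.exp(-(grad_mag / sigma) ** 2)
        w_tv = 1.0 - w_hessian
        return w_hessian, w_tv

    # The combined regularizer would then be, per voxel,
    #   R(f) = sum( w_hessian * |Hessian f| + w_tv * |grad f| )
    # evaluated inside the majorization-minimization loop.
    ```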

  20. Note: Model identification and analysis of bivalent analyte surface plasmon resonance data.

    PubMed

    Tiwari, Purushottam Babu; Üren, Aykut; He, Jin; Darici, Yesim; Wang, Xuewen

    2015-10-01

    Surface plasmon resonance (SPR) is a widely used, affinity based, label-free biophysical technique to investigate biomolecular interactions. The extraction of rate constants requires accurate identification of the particular binding model. The bivalent analyte model involves coupled non-linear differential equations. No clear procedure to identify the bivalent analyte mechanism has been established. In this report, we propose a unique signature for the bivalent analyte model. This signature can be used to distinguish the bivalent analyte model from other biphasic models. The proposed method is demonstrated using experimentally measured SPR sensorgrams.
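
    A minimal sketch of the coupled ODEs in one common parameterization of the bivalent analyte model, integrated with SciPy; the rate constants and concentrations are illustrative, not fitted values, and sign/stoichiometry conventions vary in the literature:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Analyte A at constant concentration C binds immobilized ligand B to
    # form AB, then a second B to form AB2 (illustrative parameter values).
    ka1, kd1, ka2, kd2 = 1e5, 1e-2, 1e-4, 1e-3
    C, B_total = 50e-9, 1.0

    def rhs(t, y):
        ab, ab2 = y
        b_free = B_total - ab - 2.0 * ab2   # doubly bound complex occupies 2 sites
        d_ab = ka1 * C * b_free - kd1 * ab - ka2 * ab * b_free + kd2 * ab2
        d_ab2 = ka2 * ab * b_free - kd2 * ab2
        return [d_ab, d_ab2]

    t = np.linspace(0, 600, 601)
    sol = solve_ivp(rhs, (0, 600), [0.0, 0.0], t_eval=t)
    response = sol.y[0] + sol.y[1]   # SPR signal tracks total bound analyte
    ```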

  1. Increasing the computational efficiency of digital cross correlation by a vectorization method

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Yuan; Ma, Chien-Ching

    2017-08-01

    This study presents a vectorization method for use in MATLAB programming aimed at increasing the computational efficiency of digital cross correlation in sound and images, resulting in a speedup of 6.387 and 36.044 times compared with performance values obtained from looped expression. This work bridges the gap between matrix operations and loop iteration, preserving flexibility and efficiency in program testing. This paper uses numerical simulation to verify the speedup of the proposed vectorization method as well as experiments to measure the quantitative transient displacement response subjected to dynamic impact loading. The experiment involved the use of a high speed camera as well as a fiber optic system to measure the transient displacement in a cantilever beam under impact from a steel ball. Experimental measurement data obtained from the two methods are in excellent agreement in both the time and frequency domain, with discrepancies of only 0.68%. Numerical and experiment results demonstrate the efficacy of the proposed vectorization method with regard to computational speed in signal processing and high precision in the correlation algorithm. We also present the source code with which to build MATLAB-executable functions on Windows as well as Linux platforms, and provide a series of examples to demonstrate the application of the proposed vectorization method.
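
    The same loop-to-vectorization idea carries over to NumPy, where a zero-padded FFT product replaces the explicit correlation loop. A minimal sketch (not the paper's MATLAB code):

    ```python
    import numpy as np

    def xcorr_fft(a, b):
        """Full linear cross-correlation via zero-padded FFTs.

        Equivalent to np.correlate(a, b, mode="full") but vectorized --
        the same idea the paper exploits in MATLAB to avoid loops.
        """
        n = len(a) + len(b) - 1
        fa = np.fft.rfft(a, n)
        fb = np.fft.rfft(b[::-1], n)   # correlation = convolution with reversed b
        return np.fft.irfft(fa * fb, n)

    rng = np.random.default_rng(0)
    a, b = rng.random(4096), rng.random(4096)
    assert np.allclose(xcorr_fft(a, b), np.correlate(a, b, mode="full"))
    ```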

  2. Improving Generalization Based on l1-Norm Regularization for EEG-Based Motor Imagery Classification

    PubMed Central

    Zhao, Yuwei; Han, Jiuqi; Chen, Yushu; Sun, Hongji; Chen, Jiayun; Ke, Ang; Han, Yao; Zhang, Peng; Zhang, Yi; Zhou, Jin; Wang, Changyong

    2018-01-01

    Multichannel electroencephalography (EEG) is widely used in typical brain-computer interface (BCI) systems. In general, a number of parameters are essential for an EEG classification algorithm due to the redundant features involved in EEG signals. However, the generalization of an EEG method is often adversely affected by the model complexity, which is closely tied to its number of undetermined parameters, further leading to heavy overfitting. To decrease the complexity and improve the generalization of EEG methods, we present a novel l1-norm-based approach to directly combine the decision values obtained from each EEG channel. By extracting the information from different channels on independent frequency bands (FB) with l1-norm regularization, the proposed method fits the training data with far fewer parameters than common spatial pattern (CSP) methods, in order to reduce overfitting. Moreover, an effective and efficient solution to minimize the optimization objective is proposed. The experimental results on dataset IVa of BCI competition III and dataset I of BCI competition IV show that the proposed method achieves high classification accuracy and increases generalization performance for the classification of MI EEG. As the training set ratio decreases from 80 to 20%, the average classification accuracy on the two datasets changes from 85.86 and 86.13% to 84.81 and 76.59%, respectively. The classification performance and generalization of the proposed method contribute to the practical application of MI-based BCI systems. PMID:29867307
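
    The combination step can be sketched with an l1-penalized linear model: per-trial decision values from each channel go in, sparse channel weights come out. Synthetic data below; this is not the authors' solver:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic per-trial decision values for 60 channels, of which only the
    # first 5 carry class information (an assumption of this sketch).
    rng = np.random.default_rng(0)
    n_trials, n_channels = 200, 60
    decision_values = rng.normal(size=(n_trials, n_channels))
    labels = (decision_values[:, :5].sum(axis=1) > 0).astype(int)

    # l1 penalty drives the weights of uninformative channels to exactly zero.
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    clf.fit(decision_values, labels)
    print("non-zero channel weights:", np.sum(clf.coef_ != 0))
    ```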

  3. Microrheology with optical tweezers: measuring the relative viscosity of solutions 'at a glance'.

    PubMed

    Tassieri, Manlio; Del Giudice, Francesco; Robertson, Emma J; Jain, Neena; Fries, Bettina; Wilson, Rab; Glidle, Andrew; Greco, Francesco; Netti, Paolo Antonio; Maffettone, Pier Luca; Bicanic, Tihana; Cooper, Jonathan M

    2015-03-06

    We present a straightforward method for measuring the relative viscosity of fluids via a simple graphical analysis of the normalised position autocorrelation function of an optically trapped bead, without the need of embarking on laborious calculations. The advantages of the proposed microrheology method are evident when it is adopted for measurements of materials whose availability is limited, such as those involved in biological studies. The method has been validated by direct comparison with conventional bulk rheology methods, and has been applied both to characterise synthetic linear polyelectrolytes solutions and to study biomedical samples.
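
    The quantity analyzed graphically is the normalized position autocorrelation of the bead trace. A minimal FFT-based sketch:

    ```python
    import numpy as np

    def normalized_autocorrelation(x):
        """Normalized position autocorrelation A(tau) of a trapped-bead trace."""
        x = np.asarray(x, dtype=float) - np.mean(x)
        n = len(x)
        # FFT-based autocorrelation, unbiased and normalized so A(0) = 1.
        f = np.fft.rfft(x, 2 * n)
        acf = np.fft.irfft(f * np.conj(f), 2 * n)[:n]
        acf /= np.arange(n, 0, -1)
        return acf / acf[0]

    # For a bead in a purely viscous fluid, A(tau) decays as a single
    # exponential whose rate scales with 1/viscosity, which is what makes
    # the 'at a glance' graphical comparison possible.
    ```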

  4. Microrheology with Optical Tweezers: Measuring the relative viscosity of solutions 'at a glance'

    PubMed Central

    Tassieri, Manlio; Giudice, Francesco Del; Robertson, Emma J.; Jain, Neena; Fries, Bettina; Wilson, Rab; Glidle, Andrew; Greco, Francesco; Netti, Paolo Antonio; Maffettone, Pier Luca; Bicanic, Tihana; Cooper, Jonathan M.

    2015-01-01

    We present a straightforward method for measuring the relative viscosity of fluids via a simple graphical analysis of the normalised position autocorrelation function of an optically trapped bead, without the need of embarking on laborious calculations. The advantages of the proposed microrheology method are evident when it is adopted for measurements of materials whose availability is limited, such as those involved in biological studies. The method has been validated by direct comparison with conventional bulk rheology methods, and has been applied both to characterise synthetic linear polyelectrolytes solutions and to study biomedical samples. PMID:25743468

  5. B-spline based image tracking by detection

    NASA Astrophysics Data System (ADS)

    Balaji, Bhashyam; Sithiravel, Rajiv; Damini, Anthony; Kirubarajan, Thiagalingam; Rajan, Sreeraman

    2016-05-01

    Visual image tracking involves the estimation of the motion of any desired targets in a surveillance region using a sequence of images. A standard method of isolating moving targets in image tracking uses background subtraction. The standard background subtraction method is often impacted by irrelevant information in the images, which can lead to poor performance in image-based target tracking. In this paper, a B-Spline based image tracking is implemented. The novel method models the background and foreground using the B-Spline method followed by a tracking-by-detection algorithm. The effectiveness of the proposed algorithm is demonstrated.
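
    The baseline background-subtraction stage can be sketched with OpenCV's stock MOG2 model (the paper's contribution replaces this with a B-spline background/foreground model); the video file name is assumed:

    ```python
    import cv2

    # Standard background subtraction, the baseline the paper improves upon.
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200,
                                                    detectShadows=False)

    cap = cv2.VideoCapture("surveillance.mp4")   # hypothetical input file
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)           # 255 = moving foreground
        mask = cv2.medianBlur(mask, 5)           # suppress isolated noise pixels
        # ...feed mask into the detection/tracking stage...
    cap.release()
    ```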

  6. A simple and efficient method for deriving neurospheres from bone marrow stromal cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang Qin; Mu Jun; Li Qi

    2008-08-08

    Bone marrow stromal cells (MSCs) can be differentiated into neuronal and glial-like cell types under appropriate experimental conditions. However, previously reported methods are complicated and involve the use of toxic reagents. Here, we present a simplified and nontoxic method for efficient conversion of rat MSCs into neurospheres that express the neuroectodermal marker nestin. These neurospheres can proliferate and differentiate into neuron, astrocyte, and oligodendrocyte phenotypes. We thus propose that MSCs are an emerging model cell for the treatment of a variety of neurological diseases.

  7. Methods Used to Support a Life Cycle of Complex Engineering Products

    NASA Astrophysics Data System (ADS)

    Zakharova, Alexandra A.; Kolegova, Olga A.; Nekrasova, Maria E.; Eremenko, Andrey O.

    2016-08-01

    Managers of companies involved in the design, development and operation of complex engineering products recognize the relevance of creating systems for product lifecycle management. A system of methods is proposed to support the life cycles of complex engineering products, based on fuzzy set theory and hierarchical analysis. The system of methods serves to demonstrate the grounds for making strategic decisions in an environment of uncertainty, allows the use of expert knowledge, and provides interconnection of decisions at all phases of strategic management and all stages of a complex engineering product's lifecycle.

  8. Gender Recognition Method Using Near Infrared Ray Spectral Characteristics of Narrow Band

    NASA Astrophysics Data System (ADS)

    Nishino, Satoshi

    Male and female recognition is necessary for strengthening security and for compiling visitor statistics in commercial facilities and similar settings. Conventional male and female recognition relies on characteristics such as a person's dress, way of walking, foot pressure, or hair type, but these characteristics can be intentionally changed by human intervention or design. The proposed method instead captures the difference between male and female characteristics from the absorbance of the fat distribution of the person's cheek, measured with a near-infrared scanning spectrophotometer. Because this approach is a form of biometric authentication, it can distinguish a male from a female even when a male intentionally disguises himself as a female (and vice versa). The proposed method is therefore applicable to security systems.

  9. An image segmentation method based on fuzzy C-means clustering and Cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Mingwei; Wan, Youchuan; Gao, Xianjun; Ye, Zhiwei; Chen, Maolin

    2018-04-01

    Image segmentation is a significant step in image analysis and machine vision. Many approaches have been presented on this topic; among them, fuzzy C-means (FCM) clustering is one of the most widely used methods because of its efficiency and its ability to handle the ambiguity of images. However, the success of FCM is not guaranteed because it is easily trapped in local optima. Cuckoo search (CS) is a novel evolutionary algorithm that has been tested on several optimization problems and proved to be highly efficient. Therefore, a new segmentation technique blending FCM with the CS algorithm is put forward in this paper. Further, the proposed method has been evaluated on several images and compared with other existing FCM techniques, such as genetic algorithm (GA) based FCM and particle swarm optimization (PSO) based FCM, in terms of fitness value. Experimental results indicate that the proposed method is robust and adaptive and exhibits better performance than the other methods involved in the paper.
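
    The FCM core is a two-step alternation between membership and center updates. A minimal NumPy sketch; in the proposed method, cuckoo search would supply better cluster centers than the random initialization used here:

    ```python
    import numpy as np

    def fcm(X, k, m=2.0, iters=100, seed=0):
        """Plain fuzzy C-means on data X of shape (n_samples, n_features)."""
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), k))
        U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships
        for _ in range(iters):
            Um = U ** m
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            U = 1.0 / (d ** (2 / (m - 1)))           # standard FCM update
            U /= U.sum(axis=1, keepdims=True)
        return centers, U

    # Segmentation: cluster pixel intensities, label each pixel by its
    # highest-membership cluster (random image as a stand-in).
    img = np.random.default_rng(1).random((64, 64))
    centers, U = fcm(img.reshape(-1, 1), k=3)
    labels = U.argmax(axis=1).reshape(img.shape)
    ```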

  10. An enhanced multi-view vertical line locus matching algorithm of object space ground primitives based on positioning consistency for aerial and space images

    NASA Astrophysics Data System (ADS)

    Zhang, Ka; Sheng, Yehua; Wang, Meizhen; Fu, Suxia

    2018-05-01

    The traditional multi-view vertical line locus (TMVLL) matching method is an object-space-based method that is commonly used to directly acquire spatial 3D coordinates of ground objects in photogrammetry. However, the TMVLL method can only obtain one elevation and lacks an accurate means of validating the matching results. In this paper, we propose an enhanced multi-view vertical line locus (EMVLL) matching algorithm based on positioning consistency for aerial or space images. The algorithm involves three components: confirming candidate pixels of the ground primitive in the base image, multi-view image matching based on the object space constraints for all candidate pixels, and validating the consistency of the object space coordinates with the multi-view matching result. The proposed algorithm was tested using actual aerial images and space images. Experimental results show that the EMVLL method successfully solves the problems associated with the TMVLL method, and has greater reliability, accuracy and computing efficiency.

  11. Improving human activity recognition and its application in early stroke diagnosis.

    PubMed

    Villar, José R; González, Silvia; Sedano, Javier; Chira, Camelia; Trejo-Gabriel-Galan, Jose M

    2015-06-01

    The development of efficient stroke-detection methods is of significant importance in today's society due to the effects and impact of stroke on health and economy worldwide. This study focuses on Human Activity Recognition (HAR), which is a key component in developing an early stroke-diagnosis tool. An overview of the proposed global approach able to discriminate normal resting from stroke-related paralysis is detailed. The main contributions include an extension of the Genetic Fuzzy Finite State Machine (GFFSM) method and a new hybrid feature selection (FS) algorithm involving Principal Component Analysis (PCA) and a voting scheme putting the cross-validation results together. Experimental results show that the proposed approach is a well-performing HAR tool that can be successfully embedded in devices.

  12. [Clinical bioethics for primary health care].

    PubMed

    González-de Paz, L

    2013-01-01

    The clinical decision-making process with ethical implications in the area of primary healthcare differs from that of other healthcare areas. From the ethical perspective it is important to include these issues in the decision-making model. This paper explains the need for a process of bioethical deliberation for primary healthcare and proposes a method for carrying it out. The decision-process method, adapted to this healthcare area, is flexible and requires a more participative healthcare system. The proposal involves professionals and the patient population equally and is intended to facilitate the acquisition of responsibility for personal and community health. Copyright © 2012 Sociedad Española de Médicos de Atención Primaria (SEMERGEN). Publicado por Elsevier España. All rights reserved.

  13. Multiple directed graph large-class multi-spectral processor

    NASA Technical Reports Server (NTRS)

    Casasent, David; Liu, Shiaw-Dong; Yoneyama, Hideyuki

    1988-01-01

    Numerical analysis techniques for the interpretation of high-resolution imaging-spectrometer data are described and demonstrated. The method proposed involves the use of (1) a hierarchical classifier with a tree structure generated automatically by a Fisher linear-discriminant-function algorithm and (2) a novel multiple-directed-graph scheme which reduces the local maxima and the number of perturbations required. Results for a 500-class test problem involving simulated imaging-spectrometer data are presented in tables and graphs; 100-percent-correct classification is achieved with an improvement factor of 5.

  14. One lens optical correlation: application to face recognition.

    PubMed

    Jridi, Maher; Napoléon, Thibault; Alfalou, Ayman

    2018-03-20

    Despite its extensive use, the traditional 4f Vander Lugt Correlator optical setup can be further simplified. We propose a lightweight correlation scheme where the decision is taken in the Fourier plane. For this purpose, the Fourier plane is adapted and used as a decision plane. Then, the offline phase and the decision metric are re-examined in order to keep a reasonable recognition rate. The benefits of the proposed approach are numerous: (1) it overcomes the constraints related to the use of a second lens; (2) the optical correlation setup is simplified; (3) the multiplication with the correlation filter can be done digitally, which offers a higher adaptability according to the application. Moreover, the digital counterpart of the correlation scheme is lightened since with the proposed scheme we get rid of the inverse Fourier transform (IFT) calculation (i.e., decision directly in the Fourier domain without resorting to IFT). To assess the performance of the proposed approach, an insight into digital hardware resources saving is provided. The proposed method involves nearly 100 times fewer arithmetic operators. Moreover, from experimental results in the context of face verification-based correlation, we demonstrate that the proposed scheme provides comparable or better accuracy than the traditional method. One interesting feature of the proposed scheme is that it could greatly outperform the traditional scheme for face identification application in terms of sensitivity to face orientation. The proposed method is found to be digital/optical implementation-friendly, which facilitates its integration on a very broad range of scenarios.

  15. Computation of type curves for flow to partially penetrating wells in water-table aquifers

    USGS Publications Warehouse

    Moench, Allen F.

    1993-01-01

    Evaluation of Neuman's analytical solution for flow to a well in a homogeneous, anisotropic, water-table aquifer commonly requires large amounts of computation time and can produce inaccurate results for selected combinations of parameters. Large computation times occur because the integrand of a semi-infinite integral involves the summation of an infinite series. Each term of the series requires evaluation of the roots of equations, and the series itself is sometimes slowly convergent. Inaccuracies can result from lack of computer precision or from the use of improper methods of numerical integration. In this paper it is proposed to use a method of numerical inversion of the Laplace transform solution, provided by Neuman, to overcome these difficulties. The solution in Laplace space is simpler in form than the real-time solution; that is, the integrand of the semi-infinite integral does not involve an infinite series or the need to evaluate roots of equations. Because the integrand is evaluated rapidly, advanced methods of numerical integration can be used to improve accuracy with an overall reduction in computation time. The proposed method of computing type curves, for which a partially documented computer program (WTAQ1) was written, was found to reduce computation time by factors of 2 to 20 over the time needed to evaluate the closed-form, real-time solution.
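
    One widely used algorithm for this kind of numerical Laplace inversion is Gaver-Stehfest; the sketch below is illustrative and not necessarily the algorithm implemented in WTAQ1:

    ```python
    import numpy as np
    from math import factorial

    def stehfest_invert(F, t, N=12):
        """Gaver-Stehfest inversion of a Laplace-space function F(s) at time t."""
        ln2 = np.log(2.0)
        f = 0.0
        for i in range(1, N + 1):
            v = 0.0
            for k in range((i + 1) // 2, min(i, N // 2) + 1):
                v += (k ** (N // 2) * factorial(2 * k)
                      / (factorial(N // 2 - k) * factorial(k) * factorial(k - 1)
                         * factorial(i - k) * factorial(2 * k - i)))
            f += (-1) ** (N // 2 + i) * v * F(i * ln2 / t)
        return f * ln2 / t

    # Check against a transform pair with known inverse: L{e^-t} = 1/(s+1).
    for t in (0.5, 1.0, 2.0):
        print(t, stehfest_invert(lambda s: 1.0 / (s + 1.0), t), np.exp(-t))
    ```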

  16. A Novel Approach for Adaptive Signal Processing

    NASA Technical Reports Server (NTRS)

    Chen, Ya-Chin; Juang, Jer-Nan

    1998-01-01

    Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second order statistics. On this basis, the well-known normal equations can be formulated. If high- order statistical stationarity is assumed, then the equivalent normal equations involve high-order signal moments. In either case, the cross moments (second or higher) are needed. This renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed and considerable implementation has been made in our contributions in the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property and offers significant deviations from the standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.
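
    The constant-modulus property mentioned above leads to the classic CMA stochastic-gradient update. A minimal sketch for real-valued signals, illustrating the criterion rather than the report's new algorithms:

    ```python
    import numpy as np

    def cma_filter(x, order=8, mu=1e-3, r2=1.0):
        """Blind adaptive filter driven by the constant modulus criterion.

        Minimizes E[(|y|^2 - R2)^2] by stochastic gradient; no cross moments
        with a desired signal are needed, which is what makes it blind.
        """
        w = np.zeros(order)
        w[0] = 1.0                          # center-spike initialization
        y_out = np.zeros(len(x))
        for n in range(order, len(x)):
            u = x[n - order:n][::-1]        # most recent samples first
            y = w @ u
            w -= mu * (y * y - r2) * y * u  # CMA gradient step (real signals)
            y_out[n] = y
        return y_out, w
    ```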

  17. A Novel Identification Methodology for the Coordinate Relationship between a 3D Vision System and a Legged Robot

    PubMed Central

    Chai, Xun; Gao, Feng; Pan, Yang; Qi, Chenkun; Xu, Yilin

    2015-01-01

    Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations such as a difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified only by moving robots on a relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called “Octopus”, which can traverse through rough terrains with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate our novel method is accurate and robust. PMID:25912350

  18. Computational synchronization of microarray data with application to Plasmodium falciparum.

    PubMed

    Zhao, Wei; Dauwels, Justin; Niles, Jacquin C; Cao, Jianshu

    2012-06-21

    Microarrays are widely used to investigate the blood stage of Plasmodium falciparum infection. Starting with synchronized cells, gene expression levels are continually measured over the 48-hour intra-erythrocytic cycle (IDC). However, the cell population gradually loses synchrony during the experiment. As a result, the microarray measurements are blurred. In this paper, we propose a generalized deconvolution approach to reconstruct the intrinsic expression pattern, and apply it to P. falciparum IDC microarray data. We develop a statistical model for the decay of synchrony among cells, and reconstruct the expression pattern through statistical inference. The proposed method can handle microarray measurements with noise and missing data. The original gene expression patterns become more apparent in the reconstructed profiles, making it easier to analyze and interpret the data. We hypothesize that reconstructed gene expression patterns represent better temporally resolved expression profiles that can be probabilistically modeled to match changes in expression level to IDC transitions. In particular, we identify transcriptionally regulated protein kinases putatively involved in regulating the P. falciparum IDC. By analyzing publicly available microarray data sets for the P. falciparum IDC, protein kinases are ranked in terms of their likelihood to be involved in regulating transitions between the ring, trophozoite and schizont developmental stages of the P. falciparum IDC. In our theoretical framework, a few protein kinases have high probability rankings, and could potentially be involved in regulating these developmental transitions. This study proposes a new methodology for extracting intrinsic expression patterns from microarray data. By applying this method to P. falciparum microarray data, several protein kinases are predicted to play a significant role in the P. falciparum IDC. Earlier experiments have indeed confirmed that several of these kinases are involved in this process. Overall, these results indicate that further functional analysis of these additional putative protein kinases may reveal new insights into how the P. falciparum IDC is regulated.

  19. Some issues related to simulation of the tracking and communications computer network

    NASA Technical Reports Server (NTRS)

    Lacovara, Robert C.

    1989-01-01

    The Communications Performance and Integration branch of the Tracking and Communications Division has an ongoing involvement in the simulation of its flight hardware for Space Station Freedom. Specifically, the communication process between central processor(s) and orbital replaceable units (ORU's) is simulated with varying degrees of fidelity. The results of investigations into three aspects of this simulation effort are given. The most general area involves the use of computer assisted software engineering (CASE) tools for this particular simulation. The second area of interest is simulation methods for systems of mixed hardware and software. The final area investigated is the application of simulation methods to one of the proposed computer network protocols for space station, specifically IEEE 802.4.

  20. Some issues related to simulation of the tracking and communications computer network

    NASA Astrophysics Data System (ADS)

    Lacovara, Robert C.

    1989-12-01

    The Communications Performance and Integration branch of the Tracking and Communications Division has an ongoing involvement in the simulation of its flight hardware for Space Station Freedom. Specifically, the communication process between central processor(s) and orbital replaceable units (ORU's) is simulated with varying degrees of fidelity. The results of investigations into three aspects of this simulation effort are given. The most general area involves the use of computer assisted software engineering (CASE) tools for this particular simulation. The second area of interest is simulation methods for systems of mixed hardware and software. The final area investigated is the application of simulation methods to one of the proposed computer network protocols for space station, specifically IEEE 802.4.

  1. Rh(II)-catalyzed Reactions of Diazoesters with Organozinc Reagents

    PubMed Central

    Panish, Robert; Selvaraj, Ramajeyam; Fox, Joseph M.

    2015-01-01

    Rh(II)-catalyzed reactions of diazoesters with organozinc reagents are described. Diorganozinc reagents participate in reactions with diazo compounds by two distinct, catalyst-dependent mechanisms. With bulky diisopropylethylacetate ligands, the reaction mechanism is proposed to involve initial formation of a Rh-carbene and subsequent carbozincation to give a zinc enolate. With Rh2(OAc)4, it is proposed that initial formation of an azine precedes 1,2-addition by an organozinc reagent. This straightforward route to the hydrazone products provides a useful method for preparing chiral quaternary α-aminoesters or pyrazoles via the Paul-Knorr condensation with 1,3-diketones. Crossover and deuterium labeling experiments provide evidence for the mechanisms proposed. PMID:26241081

  2. Rh(II)-Catalyzed Reactions of Diazoesters with Organozinc Reagents.

    PubMed

    Panish, Robert; Selvaraj, Ramajeyam; Fox, Joseph M

    2015-08-21

    Rh(II)-catalyzed reactions of diazoesters with organozinc reagents are described. Diorganozinc reagents participate in reactions with diazo compounds by two distinct, catalyst-dependent mechanisms. With bulky diisopropylethyl acetate ligands, the reaction mechanism is proposed to involve initial formation of a Rh-carbene and subsequent carbozincation to give a zinc enolate. With Rh2(OAc)4, it is proposed that initial formation of an azine precedes 1,2-addition by an organozinc reagent. This straightforward route to the hydrazone products provides a useful method for preparing chiral quaternary α-aminoesters or pyrazoles via the Paul-Knorr condensation with 1,3-diketones. Crossover and deuterium labeling experiments provide evidence for the mechanisms proposed.

  3. Low-loss ultracompact optical power splitter using a multistep structure.

    PubMed

    Huang, Zhe; Chan, Hau Ping; Afsar Uddin, Mohammad

    2010-04-01

    We propose a low-loss ultracompact optical power splitter for broadband passive optical network applications. The design is based on a multistep structure involving a two-material (core/cladding) system. The performance of the proposed device was evaluated through the three-dimensional finite-difference beam propagation method. By using the proposed design, an excess loss of 0.4 dB was achieved at a full branching angle of 24 degrees. The wavelength-dependent loss was found to be less than 0.3 dB, and the polarization-dependent loss was less than 0.05 dB from O to L bands. The device offers the potential of being mass-produced using low-cost polymer-based embossing techniques.

  4. Ising Processing Units: Potential and Challenges for Discrete Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coffrin, Carleton James; Nagarajan, Harsha; Bent, Russell Whitford

    The recent emergence of novel computational devices, such as adiabatic quantum computers, CMOS annealers, and optical parametric oscillators, presents new opportunities for hybrid-optimization algorithms that leverage these kinds of specialized hardware. In this work, we propose the idea of an Ising processing unit as a computational abstraction for these emerging tools. Challenges involved in using and benchmarking these devices are presented, and open-source software tools are proposed to address some of these challenges. The proposed benchmarking tools and methodology are demonstrated by conducting a baseline study of established solution methods on a D-Wave 2X adiabatic quantum computer, one example of a commercially available Ising processing unit.
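
    The usual baseline for benchmarking such devices is classical simulated annealing on the same Ising form. A minimal sketch, assuming a symmetric coupling matrix with zero diagonal:

    ```python
    import numpy as np

    def anneal_ising(J, h, sweeps=2000, t0=2.0, t1=0.05, seed=0):
        """Simulated annealing for min_s s.T @ J @ s + h @ s, s in {-1,+1}^n.

        Assumes J is symmetric with zero diagonal (a convention of this sketch).
        """
        rng = np.random.default_rng(seed)
        n = len(h)
        s = rng.choice(np.array([-1, 1]), size=n)
        for sweep in range(sweeps):
            temp = t0 * (t1 / t0) ** (sweep / sweeps)   # geometric cooling
            for i in rng.permutation(n):
                # Energy change from flipping spin i.
                dE = -2.0 * s[i] * (2.0 * (J[i] @ s) + h[i])
                if dE <= 0 or rng.random() < np.exp(-dE / temp):
                    s[i] = -s[i]
        return s

    rng = np.random.default_rng(1)
    n = 30
    J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)
    h = rng.normal(size=n)
    print(anneal_ising(J, h)[:10])
    ```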

  5. A Selective Review of Group Selection in High-Dimensional Models

    PubMed Central

    Huang, Jian; Breheny, Patrick; Ma, Shuangge

    2013-01-01

    Grouping structures arise naturally in many statistical modeling problems. Several methods have been proposed for variable selection that respect grouping structure in variables. Examples include the group LASSO and several concave group selection methods. In this article, we give a selective review of group selection concerning methodological developments, theoretical properties and computational algorithms. We pay particular attention to group selection methods involving concave penalties. We address both group selection and bi-level selection methods. We describe several applications of these methods in nonparametric additive models, semiparametric regression, seemingly unrelated regressions, genomic data analysis and genome wide association studies. We also highlight some issues that require further study. PMID:24174707

  6. The analysis of carbohydrates in milk powder by a new "heart-cutting" two-dimensional liquid chromatography method.

    PubMed

    Ma, Jing; Hou, Xiaofang; Zhang, Bing; Wang, Yunan; He, Langchong

    2014-03-01

    In this study, a new "heart-cutting" two-dimensional liquid chromatography method for the simultaneous determination of carbohydrate contents in milk powder was presented. In this two-dimensional liquid chromatography system, a Venusil XBP-C4 analysis column was used in the first dimension ((1)D) as a pre-separation column, and a ZORBAX carbohydrates analysis column was used in the second dimension ((2)D) as a final-analysis column. The whole process was completed in less than 35 min without a particular sample preparation procedure. The capability of the new two-dimensional HPLC method was demonstrated in the determination of carbohydrates in various brands of milk powder samples. A conventional one-dimensional chromatography method was also proposed. The two proposed methods were both validated in terms of linearity, limits of detection, accuracy and precision. The comparison between the results obtained with the two methods showed that the new and completely automated two-dimensional liquid chromatography method is more suitable for milk powder samples because of the online cleanup effect involved. Crown Copyright © 2013. Published by Elsevier B.V. All rights reserved.

  7. Information Pre-Processing using Domain Meta-Ontology and Rule Learning System

    NASA Astrophysics Data System (ADS)

    Ranganathan, Girish R.; Biletskiy, Yevgen

    Around the globe, extraordinary amounts of documents are being created by Enterprises and by users outside these Enterprises. The documents created in the Enterprises constitute the main focus of the present chapter. These documents are used to perform numerous amounts of machine processing. While using these documents for machine processing, a lack of semantics of the information in these documents may cause misinterpretation of the information, thereby inhibiting the productiveness of computer-assisted analytical work. Hence, it would be profitable to the Enterprises if they use well-defined domain ontologies which will serve as rich sources of semantics for the information in the documents. These domain ontologies can be created manually, semi-automatically or fully automatically. The focus of this chapter is to propose an intermediate solution which will enable relatively easy creation of these domain ontologies. The process of extracting and capturing domain ontologies from these voluminous documents requires extensive involvement of domain experts and application of methods of ontology learning that are substantially labor intensive; therefore, some intermediate solutions which would assist in capturing domain ontologies must be developed. This chapter proposes such a solution: building a meta-ontology as a rapid approach to conceptualizing a domain of interest from a huge amount of source documents, with the meta-ontology serving as an intermediate information source for the main domain ontology. This meta-ontology can be populated with ontological concepts, attributes and relations from documents, and then refined to form a better domain ontology, either through automatic ontology learning methods or some other relevant ontology building approach.

  8. Petermann I and II spot size: Accurate semi analytical description involving Nelder-Mead method of nonlinear unconstrained optimization and three parameter fundamental modal field

    NASA Astrophysics Data System (ADS)

    Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal

    2013-01-01

    A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution has been proposed to arrive at an accurate solution for predicting various propagation parameters of graded-index fibers with less computational burden than numerical methods. In our semi-analytical formulation, the core parameter U, which is usually uncertain, noisy or even discontinuous, is optimized by the Nelder-Mead method of nonlinear unconstrained minimization, an efficient and compact direct-search method that needs no derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing a variational technique, Petermann I and II spot sizes have been evaluated for triangular- and trapezoidal-index fibers with the proposed fundamental modal field. It has been demonstrated that the results of the proposed solution identically match the numerical results over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.
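
    The derivative-free optimization step maps directly onto SciPy's Nelder-Mead implementation. A minimal sketch with a placeholder objective standing in for the variational expression:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # The real objective would be the variational expression for the
    # propagation constant as a function of the three shape parameters;
    # the quadratic bowl below is only a placeholder.
    def objective(params):
        a, b, c = params
        return (a - 1.2) ** 2 + (b - 0.7) ** 2 + (c - 0.3) ** 2

    result = minimize(objective, x0=[1.0, 1.0, 1.0], method="Nelder-Mead",
                      options={"xatol": 1e-8, "fatol": 1e-8})
    print(result.x)   # Nelder-Mead needs no gradients of the objective
    ```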

  9. Robust estimation of simulated urinary volume from camera images under bathroom illumination.

    PubMed

    Honda, Chizuru; Bhuiyan, Md Shoaib; Kawanaka, Haruki; Watanabe, Eiichi; Oguri, Koji

    2016-08-01

    General uroflowmetry methods involve a risk of nosocomial infection or require time and effort for the recording. Medical institutions therefore need to measure voided volume simply and hygienically. An earlier study proposed a multiple cylindrical model that can estimate the fluid flow rate from images photographed with a camera. This study implemented flow rate estimation using a general-purpose camera system (Raspberry Pi Camera Module) and the multiple cylindrical model. However, when measurements are performed in the bathroom, variation of the illumination generates large amounts of noise in extracting the liquid region, so the estimation error becomes very large. In other words, the specifications of the previous study's camera setup regarding the shutter type and the frame rate were too strict. In this study, we relax these specifications to achieve flow rate estimation with a general-purpose camera. In order to determine the appropriate approximate curve, we propose a binarizing method using background subtraction at each scanning row and a curve approximation method using RANSAC. Finally, by evaluating the estimation accuracy of our experiment and comparing it with the earlier study's results, we show the effectiveness of our proposed method for flow rate estimation.
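
    The per-row outlier rejection maps naturally onto a RANSAC polynomial fit. A minimal sketch on synthetic stream-edge data (not the authors' implementation):

    ```python
    import numpy as np
    from sklearn.linear_model import RANSACRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    # Synthetic stand-in for the binarized stream widths extracted per
    # scanning row from one camera frame, with illumination outliers.
    rng = np.random.default_rng(0)
    rows = np.linspace(0, 100, 200)
    width = 0.002 * (rows - 50) ** 2 + 5 + rng.normal(0, 0.2, rows.size)
    outliers = rng.choice(rows.size, 30, replace=False)
    width[outliers] += rng.normal(0, 8, 30)

    # Quadratic curve fit with RANSAC; outlier rows are ignored by the fit.
    model = make_pipeline(PolynomialFeatures(degree=2),
                          RANSACRegressor(residual_threshold=1.0))
    model.fit(rows.reshape(-1, 1), width)
    clean_width = model.predict(rows.reshape(-1, 1))   # robust profile estimate
    ```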

  10. The road maintenance funding models in Indonesia use earmarked tax

    NASA Astrophysics Data System (ADS)

    Gultom, Tiopan Henry M.; Tamin, Ofyar Z.; Sjafruddin, Ade; Pradono

    2017-11-01

    One of the solutions for obtaining a sustainable road maintenance fund is to separate road sector revenue from other accounts and then form a specific account for road maintenance. In 2001, Antameng and the Ministry of Public Works proposed a road fund model for Indonesia. The proposed source of the road funds was a tariff added to the nominal total tax. The road funds policy was proposed to finance the maintenance of district and provincial road networks. This research aims to create a policy model for road maintenance funds in Indonesia using an earmarked tax mechanism. The research method is qualitative, with data collected through triangulation; interviews were semi-structured. The strengths, weaknesses, opportunities, and threats of every part of the models were shown on the survey form. Respondents were executives directly involved in the financing of road maintenance. Model validation was conducted by a discussion panel, the Focus Group Discussion (FGD), which involved all selected respondents. The road maintenance financing model most appropriate for Indonesia uses earmarked revenue from the PBBKB, PKB and PPnBM. The revenue collection mechanism is an added tariff on the registered vehicle tax (PKB), the vehicle fuel tax (PBBKB) and the luxury vehicle sales tax (PPnBM). The funds are managed at the provincial level by a public service agency.

  11. A new-old approach for shallow landslide analysis and susceptibility zoning in fine-grained weathered soils of southern Italy

    NASA Astrophysics Data System (ADS)

    Cascini, Leonardo; Ciurleo, Mariantonietta; Di Nocera, Silvio; Gullà, Giovanni

    2015-07-01

    Rainfall-induced shallow landslides occur in several geo-environmental contexts and in different types of soils. In clayey soils, they affect the most superficial layer, which is generally constituted by physically weathered soils characterised by a diffuse pattern of cracks. This type of landslide most commonly occurs in the form of multiple-occurrence landslide phenomena simultaneously involving large areas and thus has several consequences in terms of environmental and economic damage. Consequently, landslide susceptibility zoning is a relevant issue for land use planning and/or design purposes. This study proposes a multi-scale approach to reach this goal. The proposed approach is tested and validated over an area in southern Italy affected by widespread shallow landslides that can be classified as earth slides and earth slide-flows. Specifically, by moving from a small (1:100,000) to a medium scale (1:25,000), with the aid of heuristic and statistical methods, the approach identifies the main factors leading to landslide occurrence and effectively detects the areas potentially affected by these phenomena. Finally, at a larger scale (1:5000), deterministic methods, i.e., physically based models (TRIGRS and TRIGRS-unsaturated), allow quantitative landslide susceptibility assessment, starting from sample areas representative of those that can be affected by shallow landslides. Considering the reliability of the obtained results, the proposed approach seems useful for analysing other case studies in similar geological contexts.

  12. Efficient Feature Selection and Classification of Protein Sequence Data in Bioinformatics

    PubMed Central

    Faye, Ibrahima; Samir, Brahim Belhaouari; Md Said, Abas

    2014-01-01

    Bioinformatics has been an emerging area of research for the last three decades. Its ultimate aims are to store and manage biological data, and to develop and analyze computational tools that enhance their understanding. The size of the data accumulated under various sequencing projects is increasing exponentially, which presents difficulties for the experimental methods. To reduce the gap between newly sequenced proteins and proteins with known functions, many computational techniques involving classification and clustering algorithms have been proposed in the past. The classification of protein sequences into existing superfamilies is helpful in predicting the structure and function of large numbers of newly discovered proteins. Existing classification results are unsatisfactory due to the huge number of features obtained through various feature encoding methods. In this work, a statistical metric-based feature selection technique is proposed in order to reduce the size of the extracted feature vector. The proposed method of protein classification shows significant improvement in terms of performance measures: accuracy, sensitivity, specificity, recall, F-measure, and so forth. PMID:25045727
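
    A minimal sketch of the general idea: score features with a statistical metric to shrink a large encoded feature vector before classification. The ANOVA F-score criterion, the toy data and the retained feature count are assumptions for illustration, not the paper's specific metric.

    ```python
    # Sketch: score features statistically (ANOVA F-score as a stand-in for
    # the paper's metric), keep the top-k, then classify with an SVM.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 500))       # 500 encoded sequence features
    y = rng.integers(0, 4, size=200)      # 4 hypothetical superfamilies
    X[:, :10] += y[:, None] * 0.8         # make 10 features informative

    pipeline = make_pipeline(SelectKBest(f_classif, k=20), SVC(kernel="rbf"))
    scores = cross_val_score(pipeline, X, y, cv=5)
    print("accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
    ```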

  13. Single-shot real-time three dimensional measurement based on hue-height mapping

    NASA Astrophysics Data System (ADS)

    Wan, Yingying; Cao, Yiping; Chen, Cheng; Fu, Guangkai; Wang, Yapin; Li, Chengmeng

    2018-06-01

    A single-shot three-dimensional (3D) measurement method based on hue-height mapping is proposed. The color fringe pattern is encoded by three sinusoidal fringes with the same frequency but different phase shifts in the red (R), green (G) and blue (B) color channels, respectively. It is found that the hue of the captured color fringe pattern on the reference plane remains monotonic within one period even in the presence of color crosstalk. Thus, unlike the traditional color phase-shifting technique, the proposed method utilizes the hue information to decode the color fringe pattern and map it to the fringe displacement at each pixel. Because the hue is monotonic only within one period, displacement unwrapping is proposed to obtain the continuous displacement, which is finally mapped to the height distribution. This method directly utilizes the hue under the effect of color crosstalk for mapping the height, so no color calibration is involved. Also, as it requires only a single shot of the deformed color fringe pattern, this method can be applied to real-time or dynamic 3D measurements.
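
    To make the encoding concrete, the sketch below generates three phase-shifted sinusoidal fringes in the R, G and B channels and recovers the hue, which varies monotonically across one fringe period; the frequency and phase offsets are illustrative assumptions, not the paper's calibration.

    ```python
    # Sketch: encode three sinusoids (same frequency, phases 120 deg apart)
    # into R, G, B, then recover the hue, which is monotonic over one period.
    import numpy as np
    import colorsys

    x = np.linspace(0.0, 1.0, 256)        # one fringe period across the image
    phases = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
    r, g, b = (0.5 + 0.5 * np.cos(2 * np.pi * x - p) for p in phases)

    hue = np.array([colorsys.rgb_to_hsv(ri, gi, bi)[0]
                    for ri, gi, bi in zip(r, g, b)])
    # The hue rises nearly linearly from 0 toward 1 across the period, so a
    # pixel's hue indexes its position within the fringe.
    print(hue[:5], hue[-5:])
    ```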

  14. Based on interval type-2 fuzzy-neural network direct adaptive sliding mode control for SISO nonlinear systems

    NASA Astrophysics Data System (ADS)

    Lin, Tsung-Chih

    2010-12-01

    In this paper, a novel direct adaptive interval type-2 fuzzy-neural tracking controller equipped with sliding mode and a Lyapunov synthesis approach is proposed to handle training data corrupted by noise or rule uncertainties for SISO nonlinear systems involving external disturbances. By employing adaptive fuzzy-neural control theory, update laws are derived for approximating the uncertain nonlinear dynamical system. At the same time, the sliding mode control method and the Lyapunov stability criterion are incorporated into the adaptive fuzzy-neural control scheme so that the derived controller is robust with respect to unmodeled dynamics, external disturbances and approximation errors. In comparison with conventional methods, the advocated approach not only guarantees closed-loop stability but also ensures that the output tracking error of the overall system converges to zero asymptotically without prior knowledge of the upper bound of the lumped uncertainty. Furthermore, the chattering effect of the control input is substantially reduced by the proposed technique. Finally, a simulation example is given to illustrate the performance of the proposed method.

  15. Applying the ecosystem approach to select priority areas for forest landscape restoration in the Yungas, Northwestern Argentina.

    PubMed

    Ianni, Elena; Geneletti, Davide

    2010-11-01

    This paper proposes a method to select forest restoration priority areas consistently with the key principles of the Ecosystem Approach (EA) and the Forest Landscape Restoration (FLR) framework. The methodology is based on the principles shared by the two approaches: acting at ecosystem scale, involving stakeholders, and evaluating alternatives. It proposes the involvement of social actors who have a stake in forest management through multicriteria analysis sessions aimed at identifying the most suitable forest restoration intervention. The method was applied to a study area in the native forests of Northern Argentina (the Yungas). Stakeholders were asked to identify alternative restoration actions, i.e. potential areas implementing FLR. Ten alternative fincas (estates derived from the Spanish land tenure system) differing in ownership, management, land use, land tenure, and size were evaluated. Twenty criteria were selected and classified into four groups: biophysical, social, economic and political. Finca Ledesma was the closest to the economic, social, environmental and political goals, according to the values and views of the actors involved in the decision. This study represented the first attempt to apply EA principles to forest restoration at landscape scale in the Yungas region. The benefits obtained by the application of the method were twofold: on one hand, researchers and local actors were forced to conceive the Yungas as a complex net of rights rather than as a sum of personal interests. On the other hand, the participatory multicriteria approach provided a structured process for collective decision-making in an area where it had never been implemented before.

  16. Applying the Ecosystem Approach to Select Priority Areas for Forest Landscape Restoration in the Yungas, Northwestern Argentina

    NASA Astrophysics Data System (ADS)

    Ianni, Elena; Geneletti, Davide

    2010-11-01

    This paper proposes a method to select forest restoration priority areas consistently with the key principles of the Ecosystem Approach (EA) and the Forest Landscape Restoration (FLR) framework. The methodology is based on the principles shared by the two approaches: acting at ecosystem scale, involving stakeholders, and evaluating alternatives. It proposes the involvement of social actors who have a stake in forest management through multicriteria analysis sessions aimed at identifying the most suitable forest restoration intervention. The method was applied to a study area in the native forests of Northern Argentina (the Yungas). Stakeholders were asked to identify alternative restoration actions, i.e. potential areas implementing FLR. Ten alternative fincas (estates derived from the Spanish land tenure system) differing in ownership, management, land use, land tenure, and size were evaluated. Twenty criteria were selected and classified into four groups: biophysical, social, economic and political. Finca Ledesma was the closest to the economic, social, environmental and political goals, according to the values and views of the actors involved in the decision. This study represented the first attempt to apply EA principles to forest restoration at landscape scale in the Yungas region. The benefits obtained by the application of the method were twofold: on one hand, researchers and local actors were forced to conceive the Yungas as a complex net of rights rather than as a sum of personal interests. On the other hand, the participatory multicriteria approach provided a structured process for collective decision-making in an area where it had never been implemented before.

  17. A Research-Inspired and Computer-Guided Clinical Interview for Mathematics Assessment: Introduction, Reliability and Validity

    ERIC Educational Resources Information Center

    Ginsburg, Herbert P.; Lee, Young-Sun; Pappas, Sandra

    2016-01-01

    Formative assessment involves the gathering of information that can guide the teaching of individual or groups of children. This approach requires a sound understanding of children's thinking and learning, as well as an effective method for gaining the information. We propose that formative assessment should employ a version of clinical…

  18. Asymptotic Standard Errors for Item Response Theory True Score Equating of Polytomous Items

    ERIC Educational Resources Information Center

    Cher Wong, Cheow

    2015-01-01

    Building on previous works by Lord and Ogasawara for dichotomous items, this article proposes an approach to derive the asymptotic standard errors of item response theory true score equating involving polytomous items, for equivalent and nonequivalent groups of examinees. This analytical approach could be used in place of empirical methods like…

  19. A Case Study Exploring Research Communication and Engagement in a Rural Community Experiencing an Environmental Disaster

    ERIC Educational Resources Information Center

    Winters, Charlene A.; Kuntz, Sandra W.; Weinert, Clarann; Black, Brad

    2014-01-01

    As a means to involve the public in research, the National Institutes of Health (NIH) established the Partners in Research Program and solicited research grant applications from academic/scientific institutions and community organizations that proposed to forge partnerships: (a) to study methods and strategies to engage and inform the public…

  20. Post Viking planetary protection requirements study

    NASA Technical Reports Server (NTRS)

    Wolfson, R. P.

    1977-01-01

    Past planetary quarantine requirements were reviewed in the light of present Viking data to determine the steps necessary to prevent contamination of the Martian surface on future missions. The currently used term planetary protection reflects a broader scope of understanding of the problems involved. Various methods of preventing contamination are discussed in relation to proposed projects, specifically the 1984 Rover Mission.

  1. Self-Help Training System for Nursing Students to Learn Patient Transfer Skills

    ERIC Educational Resources Information Center

    Huang, Zhifeng; Nagata, Ayanori; Kanai-Pak, Masako; Maeda, Jukai; Kitajima, Yasuko; Nakamura, Mitsuhiro; Aida, Kyoko; Kuwahara, Noriaki; Ogata, Taiki; Ota, Jun

    2014-01-01

    This paper describes the construction and evaluation of a self-help skill training system for assisting student nurses in learning skills involving the transfer of patients from beds to wheelchairs. We have proposed a feedback method that is based on a checklist and video demonstrations. To help trainees efficiently check their performance and…

  2. 76 FR 76193 - Applications and Amendments to Facility Operating Licenses Involving Proposed No Significant...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-06

    ... NUCLEAR REGULATORY COMMISSION [NRC-2011-0275] Applications and Amendments to Facility Operating.... You may submit comments by any one of the following methods: Federal Rulemaking Web Site: Go to http... Information Comments submitted in writing or in electronic form will be posted on the NRC Web site and on the...

  3. What Does a Transformative Lens Bring to Credible Evidence in Mixed Methods Evaluations?

    ERIC Educational Resources Information Center

    Mertens, Donna M.

    2013-01-01

    Credibility in evaluation is a multifaceted concept that involves consideration of diverse stakeholders' perspectives and purposes. The use of a transformative lens is proposed as a means to bringing issues of social justice and human rights to the foreground in decisions about methodology, credibility of evidence, and use of evaluation…

  4. 76 FR 77538 - Family and Youth Services Bureau; Proposed Information Collection Activity; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-13

    ...) evaluation design, which will involve baseline surveys and two follow-up surveys. This will allow short- and... descriptive analysis of how States designed and implemented PREP programs. The study will use multiple methods... ``Design Survey'', will focus on how states designed programs, and the second round of interviews, known as...

  5. Using Digital Logs to Reduce Academic Misdemeanour by Students in Digital Forensic Assessments

    ERIC Educational Resources Information Center

    Lallie, Harjinder Singh; Lawson, Phillip; Day, David J.

    2011-01-01

    Identifying academic misdemeanours and actual applied effort in student assessments involving practical work can be problematic. For instance, it can be difficult to assess the actual effort that a student applied, the sequence and method applied, and whether there was any form of collusion or collaboration. In this paper we propose a system of…

  6. New methods of testing nonlinear hypothesis using iterative NLLS estimator

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper discusses methods of testing nonlinear hypotheses using an iterative Nonlinear Least Squares (NLLS) estimator. Takeshi Amemiya [1] explained this method. In the present paper, however, a modified Wald test statistic due to Robert Engle [6] is proposed to test nonlinear hypotheses using the iterative NLLS estimator. An alternative method for testing nonlinear hypotheses, using an iterative NLLS estimator based on nonlinear studentized residuals, is also proposed. In addition, an innovative method of testing nonlinear hypotheses using an iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained methods of testing nonlinear hypotheses. This paper uses the asymptotic properties of the nonlinear least squares estimator proposed by Jennrich [8]. The main purpose of this paper is to provide innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors, and studied the problem of heteroscedasticity with reference to nonlinear regression models with suitable illustrations. William Greene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
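
    As a rough illustration of a Wald-type test built on an NLLS fit, the sketch below fits a nonlinear model with SciPy and tests a nonlinear restriction on its parameters via the delta method. The model, restriction and data are hypothetical; this is not the modified statistic derived in the paper.

    ```python
    # Sketch: NLLS fit with SciPy, then a Wald test of a nonlinear restriction
    # g(theta) = 0 via the delta method. Everything here is hypothetical.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import chi2

    def model(x, a, b):
        return a * np.exp(b * x)

    rng = np.random.default_rng(1)
    x = np.linspace(0, 2, 100)
    y = model(x, 2.0, 0.5) + rng.normal(scale=0.1, size=x.size)

    theta, cov = curve_fit(model, x, y, p0=[1.0, 1.0])

    # Nonlinear restriction g(theta) = a * b - 1 = 0 (hypothetical).
    g = theta[0] * theta[1] - 1.0
    grad = np.array([theta[1], theta[0]])        # dg/da, dg/db
    wald = g**2 / (grad @ cov @ grad)            # one restriction
    print("Wald =", wald, "p =", chi2.sf(wald, df=1))
    ```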

  7. A simple and inclusive method to determine the habit plane in transmission electron microscope based on accurate measurement of foil thickness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu, Dong, E-mail: d.qiu@uq.edu.au; Zhang, Mingxing

    2014-08-15

    A simple and inclusive method is proposed for accurate determination of the habit plane between bicrystals in the transmission electron microscope. Whilst this method can be regarded as a variant of surface trace analysis, the major innovation lies in the improved accuracy and efficiency of foil thickness measurement, which involves a simple tilt of the thin foil about a permanent tilting axis of the specimen holder, rather than a cumbersome tilt about the surface trace of the habit plane. An experimental study has been done to validate the proposed method in determining the habit plane between lamellar α2 plates and the γ matrix in a Ti–Al–Nb alloy. Both high accuracy (±1°) and high precision (±1°) have been achieved by using the new method. The source of the experimental errors as well as the applicability of this method is discussed. Some tips to minimise the experimental errors are also suggested. Highlights: • An improved algorithm is formulated to measure the foil thickness. • The habit plane can be determined with a single-tilt holder based on the new algorithm. • Better accuracy and precision within ±1° are achievable using the proposed method. • The data for multi-facet determination can be collected simultaneously.

  8. Link-Based Similarity Measures Using Reachability Vectors

    PubMed Central

    Yoon, Seok-Ho; Kim, Ji-Soo; Ryu, Minsoo; Choi, Ho-Jin

    2014-01-01

    We present a novel approach for computing link-based similarities among objects accurately by utilizing the link information pertaining to the objects involved. We discuss the problems with previous link-based similarity measures and propose a novel approach for computing link-based similarities that does not suffer from these problems. In the proposed approach each target object is represented by a vector. Each element of the vector corresponds to one of the objects in the given data, and the value of each element denotes the weight for the corresponding object. As this weight value, we propose to utilize the probability of reaching the specific object from the target object, computed using the “Random Walk with Restart” strategy. Then, we define the similarity between two objects as the cosine similarity of their two vectors. In this paper, we provide examples to show that our approach does not suffer from the aforementioned problems. We also evaluate the performance of the proposed methods in comparison with existing link-based measures, qualitatively and quantitatively, with respect to two kinds of data sets, scientific papers and Web documents. Our experimental results indicate that the proposed methods significantly outperform the existing measures. PMID:24701188
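
    A small sketch of the underlying idea: compute each node's Random Walk with Restart (RWR) reachability vector by power iteration, then compare two nodes with cosine similarity. The toy graph and restart probability are illustrative assumptions, not the paper's data or settings.

    ```python
    # Sketch: reachability vectors via Random Walk with Restart (power
    # iteration), then cosine similarity between two nodes.
    import numpy as np

    def rwr_vector(adj, seed, restart=0.15, tol=1e-10, max_iter=1000):
        """Stationary RWR distribution started (and restarted) at `seed`."""
        n = adj.shape[0]
        col_sums = adj.sum(axis=0)
        P = adj / np.where(col_sums == 0, 1, col_sums)  # column-stochastic
        e = np.zeros(n)
        e[seed] = 1.0
        r = e.copy()
        for _ in range(max_iter):
            r_next = (1 - restart) * P @ r + restart * e
            if np.abs(r_next - r).sum() < tol:
                break
            r = r_next
        return r

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    # Toy undirected graph: edges 0-1, 1-2, 2-3, 3-0, 1-3.
    adj = np.array([[0, 1, 0, 1],
                    [1, 0, 1, 1],
                    [0, 1, 0, 1],
                    [1, 1, 1, 0]], dtype=float)
    print("similarity(0, 2) =", cosine(rwr_vector(adj, 0), rwr_vector(adj, 2)))
    ```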

  9. A comparison of the environmental impact of different AOPs: risk indexes.

    PubMed

    Giménez, Jaime; Bayarri, Bernardí; González, Óscar; Malato, Sixto; Peral, José; Esplugas, Santiago

    2014-12-31

    Today, the environmental impact associated with pollution treatment is a matter of great concern. A method is proposed for evaluating the environmental risk associated with Advanced Oxidation Processes (AOPs) applied to wastewater treatment. The method is based on the type of pollution (wastewater, solids, air or soil) and on materials and energy consumption. An Environmental Risk Index (E), constructed from the numerical criteria provided, is presented for the environmental comparison of processes and/or operations. The Operation Environmental Risk Index (EOi) for each of the unit operations involved in the process and the Aspects Environmental Risk Index (EAj) for process conditions were also estimated. Relative indexes were calculated to evaluate the risk of each operation (E/NOP) or aspect (E/NAS) involved in the process, and the percentage of the maximum achievable for each operation and aspect was found. A practical application of the method is presented for two AOPs: photo-Fenton and heterogeneous photocatalysis with suspended TiO2 in a Solarbox. The results report the environmental risks associated with each process, so that the AOPs tested and the operations involved in them can be compared.

  10. Empirical comparison study of approximate methods for structure selection in binary graphical models.

    PubMed

    Viallon, Vivian; Banerjee, Onureena; Jougla, Eric; Rey, Grégoire; Coste, Joel

    2014-03-01

    Looking for associations among multiple variables is a topical issue in statistics due to the increasing amount of data encountered in biology, medicine, and many other domains involving statistical applications. Graphical models have recently gained popularity for this purpose in the statistical literature. In the binary case, however, exact inference is generally very slow or even intractable because of the form of the so-called log-partition function. In this paper, we review various approximate methods for structure selection in binary graphical models that have recently been proposed in the literature and compare them through an extensive simulation study. We also propose a modification of one existing method, that is shown to achieve good performance and to be generally very fast. We conclude with an application in which we search for associations among causes of death recorded on French death certificates. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Analysis of cohort studies with multivariate and partially observed disease classification data.

    PubMed

    Chatterjee, Nilanjan; Sinha, Samiran; Diver, W Ryan; Feigelson, Heather Spencer

    2010-09-01

    Complex diseases like cancers can often be classified into subtypes using various pathological and molecular traits of the disease. In this article, we develop methods for analysis of disease incidence in cohort studies incorporating data on multiple disease traits using a two-stage semiparametric Cox proportional hazards regression model that allows one to examine the heterogeneity in the effect of the covariates by the levels of the different disease traits. For inference in the presence of missing disease traits, we propose a generalization of an estimating equation approach for handling missing cause of failure in competing-risk data. We prove asymptotic unbiasedness of the estimating equation method under a general missing-at-random assumption and propose a novel influence-function-based sandwich variance estimator. The methods are illustrated using simulation studies and a real data application involving the Cancer Prevention Study II nutrition cohort.

  12. CNN based approach for activity recognition using a wrist-worn accelerometer.

    PubMed

    Panwar, Madhuri; Dyuthi, S Ram; Chandra Prakash, K; Biswas, Dwaipayan; Acharyya, Amit; Maharatna, Koushik; Gautam, Arvind; Naik, Ganesh R

    2017-07-01

    In recent years, significant advancements have taken place in human activity recognition using various machine learning approaches. However, feature engineering has dominated conventional methods, involving the difficult process of optimal feature selection. This problem has been mitigated by a novel methodology based on a deep learning framework, which automatically extracts useful features and reduces the computational cost. As a proof of concept, we have attempted to design a generalized model for recognizing three fundamental movements of the human forearm performed in daily life, with data collected from four different subjects using a single wrist-worn accelerometer sensor. The proposed model is validated under different pre-processing and noisy data conditions, evaluated using three possible methods. The results show that our proposed methodology achieves an average recognition rate of 99.8%, as opposed to conventional methods based on K-means clustering, linear discriminant analysis and support vector machines.
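
    A minimal sketch of a 1D convolutional network for three-class accelerometer windows, written in PyTorch; the window length, channel counts and layer sizes are assumptions for illustration, not the paper's architecture.

    ```python
    # Sketch: a small 1D CNN classifying 3-axis accelerometer windows into
    # three movement classes. Window length and layer sizes are guesses.
    import torch
    import torch.nn as nn

    class ActivityCNN(nn.Module):
        def __init__(self, n_classes=3, window=128):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(3, 16, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(2),
            )
            self.classifier = nn.Linear(32 * (window // 4), n_classes)

        def forward(self, x):            # x: (batch, 3, window)
            z = self.features(x)
            return self.classifier(z.flatten(1))

    model = ActivityCNN()
    x = torch.randn(8, 3, 128)           # a batch of accelerometer windows
    logits = model(x)                    # (8, 3) class scores
    print(logits.shape)
    ```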

  13. An Exact Formula for Calculating Inverse Radial Lens Distortions

    PubMed Central

    Drap, Pierre; Lefèvre, Julien

    2016-01-01

    This article presents a new approach to calculating the inverse of radial distortions. Reverse radial distortion is currently modeled by a polynomial expression; the method presented here proposes another polynomial expression in which the new coefficients are a function of the original ones. After describing the state of the art, the proposed method is developed. It is based on a formal calculus involving a power series used to deduce a recursive formula for the new coefficients. We present several implementations of this method and describe the experiments conducted to assess the validity of the new approach. Such a non-iterative approach, using another polynomial expression that can be deduced from the first one, can be attractive in terms of performance, reuse of existing software, or bridging between different existing software tools that do not consider distortion from the same point of view. PMID:27258288
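
    The sketch below illustrates the general problem: given forward radial distortion coefficients, recover inverse polynomial coefficients. It uses a simple least-squares fit as a stand-in for the closed-form recursion derived in the article, and the coefficient values are hypothetical.

    ```python
    # Sketch: approximate inverse radial-distortion coefficients by least
    # squares. The article derives exact recursive formulas; this numerical
    # fit only shows the relation r = rd * (1 + b1*rd^2 + b2*rd^4 + ...).
    import numpy as np

    k = [0.1, -0.02]                         # hypothetical forward coefficients
    r = np.linspace(0, 1, 2000)              # undistorted radii
    rd = r * (1 + k[0] * r**2 + k[1] * r**4)  # forward distortion

    # Fit r ~ rd * (1 + b1*rd^2 + b2*rd^4): linear least squares in b1, b2.
    A = np.column_stack([rd**3, rd**5])
    b1, b2 = np.linalg.lstsq(A, r - rd, rcond=None)[0]
    print("inverse coefficients:", b1, b2)

    # Check the round-trip accuracy of the approximate inverse.
    r_back = rd * (1 + b1 * rd**2 + b2 * rd**4)
    print("max round-trip error:", np.abs(r_back - r).max())
    ```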

  14. Novel non-contact control system of electric bed for medical healthcare.

    PubMed

    Lo, Chi-Chun; Tsai, Shang-Ho; Lin, Bor-Shyh

    2017-03-01

    A novel non-contact controller for an electric bed for medical healthcare was proposed in this study. Electric beds are widely used in hospitals and home care, and their conventional control method usually involves manual operation. However, disabled and bedridden patients, who may depend totally on others, find it difficult to operate conventional electric beds by themselves. Unlike the current control method, the proposed system provides a new concept of controlling the electric bed via visual stimuli, without manual operation. Disabled patients could operate the electric bed by focusing on the control icons of a visual-stimulus tablet in the proposed system. In addition, a wearable and wireless EEG acquisition module was implemented to monitor the EEG signals of patients. The experimental results showed that the proposed system successfully measured and extracted the EEG features related to visual stimuli, and that disabled patients could operate the adjustable functions of the electric bed by themselves, effectively reducing the long-term care burden.

  15. Photodegradation of Paracetamol in Nitrate Solution

    NASA Astrophysics Data System (ADS)

    Meng, Cui; Qu, Ruijuan; Liang, Jinyan; Yang, Xi

    2010-11-01

    The photodegradation of paracetamol in nitrate solution under simulated solar irradiation has been investigated. The degradation rates were compared by varying environmental parameters, including the nitrate ion concentration, humic substance concentration and pH value. Paracetamol was quantified by an HPLC method. The results demonstrate that the photodegradation of paracetamol followed first-order kinetics. The photoproducts and intermediates of paracetamol in the presence of nitrate ions were identified by an extensive GC-MS method. Photodegradation pathways involving •OH radicals as reactive species were proposed.

  16. Modulation Based on Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    2009-01-01

    A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
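
    As a toy illustration of the observation above, the sketch samples a sinusoid over one half cycle and builds the histogram that serves as the waveform's PDF signature; the bin count and sampling rate are arbitrary choices, not NASA's parameters.

    ```python
    # Sketch: histogram of waveform samples over one half cycle, approximating
    # the PDF that characterizes the carrier in this modulation concept.
    import numpy as np

    fs, f0 = 10000.0, 100.0                 # sample rate and carrier freq (Hz)
    t = np.arange(0, 0.5 / f0, 1.0 / fs)    # exactly one half cycle
    samples = np.sin(2 * np.pi * f0 * t)

    pdf, edges = np.histogram(samples, bins=16, density=True)
    # A pure sinusoid piles probability near its extremes (arcsine-like
    # distribution); a modulation scheme could shape the waveform so that
    # this histogram, hence the conveyed symbol, changes.
    print(np.round(pdf, 2))
    ```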

  17. Analysis of psilocybin and psilocin in Psilocybe subcubensis Guzmán by ion mobility spectrometry and gas chromatography-mass spectrometry.

    PubMed

    Keller, T; Schneider, A; Regenscheit, P; Dirnhofer, R; Rücker, T; Jaspers, J; Kisser, W

    1999-01-11

    A new method has been developed for the rapid analysis of psilocybin and/or psilocin in fungus material using ion mobility spectrometry. Quantitative analysis was performed by gas chromatography-mass spectrometry after a simple one-step extraction involving homogenization of the dried fruit bodies of fungi in chloroform and derivatization with MSTFA. The proposed methods resulted in rapid procedures useful in analyzing psychotropic fungi for psilocybin and psilocin.

  18. Photodegradation of Paracetamol in Nitrate Solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng Cui; Qu Ruijuan; Liang Jinyan

    2010-11-24

    The photodegradation of paracetamol in nitrate solution under simulated solar irradiation has been investigated. The degradation rates were compared by varying environmental parameters, including the nitrate ion concentration, humic substance concentration and pH value. Paracetamol was quantified by an HPLC method. The results demonstrate that the photodegradation of paracetamol followed first-order kinetics. The photoproducts and intermediates of paracetamol in the presence of nitrate ions were identified by an extensive GC-MS method. Photodegradation pathways involving •OH radicals as reactive species were proposed.

  19. Sieve estimation in semiparametric modeling of longitudinal data with informative observation times.

    PubMed

    Zhao, Xingqiu; Deng, Shirong; Liu, Li; Liu, Lei

    2014-01-01

    Analyzing irregularly spaced longitudinal data often involves modeling possibly correlated response and observation processes. In this article, we propose a new class of semiparametric mean models that allows for the interaction between the observation history and covariates, leaving patterns of the observation process to be arbitrary. For inference on the regression parameters and the baseline mean function, a spline-based least squares estimation approach is proposed. The consistency, rate of convergence, and asymptotic normality of the proposed estimators are established. Our new approach is different from the usual approaches relying on the model specification of the observation scheme, and it can be easily used for predicting the longitudinal response. Simulation studies demonstrate that the proposed inference procedure performs well and is more robust. The analyses of bladder tumor data and medical cost data are presented to illustrate the proposed method.

  20. Elimination of initial stress-induced curvature in a micromachined bi-material composite-layered cantilever

    NASA Astrophysics Data System (ADS)

    Liu, Ruiwen; Jiao, Binbin; Kong, Yanmei; Li, Zhigang; Shang, Haiping; Lu, Dike; Gao, Chaoqun; Chen, Dapeng

    2013-09-01

    Micro-devices with a bi-material-cantilever (BMC) commonly suffer initial curvature due to the mismatch of residual stress. Traditional corrective methods to reduce the residual stress mismatch generally involve the development of different material deposition recipes. In this paper, a new method for reducing residual stress mismatch in a BMC is proposed based on various previously developed deposition recipes. An initial material film is deposited using two or more developed deposition recipes. This first film is designed to introduce a stepped stress gradient, which is then balanced by overlapping a second material film on the first and using appropriate deposition recipes to form a nearly stress-balanced structure. A theoretical model is proposed based on both the moment balance principle and total equal strain at the interface of two adjacent layers. Experimental results and analytical models suggest that the proposed method is effective in producing multi-layer micro cantilevers that display balanced residual stresses. The method provides a generic solution to the problem of mismatched initial stresses which universally exists in micro-electro-mechanical systems (MEMS) devices based on a BMC. Moreover, the method can be incorporated into a MEMS design automation package for efficient design of various multiple material layer devices from MEMS material library and developed deposition recipes.

  1. Long-Term Deflection Prediction from Computer Vision-Measured Data History for High-Speed Railway Bridges

    PubMed Central

    Lee, Jaebeom; Lee, Young-Joo

    2018-01-01

    Management of the vertical long-term deflection of a high-speed railway bridge is a crucial factor to guarantee traffic safety and passenger comfort. Therefore, there have been efforts to predict the vertical deflection of a railway bridge based on physics-based models representing various influential factors to vertical deflection such as concrete creep and shrinkage. However, it is not an easy task because the vertical deflection of a railway bridge generally involves several sources of uncertainty. This paper proposes a probabilistic method that employs a Gaussian process to construct a model to predict the vertical deflection of a railway bridge based on actual vision-based measurement and temperature. To deal with the sources of uncertainty which may cause prediction errors, a Gaussian process is modeled with multiple kernels and hyperparameters. Once the hyperparameters are identified through the Gaussian process regression using training data, the proposed method provides a 95% prediction interval as well as a predictive mean about the vertical deflection of the bridge. The proposed method is applied to an arch bridge under operation for high-speed trains in South Korea. The analysis results obtained from the proposed method show good agreement with the actual measurement data on the vertical deflection of the example bridge, and the prediction results can be utilized for decision-making on railway bridge maintenance. PMID:29747421

  2. Long-Term Deflection Prediction from Computer Vision-Measured Data History for High-Speed Railway Bridges.

    PubMed

    Lee, Jaebeom; Lee, Kyoung-Chan; Lee, Young-Joo

    2018-05-09

    Management of the vertical long-term deflection of a high-speed railway bridge is a crucial factor to guarantee traffic safety and passenger comfort. Therefore, there have been efforts to predict the vertical deflection of a railway bridge based on physics-based models representing various influential factors to vertical deflection such as concrete creep and shrinkage. However, it is not an easy task because the vertical deflection of a railway bridge generally involves several sources of uncertainty. This paper proposes a probabilistic method that employs a Gaussian process to construct a model to predict the vertical deflection of a railway bridge based on actual vision-based measurement and temperature. To deal with the sources of uncertainty which may cause prediction errors, a Gaussian process is modeled with multiple kernels and hyperparameters. Once the hyperparameters are identified through the Gaussian process regression using training data, the proposed method provides a 95% prediction interval as well as a predictive mean about the vertical deflection of the bridge. The proposed method is applied to an arch bridge under operation for high-speed trains in South Korea. The analysis results obtained from the proposed method show good agreement with the actual measurement data on the vertical deflection of the example bridge, and the prediction results can be utilized for decision-making on railway bridge maintenance.
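
    A compact sketch of the modeling idea: Gaussian process regression with a composite kernel whose hyperparameters are fitted to training data, returning a predictive mean and a 95% interval. The kernel choice and the synthetic data are assumptions, not the paper's exact setup.

    ```python
    # Sketch: GP regression with multiple combined kernels; predictive mean
    # plus a 95% interval. Kernel structure and data are illustrative.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel, DotProduct

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 80)[:, None]      # time (e.g., days)
    deflection = 0.2 * t.ravel() + np.sin(t.ravel()) + rng.normal(0, 0.2, 80)

    kernel = DotProduct() + RBF(length_scale=1.0) + WhiteKernel(noise_level=0.05)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(t, deflection)

    t_new = np.linspace(0, 12, 50)[:, None]
    mean, std = gp.predict(t_new, return_std=True)
    lower, upper = mean - 1.96 * std, mean + 1.96 * std   # 95% interval
    ```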

  3. Lagrangian numerical methods for ocean biogeochemical simulations

    NASA Astrophysics Data System (ADS)

    Paparella, Francesco; Popolizio, Marina

    2018-05-01

    We propose two closely-related Lagrangian numerical methods for the simulation of physical processes involving advection, reaction and diffusion. The methods are intended to be used in settings where the flow is nearly incompressible and the Péclet numbers are so high that resolving all the scales of motion is unfeasible. This is commonplace in ocean flows. Our methods consist of augmenting the method of characteristics, which is suitable for advection-reaction problems, with couplings among nearby particles, producing fluxes that mimic diffusion, or unresolved small-scale transport. The methods conserve mass, obey the maximum principle, and allow the strength of the diffusive terms to be tuned down to zero, while avoiding unwanted numerical dissipation effects.

  4. Inverse Abbe-method for observing small refractive index changes in liquids.

    PubMed

    Räty, Jukka; Peiponen, Kai-Erik

    2015-05-01

    This study concerns an optical method for the detection of minuscule refractive index changes in the liquid phase. The proposed method reverses the operation of the traditional Abbe refractometer and thus exploits the light dispersion properties of materials, i.e. the dependence of the refractive index on the light wavelength. In practice, the method involves the detection of light reflection spectra in the visible spectral range. This inverse Abbe method is suitable for liquid quality studies, e.g. for monitoring water purity. Tests have shown that the method reveals NaCl or ethanol concentrations in water below one per mil. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Service quality benchmarking via a novel approach based on fuzzy ELECTRE III and IPA: an empirical case involving the Italian public healthcare context.

    PubMed

    La Fata, Concetta Manuela; Lupo, Toni; Piazza, Tommaso

    2017-11-21

    A novel fuzzy-based approach which combines ELECTRE III along with the Importance-Performance Analysis (IPA) is proposed in the present work to comparatively evaluate the service quality in the public healthcare context. Specifically, ELECTRE III is firstly considered to compare the service performance of examined hospitals in a noncompensatory manner. Afterwards, IPA is employed to support the service quality management to point out improvement needs and their priorities. The proposed approach also incorporates features of the Fuzzy Set Theory so as to address the possible uncertainty, subjectivity and vagueness of involved experts in evaluating the service quality. The model is applied to five major Sicilian public hospitals, and strengths and criticalities of the delivered service are finally highlighted and discussed. Although several approaches combining multi-criteria methods have already been proposed in the literature to evaluate the service performance in the healthcare field, to the best of the authors' knowledge the present work represents the first attempt at comparing service performance of alternatives in a noncompensatory manner in the investigated context.

  6. 45 CFR 46.118 - Applications and proposals lacking definite plans for involvement of human subjects.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Applications and proposals lacking definite plans for involvement of human subjects. 46.118 Section 46.118 Public Welfare Department of Health and Human... Research Subjects § 46.118 Applications and proposals lacking definite plans for involvement of human...

  7. 45 CFR 46.118 - Applications and proposals lacking definite plans for involvement of human subjects.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Applications and proposals lacking definite plans for involvement of human subjects. 46.118 Section 46.118 Public Welfare DEPARTMENT OF HEALTH AND HUMAN... Research Subjects § 46.118 Applications and proposals lacking definite plans for involvement of human...

  8. 45 CFR 46.118 - Applications and proposals lacking definite plans for involvement of human subjects.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 1 2012-10-01 2012-10-01 false Applications and proposals lacking definite plans for involvement of human subjects. 46.118 Section 46.118 Public Welfare DEPARTMENT OF HEALTH AND HUMAN... Research Subjects § 46.118 Applications and proposals lacking definite plans for involvement of human...

  9. 45 CFR 46.118 - Applications and proposals lacking definite plans for involvement of human subjects.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Applications and proposals lacking definite plans for involvement of human subjects. 46.118 Section 46.118 Public Welfare DEPARTMENT OF HEALTH AND HUMAN... Research Subjects § 46.118 Applications and proposals lacking definite plans for involvement of human...

  10. Detection of Road Markings Recorded in In-Vehicle Camera Images by Using Position-Dependent Classifiers

    NASA Astrophysics Data System (ADS)

    Noda, Masafumi; Takahashi, Tomokazu; Deguchi, Daisuke; Ide, Ichiro; Murase, Hiroshi; Kojima, Yoshiko; Naito, Takashi

    In this study, we propose a method for detecting road markings recorded in images captured by an in-vehicle camera by using position-dependent classifiers. Road markings are symbols painted on the road surface that help prevent traffic accidents and keep traffic smooth. Therefore, driver support systems that detect road markings are required, such as systems that warn drivers when road markings are overlooked or that support stopping the vehicle. Detecting road markings is difficult because their appearance changes with the actual traffic conditions, e.g. in shape and resolution. The variation in appearance depends on the positional relation between the vehicle and the road markings, and on the vehicle posture. Although these variations are quite large in an entire image, they are relatively small in a local area of the image. Therefore, we try to improve the detection performance by taking these local variations in appearance into account. We propose a method in which a position-dependent classifier is used to detect road markings recorded in images captured by an in-vehicle camera. Further, to train the classifier efficiently, we propose a generative learning method that takes into consideration the positional relation between the vehicle and the road markings, as well as the vehicle posture. Experimental results showed that the detection performance of the proposed method was better than that of a method using a single classifier.

  11. Leveling data in geochemical mapping: scope of application, pros and cons of existing methods

    NASA Astrophysics Data System (ADS)

    Pereira, Benoît; Vandeuren, Aubry; Sonnet, Philippe

    2017-04-01

    Geochemical mapping successfully met a range of needs from mineral exploration to environmental management. In Europe and around the world numerous geochemical datasets already exist. These datasets may originate from geochemical mapping projects or from the collection of sample analyses requested by environmental protection regulatory bodies. Combining datasets can be highly beneficial for establishing geochemical maps with increased resolution and/or coverage area. However this practice requires assessing the equivalence between datasets and, if needed, applying data leveling to remove possible biases between datasets. In the literature, several procedures for assessing dataset equivalence and leveling data are proposed. Daneshfar & Cameron (1998) proposed a method for the leveling of two adjacent datasets while Pereira et al. (2016) proposed two methods for the leveling of datasets that contain records located within the same geographical area. Each discussed method requires its own set of assumptions (underlying populations of data, spatial distribution of data, etc.). Here we propose to discuss the scope of application, pros, cons and practical recommendations for each method. This work is illustrated with several case studies in Wallonia (Southern Belgium) and in Europe involving trace element geochemical datasets. References: Daneshfar, B. & Cameron, E. (1998), Leveling geochemical data between map sheets, Journal of Geochemical Exploration 63(3), 189-201. Pereira, B.; Vandeuren, A.; Govaerts, B. B. & Sonnet, P. (2016), Assessing dataset equivalence and leveling data in geochemical mapping, Journal of Geochemical Exploration 168, 36-48.

  12. Formally exact integral equation theory of the exchange-only potential in density functional theory: Refined closure approximation

    NASA Astrophysics Data System (ADS)

    March, N. H.; Nagy, Á.

    A formally exact integral equation theory for the exchange-only potential Vx(r) in density functional theory was recently set up by Howard and March [I.A. Howard, N.H. March, J. Chem. Phys. 119 (2003) 5789]. It involved a 'closure' function P(r) satisfying the exact sum rule ∫ P(r) dr = 0. The simplest choice P(r) = 0 then recovers the approximation proposed by Della Sala and Görling [F. Della Sala, A. Görling, J. Chem. Phys. 115 (2001) 5718] and by Gritsenko and Baerends [O.V. Gritsenko, E.J. Baerends, Phys. Rev. A 64 (2001) 042506]. Here, refined choices of P(r) are proposed, the most direct being based on the KLI (Krieger-Li-Iafrate) approximation. A further choice given some attention is one in which P(r) involves frontier orbital properties. In particular, the introduction of the LUMO (lowest unoccupied molecular orbital), along with the energy separation between the HOMO (highest occupied molecular orbital) and LUMO levels, should prove a significant step beyond current approximations to the optimized potential method, all of which involve only single-particle occupied orbitals.

  13. Design and analysis of multiple diseases genome-wide association studies without controls.

    PubMed

    Chen, Zhongxue; Huang, Hanwen; Ng, Hon Keung Tony

    2012-11-15

    In genome-wide association studies (GWAS), multiple diseases with shared controls is one of the case-control study designs. If data obtained from these studies are appropriately analyzed, this design can have several advantages, such as improving statistical power in detecting associations and reducing the time and cost of the data collection process. In this paper, we propose a study design for GWAS that involves multiple diseases but no controls, together with a corresponding statistical analysis strategy. Through a simulation study, we show that the statistical association test with the proposed study design is more powerful than the test with a single disease sharing common controls, and that it has power comparable to the overall test based on the whole dataset including the controls. We also apply the proposed method to a real GWAS dataset to illustrate the methodology and the advantages of the proposed design. Some possible limitations of this study design and testing method, and their solutions, are also discussed. Our findings indicate that the proposed study design and statistical analysis strategy could be more efficient than the usual case-control GWAS as well as those with shared controls. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. Hybrid particle-field molecular dynamics simulation for polyelectrolyte systems.

    PubMed

    Zhu, You-Liang; Lu, Zhong-Yuan; Milano, Giuseppe; Shi, An-Chang; Sun, Zhao-Yan

    2016-04-14

    To achieve simulations on large spatial and temporal scales with high molecular chemical specificity, a hybrid particle-field method was proposed recently. This method is developed by combining molecular dynamics and self-consistent field theory (MD-SCF). The MD-SCF method has been validated by successfully predicting the experimentally observable properties of several systems. Here we propose an efficient scheme for the inclusion of electrostatic interactions in the MD-SCF framework. In this scheme, charged molecules are interacting with the external fields that are self-consistently determined from the charge densities. This method is validated by comparing the structural properties of polyelectrolytes in solution obtained from the MD-SCF and particle-based simulations. Moreover, taking PMMA-b-PEO and LiCF3SO3 as examples, the enhancement of immiscibility between the ion-dissolving block and the inert block by doping lithium salts into the copolymer is examined by using the MD-SCF method. By employing GPU-acceleration, the high performance of the MD-SCF method with explicit treatment of electrostatics facilitates the simulation study of many problems involving polyelectrolytes.

  15. Report: Unsupervised identification of malaria parasites using computer vision.

    PubMed

    Khan, Najeed Ahmed; Pervaz, Hassan; Latif, Arsalan; Musharaff, Ayesha

    2017-01-01

    Malaria in humans is a serious and fatal tropical disease. It results from Anopheles mosquitoes that are infected by Plasmodium species. The clinical diagnosis of malaria, based on history, symptoms and clinical findings, must always be confirmed by laboratory diagnosis. Laboratory diagnosis of malaria involves identification of the malaria parasite or its antigens/products in the blood of the patient. Manual diagnosis of the malaria parasite by pathologists has proven cumbersome. Therefore, there is a need for automatic, efficient and accurate identification of the malaria parasite. In this paper, we propose a computer vision based approach to identify the malaria parasite from light microscopy images. This research deals with the challenges involved in the automatic detection of malaria parasite tissues. Our proposed method is based on a pixel-based approach: we used K-means clustering (an unsupervised approach) for segmentation to identify malaria parasite tissues.
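
    A minimal sketch of pixel-level K-means segmentation with scikit-learn; the cluster count and the synthetic image are placeholders, not the authors' data or tuned parameters.

    ```python
    # Sketch: unsupervised pixel clustering with K-means to segment candidate
    # parasite regions. Cluster count and input image are placeholders.
    import numpy as np
    from sklearn.cluster import KMeans

    def segment_pixels(rgb_image, n_clusters=3, seed=0):
        """Cluster pixels by color; returns a label image of cluster indices."""
        h, w, c = rgb_image.shape
        pixels = rgb_image.reshape(-1, c).astype(float)
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=seed).fit_predict(pixels)
        return labels.reshape(h, w)

    # Usage with a synthetic stand-in for a stained blood-smear image:
    rng = np.random.default_rng(0)
    image = rng.uniform(0, 255, size=(64, 64, 3))
    image[20:30, 20:30] = (120, 40, 140)   # a dark "parasite-like" patch
    label_map = segment_pixels(image, n_clusters=3)
    ```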

  16. Combined non-parametric and parametric approach for identification of time-variant systems

    NASA Astrophysics Data System (ADS)

    Dziedziech, Kajetan; Czop, Piotr; Staszewski, Wieslaw J.; Uhl, Tadeusz

    2018-03-01

    Identification of systems, structures and machines with variable physical parameters is a challenging task especially when time-varying vibration modes are involved. The paper proposes a new combined, two-step - i.e. non-parametric and parametric - modelling approach in order to determine time-varying vibration modes based on input-output measurements. Single-degree-of-freedom (SDOF) vibration modes from multi-degree-of-freedom (MDOF) non-parametric system representation are extracted in the first step with the use of time-frequency wavelet-based filters. The second step involves time-varying parametric representation of extracted modes with the use of recursive linear autoregressive-moving-average with exogenous inputs (ARMAX) models. The combined approach is demonstrated using system identification analysis based on the experimental mass-varying MDOF frame-like structure subjected to random excitation. The results show that the proposed combined method correctly captures the dynamics of the analysed structure, using minimum a priori information on the model.

  17. Consumer involvement in the health technology assessment program.

    PubMed

    Royle, Jane; Oliver, Sandy

    2004-01-01

    This study aims to describe a cycle of development leading to sustainable methods for involving consumers in the management of a program commissioning health technology assessment. Staff time was dedicated to developing procedures for recruiting and briefing consumers to participate in prioritizing, commissioning, and reporting research. Resources and support were developed in light of early feedback from consumers and those working with them. These were piloted and amended before being used routinely. Over 4 years, procedures and resources have been developed to support six consumers attending seven to eight prioritization meetings a year; thirty to forty-five consumers each year commenting on research need for particular topics; thirty consumers a year commenting on research proposals, and twenty a year commenting on research reports. The procedures include clear job descriptions, induction and development days, clear briefing materials, payment for substantial tasks, and regularly seeking feedback to improve procedures. Explicit, inclusive, and reproducible methods for supporting consumer involvement that satisfy National Health Service policy recommendations for involving consumers in research require dedicated staff time to support a cycle of organizational development.

  18. Mediating the Cognitive Walkthrough with Patient Groups to achieve Personalized Health in Chronic Disease Self-Management System Evaluation.

    PubMed

    Georgsson, Mattias; Kushniruk, Andre

    2016-01-01

    The cognitive walkthrough (CW) is a task-based, expert-inspection usability evaluation method with benefits such as cost-effectiveness and efficiency. A drawback of the method is that it does not involve the perspective of real users; instead, it is based on experts' predictions about the usability of the system and how users interact with it. In this paper, we propose a way of involving the user in an expert evaluation method by modifying the CW with patient groups as mediators. This modification, among others, includes a dual-domain session facilitator, specific patient groups, and three phases: 1) a preparation phase, in which suitable tasks are developed by a panel of experts and patients and validated through the content validity index; 2) a patient user evaluation phase, including an individual and a collaborative process part; and 3) an analysis and coding phase, in which all data are digitalized and synthesized using Qualitative Data Analysis Software (QDAS) to determine usability deficiencies. We predict that this way of evaluating will retain the benefits of expert methods while also providing a way of including the patient users of these self-management systems. Results from this prospective study should provide evidence of the usefulness of this method modification.

  19. An EEG-based functional connectivity measure for automatic detection of alcohol use disorder.

    PubMed

    Mumtaz, Wajid; Saad, Mohamad Naufal B Mohamad; Kamel, Nidal; Ali, Syed Saad Azhar; Malik, Aamir Saeed

    2018-01-01

    Excessive alcohol consumption can cause toxicity and can alter the structure and function of the human brain, a condition termed alcohol use disorder (AUD). Unfortunately, the conventional screening methods for AUD patients are subjective and manual. Hence, objective methods are needed to perform automatic screening of AUD patients. Electroencephalographic (EEG) data have been utilized to study the differences in brain signals between alcoholics and healthy controls, and these differences could be further developed into an automatic screening tool for alcoholics. In this work, resting-state EEG-derived features were utilized as input data to the proposed feature selection and classification method. The aim was to perform automatic classification of AUD patients and healthy controls. The validation of the proposed method involved real EEG data acquired from 30 AUD patients and 30 age-matched healthy controls. The resting-state EEG-derived features, such as synchronization likelihood (SL), were computed over 19 scalp locations, resulting in 513 features. Furthermore, the features were rank-ordered to select the most discriminant ones, using a rank-based feature selection method with the receiver operating characteristic (ROC) as the criterion. Consequently, a reduced set of the most discriminant features was identified and used for classification of AUD patients and healthy controls. In this study, three different classification models were used: Support Vector Machine (SVM), Naïve Bayesian (NB), and Logistic Regression (LR). The study yielded SVM classification accuracy=98%, sensitivity=99.9%, specificity=95%, and f-measure=0.97; LR classification accuracy=91.7%, sensitivity=86.66%, specificity=96.6%, and f-measure=0.90; NB classification accuracy=93.6%, sensitivity=100%, specificity=87.9%, and f-measure=0.95. The SL features could thus serve as objective markers to screen AUD patients against healthy controls. Copyright © 2017 Elsevier B.V. All rights reserved.
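
    As a rough sketch of the described pipeline, the snippet below ranks features by an ROC-based criterion and cross-validates an SVM on synthetic stand-in data; the number of retained features and all settings are illustrative assumptions, not the paper's.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # X: 60 subjects x 513 features (synthetic stand-in), y: 0=control, 1=AUD
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 513))
    y = np.repeat([0, 1], 30)

    # rank every feature by its ROC area (distance from 0.5 = separability)
    auc = np.array([roc_auc_score(y, X[:, j]) for j in range(X.shape[1])])
    top = np.argsort(np.abs(auc - 0.5))[::-1][:20]   # 20 most discriminant

    # in a rigorous study the ranking would be nested inside the CV folds
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    print(cross_val_score(clf, X[:, top], y, cv=5).mean())
    ```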

  20. Conformal mapping for multiple terminals

    PubMed Central

    Wang, Weimin; Ma, Wenying; Wang, Qiang; Ren, Hao

    2016-01-01

    Conformal mapping is an important mathematical tool that can be used to solve various physical and engineering problems in many fields, including electrostatics, fluid mechanics, classical mechanics, and transformation optics. It is an accurate and convenient way to solve problems involving two terminals. However, when faced with problems involving three or more terminals, which are more common in practical applications, existing conformal mapping methods resort to assumptions or approximations. A general exact method does not exist for a structure with an arbitrary number of terminals. This study presents a conformal mapping method for multiple terminals. Through an accurate analysis of boundary conditions, additional terminals or boundaries are folded into the inner part of a mapped region. The method is applied to several typical situations, and the calculation process is described for two examples: an electrostatic actuator with three electrodes and a light beam splitter with three ports. Compared with previously reported results, the solutions for the two examples based on our method are more precise and general. The proposed method is helpful in promoting the application of conformal mapping to the analysis of practical problems. PMID:27830746

  1. Theory and implementation of H-matrix based iterative and direct solvers for Helmholtz and elastodynamic oscillatory kernels

    NASA Astrophysics Data System (ADS)

    Chaillat, Stéphanie; Desiderio, Luca; Ciarlet, Patrick

    2017-12-01

    In this work, we study the accuracy and efficiency of hierarchical matrix (H-matrix) based fast methods for solving dense linear systems arising from the discretization of the 3D elastodynamic Green's tensors. It is well known in the literature that standard H-matrix based methods, although very efficient tools for asymptotically smooth kernels, are not optimal for oscillatory kernels. H2-matrix and directional approaches have been proposed to overcome this problem. However, the implementation of such methods is much more involved than the standard H-matrix representation. The central questions we address are twofold. (i) What is the frequency range in which the H-matrix format is an efficient representation for 3D elastodynamic problems? (ii) What can be expected of such an approach for modelling problems in mechanical engineering? We show that even though the method is not optimal (in the sense that more involved representations can lead to faster algorithms), an efficient solver can easily be developed. The capabilities of the method are illustrated on numerical examples using the Boundary Element Method.

  2. Determination of free and deconjugated testosterone and epitestosterone in urine using SPME and LC-MS/MS.

    PubMed

    Zhan, Yanwei; Musteata, Florin M; Basset, Fabien A; Pawliszyn, Janusz

    2011-01-01

    A thin sheet of polydimethylsiloxane membrane was used as an extraction phase for solid-phase microextraction. Compared with fiber or rod solid-phase microextraction geometries, the thin film exhibited much higher extraction capacity, without sacrificing extraction time, due to its higher area-to-volume ratio. The analytical method involved direct extraction of unconjugated testosterone (T) and epitestosterone (ET) followed by separation on a C18 column and detection by selected reaction monitoring in positive ionization mode. The limit of detection was 1 ng/l for both T and ET. After method validation, free (unconjugated) T and ET were extracted and quantified in real samples. Since T and ET are extensively metabolized, the proposed method was also applied to extract the steroids after enzymatic deconjugation of urinary-excreted steroid glucuronides. The proposed method allows quantification of both conjugated and unconjugated steroids, and revealed a change in the ratio of T to ET after enzymatic deconjugation, indicating different rates of metabolism.

  3. Uncertain dynamic analysis for rigid-flexible mechanisms with random geometry and material properties

    NASA Astrophysics Data System (ADS)

    Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing; Walker, Paul D.

    2017-02-01

    This paper proposes an uncertain modelling and computational method to analyze the dynamic responses of rigid-flexible multibody systems (or mechanisms) with random geometry and material properties. Firstly, the deterministic model of the rigid-flexible multibody system is built with the absolute nodal coordinate formulation (ANCF), in which the flexible parts are modeled using ANCF elements, while the rigid parts are described by ANCF reference nodes (ANCF-RNs). Secondly, the uncertainty in the geometry of the rigid parts is expressed as uniform random variables, while the uncertainty in the material properties of the flexible parts is modeled as a continuous random field, which is further discretized into Gaussian random variables using a series expansion method. Finally, a non-intrusive numerical method is developed to solve the dynamic equations of systems involving both types of random variables, which systematically integrates the deterministic generalized-α solver with Latin Hypercube sampling (LHS) and Polynomial Chaos (PC) expansion. The benchmark slider-crank mechanism is used as a numerical example to demonstrate the characteristics of the proposed method.
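
    A minimal sketch of the sampling ingredient, assuming a stand-in response function: Latin Hypercube sampling of two uniform parameters with SciPy's qmc module (the generalized-α solver and the PC expansion are omitted).

    ```python
    import numpy as np
    from scipy.stats import qmc

    def simulate(length, radius):
        # placeholder response; stands in for the multibody dynamics solver
        return length * np.cos(radius)

    sampler = qmc.LatinHypercube(d=2, seed=1)
    unit = sampler.random(n=200)                # 200 LHS points in [0, 1)^2
    # scale to the physical ranges of the uncertain geometric parameters
    params = qmc.scale(unit, l_bounds=[0.9, 0.04], u_bounds=[1.1, 0.06])

    responses = np.array([simulate(L, r) for L, r in params])
    print(responses.mean(), responses.std())    # Monte Carlo statistics
    ```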

  4. B2B collaboration method through trust values for e-supply chain integrator: a case study of Malaysian construction industry

    NASA Astrophysics Data System (ADS)

    Ab. Aziz, Norshakirah; Ahmad, Rohiza; Dhanapal Durai, Dominic

    2011-12-01

    Limited trust, cooperation and communication have been identified as some of the issues that hinder collaboration among business partners. The same holds true for the acceptance of an e-supply chain integrator among organizations involved in the same industry. On top of that, the huge number of components in the supply chain industry makes it impossible to include them all in the integrator. Hence, this study proposes a method for identifying "trusted" collaborators for inclusion in an e-supply chain integrator. For the purpose of constructing and validating the method, the Malaysian construction industry is chosen as the case study due to its size and importance to the economy. This paper presents the background of the research, the relevant literature leading to the formulation of trust value elements, data collection from the Malaysian construction supply chain, and a glimpse of the proposed method for trusted partner selection. Future work is also presented to highlight the next step of this research.

  5. A novel algorithm for laser self-mixing sensors used with the Kalman filter to measure displacement

    NASA Astrophysics Data System (ADS)

    Sun, Hui; Liu, Ji-Gou

    2018-07-01

    This paper proposes a simple and effective method for estimating the feedback level factor C in a self-mixing interferometric sensor, used together with a Kalman filter to retrieve the displacement. Without the complicated and onerous calculation process of the general C estimation method, a final closed-form equation is obtained, so estimating C only involves a few simple calculations. The method successfully retrieves sinusoidal and random displacements from simulated self-mixing signals in both the weak and moderate feedback regimes. To deal with the errors resulting from noise and from the estimation bias of C, and to further improve the retrieval precision, a Kalman filter is employed after the general phase unwrapping method. The simulation and experiment results show that the displacement retrieved using the C obtained with the proposed method is comparable to that from the joint estimation of C and α. Moreover, the Kalman filter can significantly decrease measurement errors, especially the error caused by incorrectly locating the peak and valley positions of the signal.
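
    A minimal sketch of the filtering stage: a constant-velocity Kalman filter smoothing a noisy displacement series. The matrices and noise levels are illustrative, not the paper's tuned values.

    ```python
    import numpy as np

    def kalman_smooth(z, dt=1e-3, q=1e-4, r=1e-2):
        F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition
        H = np.array([[1.0, 0.0]])               # observe displacement only
        Q = q * np.eye(2)                        # process noise
        R = np.array([[r]])                      # measurement noise
        x = np.zeros(2)
        P = np.eye(2)
        out = []
        for zk in z:
            x = F @ x                            # predict
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + R                  # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
            x = x + (K @ (zk - H @ x)).ravel()   # update state
            P = (np.eye(2) - K @ H) @ P
            out.append(x[0])
        return np.array(out)

    t = np.arange(0, 1, 1e-3)
    noisy = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.randn(t.size)
    smooth = kalman_smooth(noisy)
    ```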

  6. Integration of QFD, AHP, and LPP methods in supplier development problems under uncertainty

    NASA Astrophysics Data System (ADS)

    Shad, Zahra; Roghanian, Emad; Mojibian, Fatemeh

    2014-04-01

    Quality function deployment (QFD) is a customer-driven approach, widely used in developing or processing new products to maximize customer satisfaction. Previous research used the linear physical programming (LPP) procedure to optimize QFD; however, the QFD problem involves uncertainties, or fuzziness, which must be taken into account for a more realistic study. In this paper, a set of fuzzy data is used to address linguistic values parameterized by triangular fuzzy numbers. An integrated approach combining the analytic hierarchy process (AHP), QFD, and LPP is proposed to maximize overall customer satisfaction under uncertain conditions and is applied to the supplier development problem. The fuzzy AHP approach is adopted as a powerful method to obtain the relationship between the customer requirements and the engineering characteristics (ECs) and to construct the house of quality in the QFD method. LPP is used to obtain the optimal achievement level of the ECs and, subsequently, the customer satisfaction level under different degrees of uncertainty. The effectiveness of the proposed method is illustrated by an example.

  7. Colorimetric characterization of digital cameras with unrestricted capture settings applicable for different illumination circumstances

    NASA Astrophysics Data System (ADS)

    Fang, Jingyu; Xu, Haisong; Wang, Zhehong; Wu, Xiaomin

    2016-05-01

    With colorimetric characterization, digital cameras can be used as image-based tristimulus colorimeters for color communication. To overcome the restriction of fixed capture settings adopted in conventional colorimetric characterization procedures, a novel method that accounts for capture settings was proposed. The method computes the colorimetric values of a measured image in five main steps. These include the conversion of RGB values to equivalent values under the training settings, through factors based on an imaging-system model, so as to build a bridge between different settings, and scaling factors in the preparation steps of the transformation mapping, to avoid errors resulting from the nonlinearity of the polynomial mapping across different ranges of illumination levels. The experimental results indicate that the prediction error of the proposed method, measured by the CIELAB color difference formula, is less than 2 CIELAB units under different illumination levels and different correlated color temperatures. This prediction accuracy for varying capture settings is on the same level as that of the conventional method for a fixed lighting condition.
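
    A minimal sketch of the mapping ingredient common to such characterizations: fitting a quadratic polynomial from camera RGB to tristimulus values by least squares, on synthetic training data (the setting-conversion and scaling factors of the proposed method are not reproduced).

    ```python
    import numpy as np

    def poly_features(rgb):
        # quadratic polynomial expansion of camera RGB
        r, g, b = rgb.T
        return np.column_stack([np.ones_like(r), r, g, b,
                                r*g, r*b, g*b, r*r, g*g, b*b])

    rng = np.random.default_rng(0)
    rgb_train = rng.uniform(size=(24, 3))            # e.g. a 24-patch chart
    M_true = rng.uniform(size=(10, 3))
    xyz_train = poly_features(rgb_train) @ M_true    # stand-in measurements

    M, *_ = np.linalg.lstsq(poly_features(rgb_train), xyz_train, rcond=None)
    xyz_pred = poly_features(rgb_train) @ M          # predicted tristimulus
    print(np.abs(xyz_pred - xyz_train).max())
    ```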

  8. A rapid low-cost high-density DNA-based multi-detection test for routine inspection of meat species.

    PubMed

    Lin, Chun Chi; Fung, Lai Ling; Chan, Po Kwok; Lee, Cheuk Man; Chow, Kwok Fai; Cheng, Shuk Han

    2014-02-01

    The increasing occurrence of food fraud suggests that species identification should be part of food authentication. Current molecular-based species identification methods have their own limitations or drawbacks, such as relatively time-consuming experimental steps and expensive equipment; in particular, these methods cannot identify mixed species in a single experiment. This project proposes an improved method involving PCR amplification of the COI gene and detection of species-specific sequences by hybridisation. The major innovative breakthrough lies in the detection of multiple species, including pork, beef, lamb, horse, cat, dog and mouse, from a mixed sample within a single experiment. The probes used are species-specific in both single- and mixed-species samples. As little as 5 pg of DNA template in the PCR is detectable with the proposed method. By designing species-specific probes and adopting reverse dot blot hybridisation and flow-through hybridisation, a low-cost high-density DNA-based multi-detection test suitable for routine inspection of meat species was developed. © 2013.

  9. Combining evidence using likelihood ratios in writer verification

    NASA Astrophysics Data System (ADS)

    Srihari, Sargur; Kovalenko, Dimitry; Tang, Yi; Ball, Gregory

    2013-01-01

    Forensic identification is the task of determining whether or not observed evidence arose from a known source. It involves determining a likelihood ratio (LR) - the ratio of the joint probability of the evidence and source under the identification hypothesis (that the evidence came from the source) to that under the exclusion hypothesis (that the evidence did not arise from the source). In LR-based decision methods, particularly handwriting comparison, a variable number of pieces of input evidence is used. A decision based on many pieces of evidence can result in nearly the same LR as one based on few pieces of evidence. We consider methods for distinguishing between such situations. One of these is to provide confidence intervals together with the decisions, and another is to combine the inputs using weights. We propose a new method that generalizes the Bayesian approach and uses an explicitly defined discount function. Empirical evaluation with several data sets, including synthetically generated ones and handwriting comparisons, shows the greater flexibility of the proposed method.

  10. Pyrocatechol violet in pharmaceutical analysis. Part I. A spectrophotometric method for the determination of some beta-lactam antibiotics in pure and in pharmaceutical dosage forms.

    PubMed

    Amin, A S

    2001-03-01

    A fairly sensitive, simple and rapid spectrophotometric method for the determination of some beta-lactam antibiotics, namely ampicillin (Amp), amoxycillin (Amox), 6-aminopenicillanic acid (6APA), cloxacillin (Clox), dicloxacillin (Diclox) and flucloxacillin sodium (Fluclox), in bulk samples and in pharmaceutical dosage forms is described. The proposed method involves the use of pyrocatechol violet as a chromogenic reagent. These drugs produce a reddish brown coloured ion pair with absorption maxima at 604, 641, 645, 604, 649 and 641 nm for Amp, Amox, 6APA, Clox, Diclox and Fluclox, respectively. The colours produced obey Beer's law and are suitable for the quantitative determination of the named compounds. The optimization of the different experimental conditions is described. The molar ratio of the ion pairs was established, and a proposal for the reaction pathway is given. The procedure described was applied successfully to determine the examined drugs in dosage forms, and the results obtained were comparable to those obtained with the official methods.

  11. Load forecasting via suboptimal seasonal autoregressive models and iteratively reweighted least squares estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mbamalu, G.A.N.; El-Hawary, M.E.

    The authors propose suboptimal least squares or IRWLS procedures for estimating the parameters of a seasonal multiplicative AR model encountered during power system load forecasting. The proposed method involves using an interactive computer environment to estimate the parameters of a seasonal multiplicative AR process. The method comprises five major computational steps. The first determines the order of the seasonal multiplicative AR process, and the second uses least squares or IRWLS to estimate the optimal nonseasonal AR model parameters. In the third step one obtains the intermediate series by back forecasting, which is followed by using least squares or IRWLS to estimate the optimal seasonal AR parameters. The final step uses the estimated parameters to forecast future load. The method is applied to predict the Nova Scotia Power Corporation's hourly load with a lead time of 168 hours. The results obtained are documented and compared with results based on the Box and Jenkins method.
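
    A minimal sketch of the IRWLS ingredient, assuming Huber-style weights and a plain AR(p) model; the paper's seasonal multiplicative structure and exact weight function are not reproduced.

    ```python
    import numpy as np

    def irwls_ar(y, p=3, c=1.345, iters=10):
        """Robust AR(p) fit by iteratively reweighted least squares."""
        X = np.column_stack([y[p-k-1:len(y)-k-1] for k in range(p)])
        t = y[p:]
        w = np.ones_like(t)
        for _ in range(iters):
            W = np.sqrt(w)[:, None]
            beta, *_ = np.linalg.lstsq(X * W, t * W.ravel(), rcond=None)
            r = t - X @ beta
            s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale
            u = np.abs(r) / (c * s)
            w = np.where(u <= 1.0, 1.0, 1.0 / u)        # Huber weights
        return beta

    y = np.sin(np.arange(300) * 0.1) + 0.05 * np.random.randn(300)
    print(irwls_ar(y))
    ```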

  12. Broadband photonic transport between waveguides by adiabatic elimination

    NASA Astrophysics Data System (ADS)

    Oukraou, Hassan; Coda, Virginie; Rangelov, Andon A.; Montemezzani, Germano

    2018-02-01

    We propose an adiabatic method for the robust transfer of light between the two outer waveguides in a three-waveguide directional coupler. Unlike the established technique inherited from stimulated Raman adiabatic passage (STIRAP), the method proposed here is symmetric with respect to an exchange of the left and right waveguides in the structure and permits the transfer in both directions. The technique uses the adiabatic elimination of the middle waveguide together with level crossing and adiabatic passage in an effective two-state system involving only the external waveguides. It requires a strong detuning between the outer and the middle waveguide and does not rely on the adiabatic transfer state (dark state) underlying the STIRAP process. The suggested technique is generalized to an array of N waveguides and verified by numerical beam propagation calculations.

  13. Automated Discovery of Elementary Chemical Reaction Steps Using Freezing String and Berny Optimization Methods.

    PubMed

    Suleimanov, Yury V; Green, William H

    2015-09-08

    We present a simple protocol which allows fully automated discovery of elementary chemical reaction steps using double- and single-ended transition-state optimization algorithms in cooperation - the freezing string and Berny optimization methods, respectively. To demonstrate the utility of the proposed approach, the reactivity of several single-molecule systems of importance in combustion and atmospheric chemistry is investigated. The proposed algorithm allowed us to detect, without any human intervention, not only "known" reaction pathways, manually detected in previous studies, but also new, previously "unknown" reaction pathways which involve significant atom rearrangements. We believe that applying such a systematic approach to elementary reaction path finding will greatly accelerate the discovery of new chemistry and will lead to more accurate computer simulations of various chemical processes.

  14. More on approximations of Poisson probabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kao, C

    1980-05-01

    Calculation of Poisson probabilities frequently involves calculating high factorials, which becomes tedious and time-consuming with regular calculators. The usual way to overcome this difficulty has been to find approximations by making use of the table of the standard normal distribution. A new transformation proposed by Kao in 1978 appears to perform better for this purpose than traditional transformations. In the present paper several approximation methods are stated and compared numerically, including an approximation method that utilizes a modified version of Kao's transformation. An approximation based on a power transformation was found to outperform those based on the square-root-type transformations proposed in the literature. The traditional Wilson-Hilferty approximation and the Makabe-Morimura approximation are extremely poor compared with this approximation.
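
    For illustration, the snippet below compares two generic textbook normal approximations of a Poisson probability with the exact value; Kao's transformation and the power transformation discussed in the paper are not reproduced here.

    ```python
    import numpy as np
    from scipy.stats import norm, poisson

    lam, x = 10.0, 14

    exact = poisson.cdf(x, lam)
    # continuity-corrected normal approximation
    cc = norm.cdf((x + 0.5 - lam) / np.sqrt(lam))
    # square-root (variance-stabilizing) transformation
    sq = norm.cdf(2.0 * (np.sqrt(x + 1.0) - np.sqrt(lam)))

    print(f"exact={exact:.4f}  normal+cc={cc:.4f}  sqrt={sq:.4f}")
    ```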

  15. The Bassi Rebay 1 scheme is a special case of the Symmetric Interior Penalty formulation for discontinuous Galerkin discretisations with Gauss-Lobatto points

    NASA Astrophysics Data System (ADS)

    Manzanero, Juan; Rueda-Ramírez, Andrés M.; Rubio, Gonzalo; Ferrer, Esteban

    2018-06-01

    In the discontinuous Galerkin (DG) community, several formulations have been proposed to solve PDEs involving second-order spatial derivatives (e.g. elliptic problems). In this paper, we show that, when the discretisation is restricted to the usage of Gauss-Lobatto points, there are important similarities between two common choices: the Bassi-Rebay 1 (BR1) method, and the Symmetric Interior Penalty (SIP) formulation. This equivalence enables the extrapolation of properties from one scheme to the other: a sharper estimation of the minimum penalty parameter for the SIP stability (compared to the more general estimate proposed by Shahbazi [1]), more efficient implementations of the BR1 scheme, and the compactness of the BR1 method for straight quadrilateral and hexahedral meshes.

  16. A robust high resolution reversed-phase HPLC strategy to investigate various metabolic species in different biological models.

    PubMed

    D'Alessandro, Angelo; Gevi, Federica; Zolla, Lello

    2011-04-01

    Recent advancements in the field of omics sciences have paved the way for further expansion of metabolomics. Originally tied to NMR spectroscopy, the metabolomic disciplines increasingly involve HPLC and mass spectrometry (MS)-based analytical strategies. In this context, we propose a robust and efficient protocol for extracting metabolites from four different biological sources; the extracted metabolites are subsequently analysed, identified and quantified through high-resolution reversed-phase fast HPLC and mass spectrometry. To this end, we demonstrate the elevated intra- and inter-day technical reproducibility and ease of an MRM-based MS method allowing simultaneous detection of up to 10 distinct features, and the robustness of multiple metabolite detection and quantification in four different biological samples. This strategy might become routinely applicable to various samples/biological matrices, especially low-availability ones. In parallel, we compare the present strategy for targeted detection of a representative metabolite, L-glutamic acid, with our previously proposed chemical derivatization through dansyl chloride. A direct comparison of the present method against spectrophotometric assays is proposed as well. An application of the proposed method is also introduced, using the SAOS-2 cell line, either induced or not induced to express the TAp63 isoform of the p63 gene, as a model for determining variations in glutamate concentrations.

  17. Accelerated Compressed Sensing Based CT Image Reconstruction.

    PubMed

    Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R; Paul, Narinder S; Cobbold, Richard S C

    2015-01-01

    In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization.

  19. A Novel Walking Detection and Step Counting Algorithm Using Unconstrained Smartphones.

    PubMed

    Kang, Xiaomin; Huang, Baoqi; Qi, Guodong

    2018-01-19

    Recently, with the development of artificial intelligence technologies and the popularity of mobile devices, walking detection and step counting have gained much attention since they play an important role in the fields of equipment positioning, saving energy, behavior recognition, etc. In this paper, a novel algorithm is proposed to simultaneously detect walking motion and count steps through unconstrained smartphones, in the sense that the smartphone placement is not only arbitrary but also alterable. On account of the periodicity of the walking motion and the sensitivity of gyroscopes, the proposed algorithm extracts frequency-domain features from the three-dimensional (3D) angular velocities of a smartphone through the FFT (fast Fourier transform) and identifies whether its holder is walking or not, irrespective of the smartphone's placement. Furthermore, the corresponding step frequency is recursively updated to evaluate the step count in real time. Extensive experiments are conducted involving eight subjects and different walking scenarios in a realistic environment. It is shown that the proposed method achieves a precision of 93.76% and a recall of 93.65% for walking detection, and its overall performance is significantly better than that of other well-known methods. Moreover, the accuracy of step counting by the proposed method is 95.74%, which is better than both several well-known counterparts and commercial products.
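
    A minimal sketch of the frequency-domain idea: deciding "walking" from the dominant frequency of the gyroscope magnitude in a window. The band limits and power threshold are illustrative, not the paper's tuned values.

    ```python
    import numpy as np

    def detect_walking(gyro_xyz, fs=50.0, fmin=0.5, fmax=3.0, ratio=5.0):
        mag = np.linalg.norm(gyro_xyz, axis=1)       # placement-independent
        mag = mag - mag.mean()
        spec = np.abs(np.fft.rfft(mag)) ** 2
        freqs = np.fft.rfftfreq(mag.size, d=1.0 / fs)
        band = (freqs >= fmin) & (freqs <= fmax)     # plausible step rates
        peak = np.argmax(np.where(band, spec, 0.0))
        walking = spec[peak] > ratio * spec[band].mean()
        return walking, (freqs[peak] if walking else 0.0)

    rng = np.random.default_rng(2)
    t = np.arange(0, 10, 1 / 50)
    gyro = np.column_stack([2.0 + np.sin(2 * np.pi * 1.8 * t),
                            0.5 + 0.2 * np.sin(2 * np.pi * 1.8 * t),
                            0.1 * np.ones_like(t)])
    gyro += 0.05 * rng.normal(size=gyro.shape)
    print(detect_walking(gyro))    # expect (True, ~1.8 Hz)
    ```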

  20. Public involvement at the design stage of primary health research: a narrative review of case examples.

    PubMed

    Boote, Jonathan; Baird, Wendy; Beecroft, Claire

    2010-04-01

    To review published examples of public involvement in research design, and to synthesise the contributions made by members of the public, as well as the identified barriers, tensions and facilitating strategies. Systematic literature search and narrative review. Seven papers were identified covering the following topics: breast-feeding, antiretroviral and nutrition interventions; paediatric resuscitation; exercise and cognitive behavioural therapy; hormone replacement therapy and breast cancer; stroke; and parents' experiences of having a pre-term baby. Six papers reported public involvement in the development of a clinical trial, while one reported public involvement in the development of a mixed methods study. Group meetings were the most common method of public involvement. The contributions that members of the public made to research design were: review of consent procedures and patient information sheets; outcome suggestions; review of the acceptability of data collection procedures; and recommendations on the timing of participants' entry into the study and the timing of follow-up. Numerous barriers, tensions and facilitating strategies were identified. The issues raised here should assist researchers in developing research proposals with members of the public. Substantive and methodological directions for further research on the impact of public involvement in research design are set out. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.

  1. Group decision-making approach for flood vulnerability identification using the fuzzy VIKOR method

    NASA Astrophysics Data System (ADS)

    Lee, G.; Jun, K. S.; Cung, E. S.

    2014-09-01

    This study proposes an improved group decision making (GDM) framework that combines the VIKOR method with fuzzified data to quantify spatial flood vulnerability using multi-criteria evaluation indicators. In general, the GDM method is an effective tool for formulating a compromise solution involving various decision makers, since different stakeholders may have different perspectives on their flood risk/vulnerability management responses. The GDM approach is designed to achieve consensus building that reflects the viewpoints of each participant. The fuzzy VIKOR method was developed to solve multi-criteria decision making (MCDM) problems with conflicting and noncommensurable criteria. This compromise-based method can be used to obtain a nearly ideal solution according to all established criteria. Triangular fuzzy numbers are used to account for the uncertainty in the weights and in the crisp data of the proxy variables. This approach can effectively propose compromise decisions by combining the GDM method and the fuzzy VIKOR method. The spatial flood vulnerability of the south Han River, obtained using the GDM approach combined with the fuzzy VIKOR method, was compared with the results from general MCDM methods, such as fuzzy TOPSIS, and classical GDM methods, such as those developed by Borda, Condorcet, and Copeland. The evaluated priorities were significantly dependent on the employed decision-making method. The proposed fuzzy GDM approach can reduce the uncertainty in the data confidence and weight derivation techniques. Thus, the combination of the GDM approach with the fuzzy VIKOR method can provide robust prioritization because it actively reflects the opinions of various groups and considers uncertainty in the input data.

  2. An Indirect Method for Vapor Pressure and Phase Change Enthalpy Determination by Thermogravimetry

    NASA Astrophysics Data System (ADS)

    Giani, Samuele; Riesen, Rudolf; Schawe, Jürgen E. K.

    2018-07-01

    Vapor pressure is a fundamental property of a pure substance: it is the pressure of a compound's vapor in thermodynamic equilibrium with its condensed phase (solid or liquid). When the phase equilibrium condition is met, phase coexistence of a pure substance involves a continuous interplay of vaporization or sublimation to gas and condensation back to the liquid or solid form, respectively. Thermogravimetric analysis (TGA) techniques are based on mass loss determination and are well suited to the study of such phenomena. In this work, it is shown that a TGA method using a reference substance is a suitable technique for vapor pressure determination. The method is easy and fast because it involves a series of isothermal segments. In contrast to Knudsen's original approach, where the use of high vacuum is mandatory, with the proposed method a given experimental setup is calibrated under ambient pressure conditions. The theoretical framework of the method is based on a generalization of the Langmuir equation of free evaporation, and its real strength is the ability to determine the vapor pressure independently of the molecular mass of the vapor. A demonstration of the method has been performed using the Clausius-Clapeyron equation of state to derive the working equation; the algorithm, however, is adaptive and admits the use of other equations of state. The results of a series of experiments with organic molecules indicate that the average difference between the measured and literature vapor pressures amounts to about 5 %. The vapor pressures determined in this study span from a few mPa up to several kPa. Once the p versus T diagram is obtained, the phase transition enthalpy can additionally be calculated from the data.
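
    A minimal sketch of the final data-reduction step: fitting ln p against 1/T with the Clausius-Clapeyron relation to extract the phase-change enthalpy. The data below are synthetic, not from the paper.

    ```python
    import numpy as np

    # Clausius-Clapeyron: ln p = -dH/(R*T) + c, so the slope of
    # ln p vs 1/T gives the phase-change enthalpy dH.
    R = 8.314462618                                          # J/(mol K)

    T = np.array([310.0, 320.0, 330.0, 340.0, 350.0])        # K
    p = np.array([12.0, 31.0, 74.0, 165.0, 350.0])           # Pa (synthetic)

    slope, intercept = np.polyfit(1.0 / T, np.log(p), 1)
    dH = -slope * R            # enthalpy of vaporization/sublimation, J/mol
    print(f"dH ~ {dH / 1000:.1f} kJ/mol")
    ```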

  3. The Baldwin-Lomax model for separated and wake flows using the entropy envelope concept

    NASA Technical Reports Server (NTRS)

    Brock, J. S.; Ng, W. F.

    1992-01-01

    Implementation of the Baldwin-Lomax algebraic turbulence model is difficult and ambiguous within flows characterized by strong viscous-inviscid interactions and flow separations. A new method of implementation is proposed which uses an entropy envelope concept and is demonstrated to ensure the proper evaluation of modeling parameters. The method is simple, computationally fast, and applicable to both wake and boundary layer flows. The method is general, making it applicable to any turbulence model which requires the automated determination of the proper maxima of a vorticity-based function. The new method is evaluated within two test cases involving strong viscous-inviscid interaction.

  4. Classification of G-protein coupled receptors based on a rich generation of convolutional neural network, N-gram transformation and multiple sequence alignments.

    PubMed

    Li, Man; Ling, Cheng; Xu, Qi; Gao, Jingyang

    2018-02-01

    Sequence classification is crucial in predicting the function of newly discovered sequences. In recent years, prediction for the growing scale and diversity of sequences has relied heavily on machine-learning algorithms. To improve prediction accuracy, these algorithms must confront the key challenge of extracting valuable features. In this work, we propose a feature-enhanced protein classification approach that combines multiple sequence alignment algorithms, an N-gram probabilistic language model and a deep learning technique. The essence behind the proposed method is that if each group of sequences can be represented by one feature sequence composed of homologous sites, there should be less loss when the sequence is rebuilt after a more relevant sequence is added to the group. On the basis of this consideration, the prediction reduces to calculating the probability that the feature sequence, updated with the query sequence, evolves from the original one. The proposed work focuses on the hierarchical classification of G-protein Coupled Receptors (GPCRs), which begins by extracting the feature sequences from the multiple sequence alignment results of the GPCR sub-subfamilies. The N-gram model is then applied to construct the input vectors. Finally, these vectors are fed into a convolutional neural network to make a prediction. The experimental results show that the proposed method provides significant performance improvements: its classification error rate is reduced by at least 4.67% (family level I) and 5.75% (family level II) in comparison with the current state-of-the-art methods. The implementation of the proposed work is freely available at: https://github.com/alanFchina/CNN .
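
    A minimal sketch of the N-gram step, assuming trigrams over the 20 standard amino acids: turning a protein sequence into a normalized trigram-frequency vector (the alignment-derived feature sequences and the CNN are omitted).

    ```python
    from itertools import product
    import numpy as np

    AA = "ACDEFGHIKLMNPQRSTVWY"
    TRIGRAMS = {"".join(g): i for i, g in enumerate(product(AA, repeat=3))}

    def ngram_vector(seq, n=3):
        """Normalized trigram-frequency vector of a protein sequence."""
        v = np.zeros(len(TRIGRAMS))
        for i in range(len(seq) - n + 1):
            idx = TRIGRAMS.get(seq[i:i + n])
            if idx is not None:          # skip non-standard residues
                v[idx] += 1.0
        total = v.sum()
        return v / total if total else v

    print(ngram_vector("MTEYKLVVVGAGGVGKSALTIQ").shape)   # (8000,)
    ```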

  5. Process modelling for space station experiments

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz; Alexander, J. Iwan D.

    1988-01-01

    The work performed during the first year (1 Oct. 1987 to 30 Sept. 1988) involved analyses of crystal growth from the melt and from solution. The particular melt growth technique under investigation is directional solidification by the Bridgman-Stockbarger method. Two types of solution growth systems are also being studied: one involves growth from solution in a closed container, the other concerns growth of protein crystals by the hanging drop method. Following discussions with Dr. R. J. Naumann of the Low Gravity Science Division at MSFC, it was decided to tackle the analysis of crystal growth from the melt earlier than originally proposed. Rapid progress was made in this area; work is on schedule, and full calculations have been underway for some time. Progress was also made in the formulation of the two solution growth models.

  6. Towards sound epistemological foundations of statistical methods for high-dimensional biology.

    PubMed

    Mehta, Tapan; Tanik, Murat; Allison, David B

    2004-09-01

    A sound epistemological foundation for biological inquiry comes, in part, from application of valid statistical procedures. This tenet is widely appreciated by scientists studying the new realm of high-dimensional biology, or 'omic' research, which involves multiplicity at unprecedented scales. Many papers aimed at the high-dimensional biology community describe the development or application of statistical techniques. The validity of many of these is questionable, and a shared understanding about the epistemological foundations of the statistical methods themselves seems to be lacking. Here we offer a framework in which the epistemological foundation of proposed statistical methods can be evaluated.

  7. Areal Feature Matching Based on Similarity Using Critic Method

    NASA Astrophysics Data System (ADS)

    Kim, J.; Yu, K.

    2015-10-01

    In this paper, we propose an areal feature matching method that can be applied to many-to-many matching, which involves matching a simple entity with an aggregate of several polygons, or two aggregates of several polygons, with less user intervention. To this end, an affine transformation is applied to the two datasets by using polygon pairs for which the building name is the same. Then, the two datasets are overlaid, and intersecting polygon pairs are selected as candidate matching pairs. If many polygons intersect at this stage, we calculate the inclusion function between such polygons; when its value is more than 0.4, the polygons are aggregated into single polygons by using a convex hull. Finally, the shape similarity between the candidate pairs is calculated as the linear sum of the position similarity, shape ratio similarity, and overlap similarity, with the weights computed by the CRITIC method. The candidate pairs for which the shape similarity is more than 0.7 are determined to be matching pairs. We applied the method to two geospatial datasets: the digital topographic map and the KAIS map of South Korea. The visual evaluation showed that polygons were well detected by the proposed method, and the statistical evaluation indicates that the method is accurate on our test dataset, with a high F-measure of 0.91.
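
    A minimal sketch of the CRITIC weighting step: objective criterion weights computed from each normalized criterion's standard deviation and its correlation with the other criteria, on synthetic similarity scores.

    ```python
    import numpy as np

    def critic_weights(X):
        # min-max normalize each criterion column
        Z = (X - X.min(0)) / (X.max(0) - X.min(0) + 1e-12)
        sigma = Z.std(0, ddof=1)                  # contrast intensity
        R = np.corrcoef(Z, rowvar=False)          # inter-criterion correlation
        C = sigma * (1.0 - R).sum(0)              # information content
        return C / C.sum()

    # 50 candidate pairs x 3 criteria (position, shape ratio, overlap)
    scores = np.random.default_rng(3).uniform(size=(50, 3))
    print(critic_weights(scores))                 # weights sum to 1
    ```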

  8. The economics of project analysis: Optimal investment criteria and methods of study

    NASA Technical Reports Server (NTRS)

    Scriven, M. C.

    1979-01-01

    Insight is provided toward the development of an optimal program for investment analysis of project proposals offering commercial potential, and of its components. This involves a critique of economic investment criteria viewed in relation to the requirements of engineering economy analysis. An outline for a systems approach to project analysis is given. Application of the Leontief input-output methodology to the analysis of projects involving multiple processes and products is investigated. Effective application of elements of neoclassical economic theory to the investment analysis of project components is demonstrated. Patterns of both static and dynamic activity levels are incorporated.

  9. One-Channel Surface Electromyography Decomposition for Muscle Force Estimation.

    PubMed

    Sun, Wentao; Zhu, Jinying; Jiang, Yinlai; Yokoi, Hiroshi; Huang, Qiang

    2018-01-01

    Estimating muscle force by surface electromyography (sEMG) is a non-invasive and flexible way to diagnose biomechanical diseases and control assistive devices such as prosthetic hands. To estimate muscle force using sEMG, a supervised method is commonly adopted. This requires simultaneous recording of sEMG signals and muscle force measured by additional devices to tune the variables involved. However, recording the muscle force of the lost limb of an amputee is challenging, and the supervised method has limitations in this regard. Although the unsupervised method does not require muscle force recording, it suffers from low accuracy due to a lack of reference data. To achieve accurate and easy estimation of muscle force by the unsupervised method, we propose a decomposition of one-channel sEMG signals into constituent motor unit action potentials (MUAPs) in two steps: (1) learning an orthogonal basis of sEMG signals through reconstruction independent component analysis; (2) extracting spike-like MUAPs from the basis vectors. Nine healthy subjects were recruited to evaluate the accuracy of the proposed approach in estimating muscle force of the biceps brachii. The results demonstrated that the proposed approach based on decomposed MUAPs explains more than 80% of the muscle force variability recorded at an arbitrary force level, while the conventional amplitude-based approach explains only 62.3% of this variability. With the proposed approach, we were also able to achieve grip force control of a prosthetic hand, which is one of the most important clinical applications of the unsupervised method. Experiments on two trans-radial amputees indicated that the proposed approach improves the performance of the prosthetic hand in grasping everyday objects.

  10. Communicating the wildland fire message: Influences on knowledge and attitude change in two case studies

    Treesearch

    Eric Toman; Bruce Shindler

    2006-01-01

    Current wildland fire policy calls for citizen involvement in planning and management. To be effective in their efforts to engage outside stakeholders, resource professionals need to understand citizens' knowledge of, and attitudes toward, current practices, as well as how best to communicate about proposed actions. A variety of outreach methods have been used to...

  11. Strategies for an enzyme immobilization on electrodes: Structural and electrochemical characterizations

    NASA Astrophysics Data System (ADS)

    Ganesh, V.; Muthurasu, A.

    2012-04-01

    In this paper, we propose various strategies for enzyme immobilization on electrodes (both metal and semiconductor electrodes). In general, the proposed methodology involves two critical steps: (1) chemical modification of the substrates using functional monolayers [Langmuir-Blodgett (LB) films and/or self-assembled monolayers (SAMs)], and (2) anchoring of a target enzyme through specific chemical and physical interactions by targeting the terminal functionality of the modified films. Basically, there are three ways to immobilize an enzyme on chemically modified electrodes. The first method consists of an electrostatic interaction between the enzyme and the terminal functional groups present within the chemically modified films. The second and third methods involve the introduction of nanomaterials, followed by enzyme immobilization using both physical and chemical adsorption processes. As a proof of principle, in this work we demonstrate the sensing and catalytic activity of horseradish peroxidase (HRP), anchored onto SAM-modified indium tin oxide (ITO) electrodes, towards hydrogen peroxide (H2O2). Structural characterization of the modified electrodes is performed using X-ray photoelectron spectroscopy (XPS), atomic force microscopy (AFM) and contact angle measurements. The binding events and the enzymatic reactions are monitored using electrochemical techniques, mainly cyclic voltammetry (CV).

  12. Simultaneous Helmert transformations among multiple frames considering all relevant measurements

    NASA Astrophysics Data System (ADS)

    Chang, Guobin; Lin, Peng; Bian, Hefang; Gao, Jingxiang

    2018-03-01

    Helmert or similarity models are widely employed to relate different coordinate frames. In practice, one often needs to transform coordinates from more than one old frame into a new one. Separate Helmert transformations may be performed for each old frame; however, although each transformation is locally optimal, this is not globally optimal. Transformations among three frames, namely one new and two old, are studied as an example, and simultaneous Helmert transformations among all frames are investigated. Least-squares estimation of the transformation parameters and of the coordinates, in the new frame, of all stations involved is performed. A functional model for the transformations among multiple frames is developed. A realistic stochastic model is adopted, in which not only are non-common stations taken into consideration, but errors in all measurements are also addressed. An algorithm of iterative linearizations and estimations is derived in detail. The proposed method is globally optimal and, perhaps more importantly, it produces a unified network of the new frame, providing coordinate estimates for all involved stations and the associated covariance matrix, with the latter being consistent with the true errors of the former. Simulations are conducted, and the results validate the superiority of the proposed combined method over separate approaches.
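
    For context, the snippet below fits a single 3D Helmert (similarity) transformation from common points with the closed-form SVD (Umeyama) solution; the paper's simultaneous multi-frame adjustment with a full stochastic model generalizes this building block.

    ```python
    import numpy as np

    def helmert_fit(src, dst):
        """Scale, rotation, translation mapping src points onto dst."""
        mu_s, mu_d = src.mean(0), dst.mean(0)
        S, D = src - mu_s, dst - mu_d
        U, sig, Vt = np.linalg.svd(D.T @ S / len(src))
        R = U @ Vt
        if np.linalg.det(R) < 0:                 # keep a proper rotation
            U[:, -1] *= -1
            R = U @ Vt
        scale = sig.sum() * len(src) / (S ** 2).sum()
        t = mu_d - scale * R @ mu_s
        return scale, R, t

    rng = np.random.default_rng(4)
    src = rng.normal(size=(10, 3))
    dst = 1.2 * src + np.array([5.0, -2.0, 0.3])   # known scale and shift
    s, R, t = helmert_fit(src, dst + 0.001 * rng.normal(size=src.shape))
    print(s, t)                                    # ~1.2, ~[5, -2, 0.3]
    ```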

  13. Use of Action Research in Nursing Education

    PubMed Central

    Pehler, Shelley-Rae; Stombaugh, Angela

    2016-01-01

    Purpose. The purpose of this article is to describe action research in nursing education and to propose a definition of action research for providing guidelines for research proposals and criteria for assessing potential publications for nursing higher education. Methods. The first part of this project involved a search of the literature on action research in nursing higher education from 1994 to 2013. Searches were conducted in the CINAHL and MEDLINE databases. Applying the criteria identified, 80 publications were reviewed. The second part of the project involved a literature review of action research methodology from several disciplines to assist in assessing articles in this review. Results. This article summarizes the nursing higher education literature reviewed and provides processes and content related to four topic areas in nursing higher education. The descriptions assist researchers in learning more about the complexity of both the action research process and the varied outcomes. The literature review of action research in many disciplines along with the review of action research in higher education provided a framework for developing a nursing-education-centric definition of action research. Conclusions. Although guidelines for developing action research and criteria for publication are suggested, continued development of methods for synthesizing action research is recommended. PMID:28078138

  14. Improving Prediction Accuracy for WSN Data Reduction by Applying Multivariate Spatio-Temporal Correlation

    PubMed Central

    Carvalho, Carlos; Gomes, Danielo G.; Agoulmine, Nazim; de Souza, José Neuman

    2011-01-01

    This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSN). Prediction of data not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic. However, it may not be very accurate. Simulations were performed with simple linear regression and multiple linear regression functions to assess the performance of the proposed method. The results show a higher correlation between the gathered inputs than with time, an independent variable widely used for prediction and forecasting. Prediction accuracy is lower when simple linear regression is used, whereas multiple linear regression is the most accurate. In addition, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, we are the first to address prediction based on multivariate correlation for WSN data reduction. PMID:22346626
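
    A minimal sketch of the underlying idea, on synthetic readings: predicting one sensor variable from the other gathered inputs (multivariate correlation) versus from time alone, using ordinary least squares.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 200
    time = np.arange(n, dtype=float)
    temp = 20 + 5 * np.sin(time / 20) + rng.normal(0, 0.3, n)
    light = 100 + 40 * np.sin(time / 20 + 0.2) + rng.normal(0, 3, n)
    humid = 80 - 1.5 * temp + 0.05 * light + rng.normal(0, 0.5, n)

    def ols_rmse(X, y):
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        return np.sqrt(np.mean((y - X1 @ beta) ** 2))

    print("time only:   ", ols_rmse(time[:, None], humid))
    print("multivariate:", ols_rmse(np.column_stack([temp, light]), humid))
    ```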

  15. Accounting for dropout in xenografted tumour efficacy studies: integrated endpoint analysis, reduced bias and better use of animals.

    PubMed

    Martin, Emma C; Aarons, Leon; Yates, James W T

    2016-07-01

    Xenograft studies are commonly used to assess the efficacy of new compounds and characterise their dose-response relationship. Analysis often involves comparing the final tumour sizes across dose groups. This can cause bias, as often in xenograft studies a tumour burden limit (TBL) is imposed for ethical reasons, leading to the animals with the largest tumours being excluded from the final analysis. This means the average tumour size, particularly in the control group, is underestimated, leading to an underestimate of the treatment effect. Four methods to account for dropout due to the TBL are proposed, which use all the available data instead of only final observations: modelling, pattern mixture models, treating dropouts as censored using the M3 method and joint modelling of tumour growth and dropout. The methods were applied to both a simulated data set and a real example. All four proposed methods led to an improvement in the estimate of treatment effect in the simulated data. The joint modelling method performed most strongly, with the censoring method also providing a good estimate of the treatment effect, but with higher uncertainty. In the real data example, the dose-response estimated using the censoring and joint modelling methods was higher than the very flat curve estimated from average final measurements. Accounting for dropout using the proposed censoring or joint modelling methods allows the treatment effect to be recovered in studies where it may have been obscured due to dropout caused by the TBL.

  16. Automatic three-dimensional measurement of large-scale structure based on vision metrology.

    PubMed

    Zhu, Zhaokun; Guan, Banglei; Zhang, Xiaohu; Li, Daokui; Yu, Qifeng

    2014-01-01

    All the relevant key techniques involved in photogrammetric vision metrology for fully automatic 3D measurement of large-scale structures are studied. A new kind of coded target consisting of circular retroreflective discs is designed, and the corresponding detection and recognition algorithms, based on blob detection and clustering, are presented. Then a three-stage strategy starting with view clustering is proposed to achieve automatic network orientation. As for the matching of non-coded targets, the concept of a matching path is proposed, and matches for each non-coded target are found by determining the optimal matching path, based on a novel voting strategy, among all possible ones. Experiments on the fixed keel of an airship have been conducted to verify the effectiveness and measuring accuracy of the proposed methods.

  17. Spectral-Spatial Classification of Hyperspectral Images Using Hierarchical Optimization

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2011-01-01

    A new spectral-spatial method for hyperspectral data classification is proposed. For a given hyperspectral image, probabilistic pixelwise classification is first applied. Then, a hierarchical step-wise optimization algorithm is performed, which iteratively merges the neighboring regions with the smallest Dissimilarity Criterion (DC) and recomputes class labels for the new regions. The DC is computed by comparing the region mean vectors, the class labels and the number of pixels in the two regions under consideration. The algorithm converges when all the pixels have been involved in the region-merging procedure. Experimental results are presented for two remote sensing hyperspectral images acquired by the AVIRIS and ROSIS sensors. The proposed approach improves classification accuracies and provides maps with more homogeneous regions, compared to previously proposed classification techniques.

  18. Inference in randomized trials with death and missingness.

    PubMed

    Wang, Chenguang; Scharfstein, Daniel O; Colantuoni, Elizabeth; Girard, Timothy D; Yan, Ying

    2017-06-01

    In randomized studies involving severely ill patients, functional outcomes are often unobserved due to missed clinic visits, premature withdrawal, or death. It is well known that if these unobserved functional outcomes are not handled properly, biased treatment comparisons can be produced. In this article, we propose a procedure for comparing treatments that is based on a composite endpoint that combines information on both the functional outcome and survival. We further propose a missing data imputation scheme and sensitivity analysis strategy to handle the unobserved functional outcomes not due to death. Illustrations of the proposed method are given by analyzing data from a recent non-small cell lung cancer clinical trial and a recent trial of sedation interruption among mechanically ventilated patients. © 2016, The International Biometric Society.

  19. An efficient variable projection formulation for separable nonlinear least squares problems.

    PubMed

    Gan, Min; Li, Han-Xiong

    2014-05-01

    We consider in this paper a class of nonlinear least squares problems in which the model can be represented as a linear combination of nonlinear functions. The variable projection algorithm projects the linear parameters out of the problem, leaving a nonlinear least squares problem involving only the nonlinear parameters. To implement the variable projection algorithm more efficiently, we propose a new variable projection functional based on matrix decomposition. The advantage of the proposed formulation is that the size of the decomposed matrix may be much smaller than those of previous ones. The Levenberg-Marquardt algorithm using the finite difference method is then applied to minimize the new criterion. Numerical results show that the proposed approach achieves a significant reduction in computing time.
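
    A minimal sketch of the variable projection idea, assuming a two-exponential model: the linear coefficients are solved by least squares inside the objective, so the outer optimizer sees only the nonlinear parameters (a derivative-free minimizer stands in for Levenberg-Marquardt).

    ```python
    import numpy as np
    from scipy.optimize import minimize

    t = np.linspace(0, 4, 80)
    y = 2.0 * np.exp(-1.3 * t) + 0.7 * np.exp(-0.2 * t)
    y = y + 0.01 * np.random.default_rng(6).normal(size=t.size)

    def vp_residual(alpha):
        A = np.exp(-np.outer(t, alpha))            # basis from nonlinear params
        c, *_ = np.linalg.lstsq(A, y, rcond=None)  # project linear params out
        return np.sum((y - A @ c) ** 2)

    res = minimize(vp_residual, x0=[1.0, 0.5], method="Nelder-Mead")
    print(res.x)                                   # estimated decay rates
    ```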

  20. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.

  1. Combined Simulated Annealing and Genetic Algorithm Approach to Bus Network Design

    NASA Astrophysics Data System (ADS)

    Liu, Li; Olszewski, Piotr; Goh, Pong-Chai

    A new method, a combined simulated annealing (SA) and genetic algorithm (GA) approach, is proposed to solve the problem of bus route design and frequency setting for a given road network with fixed bus stop locations and fixed travel demand. The method involves two steps: a set of candidate routes is generated first, and then the best subset of these routes is selected by the combined SA and GA procedure. SA is the main process used to search for a better solution that minimizes the total system cost, comprising user and operator costs. GA is used as a sub-process to generate new solutions. Bus demand assignment on two alternative paths is performed at the solution evaluation stage. The method was implemented on four theoretical grid networks of different sizes and a benchmark network. Several GA operators (crossover and mutation) were utilized and tested for their effectiveness. The results show that the proposed method can efficiently converge to the optimal solution on a small network, but computation time increases significantly with network size. The method can also be used for other transport operation management problems.
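
    A minimal sketch of the second step, assuming the candidate routes have already been generated and the caller supplies a `cost` function returning the total user-plus-operator cost of a subset: SA accepts or rejects neighbor solutions produced by a GA-style mutation. The single mutation operator and all parameters are illustrative.

    ```python
    import math
    import random

    def combined_sa_ga(candidate_routes, cost, n_select, iters=5000,
                       t0=1.0, cooling=0.999):
        """SA searches over subsets of candidate routes; a GA-style mutation
        (swap one route) generates the neighbor solutions."""
        current = random.sample(candidate_routes, n_select)
        best = list(current)
        temp = t0
        for _ in range(iters):
            neighbor = list(current)
            unused = [r for r in candidate_routes if r not in neighbor]
            neighbor[random.randrange(n_select)] = random.choice(unused)
            delta = cost(neighbor) - cost(current)
            if delta < 0 or random.random() < math.exp(-delta / temp):
                current = neighbor
                if cost(current) < cost(best):
                    best = list(current)
            temp *= cooling            # geometric cooling schedule
        return best
    ```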

  2. A Novel Hybrid Error Criterion-Based Active Control Method for on-Line Milling Vibration Suppression with Piezoelectric Actuators and Sensors

    PubMed Central

    Zhang, Xingwu; Wang, Chenxi; Gao, Robert X.; Yan, Ruqiang; Chen, Xuefeng; Wang, Shibin

    2016-01-01

    Milling vibration is one of the most serious factors affecting machining quality and precision. In this paper, a novel hybrid error criterion-based frequency-domain LMS active control method is constructed and used for vibration suppression of milling processes by piezoelectric actuators and sensors, in which only one Fast Fourier Transform (FFT) is used and no Inverse Fast Fourier Transform (IFFT) is involved. The correction formulas are derived by a steepest descent procedure, and the control parameters are analyzed and optimized. Then, a novel hybrid error criterion is constructed to improve the adaptability, reliability, and anti-interference ability of the control algorithm. Finally, based on piezoelectric actuators and acceleration sensors, a simulation of a spindle and a milling process experiment are presented to verify the proposed method. In addition, a protection program is added to the control flow to enhance the reliability of the control method in applications. The simulation and experiment results indicate that the proposed method is an effective and reliable way to achieve on-line vibration suppression, and that the machining quality can be noticeably improved. PMID:26751448
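
    For orientation, the sketch below shows one block update of a generic frequency-domain LMS adaptive filter. The paper's single-FFT, IFFT-free variant and its hybrid error criterion are not reproduced here, and the step size is illustrative.

    ```python
    import numpy as np

    def fd_lms_step(W, x_block, d_block, mu=0.05):
        """One block update of a plain frequency-domain LMS filter.
        W: complex weights in the frequency domain (same length as block)."""
        X = np.fft.fft(x_block)          # reference signal spectrum
        D = np.fft.fft(d_block)          # error-sensor signal spectrum
        E = D - W * X                    # frequency-domain error
        W = W + mu * np.conj(X) * E      # steepest-descent weight update
        return W, E

    W = np.zeros(256, dtype=complex)     # initial weights for 256-sample blocks
    ```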

  3. A direct force model for Galilean invariant lattice Boltzmann simulation of fluid-particle flows

    NASA Astrophysics Data System (ADS)

    Tao, Shi; He, Qing; Chen, Baiman; Yang, Xiaoping; Huang, Simin

    The lattice Boltzmann method (LBM) has been widely used in the simulation of particulate flows involving complex moving boundaries. Due to the kinetic background of the LBM, the bounce-back (BB) rule and the momentum exchange (ME) method can be easily applied to the solid boundary treatment and the evaluation of the fluid-solid interaction force, respectively. However, it has recently been found that both the BB and ME schemes may violate the principle of Galilean invariance (GI). Some modified BB and ME methods have been proposed to reduce the GI error, but these remedies were subsequently recognized to be inconsistent with Newton's Third Law. Therefore, in contrast to those corrections based on the BB and ME methods, a unified iterative approach is adopted to handle the solid boundary in the present study. Furthermore, a direct force (DF) scheme is proposed to evaluate the fluid-particle interaction force. The methods preserve the efficiency of the BB and ME schemes, and their accuracy and Galilean invariance are verified and validated in test cases of particulate flows with freely moving particles.

  4. A spectrophotometric assay method for vanadium in biological and environmental samples using 2,4-dinitrophenylhydrazine with imipramine hydrochloride.

    PubMed

    Al-Tayar, Naef Ghllab Saeed; Nagaraja, P; Vasantha, R A; Shresta, Ashwinee Kumar

    2012-01-01

    A simple, rapid, and sensitive method involving the interaction of 2,4-dinitrophenylhydrazine with imipramine hydrochloride in the presence of vanadium (V) in sulfuric acid medium has been proposed for the determination of vanadium. The purple-colored product developed showed an absorption maximum at 560 nm and was stable for 24 h. The working curve was linear over the concentration range of 0.1-2.8 μg ml⁻¹, with a sensitivity of detection of 0.0124 μg ml⁻¹. Molar absorptivity and Sandell's sensitivity were found to be 2.6 × 10⁴ l mol⁻¹ cm⁻¹ and 0.0039 μg cm⁻¹, respectively. The accuracy of the proposed method was assessed by Student's t test and the variance ratio F test, and the results were on par with the reported method. The method was successfully used in the determination of V in water, human urine, soil, and plant samples, and it was free from interference by various concomitant ions.

  5. High-order time-marching reinitialization for regional level-set functions

    NASA Astrophysics Data System (ADS)

    Pan, Shucheng; Lyu, Xiuxiu; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2018-02-01

    In this work, the time-marching reinitialization method is extended to compute the unsigned distance function in multi-region systems involving an arbitrary number of regions. High order and interface preservation are achieved by applying a simple mapping that transforms the regional level-set function to the level-set function, together with a high-order two-step reinitialization method that combines the closest-point finding procedure and the HJ-WENO scheme. The convergence failure of the closest-point finding procedure in three dimensions is addressed by employing a proposed multiple-junction treatment and a directional optimization algorithm. Simple test cases show that our method exhibits 4th-order accuracy for reinitializing the regional level-set functions and strictly satisfies the interface-preserving property. The reinitialization results for more complex cases with randomly generated diagrams show the capability of our method for an arbitrary number of regions N, with a computational effort independent of N. The proposed method has been applied to dynamic interfaces with different types of flows, and the results demonstrate high accuracy and robustness.

  6. Wireless and real-time structural damage detection: A novel decentralized method for wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Avci, Onur; Abdeljaber, Osama; Kiranyaz, Serkan; Hussein, Mohammed; Inman, Daniel J.

    2018-06-01

    Being an alternative to conventional wired sensors, wireless sensor networks (WSNs) are extensively used in Structural Health Monitoring (SHM) applications. Most of the Structural Damage Detection (SDD) approaches available in the SHM literature are centralized, as they require transferring data from all sensors within the network to a single processing unit to evaluate the structural condition. These methods are feasible mainly for wired SHM systems; the transmission and synchronization of huge data sets in WSNs, however, has been found to be arduous. As such, the application of centralized methods with WSNs has been a challenge for engineers. In this paper, the authors present a novel application of 1D Convolutional Neural Networks (1D CNNs) on WSNs for SDD purposes. The SDD is performed completely wirelessly and in real time under ambient conditions. As a result, a decentralized damage detection method suitable for wireless SHM systems is proposed. The proposed method is based on 1D CNNs, and it involves training an individual 1D CNN for each wireless sensor in the network such that each CNN processes only the locally available data, eliminating the need for data transmission and synchronization. The proposed damage detection method operates directly on the raw ambient vibration signals without any filtering or preprocessing. Moreover, the proposed approach requires minimal computational time and power, since 1D CNNs merge the feature extraction and classification tasks into a single learning block. This is cost-effective and practical in WSNs, whose hardware has occasionally been reported to suffer from a limited power supply. To display the capability and verify the success of the proposed method, large-scale experiments conducted on a laboratory structure equipped with a state-of-the-art WSN are reported.
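
    A minimal per-sensor 1D CNN in the spirit described, written in PyTorch; the window length, layer sizes, and two-class output are illustrative assumptions, not the architecture of the paper.

    ```python
    import torch
    import torch.nn as nn

    class Damage1DCNN(nn.Module):
        """Minimal 1D CNN classifying raw acceleration windows (length 1024)
        as damaged/undamaged. One such network per wireless sensor."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=16, stride=4), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),          # global pooling over time
            )
            self.classifier = nn.Linear(32, 2)

        def forward(self, x):                     # x: (batch, 1, 1024)
            z = self.features(x).squeeze(-1)      # (batch, 32)
            return self.classifier(z)             # (batch, 2) class logits

    model = Damage1DCNN()
    logits = model(torch.randn(8, 1, 1024))       # one batch of raw windows
    ```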

  7. Validated spectrophotometric methods for determination of sodium valproate based on charge transfer complexation reactions.

    PubMed

    Belal, Tarek S; El-Kafrawy, Dina S; Mahrous, Mohamed S; Abdel-Khalek, Magdi M; Abo-Gharam, Amira H

    2016-02-15

    This work presents the development, validation and application of four simple and direct spectrophotometric methods for determination of sodium valproate (VP) through charge transfer complexation reactions. The first method is based on the reaction of the drug with p-chloranilic acid (p-CA) in acetone to give a purple colored product with maximum absorbance at 524 nm. The second method depends on the reaction of VP with dichlone (DC) in dimethylformamide forming a reddish orange product measured at 490 nm. The third method is based upon the interaction of VP and picric acid (PA) in chloroform resulting in the formation of a yellow complex measured at 415 nm. The fourth method involves the formation of a yellow complex peaking at 361 nm upon the reaction of the drug with iodine in chloroform. Experimental conditions affecting the color development were studied and optimized. Stoichiometry of the reactions was determined. The proposed spectrophotometric procedures were effectively validated with respect to linearity, ranges, precision, accuracy, specificity, robustness, detection and quantification limits. Calibration curves of the formed color products with p-CA, DC, PA and iodine showed good linear relationships over the concentration ranges 24-144, 40-200, 2-20 and 1-8 μg/mL respectively. The proposed methods were successfully applied to the assay of sodium valproate in tablets and oral solution dosage forms with good accuracy and precision. Assay results were statistically compared to a reference pharmacopoeial HPLC method where no significant differences were observed between the proposed methods and reference method. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Validated spectrophotometric methods for determination of sodium valproate based on charge transfer complexation reactions

    NASA Astrophysics Data System (ADS)

    Belal, Tarek S.; El-Kafrawy, Dina S.; Mahrous, Mohamed S.; Abdel-Khalek, Magdi M.; Abo-Gharam, Amira H.

    2016-02-01

    This work presents the development, validation and application of four simple and direct spectrophotometric methods for determination of sodium valproate (VP) through charge transfer complexation reactions. The first method is based on the reaction of the drug with p-chloranilic acid (p-CA) in acetone to give a purple colored product with maximum absorbance at 524 nm. The second method depends on the reaction of VP with dichlone (DC) in dimethylformamide forming a reddish orange product measured at 490 nm. The third method is based upon the interaction of VP and picric acid (PA) in chloroform resulting in the formation of a yellow complex measured at 415 nm. The fourth method involves the formation of a yellow complex peaking at 361 nm upon the reaction of the drug with iodine in chloroform. Experimental conditions affecting the color development were studied and optimized. Stoichiometry of the reactions was determined. The proposed spectrophotometric procedures were effectively validated with respect to linearity, ranges, precision, accuracy, specificity, robustness, detection and quantification limits. Calibration curves of the formed color products with p-CA, DC, PA and iodine showed good linear relationships over the concentration ranges 24-144, 40-200, 2-20 and 1-8 μg/mL respectively. The proposed methods were successfully applied to the assay of sodium valproate in tablets and oral solution dosage forms with good accuracy and precision. Assay results were statistically compared to a reference pharmacopoeial HPLC method where no significant differences were observed between the proposed methods and reference method.

  9. A Novel Fault Diagnosis Method for Rotating Machinery Based on a Convolutional Neural Network

    PubMed Central

    Yang, Tao; Gao, Wei

    2018-01-01

    Fault diagnosis is critical to ensure the safety and reliable operation of rotating machinery. Most methods used in fault diagnosis of rotating machinery extract a few feature values from vibration signals for fault diagnosis, which is a dimensionality reduction from the original signal and may omit some important fault messages in the original signal. Thus, a novel diagnosis method is proposed involving the use of a convolutional neural network (CNN) to directly classify the continuous wavelet transform scalogram (CWTS), which is a time-frequency domain transform of the original signal and can contain most of the information of the vibration signals. In this method, the CWTS is formed by decomposing vibration signals of rotating machinery at different scales using the wavelet transform. Then the CNN is trained to diagnose faults, with the CWTS as the input. A series of experiments is conducted on the rotor experiment platform using this method. The results indicate that the proposed method can diagnose the faults accurately. To verify the universality of this method, the trained CNN was also used to perform fault diagnosis for another piece of rotor equipment, and a good result was achieved. PMID:29734704

  10. A Novel Fault Diagnosis Method for Rotating Machinery Based on a Convolutional Neural Network.

    PubMed

    Guo, Sheng; Yang, Tao; Gao, Wei; Zhang, Chen

    2018-05-04

    Fault diagnosis is critical to ensure the safety and reliable operation of rotating machinery. Most methods used in fault diagnosis of rotating machinery extract a few feature values from vibration signals for fault diagnosis, which is a dimensionality reduction from the original signal and may omit some important fault messages in the original signal. Thus, a novel diagnosis method is proposed involving the use of a convolutional neural network (CNN) to directly classify the continuous wavelet transform scalogram (CWTS), which is a time-frequency domain transform of the original signal and can contain most of the information of the vibration signals. In this method, the CWTS is formed by decomposing vibration signals of rotating machinery at different scales using the wavelet transform. Then the CNN is trained to diagnose faults, with the CWTS as the input. A series of experiments is conducted on the rotor experiment platform using this method. The results indicate that the proposed method can diagnose the faults accurately. To verify the universality of this method, the trained CNN was also used to perform fault diagnosis for another piece of rotor equipment, and a good result was achieved.
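
    Forming a CWT scalogram as a CNN input image is straightforward with the PyWavelets library; the sketch below uses the Morlet wavelet and 64 scales as illustrative choices, not the settings of the paper.

    ```python
    import numpy as np
    import pywt

    def cwt_scalogram(signal, n_scales=64, wavelet="morl"):
        """Continuous wavelet transform scalogram of a vibration signal,
        shaped as a single-channel image for CNN input."""
        scales = np.arange(1, n_scales + 1)
        coeffs, _ = pywt.cwt(signal, scales, wavelet)
        scalogram = np.abs(coeffs)                # (n_scales, n_samples)
        return scalogram[np.newaxis, ...]         # (1, n_scales, n_samples)

    # Stand-in vibration signal: a 50 Hz tone sampled over one second.
    x = np.sin(2 * np.pi * 50 * np.linspace(0, 1, 2048))
    image = cwt_scalogram(x)                      # feed this to the CNN
    ```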

  11. Combining large number of weak biomarkers based on AUC.

    PubMed

    Yan, Li; Tian, Lili; Liu, Song

    2015-12-20

    Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. Copyright © 2015 John Wiley & Sons, Ltd.
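
    The objective being maximized has a simple empirical form: the AUC of a linear combination is the fraction of case-control pairs the combined score orders correctly. A short sketch with toy data (the weight vector here is arbitrary, not an optimized one):

    ```python
    import numpy as np

    def empirical_auc(scores_pos, scores_neg):
        """Empirical AUC: probability that a randomly chosen case scores
        higher than a randomly chosen control (ties count one half)."""
        diff = scores_pos[:, None] - scores_neg[None, :]
        return (diff > 0).mean() + 0.5 * (diff == 0).mean()

    rng = np.random.default_rng(1)
    p = 20                                        # many weak markers
    w = rng.standard_normal(p)                    # some combination weights
    pos = rng.standard_normal((100, p)) + 0.2     # cases, weakly shifted
    neg = rng.standard_normal((100, p))           # controls
    print(empirical_auc(pos @ w, neg @ w))        # AUC of the combination
    ```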

  12. Combining large number of weak biomarkers based on AUC

    PubMed Central

    Yan, Li; Tian, Lili; Liu, Song

    2018-01-01

    Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. PMID:26227901

  13. Corrected simulations for one-dimensional diffusion processes with naturally occurring boundaries.

    PubMed

    Shafiey, Hassan; Gan, Xinjun; Waxman, David

    2017-11-01

    To simulate a diffusion process, a usual approach is to discretize the time in the associated stochastic differential equation. This is the approach used in the Euler method. In the present work we consider a one-dimensional diffusion process where the terms occurring, within the stochastic differential equation, prevent the process entering a region. The outcome is a naturally occurring boundary (which may be absorbing or reflecting). A complication occurs in a simulation of this situation. The term involving a random variable, within the discretized stochastic differential equation, may take a trajectory across the boundary into a "forbidden region." The naive way of dealing with this problem, which we refer to as the "standard" approach, is simply to reset the trajectory to the boundary, based on the argument that crossing the boundary actually signifies achieving the boundary. In this work we show, within the framework of the Euler method, that such resetting introduces a spurious force into the original diffusion process. This force may have a significant influence on trajectories that come close to a boundary. We propose a corrected numerical scheme, for simulating one-dimensional diffusion processes with naturally occurring boundaries. This involves correcting the standard approach, so that an exact property of the diffusion process is precisely respected. As a consequence, the proposed scheme does not introduce a spurious force into the dynamics. We present numerical test cases, based on exactly soluble one-dimensional problems with one or two boundaries, which suggest that, for a given value of the discrete time step, the proposed scheme leads to substantially more accurate results than the standard approach. Alternatively, the standard approach needs considerably more computation time to obtain a comparable level of accuracy to the proposed scheme, because the standard approach requires a significantly smaller time step.

  14. Corrected simulations for one-dimensional diffusion processes with naturally occurring boundaries

    NASA Astrophysics Data System (ADS)

    Shafiey, Hassan; Gan, Xinjun; Waxman, David

    2017-11-01

    To simulate a diffusion process, a usual approach is to discretize the time in the associated stochastic differential equation. This is the approach used in the Euler method. In the present work we consider a one-dimensional diffusion process where the terms occurring, within the stochastic differential equation, prevent the process entering a region. The outcome is a naturally occurring boundary (which may be absorbing or reflecting). A complication occurs in a simulation of this situation. The term involving a random variable, within the discretized stochastic differential equation, may take a trajectory across the boundary into a "forbidden region." The naive way of dealing with this problem, which we refer to as the "standard" approach, is simply to reset the trajectory to the boundary, based on the argument that crossing the boundary actually signifies achieving the boundary. In this work we show, within the framework of the Euler method, that such resetting introduces a spurious force into the original diffusion process. This force may have a significant influence on trajectories that come close to a boundary. We propose a corrected numerical scheme, for simulating one-dimensional diffusion processes with naturally occurring boundaries. This involves correcting the standard approach, so that an exact property of the diffusion process is precisely respected. As a consequence, the proposed scheme does not introduce a spurious force into the dynamics. We present numerical test cases, based on exactly soluble one-dimensional problems with one or two boundaries, which suggest that, for a given value of the discrete time step, the proposed scheme leads to substantially more accurate results than the standard approach. Alternatively, the standard approach needs considerably more computation time to obtain a comparable level of accuracy to the proposed scheme, because the standard approach requires a significantly smaller time step.
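
    The "standard" resetting approach the authors analyze is easy to state in code. The sketch below implements it for a generic one-dimensional diffusion with a natural lower boundary; the drift and noise functions are illustrative, and the paper's corrected scheme (which is not specified in the abstract) is not reproduced.

    ```python
    import numpy as np

    def euler_reset(x0, drift, noise, dt, n_steps, lower=0.0, rng=None):
        """Euler-Maruyama with the 'standard' boundary treatment: any step
        that crosses the lower boundary is simply reset onto it, which, as
        the paper shows, introduces a spurious force near the boundary."""
        rng = rng or np.random.default_rng()
        x = x0
        for _ in range(n_steps):
            dw = rng.standard_normal() * np.sqrt(dt)
            x = x + drift(x) * dt + noise(x) * dw
            if x < lower:          # trajectory entered the forbidden region
                x = lower          # reset to the boundary
        return x

    # Example: a square-root diffusion, whose noise term vanishes at x = 0.
    final = euler_reset(0.1, drift=lambda x: 0.5 - x,
                        noise=lambda x: np.sqrt(max(x, 0.0)),
                        dt=1e-3, n_steps=10_000)
    ```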

  15. Determination of Tissue Thermal Conductivity by Measuring and Modeling Temperature Rise Induced in Tissue by Pulsed Focused Ultrasound

    PubMed Central

    Kujawska, Tamara; Secomski, Wojciech; Kruglenko, Eleonora; Krawczyk, Kazimierz; Nowicki, Andrzej

    2014-01-01

    Tissue thermal conductivity (Ks) is an important parameter whose knowledge is essential whenever thermal fields induced in selected organs are to be predicted. The main objective of this study was to develop an alternative ultrasonic method for determining the Ks of tissues in vitro that is also suitable for living tissues. First, the method involves measuring the temperature-time rises T(t) induced in a tested tissue sample by a pulsed focused ultrasound beam with measured acoustic properties, using thermocouples located on the acoustic beam axis. Measurements were performed for 20-cycle tone bursts with a 2 MHz frequency, a 0.2 duty cycle, and 3 different initial pressures corresponding to average acoustic powers of 0.7 W, 1.4 W, and 2.1 W, generated from a circular focused transducer with a diameter of 15 mm and an f-number of 1.7 in a two-layer system of media: water/beef liver. The measurement results allowed the position of maximum heating inside the beef liver to be determined. It was found that this position is at the same axial distance from the source as the maximum peak-peak pressure calculated for each nonlinear beam produced in the two-layer system of media. Then, the method involves modeling T(t) at the point of maximum heating and fitting it to the experimental data by adjusting Ks. The averaged value of Ks determined by the proposed method was found to be 0.5±0.02 W/(m·°C), in good agreement with values determined by other methods. The proposed method is suitable for determining the Ks of some animal tissues in vivo (for example, a rat liver). PMID:24743838

  16. Training set expansion: an approach to improving the reconstruction of biological networks from limited and uneven reliable interactions

    PubMed Central

    Yip, Kevin Y.; Gerstein, Mark

    2009-01-01

    Motivation: An important problem in systems biology is reconstructing complete networks of interactions between biological objects by extrapolating from a few known interactions as examples. While there are many computational techniques proposed for this network reconstruction task, their accuracy is consistently limited by the small number of high-confidence examples, and the uneven distribution of these examples across the potential interaction space, with some objects having many known interactions and others few. Results: To address this issue, we propose two computational methods based on the concept of training set expansion. They work particularly effectively in conjunction with kernel approaches, which are a popular class of approaches for fusing together many disparate types of features. Both our methods are based on semi-supervised learning and involve augmenting the limited number of gold-standard training instances with carefully chosen and highly confident auxiliary examples. The first method, prediction propagation, propagates highly confident predictions of one local model to another as the auxiliary examples, thus learning from information-rich regions of the training network to help predict the information-poor regions. The second method, kernel initialization, takes the most similar and most dissimilar objects of each object in a global kernel as the auxiliary examples. Using several sets of experimentally verified protein–protein interactions from yeast, we show that training set expansion gives a measurable performance gain over a number of representative, state-of-the-art network reconstruction methods, and it can correctly identify some interactions that are ranked low by other methods due to the lack of training examples of the involved proteins. Contact: mark.gerstein@yale.edu Availability: The datasets and additional materials can be found at http://networks.gersteinlab.org/tse. PMID:19015141

  17. Improving prediction of heterodimeric protein complexes using combination with pairwise kernel.

    PubMed

    Ruan, Peiying; Hayashida, Morihiro; Akutsu, Tatsuya; Vert, Jean-Philippe

    2018-02-19

    Since many proteins become functional only after they interact with their partner proteins and form protein complexes, it is essential to identify the sets of proteins that form complexes. Therefore, several computational methods have been proposed to predict complexes from the topology and structure of experimental protein-protein interaction (PPI) networks. These methods work well to predict complexes involving at least three proteins, but generally fail at identifying complexes involving only two different proteins, called heterodimeric complexes or heterodimers. There is, however, an urgent need for efficient methods to predict heterodimers, since the majority of known protein complexes are precisely heterodimers. In this paper, we use three promising kernel functions: the Min kernel and two pairwise kernels, the Metric Learning Pairwise Kernel (MLPK) and the Tensor Product Pairwise Kernel (TPPK). We also consider normalized forms of the Min kernel. Then, we combine the Min kernel or its normalized form with one of the pairwise kernels. We applied kernels based on PPI, domain, phylogenetic profile, and subcellular localization properties to the prediction of heterodimers. We evaluate our method by employing C-Support Vector Classification (C-SVC), carrying out 10-fold cross-validation, and calculating the average F-measures. The results suggest that the combination of the normalized Min kernel and MLPK leads to the best F-measure and improves on the performance of our previous work, which had been the best existing method so far. In summary, we propose new methods to predict heterodimers using a machine learning-based approach: we train a support vector machine (SVM) to discriminate interacting vs. non-interacting protein pairs, based on information extracted from PPI, domain, phylogenetic profiles, and subcellular localization. We evaluate in detail new kernel functions to encode these data, and report prediction performance that outperforms the state of the art.
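
    The TPPK has a standard closed form over a base kernel K on single proteins: K_TPPK((a,b),(c,d)) = K(a,c)K(b,d) + K(a,d)K(b,c), which makes the pair kernel symmetric in the order of the partners. A minimal sketch with a toy base kernel:

    ```python
    import numpy as np

    def tppk(K, i, j, k, l):
        """Tensor Product Pairwise Kernel between protein pairs (i, j) and
        (k, l), built from a base kernel matrix K over single proteins."""
        return K[i, k] * K[j, l] + K[i, l] * K[j, k]

    # Toy base kernel over 4 proteins (symmetric positive semidefinite).
    X = np.random.default_rng(0).standard_normal((4, 5))
    K = X @ X.T
    print(tppk(K, 0, 1, 2, 3))
    ```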

  18. Detecting Inappropriate Access to Electronic Health Records Using Collaborative Filtering.

    PubMed

    Menon, Aditya Krishna; Jiang, Xiaoqian; Kim, Jihoon; Vaidya, Jaideep; Ohno-Machado, Lucila

    2014-04-01

    Many healthcare facilities enforce security on their electronic health records (EHRs) through a corrective mechanism: some staff nominally have almost unrestricted access to the records, but there is a strict ex post facto audit process for inappropriate accesses, i.e., accesses that violate the facility's security and privacy policies. This process is inefficient, as each suspicious access has to be reviewed by a security expert, and is purely retrospective, as it occurs after damage may have been incurred. This motivates automated approaches based on machine learning using historical data. Previous attempts at such a system have successfully applied supervised learning models to this end, such as SVMs and logistic regression. While providing benefits over manual auditing, these approaches ignore the identity of the users and patients involved in a record access. Therefore, they cannot exploit the fact that a patient whose record was previously involved in a violation has an increased risk of being involved in a future violation. Motivated by this, in this paper, we propose a collaborative filtering inspired approach to predicting inappropriate accesses. Our solution integrates both explicit and latent features for staff and patients, the latter acting as a personalized "finger-print" based on historical access patterns. The proposed method, when applied to real EHR access data from two tertiary hospitals and a file-access dataset from Amazon, shows not only significantly improved performance compared to existing methods, but also provides insights as to what indicates an inappropriate access.

  19. Detecting Inappropriate Access to Electronic Health Records Using Collaborative Filtering

    PubMed Central

    Menon, Aditya Krishna; Jiang, Xiaoqian; Kim, Jihoon; Vaidya, Jaideep; Ohno-Machado, Lucila

    2013-01-01

    Many healthcare facilities enforce security on their electronic health records (EHRs) through a corrective mechanism: some staff nominally have almost unrestricted access to the records, but there is a strict ex post facto audit process for inappropriate accesses, i.e., accesses that violate the facility’s security and privacy policies. This process is inefficient, as each suspicious access has to be reviewed by a security expert, and is purely retrospective, as it occurs after damage may have been incurred. This motivates automated approaches based on machine learning using historical data. Previous attempts at such a system have successfully applied supervised learning models to this end, such as SVMs and logistic regression. While providing benefits over manual auditing, these approaches ignore the identity of the users and patients involved in a record access. Therefore, they cannot exploit the fact that a patient whose record was previously involved in a violation has an increased risk of being involved in a future violation. Motivated by this, in this paper, we propose a collaborative filtering inspired approach to predicting inappropriate accesses. Our solution integrates both explicit and latent features for staff and patients, the latter acting as a personalized “finger-print” based on historical access patterns. The proposed method, when applied to real EHR access data from two tertiary hospitals and a file-access dataset from Amazon, shows not only significantly improved performance compared to existing methods, but also provides insights as to what indicates an inappropriate access. PMID:24683293
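
    As a sketch of the collaborative-filtering idea only (not the authors' exact model, which also integrates explicit features), the snippet below learns latent "fingerprint" vectors for staff and patients by stochastic gradient descent on the audited, labeled accesses; all dimensions and hyperparameters are illustrative.

    ```python
    import numpy as np

    def fit_latent_factors(access, n_factors=8, lr=0.01, reg=0.1, epochs=50):
        """Toy latent-factor model: access is a staff x patient matrix with
        NaN for unobserved entries and 0/1 audit labels (1 = violation).
        Score a new access by the inner product U[s] @ V[p]."""
        rng = np.random.default_rng(0)
        n_staff, n_patients = access.shape
        U = 0.1 * rng.standard_normal((n_staff, n_factors))
        V = 0.1 * rng.standard_normal((n_patients, n_factors))
        rows, cols = np.nonzero(~np.isnan(access))
        for _ in range(epochs):
            for s, p in zip(rows, cols):
                err = access[s, p] - U[s] @ V[p]
                u_old = U[s].copy()
                U[s] += lr * (err * V[p] - reg * U[s])   # staff fingerprint
                V[p] += lr * (err * u_old - reg * V[p])  # patient fingerprint
        return U, V

    A = np.full((50, 80), np.nan)        # staff x patients; NaN = unaudited
    A[0, 0], A[1, 2] = 1.0, 0.0          # two audited accesses
    U, V = fit_latent_factors(A)
    ```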

  20. On the use of haplotype phylogeny to detect disease susceptibility loci

    PubMed Central

    Bardel, Claire; Danjean, Vincent; Hugot, Jean-Pierre; Darlu, Pierre; Génin, Emmanuelle

    2005-01-01

    Background The cladistic approach proposed by Templeton has been presented as promising for the study of the genetic factors involved in common diseases. This approach allows the joint study of multiple markers within a gene by considering haplotypes and grouping them in nested clades. The idea is to search for clades with an excess of cases as compared to the whole sample and to identify the mutations defining these clades as potential candidate disease susceptibility sites. However, the performance of this approach for the study of the genetic factors involved in complex diseases has never been studied. Results In this paper, we propose a new method to perform such a cladistic analysis and we estimate its power through simulations. We show that under models where the susceptibility to the disease is caused by a single genetic variant, the cladistic test is neither really more powerful to detect an association nor really more efficient to localize the susceptibility site than an individual SNP testing. However, when two interacting sites are responsible for the disease, the cladistic analysis greatly improves the probability to find the two susceptibility sites. The impact of the linkage disequilibrium and of the tree characteristics on the efficiency of the cladistic analysis are also discussed. An application on a real data set concerning the CARD15 gene and Crohn disease shows that the method can successfully identify the three variant sites that are involved in the disease susceptibility. Conclusion The use of phylogenies to group haplotypes is especially interesting to pinpoint the sites that are likely to be involved in disease susceptibility among the different markers identified within a gene. PMID:15904492

  1. Efficient searching in meshfree methods

    NASA Astrophysics Data System (ADS)

    Olliff, James; Alford, Brad; Simkins, Daniel C.

    2018-04-01

    Meshfree methods such as the Reproducing Kernel Particle Method and the Element Free Galerkin method have proven to be excellent choices for problems involving complex geometry, evolving topology, and large deformation, owing to their ability to model the problem domain without the constraints imposed on Finite Element Method (FEM) meshes. However, meshfree methods have an added computational cost over FEM that comes from at least two sources: the increased cost of shape function evaluation and the determination of adjacency or connectivity. The focus of this paper is to formally address the types of adjacency information that arise in various uses of meshfree methods, discuss available techniques for computing the various adjacency graphs, propose a new search algorithm and data structure, and finally compare the memory and run-time performance of the methods.
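
    A common baseline for this adjacency query is a k-d tree with a fixed-radius search, which finds the nodes whose kernel supports cover a given evaluation point. A minimal sketch with SciPy (radius and point counts are illustrative; this is a standard technique, not the paper's new algorithm):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    nodes = np.random.default_rng(0).random((10_000, 3))   # particle positions
    tree = cKDTree(nodes)                                   # build once

    support_radius = 0.05
    query_points = np.random.default_rng(1).random((100, 3))
    neighbors = tree.query_ball_point(query_points, r=support_radius)
    # neighbors[i] lists indices of nodes within the support radius of point i.
    ```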

  2. Transform-Based Channel-Data Compression to Improve the Performance of a Real-Time GPU-Based Software Beamformer.

    PubMed

    Lok, U-Wai; Li, Pai-Chi

    2016-03-01

    Graphics processing unit (GPU)-based software beamforming has advantages over hardware-based beamforming of easier programmability and a faster design cycle, since complicated imaging algorithms can be efficiently programmed and modified. However, the need for a high data rate when transferring ultrasound radio-frequency (RF) data from the hardware front end to the software back end limits the real-time performance. Data compression methods can be applied to the hardware front end to mitigate the data transfer issue. Nevertheless, most decompression processes cannot be performed efficiently on a GPU, thus becoming another bottleneck of the real-time imaging. Moreover, lossless (or nearly lossless) compression is desirable to avoid image quality degradation. In a previous study, we proposed a real-time lossless compression-decompression algorithm and demonstrated that it can reduce the overall processing time because the reduction in data transfer time is greater than the computation time required for compression/decompression. This paper analyzes the lossless compression method in order to understand the factors limiting the compression efficiency. Based on the analytical results, a nearly lossless compression is proposed to further enhance the compression efficiency. The proposed method comprises a transformation coding method involving modified lossless compression that aims at suppressing amplitude data. The simulation results indicate that the compression ratio (CR) of the proposed approach can be enhanced from nearly 1.8 to 2.5, thus allowing a higher data acquisition rate at the front end. The spatial and contrast resolutions with and without compression were almost identical, and the process of decompressing the data of a single frame on a GPU took only several milliseconds. Moreover, the proposed method has been implemented in a 64-channel system that we built in-house to demonstrate the feasibility of the proposed algorithm in a real system. It was found that channel data from a 64-channel system can be transferred using the standard USB 3.0 interface in most practical imaging applications.

  3. The forecasting of menstruation based on a state-space modeling of basal body temperature time series.

    PubMed

    Fukaya, Keiichi; Kawamori, Ai; Osada, Yutaka; Kitazawa, Masumi; Ishiguro, Makio

    2017-09-20

    Women's basal body temperature (BBT) shows a periodic pattern that is associated with the menstrual cycle. Although this fact suggests that daily BBT time series can be useful for estimating the underlying phase state as well as for predicting the length of the current menstrual cycle, little attention has been paid to modeling BBT time series. In this study, we propose a state-space model that involves the menstrual phase as a latent state variable to explain the daily fluctuation of BBT and the menstrual cycle length. Conditional distributions of the phase are obtained by using sequential Bayesian filtering techniques. A predictive distribution of the next menstruation day can be derived based on this conditional distribution and the model, leading to a novel statistical framework that provides a sequentially updated prediction of the upcoming menstruation day. We applied this framework to a real data set of women's BBT and menstruation days and compared the prediction accuracy of the proposed method with that of previous methods, showing that the proposed method generally provides a better prediction. Because BBT can be obtained with relatively small cost and effort, the proposed method can be useful for women's health management. Potential extensions of this framework as the basis for modeling and predicting events associated with the menstrual cycle are discussed. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shumway, R.H.; McQuarrie, A.D.

    Robust statistical approaches to the problem of discriminating between regional earthquakes and explosions are developed. We compare linear discriminant analysis using descriptive features like amplitude and spectral ratios with signal discrimination techniques using the original signal waveforms and spectral approximations to the log likelihood function. Robust information theoretic techniques are proposed and all methods are applied to 8 earthquakes and 8 mining explosions in Scandinavia and to an event from Novaya Zemlya of unknown origin. It is noted that signal discrimination approaches based on discrimination information and Renyi entropy perform better in the test sample than conventional methods based on spectral ratios involving the P and S phases. Two techniques for identifying the ripple-firing pattern for typical mining explosions are proposed and shown to work well on simulated data and on several Scandinavian earthquakes and explosions. We use both cepstral analysis in the frequency domain and a time domain method based on the autocorrelation and partial autocorrelation functions. The proposed approach strips off underlying smooth spectral and seasonal spectral components corresponding to the echo pattern induced by two simple ripple-fired models. For two mining explosions, a pattern is identified, whereas for two earthquakes, no pattern is evident.
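
    The cepstral part of the approach rests on a standard computation: the real cepstrum is the inverse FFT of the log magnitude spectrum, in which a ripple-fired echo appears as a peak at the echo delay (quefrency). A minimal sketch with a toy single-echo signal (the delay and echo strength are illustrative):

    ```python
    import numpy as np

    def real_cepstrum(signal):
        """Real cepstrum: inverse FFT of the log magnitude spectrum."""
        spectrum = np.fft.rfft(signal)
        return np.fft.irfft(np.log(np.abs(spectrum) + 1e-12))

    rng = np.random.default_rng(0)
    x = rng.standard_normal(4096)
    delay = 200                        # echo lag in samples
    x[delay:] += 0.6 * x[:-delay]      # simple ripple-fired echo model
    c = real_cepstrum(x)
    print(np.argmax(c[50:1000]) + 50)  # cepstral peak near the true delay
    ```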

  5. Exploratory High-Fidelity Aerostructural Optimization Using an Efficient Monolithic Solution Method

    NASA Astrophysics Data System (ADS)

    Zhang, Jenmy Zimi

    This thesis is motivated by the desire to discover fuel efficient aircraft concepts through exploratory design. An optimization methodology based on tightly integrated high-fidelity aerostructural analysis is proposed, which has the flexibility, robustness, and efficiency to contribute to this goal. The present aerostructural optimization methodology uses an integrated geometry parameterization and mesh movement strategy, which was initially proposed for aerodynamic shape optimization. This integrated approach provides the optimizer with a large amount of geometric freedom for conducting exploratory design, while allowing for efficient and robust mesh movement in the presence of substantial shape changes. In extending this approach to aerostructural optimization, this thesis has addressed a number of important challenges. A structural mesh deformation strategy has been introduced to translate consistently the shape changes described by the geometry parameterization to the structural model. A three-field formulation of the discrete steady aerostructural residual couples the mesh movement equations with the three-dimensional Euler equations and a linear structural analysis. Gradients needed for optimization are computed with a three-field coupled adjoint approach. A number of investigations have been conducted to demonstrate the suitability and accuracy of the present methodology for use in aerostructural optimization involving substantial shape changes. Robustness and efficiency in the coupled solution algorithms is crucial to the success of an exploratory optimization. This thesis therefore also focuses on the design of an effective monolithic solution algorithm for the proposed methodology. This involves using a Newton-Krylov method for the aerostructural analysis and a preconditioned Krylov subspace method for the coupled adjoint solution. Several aspects of the monolithic solution method have been investigated. These include appropriate strategies for scaling and matrix-vector product evaluation, as well as block preconditioning techniques that preserve the modularity between subproblems. The monolithic solution method is applied to problems with varying degrees of fluid-structural coupling, as well as a wing span optimization study. The monolithic solution algorithm typically requires 20%-70% less computing time than its partitioned counterpart. This advantage increases with increasing wing flexibility. The performance of the monolithic solution method is also much less sensitive to the choice of the solution parameter.

  6. Sparsity-driven coupled imaging and autofocusing for interferometric SAR

    NASA Astrophysics Data System (ADS)

    Zengin, Oğuzcan; Khwaja, Ahmed Shaharyar; Çetin, Müjdat

    2018-04-01

    We propose a sparsity-driven method for coupled image formation and autofocusing based on multi-channel data collected in interferometric synthetic aperture radar (IfSAR). Relative phase between SAR images contains valuable information. For example, it can be used to estimate the height of the scene in SAR interferometry. However, this relative phase could be degraded when independent enhancement methods are used over SAR image pairs. Previously, Ramakrishnan et al. proposed a coupled multi-channel image enhancement technique, based on a dual descent method, which exhibits better performance in phase preservation compared to independent enhancement methods. Their work involves a coupled optimization formulation that uses a sparsity enforcing penalty term as well as a constraint tying the multichannel images together to preserve the cross-channel information. In addition to independent enhancement, the relative phase between the acquisitions can be degraded due to other factors as well, such as platform location uncertainties, leading to phase errors in the data and defocusing in the formed imagery. The performance of airborne SAR systems can be affected severely by such errors. We propose an optimization formulation that combines Ramakrishnan et al.'s coupled IfSAR enhancement method with the sparsity-driven autofocus (SDA) approach of Önhon and Çetin to alleviate the effects of phase errors due to motion errors in the context of IfSAR imaging. Our method solves the joint optimization problem with a Lagrangian optimization method iteratively. In our preliminary experimental analysis, we have obtained results of our method on synthetic SAR images and compared its performance to existing methods.

  7. Transforming Multidisciplinary Customer Requirements to Product Design Specifications

    NASA Astrophysics Data System (ADS)

    Ma, Xiao-Jie; Ding, Guo-Fu; Qin, Sheng-Feng; Li, Rong; Yan, Kai-Yin; Xiao, Shou-Ne; Yang, Guang-Wu

    2017-09-01

    With the increasing complexity of complex mechatronic products, it is necessary to involve multidisciplinary design teams; the traditional customer requirements modeling for a single-discipline team thus becomes difficult to apply in a multidisciplinary team and project, since team members with various disciplinary backgrounds may have different interpretations of the customers' requirements. A new synthesized multidisciplinary customer requirements modeling method is provided for obtaining and describing a common understanding of customer requirements (CRs) and, more importantly, for transforming them into detailed and accurate product design specifications (PDS) so as to interact with different team members effectively. A case study of designing a high-speed train verifies the rationality and feasibility of the proposed multidisciplinary requirement modeling method for complex mechatronic product development. This research offers guidance for realizing customer-driven personalized customization of complex mechatronic products.

  8. Using a Mixed Model to Evaluate Job Satisfaction in High-Tech Industries.

    PubMed

    Tsai, Sang-Bing; Huang, Chih-Yao; Wang, Cheng-Kuang; Chen, Quan; Pan, Jingzhou; Wang, Ge; Wang, Jingan; Chin, Ta-Chia; Chang, Li-Chung

    2016-01-01

    R&D professionals are the impetus behind technological innovation, and their competitiveness and capability drive the growth of a company. However, high-tech industries have a chronic shortage of such indispensable professionals. Accordingly, reducing R&D personnel turnover has become a major human resource management challenge facing innovative companies. This study combined importance-performance analysis (IPA) with the decision-making trial and evaluation laboratory (DEMATEL) method to propose an IPA-DEMATEL model. Establishing this model involved three steps. First, an IPA was conducted to measure the importance of and satisfaction gained from job satisfaction criteria. Second, the DEMATEL method was used to determine the causal relationships of and interactive influence among the criteria. Third, a criteria model was constructed to evaluate job satisfaction of high-tech R&D personnel. On the basis of the findings, managerial suggestions are proposed.
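
    The DEMATEL step has a standard computation behind it: normalize the direct-influence matrix and form the total-relation matrix T = N(I - N)^(-1), from which each criterion's prominence (D+R) and relation (D-R) follow. A minimal numpy sketch with a toy three-criterion matrix (the influence scores are invented for illustration):

    ```python
    import numpy as np

    def dematel(direct):
        """DEMATEL: normalize the direct-influence matrix D, compute the
        total-relation matrix T = N (I - N)^(-1), and return prominence
        (D+R) and relation (D-R) for each criterion."""
        D = np.asarray(direct, dtype=float)
        N = D / max(D.sum(axis=1).max(), D.sum(axis=0).max())
        T = N @ np.linalg.inv(np.eye(len(D)) - N)
        d, r = T.sum(axis=1), T.sum(axis=0)   # dispatched / received influence
        return T, d + r, d - r

    # Toy 3-criterion direct-influence matrix (0 = none ... 4 = very high).
    T, prominence, relation = dematel([[0, 3, 2], [1, 0, 3], [2, 1, 0]])
    ```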

  9. Application of Energy Function as a Measure of Error in the Numerical Solution for Online Transient Stability Assessment

    NASA Astrophysics Data System (ADS)

    Sarojkumar, K.; Krishna, S.

    2016-08-01

    Online dynamic security assessment (DSA) is a computationally intensive task. In order to reduce the amount of computation, screening of contingencies is performed. Screening involves analyzing the contingencies with the system described by a simpler model so that the computation requirement is reduced. Screening identifies those contingencies that are certain not to cause instability and hence can be eliminated from further scrutiny. The numerical method and the step size used for screening should be chosen as a compromise between speed and accuracy. This paper proposes the use of an energy function as a measure of error in the numerical solution used for screening contingencies. The proposed measure of error can be used to determine the most accurate numerical method satisfying the time constraint of online DSA. Case studies on a 17-generator system are reported.

  10. Sensitivity-based virtual fields for the non-linear virtual fields method

    NASA Astrophysics Data System (ADS)

    Marek, Aleksander; Davis, Frances M.; Pierron, Fabrice

    2017-09-01

    The virtual fields method is an approach to inversely identify material parameters using full-field deformation data. In this manuscript, a new set of automatically-defined virtual fields for non-linear constitutive models has been proposed. These new sensitivity-based virtual fields reduce the influence of noise on the parameter identification. The sensitivity-based virtual fields were applied to a numerical example involving small strain plasticity; however, the general formulation derived for these virtual fields is applicable to any non-linear constitutive model. To quantify the improvement offered by these new virtual fields, they were compared with stiffness-based and manually defined virtual fields. The proposed sensitivity-based virtual fields were consistently able to identify plastic model parameters and outperform the stiffness-based and manually defined virtual fields when the data was corrupted by noise.

  11. A Novel Approach with Time-Splitting Spectral Technique for the Coupled Schrödinger-Boussinesq Equations Involving Riesz Fractional Derivative

    NASA Astrophysics Data System (ADS)

    Saha Ray, S.

    2017-09-01

    In the present paper, the Riesz fractional coupled Schrödinger-Boussinesq (S-B) equations have been solved by the time-splitting Fourier spectral (TSFS) method. This technique is utilized for discretizing the Schrödinger-like equation, and a pseudospectral discretization has been employed for the Boussinesq-like equation. In addition, an implicit finite difference approach has been proposed in order to compare the results with the solutions obtained from the time-splitting technique. Furthermore, the time-splitting method is proved to be unconditionally stable. The error norms along with the graphical solutions have also been presented. Supported by NBHM, Mumbai, under Department of Atomic Energy, Government of India vide Grant No. 2/48(7)/2015/NBHM (R.P.)/R&D II/11403
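
    The building block of such Fourier spectral schemes is that the Riesz fractional derivative of order α acts in Fourier space through the symbol -|k|^α. A minimal sketch on a periodic domain, checked against the classical case α = 2 (grid size and domain are illustrative):

    ```python
    import numpy as np

    def riesz_derivative(u, order, L):
        """Riesz fractional derivative of order alpha on a periodic domain
        of length L, evaluated spectrally via the symbol -|k|^alpha."""
        n = u.size
        k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # wavenumbers
        return np.fft.ifft(-np.abs(k) ** order * np.fft.fft(u))

    # For alpha = 2 this reduces to the ordinary second derivative.
    x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
    u = np.sin(x)
    err = np.max(np.abs(riesz_derivative(u, 2.0, 2 * np.pi).real + np.sin(x)))
    print(err)   # ~ machine precision
    ```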

  12. A consensus least squares support vector regression (LS-SVR) for analysis of near-infrared spectra of plant samples.

    PubMed

    Li, Yankun; Shao, Xueguang; Cai, Wensheng

    2007-04-15

    Consensus modeling, which combines the results of multiple independent models to produce a single prediction, avoids the instability of a single model. Based on the principle of consensus modeling, a consensus least squares support vector regression (LS-SVR) method for calibrating near-infrared (NIR) spectra was proposed. In the proposed approach, NIR spectra of plant samples were first preprocessed using the discrete wavelet transform (DWT) to filter the spectral background and noise; then, the consensus LS-SVR technique was used for building the calibration model. With an optimization of the parameters involved in the modeling, a satisfactory model was achieved for predicting the content of reducing sugar in plant samples. The predicted results show that the consensus LS-SVR model is more robust and reliable than the conventional partial least squares (PLS) and LS-SVR methods.
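
    A consensus regression sketch in the spirit described, using scikit-learn's kernel ridge regression as a common stand-in for LS-SVR: several members are trained on random subsets of the calibration set and their predictions averaged. The DWT preprocessing step is omitted, and all hyperparameters are illustrative.

    ```python
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    def consensus_predict(X_train, y_train, X_test, n_models=10, frac=0.8):
        """Train several kernel ridge models on random subsets of the
        calibration samples and average their predictions (the consensus)."""
        rng = np.random.default_rng(0)
        preds = []
        for _ in range(n_models):
            idx = rng.choice(len(X_train), int(frac * len(X_train)),
                             replace=False)
            model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1e-2)
            model.fit(X_train[idx], y_train[idx])
            preds.append(model.predict(X_test))
        return np.mean(preds, axis=0)    # consensus = member average
    ```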

  13. Detection of the Vibration Signal from Human Vocal Folds Using a 94-GHz Millimeter-Wave Radar

    PubMed Central

    Chen, Fuming; Li, Sheng; Zhang, Yang; Wang, Jianqi

    2017-01-01

    The detection of the vibration signal from human vocal folds provides essential information for studying human phonation and diagnosing voice disorders. Doppler radar technology has enabled the noncontact measurement of the human-vocal-fold vibration. However, existing systems must be placed in close proximity to the human throat and detailed information may be lost because of the low operating frequency. In this paper, a long-distance detection method, involving the use of a 94-GHz millimeter-wave radar sensor, is proposed for detecting the vibration signals from human vocal folds. An algorithm that combines empirical mode decomposition (EMD) and the auto-correlation function (ACF) method is proposed for detecting the signal. First, the EMD method is employed to suppress the noise of the radar-detected signal. Further, the ratio of the energy and entropy is used to detect voice activity in the radar-detected signal, following which, a short-time ACF is employed to extract the vibration signal of the human vocal folds from the processed signal. For validating the method and assessing the performance of the radar system, a vibration measurement sensor and microphone system are additionally employed for comparison. The experimental results obtained from the spectrograms, the vibration frequency of the vocal folds, and coherence analysis demonstrate that the proposed method can effectively detect the vibration of human vocal folds from a long detection distance. PMID:28282892
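
    The ACF stage of the pipeline is a standard fundamental-frequency estimator: pick the lag of the strongest autocorrelation peak inside the plausible pitch range. A minimal sketch on a stand-in 120 Hz tone (the EMD denoising and voice-activity steps are omitted, and the sampling rate is illustrative):

    ```python
    import numpy as np

    def acf_fundamental(frame, fs, fmin=60.0, fmax=400.0):
        """Estimate vocal-fold vibration frequency from one signal frame via
        the autocorrelation function."""
        frame = frame - frame.mean()
        acf = np.correlate(frame, frame, mode="full")[frame.size - 1:]
        lo, hi = int(fs / fmax), int(fs / fmin)   # admissible lag range
        lag = lo + np.argmax(acf[lo:hi])
        return fs / lag

    fs = 8000
    t = np.arange(2048) / fs
    frame = np.sin(2 * np.pi * 120 * t)   # stand-in 120 Hz vibration
    print(acf_fundamental(frame, fs))      # ≈ 120
    ```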

  14. Robust electromagnetically guided endoscopic procedure using enhanced particle swarm optimization for multimodal information fusion.

    PubMed

    Luo, Xiongbiao; Wan, Ying; He, Xiangjian

    2015-04-01

    Electromagnetically guided endoscopic procedures, which aim at accurately and robustly localizing the endoscope, involve multimodal sensory information during interventions. However, it remains challenging to integrate this information for precise and stable endoscopic guidance. To tackle this challenge, this paper proposes a new framework, on the basis of an enhanced particle swarm optimization method, to effectively fuse this information for accurate and continuous endoscope localization. The authors use the particle swarm optimization method, a stochastic evolutionary computation algorithm, to effectively fuse the multimodal information, including preoperative information (i.e., computed tomography images) as a frame of reference, endoscopic camera videos, and positional sensor measurements (i.e., electromagnetic sensor outputs). Since evolutionary computation methods are usually limited by possible premature convergence and fixed evolutionary factors, the authors introduce the current (endoscopic camera and electromagnetic sensor) observation to boost the particle swarm optimization and also adaptively update the evolutionary parameters in accordance with spatial constraints and the current observation, resulting in advantageous performance of the enhanced algorithm. The experimental results demonstrate that the authors' proposed method provides a more accurate and robust endoscopic guidance framework than state-of-the-art methods. The average guidance accuracy of the authors' framework was about 3.0 mm and 5.6°, while the previous methods showed at least 3.9 mm and 7.0°. The average position and orientation smoothness of their method was 1.0 mm and 1.6°, significantly better than that of the other methods (at best 2.0 mm and 2.6°). Additionally, the average visual quality of the endoscopic guidance was improved to 0.29. In summary, a robust electromagnetically guided endoscopy framework was proposed on the basis of an enhanced particle swarm optimization method that uses the current observation information and adaptive evolutionary factors. The authors' proposed framework greatly reduced the guidance errors from (4.3 mm, 7.8°) to (3.0 mm, 5.6°) compared to state-of-the-art methods.
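
    For reference, the sketch below is plain particle swarm optimization on a toy objective; the paper's enhancements (observation boosting and adaptive evolutionary factors) are not reproduced, and all coefficients are conventional illustrative values.

    ```python
    import numpy as np

    def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        """Plain PSO: velocities blend inertia, attraction to each particle's
        personal best, and attraction to the global best."""
        rng = np.random.default_rng(0)
        x = rng.uniform(-1, 1, (n_particles, dim))       # positions
        v = np.zeros_like(x)                             # velocities
        pbest = x.copy()
        pbest_val = np.apply_along_axis(objective, 1, x)
        gbest = pbest[np.argmin(pbest_val)]
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            vals = np.apply_along_axis(objective, 1, x)
            better = vals < pbest_val
            pbest[better], pbest_val[better] = x[better], vals[better]
            gbest = pbest[np.argmin(pbest_val)]
        return gbest

    best = pso(lambda p: np.sum(p ** 2), dim=6)   # toy objective
    ```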

  15. Fast computation of the multivariable stability margin for real interrelated uncertain parameters

    NASA Technical Reports Server (NTRS)

    Sideris, Athanasios; Sanchez Pena, Ricardo S.

    1988-01-01

    A novel algorithm is proposed for computing the multivariable stability margin used to check the robust stability of feedback systems with real parametric uncertainty. This method eliminates the frequency search required by an existing algorithm, reducing the problem to checking a finite number of conditions. These conditions have a special structure, which allows a significant improvement in the speed of the computations.

  16. Assessing Tax Form Distribution Costs: A Proposed Method for Computing the Dollar Value of Tax Form Distribution in a Public Library.

    ERIC Educational Resources Information Center

    Casey, James B.

    1998-01-01

    Explains how a public library can compute the actual cost of distributing tax forms to the public by listing all direct and indirect costs and demonstrating the formulae and necessary computations. Supplies directions for calculating costs involved for all levels of staff as well as associated public relations efforts, space, and utility costs.…

  17. Cost and Time Analysis of Monograph Cataloging in Hospital Libraries: A Preliminary Study.

    ERIC Educational Resources Information Center

    Angold, Linda

    The purpose of this paper is: (1) to propose models to be used in evaluating relative time and cost factors involved in monograph cataloging within a hospital library, and (2) to test the models by performing a cost and time analysis of each cataloging method studied. To establish as complete a list of cataloging work units as possible, several…

  18. Impossibility of the Counterfactual Computation for All Possible Outcomes

    NASA Astrophysics Data System (ADS)

    Vaidman, L.

    2007-04-01

    A recent proposal for counterfactual computation [O. Hosten et al., Nature (London) 439, 949 (2006), doi:10.1038/nature04523] is analyzed. It is argued that the method does not provide counterfactual computation for all possible outcomes. The explanation involves a novel paradoxical feature of pre- and postselected quantum particles: The particle can reach a certain location without being on the path that leads to this location.

  19. Reducing Water/Hull Drag By Injecting Air Into Grooves

    NASA Technical Reports Server (NTRS)

    Reed, Jason C.; Bushnell, Dennis M.; Weinstein, Leonard M.

    1991-01-01

    Proposed technique for reduction of friction drag on hydrodynamic body involves use of grooves and combinations of surfactants to control motion of layer on surface of such body. Surface contains many rows of side-by-side, evenly spaced, longitudinal grooves. Dimensions of grooves and sharpnesses of tips in specific case depend on conditions of flow about vessel. Requires much less air than does microbubble-injection method.

  20. A Survey of Levels of Supervisory Support and Maintenance of Effects Reported by Educators Involved in Direct Instruction Implementations.

    ERIC Educational Resources Information Center

    Blakely, Molly Riley

    2001-01-01

    Provides results of a survey given to 150 educators across 5 school Direct Instruction implementations. Proposes that this survey is an initial step toward determining teacher preferences with regard to levels of support and coaching provided in schools. Finds teachers identified the team-teach method of coaching as the most effective.…

  1. The Osher scheme for real gases

    NASA Technical Reports Server (NTRS)

    Suresh, Ambady; Liou, Meng-Sing

    1990-01-01

    An extension of Osher's approximate Riemann solver to include gases with an arbitrary equation of state is presented. By a judicious choice of thermodynamic variables, the Riemann invariants are reduced to quadratures which are then approximated numerically. The extension is rigorous and does not involve any further assumptions or approximations over the ideal gas case. Numerical results are presented to demonstrate the feasibility and accuracy of the proposed method.

  2. Application of the Organic Synthetic Designs to Astrobiology

    NASA Astrophysics Data System (ADS)

    Kolb, V. M.

    2009-12-01

    In this paper we propose a synthetic route to the heterocyclic compounds and the insoluble materials found on meteorites. Our synthetic scheme involves the reaction of sugars and amino acids, the so-called Maillard reaction. We have developed this scheme based on the combined analysis of the regular and retrosynthetic organic synthetic principles. The merits of these synthetic methods for the prebiotic design are addressed.

  3. A Proposed Methodology to Classify Frontier Capital Markets

    DTIC Science & Technology

    2011-07-31

    “…but because it is the surest route to our common good.” (Inaugural address of President Barack Obama, Jan 2009.) This project involves basic ... machine learning. The algorithm consists of a unique binary classifier mechanism that combines three methods: k-Nearest Neighbors (kNN), ensemble ... (Report sections include kNN ensemble classification techniques and capital market classification based on capital flows and trading architecture.)

  4. An iterative method for analysis of hadron ratios and Spectra in relativistic heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Choi, Suk; Lee, Kang Seog

    2016-04-01

    A new iteration method is proposed for analyzing both the multiplicities and the transverse momentum spectra measured within a small rapidity interval with a low momentum cut-off, without assuming invariance of the rapidity distribution under Lorentz boosts, and is applied to the hadron data measured by the ALICE collaboration for Pb+Pb collisions at √s_NN = 2.76 TeV. In order to correctly account for the resonance contribution restricted to the small measured rapidity interval, we consider only ratios involving hadrons whose transverse momentum spectrum is available. In spite of the small number of ratios considered, the quality of the fit to both the ratios and the transverse momentum spectra is excellent. Also, the calculated ratios involving strange baryons, computed with the fitted parameters, agree with the data surprisingly well.

  5. Armored Enzyme Nanoparticles for Remediation of Subsurface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grate, Jay W.

    2005-09-01

    The remediation of subsurface contaminants is a critical problem for the Department of Energy, other government agencies, and our nation. Severe contamination of soil and groundwater exists at several DOE sites due to various methods of intentional and unintentional release. Given the difficulties involved in conventional removal or separation processes, it is vital to develop methods to transform contaminants and contaminated earth/water to reduce risks to human health and the environment. Transformation of the contaminants themselves may involve conversion to other immobile species that do not migrate into well water or surface waters, as is proposed for metals and radionuclides; or degradation to harmless molecules, as is desired for organic contaminants. Transformation of contaminated earth (as opposed to the contaminants themselves) may entail reductions in volume or release of bound contaminants for remediation.

  6. Development of car theft crime index in peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Zulkifli, Malina; Ismail, Noriszura; Razali, Ahmad Mahir; Kasim, Maznah Mat

    2014-06-01

    Vehicle theft is classified as property crime and is considered the most frequently reported crime in Malaysia. The rising number of vehicle thefts requires proper control by the relevant authorities, especially through the planning and implementation of strategic and effective measures. Nevertheless, the effort to control this crime would be much easier if there were an indicator or index specific to vehicle theft. This study aims to build a crime index specific to vehicle theft. The development of the vehicle theft index proposed in this study requires three main steps: the first involves identification of criteria related to vehicle theft; the second requires calculation of the degrees of importance, or criteria weights, which involves application of correlation and entropy methods; and the final step involves building the vehicle theft index using a linear combination, or weighted arithmetic average. The results show that the two methods used for determining the weights of the vehicle theft index yield similar weights. Information generated from the results can be used as a primary source for local authorities to plan strategies for the reduction of vehicle theft and for insurance companies to determine premium rates of automobile insurance.
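
    A compact sketch of the weighting and aggregation steps is given below, assuming a matrix of district-by-criterion values; the entropy weighting and the weighted arithmetic average are textbook formulations, and the example numbers are invented.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weighting for m alternatives x n criteria (rows could be
    districts, columns vehicle-theft-related criteria)."""
    P = X / X.sum(axis=0)                        # normalize each criterion
    m = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        E = -np.nansum(P * np.log(P), axis=0) / np.log(m)   # entropy per criterion
    d = 1.0 - E                                  # degree of diversification
    return d / d.sum()                           # weights summing to one

def theft_index(X, w):
    """Weighted arithmetic average (linear combination) of min-max
    normalized criteria."""
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    return Xn @ w

X = np.array([[120, 3.1, 40], [300, 5.2, 55], [90, 2.0, 35]], float)  # invented
w = entropy_weights(X)
print(theft_index(X, w))
```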

  7. Natural frequency identification of smart washer by using adaptive observer

    NASA Astrophysics Data System (ADS)

    Ito, Hitoshi; Okugawa, Masayuki

    2014-04-01

    Bolted joints are used in many machines and structures, and some of them loosen during long-term use; such bolt loosening can cause serious accidents in machine/structure systems. Bolted joints, especially those in critical locations, are therefore a main object of maintenance inspection. Inspection that relies on human involvement is time-consuming, labor-intensive, and costly, and needs to be improved. Remote, fully automated monitoring of bolt loosening enables constant monitoring of bolted joints. To detect the loosening of bolted joints without human involvement, the key objective is to apply a structural health monitoring technique and the smart structures/materials concept. In this study, a new method of bolt-loosening detection using a smart washer is proposed; the basic detection principle is discussed through numerical analysis of the frequency equation of the system and is confirmed experimentally. The smart washer used in this study is of cantilever type with piezoelectric material, which gives the washer self-sensing and actuation functions. The principle used to detect the loosening of the bolts is that the natural frequency of the smart washer system decreases with the change in the bolt-tightening axial tension. The feature of the proposed method is that it identifies the natural frequency of the current condition on demand by combining the self-sensing and actuation function with a system identification algorithm, since the natural frequency varies with the bolt-tightening axial tension. A novel bolt-loosening detection method adopting an adaptive observer is proposed in this paper. Numerical simulations are performed to verify the feasibility of adaptive-observer-based loosening detection, and they confirm that adopting appropriate initial parameters and a variable adaptive gain improves the detection accuracy for bolt loosening.

  8. A Retrospective Review of Microbiological Methods Applied in Studies Following the Deepwater Horizon Oil Spill.

    PubMed

    Zhang, Shuangfei; Hu, Zhong; Wang, Hui

    2018-01-01

    The Deepwater Horizon (DWH) oil spill in the Gulf of Mexico in 2010 resulted in serious damage to local marine and coastal environments. In addition to the physical removal and chemical dispersion of spilled oil, biodegradation by indigenous microorganisms was regarded as the most effective way for cleaning up residual oil. Different microbiological methods were applied to investigate the changes and responses of bacterial communities after the DWH oil spills. By summarizing and analyzing these microbiological methods, giving recommendations and proposing some methods that have not been used, this review aims to provide constructive guidelines for microbiological studies after environmental disasters, especially those involving organic pollutants.

  9. A Retrospective Review of Microbiological Methods Applied in Studies Following the Deepwater Horizon Oil Spill

    PubMed Central

    Zhang, Shuangfei; Hu, Zhong; Wang, Hui

    2018-01-01

    The Deepwater Horizon (DWH) oil spill in the Gulf of Mexico in 2010 resulted in serious damage to local marine and coastal environments. In addition to the physical removal and chemical dispersion of spilled oil, biodegradation by indigenous microorganisms was regarded as the most effective way for cleaning up residual oil. Different microbiological methods were applied to investigate the changes and responses of bacterial communities after the DWH oil spills. By summarizing and analyzing these microbiological methods, giving recommendations and proposing some methods that have not been used, this review aims to provide constructive guidelines for microbiological studies after environmental disasters, especially those involving organic pollutants. PMID:29628913

  10. Novel method for screening of enteric film coatings properties with magnetic resonance imaging.

    PubMed

    Dorożyński, Przemysław; Jamróz, Witold; Niwiński, Krzysztof; Kurek, Mateusz; Węglarz, Władysław P; Jachowicz, Renata; Kulinowski, Piotr

    2013-11-18

    The aim of this study is to present the concept of a novel method for fast screening of the properties of enteric coating compositions without the need to prepare tablet batches for fluid bed coating. The proposed method involves evaluation of enteric-coated model tablets in a specially designed testing cell with application of the MRI technique. The results obtained in the testing cell were compared with the results of dissolution studies of mini-tablets coated in a fluid bed apparatus. The method could be useful in the early stage of formulation development for screening film coating properties, which will shorten and simplify development work. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Fast immersed interface Poisson solver for 3D unbounded problems around arbitrary geometries

    NASA Astrophysics Data System (ADS)

    Gillis, T.; Winckelmans, G.; Chatelain, P.

    2018-02-01

    We present a fast and efficient Fourier-based solver for the Poisson problem around an arbitrary geometry in an unbounded 3D domain. This solver merges two rewarding approaches, the lattice Green's function method and the immersed interface method, using the Sherman-Morrison-Woodbury decomposition formula. The method is intended to be second order up to the boundary. This is verified on two potential flow benchmarks. We further analyse the iterative process and the convergence behavior of the proposed algorithm. The method is applicable to a wide range of problems involving a Poisson equation around inner bodies, which goes well beyond the present validation on potential flows.

  12. Numerical optimization methods for controlled systems with parameters

    NASA Astrophysics Data System (ADS)

    Tyatyushkin, A. I.

    2017-10-01

    First- and second-order numerical methods for optimizing controlled dynamical systems with parameters are discussed. In unconstrained-parameter problems, the control parameters are optimized by applying the conjugate gradient method. A more accurate numerical solution in these problems is produced by Newton's method based on a second-order functional increment formula. Next, a general optimal control problem is considered, with state constraints and with parameters involved on the right-hand sides of the controlled system and in the initial conditions. This complicated problem is reduced to a mathematical programming one, followed by the search for optimal parameter values and control functions by applying a multimethod algorithm. The performance of the proposed technique is demonstrated by solving application problems.

  13. A novel highly parallel algorithm for linearly unmixing hyperspectral images

    NASA Astrophysics Data System (ADS)

    Guerra, Raúl; López, Sebastián; Callico, Gustavo M.; López, Jose F.; Sarmiento, Roberto

    2014-10-01

    Endmember extraction and abundance calculation are critical steps within the process of linearly unmixing a given hyperspectral image for two main reasons. The first is the need to compute a set of accurate endmembers in order to further obtain confident abundance maps. The second is the huge number of operations involved in these time-consuming processes. This work proposes an algorithm that estimates the endmembers of a hyperspectral image under analysis and their abundances at the same time. The main advantages of this algorithm are its high degree of parallelization and the mathematical simplicity of the operations implemented. The algorithm estimates the endmembers as virtual pixels. In particular, it performs gradient descent to iteratively refine the endmembers and the abundances, reducing the mean square error in accordance with the linear unmixing model. Some mathematical restrictions must be added so that the method converges to a unique and realistic solution; given the nature of the algorithm, these restrictions can be easily implemented. The results obtained with synthetic images demonstrate the good behavior of the proposed algorithm. Moreover, the results obtained with the well-known Cuprite dataset also corroborate the benefits of our proposal.
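
    A generic sketch of this scheme is given below: endmembers and abundances are refined jointly by gradient descent on the squared reconstruction error of the linear mixing model, with simple projections enforcing the nonnegativity and sum-to-one abundance constraints. The update rules and step size are illustrative, not the authors' exact formulation.

```python
import numpy as np

def unmix(Y, p, iters=500, lr=1e-3, rng=None):
    """Jointly refine endmembers E (bands x p) and abundances A (p x pixels)
    by gradient descent on ||Y - E A||^2, projecting A back onto the
    nonnegative, sum-to-one simplex after each step (illustrative sketch)."""
    rng = rng or np.random.default_rng(0)
    bands, pixels = Y.shape
    E = Y[:, rng.choice(pixels, p, replace=False)]   # init from random pixels
    A = np.full((p, pixels), 1.0 / p)
    for _ in range(iters):
        R = E @ A - Y                                # reconstruction residual
        E -= lr * (R @ A.T)                          # gradient step in E
        A -= lr * (E.T @ R)                          # gradient step in A
        A = np.clip(A, 0.0, None)                    # nonnegativity
        A /= A.sum(axis=0, keepdims=True) + 1e-12    # sum-to-one projection
    return E, A
```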

  14. Sequential Nonlinear Learning for Distributed Multiagent Systems via Extreme Learning Machines.

    PubMed

    Vanli, Nuri Denizcan; Sayin, Muhammed O; Delibalta, Ibrahim; Kozat, Suleyman Serdar

    2017-03-01

    We study online nonlinear learning over distributed multiagent systems, where each agent employs a single hidden layer feedforward neural network (SLFN) structure to sequentially minimize arbitrary loss functions. In particular, each agent trains its own SLFN using only the data revealed to it. The aim of the multiagent system, on the other hand, is to train the SLFN at each agent to perform as well as the optimal centralized batch SLFN that has access to all the data, by exchanging information between neighboring agents. We address this problem by introducing a distributed subgradient-based extreme learning machine algorithm. The proposed algorithm provides guaranteed upper bounds on the performance of the SLFN at each agent and shows that each of these individual SLFNs asymptotically achieves the performance of the optimal centralized batch SLFN. Our performance guarantees explicitly distinguish the effects of data- and network-dependent parameters on the convergence rate of the proposed algorithm. The experimental results illustrate that the proposed algorithm achieves the oracle performance significantly faster than the state-of-the-art methods in the machine learning and signal processing literature. Hence, the proposed method is highly appealing for applications involving big data.
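
    One flavor of this idea can be sketched as follows: each agent keeps the output weights of an ELM-style SLFN with a shared, fixed random hidden layer, mixes its weights with its neighbors' through a row-stochastic matrix, and then takes a local (sub)gradient step on its own squared loss. The mixing matrix, learning rate, and data are invented for illustration; this is not the paper's exact algorithm.

```python
import numpy as np

def elm_features(X, W_in, b):
    """Fixed random SLFN hidden layer (extreme-learning-machine style)."""
    return np.tanh(X @ W_in + b)

def distributed_step(w, H, y, A, lr=0.05):
    """One consensus-plus-gradient iteration: average neighbors' output
    weights (mixing matrix A), then descend each agent's local squared loss."""
    n = len(w)
    w_mix = [sum(A[i, j] * w[j] for j in range(n)) for i in range(n)]
    return [w_mix[i] - lr * H[i].T @ (H[i] @ w_mix[i] - y[i]) / len(y[i])
            for i in range(n)]

rng = np.random.default_rng(0)
W_in, b = rng.normal(size=(3, 20)), rng.normal(size=20)
X = [rng.normal(size=(50, 3)) for _ in range(4)]        # 4 agents' local data
y = [x @ np.array([1.0, -2.0, 0.5]) for x in X]         # invented target
H = [elm_features(x, W_in, b) for x in X]
A = np.full((4, 4), 0.25)                               # fully connected mixing
w = [np.zeros(20) for _ in range(4)]
for _ in range(200):
    w = distributed_step(w, H, y, A)
```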

  15. Finding a “proposal” for major federal action consistent with the purposes of NEPA: Does Blue Ocean Preservation Society v. Watkins breathe new life into the law?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, V.

    Blue Ocean Preservation Society vs. Watkins (BOPS), the subject of this casenote, exemplifies the need for consideration of the environmental consequences of an agency action. The geothermal energy project which was the subject of that litigation involved sensitive, highly controversial, and cultural concerns. BOPS illustrates the most compelling case for requiring preparation of an EIS before the decision to undertake agency action. This article addresses the question of what constitutes a “proposal” for purposes of compelling federal agencies to prepare EISs pursuant to NEPA. Specifically, this casenote examines the method by which the Court found a proposal for major federal action in BOPS.

  16. Capitation pricing: Adjusting for prior utilization and physician discretion

    PubMed Central

    Anderson, Gerard F.; Cantor, Joel C.; Steinberg, Earl P.; Holloway, James

    1986-01-01

    As the number of Medicare beneficiaries receiving care under at-risk capitation arrangements increases, the method for setting payment rates will come under increasing scrutiny. A number of modifications to the current adjusted average per capita cost (AAPCC) methodology have been proposed, including an adjustment for prior utilization. In this article, we propose use of a utilization adjustment that includes only hospitalizations involving low or moderate physician discretion in the decision to hospitalize. This modification avoids discrimination against capitated systems that prevent certain discretionary admissions. The model also explains more of the variance in per capita expenditures than does the current AAPCC. PMID:10312010

  17. Laser Powered Launch Vehicle Performance Analyses

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Liu, Jiwen; Wang, Ten-See (Technical Monitor)

    2001-01-01

    The purpose of this study is to establish the technical ground for modeling the physics of the laser-powered pulse detonation phenomenon. Laser-powered propulsion systems involve complex fluid dynamics, thermodynamics, and radiative transfer processes. Successful predictions of the performance of laser-powered launch vehicle concepts depend on sophisticated models that reflect the underlying flow physics, including laser ray tracing and focusing, inverse Bremsstrahlung (IB) effects, finite-rate air chemistry, thermal non-equilibrium, plasma radiation, and detonation wave propagation. The proposed work will extend the baseline numerical model to an efficient design analysis tool. The proposed model is suitable for 3-D analysis using parallel computing methods.

  18. An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple two-observer, measurement-error-only problem.

  19. Analytical method development for the determination of emerging contaminants in water using supercritical-fluid chromatography coupled with diode-array detection.

    PubMed

    Del Carmen Salvatierra-Stamp, Vilma; Ceballos-Magaña, Silvia G; Gonzalez, Jorge; Ibarra-Galván, Valentin; Muñiz-Valencia, Roberto

    2015-05-01

    An analytical method using supercritical-fluid chromatography coupled with diode-array detection for the determination of seven emerging contaminants (two pharmaceuticals: carbamazepine and glyburide; three endocrine disruptors: 17α-ethinyl estradiol, bisphenol A, and 17β-estradiol; one bactericide: triclosan; and one pesticide: diuron) was developed and validated. These contaminants were chosen because of their frequency of use and their toxic effects on both humans and the environment. The optimized chromatographic separation on a Viridis BEH 2-EP column achieved baseline resolution for all compounds in less than 10 min. This separation was applied to environmental water samples after sample preparation. The optimized sample treatment involved a preconcentration step by means of solid-phase extraction using C18-OH cartridges. The proposed method was validated, finding recoveries higher than 94% and limits of detection and limits of quantification in the ranges of 0.10-1.59 μg/L and 0.31-4.83 μg/L, respectively. Method validation established the proposed method to be selective, linear, accurate, and precise. Finally, the method was successfully applied to environmental water samples.

  20. Collaborative simulation method with spatiotemporal synchronization process control

    NASA Astrophysics Data System (ADS)

    Zou, Yisheng; Ding, Guofu; Zhang, Weihua; Zhang, Jian; Qin, Shengfeng; Tan, John Kian

    2016-10-01

    When designing a complex mechatronic system, such as a high-speed train, it is relatively difficult to effectively simulate the entire system's dynamic behavior because it involves multi-disciplinary subsystems. Currently, the most practical approach for multi-disciplinary simulation is the interface-based coupling simulation method, but it faces a twofold challenge: spatial and temporal desynchronization among the multi-directional coupling simulations of the subsystems. A new collaborative simulation method with spatiotemporal synchronization process control is proposed for coupling simulation of a given complex mechatronic system across multiple subsystems on different platforms. The method consists of (1) a coupler-based coupling mechanism that defines the interfacing and interaction mechanisms among subsystems, and (2) a simulation process control algorithm that realizes the coupling simulation in a spatiotemporally synchronized manner. The test results from a case study show that the proposed method (1) can be used to simulate the subsystem interactions under different simulation conditions in an engineering system, and (2) effectively supports multi-directional coupling simulation among multi-disciplinary subsystems. This method has been successfully applied in China's high-speed train design and development processes, demonstrating that it can be applied to a wide range of engineering systems design and simulation with improved efficiency and effectiveness.

  1. Investigation of the HLA component involved in rheumatoid arthritis (RA) by using the marker association-segregation χ² (MASC) method: Rejection of the unifying-shared-epitope hypothesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dizier, M.H.; Eliaou, J.F.; Babron, M.C.

    In order to investigate the HLA component involved in rheumatoid arthritis (RA), the authors tested genetic models by the marker association-segregation χ² (MASC) method, using the HLA genotypic distribution observed in a sample of 97 RA patients. First they tested models assuming the involvement of a susceptibility gene linked to the DR locus. They showed that the present data are compatible with a simple model assuming the effect of a recessive allele of a biallelic locus linked to the DR locus, without any assumption of a synergistic effect. They then considered models assuming the direct involvement of the DR allele products, and tested the unifying-shared-epitope hypothesis that has been proposed. Under this hypothesis the DR alleles are assumed to be directly involved in susceptibility to the disease because of the presence of similar or identical amino acid sequences at positions 70-74 of the third hypervariable region of the DRB1 molecules, shared by the RA-associated DR alleles DR4Dw4, DR4Dw14, and DR1. This hypothesis was strongly rejected with the present data. In the case of direct involvement of the DR alleles, hypotheses more complex than the unifying-shared-epitope hypothesis would have to be considered. 28 refs., 2 tabs.

  2. Automatic Generation of Supervisory Control System Software Using Graph Composition

    NASA Astrophysics Data System (ADS)

    Nakata, Hideo; Sano, Tatsuro; Kojima, Taizo; Seo, Kazuo; Uchida, Tomoyuki; Nakamura, Yasuaki

    This paper describes the automatic generation of system descriptions for SCADA (Supervisory Control And Data Acquisition) systems. The proposed method produces various types of data and programs for SCADA systems from equipment definitions using conversion rules. First, the method builds directed graphs, which represent connections between the equipment, from the equipment definitions. System descriptions are then generated using the conversion rules, by analyzing these directed graphs and finding the groups of equipment that involve similar operations. This method can make the conversion rules multi-level by using graph composition, and can thereby reduce the number of rules. The developer can define and manage these rules efficiently.

  3. Aerosol analysis with the Coastal Zone Color Scanner - A simple method for including multiple scattering effects

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.; Castano, Diego J.

    1989-01-01

    A method for studying aerosols over the ocean using Nimbus-7 CZCS data is proposed which circumvents having to perform radiative transfer computations involving the aerosol properties. The method is applied to CZCS band 4 at 670 nm, and yields the total radiance L_t backscattered from the top of a stratified atmosphere containing both stratospheric and tropospheric aerosols, and the Rayleigh-scattered radiance L_r. The radiance which the aerosol would produce in the single-scattering approximation is retrieved from L_t - L_r with an error of not greater than 5-7 percent.

  4. Remote monitoring of environmental particulate pollution - A problem in inversion of first-kind integral equations

    NASA Technical Reports Server (NTRS)

    Fymat, A. L.

    1975-01-01

    The determination of the microstructure, chemical nature, and dynamical evolution of scattering particulates in the atmosphere is considered. A description is given of indirect sampling techniques which can circumvent most of the difficulties associated with direct sampling techniques, taking into account methods based on scattering, extinction, and diffraction of an incident light beam. Approaches for reconstructing the particulate size distribution from the direct and the scattered radiation are discussed. A new method is proposed for determining the chemical composition of the particulates and attention is given to the relevance of methods of solution involving first kind Fredholm integral equations.

  5. Measurements of the Absorption by Auditorium Seating—A Model Study

    NASA Astrophysics Data System (ADS)

    Barron, M.; Coleman, S.

    2001-01-01

    One of several problems with seat absorption is that only small numbers of seats can be tested in standard reverberation chambers. One method proposed for reverberation chamber measurements involves extrapolation when the absorption coefficient results are applied to actual auditoria. Model seat measurements in an effectively large model reverberation chamber have allowed the validity of this extrapolation to be checked. The alternative barrier method for reverberation chamber measurements was also tested and the two methods were compared. The effect on the absorption of row-row spacing as well as absorption by small numbers of seating rows was also investigated with model seats.

  6. The pre-image problem in kernel methods.

    PubMed

    Kwok, James Tin-yau; Tsang, Ivor Wai-hung

    2004-11-01

    In this paper, we address the problem of finding the pre-image of a feature vector in the feature space induced by a kernel. This is of central importance in some kernel applications, such as using kernel principal component analysis (PCA) for image denoising. Unlike the traditional method, which relies on nonlinear optimization, our proposed method directly finds the location of the pre-image based on distance constraints in the feature space. It is noniterative, involves only linear algebra, and does not suffer from numerical instability or local minimum problems. Evaluations on performing kernel PCA and kernel clustering on the USPS data set show much improved performance.
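
    A simplified sketch of the distance-constraint idea is given below for a Gaussian (RBF) kernel: kernel values to a few anchor points are first inverted into input-space squared distances, and the pre-image is then located by subtracting sphere equations pairwise, which leaves a linear least-squares problem. The helper names and the restriction to an RBF kernel are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def rbf_input_dist2(k, sigma):
    """Invert Gaussian kernel values k(x, z) = exp(-||x-z||^2 / (2 sigma^2))
    into squared input-space distances ||x - z||^2."""
    return -2.0 * sigma ** 2 * np.log(np.clip(k, 1e-12, 1.0))

def preimage_from_distances(Z, d2):
    """Locate x from squared distances d2 to anchor points Z (rows).
    Subtracting the sphere equations ||x - z_i||^2 = d2_i pairwise gives the
    linear system 2(z_i - z_0)^T x = ||z_i||^2 - ||z_0||^2 - d2_i + d2_0."""
    A = 2.0 * (Z[1:] - Z[0])
    b = np.sum(Z[1:] ** 2, axis=1) - np.sum(Z[0] ** 2) - d2[1:] + d2[0]
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

    The appeal of this formulation matches the abstract's claim: the pre-image location is obtained with plain linear algebra and no iterative optimization.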

  7. Stress state estimation in multilayer support of vertical shafts, considering off-design cross-sectional deformation

    NASA Astrophysics Data System (ADS)

    Antsiferov, SV; Sammal, AS; Deev, PV

    2018-03-01

    To determine the stress-strain state of the multilayer support of vertical shafts, including off-design cross-sectional deformation of the tubing rings, the authors propose an analytical method based on the provisions of the mechanics of underground structures, treating the support and the surrounding rock mass as elements of an integrated deformable system. The method involves a rigorous solution of the corresponding problem of elasticity, obtained using the mathematical apparatus of the theory of analytic functions of a complex variable. The design method is implemented as a software program allowing multivariate applied computation. Examples of the calculation are given.

  8. Wiener-Hammerstein system identification - an evolutionary approach

    NASA Astrophysics Data System (ADS)

    Naitali, Abdessamad; Giri, Fouad

    2016-01-01

    The problem of identifying parametric Wiener-Hammerstein (WH) systems is addressed within the evolutionary optimisation context. Specifically, a hybrid culture identification method is developed that involves model structure adaptation using genetic recombination and model parameter learning using particle swarm optimisation. The method enjoys three interesting features: (1) the risk of premature convergence of model parameter estimates to local optima is significantly reduced, due to the constantly maintained diversity of model candidates; (2) no prior knowledge is needed except for upper bounds on the system structure indices; (3) the method is fully autonomous as no interaction is needed with the user during the optimum search process. The performances of the proposed method will be illustrated and compared to alternative methods using a well-established WH benchmark.

  9. A Particle Model for Prediction of Cement Infiltration of Cancellous Bone in Osteoporotic Bone Augmentation.

    PubMed

    Basafa, Ehsan; Murphy, Ryan J; Kutzer, Michael D; Otake, Yoshito; Armand, Mehran

    2013-01-01

    Femoroplasty is a potential preventive treatment for osteoporotic hip fractures. It involves augmenting mechanical properties of the femur by injecting Polymethylmethacrylate (PMMA) bone cement. To reduce the risks involved and maximize the outcome, however, the procedure needs to be carefully planned and executed. An important part of the planning system is predicting infiltration of cement into the porous medium of cancellous bone. We used the method of Smoothed Particle Hydrodynamics (SPH) to model the flow of PMMA inside porous media. We modified the standard formulation of SPH to incorporate the extreme viscosities associated with bone cement. Darcy creeping flow of fluids through isotropic porous media was simulated and the results were compared with those reported in the literature. Further validation involved injecting PMMA cement inside porous foam blocks - osteoporotic cancellous bone surrogates - and simulating the injections using our proposed SPH model. Millimeter accuracy was obtained in comparing the simulated and actual cement shapes. Also, strong correlations were found between the simulated and the experimental data of spreading distance (R(2) = 0.86) and normalized pressure (R(2) = 0.90). Results suggest that the proposed model is suitable for use in an osteoporotic femoral augmentation planning framework.

  10. An improved strategy for skin lesion detection and classification using uniform segmentation and feature selection based approach.

    PubMed

    Nasir, Muhammad; Attique Khan, Muhammad; Sharif, Muhammad; Lali, Ikram Ullah; Saba, Tanzila; Iqbal, Tassawar

    2018-02-21

    Melanoma is the deadliest type of skin cancer, with the highest mortality rate. However, eradication in the early stage implies a high survival rate, so early diagnosis is essential. Conventional diagnostic methods are costly and cumbersome because they require experienced experts and a highly equipped environment. Recent advancements in computerized solutions for these diagnoses are highly promising, with improved accuracy and efficiency. In this article, we propose a method for the classification of melanoma and benign skin lesions. Our approach integrates preprocessing, lesion segmentation, feature extraction, feature selection, and classification. Preprocessing is executed in the context of hair removal by DullRazor, whereas lesion texture and color information are utilized to enhance the lesion contrast. In lesion segmentation, a hybrid technique has been implemented and the results are fused using the additive law of probability. A serial-based method is applied subsequently that extracts and fuses traits such as color, texture, and HOG (shape). The fused features are then selected by implementing a novel Boltzmann entropy method. Finally, the selected features are classified by a Support Vector Machine. The proposed method is evaluated on the publicly available data set PH2. Our approach has provided promising results of sensitivity 97.7%, specificity 96.7%, accuracy 97.5%, and F-score 97.5%, which are significantly better than the results of existing methods available on the same data set. The proposed method detects and classifies melanoma significantly better than existing methods. © 2018 Wiley Periodicals, Inc.

  11. A Novel Feature Extraction Method with Feature Selection to Identify Golgi-Resident Protein Types from Imbalanced Data

    PubMed Central

    Yang, Runtao; Zhang, Chengjin; Gao, Rui; Zhang, Lina

    2016-01-01

    The Golgi Apparatus (GA) is a major collection and dispatch station for numerous proteins destined for secretion, plasma membranes and lysosomes. The dysfunction of GA proteins can result in neurodegenerative diseases. Therefore, accurate identification of protein sub-Golgi localizations may assist in drug development and understanding the mechanisms of the GA involved in various cellular processes. In this paper, a new computational method is proposed for identifying cis-Golgi proteins from trans-Golgi proteins. Based on the concept of Common Spatial Patterns (CSP), a novel feature extraction technique is developed to extract evolutionary information from protein sequences. To deal with the imbalanced benchmark dataset, the Synthetic Minority Over-sampling Technique (SMOTE) is adopted. A feature selection method called Random Forest-Recursive Feature Elimination (RF-RFE) is employed to search the optimal features from the CSP based features and g-gap dipeptide composition. Based on the optimal features, a Random Forest (RF) module is used to distinguish cis-Golgi proteins from trans-Golgi proteins. Through the jackknife cross-validation, the proposed method achieves a promising performance with a sensitivity of 0.889, a specificity of 0.880, an accuracy of 0.885, and a Matthews Correlation Coefficient (MCC) of 0.765, which remarkably outperforms previous methods. Moreover, when tested on a common independent dataset, our method also achieves a significantly improved performance. These results highlight the promising performance of the proposed method to identify Golgi-resident protein types. Furthermore, the CSP based feature extraction method may provide guidelines for protein function predictions. PMID:26861308
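
    Assuming the CSP-based and g-gap dipeptide feature matrices are already computed, the pipeline below sketches the described chain (SMOTE balancing, RF-driven recursive feature elimination, RF classification) with scikit-learn and imbalanced-learn; the synthetic data and all hyperparameters are placeholders, not the paper's settings.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline          # imblearn pipeline admits samplers
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

pipe = Pipeline([
    ("smote", SMOTE(random_state=0)),           # balance cis/trans classes
    ("rfe", RFE(RandomForestClassifier(n_estimators=100, random_state=0),
                n_features_to_select=50, step=5)),   # RF-driven elimination
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
])

# synthetic stand-in for the real feature matrix and labels
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 80))
y = (rng.random(120) < 0.25).astype(int)        # imbalanced labels
scores = cross_val_score(pipe, X, y, cv=3, scoring="matthews_corrcoef")
```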

  12. Mission Concepts and Operations for Asteroid Mitigation Involving Multiple Gravity Tractors

    NASA Technical Reports Server (NTRS)

    Foster, Cyrus; Bellerose, Julie; Jaroux, Belgacem; Mauro, David

    2012-01-01

    The gravity tractor concept is a proposed method to deflect an imminent asteroid impact through gravitational tugging over a time scale of years. In this study, we present mission scenarios and operational considerations for asteroid mitigation efforts involving multiple gravity tractors. We quantify the deflection performance improvement provided by a multiple gravity tractor campaign and assess its sensitivity to staggered launches. We next explore several proximity operation strategies to accommodate multiple gravity tractors at a single asteroid including formation-flying and mechanically-docked configurations. Finally, we utilize 99942 Apophis as an illustrative example to assess the performance of a multiple gravity tractor campaign.

  13. Use of the false discovery rate for evaluating clinical safety data.

    PubMed

    Mehrotra, Devan V; Heyse, Joseph F

    2004-06-01

    Clinical adverse experience (AE) data are routinely evaluated using between-group P values for every AE encountered within each of several body systems. If the P values are reported and interpreted without multiplicity considerations, there is a potential for an excess of false positive findings. Procedures based on confidence interval estimates of treatment effects have the same potential for false positive findings as P value methods. Excess false positive findings can needlessly complicate the safety profile of a safe drug or vaccine. Accordingly, we propose a novel method for addressing multiplicity in the evaluation of adverse experience data arising in clinical trial settings. The method involves a two-step application of adjusted P values based on the Benjamini and Hochberg false discovery rate (FDR). Data from three moderate-to-large vaccine trials are used to illustrate our proposed 'Double FDR' approach, and to reinforce the potential impact of failing to account for multiplicity. This work was in collaboration with the late Professor John W. Tukey, who coined the term 'Double FDR'.
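
    For reference, the sketch below implements standard Benjamini-Hochberg adjusted p-values and then hints at a two-step application across body systems; exactly what is carried from the within-system step to the across-system step in the actual 'Double FDR' is more involved than this toy version.

```python
import numpy as np

def bh_adjust(p):
    """Benjamini-Hochberg (step-up) FDR-adjusted p-values."""
    p = np.asarray(p, float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)
    # enforce monotonicity from the largest p-value downwards
    adj = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.minimum(adj, 1.0)
    return out

# toy two-step application: adjust AE p-values within each body system,
# then adjust a per-system summary across systems (illustrative only)
p_sys = [bh_adjust([0.001, 0.04, 0.3]), bh_adjust([0.02, 0.5])]
across = bh_adjust([ps.min() for ps in p_sys])
```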

  14. Estimating flood hydrographs and volumes for Alabama streams

    USGS Publications Warehouse

    Olin, D.A.; Atkins, J.B.

    1988-01-01

    The hydraulic design of highway drainage structures involves an evaluation of the effect of the proposed highway structures on lives, property, and stream stability. Flood hydrographs and associated flood volumes are useful tools in evaluating these effects. For design purposes, the Alabama Highway Department needs information on flood hydrographs and volumes associated with flood peaks of specific recurrence intervals (design floods) at proposed or existing bridge crossings. This report will provide the engineer with a method to estimate flood hydrographs, volumes, and lagtimes for rural and urban streams in Alabama with drainage areas less than 500 sq mi. Existing computer programs and methods to estimate flood hydrographs and volumes for ungaged streams have been developed in Georgia. These computer programs and methods were applied to streams in Alabama. The report gives detailed instructions on how to estimate flood hydrographs for ungaged rural or urban streams in Alabama with drainage areas less than 500 sq mi, without significant in-channel storage or regulations. (USGS)

  15. Providing a Science Base for the Evaluation of Tobacco Products

    PubMed Central

    Berman, Micah L.; Connolly, Greg; Cummings, K. Michael; Djordjevic, Mirjana V.; Hatsukami, Dorothy K.; Henningfield, Jack E.; Myers, Matthew; O'Connor, Richard J.; Parascandola, Mark; Rees, Vaughan; Rice, Jerry M.

    2015-01-01

    Objective Evidence-based tobacco regulation requires a comprehensive scientific framework to guide the evaluation of new tobacco products and health-related claims made by product manufacturers. Methods The Tobacco Product Assessment Consortium (TobPRAC) employed an iterative process involving consortia investigators, consultants, a workshop of independent scientists and public health experts, and written reviews in order to develop a conceptual framework for evaluating tobacco products. Results The consortium developed a four-phased framework for the scientific evaluation of tobacco products. The four phases addressed by the framework are: (1) pre-market evaluation, (2) pre-claims evaluation, (3) post-market activities, and (4) monitoring and re-evaluation. For each phase, the framework proposes the use of validated testing procedures that will evaluate potential harms at both the individual and population level. Conclusions While the validation of methods for evaluating tobacco products is an ongoing and necessary process, the proposed framework need not wait for fully validated methods to be used in guiding tobacco product regulation today. PMID:26665160

  16. A multi-state trajectory method for non-adiabatic dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tao, Guohua, E-mail: taogh@pkusz.edu.cn

    2016-03-07

    A multi-state trajectory approach is proposed to describe nuclear-electron coupled dynamics in nonadiabatic simulations. In this approach, each electronic state is associated with an individual trajectory, among which electronic transition occurs. The set of these individual trajectories constitutes a multi-state trajectory, and nuclear dynamics is described by one of these individual trajectories as the system is on the corresponding state. The total nuclear-electron coupled dynamics is obtained from the ensemble average of the multi-state trajectories. A variety of benchmark systems such as the spin-boson system have been tested and the results generated using the quasi-classical version of the method show reasonably good agreement with the exact quantum calculations. Featured in a clear multi-state picture, high efficiency, and excellent numerical stability, the proposed method may have advantages in being implemented to realistic complex molecular systems, and it could be straightforwardly applied to general nonadiabatic dynamics involving multiple states.

  17. Carcass Functions in Variational Calculations for Few-Body Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donchev, A.G.; Kalachev, S.A.; Kolesnikov, N.N.

    For variational calculations of molecular and nuclear systems involving a few particles, it is proposed to use carcass basis functions that generalize exponential and Gaussian trial functions. It is shown that the matrix elements of the Hamiltonian are expressed in a closed form for a Coulomb potential, as well as for other popular particle-interaction potentials. The use of such carcass functions in two-center Coulomb problems reduces, in relation to other methods, the number of terms in a variational expansion by a few orders of magnitude at a commensurate or even higher accuracy. The efficiency of the method is illustrated by calculations of the three-particle Coulomb systems {mu}{mu}e, ppe, dde, and tte and the four-particle molecular systems H{sub 2} and HeH{sup +} of various isotopic composition. By considering the example of the {sub {lambda}}{sup 9}Be hypernucleus, it is shown that the proposed method can be used in calculating nuclear systems as well.

  18. Fault detection and classification in electrical power transmission system using artificial neural network.

    PubMed

    Jamil, Majid; Sharma, Sanjeev Kumar; Singh, Rajveer

    2015-01-01

    This paper focuses on the detection and classification of faults on an electrical power transmission line using artificial neural networks. The three phase currents and voltages at one end of the line are taken as inputs in the proposed scheme. A feed-forward neural network trained with the back-propagation algorithm is employed for detection and classification of the fault, with each of the three phases analyzed in the process. A detailed analysis with a varying number of hidden layers has been performed to validate the choice of the neural network. The simulation results show that the present method, based on the neural network, is efficient in detecting and classifying faults on transmission lines with satisfactory performance. Different faults are simulated with different parameters to check the versatility of the method. The proposed method can be extended to the distribution network of the power system. The various simulations and analyses of signals are done in the MATLAB(®) environment.
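
    As a stand-in for the paper's feed-forward/back-propagation network, the sketch below trains a scikit-learn MLP on six inputs (three phase currents and three phase voltages); the synthetic data, class labels, and layer sizes are illustrative assumptions only.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Features: three phase currents and three phase voltages from one line end.
# Synthetic stand-in data; in the paper these come from MATLAB simulations.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))
y = rng.integers(0, 5, size=2000)   # e.g. no-fault, L-G, L-L, L-L-G, 3-phase

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(20, 20),   # layer count/size varied in the study
                    activation="tanh", max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```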

  19. A proposed standard method for polarimetric calibration and calibration verification

    NASA Astrophysics Data System (ADS)

    Persons, Christopher M.; Jones, Michael W.; Farlow, Craig A.; Morell, L. Denise; Gulley, Michael G.; Spradley, Kevin D.

    2007-09-01

    Accurate calibration of polarimetric sensors is critical to reducing and analyzing phenomenology data, producing uniform polarimetric imagery for deployable sensors, and ensuring predictable performance of polarimetric algorithms. It is desirable to develop a standard calibration method, including verification reporting, in order to increase credibility with customers and foster communication and understanding within the polarimetric community. This paper seeks to facilitate discussions within the community on arriving at such standards. Both the calibration and verification methods presented here are performed easily with common polarimetric equipment, and are applicable to visible and infrared systems with either partial Stokes or full Stokes sensitivity. The calibration procedure has been used on infrared and visible polarimetric imagers over a six year period, and resulting imagery has been presented previously at conferences and workshops. The proposed calibration method involves the familiar calculation of the polarimetric data reduction matrix by measuring the polarimeter's response to a set of input Stokes vectors. With this method, however, linear combinations of Stokes vectors are used to generate highly accurate input states. This allows the direct measurement of all system effects, in contrast with fitting modeled calibration parameters to measured data. This direct measurement of the data reduction matrix allows higher order effects that are difficult to model to be discovered and corrected for in calibration. This paper begins with a detailed tutorial on the proposed calibration and verification reporting methods. Example results are then presented for a LWIR rotating half-wave retarder polarimeter.
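
    A generic version of this calculation can be sketched as follows: if the K known input Stokes vectors are stacked as columns of one matrix and the corresponding measured detector responses as columns of another, the instrument matrix follows from a pseudoinverse, and the data-reduction matrix is its pseudoinverse in turn. Variable names and shapes are assumptions; the paper's verification reporting and higher-order corrections are not covered.

```python
import numpy as np

def data_reduction_matrix(S_in, M_meas):
    """Direct calibration: with known input Stokes vectors as columns of
    S_in (4 x K, K >= 4) and measured responses for each analyzer state as
    columns of M_meas (n x K), the instrument matrix is W = M S^+ and the
    data-reduction matrix is its pseudoinverse."""
    W = M_meas @ np.linalg.pinv(S_in)     # n x 4 instrument (measurement) matrix
    return np.linalg.pinv(W)              # 4 x n data-reduction matrix

# usage: recover a Stokes vector from a new intensity measurement vector m
# s_hat = data_reduction_matrix(S_in, M_meas) @ m
```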

  20. Probability genotype imputation method and integrated weighted lasso for QTL identification.

    PubMed

    Demetrashvili, Nino; Van den Heuvel, Edwin R; Wit, Ernst C

    2013-12-30

    Many QTL studies have two common features: (1) often there is missing marker information, and (2) among the many markers involved in the biological process only a few are causal. In statistics, the second issue falls under the headings "sparsity" and "causal inference". The goal of this work is to develop a two-step statistical methodology for QTL mapping for markers with binary genotypes. The first step introduces a novel imputation method for missing genotypes. The outcomes of the proposed imputation method are probabilities, which serve as weights in the second step, namely the weighted lasso. Sparse phenotype inference is employed to select a set of predictive markers for the trait of interest. Simulation studies validate the proposed methodology under a wide range of realistic settings, and the methodology outperforms alternative imputation and variable selection methods in such studies. The methodology was applied to an Arabidopsis experiment containing 69 markers for 165 recombinant inbred lines of an F8 generation. The results confirm previously identified regions; however, several new markers are also found. On the basis of the inferred ROC behavior these markers show good potential for being real, especially for the germination trait Gmax. Our imputation method shows higher accuracy in terms of sensitivity and specificity compared to the alternative imputation method. Also, the proposed weighted lasso outperforms commonly practiced multiple regression as well as the traditional lasso and the adaptive lasso with three weighting schemes. This means that under realistic missing-data settings this methodology can be used for QTL identification.
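
    One standard way to realize a weighted lasso with off-the-shelf tools is the reparameterization below: penalizing w_j|beta_j| is equivalent to an ordinary lasso on the rescaled columns X_j / w_j followed by rescaling the fitted coefficients. How the imputation probabilities map to the weights w_j is the paper's contribution and is only gestured at here.

```python
import numpy as np
from sklearn.linear_model import Lasso

def weighted_lasso(X, y, w, alpha=0.1):
    """Weighted lasso via reparameterization. Here w_j could be derived from
    imputation probabilities (less reliable markers get larger penalties);
    that mapping is illustrative, not the paper's exact construction."""
    w = np.asarray(w, float)
    model = Lasso(alpha=alpha, fit_intercept=True)
    model.fit(X / w, y)          # lasso on rescaled design matrix
    return model.coef_ / w       # map back to the original parameterization
```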

  1. Development of validated high-performance thin layer chromatography for quantification of aristolochic acid in different species of the Aristolochiaceae family.

    PubMed

    Agrawal, Poonam; Laddha, Kirti

    2017-04-01

    This study was undertaken to isolate and quantify aristolochic acid in Aristolochia indica stem and Apama siliquosa root. Aristolochic acid is an important biomarker component present in the Aristolochiaceae family. The isolation method involved simple solvent extraction, precipitation and further purification, using recrystallization. The structure of the compound was confirmed using infrared spectroscopy, mass spectrometry and nuclear magnetic resonance. A specific and rapid high-performance thin layer chromatography (HPTLC) method was developed for analysis of aristolochic acid. The method involved separation on silica gel 60 F254 plates using the single solvent system n-hexane:chloroform:methanol. The method showed a good linear relationship in the range 0.4-2.0 μg/spot with r² = 0.998. The limit of detection and limit of quantification were 62.841 ng/spot and 209.47 ng/spot, respectively. The proposed validated HPTLC method was found to be an easy to use, accurate and convenient method that could be successfully used for standardization and quality assessment of herbal material as well as formulations containing different species of the Aristolochiaceae family. Copyright © 2016. Published by Elsevier B.V.

  2. Eulerian adaptive finite-difference method for high-velocity impact and penetration problems

    NASA Astrophysics Data System (ADS)

    Barton, P. T.; Deiterding, R.; Meiron, D.; Pullin, D.

    2013-05-01

    Owing to the complex processes involved, faithful prediction of high-velocity impact events demands a simulation method delivering efficient calculations based on comprehensively formulated constitutive models. Such an approach is presented herein, employing a weighted essentially non-oscillatory (WENO) method within an adaptive mesh refinement (AMR) framework for the numerical solution of hyperbolic partial differential equations. Applied widely in computational fluid dynamics, these methods are well suited to the involved locally non-smooth finite deformations, circumventing any requirement for artificial viscosity functions for shock capturing. Application of the methods is facilitated through using a model of solid dynamics based upon hyper-elastic theory comprising kinematic evolution equations for the elastic distortion tensor. The model for finite inelastic deformations is phenomenologically equivalent to Maxwell's model of tangential stress relaxation. Closure relations tailored to the expected high-pressure states are proposed and calibrated for the materials of interest. Sharp interface resolution is achieved by employing level-set functions to track boundary motion, along with a ghost material method to capture the necessary internal boundary conditions for material interactions and stress-free surfaces. The approach is demonstrated for the simulation of high velocity impacts of steel projectiles on aluminium target plates in two and three dimensions.

  3. Conceptual design of industrial process displays.

    PubMed

    Pedersen, C R; Lind, M

    1999-11-01

    Today, process displays used in industry are often designed on the basis of piping and instrumentation diagrams without any method of ensuring that the needs of the operators are fulfilled. Therefore, a method for a systematic approach to the design of process displays is needed. This paper discusses aspects of process display design taking into account both the designer's and the operator's points of view. Three aspects are emphasized: the operator tasks, the display content and the display form. The distinction between these three aspects is the basis for proposing an outline for a display design method that matches the industrial practice of modular plant design and satisfies the needs of reusability of display design solutions. The main considerations in display design in the industry are to specify the operator's activities in detail, to extract the information the operators need from the plant design specification and documentation, and finally to present this information. The form of the display is selected from existing standardized display elements such as trend curves, mimic diagrams, ecological interfaces, etc. Further knowledge is required to invent new display elements, that is, knowledge about basic visual means of presenting information and how humans perceive and interpret these means and their combinations. This knowledge is required in the systematic selection of graphical items for a given display content. The industrial part of the method is first illustrated in the paper by a simple example from a plant with batch processes. Later the method is applied to develop a supervisory display for a condenser system in a nuclear power plant. The differences between the continuous plant domain of power production and the batch processes from the example are analysed and broad categories of display types are proposed. The problems involved in the specification and invention of a supervisory display are analysed and conclusions are drawn from this analysis. It is concluded that the design method proposed provides a framework for the progress of the display design and is useful in pinpointing the actual problems. The method was useful in reducing the number of existing displays that could fulfil the requirements of the supervision task. The method provided at the same time a framework for dealing with the problems involved in inventing new displays based on structured analysis. However the problems in a systematic approach to display invention still need consideration.

  4. High-resolution imaging using a wideband MIMO radar system with two distributed arrays.

    PubMed

    Wang, Dang-wei; Ma, Xiao-yan; Chen, A-Lei; Su, Yi

    2010-05-01

    Imaging a fast maneuvering target has been an active research area in past decades. Usually, an array antenna with multiple elements is implemented to avoid the motion compensation involved in inverse synthetic aperture radar (ISAR) imaging. Nevertheless, this comes at a price: the hardware complexity is high compared with the complex algorithms implemented in single-antenna ISAR imaging systems. In this paper, a wideband multiple-input multiple-output (MIMO) radar system with two distributed arrays is proposed to reduce the hardware complexity of the system. Furthermore, the system model, the equivalent array production method and the imaging procedure are presented. Compared with a classical real aperture radar (RAR) imaging system, an important contribution of our method is its lower hardware complexity, since many additional virtual array elements can be obtained. Numerical simulations are provided for testing our system and imaging method.
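
    The virtual-array idea underlying such MIMO systems is easy to sketch: with orthogonal transmit waveforms, each transmit-receive pair contributes a virtual phase centre at the sum of the two element positions. The element coordinates below are illustrative, not the paper's two-array geometry.

        import numpy as np

        # Hypothetical element positions (metres): a sparse transmit array
        # and a denser receive array.
        tx = np.array([0.0, 1.5, 3.0])            # 3 transmit elements
        rx = np.array([0.0, 0.25, 0.5, 0.75])     # 4 receive elements

        # With orthogonal waveforms, each tx/rx pair acts as one virtual
        # element at the sum of the two phase-centre positions, so 3
        # transmitters and 4 receivers emulate up to 12 virtual elements.
        virtual = (tx[:, None] + rx[None, :]).ravel()
        print(np.unique(virtual))                 # distinct virtual phase centres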

  5. Na Ala Hele (Trails for Walking).

    ERIC Educational Resources Information Center

    Hawaii State Dept. of Planning and Economic Development, Honolulu.

    This proposal for the development of a system of administering hiking trails in the state of Hawaii when such trails would involve various public and private jurisdictions emphasizes three elements: (a) proposing means of administration involving multiple jurisdictions; (b) demonstrating by means of a proposed project on the west coast of the Big…

  6. Synthesis of linear regression coefficients by recovering the within-study covariance matrix from summary statistics.

    PubMed

    Yoneoka, Daisuke; Henmi, Masayuki

    2017-06-01

    Recently, the number of regression models has dramatically increased in several academic fields. However, within the context of meta-analysis, synthesis methods for such models have not been developed at a commensurate pace. One of the difficulties hindering this development is the disparity in the sets of covariates among the models in the literature. If the sets of covariates differ across models, the interpretation of coefficients will differ, making it difficult to synthesize them. Moreover, previous synthesis methods for regression models, such as multivariate meta-analysis, often run into problems because the covariance matrix of the coefficients (i.e. the within-study correlations) or individual patient data are not necessarily available. This study therefore proposes a method to synthesize linear regression models under different covariate sets by using a generalized least squares method involving bias correction terms. In particular, we also propose an approach to recover (at most) three correlations of covariates, which are required for the calculation of the bias term without individual patient data. Copyright © 2016 John Wiley & Sons, Ltd.
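
    A minimal sketch of the generic pooling step, assuming the study-level coefficient vectors and their within-study covariance matrices are available; the paper's bias-correction terms for mismatched covariate sets, and its recovery of correlations from summary statistics, are not reproduced here.

        import numpy as np

        def gls_pool(betas, covs):
            """Pool study-level coefficient vectors by generalized least squares.

            betas: list of k-vectors of regression coefficients (one per study).
            covs:  list of k-by-k within-study covariance matrices.
            Returns the pooled coefficients and their covariance.
            """
            W = [np.linalg.inv(c) for c in covs]      # precision weights
            V = np.linalg.inv(sum(W))                 # pooled covariance
            pooled = V @ sum(w @ b for w, b in zip(W, betas))
            return pooled, V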

  7. Superresolution radar imaging based on fast inverse-free sparse Bayesian learning for multiple measurement vectors

    NASA Astrophysics Data System (ADS)

    He, Xingyu; Tong, Ningning; Hu, Xiaowei

    2018-01-01

    Compressive sensing has been successfully applied to inverse synthetic aperture radar (ISAR) imaging of moving targets. By exploiting the block sparse structure of the target image, sparse solutions for multiple measurement vectors (MMV) can be applied in ISAR imaging, and a substantial performance improvement can be achieved. As an effective sparse recovery method, sparse Bayesian learning (SBL) for MMV involves a matrix inverse at each iteration. Its associated computational complexity grows significantly with the problem size. To address this problem, we develop a fast inverse-free (IF) SBL method for MMV. A relaxed evidence lower bound (ELBO), which is computationally more amenable than the traditional ELBO used by SBL, is obtained by invoking a fundamental property of smooth functions. A variational expectation-maximization scheme is then employed to maximize the relaxed ELBO, and a computationally efficient IF-MSBL algorithm is proposed. Numerical results based on simulated and real data show that the proposed method can reconstruct row-sparse signals accurately and obtain clear superresolution ISAR images. Moreover, the running time and computational complexity are reduced to a great extent compared with traditional SBL methods.
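
    For context, a minimal sketch of a classical (inversion-based) MSBL expectation-maximization iteration, whose per-iteration m-by-m inverse is exactly the cost the inverse-free variant avoids. The updates follow the standard MSBL formulation; the paper's relaxed-ELBO updates are not reproduced, and the toy problem is illustrative.

        import numpy as np

        def msbl(Phi, Y, lam=1e-2, iters=50):
            """Classical MSBL EM iteration; note the m-by-m inverse per pass."""
            m, n = Phi.shape
            L = Y.shape[1]
            gamma = np.ones(n)                        # row-wise prior variances
            for _ in range(iters):
                K = lam*np.eye(m) + (Phi * gamma) @ Phi.T
                Kinv = np.linalg.inv(K)               # the per-iteration inverse
                M = gamma[:, None] * (Phi.T @ (Kinv @ Y))      # posterior mean
                diagS = gamma - gamma**2 * np.einsum('ij,ji->i',
                                                     Phi.T, Kinv @ Phi)
                gamma = (M**2).sum(axis=1)/L + diagS  # EM variance update
            return M                                  # row-sparse estimate

        # Toy problem: 20 measurements, 50 atoms, 3 active rows, 4 snapshots.
        rng = np.random.default_rng(1)
        Phi = rng.normal(size=(20, 50))
        X = np.zeros((50, 4)); X[[5, 17, 33]] = rng.normal(size=(3, 4))
        Y = Phi @ X + 0.01*rng.normal(size=(20, 4))
        M = msbl(Phi, Y)
        print(np.argsort((M**2).sum(axis=1))[-3:])    # recovered row indices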

  8. Projecting adverse event incidence rates using empirical Bayes methodology.

    PubMed

    Ma, Guoguang Julie; Ganju, Jitendra; Huang, Jing

    2016-08-01

    Although there is considerable interest in the adverse events observed in clinical trials, projecting adverse event incidence rates over an extended period can be valuable when the trial duration is limited compared to clinical practice. A naïve method for making projections might involve modeling the observed rates into the future for each adverse event. However, such an approach overlooks the information that can be borrowed across all the adverse event data. We propose a method that weights each projection using a shrinkage factor; the adverse event-specific shrinkage is a probability, based on empirical Bayes methodology and estimated from all the adverse event data, reflecting the evidence in support of the null or non-null hypotheses. Also proposed is a technique to estimate the proportion of true nulls, called the common area under the density curves, which is a critical step in arriving at the shrinkage factor. The performance of the method is evaluated by projecting from interim data and then comparing the projected results with observed results. The method is illustrated on two data sets. © The Author(s) 2013.
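
    The shrinkage step itself is a one-line blend. The sketch below assumes the empirical-Bayes probability has already been estimated from all the adverse event data; the paper's estimator for the proportion of true nulls (the common area under the density curves) is not reproduced, and the numbers are illustrative.

        import numpy as np

        def shrunk_projection(rate_event, rate_pooled, p_nonnull):
            """Blend a per-event projected rate with the pooled rate.

            rate_event:  naive projection from the event's own observed data.
            rate_pooled: projection using the rate pooled over all events.
            p_nonnull:   empirical-Bayes probability that the event's rate
                         truly differs from the pooled rate (shrinkage factor).
            """
            return p_nonnull * rate_event + (1.0 - p_nonnull) * rate_pooled

        # Illustrative values: an event observed at 4.0 per 100 patient-years
        # against a pooled 2.5, with weak evidence (p = 0.3) of a real
        # difference, is pulled most of the way toward the pooled rate.
        print(shrunk_projection(4.0, 2.5, 0.3))   # 2.95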

  9. Advantages and limitations of common testing methods for antioxidants.

    PubMed

    Amorati, R; Valgimigli, L

    2015-05-01

    Owing to the importance of antioxidants in the protection of both natural and man-made materials, a large variety of testing methods have been proposed and applied. These include methods based on inhibited autoxidation studies, which are best followed by monitoring the kinetics of oxygen consumption or of the formation of hydroperoxides, the primary oxidation products. Analytical determination of secondary oxidation products (e.g. carbonyl compounds) has also been used. The majority of testing methods, however, do not involve substrate autoxidation. They are based on the competitive bleaching of a probe (e.g. ORAC assay, β-carotene and crocin bleaching assays, and luminol assay), on reaction with a different probe (e.g. spin-trapping and TOSC assay), or they are indirect methods based on the reduction of persistent radicals (e.g. galvinoxyl, DPPH and TEAC assays) or of inorganic oxidizing species (e.g. FRAP, CUPRAC and Folin-Ciocalteu assays). Still other methods are specific to preventive antioxidants. The relevance, advantages, and limitations of these methods are critically discussed with respect to their chemistry and the mechanisms of antioxidant activity. A variety of cell-based assays have also been proposed to investigate the biological activity of antioxidants. Their importance and critical aspects are discussed, along with arguments for the selection of the appropriate testing methods according to different needs.

  10. Security aspects of space operations data

    NASA Technical Reports Server (NTRS)

    Schmitz, Stefan

    1993-01-01

    This paper deals with data security. It identifies security threats to the European Space Agency's (ESA) In Orbit Infrastructure Ground Segment (IOI GS) and proposes a method of dealing with its complex data structures from the security point of view. It is part of the 'Analysis of Failure Modes, Effects Hazards and Risks of the IOI GS for Operations, including Backup Facilities and Functions' carried out on behalf of the European Space Operations Center (ESOC). The security part of this analysis has been prepared with the following aspects in mind: ESA's large decentralized ground facilities for operations, the multiple organizations/users involved in operations and in the development of ground data systems, and the large heterogeneous network structure enabling access to (sensitive) data, which involves crossing organizational boundaries. An IOI GS data object classification is introduced to determine the extent of the necessary protection mechanisms. The proposal of security countermeasures is oriented towards the European 'Information Technology Security Evaluation Criteria (ITSEC)', whose hierarchically organized requirements can be directly mapped to the security sensitivity classification.

  11. What is a new drug worth? An innovative model for performance-based pricing.

    PubMed

    Dranitsaris, G; Dorward, K; Owens, R C; Schipper, H

    2015-05-01

    This article focuses on a novel method to derive prices for new pharmaceuticals by making price a function of drug performance. We briefly review current models for determining price for a new product and discuss alternatives that have historically been favoured by various funding bodies. The progressive approach to drug pricing, proposed herein, may better address the views and concerns of multiple stakeholders in a developed healthcare system by acknowledging and incorporating input from disparate parties via comprehensive and successive negotiation stages. In proposing a valid construct for performance-based pricing, the following model seeks to achieve several crucial objectives: earlier and wider access to new treatments; improved transparency in drug pricing; multi-stakeholder involvement through phased pricing negotiations; recognition of innovative product performance and latent changes in value; an earlier and more predictable return for developers without sacrificing total return on investment (ROI); more involved and informed risk sharing by the end-user. © 2014 John Wiley & Sons Ltd.

  12. Study of Thermal Electrical Modified Etching for Glass and Its Application in Structure Etching

    PubMed Central

    Zhan, Zhan; Li, Wei; Yu, Lingke; Wang, Lingyun; Sun, Daoheng

    2017-01-01

    In this work, an accelerated etching method for glass, named thermal electrical modified etching (TEM etching), is investigated. Based on the identification of this effect in anodic bonding, a novel method for glass structure micromachining using TEM etching is proposed. To validate the method, TEM-etched glasses were prepared and their morphology tested, confirming the feasibility of the new method for micro/nano structure micromachining. Furthermore, two kinds of edge effect in the TEM and etching processes are analyzed. Additionally, a parameter study of TEM etching involving transferred charge, applied pressure, and etching roughness is conducted to evaluate the method. The study shows that TEM etching is a promising manufacturing method for glass, with low process temperature, three-dimensional self-control ability, and low equipment requirements. PMID:28772521

  13. Exact posterior computation in non-conjugate Gaussian location-scale parameters models

    NASA Astrophysics Data System (ADS)

    Andrade, J. A. A.; Rathie, P. N.

    2017-12-01

    In Bayesian analysis, the class of conjugate models allows exact posterior distributions to be obtained; however, this class is quite restrictive in the sense that it involves only a few distributions. In fact, most practical applications involve non-conjugate models, so approximate methods, such as MCMC algorithms, are required. Although these methods can deal with quite complex structures, some practical problems can make their application very time demanding: with heavy-tailed distributions, for example, convergence may be difficult and the Metropolis-Hastings algorithm can become very slow, in addition to the extra work inevitably required in choosing efficient candidate generator distributions. In this work, we draw attention to special functions as tools for Bayesian computation and propose an alternative method for obtaining the posterior distribution in Gaussian non-conjugate models in exact form. We use complex integration methods based on the H-function to obtain the posterior distribution and some of its posterior quantities in an explicitly computable form. Two examples are provided to illustrate the theory.
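
    The H-function machinery is beyond a short sketch, but the setting the paper targets can be illustrated: a non-conjugate Gaussian location model with a heavy-tailed Student-t prior, where the posterior below is normalized by ordinary quadrature rather than by MCMC or special functions. Data and hyperparameters are illustrative.

        import numpy as np
        from scipy import stats
        from scipy.integrate import quad

        # Non-conjugate location model: Gaussian likelihood, Student-t prior.
        y = np.array([1.2, 0.8, 1.5, 1.1])        # illustrative data
        sigma, nu = 0.5, 3.0                       # known scale; prior d.o.f.

        def unnorm_post(mu):
            return np.prod(stats.norm.pdf(y, mu, sigma)) * stats.t.pdf(mu, nu)

        Z, _ = quad(unnorm_post, -10, 10)          # normalizing constant
        post_mean, _ = quad(lambda m: m * unnorm_post(m) / Z, -10, 10)
        print(post_mean)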

  14. Kirchhoff and Ohm in action: solving electric currents in continuous extended media

    NASA Astrophysics Data System (ADS)

    Dolinko, A. E.

    2018-03-01

    In this paper we show a simple and versatile computational simulation method for determining electric currents and electric potential in 2D and 3D media with an arbitrary distribution of resistivity. One of the highlights of the proposed method is that the simulation space, containing the distribution of resistivity and the points of externally applied voltage, is introduced by means of digital images or bitmaps, which makes it easy to simulate any phenomenon involving distributions of resistivity. The simulation is based on Kirchhoff's laws of electric currents and is solved by means of an iterative procedure. The method is also generalised to account for media with distributions of reactive impedance. At the end of this work, we show an example application of the simulation, consisting of reproducing the response obtained with the geophysical method of electric resistivity tomography in the presence of soil cracks. This paper is aimed at undergraduate or graduate students interested in computational physics and electricity, and also at researchers involved in the area of continuous electric media, who may find in it a simple and powerful tool for investigation.
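
    A minimal sketch of the idea, assuming a 2D resistivity "bitmap" and Dirichlet boundary voltages: Kirchhoff's current law at every interior node makes its potential the conductance-weighted mean of its neighbours, which can be relaxed iteratively (Jacobi style). The crack geometry, coupling rule and boundary conditions are illustrative simplifications, not the paper's exact scheme.

        import numpy as np

        # Conductivity map (reciprocal resistivity) given as a "bitmap":
        # a homogeneous plate with a near-insulating crack.
        n = 50
        sigma = np.ones((n, n))
        sigma[10:40, 25] = 1e-6                 # vertical crack

        # Dirichlet boundary: linear ramp, 1 V on the left, 0 V on the right.
        V = np.tile(np.linspace(1.0, 0.0, n), (n, 1))

        for _ in range(5000):                   # Jacobi relaxation
            # Coupling conductance to each neighbour (simple average of the
            # two cell conductivities; a harmonic mean is also common).
            gN = 0.5*(sigma[:-2, 1:-1] + sigma[1:-1, 1:-1])
            gS = 0.5*(sigma[2:,  1:-1] + sigma[1:-1, 1:-1])
            gW = 0.5*(sigma[1:-1, :-2] + sigma[1:-1, 1:-1])
            gE = 0.5*(sigma[1:-1, 2:]  + sigma[1:-1, 1:-1])
            # Kirchhoff's current law at each interior node: the potential is
            # the conductance-weighted mean of its neighbours' potentials.
            V[1:-1, 1:-1] = (gN*V[:-2, 1:-1] + gS*V[2:, 1:-1] +
                             gW*V[1:-1, :-2] + gE*V[1:-1, 2:]) / (gN + gS + gW + gE)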

  15. Inferior heel pain in soccer players: a retrospective study with a proposal for guidelines of treatment

    PubMed Central

    Saggini, Raoul; Migliorini, Maurizio; Carmignano, Simona Maria; Ancona, Emilio; Russo, Chiara; Bellomo, Rosa Grazia

    2018-01-01

    Background The cause of heel pain among soccer players is multifactorial and is related to repetitive microtrauma due to impact forces involving technical moves, but also to the playing surface, the exercise mode, the recovery time, the climatic conditions and the footwear used. Aim To investigate the aetiology of plantar heel pain in soccer players with the objective of proposing an example of guidelines for treatment. Methods We investigated the prevalence and characteristics of inferior heel pain in 1473 professional, semiprofessional and amateur players. All evaluated subjects followed a specific rehabilitation protocol that involved advanced physical therapies and viscoelastic insoles, depending on the aetiology of the pain. Results Clinical and instrumental examinations revealed that 960 of the 1473 athletes had inferior heel pain. These patients were divided into seven groups based on aetiology: sural nerve compression, abductor digiti minimi compression, atrophy and inflammation of the fat pad, plantar fasciitis, stress injury of the heel spur, stress fracture of the heel bone and heel spur. The proposed rehabilitation treatment aims at a reduction of pain and an early return to sport, with excellent results. Conclusions Based on what was observed in the present study, related also to the specific treatment of inferior heel pain, and considering the technological progress achieved in recent years, we can now propose an integrated therapeutic approach to the treatment of heel pain, properly differentiated according to the specific aetiology. PMID:29527319

  16. Multiresolution edge detection using enhanced fuzzy c-means clustering for ultrasound image speckle reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsantis, Stavros; Spiliopoulos, Stavros; Karnabatidis, Dimitrios

    Purpose: Speckle suppression in ultrasound (US) images of various anatomic structures via a novel speckle noise reduction algorithm. Methods: The proposed algorithm employs an enhanced fuzzy c-means (EFCM) clustering and multiresolution wavelet analysis to distinguish edges from speckle noise in US images. The edge detection procedure involves a coarse-to-fine strategy with spatial and interscale constraints so as to classify the wavelet local maxima distribution at different frequency bands. As an outcome, an edge map across scales is derived, whereas the wavelet coefficients that correspond to speckle are suppressed in the inverse wavelet transform, yielding the denoised US image. Results: A total of 34 thyroid, liver, and breast US examinations were performed on a Logiq 9 US system. Each of these images was subjected to the proposed EFCM algorithm and, for comparison, to commercial speckle reduction imaging (SRI) software and another well-known denoising approach, Pizurica's method. The quantification of the speckle suppression performance in the selected set of US images was carried out via the Speckle Suppression Index (SSI), with results of 0.61, 0.71, and 0.73 for the EFCM, SRI, and Pizurica's methods, respectively. Peak signal-to-noise ratios of 35.12, 33.95, and 29.78 and edge preservation indices of 0.94, 0.93, and 0.86 were found for the EFCM, SRI, and Pizurica's methods, respectively, demonstrating that the proposed method achieves superior speckle reduction performance and edge preservation properties. Based on two independent radiologists' qualitative evaluation, the proposed method significantly improved image characteristics over standard baseline B-mode images and those processed with Pizurica's method. Furthermore, it yielded results similar to those for SRI for breast and thyroid images and significantly better results than SRI for liver imaging, thus improving diagnostic accuracy in both superficial and in-depth structures. Conclusions: A new wavelet-based EFCM clustering model was introduced for noise reduction and detail preservation. The proposed method improves the overall US image quality, which in turn could affect the decision-making on whether additional imaging and/or intervention is needed.

  17. Color accuracy and reproducibility in whole slide imaging scanners

    PubMed Central

    Shrestha, Prarthana; Hulsken, Bas

    2014-01-01

    We propose a workflow for color reproduction in whole slide imaging (WSI) scanners, such that the colors in the scanned images match the actual slide colors and the inter-scanner variation is minimal. We describe a new method for the preparation and verification of a color phantom slide, consisting of a standard IT8-target transmissive film, which is used in color calibrating and profiling the WSI scanner. We explore several International Color Consortium (ICC) compliant techniques for color calibration/profiling and rendering intents for translating the scanner-specific colors to the standard display (sRGB) color space. Based on the quality of the color reproduction in histopathology slides, we propose the matrix-based calibration/profiling and absolute colorimetric rendering approach. The main advantage of the proposed workflow is that it is compliant with the ICC standard, applicable to color management systems on different platforms, and involves no external color measurement devices. We quantify color difference using the CIE-DeltaE2000 metric, where DeltaE values below 1 are considered imperceptible. Our evaluation on 14 phantom slides, manufactured according to the proposed method, shows an average inter-slide color difference below 1 DeltaE. The proposed workflow is implemented and evaluated in 35 WSI scanners developed at Philips, called the Ultra Fast Scanners (UFS). The color accuracy, measured as DeltaE between the scanner-reproduced colors and the reference colorimetric values of the phantom patches, improves on average to 3.5 DeltaE in calibrated scanners from 10 DeltaE in uncalibrated scanners. The average inter-scanner color difference is found to be 1.2 DeltaE. The improvement in color performance upon using the proposed method is apparent in the visual color quality of the tissue scans. PMID:26158041
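
    The matrix-based calibration step can be sketched as an ordinary least-squares problem: find the 3x3 matrix that best maps the device RGB of the phantom patches to their reference colorimetric values. The patch data below are synthetic placeholders; a real ICC profile additionally handles tone curves and chromatic adaptation.

        import numpy as np

        # Placeholder data: device RGB of 24 phantom patches and their
        # reference XYZ values (in practice, from the IT8 target's data file).
        rng = np.random.default_rng(0)
        rgb = rng.random((24, 3))
        M_true = np.array([[0.41, 0.36, 0.18],
                           [0.21, 0.72, 0.07],
                           [0.02, 0.12, 0.95]])
        xyz_ref = rgb @ M_true.T                 # synthetic "measured" reference

        # Least-squares fit of the 3x3 matrix mapping device RGB to XYZ.
        M, *_ = np.linalg.lstsq(rgb, xyz_ref, rcond=None)
        print(np.abs(rgb @ M - xyz_ref).max())   # residual of the linear fit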

  18. Registration using natural features for augmented reality systems.

    PubMed

    Yuan, M L; Ong, S K; Nee, A Y C

    2006-01-01

    Registration is one of the most difficult problems in augmented reality (AR) systems. In this paper, a simple registration method using natural features based on the projective reconstruction technique is proposed. This method consists of two steps: embedding and rendering. Embedding involves specifying four points to build the world coordinate system on which a virtual object will be superimposed. In rendering, the Kanade-Lucas-Tomasi (KLT) feature tracker is used to track the natural feature correspondences in the live video. The natural features that have been tracked are used to estimate the corresponding projective matrix in the image sequence. Next, the projective reconstruction technique is used to transfer the four specified points to compute the registration matrix for augmentation. This paper also proposes a robust method for estimating the projective matrix, where the natural features that have been tracked are normalized (translation and scaling) and used as the input data. The estimated projective matrix is used as an initial estimate for a nonlinear optimization method that minimizes the actual residual errors based on the Levenberg-Marquardt (LM) minimization method, thus making the results more robust and stable. The proposed registration method has three major advantages: 1) It is simple, as no predefined fiducials or markers are used for registration, for either indoor or outdoor AR applications. 2) It is robust, because it remains effective as long as at least six natural features are tracked during the entire augmentation, and the existence of the corresponding projective matrices in the live video is guaranteed. Meanwhile, the robust method of estimating the projective matrix can obtain stable results even when there are some outliers during the tracking process. 3) Virtual objects can still be superimposed on the specified areas, even if some parts of the areas are occluded during the entire process. Indoor and outdoor experiments have been conducted to validate the performance of the proposed method.
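
    The normalization-plus-linear-estimation stage described above is, in essence, the classical normalized DLT. A sketch under that reading follows (point arrays are N-by-2); the subsequent Levenberg-Marquardt refinement of the reprojection error is omitted.

        import numpy as np

        def normalize(pts):
            """Translate to centroid; scale so mean distance is sqrt(2)."""
            c = pts.mean(axis=0)
            s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
            T = np.array([[s, 0, -s*c[0]], [0, s, -s*c[1]], [0, 0, 1]])
            ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
            return ph, T

        def dlt_homography(src, dst):
            """Linear (DLT) estimate of H with dst ~ H @ src, >= 4 points.
            Only the initial estimate; a nonlinear (e.g. LM) refinement of
            the reprojection error would follow in practice."""
            sh, Ts = normalize(src)
            dh, Td = normalize(dst)
            rows = []
            for (x, y, w), (u, v, t) in zip(sh, dh):
                rows.append([0, 0, 0, -t*x, -t*y, -t*w,  v*x,  v*y,  v*w])
                rows.append([t*x, t*y, t*w, 0, 0, 0, -u*x, -u*y, -u*w])
            _, _, Vt = np.linalg.svd(np.array(rows))
            Hn = Vt[-1].reshape(3, 3)
            H = np.linalg.inv(Td) @ Hn @ Ts       # undo the normalizations
            return H / H[2, 2]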

  19. Focusing optical waves with a rotationally symmetric sharp-edge aperture

    NASA Astrophysics Data System (ADS)

    Hu, Yanwen; Fu, Shenhe; Li, Zhen; Yin, Hao; Zhou, Jianying; Chen, Zhenqiang

    2018-04-01

    While various kinds of patterned structures have been proposed for wave focusing, these structures usually involve complicated lithographic techniques, since their element size must be precisely controlled at the microscale or even nanoscale. Here we propose a new and straightforward method for focusing an optical plane wave in free space with a rotationally symmetric sharp-edge aperture. The focusing of the wave is realized by superposition of a portion of the higher-order symmetric plane waves generated at the sharp edges of the aperture, in contrast to previous focusing techniques, which usually depend on a curved phase profile. We demonstrate the focusing effect both experimentally and theoretically with a series of apertures having different rotational symmetries, and find that the intensity of the hotspots can be controlled by the symmetry strength of the sharp-edge apertures. The presented results challenge the conventional wisdom that light diffracts in all directions and expands when propagating through an aperture. The proposed method is easy to implement and might open up applications in interferometry, imaging, and superresolution.
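
    The diffraction behind such an aperture can be explored numerically with the standard angular spectrum propagation method; the sketch below propagates a plane wave through a sharp-edged aperture with six-fold rotational symmetry and inspects the on-axis intensity. All geometry and wavelength values are illustrative, and this is a generic propagator, not the authors' analysis.

        import numpy as np

        n, width = 512, 4e-3                       # grid points, window (m)
        wl, z = 632.8e-9, 0.2                      # wavelength, distance (m)
        dx = width / n
        x = (np.arange(n) - n//2) * dx
        X, Y = np.meshgrid(x, x)
        r, th = np.hypot(X, Y), np.arctan2(Y, X)

        # Sharp-edged aperture with 6-fold rotational symmetry.
        R = 0.5e-3 * (1 + 0.25*np.cos(6*th))
        u0 = (r <= R).astype(complex)

        # Angular spectrum propagation (evanescent components clipped).
        fx = np.fft.fftfreq(n, dx)
        FX, FY = np.meshgrid(fx, fx)
        kz = 2*np.pi*np.sqrt(np.maximum(1/wl**2 - FX**2 - FY**2, 0))
        u = np.fft.ifft2(np.fft.fft2(u0) * np.exp(1j*kz*z))
        I = np.abs(u)**2
        print(I[n//2, n//2] / I.max())             # relative on-axis intensity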

  20. Derivatives of logarithmic stationary distributions for policy gradient reinforcement learning.

    PubMed

    Morimura, Tetsuro; Uchibe, Eiji; Yoshimoto, Junichiro; Peters, Jan; Doya, Kenji

    2010-02-01

    Most conventional policy gradient reinforcement learning (PGRL) algorithms neglect (or do not explicitly make use of) a term in the average reward gradient with respect to the policy parameter. That term involves the derivative of the stationary state distribution, which corresponds to the sensitivity of that distribution to changes in the policy parameter. Although the bias introduced by this omission can be reduced by setting the forgetting rate gamma for the value functions close to 1, these algorithms do not permit gamma to be set exactly at gamma = 1. In this article, we propose a method for estimating the log stationary state distribution derivative (LSD) as a useful form of the derivative of the stationary state distribution, through a backward Markov chain formulation and a temporal difference learning framework. A new policy gradient (PG) framework with an LSD is also proposed, in which the average reward gradient can be estimated by setting gamma = 0, so that it becomes unnecessary to learn the value functions. We also test the performance of the proposed algorithms using simple benchmark tasks and show that they can improve the performance of existing PG methods.

  1. Selecting salient frames for spatiotemporal video modeling and segmentation.

    PubMed

    Song, Xiaomu; Fan, Guoliang

    2007-12-01

    We propose a new statistical generative model for spatiotemporal video segmentation. The objective is to partition a video sequence into homogeneous segments that can be used as "building blocks" for semantic video segmentation. The baseline framework is a Gaussian mixture model (GMM)-based video modeling approach that involves a six-dimensional spatiotemporal feature space. Specifically, we introduce the concept of frame saliency to quantify the relevancy of a video frame to the GMM-based spatiotemporal video modeling. This helps us use a small set of salient frames to facilitate the model training by reducing data redundancy and irrelevance. A modified expectation-maximization algorithm is developed for simultaneous GMM training and frame saliency estimation, and the frames with the highest saliency values are extracted to refine the GMM estimation for video segmentation. Moreover, it is interesting to find that frame saliency can reflect certain object behaviors. This makes the proposed method also applicable to other frame-related video analysis tasks, such as key-frame extraction, video skimming, etc. Experiments on real videos demonstrate the effectiveness and efficiency of the proposed method.
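
    The paper's modified EM algorithm is not reproduced here, but the underlying notion of scoring frames by how well they fit a GMM over spatiotemporal features can be approximated with off-the-shelf tools. In this sketch the average per-frame log-likelihood under a fitted scikit-learn GMM serves as a crude saliency proxy, on synthetic stand-in features.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Synthetic stand-in for a 6-D spatiotemporal feature space
        # (e.g. x, y, t plus three colour/motion features), per frame.
        rng = np.random.default_rng(0)
        frames = [rng.normal(size=(500, 6)) for _ in range(30)]

        gmm = GaussianMixture(n_components=4, random_state=0)
        gmm.fit(np.vstack(frames))

        # Crude per-frame saliency proxy: how well the frame fits the model.
        saliency = np.array([gmm.score(f) for f in frames])
        salient = np.argsort(saliency)[-5:]      # five best-fitting frames
        print(salient)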

  2. A proposal for measuring the degree of public health–sensitivity of patent legislation in the context of the WTO TRIPS Agreement

    PubMed Central

    Chaves, Gabriela Costa

    2007-01-01

    Objective This study aims to propose a framework for measuring the degree of public health-sensitivity of patent legislation reformed after the World Trade Organization’s TRIPS (Trade-Related Aspects of Intellectual Property Rights) Agreement entered into force. Methods The methodology for establishing and testing the proposed framework involved three main steps: (1) a literature review on TRIPS flexibilities related to the protection of public health and provisions considered “TRIPS-plus”; (2) content validation through consensus techniques (an adaptation of the Delphi method); and (3) an analysis of patent legislation from nineteen Latin American and Caribbean countries. Findings The results show that the framework detected relevant differences in countries’ patent legislation, allowing for country comparisons. Conclusion The framework’s potential usefulness in monitoring patent legislation changes arises from its clear parameters for measuring patent legislation’s degree of health sensitivity. Nevertheless, it can be improved by including indicators related to government and organized society initiatives that minimize free-trade agreements’ negative effects on access to medicines. PMID:17242758

  3. Zero-Sum Matrix Game with Payoffs of Dempster-Shafer Belief Structures and Its Applications on Sensors

    PubMed Central

    Deng, Xinyang; Jiang, Wen; Zhang, Jiandong

    2017-01-01

    The zero-sum matrix game is one of the most classic game models, and it is widely used in many scientific and engineering fields. In the real world, due to the complexity of the decision-making environment, the payoffs received by players may sometimes be inexact or uncertain, which requires that the matrix game model be able to represent and deal with imprecise payoffs. To meet this requirement, this paper develops a zero-sum matrix game model with Dempster–Shafer belief structure payoffs, which effectively represents the ambiguity involved in the payoffs of a game. Then, a decomposition method is proposed to calculate the value of such a game, which is also expressed with belief structures. Moreover, to address the possibly computation-intensive nature of the proposed decomposition method, a Monte Carlo simulation approach is presented as an alternative solution. Finally, the proposed zero-sum matrix game with payoffs of Dempster–Shafer belief structures is illustratively applied to sensor selection and intrusion detection in sensor networks, which shows its effectiveness and application process. PMID:28430156
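
    Setting the belief-structure machinery aside, the crisp zero-sum matrix game that such models generalize can be solved exactly by linear programming, a plausible building block for the decomposed subproblems. A standard sketch:

        import numpy as np
        from scipy.optimize import linprog

        def solve_zero_sum(A):
            """Optimal mixed strategy and value for the row player of payoff
            matrix A (row player maximizes), via linear programming."""
            m, n = A.shape
            # Variables: x (row strategy, length m) and the game value v.
            c = np.zeros(m + 1); c[-1] = -1.0            # maximize v
            A_ub = np.hstack([-A.T, np.ones((n, 1))])    # v <= x @ A[:, j]
            b_ub = np.zeros(n)
            A_eq = np.zeros((1, m + 1)); A_eq[0, :m] = 1 # strategy sums to 1
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                          bounds=[(0, None)]*m + [(None, None)])
            return res.x[:m], res.x[-1]

        # Matching pennies: value 0 with the uniform strategy.
        x, v = solve_zero_sum(np.array([[1.0, -1.0], [-1.0, 1.0]]))
        print(x, v)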

  4. Cutibacterium acnes molecular typing: time to standardize the method.

    PubMed

    Dagnelie, M-A; Khammari, A; Dréno, B; Corvec, S

    2018-03-12

    The Gram-positive, anaerobic/aerotolerant bacterium Cutibacterium acnes is a commensal of healthy human skin; it is subdivided into six main phylogenetic groups or phylotypes: IA1, IA2, IB, IC, II and III. To decipher how far specific subgroups of C. acnes are involved in disease physiopathology, different molecular typing methods have been developed to identify these subgroups: i.e. phylotypes, clonal complexes, and types defined by single-locus sequence typing (SLST). However, as several molecular typing methods have been developed over the last decade, it has become a difficult task to compare the results from one article to another. Based on the scientific literature, the aim of this narrative review is to propose a standardized method for performing molecular typing of C. acnes, according to the degree of resolution needed (phylotypes, clonal complexes, or SLST types). We discuss the existing typing methods from a critical point of view, emphasizing their advantages and drawbacks, and we identify the most frequently used methods. We propose a consensus algorithm according to the phylogenetic resolution level needed: multiplex PCR for phylotype identification, MLST9 for clonal complex determination, and SLST for phylogenetic investigations involving numerous isolates. There is an obvious need to create a consensus about molecular typing methods for C. acnes. This standardization will facilitate the comparison of results between articles, as well as the interpretation of clinical data. Copyright © 2018 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.

  5. Dense soft tissue 3D reconstruction refined with super-pixel segmentation for robotic abdominal surgery.

    PubMed

    Penza, Veronica; Ortiz, Jesús; Mattos, Leonardo S; Forgione, Antonello; De Momi, Elena

    2016-02-01

    Single-incision laparoscopic surgery decreases postoperative infections, but introduces limitations in the surgeon's maneuverability and in the surgical field of view. This work aims at enhancing intra-operative surgical visualization by exploiting 3D information about the surgical site. An interactive guidance system is proposed wherein the pose of preoperative tissue models is updated online. A critical process is the intra-operative acquisition of tissue surfaces, which can be achieved using stereoscopic imaging and 3D reconstruction techniques. This work contributes to this process by proposing new methods for improved dense 3D reconstruction of soft tissues, which allows a more accurate deformation identification and facilitates the registration process. Two methods for soft tissue 3D reconstruction are proposed: Method 1 follows the traditional approach of the block matching algorithm. Method 2 performs a nonparametric modified census transform to be more robust to illumination variation. The simple linear iterative clustering (SLIC) super-pixel algorithm is exploited for disparity refinement by filling holes in the disparity images. The methods were validated using two video datasets from the Hamlyn Centre, achieving an accuracy of 2.95 and 1.66 mm, respectively. A comparison with ground-truth data demonstrated that the disparity refinement procedure: (1) increases the number of reconstructed points by up to 43 % and (2) does not significantly affect the accuracy of the 3D reconstructions. Both methods give results that compare favorably with the state-of-the-art methods. The computational time constrains their applicability in real time, but it can be greatly improved by using a GPU implementation.
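
    The census idea behind Method 2 is compact enough to sketch. Below is the basic census transform, which encodes each pixel by comparing its neighbours against the centre, so that matching reduces to a Hamming distance robust to monotonic illumination change; the paper's modified variant compares against the window mean instead, which is not reproduced here.

        import numpy as np

        def census(img, r=2):
            """Basic census transform with a (2r+1)x(2r+1) window.

            Each pixel becomes a bit string recording which neighbours are
            darker than the centre (up to 24 bits for a 5x5 window).
            """
            h, w = img.shape
            out = np.zeros((h - 2*r, w - 2*r), dtype=np.uint64)
            centre = img[r:h-r, r:w-r]
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if dy == 0 and dx == 0:
                        continue
                    nb = img[r+dy:h-r+dy, r+dx:w-r+dx]
                    out = (out << np.uint64(1)) | (nb < centre).astype(np.uint64)
            return out

        def hamming(a, b):
            """Per-pixel Hamming distance between census codes."""
            x = np.bitwise_xor(a, b)
            return np.unpackbits(x[..., None].view(np.uint8), axis=-1).sum(axis=-1)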

  6. RFDT: A Rotation Forest-based Predictor for Predicting Drug-Target Interactions Using Drug Structure and Protein Sequence Information.

    PubMed

    Wang, Lei; You, Zhu-Hong; Chen, Xing; Yan, Xin; Liu, Gang; Zhang, Wei

    2018-01-01

    The identification of interactions between drugs and target proteins plays an important role in discovering new drug candidates. However, identifying drug-target interactions experimentally remains extremely time-consuming, expensive and challenging, even nowadays. Therefore, it is urgent to develop new computational methods to predict potential drug-target interactions (DTI). In this article, a novel computational model is developed for predicting potential drug-target interactions under the premise that each drug-target interaction pair can be represented by structural properties of the drug and evolutionary information derived from the protein. Specifically, the protein sequences are encoded as Position-Specific Scoring Matrix (PSSM) descriptors, which contain biological evolutionary information, and the drug molecules are encoded as fingerprint feature vectors, which represent the existence of certain functional groups or fragments. Four benchmark datasets, involving enzymes, ion channels, GPCRs and nuclear receptors, are independently used for establishing predictive models with the Rotation Forest (RF) model. The proposed method achieved prediction accuracies of 91.3%, 89.1%, 84.1% and 71.1% for the four datasets, respectively. In order to make our method more persuasive, we compared our classifier with the state-of-the-art Support Vector Machine (SVM) classifier, as well as with other excellent methods. Experimental results demonstrate that the proposed method is effective in the prediction of DTI and can provide assistance for new drug research and development. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  7. Novel maximum-margin training algorithms for supervised neural networks.

    PubMed

    Ludwig, Oswaldo; Nunes, Urbano

    2010-06-01

    This paper proposes three novel training methods, two of them based on the backpropagation approach and a third one based on information theory, for multilayer perceptron (MLP) binary classifiers. Both backpropagation methods are based on the maximal-margin (MM) principle. The first one, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function through the output and hidden layers, in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on the Lp-norm is also proposed in order to take into account the idea of support vectors, while overcoming the complexity involved in solving a constrained optimization problem, as is usual in SVM training. In fact, all the training methods proposed in this paper have time and space complexities O(N), while usual SVM training methods have time complexity O(N^3) and space complexity O(N^2), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by Fisher discriminant analysis. This algorithm aims to create an MLP hidden output where the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is applied as the stop criterion. The third approach offers a robust training framework able to take the best of each proposed training method. The main idea is to compose a neural model using neurons extracted from three other neural networks, each one previously trained by MICI, MMGDX, and Levenberg-Marquardt (LM), respectively. The resulting neural network was named the assembled neural network (ASNN). Benchmark data sets of real-world problems have been used in experiments that enable a comparison with other state-of-the-art classifiers. The results provide evidence of the effectiveness of our methods regarding accuracy, AUC, and balanced error rate.

  8. An effective fovea detection and automatic assessment of diabetic maculopathy in color fundus images.

    PubMed

    Medhi, Jyoti Prakash; Dandapat, Samarendra

    2016-07-01

    Prolonged diabetes causes severe damage to vision through leakage of blood and blood constituents over the retina. The effect of the leakage becomes more threatening when these abnormalities involve the macula. This condition is known as diabetic maculopathy, and it leads to blindness if not treated in time. Early detection and proper diagnosis can help in preventing this irreversible damage. To achieve this, the practical approach is to perform retinal screening at regular intervals. But the ratio of ophthalmologists to patients is very small and the process of evaluation is time consuming. Here, automatic methods for analyzing retinal/fundus images prove handy and help ophthalmologists to screen at a faster rate. Motivated by this, an automated method for the detection and analysis of diabetic maculopathy is proposed in this work. The method is implemented in two stages. The first stage involves the preprocessing required for preparing the image for further analysis. During this stage the input image is enhanced and the optic disc is masked to avoid false detection during bright lesion identification. The second stage is maculopathy detection and its analysis. Here, the retinal lesions, including microaneurysms, hemorrhages and exudates, are identified by processing the green and hue plane color images. The macula and fovea locations are determined using an intensity property of the processed red plane image. Different circular regions are thereafter marked in the neighborhood of the macula. The presence of lesions in these regions is identified to confirm positive maculopathy, and this information is later used for evaluating its severity. The principal advantage of the proposed algorithm is its utilization of the relation of the blood vessels to the optic disc and macula, which enhances the detection process. Proper sequential use of the information in the various color planes enables the algorithm to perform better. The method was tested on various publicly available databases consisting of both normal and maculopathy images. The algorithm detects the fovea with an accuracy of 98.92% when applied to 1374 images. The average specificity and sensitivity of the proposed method for maculopathy detection are 98.05% and 98.86%, respectively. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Using a Mixed Model to Evaluate Job Satisfaction in High-Tech Industries

    PubMed Central

    Tsai, Sang-Bing; Huang, Chih-Yao; Wang, Cheng-Kuang; Chen, Quan; Pan, Jingzhou; Wang, Ge; Wang, Jingan; Chin, Ta-Chia; Chang, Li-Chung

    2016-01-01

    R&D professionals are the impetus behind technological innovation, and their competitiveness and capability drive the growth of a company. However, high-tech industries have a chronic shortage of such indispensable professionals. Accordingly, reducing R&D personnel turnover has become a major human resource management challenge facing innovative companies. This study combined importance–performance analysis (IPA) with the decision-making trial and evaluation laboratory (DEMATEL) method to propose an IPA–DEMATEL model. Establishing this model involved three steps. First, an IPA was conducted to measure the importance of and satisfaction gained from job satisfaction criteria. Second, the DEMATEL method was used to determine the causal relationships of and interactive influence among the criteria. Third, a criteria model was constructed to evaluate job satisfaction of high-tech R&D personnel. On the basis of the findings, managerial suggestions are proposed. PMID:27139697
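
    The DEMATEL core step is small enough to show. Starting from a direct-influence matrix of expert ratings (the values below are illustrative, not the study's data), the matrix is normalized and all indirect influence is accumulated through a matrix geometric series; the resulting row and column sums split criteria into cause and effect groups.

        import numpy as np

        # Illustrative direct-influence matrix among four job-satisfaction
        # criteria (expert ratings, 0 = none .. 4 = very high influence).
        X = np.array([[0, 3, 2, 1],
                      [1, 0, 3, 2],
                      [0, 1, 0, 3],
                      [2, 1, 1, 0]], dtype=float)

        # Normalize so the geometric series converges, then the total
        # relation matrix T = N (I - N)^(-1) sums direct and all indirect
        # influence.
        N = X / max(X.sum(axis=1).max(), X.sum(axis=0).max())
        T = N @ np.linalg.inv(np.eye(4) - N)

        D, R = T.sum(axis=1), T.sum(axis=0)     # dispatched / received
        print("prominence:", D + R)             # how central each criterion is
        print("relation:  ", D - R)             # >0 cause group, <0 effect group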

  10. Generation algorithm of craniofacial structure contour in cephalometric images

    NASA Astrophysics Data System (ADS)

    Mondal, Tanmoy; Jain, Ashish; Sardana, H. K.

    2010-02-01

    Anatomical structure tracing on cephalograms is a significant step in cephalometric analysis. Computerized cephalometric analysis involves both manual and automatic approaches; the manual approach is limited in accuracy and repeatability. In this paper we have attempted to develop and test a novel method for automatic localization of craniofacial structures based on the edges detected in the region of interest. According to the grey-scale features of the different regions of cephalometric images, an algorithm for obtaining tissue contours is put forward. Using edge detection with a specific threshold, an improved bidirectional contour-tracing approach is proposed: after interactive selection of the starting edge pixels, the tracking process repeatedly searches for an edge pixel in the neighborhood of the previously found edge pixel to segment the image, and craniofacial structures are then obtained. The effectiveness of the algorithm is demonstrated by the preliminary experimental results obtained with the proposed method.

  11. Synthesis procedure optimization and characterization of europium (III) tungstate nanoparticles

    NASA Astrophysics Data System (ADS)

    Rahimi-Nasrabadi, Mehdi; Pourmortazavi, Seied Mahdi; Ganjali, Mohammad Reza; Reza Banan, Ali; Ahmadi, Farhad

    2014-09-01

    Taguchi robust design, a statistical method, was applied to optimize the process parameters for the tunable, facile and fast synthesis of europium (III) tungstate nanoparticles. Europium (III) tungstate nanoparticles were synthesized by a chemical precipitation reaction involving direct addition of an aqueous europium ion solution to the tungstate reagent dissolved in an aqueous medium. The effects of several synthesis variables on the particle size of the europium (III) tungstate nanoparticles were studied. Analysis of variance showed the importance of controlling the tungstate concentration, cation feeding flow rate and temperature during preparation of europium (III) tungstate nanoparticles by the proposed chemical precipitation reaction. Finally, europium (III) tungstate nanoparticles were synthesized under the optimum conditions of the proposed method. The morphology and chemical composition of the prepared nanomaterial were characterized by means of X-ray diffraction, scanning electron microscopy, transmission electron microscopy, FT-IR spectroscopy and fluorescence spectroscopy.

  12. Fuzzy State Transition and Kalman Filter Applied in Short-Term Traffic Flow Forecasting

    PubMed Central

    Ming-jun, Deng; Shi-ru, Qu

    2015-01-01

    Traffic flow is widely recognized as an important parameter for road traffic state forecasting. The fuzzy state transform and the Kalman filter (KF) have been applied in this field separately. Studies show that the former method performs well in forecasting the trend of traffic state variation but always involves several numerical errors, while the latter model is good at numerical forecasting but is deficient in capturing time-lag (hysteresis) effects. This paper proposes an approach combining the fuzzy state transform and the KF forecasting model. To exploit the advantages of the two models, a weighted combination model is proposed, in which the combination weight is optimized dynamically by minimizing the sum of squared forecasting errors. Real detection data are used to test the efficiency. Results indicate that the method performs well in short-term traffic forecasting. PMID:26779258
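
    The combination step has a closed form: choosing the weight w to minimize the sum of squared errors of w*f1 + (1-w)*f2 is a one-dimensional least-squares problem. A sketch with illustrative series follows (the original work optimizes the weight dynamically; the static closed form is shown here for clarity).

        import numpy as np

        def optimal_weight(f1, f2, y):
            """Weight w minimizing sum((w*f1 + (1-w)*f2 - y)**2).

            f1: fuzzy-state-transform forecasts, f2: Kalman-filter forecasts,
            y: observed traffic flow. Closed-form 1-D least squares.
            """
            d, e = f1 - f2, y - f2
            return float(np.dot(d, e) / np.dot(d, d))

        # Illustrative series only.
        y  = np.array([100., 110., 125., 118.])
        f1 = np.array([ 98., 112., 120., 121.])   # good trend, numeric bias
        f2 = np.array([104., 106., 128., 115.])   # good numbers, lagged trend
        w = optimal_weight(f1, f2, y)
        combined = w*f1 + (1 - w)*f2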

  13. Pole-Like Street Furniture Decomposition in Mobile Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Li, F.; Oude Elberink, S.; Vosselman, G.

    2016-06-01

    Automatic semantic interpretation of street furniture has become a popular topic in recent years. Current studies detect street furniture as connected components of points above street level. Street furniture classification based on properties of such components suffers from the large intra-class variability of shapes and cannot deal with mixed classes, such as traffic signs attached to light poles. In this paper, we focus on the decomposition of point clouds of pole-like street furniture. A novel street furniture decomposition method is proposed, which consists of three steps: (i) acquisition of prior knowledge, (ii) pole extraction, and (iii) component separation. For the pole extraction, a novel global pole extraction approach is proposed to handle three different cases of street furniture. In the evaluation of results, which involves the decomposition of 27 different instances of street furniture, we demonstrate that our method decomposes mixed-class street furniture into poles and different components corresponding to different functionalities.

  14. Regression analysis of informative current status data with the additive hazards model.

    PubMed

    Zhao, Shishun; Hu, Tao; Ma, Ling; Wang, Peijie; Sun, Jianguo

    2015-04-01

    This paper discusses regression analysis of current status failure time data arising from the additive hazards model in the presence of informative censoring. Many methods have been developed for regression analysis of current status data under various regression models when the censoring is noninformative, and there also exists a large literature on parametric analysis of informative current status data in the context of tumorigenicity experiments. In this paper, a semiparametric maximum likelihood estimation procedure is presented, in which a copula model is employed to describe the relationship between the failure time of interest and the censoring time. Furthermore, I-splines are used to approximate the nonparametric functions involved, and the asymptotic consistency and normality of the proposed estimators are established. A simulation study is conducted and indicates that the proposed approach works well in practical situations. An illustrative example is also provided.

  15. Localized mold heating with the aid of selective induction for injection molding of high aspect ratio micro-features

    NASA Astrophysics Data System (ADS)

    Park, Keun; Lee, Sang-Ik

    2010-03-01

    High-frequency induction is an efficient, non-contact means of heating the surface of an injection mold through electromagnetic induction. Because the procedure allows for the rapid heating and cooling of mold surfaces, it has been recently applied to the injection molding of thin-walled parts or micro/nano-structures. The present study proposes a localized heating method involving the selective use of mold materials to enhance the heating efficiency of high-frequency induction heating. For localized induction heating, a composite injection mold of ferromagnetic material and paramagnetic material is used. The feasibility of the proposed heating method is investigated through numerical analyses in terms of its heating efficiency for localized mold surfaces and in terms of the structural safety of the composite mold. The moldability of high aspect ratio micro-features is then experimentally compared under a variety of induction heating conditions.

  16. Synchronization of Switched Neural Networks With Communication Delays via the Event-Triggered Control.

    PubMed

    Wen, Shiping; Zeng, Zhigang; Chen, Michael Z Q; Huang, Tingwen

    2017-10-01

    This paper addresses the issue of synchronization of switched delayed neural networks with communication delays via event-triggered control. For synchronizing coupled switched neural networks, we propose a novel event-triggered control law which can greatly reduce the number of control updates for synchronization tasks involving embedded microprocessors with limited on-board resources. The control signals are driven by properly defined events, which depend on the measurement errors and the current sampled states. By using a delay-system method, a novel model of the synchronization error system is proposed that treats the communication delays and the event-triggered control in a unified framework for coupled switched neural networks. Criteria are derived for the event-triggered synchronization analysis and control synthesis of switched neural networks via the Lyapunov-Krasovskii functional method and the free-weighting-matrix approach. A numerical example illustrates the effectiveness of the derived results.
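
    The flavour of event-triggered control can be conveyed with a scalar toy system rather than coupled neural networks: the control input is recomputed only when the measurement error exceeds a fraction of the current state, so actuation updates become sparse. Gains, threshold and dynamics below are illustrative.

        import numpy as np

        # Scalar system x' = a*x + u with event-triggered state feedback.
        a, k, sigma, dt = 0.5, 2.0, 0.3, 1e-3
        x, x_held = 1.0, 1.0          # true state and last-sampled state
        events = 0
        for step in range(int(10/dt)):
            if abs(x - x_held) > sigma*abs(x):   # triggering condition
                x_held = x                        # sample; update the control
                events += 1
            u = -k*x_held                         # control uses held sample
            x += dt*(a*x + u)                     # Euler integration
        print("control updates:", events, "final state:", x)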

  17. Moral deliberation and nursing ethics cases: elements of a methodological proposal.

    PubMed

    Schneider, Dulcinéia Ghizoni; Ramos, Flávia Regina Souza

    2012-11-01

    This qualitative study, with an exploratory, descriptive and documentary design, was conducted with the objective of identifying the elements of a method for the analysis of accusations of and proceedings for professional ethics infringements. The method is based on underlying elements identified inductively during analysis of professional ethics hearings judged by and filed in the archives of the Regional Nursing Board of Santa Catarina, Brazil, between 1999 and 2007. The strategies developed were based on the results of an analysis of the findings of fact (occurrences/infractions, causes and outcomes) contained in the records of 128 professional ethics hearings, and on the structural elements (statements, rules and practices) identified in five example professional ethics cases. The strategies suggested for evaluating accusations of ethics infringements and the procedures involved in deliberating on ethics hearings constitute a generic proposal that will require adaptation to the context of specific professional ethics accusations.

  18. 14 CFR 1230.118 - Applications and proposals lacking definite plans for involvement of human subjects.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Applications and proposals lacking definite plans for involvement of human subjects. 1230.118 Section 1230.118 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION PROTECTION OF HUMAN SUBJECTS § 1230.118 Applications and proposals...

  19. 14 CFR 1230.118 - Applications and proposals lacking definite plans for involvement of human subjects.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 5 2011-01-01 2010-01-01 true Applications and proposals lacking definite plans for involvement of human subjects. 1230.118 Section 1230.118 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION PROTECTION OF HUMAN SUBJECTS § 1230.118 Applications and proposals...
